Hello all, another question, this time relating to Kafka, our message broker.
Kafka uses partitions to improve the throughput of message consumption. The caveat is that each partition holds a distinct subset of messages, so when consumption is parallelized across multiple application instances we risk messages being consumed out of order. The solution is to attach a message key to events whose order matters, since Kafka routes all records with the same key to the same partition.
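To make the ordering guarantee concrete, here is a minimal sketch of key-based partitioning. Kafka's real default partitioner hashes the serialized key with murmur2; the hash below is a simplified stand-in purely to show the principle that the same key always maps to the same partition.

```java
// Simplified illustration of Kafka's key-based partitioning.
// NOTE: Kafka's actual DefaultPartitioner uses murmur2 over the
// serialized key bytes; String.hashCode() is used here only to
// demonstrate the deterministic key -> partition mapping.
public class KeyPartitioning {

    static int partitionFor(String key, int numPartitions) {
        // floorMod avoids a negative result for negative hash codes.
        // Same key -> same partition, so per-key ordering is preserved.
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        int partitions = 6;
        // All events carrying the same aggregate identifier as their
        // key land on the same partition, regardless of when they are
        // produced.
        int first = partitionFor("order-42", partitions);
        int second = partitionFor("order-42", partitions);
        System.out.println(first == second); // always true
    }
}
```

Events with different keys may land on different partitions and be consumed concurrently, which is exactly the throughput benefit we want to keep.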
I do not see an obvious way to do this for events produced through an Aggregate’s apply() or eventBus.publish().
A workaround might be to intercept all outgoing events and decorate them with the appropriate key. I need advice on how to best do this.
The second part of this approach is determining which key to use. The current thinking is to pass a value in the event's MetaData, then read that value back out and set it as the record key.
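A minimal sketch of that idea, using plain Java types rather than the real Axon or Kafka classes: the interceptor-style decorator reads a routing key out of the event's metadata and stamps it on the outgoing record. The `routingKey` metadata entry name and the `OutgoingRecord` type are assumptions for illustration only.

```java
import java.util.Map;

// Sketch of the proposed workaround: decorate each outgoing event with
// a key taken from its metadata. "routingKey" and OutgoingRecord are
// hypothetical names, not real Axon or Kafka API.
public class RoutingKeyDecorator {

    // Stand-in for a keyed Kafka record (key + payload).
    record OutgoingRecord(String key, Object payload) {}

    static OutgoingRecord decorate(Object payload, Map<String, String> metaData) {
        // Read the key the producing side placed in metadata; events
        // without a key fall back to a sentinel and carry no ordering
        // guarantee relative to each other.
        String key = metaData.getOrDefault("routingKey", "unkeyed");
        return new OutgoingRecord(key, payload);
    }

    public static void main(String[] args) {
        OutgoingRecord rec =
            RoutingKeyDecorator.decorate("OrderShipped", Map.of("routingKey", "order-42"));
        System.out.println(rec.key()); // order-42
    }
}
```

In an actual Axon application this logic would presumably live in a dispatch interceptor or in the component that converts event messages to Kafka records, so the decoration happens in one place rather than at every apply() or publish() call site.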
I hope this is not too difficult and does not result in a significant performance impact.