And if we have not said it before, thanks for the awesome work on the Axon Framework : ) …
A thing that has bothered us for some time, though, is how to handle redelivery of events to our event listeners from Rabbit. One such case is, for example, when a batch of events is fetched from Rabbit, handled in an event listener, and persisted and committed to a database, but Rabbit is down when they are about to get acked. When Rabbit comes back up, the complete batch will be redelivered. One solution would be to use distributed transactions between the database and Rabbit, but that is a road we are a bit reluctant to go down…
Another solution is to make every event listener idempotent on its own, but wouldn’t it be even nicer if the infrastructure handled that for you?
This is where the question in the topic comes in. We would like to hook in a generic deduplication mechanism that, for every cluster, keeps track of which events have been handled and committed, preferably in the same transaction and data source where the event handler stores the rest of its state. The best place we have found to hook in such a feature is a custom ClusterMessageListener. The DeduplicatingClusterMessageListener would keep track of handled events, silently ignore the ones already handled, and publish the others to the cluster.
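To make the idea concrete, here is a minimal sketch of the deduplication logic we have in mind. This is not the actual Axon or Spring AMQP API: the class name, the onEvent method, and the in-memory set are all our own illustration. In a real DeduplicatingClusterMessageListener, the set of handled event identifiers would live in the same data source as the event handler's state and be updated in the same transaction, so that the "handled" marker and the handler's changes commit or roll back together.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch only: in production the handled-event identifiers would be
// persisted transactionally alongside the event handler's own state, not kept in
// an in-memory set as done here for illustration.
public class DeduplicatingListener {

    // Identifiers of events that have already been handled and committed.
    private final Set<String> handledEventIds = ConcurrentHashMap.newKeySet();

    /**
     * Forwards the event to the delegate handler unless an event with the same
     * identifier was already handled, in which case it is silently ignored.
     *
     * @return true if the event was forwarded, false if it was a duplicate
     */
    public boolean onEvent(String eventIdentifier, Runnable delegateHandler) {
        // Set.add returns false when the identifier was already present,
        // i.e. the event is a redelivery of one we have already committed.
        if (!handledEventIds.add(eventIdentifier)) {
            return false; // duplicate: skip silently
        }
        delegateHandler.run();
        return true;
    }
}
```

With this in place, a redelivered batch after a failed ack would simply fall through: every event in the batch is recognized by its identifier and dropped before it reaches the cluster again.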
Do you have any input on the above reasoning? And would it be possible to extend the ListenerContainerLifecycleManager with the option to configure a custom message listener, for example by fetching a prototype from the application context?