Strategy to handle duplicate events?

From what I am testing, Axon guarantees that events on its internal event bus are never duplicated and are delivered in order; this is achieved by the aggregate lock on the database table. However, with the Kafka extension (Kafka version < 0.11), it can't guarantee exactly-once delivery, so there is a potential for a message to be duplicated on the event bus.

I am looking for suggestions on how to de-duplicate these events. One straightforward approach is to check in the listening saga itself (e.g. by maintaining an internal map of processed event identifiers). Are there other suggestions (or a dramatically different direction)?
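The saga-side check described above could be sketched roughly as follows. This is an illustrative standalone class, not Axon API; `DeduplicatingHandler` and its method names are hypothetical, and in a real saga the set of seen identifiers would need to be part of the saga's persisted state so it survives restarts.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical saga-side de-duplication: remember the identifiers of
// events already handled and silently drop any repeats.
public class DeduplicatingHandler {
    private final Set<String> seenEventIds = new HashSet<>();
    private int applied = 0;

    // Returns true if the event was processed, false if it was a duplicate.
    public boolean handle(String eventId) {
        if (!seenEventIds.add(eventId)) {
            return false; // already seen: skip the duplicate delivery
        }
        applied++; // stand-in for the real event-handling logic
        return true;
    }

    public int appliedCount() {
        return applied;
    }
}
```

Note the trade-off: this set grows without bound unless entries are evicted, so in practice you would cap it (e.g. keep only recent identifiers, relying on Kafka duplicates arriving close together).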


My suggestion is to model/design your events in an idempotent manner, if possible.

That is, an idempotent message/event carries a fixed value at a point in time, such as an account balance, rather than a transformational instruction such as "add $10 to balance". If the message is naturally idempotent, then reprocessing it multiple times yields the same result, i.e. f(f(x)) = f(x).
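The contrast can be shown with a toy example (the method names here are illustrative, not from any library): applying a delta twice changes the result, while applying a snapshot value twice does not.

```java
// Transformational vs. idempotent event application for an account balance.
public class BalanceExample {

    // "Add $10 to balance" style: NOT idempotent, duplicates compound.
    static long applyDelta(long balance, long delta) {
        return balance + delta;
    }

    // "Balance is now $50" style: idempotent, f(f(x)) == f(x).
    static long applySnapshot(long balance, long newBalance) {
        return newBalance;
    }
}
```

With a starting balance of 40, a duplicated delta of 10 yields 60 instead of 50, while a duplicated snapshot of 50 still yields 50.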