[axon-kafka ext] - event replay not working for TEP

Trying to evaluate the axon-kafka extension's capabilities. Does it support event replay as Axon Server does?
It does catch up on missed events and works well for the scenario below -

  • During the first startup, it creates the tracking token (using the Mongo token store here)

  • Events are published and consumed, and the tracking token is updated

  • Shut down the consumer application and produce some events. Start the consumer application again; the missed events are consumed

It does NOT work for the scenario below -

  • Shut down the consumer application

  • Delete the token store document / table

  • Start the consumer application. It does create the token store, but the value of the token is null, and it does not process past events.

Before we go to your point, I want to share something about replays in Axon in general.

To begin with, it is not recommended to simply delete the token to start a replay.
In doing so, your Axon application hasn’t got a clue that you intended to do a replay.

More specifically, it cannot differentiate between “a first start-up” or “reset to the beginning of time.”
Because of this, it will assume you start the processor for the first time.
Take a look at the Replay Events section of the Reference Guide if you want to know more.
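
To illustrate the recommended route, here is a minimal sketch (assuming a tracking event processor named "order-processor" and access to the Axon Configuration) of resetting the tokens through the TrackingEventProcessor API instead of deleting them by hand:

    import org.axonframework.config.Configuration;
    import org.axonframework.eventhandling.TrackingEventProcessor;

    public class OrderProcessorReplayer {

        // Triggers a replay for the (assumed) "order-processor" tracking event processor.
        // The processor must be shut down before its tokens can be reset.
        public void replayOrderProcessor(Configuration configuration) {
            configuration.eventProcessingConfiguration()
                         .eventProcessor("order-processor", TrackingEventProcessor.class)
                         .ifPresent(processor -> {
                             processor.shutDown();    // stop the processor so it releases its token claims
                             processor.resetTokens(); // reset to the initial token (the start of the stream by default)
                             processor.start();       // restart; past events are now redelivered to the handlers
                         });
        }
    }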

Putting that aside, we can go back to your scenario.
The process you describe should lead to Axon thinking “this processor starts for the first time.”
As such, it will create an initial token of null: this reflects the beginning of the event stream.
Why it doesn’t start reading, in that case, isn’t clear to me.

I do know that any RDBMS Event Streaming solution would start reading from the beginning.
So, perhaps it’s something to do with your Kafka configuration?
What’s the retention period in there? Can it even go to the beginning of time (aka, the beginning of what your Kafka instance has stored)?
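
If it helps, here is a small sketch that checks a topic's retention.ms with Kafka's AdminClient; the bootstrap server and topic name are placeholders, not taken from your setup:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;
    import org.apache.kafka.common.config.TopicConfig;

    public class RetentionCheck {

        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker

            try (AdminClient admin = AdminClient.create(props)) {
                // "axon-events" is a placeholder; use the topic your producer publishes to
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "axon-events");
                Config config = admin.describeConfigs(Collections.singleton(topic))
                                     .all()
                                     .get()
                                     .get(topic);
                // A negative retention.ms means "keep forever"; otherwise older events are no longer replayable
                System.out.println("retention.ms = " + config.get(TopicConfig.RETENTION_MS_CONFIG).value());
            }
        }
    }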

Lastly, note that Axon Server isn’t the sole implementation that supports replays.
In essence, if your message source stores events, it is capable of going back in time; hence Kafka is a viable solution (given a reasonable retention period).

Thanks @Steven_van_Beelen
This was probably related to the upgrade of the extension from 4.0-RC2 to 4.5.
After upgrading to version 4.5, the scenario works as expected.
Solution
Upgraded the application to use extension version 4.5, with the event processor registered using Java config -

    @Autowired
    public void configure(final EventProcessingConfigurer configurer,
                          StreamableKafkaMessageSource<String, byte[]> streamableKafkaMessageSource) {
        // Register a tracking event processor that uses the Kafka message source as its event stream
        configurer.registerTrackingEventProcessor("order-processor", c -> streamableKafkaMessageSource);
    }
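
For reference, the StreamableKafkaMessageSource autowired above has to be available as a bean. A rough sketch of how it can be built with the extension's builder (the topic name is a placeholder, and the ConsumerFactory and Fetcher are assumed to come from the extension's Spring Boot auto-configuration):

    import java.util.Collections;
    import org.axonframework.extensions.kafka.eventhandling.consumer.ConsumerFactory;
    import org.axonframework.extensions.kafka.eventhandling.consumer.Fetcher;
    import org.axonframework.extensions.kafka.eventhandling.consumer.streamable.KafkaEventMessage;
    import org.axonframework.extensions.kafka.eventhandling.consumer.streamable.StreamableKafkaMessageSource;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class KafkaMessageSourceConfig {

        // Builds the streamable message source consumed by the configure(...) method above.
        // "axon-events" is a placeholder topic name.
        @Bean
        public StreamableKafkaMessageSource<String, byte[]> streamableKafkaMessageSource(
                ConsumerFactory<String, byte[]> consumerFactory,
                Fetcher<String, byte[], KafkaEventMessage> fetcher) {
            return StreamableKafkaMessageSource.<String, byte[]>builder()
                                               .topics(Collections.singletonList("axon-events"))
                                               .consumerFactory(consumerFactory)
                                               .fetcher(fetcher)
                                               .build();
        }
    }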