Behaviour of the EventTrackerStatusChangeListener

Hi guys,

I configured a tracking event processor with a custom TEP configuration and registered a simple logging listener that logs the EventTrackerStatus. I’m using single-threaded processing for simplicity, so, as expected, the map contains exactly one tracker status.
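For context, the registration looked roughly like this (simplified; the processor name "myProcessor" is a placeholder, and I'm assuming an Axon 4.x configurer here):

```java
// Register a single-threaded TEP with a status-change listener that
// simply logs every EventTrackerStatus update (one map entry per segment;
// with single-threaded processing there is exactly one segment).
configurer.eventProcessing(ep -> ep.registerTrackingEventProcessorConfiguration(
        "myProcessor",
        conf -> TrackingEventProcessorConfiguration.forSingleThreadedProcessing()
                .andEventTrackerStatusChangeListener(statuses ->
                        statuses.forEach((segment, status) ->
                                log.info("Segment {}: {}", segment, status)))));
```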

I was a little surprised by the value of the tracker status. Here is an example:

{0=TrackerStatus{segment=Segment[0/0], caughtUp=true, replaying=false, merging=false, errorState=true, error=org.axonframework.axonserver.connector.AxonServerException: The Event Stream has been closed, so no further events can be retrieved, trackingToken=IndexTrackingToken{globalIndex=1468}, currentPosition=OptionalLong[1468], resetPosition=OptionalLong.empty, mergeCompletedPosition=OptionalLong.empty}}

As you can see, the caughtUp attribute is true, and both positions have the same value. The scenario was to start a processor (manually) and let it catch up with a stream (no appends to the stream in the meantime). A connection error occurred and the tracker status listener fired with the corresponding error message - at that point the event store (AxonServer) held about 28,000 events.

Is it intended that the caughtUp attribute is true at that moment?

Or, to put it the other way around - if the tracker status contains an error, should I simply never check the caughtUp attribute?

Thanks

Simon

Hi Simon,

The caughtUp flag is at times tricky to interpret. It is set to true the moment the client tries to consume events but none are immediately available to it. There is no way for the stream itself to know whether it has truly caught up. It’s like asking a sailor on a river to notify you when all the water of the river has gone by. I guess that’s only the case once the river has dried up. And even then there might still be water coming a bit later.

If the errorState is true, then I would certainly not take any value from the caughtUp flag. So yes, you are best off ignoring it at that point.
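In code, that guard could look like the sketch below. To keep it self-contained I use a small `Status` record as a stand-in for Axon's `EventTrackerStatus` (which exposes the same two flags via `isErrorState()` and `isCaughtUp()`):

```java
import java.util.Optional;

public class CaughtUpGuard {

    // Stand-in for Axon's EventTrackerStatus; only the two flags needed here.
    record Status(boolean caughtUp, boolean errorState) {}

    // Returns the caughtUp flag only when the tracker is healthy.
    // An errored tracker's flags may be stale, so report "unknown" instead.
    static Optional<Boolean> reliableCaughtUp(Status status) {
        if (status.errorState()) {
            return Optional.empty();
        }
        return Optional.of(status.caughtUp());
    }

    public static void main(String[] args) {
        // Errored tracker: caughtUp is not trustworthy, so we get "unknown".
        System.out.println(reliableCaughtUp(new Status(true, true)));
        // Healthy tracker: the flag can be used as-is.
        System.out.println(reliableCaughtUp(new Status(true, false)));
    }
}
```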

In recent Framework versions (I believe it’s 4.8, but would need to check), we’ve implemented a feature that always creates a “ReplayToken” when a processor starts for the first time. All events from the start until the position of the event store as it was at the creation of the token will be considered “replays”. But obviously, by the time it hits that replay-marker, the stream has moved on (Tortoise and Hare problem 🙂)

The most reliable way to discover whether a processor has caught up (and is keeping up) with the stream is to compare the position of the processor’s token to the head position of the stream it’s consuming. Note that this does require a round-trip to the producer of the stream. The difference between the two values is the approximate number of events still to process before the head is reached.
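That comparison can be sketched as follows. `currentPosition` would come from `EventTrackerStatus#getCurrentPosition()`, while `headPosition` is what the round-trip to the event store gives you (both are global sequence indexes); the method name here is my own:

```java
import java.util.OptionalLong;

public class ProcessorLag {

    // Approximate number of events left until the processor reaches the head.
    // Empty when the processor has no known position yet.
    static OptionalLong approximateLag(OptionalLong currentPosition, long headPosition) {
        if (currentPosition.isEmpty()) {
            return OptionalLong.empty();
        }
        // Clamp at zero: the head may lag slightly behind what the
        // processor has already seen, depending on when each was read.
        return OptionalLong.of(Math.max(0, headPosition - currentPosition.getAsLong()));
    }

    public static void main(String[] args) {
        // Using the numbers from this thread: token at 1468, head at 28,000,
        // so roughly 26,532 events still to process.
        System.out.println(approximateLag(OptionalLong.of(1468), 28_000));
    }
}
```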

Hope this clarifies things somewhat.