We are implementing Tracking Processors for a view projection. Our store currently has gaps of 10,000+ in the ids. After some hours of fiddling I got to know the behaviour of my setup a bit better, but I am still confused about how to configure it.
We use a GapAwareTrackingToken with the default configuration (batch size of 100, JdbcEventStorageEngine, Postgres). The current behaviour is that the token traverses small gaps (<100) without issue. I expected a high maxGapOffset to help here, but whenever a gap is >100 the processor halts. Increasing the batch size fixes this, but I fear other negative side effects. Either way, my biggest gap is 34,000 at the moment, and I don't feel that a batch size of 1,000,000 would be a good idea.
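For context, this is roughly how I'm wiring up the engine. A minimal sketch, assuming Axon 4's JdbcEventStorageEngine builder API; `connectionProvider` and `transactionManager` are placeholders for my own beans, and the gap-related values are just the knobs I've been experimenting with, not recommendations:

```java
import org.axonframework.common.jdbc.ConnectionProvider;
import org.axonframework.common.transaction.TransactionManager;
import org.axonframework.eventsourcing.eventstore.jdbc.JdbcEventStorageEngine;

// Placeholders for my actual infrastructure beans.
ConnectionProvider connectionProvider = /* datasource-backed provider */ null;
TransactionManager transactionManager = /* e.g. Spring transaction manager */ null;

JdbcEventStorageEngine engine = JdbcEventStorageEngine.builder()
        .connectionProvider(connectionProvider)
        .transactionManager(transactionManager)
        .batchSize(100)            // general query limit; also seems to cap gap traversal
        .maxGapOffset(50_000)      // how far behind the head gaps are still tracked
        .gapTimeout(60_000)        // ms after which a gap is assumed permanent
        .gapCleaningThreshold(250) // number of gaps before cleanup kicks in
        .build();
```

My expectation was that raising `maxGapOffset` alone would let the token cross the large gaps, but in practice only `batchSize` seems to make a difference.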
Basically, it feels like the use of the batch size when probing gaps in the event index is too tightly coupled to normal event store querying. Semantically, maxGapOffset feels like the one and only property that should influence the way gaps are tolerated, not the general query limit (batch size). Or am I missing some point?