We have functional regression tests that generate a huge load (because they run in parallel).
The tests are taking very long, so we started digging into the internals of the TrackingEventProcessor we use.
We use 4 pods/nodes and a balancer to fire the requests.
The current configuration is:
```java
eventProcessingConfiguration
    .usingTrackingProcessors(configuration -> configuration
        .getComponent(TrackingEventProcessorConfiguration.class,
            () -> TrackingEventProcessorConfiguration.forParallelProcessing(4))
        .andInitialSegmentsCount(4)
        .andBatchSize(32));
```
Now I have 2 questions:
Even with segments configured, processing only takes place on one pod. We have 25 processors (sagas and processing groups) combined, with 2 aggregate roots. We have not changed the sequencing policy, so SequentialPerAggregatePolicy is used.
My understanding was that more than one node should process the events behind the tracking token. Am I wrong? Unfortunately, it is not working that way.
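To illustrate my suspicion: since each pod runs 4 threads and a thread claims one segment, the first pod to start up could claim all 4 segments, leaving nothing for the others. A toy simulation of that claiming arithmetic (my own sketch, not Axon internals; names like `ClaimSketch` are made up):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model (NOT Axon code): each node can claim at most
// `threadsPerNode` unclaimed segments, in startup order.
public class ClaimSketch {

    static Map<String, List<Integer>> claim(int segments,
                                            int threadsPerNode,
                                            List<String> nodesInStartOrder) {
        Deque<Integer> unclaimed = new ArrayDeque<>();
        for (int s = 0; s < segments; s++) {
            unclaimed.add(s);
        }
        Map<String, List<Integer>> claims = new LinkedHashMap<>();
        for (String node : nodesInStartOrder) {
            List<Integer> mine = new ArrayList<>();
            // A node keeps claiming until it runs out of threads or segments.
            while (mine.size() < threadsPerNode && !unclaimed.isEmpty()) {
                mine.add(unclaimed.poll());
            }
            claims.put(node, mine);
        }
        return claims;
    }

    public static void main(String[] args) {
        // 4 segments, 4 threads per pod: the first pod claims everything,
        // the other three pods get no segments at all.
        Map<String, List<Integer>> claims =
                claim(4, 4, List.of("pod-1", "pod-2", "pod-3", "pod-4"));
        System.out.println(claims);
    }
}
```

If that model is roughly right, it would explain why only one pod is busy in our setup.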
When a thread runs the processor, it re-claims the token roughly every millisecond and just updates the timestamp (even when the event stream is empty). Why is that necessary? Would it be adequate to update it only when an event was actually processed?
I'm asking because this generates a huge load on the database, and I'm wondering how to optimize that.
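My guess at why the timestamp matters at all is a claim-timeout mechanism: if the owning node stops refreshing a token, another node may treat the claim as stale and steal the segment. A toy model of that idea (my assumption of the mechanism, not Axon code; the names and the 5-second timeout are made up):

```java
import java.time.Duration;
import java.time.Instant;

// Toy claim-timeout model (assumption, not Axon internals): a token claim
// counts as stale once its timestamp is older than `claimTimeout`, at which
// point another node could take the segment over.
public class ClaimTimeoutSketch {

    static boolean isStealable(Instant lastTouched, Instant now, Duration claimTimeout) {
        return Duration.between(lastTouched, now).compareTo(claimTimeout) > 0;
    }

    public static void main(String[] args) {
        Instant lastTouched = Instant.parse("2024-01-01T00:00:00Z");
        Duration claimTimeout = Duration.ofSeconds(5);

        // Refreshed 1s ago: the claim is still considered owned.
        System.out.println(isStealable(lastTouched, lastTouched.plusSeconds(1), claimTimeout));
        // No refresh for 10s (e.g. an idle stream with no timestamp updates):
        // the claim could be stolen by another node.
        System.out.println(isStealable(lastTouched, lastTouched.plusSeconds(10), claimTimeout));
    }
}
```

If that is the reason, then updating only on processed events would risk losing the claim during idle periods, which is why I am asking whether there is a safe way to reduce the update frequency instead.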
Thanks a lot!