I was reading through this blog post by Frans and have two questions:
"When using batched processing, you need to be aware of the following when using a non-transactional read model: when processing fails mid-batch, the tracking token won’t be updated and all the events in the batch will be processed again in the next attempt, including the ones that were already processed. It’s advisable to make all projection methods idempotent to deal with this correctly."
Is this second attempt considered a replay? Is it safe to use handlers marked with @DisallowReplay along with batching, or will they be adversely affected by multiple attempts?
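For context, this is roughly how I'm trying to keep my projection handlers idempotent at the moment; the OrderSummary entity, its repository, and the event type are just placeholders from my own model:

```java
import org.axonframework.eventhandling.EventHandler;
import org.springframework.stereotype.Component;

@Component
public class OrderSummaryProjection {

    private final OrderSummaryRepository repository; // placeholder Spring Data repository

    public OrderSummaryProjection(OrderSummaryRepository repository) {
        this.repository = repository;
    }

    @EventHandler
    public void on(OrderPlacedEvent event) {
        // Upsert keyed on the order id: handling the same event twice in a
        // retried batch just overwrites the same row, so it stays idempotent.
        repository.save(new OrderSummary(event.getOrderId(), event.getTotal()));
    }
}
```

If the retried batch does count as a replay, I'd expect handlers marked with @DisallowReplay to be skipped during that retry and possibly miss events that hadn't been processed before the failure, which is what worries me.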
"The downside of these optimizations is that these are not just configuration changes. This requires specific coding in our event handlers and is domain-dependent. The good thing here is that Axon does offer the APIs needed to implement this cleanly. For each batch, there will be a single UnitOfWork. This object can be injected into our event handler methods, by simply adding it as a parameter. It has a ConcurrentHashMap-style ‘getOrComputeResource’ method that allows us to attach other resources to the UnitOfWork. Using this approach, we can do processing at the end of the batch to implement the above-mentioned optimizations."
Does sample code exist for this concept? This seems like it could drastically reduce the number of database calls, which is a major bottleneck in event processing.
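To make that question more concrete, here is a rough sketch of what I imagine this could look like, purely based on the quoted paragraph. I'm only assuming UnitOfWork#getOrComputeResource and UnitOfWork#onPrepareCommit as described there; the resource key, the repository, and the event/entity types are made-up placeholders:

```java
import java.util.HashMap;
import java.util.Map;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.messaging.unitofwork.UnitOfWork;
import org.springframework.stereotype.Component;

@Component
public class OrderSummaryProjection {

    private final OrderSummaryRepository repository; // placeholder repository

    public OrderSummaryProjection(OrderSummaryRepository repository) {
        this.repository = repository;
    }

    @EventHandler
    public void on(OrderPlacedEvent event, UnitOfWork<?> unitOfWork) {
        // Buffer the projection updates in a resource attached to the UnitOfWork
        // instead of writing to the database once per event.
        Map<String, OrderSummary> buffer = unitOfWork.getOrComputeResource(
                "orderSummaryBuffer",
                key -> {
                    Map<String, OrderSummary> map = new HashMap<>();
                    // Registered only once, when the buffer is first created:
                    // flush all accumulated rows in a single call at the end of the batch.
                    unitOfWork.onPrepareCommit(uow -> repository.saveAll(map.values()));
                    return map;
                });
        buffer.put(event.getOrderId(), new OrderSummary(event.getOrderId(), event.getTotal()));
    }
}
```

Is this roughly the intended use of those APIs, or is there an official sample I should be looking at instead?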
Thanks,
Joel