Transaction management for command dispatch and handling in Spring Boot code

Hi Team,
We are using Axon Framework 4.7.3 and Axon Server 2023.1.0 (Standard Edition) with Spring Boot 3.1.0.
We are using the default configuration for the event bus and command bus.
We also have @EventHandler methods for events published from our aggregates.
Before dispatching commands we perform DB updates to capture the request details.
We are using Spring Framework’s transaction manager, and the DB updates and command-dispatching code run within a single transaction boundary.
We dispatch commands with sendAndWait so that, if an exception is thrown in the aggregate’s command-handling code before the event is published, the DB updates done earlier in the same transaction are rolled back.
We have tested the rollback by intentionally throwing an exception, and it works as expected.
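A minimal sketch of this setup (the service, repository, command, and identifier names below are illustrative, not our actual code):

```java
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderRequestService {

    private final OrderRequestRepository requestRepository; // hypothetical JPA repository
    private final CommandGateway commandGateway;

    public OrderRequestService(OrderRequestRepository requestRepository,
                               CommandGateway commandGateway) {
        this.requestRepository = requestRepository;
        this.commandGateway = commandGateway;
    }

    @Transactional
    public void placeOrder(String orderId) {
        // 1. Capture the request details in the DB.
        requestRepository.save(new OrderRequest(orderId));

        // 2. Dispatch the command and block until it is handled. An exception
        //    thrown by the aggregate's command handler propagates here and
        //    rolls back the surrounding transaction, including the save above.
        commandGateway.sendAndWait(new PlaceOrderCommand(orderId));
    }
}
```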
The @EventHandler methods for these published events read the same DB entries to decide whether the next set of commands should be dispatched.
Since adding this transaction boundary, we have found that when the @EventHandler methods query the DB, the records inserted before dispatching the corresponding command are sometimes not yet committed.
This happens intermittently.
We believed that the moment AggregateLifecycle.apply is executed in the aggregate class, command handling is successful, the transaction boundary is therefore complete, and the transaction commits. Is that a correct understanding?
How can we ensure that the transaction surrounding the command dispatch and the aggregate’s command handling is committed before the corresponding @EventHandler starts executing?
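For context, the aggregate side looks roughly like this (again, the names are illustrative):

```java
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;
import org.axonframework.spring.stereotype.Aggregate;

@Aggregate
public class OrderAggregate {

    @AggregateIdentifier
    private String orderId;

    protected OrderAggregate() {
        // Required by Axon for event-sourced reconstruction.
    }

    @CommandHandler
    public OrderAggregate(PlaceOrderCommand command) {
        // Any exception thrown here (e.g. failed validation) propagates to
        // sendAndWait and rolls back the dispatching transaction.
        AggregateLifecycle.apply(new OrderPlacedEvent(command.getOrderId()));
    }

    @EventSourcingHandler
    public void on(OrderPlacedEvent event) {
        this.orderId = event.getOrderId();
    }
}
```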

Can this be fixed by using an AsynchronousEventListener?

Based on your question about the AsynchronousEventListener, I assume you’re using the SubscribingEventProcessor.
Is that a correct assumption, @Shrirang_Khedekar?

If so, the easiest solution is to make your Event Processors “async native” by using a StreamingEventProcessor implementation instead.
The implementation with the best performance and scalability characteristics is the PooledStreamingEventProcessor.

To configure it in a Spring Boot environment, you can set the processor’s mode property to pooled.
If you have Java configuration in place, you can use the EventProcessingConfigurer#registerPooledStreamingEventProcessor method.
The String you need to provide is the name of your processor.

The name equals the @ProcessingGroup annotation’s value in most cases. If you’re not using this annotation on your Event Handling Components (read: the classes containing your @EventHandler annotated methods), the processor’s name defaults to the package name of the Event Handling Component.
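For reference, the property-based route uses `axon.eventhandling.processors.<processor-name>.mode=pooled` in application.properties, and the Java-configuration route might look like this sketch (the processor name "order-processing" is an assumption, standing in for your actual processing group):

```java
import org.axonframework.config.EventProcessingConfigurer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AxonProcessorConfig {

    @Autowired
    public void configureProcessors(EventProcessingConfigurer configurer) {
        // "order-processing" must match the @ProcessingGroup value on your
        // Event Handling Component (or its package name, if unannotated).
        configurer.registerPooledStreamingEventProcessor("order-processing");
    }
}
```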

Besides the above, I also have a question for you concerning this point you’ve shared, @Shrirang_Khedekar:

Can you explain to me why you need to capture the entire request details and use them separately on the Event Handling side? Is there too much data to pass through the command and into the event, for example?

Thank you, Steven, for the reply.

I have some async processing to be done before initiating the command, the results of which I capture in the DB. We do not want to carry this information in the event; instead, we want to fetch it from the DB during event handling and execute business logic based on it.
I will explore the PooledStreamingEventProcessor as you suggested and report back.
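The event-handling side of this pattern might be sketched as follows (the processing group, repository, and event names are illustrative assumptions):

```java
import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;
import org.springframework.stereotype.Component;

@ProcessingGroup("order-processing") // hypothetical processing group name
@Component
public class OrderRequestProjection {

    private final OrderRequestRepository requestRepository; // hypothetical repository

    public OrderRequestProjection(OrderRequestRepository requestRepository) {
        this.requestRepository = requestRepository;
    }

    @EventHandler
    public void on(OrderPlacedEvent event) {
        // Fetch the request details captured before the command was dispatched...
        OrderRequest request = requestRepository.findByOrderId(event.getOrderId());
        // ...and decide, based on those details, whether to dispatch follow-up commands.
    }
}
```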

Sure thing, @Shrirang_Khedekar! Glad to help.
I have another question for you, though:

But why? :wink:
What is the reason for not adding this data to an event?
By leaving some of the data out of the events, you reduce one of the benefits of an Event Store, which is that it acts as your “single source of truth” (as all models are based on events).

However, sometimes it’s the best way forward. Events carrying a byte[] of a serialized document, for instance, can reach sizes that are simply impractical for events and not valuable enough to maintain forever.
I simply want to ensure you’re dealing with the latter so that you and the team are relieved from potential future predicaments.

Hi Steven, we have implemented the PooledStreamingEventProcessor. It works as expected in the happy-path scenarios.
In the event handler methods we perform some DB updates. We have observed that when an exception occurs in an event-handling method, it is merely logged.
The processor then continues processing the next events, and the DB updates made before the exception occurred are not rolled back.

Is it expected behaviour?

If yes, how can we ensure that, when an exception occurs in an event handler registered with the PooledStreamingEventProcessor, the transaction is rolled back?
That way, when the event is replayed after finding and fixing the root cause of the exception, we will only have the expected updates in the DB.
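For what it’s worth, one way to change this behavior (assuming the default LoggingErrorHandler is in place, which logs and continues) is to register a PropagatingErrorHandler for the processing group, so that handler exceptions propagate and the unit of work rolls back; a sketch, with a hypothetical group name:

```java
import org.axonframework.config.EventProcessingConfigurer;
import org.axonframework.eventhandling.PropagatingErrorHandler;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AxonErrorHandlingConfig {

    @Autowired
    public void configureErrorHandling(EventProcessingConfigurer configurer) {
        // Replace the default LoggingErrorHandler for this group so that
        // exceptions propagate to the processor and the transaction rolls back.
        configurer.registerListenerInvocationErrorHandler(
                "order-processing", // hypothetical processing group name
                configuration -> PropagatingErrorHandler.instance());
    }
}
```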