We are faced with an issue related to concurrent requests/transactions that leads to a problem with consistency in our view model. Data in the aggregate is fine and we have all events saved, but Event Handlers don’t work like we’d expect.
Our case:
1. We send two commands of the same type to the same Aggregate.
2. Axon handles them sequentially, so the state remains consistent within the Aggregate.
3. The first event gets processed by the Event Handler and its changes are committed.
4. The second event gets processed, but the Event Handler reads the data without the changes from step 3, which means the changes from step 3 get overwritten.
Normally this flow is fine, but in our case we have a non-idempotent operation. The order of the events doesn’t really matter, but we shouldn’t overwrite data.
It looks like we have this problem because of the length of the transaction: the second transaction started before the first was committed, so it doesn’t see the updated data. Could you please give us some comments about this behaviour? Currently we are not sure how to solve this issue properly.
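To make the interleaving concrete, here is a minimal, self-contained Java sketch (no Axon involved; plain fields stand in for the view-model row and each transaction's snapshot) of the lost update described above:

```java
public class LostUpdateDemo {
    // Simulates the single view-model row that both event handlers read and write.
    static volatile int committedCounter = 0;

    public static void main(String[] args) {
        // Both "transactions" take their snapshot before either one commits,
        // mimicking a stale read in a long-running transaction.
        int snapshot1 = committedCounter; // handler 1 reads 0
        int snapshot2 = committedCounter; // handler 2 also reads 0

        committedCounter = snapshot1 + 1; // handler 1 commits 1
        committedCounter = snapshot2 + 1; // handler 2 writes 1 again, not 2

        System.out.println(committedCounter); // prints 1: handler 1's update is lost
    }
}
```

Handler 2's snapshot was taken before handler 1's commit, so its write wins and handler 1's increment disappears, which is exactly the overwrite described above.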
I also attached some logs. Maybe they will be helpful.
Our setup:
Axon version: 3.4.1
Spring Boot: 2.0.3.RELEASE
DB: MySQL with isolation level READ_COMMITTED
Are you using Subscribing or Tracking Event Processors?
We are using Subscribing Event Processors
Are you using the Axon Spring Boot Auto Configuration?
Yes
Are you instantiating a ‘org.axonframework.common.transaction.TransactionManager’?
No, but we call method transactionManager.executeInTransaction in one place to execute some logic when application starts.
Could you confirm whether or not our scenario is expected behaviour or does Axon have something in place to prevent this from happening?
It’s actually not weird for the second thread to start on step 1 while step 3 is forced to wait for the first process to finish. This has to do with the Repository requiring a lock on the aggregate.
However, with the isolation level set to READ_COMMITTED, any updates made by the first thread should be visible to the second. If that’s not the case, it looks like you are using REPEATABLE_READ (which is the default for MySQL) instead.
How/where did you configure read_committed isolation level?
We set a property to change the isolation level for our DB to READ_COMMITTED:
datasource:
hikari:
transaction-isolation: 2
Without this change we weren’t even able to handle commands; we got an exception:
Stack trace: org.axonframework.commandhandling.model.ConcurrencyException: An event for aggregate […] at sequence [57] was already inserted
at org.axonframework.eventsourcing.eventstore.AbstractEventStorageEngine.handlePersistenceException(AbstractEventStorageEngine.java:165)
In that case, I don’t understand why an event handler would not be able to see the changes, while the event store is.
The order in which things happen clearly shows that thread 1 committed everything well before thread 2 started handling the event. The fact that the transaction started earlier shouldn’t matter, as you’re using READ_COMMITTED.
Are the changes visible for handlers that started their transaction after the commit of the first handler?
A minor detail (don’t think that’s the issue): the transaction-isolation property is expected to be the name of the isolation level, not a numeric value.
From the docs: “Set the default transaction isolation level. The specified value is the constant name from the Connection class, eg. TRANSACTION_REPEATABLE_READ.”
You can also have Hikari print all its actual settings. I’d recommend doing that.
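One way to do that in Spring Boot is to raise the log level for Hikari's configuration logger; HikariConfig logs its resolved settings at DEBUG when the pool starts (a sketch; the logger name follows Hikari's package naming):

```yaml
logging:
  level:
    com.zaxxer.hikari.HikariConfig: DEBUG
```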
Actually, setting transaction-isolation as an int or as a String does not make any difference; in the end Hikari converts the value to an int anyway.
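For what it's worth, the numeric mapping can be checked against the constants on `java.sql.Connection` (the value 2 from the config above is indeed READ_COMMITTED; MySQL's default, REPEATABLE_READ, is 4):

```java
import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        // JDBC isolation-level constants from java.sql.Connection
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);  // 2
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ); // 4
    }
}
```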
I printed Hikari settings:
Apparently, it is not guaranteed that READ_COMMITTED will return the last committed state unless you use explicit locks.
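To illustrate why an explicit lock helps: if the read itself must acquire the row lock (as with `SELECT ... FOR UPDATE` in MySQL), the second handler cannot take its snapshot until the first has committed and released the lock. A self-contained in-process analogy, with a `ReentrantLock` standing in for the database row lock:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ExplicitLockDemo {
    static int committedCounter = 0;
    // Stands in for the row lock a SELECT ... FOR UPDATE would take.
    static final ReentrantLock rowLock = new ReentrantLock();

    static void handleEvent() {
        rowLock.lock(); // the read acquires the lock, so it always sees the latest commit
        try {
            int snapshot = committedCounter; // read happens only after any prior writer released
            committedCounter = snapshot + 1; // write back
        } finally {
            rowLock.unlock(); // "commit" releases the lock
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(ExplicitLockDemo::handleEvent);
        Thread t2 = new Thread(ExplicitLockDemo::handleEvent);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(committedCounter); // 2: no lost update
    }
}
```

Against the actual database, the equivalent would be reading the projection row with `SELECT ... FOR UPDATE` inside the handler's transaction, or sidestepping the read entirely with an atomic `UPDATE ... SET x = x + 1`.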
In the past, we used to wrap the connection pool in a LazyConnectionDataSourceProxy to prevent connections from being opened until the first load was executed (as opposed to when the transaction starts).
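For reference, that wrapping is a small piece of configuration in a Spring `@Configuration` class (a sketch only; it assumes Spring Boot's `DataSourceProperties` and the Hikari pool from your setup, and the bean name/structure are illustrative):

```java
import javax.sql.DataSource;

import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy;

import com.zaxxer.hikari.HikariDataSource;

@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource(DataSourceProperties properties) {
        // Build the actual Hikari pool from the spring.datasource.* properties.
        HikariDataSource pool = properties.initializeDataSourceBuilder()
                .type(HikariDataSource.class)
                .build();
        // The proxy defers fetching a physical connection until the first
        // statement is executed, rather than at transaction start.
        return new LazyConnectionDataSourceProxy(pool);
    }
}
```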