Resolving org.axonframework.repository.ConcurrencyException: Concurrent modification detected for Aggregate identifier with distributed command bus

Hi,
We are hitting org.axonframework.repository.ConcurrencyException: Concurrent modification detected for Aggregate identifier when running our application on multiple instances with the distributed command bus and a MetaDataRoutingStrategy. Both instances try to modify the same aggregate at the same time. We tried conflict-resolver="conflictResolver" and locking-strategy set to "OPTIMISTIC" as well as "PESSIMISTIC", but the issue persists.

Can you help resolve this issue? In our application, multiple instances access the same aggregate. According to the Axon documentation, the pessimistic locking strategy should avoid the ConcurrencyException. I am using the configuration below; are any other configuration changes required?

<axon:event-sourcing-repository id="repository"
    aggregate-type="aggregate" event-bus="eventBus"
    event-store="eventStore" cache-ref="aggCache"
    conflict-resolver="conflictResolver" locking-strategy="PESSIMISTIC">
</axon:event-sourcing-repository>

If the ConcurrencyException can be solved with the "PESSIMISTIC" locking strategy, is there any performance impact in doing so?

Hi,

Be careful not to confuse the ConcurrencyException with the ConflictingModificationException. The latter is the one you can solve with a ConflictResolver. The former happens when a command is not executed against the latest version of an aggregate. It is detected by the EventStore when it attempts to append an event with a sequence number that already exists for that aggregate.
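To make the detection mechanism concrete, here is an illustrative sketch (not Axon's actual code) of how an event store can spot a concurrent modification: it enforces that each new event for an aggregate carries exactly the next sequence number, so two nodes appending "event #2" for the same aggregate cannot both succeed.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: a toy event store that rejects an append whose sequence
// number is not the next expected one for the aggregate. This is the
// situation Axon surfaces as a ConcurrencyException.
public class SequenceCheckSketch {
    static final Map<String, Long> lastSequence = new HashMap<>();

    static void append(String aggregateId, long sequenceNumber) {
        Long last = lastSequence.get(aggregateId);
        long expected = (last == null) ? 0 : last + 1;
        if (sequenceNumber != expected) {
            throw new IllegalStateException(
                "Concurrent modification detected for aggregate " + aggregateId);
        }
        lastSequence.put(aggregateId, sequenceNumber);
    }

    public static void main(String[] args) {
        append("order-1", 0);
        append("order-1", 1);
        try {
            // A second node, working from a stale version, appends seq 1 again.
            append("order-1", 1);
        } catch (IllegalStateException e) {
            System.out.println("conflict detected"); // prints "conflict detected"
        }
    }
}
```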

It's important to route commands for your aggregates consistently. If you use the default routing strategy, that will always be the case. If you use MetaDataRoutingStrategy, make sure that commands for the same aggregate are always routed to the same node. If you don't, you'll have to cope with ConcurrencyExceptions.
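The requirement boils down to this: the routing key must be a pure function of the aggregate identifier, so every command for one aggregate resolves to the same node. With a MetaDataRoutingStrategy, that means the metadata entry you route on must carry (or be derived from) the aggregate identifier. A minimal, dependency-free sketch of the idea (simple modulo routing, not Axon's actual consistent-hashing implementation):

```java
// Sketch: routing is deterministic in the routing key, so two commands
// for the same aggregate always land on the same node. If the metadata
// value varied per command, routing would vary too, and concurrent
// modifications on different nodes would follow.
public class RoutingSketch {
    static int nodeFor(String routingKey, int nodeCount) {
        return Math.floorMod(routingKey.hashCode(), nodeCount);
    }

    public static void main(String[] args) {
        int first = nodeFor("order-42", 4);
        int second = nodeFor("order-42", 4);
        System.out.println(first == second); // prints "true"
    }
}
```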

Optimistic and pessimistic locking are always local to the JVM. Locking an aggregate on one node will not lock it on another. Effectively, you always have Optimistic Locking between nodes. Pessimistic locking is recommended in most cases.
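Since locking between nodes is effectively optimistic, a command that loses the race will fail, and the usual remedy is to retry it. Axon offers a RetryScheduler on the command gateway for this; the bare mechanism, stripped of any Axon API (the attempt counts and exception type here are illustrative), looks like:

```java
import java.util.concurrent.Callable;

// Sketch: retry a command a bounded number of times. In an Axon setup you
// would catch ConcurrencyException specifically, or configure a
// RetryScheduler on the CommandGateway instead of hand-rolling this loop.
public class RetrySketch {
    static <T> T retry(Callable<T> command, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return command.call();
            } catch (Exception e) {
                last = e; // conflict lost the race; try again
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        final int[] failures = {2}; // simulate losing the race twice
        String result = retry(() -> {
            if (failures[0]-- > 0) throw new IllegalStateException("conflict");
            return "ok";
        }, 3);
        System.out.println(result); // prints "ok"
    }
}
```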

Cheers,

Allard

Hi Sankalp Sontakke,
What is the name of that configuration file, and where is it located?
My project doesn't have that configuration.

On Wednesday, April 1, 2015 at 15:23:44 UTC+7, Sankalp Sontakke wrote: