Hi Ben, Mavlarn,
Assuming we’re in an Axon+Spring context, and we’re set up for event sourcing by having an EventBus that is also an EventStore, the behavior is as follows: when Axon detects an @Aggregate class, it knows that a repository is needed for that aggregate. It will check whether one already exists (for a class named MyAggregate, this would by default be a bean called myAggregateRepository, but the name can be overridden). If that bean doesn’t exist, Axon sets up a plain EventSourcingRepository. That repository doesn’t do any caching, which explains the behavior Mavlarn is correctly observing.
If you do want/need caching, the way to do this is to configure a CachingEventSourcingRepository bean for the aggregate explicitly.
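For reference, a minimal sketch of such an explicit bean, assuming Axon 4’s builder API and a hypothetical aggregate class MyAggregate (in Axon 3 you’d construct a CachingEventSourcingRepository directly instead):

```java
import org.axonframework.common.caching.Cache;
import org.axonframework.common.caching.WeakReferenceCache;
import org.axonframework.eventsourcing.EventSourcingRepository;
import org.axonframework.eventsourcing.eventstore.EventStore;
import org.axonframework.modelling.command.Repository;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyAggregateConfig {

    // Bean name matches the default Axon looks up for MyAggregate,
    // so this repository is used instead of the auto-configured one.
    @Bean
    public Repository<MyAggregate> myAggregateRepository(EventStore eventStore, Cache cache) {
        // Supplying a cache to the builder gives you the caching variant
        // of the event sourcing repository.
        return EventSourcingRepository.builder(MyAggregate.class)
                                      .eventStore(eventStore)
                                      .cache(cache)
                                      .build();
    }

    @Bean
    public Cache aggregateCache() {
        // WeakReferenceCache keeps aggregates as long as they're referenced;
        // any Axon Cache implementation would do here.
        return new WeakReferenceCache();
    }
}
```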
@Mavlarn - With or without caching, there’s no risk of aggregate corruption when concurrent access to the same aggregate happens. It’s the job of the event store implementation (RDBMS, Mongo, AxonDB) to ensure uniqueness of the aggregateId + sequence number combination. The RDBMS and Mongo event stores enforce this with a unique index; in AxonDB it’s a built-in feature (which of course also uses an index). So if a cache holds a version of an aggregate that is older than what is in the event store, appending the new events will violate that constraint: command processing fails with a transient exception, and the cache entry is invalidated. Retrying the command then reads the current state and processes correctly. While this obviously introduces some inefficiency, it doesn’t lead to corruption.
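If you want that retry to happen automatically rather than bubbling up to the caller, one option is a RetryScheduler on the command gateway. A sketch, assuming Axon 4’s gateway builder API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

import org.axonframework.commandhandling.CommandBus;
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.commandhandling.gateway.DefaultCommandGateway;
import org.axonframework.commandhandling.gateway.IntervalRetryScheduler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayConfig {

    @Bean
    public CommandGateway commandGateway(CommandBus commandBus) {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        // IntervalRetryScheduler re-dispatches commands that failed with a
        // transient exception (such as the concurrency failure caused by a
        // stale cached aggregate), up to maxRetryCount times.
        return DefaultCommandGateway.builder()
                                    .commandBus(commandBus)
                                    .retryScheduler(IntervalRetryScheduler.builder()
                                            .retryExecutor(executor)
                                            .maxRetryCount(3)
                                            .retryInterval(100) // ms between attempts
                                            .build())
                                    .build();
    }
}
```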
@Ben - So while your setup will function, it may be inefficient if a bunch of incoming requests trigger commands to the same aggregate and get load balanced across the two nodes. Distributing the command bus between the nodes would solve that: it allows Axon to consistently handle commands targeting the same aggregate instance on the same node, regardless of which node processed the incoming external request.
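The routing key the DistributedCommandBus uses for this comes, by default (AnnotationRoutingStrategy), from the field annotated with @TargetAggregateIdentifier on the command. A sketch with a hypothetical command class:

```java
import org.axonframework.modelling.command.TargetAggregateIdentifier;

public class RenameAccountCommand {

    // With the default AnnotationRoutingStrategy, the DistributedCommandBus
    // derives the routing key from this field via consistent hashing, so every
    // command for a given accountId is handled on the same node.
    @TargetAggregateIdentifier
    private final String accountId;

    private final String newName;

    public RenameAccountCommand(String accountId, String newName) {
        this.accountId = accountId;
        this.newName = newName;
    }

    public String getAccountId() {
        return accountId;
    }

    public String getNewName() {
        return newName;
    }
}
```

(In Axon 3 the annotation lives in org.axonframework.commandhandling instead.)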