Event Dispatch on the Update of Transient Aggregate Attributes

Hi,

I am having some problems understanding how the event dispatch
mechanism works.

Within my Aggregate I make state changes and then usually apply the relevant events in the setter methods of those attributes.

So… I might have something like:

public void setSomeAttribute(String someAttribute) {
    this.someAttribute = someAttribute;
    apply(new AttributeChangedEvent(someAttribute));
}

What I am finding, however, is that when I apply the Domain Event within the setter method of a TRANSIENT attribute, the event is only published when it is applied within the same unit of work in which the Aggregate is first created and saved.

It appears that when the only change to the aggregate within a unit of work is to a transient attribute, the event applied as part of that unit of work is never published.

Is this correct? I would have thought that the application of domain events on an aggregate would be independent of the state of the aggregate: whatever process has generated the event has done so for good reason, and the event should be published on commit.

Any guidance would be appreciated.

Simon

Hi Simon,

just to be sure, given your earlier problems with the GenericJpaRepository, I assume you are not using event sourcing?
I notice you use the apply() method in your aggregate, which is an event sourcing method. apply() does a registerEvent() (which registers the event for publication) and then a handle(), which invokes the event handlers on the aggregate. If my assumption that you are not using Event Sourcing is correct, you might want to consider using registerEvent() instead. It’s a bit faster.
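To make the difference concrete, here is a minimal sketch of a non-event-sourced aggregate using registerEvent(). The class and event names are hypothetical, and it assumes Axon 1.x’s AbstractAggregateRoot base class with events extending DomainEvent:

import org.axonframework.domain.AbstractAggregateRoot;

public class MyAggregate extends AbstractAggregateRoot {

    private String someAttribute;

    public void setSomeAttribute(String someAttribute) {
        this.someAttribute = someAttribute;
        // registerEvent only queues the event for publication on commit;
        // unlike apply(), it does not invoke event handlers on the aggregate.
        registerEvent(new AttributeChangedEvent(someAttribute));
    }
}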

Let’s start at the very beginning:
1- You dispatch a command. This starts a UnitOfWork.
2- The Command Handler is invoked and loads an aggregate.
3- The repository registers that aggregate with the current Unit of Work (UoW).
4- Your command handler calls methods on the aggregate, changing its state and/or registering events. The “registerEvent” (and thus also apply()) simply registers the event in the aggregate.
5- The Command Handler finishes execution and the CommandBus commits the UoW it created in step 1.
6- The UoW notifies the repository that an aggregate should be stored. The repository will register all events from the aggregate with the UnitOfWork for publication on the EventBus.
7- The UoW dispatches all events to the EventBus
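To make those steps concrete, a command handler might look roughly like this. It is a sketch only: the command, handler, and wiring are hypothetical, and the Repository usage is assumed from Axon 1.x.

import org.axonframework.repository.Repository;

public class ChangeAttributeCommandHandler {

    private Repository<MyAggregate> repository;

    // Step 1 happens elsewhere, e.g.:
    // commandBus.dispatch(new ChangeAttributeCommand(aggregateId, "new value"));

    // Steps 2-5: load the aggregate (the repository registers it with the
    // current UoW) and change its state, registering events along the way.
    public void handle(ChangeAttributeCommand command) {
        MyAggregate aggregate = repository.load(command.getAggregateId());
        aggregate.setSomeAttribute(command.getNewValue());
        // Steps 6-7 happen when the CommandBus commits the UoW: the repository
        // stores the aggregate and the registered events go to the EventBus.
    }
}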

In this whole process, it doesn’t really matter whether the aggregate has any state changes when it is being saved. All events that have been registered are simply published.
There is one situation where events are not dispatched, though: exceptions. When an exception occurs, the UoW is rolled back and no changes are persisted. To change this behavior, you can set a “RollbackConfiguration” instance on the SimpleCommandBus; it lets you define which exceptions should still lead to a commit and which should cause a rollback.
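For illustration, something along these lines. This is a hedged sketch: the exact RollbackConfiguration interface shape is an assumption and may differ per Axon version, so verify the signature before using it.

// Hypothetical sketch: commit on checked "business" exceptions, roll back
// only on unchecked ones.
SimpleCommandBus commandBus = new SimpleCommandBus();
commandBus.setRollbackConfiguration(new RollbackConfiguration() {
    public boolean rollBackOn(Throwable throwable) {
        return throwable instanceof RuntimeException;
    }
});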

I have created a small test to try to reproduce the situation, but I see all events being published correctly.

Can you provide a bit more detail about the (Axon) infrastructure you are using?

Cheers,

Allard

Hi Allard,

Thanks again for your response.

As you have rightly assumed - I am using the GenericJpa abstraction with Axon 1.3 and I am not interested in event sourcing at this time - I want to crawl before I run… :stuck_out_tongue:

I understand the UoW process just as you described it; however, there are a couple of things in my implementation that might explain the odd behavior.

  1. Given the large number of writes in my application, I am using an Ehcache in front of my repository to help with scaling. This is implemented as a ‘write-through’ cache which, on a cache miss, asks the repository for the Aggregate (repository.load()). If the aggregate cannot be found in the repository (I now handle the AggregateNotFoundException as mentioned in the previous discussion in this group), the application generates a new aggregate, and this is added to the repository all within the same UoW. The UoW then commits and all events are published as expected.

  2. When the cache returns an aggregate successfully and I then apply events - this is when I seem to have trouble. Events only seem to be published as part of the initial Aggregate construction and the invocation of the repository.add(aggregate) method - but at no other time.

The only thing I can suggest is that perhaps there is something wrong with using the cache in front of the repository.

So this is my process…

  1. Look for aggregate in Ehcache
  2. If not found then attempt to load from repository
  3. If not found in repo (AggregateNotFoundException is thrown) then generate a new Aggregate.
    3 a) Cache.put(new Aggregate) which then invokes the repo.add(new Aggregate)
    3 b) Unit of Work closes and all events publish successfully
  4. If the aggregate is found in the cache, then changes and events are applied to the Aggregate
    4 a) UoW completes but the events are not registered or published.
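In code, the flow above looks roughly like this (a simplified sketch; the method, names, and Ehcache usage are illustrative):

public MyAggregate loadOrCreate(AggregateIdentifier aggregateId) {
    Element cached = cache.get(aggregateId.asString());            // step 1
    if (cached != null) {
        return (MyAggregate) cached.getObjectValue();              // step 4
    }
    try {
        return repository.load(aggregateId);                       // step 2
    } catch (AggregateNotFoundException e) {
        MyAggregate aggregate = new MyAggregate(aggregateId);      // step 3
        cache.put(new Element(aggregateId.asString(), aggregate)); // 3a: put invokes repo.add()
        return aggregate;                                          // 3b: UoW commit publishes events
    }
}
// 4a: after a cache hit, events applied to the aggregate never get published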

Any thoughts?

Cheers

Simon

Hi Simon,

most people don’t like crawling, so they just run and see where it leads them. The result is that there is no explicit caching support in the ORM-style repositories. There is in the event sourced versions.

There are several solutions. One is to use the second-level cache of Hibernate (or whatever JPA implementation you use). It will prevent the EntityManager from going to the database if an aggregate is cached. Since aggregates are always loaded by their ID, this type of caching is very efficient. Do note that I am not sure whether this solution clears “corrupted” aggregates if you have applied state changes but then decided to roll back the transaction.
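As an illustration, enabling it typically means marking the aggregate entity as cacheable. A hedged sketch for Hibernate with Ehcache follows; the annotation and property names may vary with your Hibernate version, and the entity is hypothetical:

import javax.persistence.Entity;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Marks the entity for the second-level cache. In addition, the Hibernate
// configuration needs hibernate.cache.use_second_level_cache=true and an
// Ehcache cache provider/region factory configured.
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class MyAggregate extends AbstractAggregateRoot {
    // ...
}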

The other solution is to override the “doLoad” method. You would want to check your cache first. If it is a miss, call super.doLoad() and store the result in the cache. Otherwise, simply return the cached result without calling super.doLoad(). The CachingEventSourcingRepository class contains an example of how to implement this in such a way that “corrupted” aggregates are automatically cleared from the cache.
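A rough sketch of that approach on top of the GenericJpaRepository; the constructor and doLoad signature are assumptions based on Axon 1.x, so double-check them against your version:

import net.sf.ehcache.Ehcache;
import net.sf.ehcache.Element;
import org.axonframework.domain.AggregateIdentifier;
import org.axonframework.domain.AggregateRoot;
import org.axonframework.repository.GenericJpaRepository;

public class CachingJpaRepository<T extends AggregateRoot> extends GenericJpaRepository<T> {

    private final Ehcache cache;

    public CachingJpaRepository(Class<T> aggregateType, Ehcache cache) {
        super(aggregateType);
        this.cache = cache;
    }

    @Override
    @SuppressWarnings("unchecked")
    protected T doLoad(AggregateIdentifier identifier, Long expectedVersion) {
        Element cached = cache.get(identifier.asString());
        if (cached != null) {
            return (T) cached.getObjectValue();  // cache hit: skip super.doLoad()
        }
        T aggregate = super.doLoad(identifier, expectedVersion);
        cache.put(new Element(identifier.asString(), aggregate));
        return aggregate;
    }
}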

The lack of cache support in the GenericJpaRepository has been noted and is on my “to do” list.

Cheers,

Allard

Hi Allard,

Your second suggestion, overriding doLoad, is exactly how I have implemented it. So you think this should have no bearing on whether events are registered?

Okay,

So when I comment out the putWithWriter() method on my Ehcache implementation, which is invoked whenever a new Aggregate is generated and saved to the repository, no events are published at all.

It seems that when the aggregate is in the cache and events are applied, these events are only published if the aggregate is subsequently saved by explicitly calling repository.add(aggregate) - which would almost make the point of the cache redundant.

Is this what you would expect?

Are you sure you overrode the “doLoad” method, and not the “load” method? In the CachingEventSourcingRepository, I noticed that an aggregate is also stored in the cache on “doSaveWithLock” and “doDeleteWithLock”. You might want to override these methods as well.

The only thing that repository.add() does is register the aggregate with the current unit of work. So if that call makes a difference, you’re probably putting the cache around the wrong method.

Cheers,

Allard

The overridden ‘doLoad’ method returns an Aggregate that is then put into the cache. I am not using method caching; I am explicitly ‘putting’ aggregates into the cache that are returned from this overridden method.

How would I implement the overridden doSaveWithLock and doDeleteWithLock?

Thanks Allard…

The CachingEventSourcingRepository class does exactly what you need, except that it is an Event Sourcing repository. You can apply the exact same logic to the GenericJpaRepository.
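Sketched against the hypothetical CachingJpaRepository from earlier (again, the hook signatures are assumptions based on Axon 1.x, so verify them against your version):

// Keep the cache in sync on save, and evict on delete, so that a stale or
// "corrupted" aggregate is never served from the cache afterwards.
@Override
protected void doSaveWithLock(T aggregate) {
    super.doSaveWithLock(aggregate);
    cache.put(new Element(aggregate.getIdentifier().asString(), aggregate));
}

@Override
protected void doDeleteWithLock(T aggregate) {
    super.doDeleteWithLock(aggregate);
    cache.remove(aggregate.getIdentifier().asString());
}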

I’ve done a quick read on Hibernate second-level caching. It looks like that would help you solve your issues as well. It might actually be a much simpler solution.

Cheers,

Allard