Refactor existing aggregates


What happens if we need to refactor an existing aggregate? This happens a lot in DDD, as your first model is never right.

How can the events be rerouted to the newly created aggregates? Are there any best practices or strategies for this in Axon?


I’d also like to know the answer to this question.
It is in a similar vein to my last one: !topic/axonframework/yqJIfdjRO88.

Hi Matthias, Michael,

For event revisions, Axon provides the upcaster mechanism.
This, however, only covers adjusting the event payload and payload type.

Changing which aggregate an event originates from is not supported out of the box.
I have had to do this a couple of times; in short, it requires writing a more thorough, dedicated upcasting tool that updates the aggregate type and identifier as well.
Doing this effectively rewrites your event store, which is what you're looking for if the event model isn't suitable for the aggregates you've created.
Such a tool is certainly doable, but understandably not ideal. I'd regard this as one of the cons of event sourcing.

I think the most pragmatic solution is to write a very small Axon application with a single Event Handling Component contained in a TrackingEventProcessor.
This event handling component would handle not just the event payload, but the entire EventMessage.
Doing so ensures you have all the information you need to correctly upcast it to its new aggregate instance, if necessary.
After updating the message, you would then store the event in a second event store in its new format.
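To make the idea concrete, here is a minimal plain-Java sketch of the rewriting step such a tool performs. The `StoredEvent` record and the `EventRewriter` class are stand-ins I made up for illustration; in a real Axon application this logic would live in an event handler receiving `EventMessage`s from a TrackingEventProcessor and appending to the second event store, and the "Order to Shipment" remapping rule is purely an example.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a stored domain event: just the fields the tool rewrites.
record StoredEvent(String aggregateType, String aggregateId,
                   long sequenceNumber, String payloadType, String payload) {}

class EventRewriter {
    // Rewrites events of one aggregate type to a new type and identifier,
    // renumbering the sequence so the new stream is contiguous.
    // Events of other aggregates pass through unchanged.
    static List<StoredEvent> reroute(List<StoredEvent> source,
                                     String oldType, String newType,
                                     String newAggregateId) {
        List<StoredEvent> target = new ArrayList<>();
        long seq = 0;
        for (StoredEvent e : source) {
            if (e.aggregateType().equals(oldType)) {
                target.add(new StoredEvent(newType, newAggregateId,
                                           seq++, e.payloadType(), e.payload()));
            } else {
                target.add(e);
            }
        }
        return target;
    }
}
```

The real tool would of course also deal with serialization, metadata, and ordering guarantees; the point here is only that both the aggregate type and identifier get rewritten, which is exactly what a payload-level upcaster cannot do.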

Lately I have found Event Storming a very worthwhile approach to minimizing the number of refactors of your command model.

Hope this helps!


A little addition: I have spoken to several people about this subject, which gave me some additional ideas on how to solve it.

Effectively, the problem is that you want to load an aggregate, but its history has been stored as the history of another aggregate (because we thought that was a good idea at the time). Rewriting is certainly a possibility, and you could argue that you're not rewriting history, but rather assigning the importance of historical moments to other aggregates. Still, there is a solution that keeps the old history "intact".

You could add a special event as the first event of your new aggregate that specifies which events of the other aggregates should be included in the stream. In theory (I haven't tried this in practice), you could use an upcaster to replace this "special" event with the actual events of the old aggregate. Inside the special event, you could define filters, so that not all events are included. If the upcaster API doesn't work for you, wrapping the DomainEventStream with this logic might work as well.
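The stream-wrapping variant could be sketched as follows. The `Event` record, the `ImportHistoryEvent` marker name, and `SplicingStream` are all hypothetical stand-ins (Axon's actual `DomainEventStream` is a lazy stream, not a `List`); the sketch only shows the splicing idea: when the marker is encountered, the old aggregate's events, filtered, are substituted in its place.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Minimal stand-in for a domain event in a stream.
record Event(String aggregateId, String type, String payload) {}

class SplicingStream {
    // Hypothetical marker event type: "import the (filtered) history
    // of another aggregate at this point in the stream".
    static final String IMPORT_MARKER = "ImportHistoryEvent";

    // Expands the new aggregate's stream, replacing the marker event
    // with the matching events from the old aggregate's stream.
    static List<Event> expand(List<Event> newStream,
                              List<Event> oldStream,
                              Predicate<Event> filter) {
        List<Event> result = new ArrayList<>();
        for (Event e : newStream) {
            if (e.type().equals(IMPORT_MARKER)) {
                oldStream.stream().filter(filter).forEach(result::add);
            } else {
                result.add(e);
            }
        }
        return result;
    }
}
```

The appeal of this approach is that the old events stay untouched in the store; only the view the new aggregate gets of its own history changes at load time.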

Things get slightly more complicated when two existing streams need to be merged into one new one. Still, it should be possible to do so using a “merge event”.
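For the merge case, one plausible interpretation (my assumption, not something the thread prescribes) is that the "merge event" triggers interleaving the two source histories by timestamp when the combined aggregate is sourced. A trivial sketch with a made-up `TimedEvent` record:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Stand-in event carrying a timestamp so two histories can be interleaved.
record TimedEvent(String sourceAggregate, long timestamp, String payload) {}

class StreamMerger {
    // Merges the histories of two aggregates into one stream,
    // ordered by timestamp (a stable sort preserves per-aggregate order
    // for events with equal timestamps).
    static List<TimedEvent> merge(List<TimedEvent> a, List<TimedEvent> b) {
        List<TimedEvent> merged = new ArrayList<>(a);
        merged.addAll(b);
        merged.sort(Comparator.comparingLong(TimedEvent::timestamp));
        return merged;
    }
}
```

In practice the hard part is not the interleaving itself but deciding on a total order when timestamps alone are ambiguous.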

All that said, I've been doing CQRS and Event Sourcing for over 9 years now. In that time, I haven't had any case where I really needed to change aggregate boundaries. Of course, that's in no way "proof" that such scenarios don't occur ;-). I do have a habit of careful design, especially when it comes to aggregate boundaries. Within aggregate boundaries, my models tend to change quite a lot. On occasion, there was a situation where it would have been nice to be able to do it, but the benefits weren't all that big, so we decided to live with an aggregate that would otherwise have been split into smaller segments.

Hope this adds to the discussion.