Treatment for aggregates with a large number of events


We’ve been using Axon (2.4 in production, 3.x to be pushed to production shortly) with event sourcing, persisting events in a relational database. Some of our aggregates are accumulating a large number of events. We are aware that snapshotting is one way to keep aggregate load times predictable and performant.

However, we also make enhancements to these aggregates (new events, enhancements to existing ones, enhanced aggregate state, and, more rarely, changes to the state-restoring logic in the event sourcing handlers), which requires us to invalidate snapshots.

This means there will be quite a few occasions when we need to load aggregates from event zero. At some point, we’d also like to think about archiving some of our old data.

We’re curious how others are dealing with situations similar to the above.

Insights highly appreciated!

Hi Prem,

You’re raising two issues here that I’d like to address separately.
First is the invalidation of snapshots when changing aggregate structure. Right now, that is indeed the only way to go. It means that after a deployment with an incompatible aggregate change, you’d have to load aggregates from their historic events. This may (and probably will) cause a delay when loading them for the first time. A workaround here might be to use an upcaster instead of invalidating the snapshot. We have an issue on our roadmap that will allow you to address this.
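To illustrate the upcaster idea: rather than throwing away snapshots after a schema change, old serialized events are transformed on read into the shape the current aggregate expects. The sketch below shows the concept only in plain Java (the event field names are hypothetical, and this is not the Axon upcaster API itself, which works on intermediate serialized representations):

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of event upcasting (not the Axon API): an
// old-revision payload is rewritten on read, so the aggregate only
// ever sees the latest event shape and snapshots need not be dropped.
public class UpcasterSketch {

    // Hypothetical change: revision 2 split "name" into firstName/lastName.
    static Map<String, String> upcast(String revision, Map<String, String> payload) {
        if ("1".equals(revision)) {
            Map<String, String> upcasted = new HashMap<>(payload);
            String name = upcasted.remove("name");
            int space = name.indexOf(' ');
            upcasted.put("firstName", space < 0 ? name : name.substring(0, space));
            upcasted.put("lastName", space < 0 ? "" : name.substring(space + 1));
            return upcasted;
        }
        return payload; // already at the latest revision
    }

    public static void main(String[] args) {
        Map<String, String> old = new HashMap<>();
        old.put("name", "Jane Doe");
        System.out.println(upcast("1", old));
    }
}
```

In Axon 3 the analogous hook is the upcaster chain applied while reading events from the store, so the transformation runs lazily, per event, rather than as a one-off migration.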

Removing/purging events is a different ball game. As these events are needed to reconstruct the aggregate, you’d need to replace them with a “permanent” snapshot. By default, the Event Store will remove older snapshots after creating a new one. This can (and, in this case, should) be switched off.
Another option is moving archived events to different storage. This feature is available out of the box in AxonDB, but could also be constructed from components available in Axon Framework. The archive needs to be on-line, in the sense that it can still serve the data. It may not be as fast as the main event store, but this data shouldn’t need to be accessed except during a replay or when snapshots have been invalidated. We have a customer using this strategy, and so far (over 30 billion events) it has worked well for them.
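One way to picture such a setup is an event source that dispatches reads below an archive boundary to slower storage and serves everything else from the primary store. The sketch below uses hypothetical interfaces (it is not AxonDB's implementation, nor Axon Framework's `EventStore` API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Conceptual sketch of a tiered event store: sequences below the
// boundary live in a slower archive; the archive is only touched on
// full replays or when a snapshot is missing.
public class TieredStoreSketch {

    interface EventSource { List<String> readFrom(long firstSequence); }

    // Simple in-memory source holding events starting at a base sequence.
    static final class ListSource implements EventSource {
        final long base; final List<String> events;
        ListSource(long base, List<String> events) { this.base = base; this.events = events; }
        public List<String> readFrom(long firstSequence) {
            List<String> out = new ArrayList<>();
            for (int i = 0; i < events.size(); i++) {
                if (base + i >= firstSequence) out.add(events.get(i));
            }
            return out;
        }
    }

    static final class TieredStore implements EventSource {
        final EventSource archive; final EventSource primary; final long boundary;
        TieredStore(EventSource archive, EventSource primary, long boundary) {
            this.archive = archive; this.primary = primary; this.boundary = boundary;
        }
        public List<String> readFrom(long firstSequence) {
            List<String> result = new ArrayList<>();
            if (firstSequence < boundary) {
                result.addAll(archive.readFrom(firstSequence)); // slow path: replays only
            }
            result.addAll(primary.readFrom(Math.max(firstSequence, boundary)));
            return result;
        }
    }

    // Demo fixture: events e0..e2 archived, e3..e4 in the primary store.
    static List<String> demo(long firstSequence) {
        EventSource archive = new ListSource(0, Arrays.asList("e0", "e1", "e2"));
        EventSource primary = new ListSource(3, Arrays.asList("e3", "e4"));
        return new TieredStore(archive, primary, 3).readFrom(firstSequence);
    }

    public static void main(String[] args) {
        System.out.println(demo(0)); // full replay: archive + primary
        System.out.println(demo(3)); // normal load: primary only
    }
}
```

The design point is that the slow path is isolated behind the boundary check: as long as a valid snapshot exists, ordinary aggregate loads never pay the archive's latency.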

Hope this helps.


Hello Allard,

Thanks for your prompt reply, as usual. We haven’t yet been able to look at AxonDB with any level of seriousness. It sounds like something we should definitely look into.