Delete events in the event store


We are developing an application that uses Axon 3 and runs on premise.

Due to bug fixes and larger refactorings we were able to remove some events from one of our aggregates (we no longer emit these events and were able to remove their @EventSourcingHandler methods) and their projections.
Unfortunately, a large number of these now-unused events is still stored in the event stores of our customers. We need to remove them because of the storage space they consume.
We found out that some customers have around 20,000,000 instances of these events, and the events also have rather large payloads.

Is it risky to simply remove the events from the event store? Or should we only remove the events that were emitted before the latest aggregate snapshot?

Thank you for your help!


Hi Andreas,

If the events are really not used anymore, and in your environment it’s ok to pretend they never existed, then there is nothing stopping you from simply deleting them from the Event Store.



Thank you for the answer!


Unfortunately, we found out that there is a potential issue with the JdbcEventStorageEngine when many consecutive events are deleted from the event store.
The storage engine fetches events from the domainevententry table by selecting only rows whose global index lies between the provided boundaries.
However, if no event is found between the boundaries, the tracking event processors get stuck.
This happened in our case after we deleted many unused events from the event store.

For example:
Let’s say we have 500 events in the event store (global index 1 to 500).
If we delete the events with index 100 through 400, the tracking event processors can get stuck, because the WHERE clause that the JdbcEventStorageEngine uses for fetching new events looks like this:
WHERE (globalIndex > 100 AND globalIndex <= 200) ORDER BY globalIndex ASC
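The stuck condition can be illustrated with a small, self-contained sketch (plain Java, no Axon dependency; the store contents and batch size here are hypothetical, simulating the windowed query semantics rather than the actual engine code):

```java
import java.util.ArrayList;
import java.util.List;

public class GapDemo {
    // Hypothetical store: global indices 1..99 and 401..500 remain
    // after deleting indices 100..400.
    static List<Long> store() {
        List<Long> indices = new ArrayList<>();
        for (long i = 1; i <= 99; i++) indices.add(i);
        for (long i = 401; i <= 500; i++) indices.add(i);
        return indices;
    }

    // Semantics of the windowed query:
    // WHERE globalIndex > ? AND globalIndex <= ? + batchSize
    static List<Long> fetchWindow(List<Long> store, long lowerBound, int batchSize) {
        List<Long> batch = new ArrayList<>();
        for (long idx : store) {
            if (idx > lowerBound && idx <= lowerBound + batchSize) {
                batch.add(idx);
            }
        }
        return batch;
    }

    public static void main(String[] args) {
        // After consuming up to index 99, the next window is (99, 199].
        // That window falls entirely inside the deleted range, so the
        // batch is empty and the tracking token never advances.
        System.out.println(fetchWindow(store(), 99, 100).isEmpty()); // true
    }
}
```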

Do you have any ideas how we could circumvent this issue? Unfortunately, it is necessary for us to delete the events and we don’t know in advance if this issue will occur since this application runs on-premise.

We thought about switching to a custom storage engine that uses a LIMIT clause with the batch size instead of the upper global index boundary, but we would prefer a different solution.

We use Axon 3.4.1 with postgres.

Thank you for your help,

Hi Andreas,

this is indeed a known issue. It has been fixed in 4.0.3. The solution was to perform an extra query when an empty batch is returned, which checks for the highest sequence number. If that number is higher than the upper bound of the query, the StorageEngine knows that it needs to jump a larger gap.
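The gap-jumping idea described above can be sketched without Axon (method and variable names here are illustrative, not the actual 4.0.3 implementation):

```java
import java.util.List;

public class GapJump {
    // Computes the lower bound for the next fetch window. If the batch
    // was empty, compare the store's highest global index against the
    // window's upper bound: if events exist beyond the window, advance
    // the token past the empty range instead of re-reading it forever.
    static long nextLowerBound(List<Long> batch, long lowerBound,
                               int batchSize, long highestGlobalIndex) {
        long upperBound = lowerBound + batchSize;
        if (!batch.isEmpty()) {
            // Normal case: advance to the last index that was read.
            return batch.get(batch.size() - 1);
        }
        if (highestGlobalIndex > upperBound) {
            // Gap detected: skip ahead so later windows reach the
            // events on the far side of the deleted range.
            return upperBound;
        }
        // No newer events exist yet; stay put and poll again later.
        return lowerBound;
    }

    public static void main(String[] args) {
        // Empty batch over a gap, but events exist up to index 500:
        System.out.println(nextLowerBound(List.of(), 99, 100, 500)); // 199
    }
}
```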


Hi Allard,

thank you for your help!
For now, we have created a custom event storage engine that overrides the readEventData methods and uses "LIMIT ?" with the provided batch size instead of the upper bound.
This appears to be working as expected for us.
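For anyone hitting the same problem on Axon 3.x, a self-contained sketch of why the LIMIT variant does not get stuck (plain Java simulation of the query semantics with hypothetical store contents, not the actual engine code):

```java
import java.util.ArrayList;
import java.util.List;

public class LimitFetchDemo {
    // Hypothetical store: global indices 1..99 and 401..500 remain
    // after deleting indices 100..400.
    static List<Long> store() {
        List<Long> indices = new ArrayList<>();
        for (long i = 1; i <= 99; i++) indices.add(i);
        for (long i = 401; i <= 500; i++) indices.add(i);
        return indices;
    }

    // Semantics of: WHERE globalindex > ? ORDER BY globalindex ASC LIMIT ?
    // There is no upper bound, so the batch always contains the next
    // existing events, no matter how wide the gap of deleted indices is.
    static List<Long> fetchWithLimit(List<Long> sortedStore, long lowerBound, int batchSize) {
        List<Long> batch = new ArrayList<>();
        for (long idx : sortedStore) {
            if (idx > lowerBound) {
                batch.add(idx);
                if (batch.size() == batchSize) {
                    break;
                }
            }
        }
        return batch;
    }

    public static void main(String[] args) {
        // Consuming past index 99 jumps straight over the deleted range.
        System.out.println(fetchWithLimit(store(), 99, 100).get(0)); // 401
    }
}
```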

When we update to Axon 4.x, we’ll remove our custom event storage engine again.