Hope you can help us understand a problem we’re having…
In our logs we’re seeing java.sql.SQLIntegrityConstraintViolationException errors related to the events being published. This happens under continuous heavy load on the system. We have a hypothesis about why this might be happening.
When the unit of work is committed, an exception is thrown and the aggregate becomes blacklisted. Before the exception occurred, however, we had already published the event for that command. When the aggregate is then reprocessed, we republish the same event, which causes the listeners to process it again and attempt to insert duplicate entries into the database.
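To make the hypothesis concrete, here is a minimal self-contained simulation of the sequence we suspect (no Axon classes involved; all class and identifier names here are our own stand-ins): the event reaches the listeners before the commit, the commit fails, and on reprocessing the same event is delivered again, so a listener that does a plain insert hits a duplicate key.

```java
import java.util.List;
import java.util.HashSet;
import java.util.Set;

public class DuplicatePublishDemo {

    // Stand-in for a projection table with a unique key on the event id.
    static class ProjectionTable {
        private final Set<String> keys = new HashSet<>();

        void insert(String eventId) {
            if (!keys.add(eventId)) {
                // Plays the role of SQLIntegrityConstraintViolationException.
                throw new IllegalStateException("duplicate key: " + eventId);
            }
        }
    }

    // A listener that inserts blindly, as our real listeners do today.
    static class InsertingListener {
        private final ProjectionTable table;
        InsertingListener(ProjectionTable table) { this.table = table; }
        void on(String eventId) { table.insert(eventId); }
    }

    // Simulates one unit of work: the event is published to listeners
    // FIRST, then the commit either succeeds or throws.
    static void processCommand(String eventId,
                               List<InsertingListener> listeners,
                               boolean commitFails) {
        for (InsertingListener l : listeners) {
            l.on(eventId);                                // event reaches listeners...
        }
        if (commitFails) {
            throw new RuntimeException("commit failed");  // ...before this happens
        }
    }

    public static void main(String[] args) {
        ProjectionTable table = new ProjectionTable();
        List<InsertingListener> listeners = List.of(new InsertingListener(table));

        // First attempt: commit fails AFTER the event was already published.
        try {
            processCommand("evt-1", listeners, true);
        } catch (RuntimeException e) {
            System.out.println("first attempt: " + e.getMessage());
        }

        // Retry after the aggregate is reprocessed: same event is delivered again.
        try {
            processCommand("evt-1", listeners, false);
            System.out.println("retry: ok");
        } catch (IllegalStateException e) {
            System.out.println("retry: " + e.getMessage());
        }
    }
}
```

If this sketch matches what the framework actually does, the retry can never succeed without either deduplicating on the event id in the listener or deferring publication until after the commit.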
Is it possible that when an aggregate is blacklisted and reprocessed, the same events are published twice, or even more times?