I’m currently building an event-sourced microservice architecture using Axon, and I’m wondering what the best way would be to reliably publish events to an event log. I plan to use Kafka to publish my events, but even then there is a remote possibility that the event store and Kafka end up out of sync (e.g., a power outage after the Kafka publish but before the DB commit).
Ideally, I would still like the ability to replay events so that different microservices could rebuild their projections. A couple of options come to mind:
- If microservice X is interested in microservice Y’s events, X could create a TrackingEventProcessor against Y’s event store (it would need to share the DB connection string) and source its events from there. I guess this eliminates the need for Kafka? Infrastructure-wise, though, this sounds like a very tightly coupled solution.
- Create a TrackingEventProcessor in each microservice whose sole responsibility is to publish to Kafka through an event handler. My understanding is that only persisted events would then be published, and it would also allow event replays via a token reset.
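To make the second option concrete, here is a minimal, framework-free sketch of the mechanics I have in mind. None of this is Axon’s or Kafka’s actual API; `EventStore`, `KafkaForwardingProcessor`, and `kafka_publish` are hypothetical stand-ins. The point is just that the processor tracks its position in the *persisted* event log with a token, forwards everything past the token, and a replay is nothing more than resetting that token to zero:

```python
class EventStore:
    """Stand-in for the persisted event store: only committed events land here."""
    def __init__(self):
        self._events = []

    def append(self, event):
        self._events.append(event)

    def read_from(self, token):
        # Everything persisted at or after the given tracking position.
        return self._events[token:]


class KafkaForwardingProcessor:
    """Stand-in for a tracking processor whose sole handler forwards to Kafka."""
    def __init__(self, store, kafka_publish):
        self.store = store
        self.kafka_publish = kafka_publish  # stand-in for a real Kafka producer
        self.token = 0  # in real life this token would itself be persisted

    def process(self):
        for event in self.store.read_from(self.token):
            self.kafka_publish(event)
            self.token += 1  # advance only after a successful publish

    def reset_token(self):
        """Replay: republish every persisted event from the beginning."""
        self.token = 0


store = EventStore()
published = []  # collects what "Kafka" received
processor = KafkaForwardingProcessor(store, published.append)

store.append("OrderCreated")
store.append("OrderShipped")
processor.process()      # forwards both persisted events
processor.process()      # nothing new past the token, so nothing is re-sent

processor.reset_token()  # replay
processor.process()      # republishes everything from the start
print(published)         # ['OrderCreated', 'OrderShipped', 'OrderCreated', 'OrderShipped']
```

Since only events that made it into the store are ever forwarded, the dual-write gap from the first paragraph disappears: if the process dies before the publish, the token simply hasn’t advanced and the event is retried on restart (at-least-once delivery, so downstream consumers should be idempotent).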
I’m also trying to work around the issue that our Kafka infrastructure only has a finite retention period.
These might be crazy ideas, but I’d like to explore all options before introducing something like change data capture.