I’m working on putting a POC together using Axon that consists of two microservices (a user service that manages all user accounts, and an operation service that manages…well, operations).
When a user creates an operation, a pilot ID needs to be specified in the request which should be the ID of an existing user. I would like the operation service to be decoupled from the user service such that when an operation is created, the service doesn’t need to make a request to the user service to verify the pilot exists. The operation service is itself aware of all users that exist.
I plan on doing this by having the user service publish user events to a topic that fan out to a number of queues, one of them being consumed by the operation service. When a new user is created, the id of the user can be stored by the operation service. When a user is deleted, the id will be removed from the operation service…etc.
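To make the idea concrete, here's a minimal sketch of what such a handler could look like on the operation service side. The event class names, the projection class, and the in-memory set are all made up for illustration; in Axon you'd put `@EventHandler` on the handler methods and likely back the lookup with a real table rather than a set.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical events published by the user service.
record UserCreatedEvent(String userId) {}
record UserDeletedEvent(String userId) {}

// Projection in the operation service tracking which user IDs exist.
// In Axon, each `on` method would carry an @EventHandler annotation;
// the in-memory set stands in for a persistent lookup table.
class UserLookupProjection {
    private final Set<String> knownUserIds = ConcurrentHashMap.newKeySet();

    // @EventHandler
    public void on(UserCreatedEvent event) {
        knownUserIds.add(event.userId());
    }

    // @EventHandler
    public void on(UserDeletedEvent event) {
        knownUserIds.remove(event.userId());
    }

    // Used when validating the pilot ID on operation creation.
    public boolean pilotExists(String userId) {
        return knownUserIds.contains(userId);
    }
}
```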
Now, let’s say I introduce the operation service after a bunch of users already exist. My question is, how do I get the operation service up to speed on all the users that already exist? My thought would be to replay all user events in the user service which would be routed to the operation service, but wouldn’t all queues/consumers also receive these events? Is this the right way to do this?
Thanks in advance!
Axon Framework’s solution to consuming events from a message source is through so-called Event Processors.
These processors come in two main flavors:
- Subscribing Event Processors (doc)
- Streaming Event Processors (doc)
For an Event Processor to have the freedom to choose where to start in the stream, you are required to use a Streaming Event Processor.
In short, a Streaming Event Processor lets you choose where it should start on the provided streamable message source.
Furthermore, it provides the capability of resetting its position on the stream, imposing a replay of events at any given time.
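For illustration, triggering such a reset on a running application could look roughly like the following (the processor name "operation-projection" is an assumption for this example):

```java
import org.axonframework.config.Configuration;
import org.axonframework.eventhandling.TrackingEventProcessor;

class ReplayTrigger {
    // Sketch: reset a streaming processor so it replays from the start
    // of the stream. The processor must be shut down before its tokens
    // can be reset.
    static void replayOperationProjection(Configuration configuration) {
        configuration.eventProcessingConfiguration()
                .eventProcessor("operation-projection", TrackingEventProcessor.class)
                .ifPresent(processor -> {
                    processor.shutDown();
                    processor.resetTokens(); // position moves back to the tail
                    processor.start();
                });
    }
}
```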
Now, with all that said, we can move to your questions:
Axon Framework defaults to using a TrackingEventProcessor, which implements the Streaming Event Processor. Furthermore, newly introduced instances will default to start at the tail of the stream. Thus, your event handlers will automatically be brought up to speed with the events in your Event Store.
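If you ever want to be explicit about (or change) that starting point, the initial token is configurable per processor. A sketch, again assuming a processor named "operation-projection":

```java
import org.axonframework.config.EventProcessingConfigurer;
import org.axonframework.eventhandling.TrackingEventProcessorConfiguration;

class ProcessorConfig {
    // Sketch: explicitly start the hypothetical "operation-projection"
    // processor at the tail of the stream, i.e. the very first event.
    static void configure(EventProcessingConfigurer configurer) {
        configurer.registerTrackingEventProcessorConfiguration(
                "operation-projection",
                config -> TrackingEventProcessorConfiguration
                        .forSingleThreadedProcessing()
                        .andInitialTrackingToken(source -> source.createTailToken()));
    }
}
```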
Replaying all the user events from within the user service sounds to me like republishing all the user events. Since that wouldn’t be a desirable outcome, it isn’t how Axon Framework approaches the problem.
It’s not the event publisher that replays events, but the event consumers (in Axon, the Event Processors are the event consumers). Because of this, you wouldn’t get duplicate publication of events.
I do have a follow-up question for you, @jackson.korba.
What type of Event Storage are you going for with this POC, actually?
Thanks for the response!
I’m using postgres via JPA as an event store and RabbitMQ for messaging. Given what you said, I’m thinking what I want won’t really work unless the events/processors are stored in a single location, i.e. AxonServer…
Sure thing, glad to help!
The AMQP support provided for the framework generates a so-called “Subscribable Message Source.” This makes RabbitMQ a message source that only lets you read the events as they happen, simply because RabbitMQ doesn’t store the events in a way that allows consumers to decide where and when to start reading.
Using Axon Server as the message source would indeed count as a solution, as it is a “Streamable Message Source,” and thus a source you can use for the Streaming Event Processor.
It is by no means the only solution, though. As long as the source is a streamable variant, you should be good.
Knowing this, you have the opportunity to use:
- A dedicated Event Store solution, like Axon Server
- A JPA-based Event Store
- A JDBC-based Event Store
- A Mongo-based Event Store
- Kafka, configured as a Streamable Message Source
That leaves option two as a valid way to connect your applications, given that you’ve selected PostgreSQL via JPA as the Event Store.
What is of utmost importance for this step is to ensure that Operation Service and User Service belong to the same Bounded Context.
If they do, they should be free to store and read from the same Event Store.
That thus allows your services to connect to the same database for events.
And with that, you wouldn’t even need to use RabbitMQ anymore. The event table becomes the event communication means.
Just to repeat, only do this if these microservices belong to the same bounded context.
Another vital pointer, in this case, is to have a dedicated database just for your events. No query models/projections should reside next to the events, as that would introduce further coupling.
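For completeness, wiring both services against that shared events database could be sketched as below. The bean/class names are assumptions; with Spring Boot and the Axon starter, most of this is auto-configured once the datasource points at the shared schema.

```java
import org.axonframework.common.jpa.EntityManagerProvider;
import org.axonframework.common.transaction.TransactionManager;
import org.axonframework.eventsourcing.eventstore.EventStorageEngine;
import org.axonframework.eventsourcing.eventstore.jpa.JpaEventStorageEngine;

class EventStoreConfig {
    // Sketch: a JPA-based storage engine backed by the shared Postgres
    // database. Both services within the bounded context would point
    // this at the same events schema.
    static EventStorageEngine storageEngine(EntityManagerProvider entityManagerProvider,
                                            TransactionManager transactionManager) {
        return JpaEventStorageEngine.builder()
                .entityManagerProvider(entityManagerProvider)
                .transactionManager(transactionManager)
                .build();
    }
}
```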
I see. I was originally envisioning each service having its own event store DB + projection DB, but I think what you’re suggesting is having a single event store per bounded context, which can/will likely mean multiple services using the same event store DB. I could be thinking about it wrong, but doesn’t that violate microservice independence?