Replaying Events in a Microservices Architecture where command and query microservices communicate through RabbitMQ.

Hi,

My application has a setup similar to this popular example of Spring microservices and CQRS with Axon: https://github.com/benwilcock/cqrs-microservice-sampler

So far my CQRS setup works as expected:

  • A request comes in from the GUI -> it goes to the command microservice, which stores the resulting event in a MongoDB event store and propagates it to the query microservice through RabbitMQ; the query microservice maintains its own relational database, which is then used
    for queries on the domain (a rough sketch of the query-side wiring follows below).
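
In case it helps, the query-side wiring for this flow looks roughly like the sketch below. This is only a minimal sketch in the style of Axon 3's AMQP support; the queue name "query-side-queue", the processor name "queryProjection" and the injected serializer bean are placeholders, not my actual configuration.

```java
import com.rabbitmq.client.Channel;
import org.axonframework.amqp.eventhandling.DefaultAMQPMessageConverter;
import org.axonframework.amqp.eventhandling.spring.SpringAMQPMessageSource;
import org.axonframework.config.EventHandlingConfiguration;
import org.axonframework.serialization.Serializer;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class QuerySideAmqpConfig {

    // Bridges messages arriving on the RabbitMQ queue into an Axon event message source.
    // "query-side-queue" is a placeholder queue name.
    @Bean
    public SpringAMQPMessageSource queryEventSource(Serializer serializer) {
        return new SpringAMQPMessageSource(new DefaultAMQPMessageConverter(serializer)) {
            @RabbitListener(queues = "query-side-queue")
            @Override
            public void onMessage(Message message, Channel channel) throws Exception {
                super.onMessage(message, channel);
            }
        };
    }

    // Attaches the query-side projection as a subscribing processor fed by that AMQP source.
    @Autowired
    public void configure(EventHandlingConfiguration config, SpringAMQPMessageSource queryEventSource) {
        config.registerSubscribingEventProcessor("queryProjection", c -> queryEventSource);
    }
}
```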

My problem now is what happens when I want to replay the events. The whole point of keeping an event store is to be able to replay all events and reconstruct entire databases from scratch in other applications or microservices.
RabbitMQ does not keep the events once a consumer has picked them up, so replaying through RabbitMQ is, as explained in this group earlier, not possible (some persistence configuration might be possible, but I would not want to do that with RabbitMQ).

From searching the group and the documentation, I understand that I need to use tracking processors together with a token store. Then I can set up logic for replaying events, or even delete the token manually and force a replay (a nice option too).
But I understand that this is not possible with my existing RabbitMQ-based architecture.

So to sum up, I see 2 roads here:

  1. Either keep RabbitMQ and probably lose the ability to replay events. Searching the group, I could not find a way to replay events using RabbitMQ. The events are in the event store managed by the command side, so would it be possible for the command side to somehow replay all or specific events to specific queues (so that the query services pick them up and some kind of reconstruction happens)?
    As I understand it, in this scenario with the AMQP broker the query side uses subscribing processors, is passive in simply receiving events, and never actually touches the event store directly.

  2. Or remove RabbitMQ (for Axon-specific work) and have the query side use tracking processors, access the event store directly, perform any replays by itself, and also maintain a token store (a configuration sketch follows below).
    In this scenario, command and query services communicate through the event store: the event store is updated by the command side and the query side reads from it.
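
For road 2, my understanding is that the query-side configuration would look roughly like the sketch below (assuming Axon 3 with JPA available on the query side for the token store). The processor name is made up, and the EventStorageEngine bean that would have to point at the same MongoDB event store the command side writes to is omitted here.

```java
import org.axonframework.common.jpa.EntityManagerProvider;
import org.axonframework.config.EventHandlingConfiguration;
import org.axonframework.eventhandling.tokenstore.TokenStore;
import org.axonframework.eventhandling.tokenstore.jpa.JpaTokenStore;
import org.axonframework.serialization.Serializer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class QuerySideTrackingConfig {

    // The token store records how far each tracking processor has read in the event store.
    @Bean
    public TokenStore tokenStore(EntityManagerProvider entityManagerProvider, Serializer serializer) {
        return new JpaTokenStore(entityManagerProvider, serializer);
    }

    // Register the query-side projection as a tracking processor instead of a subscribing one,
    // so it reads events directly from the (shared) event store.
    // The handlers must be assigned to this processing group, e.g. with @ProcessingGroup("queryProjection").
    @Autowired
    public void configure(EventHandlingConfiguration config) {
        config.registerTrackingProcessor("queryProjection");
    }
}
```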

My questions are:

  1. Am I missing something regarding RabbitMQ and the replay of events? Is it possible somehow?

  2. Which of the two roads would you suggest?
    While I would like the query side to be more passive and not access the event store directly, from briefly searching the topic I see that tracking processors with tokens and so on are a safe and reliable way to replay events.

Thank you

Vasilis

Is your question the same as the one described here? I just posted it and I think it is the same.

https://groups.google.com/forum/m/#!topic/axonframework/RuOZulgbcGA

Hi, we have some related concerns, although I would still like my question to be addressed by the Axon team.
As you can see in my question, I partly answer your problem: it may not be possible at all (unless the Axon team says otherwise and gives us a hint) to combine RabbitMQ and a tracking processor. With RabbitMQ you use a subscribing processor, and you cannot have replays that way. You could use a streaming platform such as Apache Kafka, which does keep the events, and do the replay outside of Axon, but that rather defeats the whole point of the event store.
What I am currently doing is:

  • Keep the communication between command and query microservices through RabbitMQ, with a subscribing processor on the query-side microservice.
    So the scenario is: commands come in -> they are stored as events in the event store on the command side and propagated through RabbitMQ to all listening query-side microservices, which then update their database records. From your brief question, I understand you have
    a similar approach.
  • Keep a tracking processor on the query side which listens to the same events as the subscribing processor (duplicate code, yes, but for the moment that is the least of my concerns).
    For this tracking processor to work, it needs direct access to the event store, so I have additional configuration in the query-side microservice to connect to my event store (MongoDB).
    This tracking processor is registered based on an external boolean property and is not registered by default, unless I trigger it by setting the property to true (a sketch of this conditional registration follows right after this list).
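
The conditional registration is nothing fancy; roughly the sketch below, using Spring Boot's @ConditionalOnProperty. The property name and processor name are only illustrative, not the ones we actually use.

```java
import org.axonframework.config.EventHandlingConfiguration;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Configuration;

// Only loaded when the (illustrative) property "replay.tracking-processor.enabled" is true,
// e.g. when the service is started with --replay.tracking-processor.enabled=true
@Configuration
@ConditionalOnProperty(name = "replay.tracking-processor.enabled", havingValue = "true")
public class ReplayTrackingProcessorConfig {

    // Registers the replay handlers under a tracking processor that reads straight from
    // the event store (MongoDB in our case). The handlers belong to this processing group,
    // e.g. annotated with @ProcessingGroup("replayProjection").
    @Autowired
    public void configure(EventHandlingConfiguration config) {
        config.registerTrackingProcessor("replayProjection");
    }
}
```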

What I achieve with this:

  • Still use RabbitMQ for communication between the microservices
  • Be able to do a full, safe, manual replay when things go south, by firing up the tracking processor and making sure I have cleared the token table (created by the TokenStore bean), so that the tracking processor is forced to do a full replay of all events found in the event store.

So when we encounter an alert in our systems (we are still investigating what those alerts will be), we will drop the query-side DB, trigger the tracking processor, and have the events replayed so that fresh data comes in (sketched below). In normal scenarios, things play out as usual with RabbitMQ and a subscribing processor only.
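
To make the "clear the token table" step concrete, we do something along the lines of the sketch below before starting the tracking processor. The table and column names are assumptions based on default JPA naming and depend entirely on how your TokenStore bean is configured, so check your own schema first.

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class TokenResetService {

    private final JdbcTemplate jdbcTemplate;

    public TokenResetService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Removes the stored tracking token for the given processor, so that the next time the
    // tracking processor starts it has no recorded position and replays the event store from scratch.
    // NOTE: "token_entry" / "processor_name" are assumed default names; verify them against the
    // schema that your TokenStore bean actually created.
    public void clearTokenFor(String processorName) {
        jdbcTemplate.update("DELETE FROM token_entry WHERE processor_name = ?", processorName);
    }
}
```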

Keep in mind that the tracking processor on the query side is capable of completely replacing RabbitMQ, since it connects directly to the event store. This can easily be verified if you configure the tracking processor on the query side to listen to specific events,
add them from the command side to the event store, and disable the RabbitMQ propagation (or simply deactivate the subscribing processor on the query side). You will notice the tracking processor picking up the events and performing the configured actions on the DB.
So in short, my approach described here, with the two processors on the query side, is only meant to keep RabbitMQ in the picture without losing the ability to replay events. We are still investigating whether we still need RabbitMQ or whether we will rely on the tracking processor alone.

Hi Vasilis,

Your assumptions are correct: RabbitMQ is not suitable when you want to perform replays.

We highly recommend using tracking processors and reading from the Event Store. As you might know, we released AxonDB, a purpose-built Event Store server, at the beginning of this year, and we have seen that clients that moved to AxonDB (often in combination with AxonHub) end up with a much simpler messaging architecture, as all components read from the same source. Compared to a RabbitMQ-based (or any message-queue-based) solution, it is much simpler, as messages are stored and retrievable by any component, which also guarantees the order in which components observe these messages.

So for microservices, we very much recommend using a bus/broker where messages are stored inside the bus/broker and then consumed by the services that are interested. If you want to stick to open-source components, you can achieve this with an EmbeddedEventStore in each component (sharing the underlying database), or with a Kafka-based message broker, in combination with an Event Store if you wish to do event sourcing as well.
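
As a rough illustration of the EmbeddedEventStore option: each component creates its own EmbeddedEventStore on top of a storage engine that points at the same underlying database. The sketch below uses the JDBC storage engine and a no-op transaction manager purely to keep it short; the same idea applies to the Mongo engine you already use, and in a real setup you would plug in your actual data source and transaction manager.

```java
import javax.sql.DataSource;
import org.axonframework.common.jdbc.DataSourceConnectionProvider;
import org.axonframework.common.transaction.NoTransactionManager;
import org.axonframework.eventsourcing.eventstore.EmbeddedEventStore;
import org.axonframework.eventsourcing.eventstore.EventStorageEngine;
import org.axonframework.eventsourcing.eventstore.EventStore;
import org.axonframework.eventsourcing.eventstore.jdbc.JdbcEventStorageEngine;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SharedEventStoreConfig {

    // The storage engine points at the database that all components share.
    @Bean
    public EventStorageEngine storageEngine(DataSource sharedEventStoreDataSource) {
        return new JdbcEventStorageEngine(
                new DataSourceConnectionProvider(sharedEventStoreDataSource),
                NoTransactionManager.INSTANCE);
    }

    // Each component gets its own EmbeddedEventStore, but they all append to and read from the
    // same underlying storage, so tracking processors in any component can track and replay.
    @Bean
    public EventStore eventStore(EventStorageEngine storageEngine) {
        return new EmbeddedEventStore(storageEngine);
    }
}
```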

Hope this helps.
Cheers,

Allard

Hi Allard,
Thank you very much for the response. We are aware of AxonDB and are currently considering starting with the developer version for evaluation.