Background replays in a cluster?

We just about have our Axon 2 app ready to run in a clustered configuration, but there’s one thing we haven’t gotten working yet.

To support zero-downtime deployment, we allow migrations that require event replays to run in the background while the application is serving request traffic. For migrations that need to build part of the read model from scratch, there’s an internal flag in the application that says whether the replay has finished, and if it hasn’t, the application continues to use whatever data it used before the migration was added. (In cases where we’re altering an existing table and need to repopulate the entire thing from scratch, we create a brand-new one with the new schema and continue populating the old one in parallel.) Then once the migration finishes, it sets the flag and everything switches over to the new read model.
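To make the switchover concrete, here’s a minimal sketch of the flag mechanism. The table names (order_summary_v1/v2) are hypothetical, and I’m using a plain AtomicBoolean to stand in for however the flag is actually stored:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch only: table names are illustrative, and the flag here is an
// in-memory AtomicBoolean standing in for the real flag storage.
class SwitchingReadModel {
    private final AtomicBoolean replayFinished = new AtomicBoolean(false);

    // The migration calls this once its final scan completes.
    void markReplayFinished() {
        replayFinished.set(true);
    }

    // Queries route to the old table until the new one is fully built,
    // then flip over atomically.
    String tableForQueries() {
        return replayFinished.get() ? "order_summary_v2" : "order_summary_v1";
    }
}
```

The point is just that readers never see a half-populated table: they use the old data until the single flip.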

In our current single-node environment, a BackloggingIncomingMessageHandler guarantees that no events slip through unhandled while the replay runs, so this setup is pretty safe and simple.

But in a clustered environment where events can be processed on any node, it gets trickier; there’s a race between the migration doing its final scan for additional events and another node inserting a new event. A naive implementation that just broadcasts a “migration is done” message to the cluster would risk an event never getting processed by the handler that did the replay.

Has anyone solved this, and if so, what did you do?

A simple approach that seems like it could work (and I’ll probably implement this in the absence of a better suggestion) is to use a message handler that, until the replaying node indicates the replay is finished, forwards events to that node rather than queuing them up locally. Then this kind of reduces to the single-node situation.
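A rough sketch of that forwarding handler, with the cluster transport abstracted behind a plain Consumer (real code would go through whatever messaging layer the cluster actually uses):

```java
import java.util.function.Consumer;

// Sketch only: replayNodeInbox stands in for the real cross-node transport.
class ForwardingHandler {
    private volatile boolean replayInProgress = true;
    private final Consumer<String> replayNodeInbox; // handler on the replaying node
    private final Consumer<String> localHandler;

    ForwardingHandler(Consumer<String> replayNodeInbox, Consumer<String> localHandler) {
        this.replayNodeInbox = replayNodeInbox;
        this.localHandler = localHandler;
    }

    void onEvent(String event) {
        // While the replay runs elsewhere, forward instead of handling locally,
        // so the replaying node sees every live event — the single-node case.
        if (replayInProgress) {
            replayNodeInbox.accept(event);
        } else {
            localHandler.accept(event);
        }
    }

    // Called when the replaying node broadcasts that it has finished.
    void onReplayFinished() {
        replayInProgress = false;
    }
}
```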

A more sophisticated, but also more complex, approach might be to take a page from Axon 3’s playbook and keep track of the last event the replay encountered before it finished. Broadcast that event’s ID and/or timestamp to the cluster when the migration is finished, and then on each remote node, something similar to the BackloggingIncomingMessageHandler could discard any events the replay already dealt with and pump the remaining backlogged events through the event handler before shifting over to normal operation.
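For that second approach, the remote-node side might look something like this. I’m assuming events carry a monotonically increasing sequence number for simplicity; real code would compare event identifiers or timestamps as described above:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch only: sequence numbers stand in for real event IDs/timestamps.
class MarkerBacklog {
    private final Queue<Long> backlog = new ArrayDeque<>();
    private final List<Long> handled = new ArrayList<>();
    private boolean live = false;

    void onEvent(long seq) {
        if (live) {
            handled.add(seq);      // normal operation
        } else {
            backlog.add(seq);      // queue while the replay runs elsewhere
        }
    }

    // Called when the replaying node broadcasts the last event it replayed.
    void onReplayFinished(long lastReplayedSeq) {
        while (!backlog.isEmpty()) {
            long seq = backlog.poll();
            if (seq > lastReplayedSeq) {
                handled.add(seq);  // pump through; older ones were already replayed
            }
        }
        live = true;               // shift over to normal operation
    }

    List<Long> handledEvents() {
        return handled;
    }
}
```

Events at or below the broadcast marker are discarded because the replay already covered them; everything after it is pumped through before flipping to realtime.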

Any other ideas I’m not seeing (other than “upgrade to Axon 3”)?


Hi Steven,

An approach similar to what Axon 3 will do is to have both nodes start the replay simultaneously. Since they each get a copy of every message during the replay, you can use a (hash) function to decide which node handles which message. They would both use a backlogging message handler that removes messages as they are received via the replay stream (whether they are for the local node or not).
However, you do need to be careful at the ‘flip-to-realtime’ moment to ensure that both handlers read the same events from the store.
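Roughly, the hash decision could look like this, assuming each node knows its own index and the cluster size, and hashing on the aggregate identifier so that all events for one aggregate stay on one node:

```java
// Sketch only: node index / cluster size wiring is assumed, not Axon API.
class HashPartitioner {
    private final int nodeIndex;
    private final int clusterSize;

    HashPartitioner(int nodeIndex, int clusterSize) {
        this.nodeIndex = nodeIndex;
        this.clusterSize = clusterSize;
    }

    // floorMod keeps the result non-negative even for negative hashCodes,
    // so exactly one node in the cluster claims each aggregate.
    boolean handlesLocally(String aggregateId) {
        return Math.floorMod(aggregateId.hashCode(), clusterSize) == nodeIndex;
    }
}
```

Hashing on the aggregate identifier (rather than, say, the event ID) preserves per-aggregate event ordering even though the replay work is split across nodes.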

Your ‘effectively reducing to single-node replay’ approach sounds safer.