How to correctly deploy multiple instances of the same service

Hi. I am investigating the Axon Framework (free version) and I’m having a problem with deploying multiple instances of the same service in a way that only one of the instances handles each message/event. If I simply deploy multiple instances of the same service, then all the instances process the messages, which is not what I want. I’ve tried to look into configuration on the Axon Server, specifically segments (https://docs.axoniq.io/reference-guide/operations-guide/runtime-tuning/event-processing), but I can’t seem to get it right. My questions are as follows:

  • Does the Axon framework (free version) support my required use case without needing to build something in front of my services?

  • Are there examples in the docs of how to do this? (I’ve looked, couldn’t find any, but I might have missed it.) Possibly GitHub examples?

  • What is the default algorithm for message processing (round robin)? And what are the alternatives?

Thanks in advance.

Hi Nik,

I’ll give you some additional background from Axon’s messaging perspective.
We distinguish three different (main) types of messages:

  1. Commands
  2. Events
  3. Queries

All three of these messages have different routing needs (as Allard states in this webinar too):

  • Commands are always routed directly to a single handling instance
  • Events are by definition distributed to anyone who’s interested in them
  • Queries are either directed to a single handler or to several

Having stated that, you’re asking what routing paradigm Axon uses for its “messages”.
I hope, however, that I’ve pointed out that this question is quite ambiguous given the nature of Axon Framework (and Axon Server too).
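
To make those three routing styles a bit more tangible, here is a minimal sketch of dispatching each message type through the framework’s gateways. The message classes (PlaceOrderCommand, OrderPlacedEvent, FindOrderQuery, OrderView) are hypothetical and only there to keep the example self-contained:

```java
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.eventhandling.gateway.EventGateway;
import org.axonframework.messaging.responsetypes.ResponseTypes;
import org.axonframework.queryhandling.QueryGateway;

public class MessageDispatchSketch {

    // Hypothetical message types, only here to keep the sketch compilable
    record PlaceOrderCommand(String orderId) {}
    record OrderPlacedEvent(String orderId) {}
    record FindOrderQuery(String orderId) {}
    record OrderView(String orderId) {}

    private final CommandGateway commandGateway;
    private final EventGateway eventGateway;
    private final QueryGateway queryGateway;

    public MessageDispatchSketch(CommandGateway commandGateway,
                                 EventGateway eventGateway,
                                 QueryGateway queryGateway) {
        this.commandGateway = commandGateway;
        this.eventGateway = eventGateway;
        this.queryGateway = queryGateway;
    }

    public void dispatch() {
        // Command: routed to exactly one command handler instance
        commandGateway.send(new PlaceOrderCommand("order-42"));

        // Event: published to every event handler that is interested in it
        eventGateway.publish(new OrderPlacedEvent("order-42"));

        // Query (point-to-point): answered by a single query handler
        queryGateway.query(new FindOrderQuery("order-42"),
                           ResponseTypes.instanceOf(OrderView.class));
    }
}
```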

Regardless, my assumption is that you’re talking about Events, as the notions of Events and Messages are frequently intermixed in our industry.
Based on that, I can provide you with the following background.

Events in Axon are handled in ‘Event Handlers’.

The mechanics of providing an Event Message to an Event Handler are dealt with in Axon by an Event Processor.
The framework provides two flavors of Event Processors: the SubscribingEventProcessor and the TrackingEventProcessor.
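
As a small, hypothetical illustration (the class, group name and event type below are made up): an Event Handler is just an annotated method, and the optional @ProcessingGroup annotation decides which Event Processor it is assigned to.

```java
import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;

// Hypothetical projection; the processing group name "order-projection"
// determines which Event Processor will invoke this handler.
@ProcessingGroup("order-projection")
public class OrderProjection {

    // Hypothetical event type, only here to keep the sketch self-contained
    record OrderPlacedEvent(String orderId) {}

    @EventHandler
    public void on(OrderPlacedEvent event) {
        // Update a read model, send a notification, etc.
    }
}
```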

The former will only handle events which are published on the local Event Bus.
Thus, by definition, it stays within its own JVM, which rules out this being the problem you’re facing.

A TrackingEventProcessor, on the other hand, tracks the Event Store of its own accord, in a separate thread.
This gives it several benefits, like segmenting the work over several threads.
Additionally, as it keeps track of the point in the Event Stream it has reached, you are able to reset that knowledge (read: this is what’s called a replay or reset).
It keeps track of this by storing a TrackingToken in a TokenStore.
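
If you want to influence that segmenting yourself, you can do so through the EventProcessingConfigurer. A minimal sketch, assuming a Spring Boot application and the hypothetical processing group "order-projection" from above, splitting its token into two segments:

```java
import org.axonframework.config.EventProcessingConfigurer;
import org.axonframework.eventhandling.TrackingEventProcessorConfiguration;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TrackingProcessorConfig {

    @Autowired
    public void configure(EventProcessingConfigurer configurer) {
        // Register a TrackingEventProcessor for the (hypothetical) "order-projection"
        // group, reading from the Event Store and splitting its token into two
        // segments so that two threads/nodes can each claim a part of the work.
        configurer.registerTrackingEventProcessor(
                "order-projection",
                org.axonframework.config.Configuration::eventStore,
                conf -> TrackingEventProcessorConfiguration
                        .forParallelProcessing(2)
                        .andInitialSegmentsCount(2));
    }
}
```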

Lastly, and importantly, a TrackingEventProcessor is required to have a claim on a TrackingToken to be able to handle events from the Event Store.
Without this, it would not be able to delegate the work between several threads/nodes containing the same TrackingEventProcessor.

Now, what I think is amiss in your distributed set-up is that you have a relatively simple application using TrackingEventProcessors (as this is the default in the framework).
Within a single instance of the application everything works as expected, but as soon as you fire up a second node, both applications start handling all the events from the store.

Based on this, I guess that you have not specified persistent storage in your application to keep the TrackingTokens in.
Without this, the duplicated application has no way to delegate the work between either of the two nodes you’ve set up.

Hence, I’d suggest checking whether you have set up a means to store tokens in.
Axon Framework provides three flavors of TokenStore for you:

  • JpaTokenStore
  • JdbcTokenStore
  • MongoTokenStore
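
For example, a shared JPA-backed TokenStore could be registered roughly like this. A minimal sketch, assuming a Spring Boot application with axon-spring-boot-starter and JPA on the classpath (in which case Axon’s auto-configuration will typically register a JpaTokenStore for you already, as long as a persistent database is configured):

```java
import org.axonframework.common.jpa.EntityManagerProvider;
import org.axonframework.eventhandling.tokenstore.TokenStore;
import org.axonframework.eventhandling.tokenstore.jpa.JpaTokenStore;
import org.axonframework.serialization.Serializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TokenStoreConfig {

    // A TokenStore backed by the shared database: every node stores and claims
    // its TrackingTokens here, so the nodes can divide the event stream between
    // them instead of each handling every event.
    @Bean
    public TokenStore tokenStore(EntityManagerProvider entityManagerProvider,
                                 Serializer serializer) {
        return JpaTokenStore.builder()
                .entityManagerProvider(entityManagerProvider)
                .serializer(serializer)
                .build();
    }
}
```

With all nodes pointing at the same database for their tokens, only the node that currently holds the claim on a given segment will process the events for that segment.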
I hope this resolves the problem at hand, Nik, but also provides you with some extra background on Axon’s perspective on messaging and handling events!

Cheers,
Steven