Multiple instances of the same Microservice


I am not very clear about how the following scenario works in Axon.

I have two instances of the same Microservices running behind a Spring Cloud API Gateway. When an HTTP request is sent to a web service endpoint, Spring Cloud API Gateway routes that request either to instance 1 or instance 2. So HTTP requests are balanced between two running instances.

When a microservice handles an HTTP request, it dispatches a Command, and the Command results in an Event being raised. Even though there are 2 instances of the same microservice running, it is always the same single instance that handles the Event… I am very happy that the Event is handled by only one instance; I do not want an Event to be processed by all running instances. But why is it always handled by the same instance of the microservice? Why is another instance never used? Even if I run 3 instances of the same microservice and the Command is dispatched by different instances (because of the load balancer), the Event is always handled by the same instance…

Axon uses a so-called Event Processor to process all the events flowing through your system. It is this component that will invoke the @EventHandler annotated functions you write.

There are currently two implementations (with a third underway), called the SubscribingEventProcessor and the TrackingEventProcessor. The former only handles events published inside the same JVM, as it is subscribed to the local EventBus. The TrackingEventProcessor (TEP), on the other hand, tracks its own progress and polls events from the EventStore. The TEP is the default Event Processor in your application.

It is the TEP that, in this case, ensures your events are handled only once. It does so through its "keeping track" mechanism: the TEP stores a TrackingToken in the token_entry table. This TrackingToken records the position of the last event in the event stream it has handled. Furthermore, a TEP thread can only handle events if it holds a claim on such a TrackingToken.
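Conceptually, the claim mechanism can be sketched like this (plain-Java illustration only; `InMemoryTokenClaims` is a made-up name and this is not Axon's actual TokenStore API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the claim idea behind a token store:
// only the node holding the claim on a processor's token may handle events.
public class InMemoryTokenClaims {

    // processor name -> id of the node currently holding the claim
    private final Map<String, String> claims = new ConcurrentHashMap<>();

    // Returns true only for the node that gets (or already holds) the claim;
    // every other node is refused until the claim is released.
    public boolean tryClaim(String processorName, String nodeId) {
        return nodeId.equals(claims.computeIfAbsent(processorName, p -> nodeId));
    }

    // Released when the owning node shuts down (or, in Axon, when its claim expires).
    public void release(String processorName, String nodeId) {
        claims.remove(processorName, nodeId);
    }
}
```

With a single token and two nodes, the node that claims it first keeps it, so the other node's event handler threads stay idle. That is exactly the behaviour described above.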

By default, only a single TrackingToken is created, regardless of the number of TEP instances you are running. Duplicating your application, and thus the TEPs and the threads they use, therefore has zero impact on the event handling: only the instance holding the claim will process events.

If you want to parallelize the work, you have to split the TrackingToken into several segments. This blog goes into great depth on how this works and what it all entails.
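If you are on Spring Boot, the initial number of segments (and the threads available to claim them) can be set declaratively. A hedged sketch, where `my-processor` is a placeholder for your own processing group name:

```properties
# Split the processor's token into 2 segments so 2 nodes can share the load.
axon.eventhandling.processors.my-processor.mode=tracking
axon.eventhandling.processors.my-processor.initial-segment-count=2
# Each node needs threads to claim segments; one thread claims one segment.
axon.eventhandling.processors.my-processor.thread-count=2
```

Note that the initial segment count is only applied when the token is first created; splitting an already-existing token requires the split/merge operations described in the blog.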

Hope this clarifies things for you @interested-dev!

@Steven_van_Beelen out of curiosity, what is the "third" type which AxonIQ is planning on bringing? 🙂

@interested-dev your question states you are happy the event is handled only by one instance. @Steven_van_Beelen has explained why that is the case (because of the TEP).

There is also the question of why it is always the same instance of a microservice handling the command/event. I think this is due to the routing policy based on the @TargetAggregateIdentifier (someone please correct me if I am wrong).

I think you would benefit from reading this specific section of the reference guide.

In particular if you search for the bit starting “Two commands with the same routing key will always be routed to the same segment, as long as there is no change in the number and configuration of the segments. Generally, the identifier of the targeted aggregate is used as a routing key.” and then read the rest of that section.

Essentially, commands with the same @TargetAggregateIdentifier will, by default, always be routed to the same instance because of the default routing policy. The reference docs also describe how you can change that default.
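The effect of such a routing strategy can be sketched in plain Java (illustrative only; this is not Axon's actual consistent-hashing implementation, and `segmentFor` is a made-up helper):

```java
public class RoutingSketch {

    // A command's routing key (by default the value of its
    // @TargetAggregateIdentifier field) is hashed onto a fixed number of
    // segments. The same key always lands on the same segment, hence on
    // the same command-handling instance.
    public static int segmentFor(String routingKey, int segmentCount) {
        return Math.floorMod(routingKey.hashCode(), segmentCount);
    }
}
```

As long as the number and configuration of segments do not change, `segmentFor("order-42", 2)` yields the same segment on every call and on every node, which is why the same instance keeps handling commands for a given aggregate.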

@Steven_van_Beelen, @vab2048 thank you very much for your responses! I do understand your explanation, but it is still not 100% clear to me… I will need to read the blog posts you have shared with me and try to understand it.

The latest episode of the “Exploring Axon” podcast answers your question.


And otherwise, there’s a PR to be found on the matter of this new PooledTrackingEventProcessor. For those interested, you can check up on it here.
