Distributed Command Bus and Sagas

I’m trying to use a distributed command bus (implemented using Spring Cloud) to allow horizontal scaling in AWS.

My question revolves around my understanding that Sagas cannot currently be distributed, and what that means when creating several nodes:

  • should I make sure that every Saga has only one instance? Possibly by creating a separate node for Sagas and having a cluster only for commands?
  • if I’m using a tracking event processor, would only a single instance of a Saga see each event, so that it would be safe to run multiple instances of the same Saga?

Any help would be appreciated.


Hi Alex,

At the moment, there are two ways of running Sagas. In Subscribing mode (the default), Sagas are only triggered by events that were published on the machine they are running on. In Tracking mode, only a single instance will be processing events for a specific type of Saga at any given time. When that node stops, another node can (automatically) take over.
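To make the Tracking mode concrete, here is a minimal configuration sketch assuming Axon 3’s `SagaConfiguration` API; `OrderSaga` is a hypothetical saga class used purely for illustration:

```java
import org.axonframework.config.Configurer;
import org.axonframework.config.DefaultConfigurer;
import org.axonframework.config.SagaConfiguration;

public class SagaSetup {

    public static Configurer configure() {
        // Register the (hypothetical) OrderSaga with a *tracking* saga manager.
        // The tracking processor keeps its own token in the token store, which
        // is what ensures only one node processes events for this Saga type
        // at a time; a second node with the same configuration acts as standby.
        return DefaultConfigurer.defaultConfiguration()
                .registerModule(SagaConfiguration.trackingSagaManager(OrderSaga.class));
    }
}
```

With both nodes sharing the same token store (e.g. backed by the same database), failover to the standby node happens automatically when the token claim of the active node expires.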

An exception is when Sagas run in Subscribing mode, but use a source other than the EventBus/Store. For example, when using an AMQP broker as the source, both instances will effectively run in competing-consumer mode. This is not really recommended, as event order can no longer be guaranteed, and Sagas may miss events that way. In such a case, make sure the SpringAMQPMessageSource connects to your AMQP broker using an ‘exclusive’ connection.
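As a sketch of that exclusive connection, assuming Spring AMQP together with Axon’s `SpringAMQPMessageSource` (queue name and bean names here are illustrative, not from the original thread):

```java
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SagaAmqpConfig {

    @Bean
    public SimpleMessageListenerContainer sagaEventContainer(
            ConnectionFactory connectionFactory,
            // Axon's AMQP message source, wired elsewhere as a bean
            org.axonframework.amqp.eventhandling.spring.SpringAMQPMessageSource messageSource) {

        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueueNames("saga-events"); // illustrative queue name

        // Exclusive consumer: only one node holds the queue at a time, so the
        // Sagas never run as competing consumers; the other node keeps retrying
        // and takes over when the exclusive consumer disconnects.
        container.setExclusive(true);

        container.setMessageListener(messageSource);
        return container;
    }
}
```

The `setExclusive(true)` call on the listener container is what prevents the two instances from splitting the event stream between them, preserving event order for the Saga.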

We are currently working on a solution to process messages in parallel, while retaining necessary ordering guarantees.

Hope this helps.


Hello Allard,

Thanks for your reply, it does help :). Since I’m running Sagas in tracking mode, I was considering creating nodes that host both Sagas and command handlers, since that would spare me from handling the case where all command nodes are unreachable for a Saga. I wasn’t completely sure whether running multiple Saga instances in tracking mode would be safe. I understand from your reply that it should not be a problem.

Thank you,