Axon Cluster - Subscribe/Unsubscribe

Use case:

  • We populate 2 different datasources based on an event.
  • So we publish the event to a TopicExchange, which in turn pushes it to 2 queues (a possible Spring AMQP declaration is sketched below).
  • We want only 1 of the datasources to be populated (updated) at a time (i.e. while one is being populated, the other event queue piles up). This keeps switching periodically.
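
For context, a minimal Spring AMQP declaration of that setup could look like the sketch below; the exchange name, queue names, and routing key are placeholders rather than our actual configuration.

```java
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EventRoutingConfig {

    // one topic exchange fanning the event out to two queues, one per datasource
    @Bean
    public TopicExchange eventExchange() {
        return new TopicExchange("events.exchange");
    }

    @Bean
    public Queue datasource1Queue() {
        return new Queue("events.datasource1");
    }

    @Bean
    public Queue datasource2Queue() {
        return new Queue("events.datasource2");
    }

    @Bean
    public Binding datasource1Binding() {
        return BindingBuilder.bind(datasource1Queue()).to(eventExchange()).with("#");
    }

    @Bean
    public Binding datasource2Binding() {
        return BindingBuilder.bind(datasource2Queue()).to(eventExchange()).with("#");
    }
}
```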

Approach 1:

  • Dynamically create and delete Axon clusters.
  • This way the cluster's AMQP meta-data queueName config can be switched between the 2 RabbitMQ queues, so only one queue gets drained at a time.
  • Is this feasible? If so, are there any code pointers to get started?

Approach 2:

  • Have a static cluster and switch between the 2 sets of event handlers (a possible switch is sketched below).
  • The overhead is that 2 sets of event handlers need to be created (the logic is the same except for the target datasource).
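
For what it's worth, if Approach 2 were taken, the runtime switch itself could look roughly like the sketch below. The handler sets and the switching trigger are placeholders; the only Axon calls involved are Cluster.subscribe/unsubscribe (annotation-based handlers would first need to be wrapped, e.g. in an AnnotationEventListenerAdapter).

```java
import org.axonframework.eventhandling.Cluster;
import org.axonframework.eventhandling.EventListener;

public class DatasourceHandlerSwitcher {

    private final Cluster cluster;
    private final EventListener datasource1Handlers; // handler set writing to datasource 1
    private final EventListener datasource2Handlers; // handler set writing to datasource 2

    public DatasourceHandlerSwitcher(Cluster cluster,
                                     EventListener datasource1Handlers,
                                     EventListener datasource2Handlers) {
        this.cluster = cluster;
        this.datasource1Handlers = datasource1Handlers;
        this.datasource2Handlers = datasource2Handlers;
    }

    // unsubscribe one handler set and subscribe the other
    public void switchToDatasource1() {
        cluster.unsubscribe(datasource2Handlers);
        cluster.subscribe(datasource1Handlers);
    }

    public void switchToDatasource2() {
        cluster.unsubscribe(datasource1Handlers);
        cluster.subscribe(datasource2Handlers);
    }
}
```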

Maybe this is a peculiar scenario and not clear on a first read, but we have a clear case for it.

Hi,

I would investigate the possibility of pausing the incoming messages instead. Then you could pause one and unpause the other, alternating the queues you read from.
Deleting clusters is not something the ClusteringEventBus is currently designed for. I'm not sure what kind of side effects you might get.
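
For illustration only (not something the event bus offers out of the box): at the Spring AMQP level, pausing would roughly amount to stopping one queue's listener container and starting the other, along these lines; how you obtain the container references depends on your configuration.

```java
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

public class QueueConsumptionSwitcher {

    // stop consuming from one queue (messages pile up on the broker)
    // and resume consuming from the other
    public void switchConsumption(SimpleMessageListenerContainer containerToPause,
                                  SimpleMessageListenerContainer containerToResume) {
        containerToPause.stop();
        containerToResume.start();
    }
}
```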

Cheers,

Allard

Thanks Allard. Assuming I'm going to start/stop message consumption on an event, how do I get a handle to the Spring AMQP message listener to issue the commands? I.e. how do I access the listener for the named queue (or does it require extending the ExtendedMessageListenerContainer)?

And is there a similar mechanism to accomplish this for the SimpleEventBus (non-clustered)? We have AMQP as a configurable option, i.e. simple vs. AMQP.

Note: we are on Axon 2.3.2.

Thanks,
Jebu.

I deleted the previous post as it was partial.

The steps below are working. Can you highlight any impact or potential flaws with this approach?

Step 1: A new control event triggers the switch between Queue 1 and Queue 2 (it passes in the queue name the cluster should listen to).

Step 2: In the event handler for the control event, I shut down 'queueToBeStopConsuming' and start 'queueToStartConsuming'.

Since the current implementation of ListenerContainerLifecycleManager doesn't expose methods to start/stop individual consumers, I have injected a custom listener that extends the current implementation.
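
As a rough illustration of steps 1 and 2 together: the control event and the method names on the custom lifecycle manager below are our own placeholders (only the @EventHandler annotation is Axon API), and the stop/start bodies are the snippets that follow.

```java
import org.axonframework.eventhandling.annotation.EventHandler;

public class QueueSwitchEventHandler {

    // our extension of ListenerContainerLifecycleManager (bodies shown below)
    private final CustomListenerContainerLifecycleManager lifecycleManager;

    public QueueSwitchEventHandler(CustomListenerContainerLifecycleManager lifecycleManager) {
        this.lifecycleManager = lifecycleManager;
    }

    @EventHandler
    public void on(SwitchQueueConsumptionEvent event) {
        // stop draining one queue (it starts piling up) and start draining the other
        lifecycleManager.stopContainer(event.getQueueToBeStopConsuming());
        lifecycleManager.startContainer(event.getQueueToStartConsuming());
    }
}
```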

Code-snippet:

Shutdown Logic:

```java
SimpleMessageListenerContainer listenerContainer = listenerContainerMap.get(queueName);
if (null != listenerContainer) {
    // deregister the container, then stop its consumers
    listenerContainerMap.remove(queueName);
    listenerContainer.stop();
    log.info("Stopped queue {} - consumer count {}", queueName, listenerContainer.getActiveConsumerCount());
}
```

Startup Logic (a stripped-down version of registerForCluster, barring all the checks):

```java
SpringAMQPConsumerConfiguration amqpConfig = SpringAMQPConsumerConfiguration.wrap(config);
Map<String, SimpleMessageListenerContainer> containerPerQueue = getContainerMap();
String queueName = amqpConfig.getQueueName();

// create and register a new container for the queue, then start consuming
SimpleMessageListenerContainer newContainer = createContainer(amqpConfig);
newContainer.setQueueNames(queueName);
newContainer.setMessageListener(new ClusterMessageListener(cluster, messageConverter));
containerPerQueue.put(queueName, newContainer);
newContainer.start();
log.info("Container started for queue {}", config.getQueueName());
```

Open issues (with this approach):

  • Auto-delete queues (required for our use case) are removed when we fire cluster.stop(), as the last consumer gets shut down. Need to explore an option to delete the queues only on server shutdown.
  • Right now I'm using reflection to work with 'containerMap', which is a private field (roughly as sketched below). An accessor method in the ListenerContainerLifecycleManager would avoid this hack.
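
For reference, the reflection hack mentioned above looks roughly like this; it assumes the private field is actually named 'containerMap' in the Axon version in use, so verify against the source.

```java
import java.lang.reflect.Field;
import java.util.Map;

import org.axonframework.eventhandling.amqp.spring.ListenerContainerLifecycleManager;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

// inside the custom ListenerContainerLifecycleManager extension
@SuppressWarnings("unchecked")
private Map<String, SimpleMessageListenerContainer> getContainerMap() {
    try {
        // field name is an assumption; check the actual Axon source for your version
        Field field = ListenerContainerLifecycleManager.class.getDeclaredField("containerMap");
        field.setAccessible(true);
        return (Map<String, SimpleMessageListenerContainer>) field.get(this);
    } catch (NoSuchFieldException | IllegalAccessException e) {
        throw new IllegalStateException("Cannot access the container map via reflection", e);
    }
}
```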

Thanks,
Jebu.

I don’t see any obvious flaws, but I don’t know the exact effects of this approach on Spring’s MessageListenerContainer.

To remove queues on server shutdown (assuming server=broker), simply use a non-durable queue.
You might also want to explore the possibility of pausing a message listener container, instead of stopping, removing and starting it. Not sure if it's possible, but you never know.
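
Regarding the non-durable queue suggestion: with Spring AMQP that could be declared roughly as follows (the queue name is a placeholder).

```java
import org.springframework.amqp.core.Queue;

// durable = false: the broker drops the queue when it shuts down/restarts;
// autoDelete = false: the queue is NOT removed when its last consumer stops,
// so stopping the cluster's container no longer deletes it.
Queue eventQueue = new Queue("events.datasource1", false, false, false);
```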

Cheers,

Allard