Performance tuning on high-volume events

Hi,

We are running load tests on an Axon application and found a performance issue. While debugging, we identified the suspected place in our code.

There is a Spring Boot Kafka listener that reads events from a Kafka topic and sends them to the command handler through the command gateway:

    @KafkaListener(
            topics = topic,
            groupId = ID,
            containerFactory = Constants.LISTENER_CONTAINER_FACTORY)
    public void fromKafka(final ConsumerRecord<byte[], byte[]> records,
                          @Header(KafkaHeaders.RECEIVED_PARTITION_ID) final String partitionID) {
        Optional<String> topic = Optional.ofNullable(records.topic());
        topic.ifPresent(
                s -> LOGGER.info("Read from the topic: {}, with Partition: {}",
                        s, partitionID));

        try {
            // ...
            commandGateway.send(new MakeClassificationCmd(UUID, tradeAd));
            // ...
        } catch (final Exception e) {
            // ...
        }
    }

There is a command handler that subscribes to this command on the command bus:

    @CommandHandler
    public ClassificationAd handle(final MakeClassificationCmd cmd) {
        // ...
    }

We are sending a high volume of events to the Kafka topic. In the current implementation, each command is picked up from the command bus and processed sequentially. This makes it challenging to achieve the required performance.

Configuration details:
Event store - MongoDB
Command bus - SimpleCommandBus
All commands are sent to the same aggregate identifier.

Can you please advise how we can achieve the required performance in a scenario where all commands target the same aggregate instance and command handler?

Thanks and Regards,
Kundan Kumar

Mongo is not ideal as an event store, but you might get higher performance by caching the aggregates. A cache can be added by adding an annotation and a bean. The example below is from the Giftcard demo, which uses Axon Server, but this should also work with Mongo.
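A minimal sketch of what the demo does (the giftCardCache bean name and GiftCard aggregate are taken from that demo, and Axon 4's @Aggregate cache attribute is assumed; adapt the names to your own aggregate):

    import org.axonframework.common.caching.Cache;
    import org.axonframework.common.caching.WeakReferenceCache;
    import org.axonframework.spring.stereotype.Aggregate;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    class CacheConfig {

        // The cache bean; WeakReferenceCache keeps aggregate instances in
        // memory for as long as they are strongly referenced, so repeated
        // commands to the same aggregate skip the full event replay.
        @Bean
        public Cache giftCardCache() {
            return new WeakReferenceCache();
        }
    }

    // Reference the cache bean by name on the aggregate, so its repository
    // is created as a caching repository.
    @Aggregate(cache = "giftCardCache")
    class GiftCard {
        // command handlers and event sourcing handlers as usual
    }

Note that a WeakReferenceCache can be garbage collected under memory pressure, so a cold aggregate is still loaded from the event store.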

Hi Gerard,

Tried caching the aggregate by following the example, but it is creating issues. The aggregate has event sourcing handlers and keeps some state. For each command, the command handler generates events and stores them in the event store:

    @CommandHandler
    public ClassificationAd handle(final MakeClassificationCmd cmd) {
        // ...
        AggregateLifecycle.apply(new Event(id, payload));
    }

But after caching the aggregate, events are getting lost from the event store, which yields inconsistent results.

What happens when adding a WeakReferenceCache to an aggregate?

What do you mean by events getting lost? Did you get any errors from the MongoDB client? You can read more about the cache in the javadocs.