Dispatching Axon Events to Non-Axon Actors via Kafka


I have a set of Axon-based Spring Boot microservices and need to dispatch events emitted from each microservice to a Node.js consumer for ML/AI purposes.

Kafka seems to be a viable solution: the services would publish their events to Kafka, which the Node application subscribes to.

How best do I go about this, given that I want to maintain Axon Server as the event store as well as the default EventBus?

Hi Kahiga,

One way to do this would be to use the Axon Framework Kafka extension; with the recent 4.6 release you can use a setup like the one described here. Version 4.6 also supports sending the events to Kafka using the Cloud Events spec.
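For reference, a minimal Spring Boot configuration for the Kafka extension might look like the sketch below. The property names assume the `axon-kafka-spring-boot-starter` defaults; the broker address and topic name are illustrative, not prescribed.

```yaml
axon:
  kafka:
    bootstrap-servers: localhost:9092   # Kafka broker(s) to connect to
    default-topic: axon-events          # topic the extension publishes events to
    publisher:
      confirmation-mode: wait_for_ack   # wait for broker acknowledgement before proceeding
```

With the starter on the classpath, this is enough to have published events forwarded to Kafka while Axon Server remains the event store and default EventBus.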

There are plenty of other ways; for example, you could configure an event processor and use Spring Kafka to send the events.

A lot depends on the format you want for the messages and, for example, on how you keep track of the event stream. Using the Axon Framework extension, the tokens are stored in Kafka, so on a restart it can continue where it left off.
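The event-processor route could be sketched like this (a rough illustration only: the class name, processing group, topic, and JSON-as-string format are all choices I'm making up here; it assumes `axon-spring` and `spring-kafka` on the classpath):

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

// A component whose handler receives every event (parameter type Object
// matches all events) and forwards it to Kafka via Spring Kafka.
@Component
@ProcessingGroup("kafka-forwarder")
public class KafkaForwardingEventHandler {

    private final KafkaTemplate<String, String> kafkaTemplate;
    private final ObjectMapper objectMapper;

    public KafkaForwardingEventHandler(KafkaTemplate<String, String> kafkaTemplate,
                                       ObjectMapper objectMapper) {
        this.kafkaTemplate = kafkaTemplate;
        this.objectMapper = objectMapper;
    }

    @EventHandler
    public void on(Object event) throws JsonProcessingException {
        // Use the event's class name as the record key so consumers can route by type
        kafkaTemplate.send("business-events",
                event.getClass().getSimpleName(),
                objectMapper.writeValueAsString(event));
    }
}
```

Because this runs on a tracking event processor, its token (stored in your token store rather than in Kafka, unlike the extension's approach) determines where forwarding resumes after a restart.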

Please let us know if you have additional questions or concerns.

Hi Gerard,

I understand that this is a late reply; however, it was apparent that I didn’t understand Kafka in the first place. I can say that I’m now knowledgeable of the framework and have successfully set it up in a production environment, connected to the broker from multiple clients (JS, Python and Java/Spring); the axon-kafka-spring-boot-starter came in handy.

Your reply set me in the right direction, and for that, I am grateful.

Kahiga Kiguru.


Now, my challenge has evolved; let me explain.

My startup’s set-up is such that I develop the core functionality using Axon-based Spring Boot microservices. My events are immutable Java classes; we have hundreds of those, and my microservices share them via a shared APIs module.

My ML team needs to ingest these events, which poses a challenge: how do we document hundreds of events and feasibly keep track of their changing definitions across both teams?

This need compelled me to switch from Axon Server to Kafka to allow a polyglot, multi-team-accessible, fast, and reliable business-event-streaming system.

The other reason for this switch was the complexity and overhead of upcasting events. Kafka offers a schema registry that works best for evolving event schema definitions, which is superior to Axon’s upcasting model. The Kafka schema registry also supports Apache Avro, which is superior to Axon’s current message serializers, Jackson and XStream.

Holixon has provided documentation and tooling to enable Avro compatibility for Axon here. It’s pretty challenging to follow, as their project is still under development; as far as I know, they are yet to support Confluent’s schema registry for publishing and upcasting schema definitions.

In conclusion:
Kafka has allowed us to stream events amongst Axon-based microservices and between Axon and non-Axon actors. However, the supported serialization tooling, Jackson and XStream, has become insufficient, as we need a distributed source of truth for our event schemas. Avro is well suited for the job; however, its general support is somewhat lacking. We need full support (if possible) for Apache Avro and Kafka schema registries, as it would unlock tremendous value in addition to what Axon already offers.

I need as much help as possible in integrating our Axon Microservices with Apache Avro + Confluent’s Kafka Schema Registry + Upcasting.

Kahiga Kiguru

I would not go as far as saying that working with schemas is superior to upcasting. Also, please note that for schemas to be backwards compatible, you need default values.

With Axon, when you have default values, you don’t need an upcaster either. You are also indeed free to use a different serializer.
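To make the default-value point concrete, here is a hypothetical Avro schema (event and field names are made up). The `default` on the added field is what keeps the new schema backwards compatible: readers using it can still decode old records that lack the field.

```json
{
  "type": "record",
  "name": "OrderShippedEvent",
  "namespace": "com.example.events",
  "fields": [
    { "name": "orderId", "type": "string" },
    { "name": "carrier", "type": "string", "default": "UNKNOWN" }
  ]
}
```

Without the `default`, the registry would reject the new version under a backward-compatibility setting, which is exactly the situation an Axon upcaster (or a Jackson default) otherwise handles for you.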

I don’t think it should be hard to wrap the Confluent Avro Serde to create an Axon Serializer. But please realize this is not a silver bullet: you now need some way to manage schemas and update them when there are changes. Yes, this can be done in such an API module, but it can be quite complicated.
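A rough, incomplete sketch of such a wrapper is below, assuming the `kafka-avro-serializer` artifact on the classpath. Everything except the Axon and Confluent type names is illustrative; error handling, revision tracking, and the serializer configuration (e.g. `schema.registry.url`) are left out.

```java
import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.axonframework.serialization.ChainingConverter;
import org.axonframework.serialization.Converter;
import org.axonframework.serialization.SerializedObject;
import org.axonframework.serialization.SerializedType;
import org.axonframework.serialization.Serializer;
import org.axonframework.serialization.SimpleSerializedObject;
import org.axonframework.serialization.SimpleSerializedType;
import org.axonframework.serialization.UnknownSerializedType;

// Sketch: adapt Confluent's Avro Serde to Axon's Serializer interface.
// Both Serde instances must be configured (schema registry URL etc.) before use.
public class ConfluentAvroSerializer implements Serializer {

    private final KafkaAvroSerializer avroSerializer = new KafkaAvroSerializer();
    private final KafkaAvroDeserializer avroDeserializer = new KafkaAvroDeserializer();
    private final String subjectTopic; // Confluent derives the registry subject from this

    public ConfluentAvroSerializer(String subjectTopic) {
        this.subjectTopic = subjectTopic;
    }

    @Override
    public <T> SerializedObject<T> serialize(Object object, Class<T> expectedRepresentation) {
        // Confluent's serializer registers/looks up the schema and prepends its id
        byte[] bytes = avroSerializer.serialize(subjectTopic, object);
        return new SimpleSerializedObject<>(expectedRepresentation.cast(bytes),
                expectedRepresentation, typeForClass(object.getClass()));
    }

    @Override
    public <T> boolean canSerializeTo(Class<T> expectedRepresentation) {
        return byte[].class.equals(expectedRepresentation); // byte[] only in this sketch
    }

    @SuppressWarnings("unchecked")
    @Override
    public <S, T> T deserialize(SerializedObject<S> serializedObject) {
        return (T) avroDeserializer.deserialize(subjectTopic, (byte[]) serializedObject.getData());
    }

    @Override
    public Class classForType(SerializedType type) {
        try {
            return Class.forName(type.getName());
        } catch (ClassNotFoundException e) {
            return UnknownSerializedType.class; // Axon's marker for unresolvable types
        }
    }

    @Override
    public SerializedType typeForClass(Class type) {
        return new SimpleSerializedType(type.getName(), null); // no revision tracking here
    }

    @Override
    public Converter getConverter() {
        return new ChainingConverter(); // default converter; sufficient for byte[]
    }
}
```

Note how the subject naming problem mentioned below shows up immediately: this sketch pins one topic for all schema lookups, which is rarely what you want across hundreds of event types.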

For example, you need to map the schemas to the topics you want to use, which, depending on the strategy chosen, can be hard. You could also auto-register schemas, but in that case you need good tests to ensure the changes are backwards compatible (or match however you configured the schema registry).
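The schema-to-topic mapping is controlled by Confluent's subject naming strategy on the producer side; something like the fragment below (the URL is a placeholder). `TopicRecordNameStrategy` registers each record type under `<topic>-<fully qualified record name>`, which suits topics carrying many event types, and disabling auto-registration forces schema changes through an explicit, testable pipeline.

```properties
schema.registry.url=http://localhost:8081
value.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicRecordNameStrategy
auto.register.schemas=false
```

The other built-in strategies, `TopicNameStrategy` (the default, one schema per topic) and `RecordNameStrategy` (one subject per record type across all topics), trade off differently between topic layout and compatibility checking.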

In the end, I’m not sure it’s worth it over using Jackson and guaranteeing not to make breaking changes. As long as you don’t remove fields, new data can still be read with ‘old’ Java POJOs. And if you add defaults, ‘old’ data can be read with the new Java POJOs.
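For the Jackson route, the key knob is tolerating unknown properties, so that a field added on the producer side doesn't break consumers still on the old POJOs. A sketch (Axon's `JacksonSerializer` may already be lenient by default; this makes the choice explicit):

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.axonframework.serialization.json.JacksonSerializer;

// Unknown JSON properties are ignored instead of failing deserialization,
// so old POJOs can read events serialized from newer, wider POJOs.
ObjectMapper objectMapper = new ObjectMapper()
        .disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);

JacksonSerializer serializer = JacksonSerializer.builder()
        .objectMapper(objectMapper)
        .build();
```

The reverse direction (new POJOs reading old events) is covered by giving added fields sensible defaults in the POJO itself, mirroring the Avro `default` mechanism.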

For ML you could also have a ‘pure’ JSON serializer that doesn’t even care about the original class. This is something I just thought of now, and it might be interesting to add to the framework.