Event handlers that translate to Kafka Avro events are actually what we do for “milestone” events, which are consumed by other applications that care about our aggregate (forgot about those handlers, heh). The schemas for those, in our case, aren’t our Axon events. Rather, we collect all the events into a larger “Aggregate Changed Event” with the whole aggregate in the payload each time. Our publisher sends one changed event per transaction; that way consumers don’t have to do any reconstruction on their end, they can just take what they want from the event.
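A minimal sketch of that pattern in plain Java. Everything here is illustrative: `OrderAggregateState`, `AggregateChangedEvent`, and `MilestonePublisher` are made-up names, and the `Consumer` sink stands in for a real Kafka producer; in our setup the buffering happens inside an Axon event handler.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical milestone payload: the whole aggregate state, not the individual events.
record OrderAggregateState(String orderId, String status, List<String> lineItems) {}

// The single "Aggregate Changed Event" that downstream apps consume.
record AggregateChangedEvent(String aggregateId, OrderAggregateState payload) {}

// Buffers fine-grained domain events during a transaction and emits ONE
// milestone event at commit time, carrying the full current aggregate state.
class MilestonePublisher {
    private final List<Object> buffered = new ArrayList<>();
    private final Consumer<AggregateChangedEvent> sink; // stand-in for a Kafka producer

    MilestonePublisher(Consumer<AggregateChangedEvent> sink) {
        this.sink = sink;
    }

    // Called once per fine-grained event within the transaction.
    void on(Object domainEvent) {
        buffered.add(domainEvent);
    }

    // At end of transaction: publish the whole current state once, then reset.
    void commit(OrderAggregateState currentState) {
        if (!buffered.isEmpty()) {
            sink.accept(new AggregateChangedEvent(currentState.orderId(), currentState));
            buffered.clear();
        }
    }
}
```

The point is the shape: however many events fire in one transaction, consumers see exactly one message with the latest state, so they never have to replay or merge anything themselves.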
You might consider exactly what downstream consumers require from your app; one or more milestone events might be useful to you (unless they really do want/need every tiny update). That way other apps aren’t so tightly coupled to the inner workings of your Axon structure, and you’d have less coordination to do if you need to change your aggregate’s events. Also consider how potential event replays will interact with consumers. You can always have the handler ignore events while replaying if you need to. In addition, a milestone event removes the need to maintain many tiny topics and makes it easier for consumers to ingest updates: they don’t have to know every single event name+topic combo and figure out what it all means to them.
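On the replay point: Axon lets you skip handlers during replays (it has an `@DisallowReplay` annotation and can inject a `ReplayStatus` handler parameter for this). A standalone sketch of the idea, with the replay flag passed in explicitly and `ReplayAwareForwarder` being a made-up name, not an Axon class:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: suppress external side effects (Kafka publishes, emails, etc.) during a
// replay. In Axon you'd get the flag from @DisallowReplay or a ReplayStatus
// parameter; here it's an explicit argument to keep the example self-contained.
class ReplayAwareForwarder {
    private final List<String> published = new ArrayList<>();

    void handle(String event, boolean isReplay) {
        if (isReplay) {
            return; // rebuild internal projections silently; don't re-notify consumers
        }
        published.add(event); // stand-in for producing to a Kafka topic
    }

    List<String> published() {
        return published;
    }
}
```

Without a guard like this, replaying your event store would re-send every historical milestone to downstream apps, which usually isn’t what they want.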
I’m not familiar with Axon+Kafka as an event bus, so I can’t offer help there, but it does make sense that it would only publish to one topic: Kafka only guarantees ordering within a single partition, so spreading an aggregate’s events across topics would lose any guaranteed event order.
Out of curiosity, what’s the event store in your situation?