Enforcing that domain events don't change, at compile time

Has anyone come up with a way to enforce that developers do not change event POJOs at build time (a Maven plugin would be ideal)?

Just curious what people have tried (and would possibly not try again :-).


Hi James,

what about simply making all the fields final? I tell my teams to do that, but we don’t use a tool to enforce it.

Or do you mean something else?



Something else. Let's say I’ve created an event, committed it, released and deployed my system into production. Now another developer comes along and, as part of a refactoring, accidentally makes changes to my event, rendering it incompatible with the version in production, then commits it.

We could definitely write some tooling to catch this fairly easily, but before we do, I wanted to see if anyone else had done something similar.

Does that make sense?


The last time I worked on a system at scale which relied on serialization and deserialization across versions, we added tests to our test suite to verify that old data would still deserialize. That implicitly checks the schema.



the test cases generally work very well. Additionally, we generally do a pre-deploy sanity check. We simply read all the past events from the event store. If you can read them all, all is fine. Otherwise, you know you’ll get in trouble if you deploy it.
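A store-agnostic sketch of such a sanity check: feed every stored payload through the deserializer and collect the failures. Everything here (the payload strings, the toy deserializer) is hypothetical; in a real Axon setup you would iterate the actual event store and use the configured serializer instead.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class PreDeploySanityCheck {

    // Attempts to deserialize every stored payload; returns the payloads that
    // failed, mapped to the error message. An empty result means it is safe to deploy.
    public static Map<String, String> findUnreadableEvents(
            List<String> storedPayloads, Function<String, Object> deserializer) {
        Map<String, String> failures = new LinkedHashMap<>();
        for (String payload : storedPayloads) {
            try {
                deserializer.apply(payload);
            } catch (RuntimeException e) {
                failures.put(payload, e.getMessage());
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        // Toy "deserializer" that only understands payloads starting with "v2:".
        Function<String, Object> deserializer = p -> {
            if (!p.startsWith("v2:")) {
                throw new IllegalArgumentException("unknown format: " + p);
            }
            return p.substring(3);
        };
        List<String> store = Arrays.asList("v2:OrderPlaced", "v1:OrderCancelled");
        // The v1 payload is reported as unreadable, so the deploy would be blocked.
        System.out.println(findUnreadableEvents(store, deserializer).keySet());
    }
}
```

Running this as a gating step in the pipeline turns "you'll get in trouble if you deploy" into a failed build instead of a production incident.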




We are studying CQRS for a new project and especially the Axon Framework.

We don’t want to rely on developers to create a test for each new event. Sooner or later, one of them - perhaps me - will forget, and that will lead to serious issues far in the future.
Nor can we replay our event stores to verify compatibility with past events, because some event stores will be huge and because hundreds of application instances - each with its own event store - will be running in production.

It seems to me that the most efficient approach in our case is to write a Maven plugin that checks events' backward compatibility against the previous release.
Or can anyone think of a better solution?



Maybe a look at traditional tools for database migration helps.

If you take Liquibase (a tool that keeps database DDL changes as code in the version control system, with a binary/Maven plugin to apply those changes to databases), it stores an MD5 hash for every changeset file in a special Liquibase metadata database table.

When Liquibase is executed against a DB, it checks that the MD5 hashes of older changeset files have not changed. It simply stops if a developer accidentally modified an already applied changeset.

Coming to events, one could write a script which stores the MD5 hashes somewhere (could be another git/svn repo, a database, or even in the event source file itself). Then you have another script which is executed automatically (Jenkins/Hudson/TeamCity/pre-push hook/...) and checks the MD5 hashes.
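A minimal, JDK-only sketch of the hashing idea: fingerprint an event class's structure (field names and types) with MD5, so a build script can compare the result against the hash recorded for the previous release. The `OrderPlacedEvent` class is a hypothetical example; hashing the event source file itself, Liquibase-style, would work just as well.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;
import java.util.Comparator;

public class EventFingerprint {

    // Builds a stable textual description of an event class: its name plus each
    // non-static field's type and name, sorted so the result is deterministic.
    public static String describe(Class<?> eventClass) {
        StringBuilder sb = new StringBuilder(eventClass.getName());
        Field[] fields = eventClass.getDeclaredFields();
        Arrays.sort(fields, Comparator.comparing(Field::getName));
        for (Field f : fields) {
            if (Modifier.isStatic(f.getModifiers())) continue;
            sb.append('|').append(f.getType().getName()).append(' ').append(f.getName());
        }
        return sb.toString();
    }

    // Hex-encoded MD5 of the description; this is what a build script would
    // store per release and compare on the next build.
    public static String fingerprint(Class<?> eventClass) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(describe(eventClass).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Hypothetical event, used only for the demo.
    public static class OrderPlacedEvent {
        String orderId;
        int amount;
    }

    public static void main(String[] args) {
        System.out.println(describe(OrderPlacedEvent.class));
        System.out.println(fingerprint(OrderPlacedEvent.class));
    }
}
```

Renaming, adding, or retyping a field changes the fingerprint, so the check fails the build; a deliberate, upcaster-backed change would then update the stored hash.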

A completely different alternative is a schema storage like the Confluent Schema Registry for Kafka (https://github.com/confluentinc/schema-registry). Your events are your schema, and changes to them could be tracked in such a system. Don't get me wrong, the Avro-based registry itself is probably not for you, but the concept could be.

In general, what I miss in my own projects is an automated solution for event schema documentation and automatic schema version checks. If you look at the HTTP API/REST world, they have Swagger and other solutions which try to address reflective access to user-defined APIs/schemas.

I may be drifting off topic a bit, but I wanted to share my thoughts.



Hi Réda,

For my projects, we’ve come up with this approach for testing the serialization of events (and commands)

  • A “[Aggregate]EventSerializationTest” per aggregate

  • For each FooEvent there’s a @Test testFooEvent(){} test case

  • The test has been simplified to a single line of code: testSerialization(new FooEvent("a", "b", …));

  • The testSerialization method performs a number of validations, using the object and the JUnit test metadata, e.g. the current test method name

  • Serialize the event using Jackson (our choice for the event serializer)

  • Deserialize the event using Jackson, to confirm the deserialization annotations

  • Compare the old and new files to each other, using XStream – a framework that can xml-serialize virtually any object, which gives us an independent reflection-based confirmation that the two objects are the same

  • Compare the serialized JSON to a file on disk (e.g. src/test/resources/events/aggregate/FooEvent.json)

  • The file name is automatically discovered based on the test method name

  • The JSON is reformatted for comparison to make it readable, e.g. Jackson’s “withDefaultPrettyPrint…” configuration

  • A final test verifies that every non-abstract event has a serialization test

  • It uses component scanning (spring) to find all classes in a specified package matching a name (".*Event") and not abstract

  • It uses reflection on the current test to discover all @Test methods, and uses the naming convention to decide which event is covered by the test

  • It does a final comparison of the found events and the discovered tests. If there are events without tests, it fails the test.

This gives us pretty robust tooling:

  • Warnings when new events are added, but the serialization has not been documented yet

  • Warnings when events (or value objects contained within an event) are changed in ways affecting the serialization

  • Example JSON for all of our events (and commands), which is a great reference when writing e.g. upcasters (for events) or developing a JavaScript UI (commands)

Developers know that if a serialization test breaks, an upcaster is going to be required.
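The “every event has a test” safety net from the last bullets can be sketched with plain reflection. The event and test classes below are hypothetical stand-ins for what component scanning would find in a real project:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class EventTestCoverageCheck {

    // Hypothetical events; in a real project these would be discovered by
    // classpath scanning for non-abstract classes matching ".*Event".
    public static class OrderPlacedEvent {}
    public static class OrderCancelledEvent {}

    // Hypothetical serialization test class following the naming convention
    // of one testFooEvent() method per FooEvent.
    public static class OrderEventSerializationTest {
        public void testOrderPlacedEvent() {}
        // note: no testOrderCancelledEvent() - should be reported as missing
    }

    // Returns the simple names of events that have no matching test method.
    public static List<String> untestedEvents(List<Class<?>> events, Class<?> testClass) {
        Set<String> testNames = new HashSet<>();
        for (Method m : testClass.getDeclaredMethods()) {
            testNames.add(m.getName());
        }
        List<String> missing = new ArrayList<>();
        for (Class<?> event : events) {
            if (!testNames.contains("test" + event.getSimpleName())) {
                missing.add(event.getSimpleName());
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        List<Class<?>> events =
                Arrays.asList(OrderPlacedEvent.class, OrderCancelledEvent.class);
        // Reports OrderCancelledEvent, because no test method covers it.
        System.out.println(untestedEvents(events, OrderEventSerializationTest.class));
    }
}
```

Wiring this into a test that fails when the list is non-empty is what produces the “warning when new events are added but not yet documented” behaviour described above.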

As others have mentioned, after writing upcasters we also like to perform a “dummy” replay using domain events from a database written by previous app versions and the new upcasters/classes. We have a small test case that lets us do just that for our various environments (qa, staging, prod). The event visitor simply reads every event and attempts to print useful error messages should deserialization fail.


Having been bitten by incompatible event changes as well, I find this an interesting approach and I am wondering if Axon itself could provide a built-in mechanism here to detect at deployment time that an event has changed in a way that is not compatible with already serialized events.

A very naive approach would be to somehow discover all the events at startup and checksum them into a table. Detect event changes at every startup using this checksum, then look up in the event store whether there are any events stored of this type, then use the event serializer to deserialize all these existing events. If any deserialization fails, compatibility was broken.

Alternatively, if you can detect in exactly what way an event changed (for example, an attribute was renamed), you could probably just run a query on the database to see if any stored events actually contain this attribute. This requires a native JSON datatype in the event store…

Interesting topic !