Reset tokens and projections for integration tests

Hi guys/gals,

We have a tiny issue with integration tests that I would like to share, so that others facing the same problem can solve it in an efficient and official way.

Our setup includes an Axon Server SE with development mode enabled and several Spring Boot applications, some of which contain projection models in JPA repositories. We use Postman to run integration tests. The issue is (as you would suspect) preparing data before each test and cleaning up afterwards (or beforehand).

A general structure of our integration test suite looks like:

  1. Erase event store
  2. Erase projections in app1
  3. Erase projections in app2
  4. Erase projections in app3
  5. Load “test” data for app1
  6. Load “test” data for app2
  7. Load “test” data for app3
  8. Run test(s) which send requests and check responses across one or more cooperating apps

As you can see, just preparing an environment to actually run a test requires seven steps while only having three apps. Our naive strategy was to shut down all apps including Axon Server, delete all data files and databases, and then start everything back up. With 10+ apps it takes forever to set up an environment for a single test suite. And we have a lot of test suites, with many more in the making as the applications grow. This gives us quite a lot of headaches and costs a lot of processing time as well as our own time.

As I understand it, the official way is to have a @ResetHandler method with something as simple as repository.deleteAll() in each of my event-handling projectors. On top of that, it is also necessary to perform a shutDown-resetTokens-start sequence on all TrackingEventProcessors, as noted for example here by @Corrado_Musumeci and elsewhere.
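A minimal sketch of that pattern, for anyone unfamiliar with it (the projector, repository, and event names are purely illustrative; @ResetHandler and the processor methods are real Axon constructs):

```java
import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.eventhandling.ResetHandler;

@ProcessingGroup("account-projection") // hypothetical processing group
public class AccountProjector {

    private final AccountRepository repository; // hypothetical Spring Data JPA repository

    public AccountProjector(AccountRepository repository) {
        this.repository = repository;
    }

    @EventHandler
    public void on(AccountCreatedEvent event) { // hypothetical event
        // ... update the projection ...
    }

    @ResetHandler
    public void onReset() {
        // Wipe the projection; it will be rebuilt during the replay.
        repository.deleteAll();
    }
}
```

Resetting then amounts to calling shutDown(), then resetTokens() (which invokes the @ResetHandler methods), then start() on each TrackingEventProcessor.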

All of this seems easy to do in a simple application. However, having a multitude of applications, each with another multitude of projection repositories, quickly leads to a lot of duplicated code.

Since I could not find anything reusable in the official guide or here on Discuss, I have come up with my own reusable solution, which requires only a few tweaks in each application and works more or less out of the box. Axon Server already provides an endpoint to erase events, so I followed that example and provided a similar DELETE endpoint to

  • Shut down all TrackingEventProcessors, reset their tokens, and start them again
  • Erase all data in all JPA repositories

All of the above is accomplished just by adding an @Import(ResetConfiguration.class) annotation in the right place in each application.
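For illustration, such a reusable configuration could look roughly like this. The names ResetConfiguration and ResetController come from the post itself, but the endpoint path and the exact wiring are my assumptions, not the author's actual code:

```java
import java.util.List;
import org.axonframework.config.EventProcessingConfiguration;
import org.axonframework.eventhandling.TrackingEventProcessor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.repository.CrudRepository;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.RestController;

@Configuration
public class ResetConfiguration {

    @Bean
    public ResetController resetController(EventProcessingConfiguration processingConfig,
                                           List<CrudRepository<?, ?>> repositories) {
        return new ResetController(processingConfig, repositories);
    }
}

@RestController
class ResetController {

    private final EventProcessingConfiguration processingConfig;
    private final List<CrudRepository<?, ?>> repositories; // all repositories in the context

    ResetController(EventProcessingConfiguration processingConfig,
                    List<CrudRepository<?, ?>> repositories) {
        this.processingConfig = processingConfig;
        this.repositories = repositories;
    }

    // Assumed endpoint path; must be protected or disabled outside test environments.
    @DeleteMapping("/reset")
    public void reset() {
        processingConfig.eventProcessors().forEach((name, processor) -> {
            if (processor instanceof TrackingEventProcessor) {
                TrackingEventProcessor tep = (TrackingEventProcessor) processor;
                tep.shutDown();
                tep.resetTokens(); // runs the @ResetHandler methods
                tep.start();
            }
        });
        // Erase any remaining projection data.
        repositories.forEach(CrudRepository::deleteAll);
    }
}
```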

Here is the source code and a tiny guide:

Would you consider such an approach sensible, or completely wrong? Would you recommend something different?

Thanks for any feedback and/or ideas,
David


Hi David,

Indeed, integration tests of a single component can be complex and time-consuming. Adding additional components to the environment multiplies the complexity and the time needed to set up, run, and tear down, exactly as you mentioned. I assume that you’re reusing the environment for subsequent test runs.

Your approach to the teardown is valid. The code you’ve provided represents the event replaying approach. If you erase the event store beforehand, which is allowed in development mode, then there are no events to be replayed, and you end up with a clean state for test execution. Please remember that exposing the ResetController can be dangerous; access to it should be appropriately protected, or the controller itself should be disabled using Spring profiles or excluded from the final build.
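The profile-based guard could be sketched like this (the profile name is illustrative, not a convention):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// The reset endpoint is only wired up when the "integration-test" profile is
// active, so a production deployment without that profile never exposes it.
@Profile("integration-test")
@Configuration
public class ResetConfiguration {
    // ... bean definitions for the reset endpoint ...
}
```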

Alternatively, you can consider creating a testing strategy based on the testing pyramid. It allows you to clearly define which behavior or requirements should be tested at which level. By doing this, you can eliminate test cases that are duplicated across levels, subsequently limit the number of integration test cases to the required minimum, and as a result reduce the time needed for system verification. If possible, you can shift testing of some behavior to a lower level of the testing pyramid, e.g., to unit testing, which should be faster by its nature.
You can consider using Axon Server together with the Testcontainers library at the Spring integration test level. An example can be found here.
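A minimal sketch of what starting Axon Server SE via Testcontainers could look like; the image name and ports match the public Axon Server documentation, but treat the exact wiring as an assumption:

```java
import org.testcontainers.containers.GenericContainer;

public class AxonServerTestSupport {

    // 8024 is Axon Server's HTTP/REST port, 8124 its gRPC port.
    static final GenericContainer<?> AXON_SERVER =
            new GenericContainer<>("axoniq/axonserver:latest")
                    .withExposedPorts(8024, 8124)
                    // Axon Server is a Spring Boot app, so relaxed binding maps this
                    // env var to the axoniq.axonserver.devmode.enabled property.
                    .withEnv("AXONIQ_AXONSERVER_DEVMODE_ENABLED", "true");

    static void start() {
        AXON_SERVER.start();
        // Point the application under test at the mapped gRPC port:
        System.setProperty("axon.axonserver.servers",
                AXON_SERVER.getHost() + ":" + AXON_SERVER.getMappedPort(8124));
    }
}
```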

Another possible option could be to design your test cases so that they can run one after another without conflicts. This method strongly depends on your testing scenarios and data. Suppose it is possible to generate unique test data for every test run that doesn’t clash with other test executions. In that case, you can clean data at less frequent intervals. Ultimately, this method allows you to run test cases in parallel and gain further time savings.

I hope these ideas will be helpful for you,
Michal

Hi Michal,

I am really doing full integration tests, as in: POSTing a request to one microservice API results in GETting the expected data from another microservice API. And we also have an in-house infrastructure for deploying our microservices.

Our previous strategy also included repeatable tests where each test creates its own data and is thus completely independent from other tests and can also run in parallel. However, due to the amount of tests and the complexity of our business processes, this led to a lot of work just to maintain the data preparation for our already understaffed QA team. To overcome these obstacles, we are trying a different approach which relies on a quick reset-load automation.

As for the dangerous part about the controller: since it can be autoconfigured, we will inject the dependency on the controller into our microservices’ pom.xml files only during development pipeline builds. This should physically prevent deploying the controller to a production environment.
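One way to sketch that pom.xml injection is with a Maven profile that the development pipeline activates (the coordinates below are placeholders, not real artifacts):

```xml
<profiles>
  <!-- Activated only by the dev pipeline, e.g. mvn package -Pintegration-test -->
  <profile>
    <id>integration-test</id>
    <dependencies>
      <dependency>
        <groupId>com.example</groupId>            <!-- placeholder -->
        <artifactId>reset-controller</artifactId> <!-- placeholder -->
        <version>1.0.0</version>
      </dependency>
    </dependencies>
  </profile>
</profiles>
```

A release build without the profile simply never packages the controller.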

Anyways, thanks for assuring me that I am on the right track! :slight_smile:

David

I haven’t tried this with Axon on large distributed systems, but I’ve done something similar with other large products before (and with a small demo Axon app). My approach was to leverage Docker, and particularly data volumes. I prepare data volumes containing the initial test data and store them together with the integration tests. The first thing the tests do is make a copy of the respective volume and start a container with it. After the test is done, the copy is removed. I was doing this with shell scripts in the past, but I guess these days Testcontainers can be used instead. I don’t know if this can help in your case; just thought it’s worth mentioning.

Hi David,

We are doing something similar with integration tests for our product. We use Testcontainers to run Axon Server SE (and other dependencies) to create end-to-end tests that actually send requests against an embedded application instance in our Spring Boot tests, like you describe.

What we do differently is that we have split the tests for the writing (command) and reading (projection) side, as our application is also split that way (we have a command and a projection microservice).
For the command side, we send POST/PUT/DELETE/… requests and check for the expected events appearing in the event store (we have written an “EventRecorder” for that purpose). For the projection side, we issue our internal commands against Axon Server to create test data and wait for the projections to process the resulting events, then send GET requests and check for the expected response payloads. In both cases, after every test case we reset the event store and all event processors for test isolation.
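The “EventRecorder” idea could be sketched as a catch-all event handler that stores every published event for later assertions. This is my guess at the shape of such a class based on the description above, not Jakob's actual code; it relies on Axon resolving handlers by payload type, so a handler on Object receives all events:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import org.axonframework.eventhandling.EventHandler;
import org.springframework.stereotype.Component;

@Component
public class EventRecorder {

    private final List<Object> recorded = new CopyOnWriteArrayList<>();

    @EventHandler
    public void on(Object event) {
        recorded.add(event); // record every event the processor delivers
    }

    public List<Object> recorded() {
        return List.copyOf(recorded);
    }

    public void reset() {
        recorded.clear(); // call between test cases for isolation
    }
}
```

A test would then poll recorded() (e.g. with Awaitility) until the expected event sequence appears.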

Maintaining the test data is the biggest effort here; basically, we need the expected sequence of events for each test case. In that sense, the approach of @BribedSeeker904 to neglect the internals (events) and test commands against the resulting projections seems a bit easier.

@milendyankov your approach of using volumes to prepare test data sounds interesting. One question I have, though: how did you maintain the test data? It seems quite complex to change an existing volume in case the preconditions for a test case change.

Best Regards,
Jakob

To add a bit to the discussion: we are trying to include Axon Server in Testcontainers itself, as you can see here.

Keep an eye out for news :wink:

Well, that depends on what you consider a precondition for a test case. The volumes hold the “historical” or “supporting” data in the projects that I’m referring to. For example: users, permissions, configurations, previous user activities, default products, price lists, pre-existing content, etc. Such data does not need to change often, if at all. It is essentially a clean but not empty starting point that each test is aware of. If there is a need for specific preconditions for a specific test, those are added to the copy of the volume by the test itself and discarded later on. The purpose of the volumes is to avoid having all tests add the same preconditions repeatedly (which in some cases may take a lot of time).

In the rare cases where you need to change the base data, you can copy the volume and alter it “manually” using whatever means the system provides. Then you make that the new or alternative base. This is also how you create multiple bases for various scenarios (for example, different amounts of X for load testing, malformed data for testing recovery scenarios, …). So it’s not a one-base-fits-them-all thing. But in my experience, you need just a few of those to cover, say, 90% of all test cases. Of course, if each test needs completely different preconditions, then I agree this approach makes no sense.