Legacy Application Evolution Advice


I’m looking for some advice on the best strategy to evolve a “Big Ball of Mud” legacy application into a new, modular, event-driven architecture leveraging the Axon Framework. I’ve talked with my colleagues about Martin Fowler’s Strangler pattern, and we’ve generally agreed that this seems like the right way to approach the issue. However, as I read more about the Strangler pattern, I’m not sure that my ideas for executing it are on target. With that said, I want to present our domain as simply as possible, then outline our integration/strangulation strategy, and hopefully get some feedback that will give us more confidence in our approach — or some alternatives and other things to think about.

So first, our domain exists in the shipping industry and can be characterized as distressed package management and recovery. While “distressed” actually has many different meanings in the domain, the most common definition is when a shipping label becomes separated from its package, or is otherwise rendered unreadable, and we are left with a package that cannot be delivered. When we apply DDD to this domain, some of the bounded contexts we recognize include: package detail data capture, inventory/warehouse management, proactive research (looking for clues in the captured details of a package), and reactive research (taking calls from customers who are looking for their lost package).

We have identified the package detail data capture bounded context as the best place to start the evolution because it’s pretty simple. So far we have implemented the service back end, leveraging Axon 3 infrastructure, and we are presently working on a new Angular 6 UI.

The idea we have is that we will deliver a new data capture application and move all data capture agents to the new application at the same time. Meanwhile, all warehouse and research agents will remain in the legacy application, which will continue to use its own model for the data capture details. We will therefore need to implement an integration module in the new application that listens for domain events in order to update the legacy application. In this sense, the legacy application will simply be one of the materialized views of our new application.

At the same time as we release the new data capture application, we will also need to remove from the legacy application all existing functionality that enables users to modify the state of the captured package detail data. This point is critical in my mind in order to limit complexity and avoid the need for bidirectional synchronization of the two disparate data sources. However, as I’ve read about the Strangler pattern, it seems as though I should allow for bidirectional data flow. That seems like a nightmare…

Finally, assuming that the legacy application data model is simply a materialized view of the new package detail data capture application, I am concerned about managing the materialization process. I’m trying to decide if I need a saga here or simply an event listener service. It feels like I should use a saga because I need to embrace the fact that the connection to the legacy system may not be 100% reliable, or that there could be bugs in the legacy system that prevent some data from materializing there. It’s certainly possible to put exception handling in a non-saga listener service to deal with the unexpected, but it seems like I would get more help from a saga.

I hope my questions are clear and I greatly appreciate you for taking the time to share your insights.



Hi Troy,

This is a topic that regularly makes an appearance. Fortunately, we’ve seen a few scenarios applied successfully, and we’re happy to share those. Of course, we’d also be interested to hear about your approach, and your feedback once you’ve done the migration.

The pattern that we have seen applied several times already is very similar to what you’re describing. Essentially, it’s a scenario where the old application is left operational, while a new, Axon-based application is built next to it.
One varying factor is whether the new application takes over full control of the data, or whether the data may still be modified through the “old” application.

One thing I strongly recommend is to build the new application in such a way that it is completely unaware of the fact that there is data somewhere else. Events are an important enabler here.

A component would sit between both applications (it may be deployed as part of the new application, or separately) to coordinate data updates between them. I would recommend against using Sagas, unless the coordination between old and new is a complex process in functional terms — for example, when a certain combination of events must happen before a sync is triggered.
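To make that concrete, such a component can be a plain event handler assigned to its own processing group. This is only a sketch: the event type (`PackageDetailsCapturedEvent`), its accessors, and the legacy gateway (`LegacyPackageDetailDao`) are hypothetical names standing in for whatever your model and legacy schema actually look like.

```java
import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;

// Isolating the sync logic in its own processing group means a failure
// here only stalls legacy synchronization, not the rest of the application.
@ProcessingGroup("legacy-sync")
public class LegacySyncEventHandler {

    // Hypothetical gateway that writes to the legacy application's database.
    private final LegacyPackageDetailDao legacyDao;

    public LegacySyncEventHandler(LegacyPackageDetailDao legacyDao) {
        this.legacyDao = legacyDao;
    }

    @EventHandler
    public void on(PackageDetailsCapturedEvent event) {
        // Translate the new model's event into the legacy schema.
        // Throwing here (e.g. on a connection failure) lets the error
        // handler decide whether to halt and retry the processor.
        legacyDao.upsertPackageDetails(event.getPackageId(), event.getDetails());
    }
}
```

The handler itself contains no knowledge of sagas or processors; all the reliability concerns live in the processor configuration.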

The reliability of the sync process can be implemented using a Tracking Event Processor. Configure it so that a failure in the communication with the legacy application simply stops the processor’s progress until the connection is recovered. Axon will automatically retry with an incremental backoff, up to one attempt per minute.
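A configuration fragment along these lines, assuming Axon 3’s `EventHandlingConfiguration` API, could register the sync handler as a tracking processor and use `PropagatingErrorHandler` so that exceptions halt the processor (triggering the retry-with-backoff behavior) instead of being logged and skipped, which is the default:

```java
import org.axonframework.config.EventHandlingConfiguration;
import org.axonframework.eventhandling.PropagatingErrorHandler;

EventHandlingConfiguration eventHandling = new EventHandlingConfiguration()
        // Run the "legacy-sync" group as a tracking processor, so it keeps
        // its own token and can pause/resume independently.
        .registerTrackingProcessor("legacy-sync")
        // Propagate handler exceptions instead of logging them, so the
        // processor stops and retries rather than silently dropping events.
        .configureListenerInvocationErrorHandler(
                conf -> PropagatingErrorHandler.instance())
        .registerEventHandler(conf -> new LegacySyncEventHandler(legacyDao));
```

With this in place, a legacy-side outage means the processor’s tracking token simply stops advancing; once the legacy system is reachable again, processing resumes from where it left off, and no events are lost.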

Hope this helps you on your journey.


Thanks for the feedback Allard!