Erik, thank you! Too long since the last time I was able to climb on a soapbox.
Ok, there are a few things at play here. Believe me, what I’m writing about is backed by experience: I have worked on systems used for county administration and later at an insurance company, and I have been involved in precisely the type of situation you describe. More than once!
Basically, what this type of procedure does is introduce an unaudited data fix. If that doesn’t sound scary enough: what if other systems receive the initially incorrect data and start processing it? We can hope they fail due to the inconsistencies, but they are just as likely to introduce downstream problems. As a result we potentially need to fix a large set of data stored in several locations, while processing of the faulty data has already started and been logged. The result is that the audit logs we have already produced will no longer match the data.
Having a “traditional” database allows us to do such a fix outside of the application, but the application itself needs to be fixed as well, so we are looking at two high-prio fixes. Those are likely to claim senior developers (as an alternative to a full round of testing) and business involvement to verify the resulting data. So the costs of this emergency fix are high! Please note that with a CQRS (Axon Framework) application we could still choose this approach by using state-stored aggregates.
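To make that mismatch concrete, here is a minimal sketch of the state-stored style in plain Java (hypothetical names, no Axon or database involved): the aggregate only keeps current state, so an out-of-band fix can overwrite it directly, but nothing forces the audit log to be updated along with it.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a state-stored aggregate: only the latest state is kept,
// and the audit trail is maintained separately, by convention.
class StateStoredPolicy {
    private long coverageCents;
    private final List<String> auditLog = new ArrayList<>();

    StateStoredPolicy(long initialCoverageCents) {
        this.coverageCents = initialCoverageCents;
        auditLog.add("CREATED coverage=" + initialCoverageCents);
    }

    // Normal business operation: state change and audit entry together.
    void changeCoverage(long newCoverageCents) {
        auditLog.add("CHANGED coverage=" + newCoverageCents);
        this.coverageCents = newCoverageCents;
    }

    // The "emergency data fix": mutates state without touching the audit
    // log -- exactly the mismatch described above.
    void unauditedFix(long correctedCoverageCents) {
        this.coverageCents = correctedCoverageCents;
    }

    long coverageCents() { return coverageCents; }
    List<String> auditLog() { return auditLog; }
}
```

After `unauditedFix`, the current state and the last audit entry disagree, and nothing in the code can detect that.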
If we look at common practice for highly audited systems, a data fix is “just another transaction”. Fixing bugs by “just” fixing the data is risky, and we still need to fix the bug itself.
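The “just another transaction” alternative can be sketched in the same plain-Java style (hypothetical names, no Axon dependency): in an event-sourced aggregate the fix is appended as a compensating event, so the history and the derived state agree by construction.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an event-sourced aggregate: state is derived by replaying the
// event log, so a data fix becomes a new, fully audited event.
class EventSourcedPolicy {
    record Event(String type, long coverageCents) {}

    private final List<Event> events = new ArrayList<>();

    // Normal business operation.
    void changeCoverage(long cents) {
        events.add(new Event("COVERAGE_CHANGED", cents));
    }

    // The audited fix: a compensating event correcting the earlier one,
    // applied through the same channel as any other transaction.
    void correctCoverage(long cents) {
        events.add(new Event("COVERAGE_CORRECTED", cents));
    }

    // Current state is always a fold over the full, immutable history.
    long coverageCents() {
        long current = 0;
        for (Event e : events) current = e.coverageCents();
        return current;
    }

    List<Event> history() { return List.copyOf(events); }
}
```

Here the correction itself shows up on the audit trail, so downstream consumers of the event stream see the fix the same way they saw the original mistake.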
What I’m talking towards is actually old hat: we need to drive down the cost of change, and the biggest factor is the speed with which we can move changes to production. This starts with introducing DevOps practices for automated builds and testing, supported by Agile “methodologies”. Those quotes are on purpose, because (as I’ve seen in reality) just forcing all IT project managers to get Scrum certified generally doesn’t provide long-term benefits. You need to bring the horizon of change closer, so that the size of individual changes goes down and deployments to production can happen more often.

And, lo and behold, this will also drive down the size of deployed components, so we don’t get hung up on all the dependencies between different parts of our codebase, the so-called “Ball of Mud”. Tests become a lot easier too, and automating them with Cucumber brings us to the point where even the business will accept that they cover sufficient ground for a production push, because they can specify the tests in (relatively) normal language, without having to resort to coding. And then the surprise when someone looks at the changed codebase and asks when you switched to a microservices architecture…
Cheers,
Bert