Business unique constraint violation

Hi Guys,

Are there any recommended approaches for handling business unique constraint violations?

I tried googling but couldn’t find an existing discussion of this.

For example, we have a user profile aggregate and we should not allow creating several users with the same nickname, email address, physical address, or phone number.

These fields are not part of the aggregate’s unique identifier, and we cannot query aggregates by such fields (which, as far as I understand, is not recommended anyway).

The only way I can see is to create a separate view and check the constraint there; however, that leaves a possible gap between a new profile being created and the view being updated.

Also, in Mongo for example, if we keep the event as JSON, we can apply Mongo indexes (even unique constraints) to such fields, but that’s vendor-specific,

and as far as I can see, the current Mongo Event Store doesn’t store the event message this way (please correct me if I’m wrong).
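To illustrate, here is roughly what I mean by a unique index, using the plain MongoDB Java driver (the collection and field names are just examples):

    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.IndexOptions;
    import com.mongodb.client.model.Indexes;
    import org.bson.Document;

    public class UniqueEmailIndex {
        public static void main(String[] args) {
            MongoCollection<Document> profiles =
                    MongoClients.create("mongodb://localhost:27017")
                            .getDatabase("app")
                            .getCollection("userProfiles");

            // With a unique index in place, a second insert with the same
            // email fails with a duplicate key error (code 11000).
            profiles.createIndex(Indexes.ascending("email"),
                    new IndexOptions().unique(true));
        }
    }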

I’d appreciate it if someone could share their thoughts or experience.

Thanks,

Evgeny Kochnev

Unless I’m mistaken, in the command handler you would simply issue a query to determine whether or not to apply the command/event.

Hi Evgeniy, Brian,

Ideally we’d do these business constraint validations when handling the command, as command handling is the decision-making point within our application.
However, there are cases where you’re required to query the constraints from a different source, as you’re suggesting, Evgeniy.

You could wire a repository as a bean into your command handling function (assuming you’re in a Spring environment), as the SpringBeanParameterResolver gives you that functionality in Axon.
However, I strongly advise against doing that if the bean you’re wiring performs long or blocking calls: a blocking call made while handling a command to check such constraints blocks the entire Aggregate from handling any other commands in the meantime.
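For completeness, a minimal sketch of that wiring, assuming Axon 4 style APIs (the command and repository types are made up for the example):

    import org.axonframework.commandhandling.CommandHandler;
    import org.axonframework.modelling.command.AggregateIdentifier;

    // Hypothetical repository bean and command, just for the example.
    interface ProfileLookupRepository {
        boolean emailExists(String email);
    }

    class ChangeEmailCommand {
        final String profileId;
        final String email;
        ChangeEmailCommand(String profileId, String email) {
            this.profileId = profileId;
            this.email = email;
        }
    }

    public class UserProfile {

        @AggregateIdentifier
        private String profileId;

        protected UserProfile() {
            // required by Axon
        }

        // The extra parameter is resolved to the Spring bean of that
        // type by the SpringBeanParameterResolver.
        @CommandHandler
        public void handle(ChangeEmailCommand command, ProfileLookupRepository lookup) {
            // Careful: if this call blocks, the whole aggregate is blocked
            // from handling other commands in the meantime.
            if (lookup.emailExists(command.email)) {
                throw new IllegalStateException("email already in use");
            }
            // AggregateLifecycle.apply(new EmailChangedEvent(...)) would follow here.
        }
    }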

The solution I’d suggest instead, even though it moves this validation out of the command handler, is to perform it prior to publishing the command.
In some scenarios that would be some form of Service called from your UI, but in others it could also be a Saga (as we typically publish commands from Sagas).
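A rough sketch of that pre-dispatch validation (the service, repository, and command types are hypothetical; CommandGateway is Axon’s standard gateway):

    import org.axonframework.commandhandling.gateway.CommandGateway;

    // Hypothetical query-model repository and command, just for the example.
    interface ProfileLookupRepository {
        boolean emailExists(String email);
    }

    class CreateProfileCommand {
        final String profileId;
        final String email;
        CreateProfileCommand(String profileId, String email) {
            this.profileId = profileId;
            this.email = email;
        }
    }

    // A Service called from the UI; it validates against the query model
    // before the command ever reaches the aggregate.
    public class RegistrationService {

        private final CommandGateway commandGateway;
        private final ProfileLookupRepository lookup;

        public RegistrationService(CommandGateway commandGateway,
                                   ProfileLookupRepository lookup) {
            this.commandGateway = commandGateway;
            this.lookup = lookup;
        }

        public void register(String profileId, String email) {
            // Constraint check happens before dispatch, so the aggregate
            // itself never blocks on it.
            if (lookup.emailExists(email)) {
                throw new IllegalStateException("email already in use");
            }
            commandGateway.sendAndWait(new CreateProfileCommand(profileId, email));
        }
    }

Note that this check is still subject to the read-model lag Evgeniy described, so it narrows the window for duplicates rather than closing it entirely.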

That’s my 2 cents, hope it gives you some insights.

Cheers,

Steven

Beyond technical solutions, you first need to decide what level of consistency you really need. In the real world, nothing is absolutely consistent. Things happen concurrently, and we resolve conflicts when they occur. If you want a system to be scalable, you’ll have to approach it the same way.

A lot has been written about this problem in the CQRS/ES domain. Just search for “event sourcing set validation” and you’ll find plenty of resources. This article sets out some of your options quite nicely: http://danielwhittaker.me/2017/10/09/handle-set-based-consistency-validation-cqrs/

Cheers,

Allard

Agreed with the comments so far: in general you want to avoid having this kind of constraint if it’s not a hard requirement.

However, sometimes it really is needed. This is something we’ve had to solve in our system as well. The approach we’ve ended up taking, which has worked out quite well for us but does require a bit of care, is to populate the query model with preliminary values before dispatching the command. Constraints on the query model tables give us uniqueness guarantees.

The basic flow:

  • Try to insert a row with the client-supplied ID.
  • If that fails, return a duplicate-ID error to the client (Axon isn’t involved at all in this case).
  • Otherwise, issue the creation command.
  • In an event handler, listen for the aggregate-created event and attempt to update the existing row. If the update doesn’t touch any rows, insert a new row instead.

That last item is needed in case of replays; if we’re rebuilding the query model, the code that does the initial insertion during client request handling won’t have run, so the event handler needs to do it.
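In code, the two sides of that flow look roughly like this (a JDBC-based sketch with hypothetical table and type names; our real implementation differs in the details):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    import org.axonframework.commandhandling.gateway.CommandGateway;
    import org.axonframework.eventhandling.EventHandler;

    // Hypothetical command and event types, just for the example.
    class CreateProfileCommand {
        final String profileId;
        final String email;
        CreateProfileCommand(String profileId, String email) {
            this.profileId = profileId;
            this.email = email;
        }
    }

    class ProfileCreatedEvent {
        final String profileId;
        final String email;
        ProfileCreatedEvent(String profileId, String email) {
            this.profileId = profileId;
            this.email = email;
        }
    }

    // Request-handling side: claim the ID in the query model first.
    class ProfileRegistration {
        private final Connection connection;        // query-model database
        private final CommandGateway commandGateway;

        ProfileRegistration(Connection connection, CommandGateway commandGateway) {
            this.connection = connection;
            this.commandGateway = commandGateway;
        }

        void register(String profileId, String email) {
            try (PreparedStatement insert = connection.prepareStatement(
                    "INSERT INTO profile_view (profile_id, email, status) VALUES (?, ?, 'PENDING')")) {
                insert.setString(1, profileId);
                insert.setString(2, email);
                insert.executeUpdate();
            } catch (SQLException e) {
                // Real code would inspect the SQLState to tell a duplicate
                // key apart from other failures before reporting it.
                throw new IllegalStateException("duplicate profile ID", e);
            }
            // Only once the row is claimed do we publish the command.
            commandGateway.sendAndWait(new CreateProfileCommand(profileId, email));
        }
    }

    // Event-handler side: update-or-insert, so replays (where register()
    // never ran) still populate the table.
    class ProfileViewProjection {
        private final Connection connection;

        ProfileViewProjection(Connection connection) {
            this.connection = connection;
        }

        @EventHandler
        public void on(ProfileCreatedEvent event) throws SQLException {
            try (PreparedStatement update = connection.prepareStatement(
                    "UPDATE profile_view SET status = 'CREATED' WHERE profile_id = ?")) {
                update.setString(1, event.profileId);
                if (update.executeUpdate() == 0) {
                    // No preliminary row (e.g. during a replay): insert instead.
                    try (PreparedStatement insert = connection.prepareStatement(
                            "INSERT INTO profile_view (profile_id, email, status) VALUES (?, ?, 'CREATED')")) {
                        insert.setString(1, event.profileId);
                        insert.setString(2, event.email);
                        insert.executeUpdate();
                    }
                }
            }
        }
    }

The preliminary INSERT is what actually enforces uniqueness: the database constraint arbitrates between concurrent requests, and everything after it is bookkeeping.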

Replays do have one subtlety: to avoid having a window where the client could request a duplicate ID that hasn’t been inserted into the query model yet, we always have our replays build brand-new tables alongside the existing older versions, and we do the initial insertion into both versions of the table. But we only need to do that because our application has to stay available 24x7; if you are allowed to take your application offline for maintenance, there’s no need to account for client requests arriving during a replay.

-Steve