Best practice for validating commands with data from other aggregates?

Hi, I’ve got a quick question/issue I’m working through. I know that, generally, you’re not supposed to look at other aggregates from ‘inside’ the current one, but that rule typically concerns the post-apply event processing; the handy @CommandHandler on the aggregate itself makes this OK for command handlers only, I think? The other question has more to do with the best approach for grabbing that other data. I’ve seen the examples with the query service/repository on the FAQ page, but it’s not clear whether that is pulling data from, say, the ‘read side’. I’m wondering because I thought that was generally a bad idea: with eventual consistency the read side may lag a bit, and you won’t have the freshest data with which to perform your validations, as opposed to querying the actual write-side model (reconstituted from events, etc.).

Hi Erich,

if an aggregate needs information that is not part of the aggregate itself, there can be only one place to get it: a query model. That’s by definition. Where to perform that query is still open to suggestions. You could, for example, require that the command is validated before sending, perhaps as part of the infrastructure (e.g. a CommandDispatchInterceptor or CommandHandlerInterceptor). You could also inject a repository into your aggregate, so that it can fetch the necessary data.
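For example (a minimal sketch, assuming Axon 4-style APIs with Spring; the Order aggregate, CreateOrderCommand, OrderCreatedEvent, and ProductAvailabilityService names are made up for illustration), the extra handler parameter is resolved by Axon per invocation, so no repository field needs to live inside the aggregate:

    import org.axonframework.commandhandling.CommandHandler;
    import org.axonframework.eventsourcing.EventSourcingHandler;
    import org.axonframework.modelling.command.AggregateIdentifier;
    import static org.axonframework.modelling.command.AggregateLifecycle.apply;

    // Hypothetical read-side facade, registered as a Spring bean.
    interface ProductAvailabilityService {
        boolean isAvailable(String productId);
    }

    public class Order {

        @AggregateIdentifier
        private String orderId;

        protected Order() {
            // required by Axon for event-sourced reconstruction
        }

        // With Spring on the classpath, Axon resolves the extra parameter
        // from the application context, keeping the aggregate free of
        // injected fields. CreateOrderCommand/OrderCreatedEvent are
        // illustrative classes with the obvious getters.
        @CommandHandler
        public Order(CreateOrderCommand cmd, ProductAvailabilityService availability) {
            if (!availability.isAvailable(cmd.getProductId())) {
                throw new IllegalStateException("Product not available: " + cmd.getProductId());
            }
            apply(new OrderCreatedEvent(cmd.getOrderId(), cmd.getProductId()));
        }

        @EventSourcingHandler
        public void on(OrderCreatedEvent event) {
            this.orderId = event.getOrderId();
        }
    }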

Cheers,

Allard

Ok, but this query model/repo is not the repo that we use to fetch aggregates then? So if that’s the case, and it’s a ‘projection’/read side, etc., then I guess I’d need to look at my requirements? If the scenario requires up-to-date data for, say, command validation, then that particular query model would need to be transactionally consistent with the write model?

Hi Erich,

In a situation where you need to enforce consistency between two aggregates, you always have to deal with race conditions and concurrency in general. This complexity is either inherent in the problem domain and simply exposed by the rigid aggregate boundaries, or it’s incidental complexity that only exists due to a particular choice of aggregate boundaries. If it’s the latter, then maybe the aggregate boundaries need to be redrawn so that both entities involved in this process are part of the same aggregate.

Since you’re dealing with two aggregates, any validation that’s performed should only be considered a formality to rule out obviously invalid requests, never authoritative. Most of the time, checking a query model for the expected state of the other aggregate is good enough, especially when concurrency is low or the other aggregate’s value changes infrequently.

However, when consistency really matters, you have to consider all the scenarios where the value may change in either aggregate (before, during, and after command processing) and have processes for dealing with them. For the initial command, you could use a pattern similar to a two-phase commit:

  • Validate the data on the command against a query model, and note the version number or timestamp recorded in the query model.
  • Process the command in the aggregate, but don’t treat the value as authoritative;
  • e.g. OtherAggregateValueProposedEvent(thisAggregateId="ID2", otherAggregateValue=42, otherAggregateId="ID1", otherAggregateKnownVersion=5) (see the sketch of this step below).

Have a saga / event listener pick up the OtherAggregateValueProposedEvent and send a command to the other aggregate:

  • RequestAggregateSynchronizationCommand(targetAggregate="ID1", otherAggregate="ID2", …)
  • In the other aggregate, you can now determine whether the proposed value is actually still valid, and either:
      • apply an event noting that the value is correct and should be committed,
      • apply an event noting that the value needs to be corrected to some alternative value, or
      • apply an event noting that the other aggregate must not use the value at all and should go into an exception state.
  • (Note: it doesn’t need to be an event at all; it could be a value returned by the command handler, or a non-domain event published via the unit of work. A domain event is simply convenient and records the decision permanently.)
  • In any of these cases, the other aggregate can record the ‘linked’, or dependent, aggregate ID and consider how future changes to the value in question may impact the referenced aggregates (see the sketch below).
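Sketched out (Axon 4-style, illustrative names; the saga obtains its CommandGateway through Axon’s resource injection, e.g. an @Autowired transient field when using Spring):

    import org.axonframework.commandhandling.CommandHandler;
    import org.axonframework.commandhandling.gateway.CommandGateway;
    import org.axonframework.modelling.saga.SagaEventHandler;
    import org.axonframework.modelling.saga.StartSaga;
    import static org.axonframework.modelling.command.AggregateLifecycle.apply;

    public class ValueSynchronizationSaga {

        private transient CommandGateway commandGateway; // injected saga resource

        @StartSaga
        @SagaEventHandler(associationProperty = "thisAggregateId")
        public void on(OtherAggregateValueProposedEvent event) {
            commandGateway.send(new RequestAggregateSynchronizationCommand(
                    event.getOtherAggregateId(),        // target aggregate ("ID1")
                    event.getThisAggregateId(),         // requesting aggregate ("ID2")
                    event.getOtherAggregateValue(),
                    event.getOtherAggregateKnownVersion()));
        }
    }

    // In the other aggregate (a separate file), decide whether the proposed
    // value still holds. The command's target id routes it here.
    class OtherAggregate {

        private String id;
        private int value;
        private long version;

        @CommandHandler
        public void handle(RequestAggregateSynchronizationCommand cmd) {
            if (cmd.getKnownVersion() == version && cmd.getProposedValue() == value) {
                // still correct: the requester may commit it. This event also
                // permanently records the dependent aggregate's id.
                apply(new ProposedValueConfirmedEvent(id, cmd.getRequestingAggregateId(), value));
            } else {
                // alternatively: apply a correction event carrying the current
                // value, or an event sending the requester into an exception state
                apply(new ProposedValueRejectedEvent(id, cmd.getRequestingAggregateId(), value));
            }
        }
    }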

The saga/event handler would pick up the decision made by the linked aggregate and send another command to the initial aggregate:

  • Confirm Proposed Value Command: the aggregate now changes the state of the value from ‘proposed’ to ‘confirmed’, and can be certain that, as of the ‘other aggregate known version’, the value is in sync. It can also be sure that future changes to the value will eventually be made consistent, since the other aggregate (or a saga) has learned about the data dependency.
  • Reject Proposed Value Command: the value on the command turned out to be incorrect, but fortunately it hasn’t been used to make any decisions, since it was only ‘proposed’. The current value (as of the saga execution time) can be applied instead, or an exception process can be initiated.
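Continuing the ThisAggregate sketch from above, those two closing handlers might look like this (names still illustrative):

    @CommandHandler
    public void handle(ConfirmProposedValueCommand cmd) {
        // flips the value's state from 'proposed' to 'confirmed'
        apply(new ProposedValueConfirmedLocallyEvent(id, cmd.getConfirmedVersion()));
    }

    @CommandHandler
    public void handle(RejectProposedValueCommand cmd) {
        // the proposed value was never used for decisions, so it is safe to
        // replace it with the current value, or to start an exception process
        apply(new ProposedValueCorrectedEvent(id, cmd.getCurrentValue()));
    }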

Going forward, changes to the other aggregate’s value need to be synchronized to the aggregate in question for as long as it needs to be (eventually) consistent. Once the aggregate reaches a state where consistency no longer matters, the synchronization can be ended.
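That ongoing synchronization could be one more handler on the same saga (a sketch; it assumes the other aggregate publishes a change event carrying the dependent aggregate’s id, and that the saga has also associated itself with the other aggregate’s id, e.g. via SagaLifecycle.associateWith in its start handler):

    @SagaEventHandler(associationProperty = "otherAggregateId")
    public void on(OtherAggregateValueChangedEvent event) {
        // push the new value to the dependent aggregate; call SagaLifecycle.end()
        // (or use @EndSaga) once the dependency no longer matters
        commandGateway.send(new UpdateSynchronizedValueCommand(
                event.getDependentAggregateId(), event.getNewValue()));
    }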

Ok, yeah, I’ve already been giving the aggregate boundaries a relook, to see if this can be addressed by some refactoring. If it can’t, I’m going to see whether some of your suggestions might help. Thanks!