Do we really want to inject query repositories into the aggregate root?

Hi guys,
I see the Axon reference guide says:
"SpringPrototypeEventSourcingRepository
Depending on your architectural choices, it might be useful to inject
dependencies into your aggregates using Spring. You could, for
example, inject query repositories into your aggregate to ensure the
existence (or non-existence) of certain values."

My question is: do we really want to inject query repositories into the
aggregate root?
In which scenario would we want to do that?

Thanks.

Hi,

in some circumstances, it is not possible to load an entire aggregate into memory. In that case, you can lazy-load some entities that would normally be part of the aggregate. To load those, you can inject a repository or DAO into your aggregate.

Personally, I haven’t come across this need yet. I prefer to keep aggregates as small as possible and make sure they can be loaded into memory entirely. This feature was added upon request of two different projects that did have this need.

So if you can, prevent the need for a SpringPrototypeEventSourcingRepository (or in Axon 2 the SpringPrototypeAggregateFactory).

Cheers,

Allard

Allard,

if I may follow-up on your answer - how would you enforce the rule “there must be at most one instance of a given aggregate root with the same name”?

In case one uses AggregateAnnotationCommandHandler, there seems to be no option other than adding a Repository to the aggregate root.

Would you recommend an alternative approach?

Thanks.

/A

Hi Alessandro,

If I may, I’ve already stumbled upon this case, and after trying many solutions I found that implementing this in the repository is the best way to go. Typically, I do this by keeping a tally of existing aggregates in MongoDB, which is sort of the equivalent of what a query repository would do.

Nicolas,

many thanks for your prompt response - it is nice to see how active this community is.

Just a clarification - if I understand correctly, you implemented your own flavour of Aggregate Repository - are you using Event Sourcing in this case?

Indeed many thanks for your support.

/A

I am in fact using an EventSourcingRepository, but I override the add() method. It adds and verifies the name (I’m using “name” in my response, but this could be any attribute which needs a unique value among sibling aggregates) of the aggregate in a Mongo collection, and rejects the new aggregate if its name is already present in the collection.
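For illustration, the overridden add() could look roughly like the sketch below. This is not Axon or Mongo API: the Axon EventSourcingRepository base class and the Mongo collection are replaced by a plain in-memory set, and all class names (Account, UniqueNameAccountRepository) are hypothetical.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for an aggregate root; in Axon this would be an
// event-sourced aggregate, not a plain class.
class Account {
    private final String name;
    Account(String name) { this.name = name; }
    String getName() { return name; }
}

// Illustrative repository: add() records the name in a set (standing in
// for the Mongo collection with a unique index) and rejects duplicates
// before the aggregate's events would reach the event store.
class UniqueNameAccountRepository {
    private final Set<String> knownNames = ConcurrentHashMap.newKeySet();

    public void add(Account aggregate) {
        // Set.add returns false if the value was already present,
        // mirroring a duplicate-key rejection from a unique Mongo index.
        if (!knownNames.add(aggregate.getName())) {
            throw new IllegalStateException(
                    "An aggregate named '" + aggregate.getName() + "' already exists");
        }
        // In a real EventSourcingRepository subclass, super.add(aggregate)
        // would be called here.
    }
}
```

In the real implementation, the in-memory set would be an insert into a Mongo collection with a unique index, so that concurrent command handlers on different nodes see the same tally.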

This approach is far from perfect, as a valid business rule is implemented in the infrastructure layer, although one could argue that this is part of the contract exposed by the interface of the repository. I see this approach as a compromise. If you have a limited number of aggregates, it might be preferable to distill your domain and, where practical, find a common aggregate root for the entities which must have a unique name. Since the set is then populated by event sourcing, this is more consistent with the rest of the application.

Furthermore, it might also be a good idea to consider the name as a possible candidate for the unique identifier of the aggregate, in lieu of a UUID. I assume that if the name must be unique among sibling aggregates, then there is a perfectly valid, domain-related reason for this. If that is the case, it is very possible that the name is a better aggregate identifier than a UUID, in which case your problem would be solved with already existing code, since it’s impossible to have two aggregates with the same aggregate identifier.

Hope this helps,
Nicolas.

Hi,

sounds like a very good approach to me. Doing the ‘set validation’ inside the repository is a better approach than the one I used to take (although I avoid it as much as possible). The repository is conceptually part of the Domain, so there is nothing wrong with having this logic there. As Nicolas states: it’s part of the Repository contract. The way the repository does the validation should not leak out, though. That’s part of the infrastructure.

However, be very careful when using ‘natural keys’ as aggregate identifiers. It got me in big trouble (a long time ago). Imagine a user name is a unique attribute. User A creates an account. After a while, the account is deleted. Now, another user wants to use username A as well. Impossible. You cannot create the Event stream, because there already is one.

Nicolas, you’re probably not aware of it, but you’ve pointed me to one of the missing pieces of the puzzle. I wanted to implement set validation in Axon, but didn’t want developers to be bothered with yet more configuration to get it going. Your approach led to the idea of adding an annotation to a field (or method) on the aggregate. The repository would then automatically do the validation when adding or saving an aggregate. It’s too late to include it in 2.0, but 2.1 should be possible. Getting validation going is then as simple as annotating the fields that must be unique across instances.
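The annotation idea could be prototyped roughly as follows. To be clear, @Unique and UniqueFieldValidator are hypothetical names for this sketch, not (yet) part of Axon; the validator keeps its tally in memory where a real one would consult a store like Mongo.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical marker for fields that must be unique across aggregate instances.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Unique {}

class User {
    @Unique
    private final String username;
    User(String username) { this.username = username; }
}

// A repository could invoke this validator when adding an aggregate:
// it reflects over @Unique fields and keeps a tally of values seen so far.
class UniqueFieldValidator {
    private final Map<String, Set<Object>> seen = new ConcurrentHashMap<>();

    public void validate(Object aggregate) {
        for (Field field : aggregate.getClass().getDeclaredFields()) {
            if (!field.isAnnotationPresent(Unique.class)) {
                continue;
            }
            field.setAccessible(true);
            try {
                Object value = field.get(aggregate);
                Set<Object> values = seen.computeIfAbsent(
                        field.getName(), k -> ConcurrentHashMap.newKeySet());
                // Reject the aggregate if another instance already claimed this value.
                if (!values.add(value)) {
                    throw new IllegalStateException(
                            "Duplicate value for @Unique field '"
                                    + field.getName() + "': " + value);
                }
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
        }
    }
}
```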

Regardless, set validation should always be avoided when possible. It’s a concept that limits scalability (it reduces the A and P guarantees of CAP). When possible, simply have the clients do a query before sending a command. When duplicate entries are detected, solve them by sending another command to fix the issue (e.g. block a user account). In many cases, it’s not really an issue at all.

Cheers,

Allard

Allard, Nicolas,

first of all, many thanks for your appreciated input.

The repository approach sounds reasonable to me. Only one question - by the time the code hits the Repository, the Event has already been ingested by the overarching infrastructure; how should I respond? With an exception? Will the event disappear from the event store? Most importantly, does it make sense to store invalid events?

Many thanks.

/A

Hi Alessandro,

if the uniqueness check is done before any events are sent to the event store, you’re safe. Throwing an exception (which is probably what you want to do anyway) will cause the unit of work to be rolled back. Axon will ensure that none of the events are published.

Cheers,

Allard

For your info, I managed a rather clean design by putting an AOP advice before the add method; the advice scans for @BeforeAdd methods and executes them before giving control to the add method.
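That design could be sketched in plain Java like this. Here, @BeforeAdd is an illustrative annotation, and the interceptor uses direct reflection rather than a real AOP framework such as Spring AOP or AspectJ, which is where the actual advice would live.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Illustrative marker for validation methods to run before add().
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface BeforeAdd {}

class Order {
    final List<String> log = new ArrayList<>();

    @BeforeAdd
    void checkInvariants() {
        // A real check would e.g. verify name uniqueness and throw on failure.
        log.add("checked");
    }
}

// Stand-in for the AOP advice: before delegating to the repository's
// add(), scan the aggregate for @BeforeAdd methods and invoke each one.
class AddInterceptor {
    public void add(Object aggregate) {
        for (Method m : aggregate.getClass().getDeclaredMethods()) {
            if (m.isAnnotationPresent(BeforeAdd.class)) {
                m.setAccessible(true);
                try {
                    m.invoke(aggregate);
                } catch (Exception e) {
                    throw new RuntimeException("@BeforeAdd check failed", e);
                }
            }
        }
        // ...then hand control to the real repository's add(aggregate).
    }
}
```

Any exception thrown by a @BeforeAdd method aborts the add, which fits the earlier point that a rejection before events are published rolls back cleanly.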

/A

I like this idea, nice one.