Struggling with DDD

Hello.

I’ve been trying to teach myself some of the DDD concepts, with event sourcing as well as CQRS.

I’ve made a million bad decisions in the past months, which I was able to identify on my own, changing my design to fit a little better. I’ve been trying to wrap my head around what gurus in the DDD community such as Eric Evans, Greg Young, Martin Fowler, and Udi Dahan mean when they talk about DDD. Some of their thoughts are simple enough to understand; others are too broad to apply to the specific context of my problems.

Now I’m stuck, and I realize I’m probably about to repeat a question that has been asked before. Funnily enough, though, having someone answer my version of it helps a lot, not just by saving the time I would otherwise spend researching it, but also because direct criticism of my claims is powerful.

I will try my best to list a few assumptions my brain makes while programming, and I would really appreciate it if anyone could point out problems I might discover if I keep following them; it’s a scenario where I don’t know if my thinking is valid.

  1. Aggregate

I think an aggregate is a process whose sole purpose is to go from point A to point B; its lifecycle is driven by the commands sent to it, be they method invocations or command handlers.
Each action on such an aggregate is guarded by the invariants/rules that can be asserted within its boundary; everything the command handler needs to validate the state is within the aggregate.
Each action emits events, which are the ledger of the aggregate’s life cycle; they mutate the state of the aggregate.

I think of the Order aggregate: an order can be Created, Validated, Completed. These are the steps the whole Order needs to go through to be useful.
I think of the Wallet aggregate: a wallet can be Created, Credited, Debited. Each of those steps is guarded by an invariant: you cannot create two wallets with the same id, you cannot credit the wallet twice for the same transaction, and you cannot debit the wallet when it does not have enough funds.
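The Wallet example can be sketched as a tiny event-sourced aggregate. This is a minimal, framework-free illustration in Python (all class and field names are mine, not from any library): the command handlers check the invariants, and only the emitted events mutate state.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WalletCreated:
    wallet_id: str

@dataclass(frozen=True)
class WalletCredited:
    wallet_id: str
    tx_id: str
    amount: int

@dataclass(frozen=True)
class WalletDebited:
    wallet_id: str
    tx_id: str
    amount: int

class Wallet:
    def __init__(self):
        self.id = None
        self.balance = 0
        self.seen_tx = set()        # transaction ids already applied
        self.pending_events = []    # the "ledger" produced by commands

    # -- command handlers: assert invariants, then emit events --
    def create(self, wallet_id):
        if self.id is not None:
            raise ValueError("wallet already exists")
        self._emit(WalletCreated(wallet_id))

    def credit(self, tx_id, amount):
        if tx_id in self.seen_tx:
            raise ValueError("transaction already applied")
        self._emit(WalletCredited(self.id, tx_id, amount))

    def debit(self, tx_id, amount):
        if tx_id in self.seen_tx:
            raise ValueError("transaction already applied")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self._emit(WalletDebited(self.id, tx_id, amount))

    # -- event handlers: the ONLY place state actually mutates --
    def _emit(self, event):
        self.apply(event)
        self.pending_events.append(event)

    def apply(self, event):
        if isinstance(event, WalletCreated):
            self.id = event.wallet_id
        elif isinstance(event, WalletCredited):
            self.balance += event.amount
            self.seen_tx.add(event.tx_id)
        elif isinstance(event, WalletDebited):
            self.balance -= event.amount
            self.seen_tx.add(event.tx_id)
```

Rehydrating the aggregate from history is then just calling apply() for each stored event in order, which is exactly the "events are the ledger" idea above.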

  2. Entity

I think an entity is just a way to split a big aggregate into sub-processes: when I have a big aggregate, I can distinguish a few parts of it that can be split out and handled in their own way, which spreads the logic across multiple parts that share consistent state driven by the events of the aggregate. They are all persisted together.

  3. Process Manager

I think a process manager is a component that automates the aggregate: every time an Order is created, it should listen for the creation event, do its best to validate that Order, and send a command back confirming that it was indeed validated.

I think the process manager is a mutable state machine whose state is computed similarly to an aggregate’s; the difference is that the events it receives can come from other aggregates.

For example, it collects (aggregates) various data from across the aggregates, so that once something interesting such as OrderCreated is about to be handled, it already has the data to say whether that order is valid.

3.1 Stateless Process Manager

It behaves like the previous process manager, but it does not persist any state: it reacts to the events emitted by an aggregate by dispatching the next command to that aggregate or to another one; the target aggregate is known from the event payload.
I have a lot of these in my application; most of them drive the automation of a specific aggregate, for example when an aggregate has finished one part of its process, the stateless process manager sends a command telling it to continue with the next part.
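A stateless process manager of this kind reduces to a pure function from event to the next command, with the target aggregate id taken from the event payload. A minimal sketch in Python; the OrderValidated/CompleteOrder names are illustrative assumptions, not from any framework:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderValidated:      # event emitted by the Order aggregate
    order_id: str

@dataclass(frozen=True)
class CompleteOrder:       # command driving the Order to its next step
    order_id: str

def on_event(event):
    """React to an event by returning the next command(s) to dispatch.

    No state is persisted anywhere: the event payload alone identifies
    the target aggregate.
    """
    if isinstance(event, OrderValidated):
        return [CompleteOrder(order_id=event.order_id)]
    return []
```

Because the function is pure, these "policies" are trivial to test and can be replayed safely.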

Biggest struggles:

  1. Connecting two distinct Aggregates into another.
    Sometimes I have two aggregates that do not know of each other, but when I introduce a new feature I would like to create another aggregate once both of them have emitted specific events.
    For example, when I have aggregates A and B, I would like to create a new aggregate C that drives some process involving both A and B.
    The problem is that I do not understand how I would process the events and compose enough state to create such an aggregate. From my understanding, when I use process managers I need to start the process on some event and assign it a correlation id, and succeeding events are only accepted when they carry that correlation. So when I start a process manager by handling an A event, I do not know the filter that would match a B event.

My understanding is that I’m missing a process manager capable of accepting any of the events and persisting some state; every process manager with persistent state that I’ve read about used a correlation id, so it only accepted events that the process manager had previously predicted.
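One common way around this (sketched below in Python, with hypothetical event names and a hypothetical shared customer_id as the correlation key, both assumptions of mine) is to let the process manager be started by either event, correlating on a business key that both events carry, and only issuing the command to create C once both halves have arrived:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AHappened:           # event from aggregate A
    customer_id: str

@dataclass(frozen=True)
class BHappened:           # event from aggregate B
    customer_id: str

@dataclass(frozen=True)
class CreateC:             # command that creates the new aggregate C
    customer_id: str

class ABProcessManager:
    """One instance per customer_id; EITHER event type may start it."""

    def __init__(self):
        self.instances = {}    # customer_id -> set of seen event types

    def handle(self, event):
        seen = self.instances.setdefault(event.customer_id, set())
        seen.add(type(event).__name__)
        if {"AHappened", "BHappened"} <= seen:
            del self.instances[event.customer_id]    # process complete
            return [CreateC(event.customer_id)]
        return []
```

The key point is that the correlation id is not assigned by the process manager when it starts; it is a business identifier both events already carry, so neither event needs to "predict" the other.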

  2. Event mess.

I have a lot of events. I mean, a lot.
Every subtle change is recorded; I think that’s part of event sourcing. I try to make the events as intelligent as possible, but their number grows and grows, and sometimes it’s hard to wrap my head around them within clear boundaries. I typically try to hide the subtle events and only expose those with bigger meaning to the services that may be using them. Sometimes that requires firing two events from the same change: one just for the local aggregate context, saying for example that the Order was Validated, and a second one saying that some bigger step has just been completed, Order Placed.

  3. Read model based on different bounded contexts

Recently I’ve realized that there are a lot of things I would like to include in the read model that already exist in other read models.

Read models are generated by projecting a stream of events, but some data the model should contain lives in a different bounded context. I have access to that context’s events; the problem is that the data was emitted before the projection of the other read model was even started.

Let’s say a User was created with some username, and then that user’s Wallet was created. I would like to include the username of the wallet’s owner, but I cannot make the wallet projection start on UserCreated, because that doesn’t really make sense. I would need to somehow ‘cache’ the username for a specific user id and then retrieve it when I project the wallet read model. This creates duplication, a kind of bookkeeping of data, because I do not have access to the repository of the User service where I could grab the username. So I create a smaller representation of users inside the Wallet service, where I keep only the usernames, for use when I project the wallet data. I’m not sure what the better approach is; can the projection be more intelligent?
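For what it’s worth, the "smaller representation of users" you describe is a common answer: the wallet projection also subscribes to user events and maintains its own tiny username table. A minimal in-memory sketch in Python (all names illustrative, handlers standing in for whatever event-handler mechanism you use):

```python
class WalletProjection:
    def __init__(self):
        self.usernames = {}   # user_id -> username (local denormalized copy)
        self.wallets = {}     # wallet_id -> read model row

    def on_user_created(self, user_id, username):
        self.usernames[user_id] = username

    def on_username_changed(self, user_id, username):
        self.usernames[user_id] = username
        # keep already-projected wallet rows up to date as well
        for row in self.wallets.values():
            if row["owner_id"] == user_id:
                row["owner_name"] = username

    def on_wallet_created(self, wallet_id, owner_id):
        self.wallets[wallet_id] = {
            "owner_id": owner_id,
            "owner_name": self.usernames.get(owner_id, "<unknown>"),
        }
```

The duplication is deliberate: each read model owns everything it needs, so it never has to call into the User service at query time.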

Well, that’s a lot of text. I will end here, and I apologize if it is too chaotic to respond to properly (English is not my native language, as you can tell, and neither is the DDD language). I would, however, be grateful for any insights or thoughts on the things I’ve mentioned, as well as links to online resources that could help me understand DDD a bit better. Also, if you strongly think my understanding lacks critical parts of DDD, please say so, because I don’t know what I don’t know.

Thank you so much in advance

Hi,

  3. Read model based on different bounded contexts

You can open an event stream to the user you are interested in with: eventStore.readEvents(walletCreatedEvent.userId)
Make sure that when using events from another BC you depend only on the schema, not on the implementation, to minimize coupling (unless it is a shared kernel or something); you are probably not even interested in all of the fields anyway. For example, create your own classes to deserialize into, like data class UserCreatedEvent(val userId: String, val userName: String). In this case you also need to take care of snapshots (or you can use readEvents(String aggregateIdentifier, long firstSequenceNumber), which doesn’t use snapshots but is slower when the event stream is big), and also of follow-up events, like UserNameChangedEvent in this case, or whatever else you need from the event stream to reconstruct the user’s state.
Small remark: events use the eventSerializer, snapshots don’t, so when only the eventSerializer is set to Jackson, snapshots will still be using XStream; that’s something I immediately ran into when I first used this.
Also, if there are schema changes in the other BC’s events that might affect the projection consuming them, you need to take care of those too (and since you’re going to be using your own implementation classes, you can’t just reuse existing upcasters).

How I use this approach currently: in the projection’s event handler for WalletCreatedEvent, I open an event stream for the user, reconstruct the current state, and store the data in the wallet read model. Dedicated event handlers for UserNameChangedEvent etc. are also necessary in the wallet projection, since from that point on you want to track those changes. However, you can use if (!replayStatus.isReplay) there, because in case of a replay the logic in the creation listener takes care of everything: the event stream it opens contains the latest data, so there is no need to also call these listeners afterwards.
I reconstruct the desired state from the event stream manually, but there is also the AnnotationEventHandlerAdapter class to convert an @EventHandler-annotated bean into something that can handle event messages directly; you can take a look at that too, though I’ve never tried that approach.
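The approach can be sketched generically like this (Python, where read_events is a hypothetical stand-in for eventStore.readEvents and the dict-shaped events are purely illustrative): on WalletCreated, the projection folds the user’s own stream down to the latest username.

```python
def current_username(event_store, user_id):
    """Fold the user's event stream down to the latest username."""
    username = None
    for event in event_store.read_events(user_id):   # hypothetical API
        if event["type"] in ("UserCreated", "UserNameChanged"):
            username = event["username"]
    return username

def on_wallet_created(event_store, read_model, wallet_id, user_id):
    # reconstruct the user's current state from its own stream, then
    # denormalize the result into the wallet read model row
    read_model[wallet_id] = {
        "owner_id": user_id,
        "owner_name": current_username(event_store, user_id),
    }
```

Because the stream always contains the latest data, this also explains why the replay case needs no extra handling in the change listeners.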
This way projections remain independent, which is always desirable; coupling is minimal, and the performance impact is also marginal in my experience, so I guess no rule of thumb is violated.
Axon folks will correct me if this is an incorrect approach; I’m also interested in whether there is a better way of doing this :slight_smile:

To also help you with some links (I have an overabundance of them):

Some related, but not Axon-specific, discussions regarding projection dependencies, how to listen to multiple event streams, etc.:
https://groups.google.com/forum/#!searchin/dddcqrs/projection$20dependencies|sort:date/dddcqrs/bxTopNy42gI/3QL5uw75gyUJ
https://groups.google.com/forum/#!searchin/dddcqrs/replay$20threads|sort:date/dddcqrs/OCLE_GihS-c/WB62IRe31wwJ
https://softwareengineering.stackexchange.com/questions/380160/how-to-model-domain-events-so-that-the-denormalizers-can-fill-highly-denormalize
https://stackoverflow.com/questions/41759311/in-ddd-cqrs-proper-design-on-syncing-read-model-with-multiple-aggregate-updat
https://stackoverflow.com/questions/53688339/cqrs-event-sourcing-projections-with-multiple-aggregates
https://stackoverflow.com/questions/40703348/handling-out-of-order-events-in-cqrs-read-side
https://blog.jonathanoliver.com/cqrs-out-of-sequence-messages-and-read-models/
https://eventstore.org/blog/20130309/projections-7-multiple-streams/index.html
https://stackoverflow.com/questions/47482906/cqrs-read-side-multiple-event-stream-topics-concurrency-race-conditions

A couple of hits from this mailing list regarding opening an event stream / AnnotationEventHandlerAdapter:

https://groups.google.com/forum/#!topic/axonframework/PMN6zQBOw88
https://groups.google.com/forum/#!topic/axonframework/ec4TFliF77M

https://groups.google.com/forum/#!topic/axonframework/JjdQDKdmy-s
https://groups.google.com/forum/#!topic/axonframework/Akh_1NFs7mU

I think it’s still a bit hard to collect real-world, production-grade examples/best practices regarding ES …

Regards

  2. Event mess.
    I have a lot of events. I mean, a lot.
    Every subtle change is recorded; I think that’s part of event sourcing. I try to make the events as intelligent as possible, but their number grows and grows, and sometimes it’s hard to wrap my head around them within clear boundaries. I typically try to hide the subtle events and only expose those with bigger meaning to the services that may be using them. Sometimes that requires firing two events from the same change: one just for the local aggregate context, saying for example that the Order was Validated, and a second one saying that some bigger step has just been completed, Order Placed.

I’m also trying to answer this one. You are basically talking about using integration events among BCs, which is right: you typically don’t want to expose domain events outside of the BC. However, regarding "sometimes it requires to fire two events from the same change": it’s important that publishing the integration event is not part of the aggregate. Use a dedicated listener to catch the domain event, transform it into whatever integration event you need, and then send out the integration event.
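The dedicated-listener idea can be sketched like this (Python, with illustrative event names; the publish callback is a stand-in for whatever message bus you use):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderValidated:      # domain event, private to the Order BC
    order_id: str

@dataclass(frozen=True)
class OrderPlaced:         # integration event, published to other BCs
    order_id: str

class IntegrationEventTranslator:
    """Sits outside the aggregate; the aggregate only emits domain events."""

    def __init__(self, publish):
        self.publish = publish    # e.g. a message-bus send function

    def on_domain_event(self, event):
        # only events with meaning outside the BC get translated;
        # everything else stays private to the aggregate's context
        if isinstance(event, OrderValidated):
            self.publish(OrderPlaced(order_id=event.order_id))
```

This keeps the aggregate emitting exactly one domain event per change, while the "bigger meaning" event is a translation concern at the BC boundary.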
Here is a talk from Allard that you might find intriguing: https://www.youtube.com/watch?v=n_IM3omQUyg&feature=youtu.be&t=1154
Some discussion about event modeling, especially integration events:
https://groups.google.com/forum/#!searchin/axonframework/coupling|sort:date/axonframework/fdw8MQi8WVc/1ubgvEpyAgAJ

About „Connecting two distinct Aggregates into another.“:
From the outside, aggregates look like entities.
They are identified by an id/key, represent state, and, like entities, might have relationships to other aggregates.

A relationship could be: ChatroomCreated - UserEnteredChatroom
This way, relationships between aggregates can be established the same way state would be changed.
Your example: AggregateC-AggregateA-RelationshipAdded, AggregateC-AggregateB-RelationshipAdded
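In that style, aggregate C can simply record its relationships to A and B as ordinary state-changing events, with no process manager involved. A minimal sketch (Python, illustrative names):

```python
class AggregateC:
    def __init__(self, c_id):
        self.id = c_id
        self.related = {}    # role -> id of the related aggregate
        self.events = []     # emitted events, the aggregate's ledger

    def add_relationship(self, role, other_id):
        # invariant: a given relationship is established only once
        if role in self.related:
            raise ValueError("relationship already established")
        self.related[role] = other_id
        self.events.append(("RelationshipAdded", self.id, role, other_id))
```

So the AggregateC-AggregateA-RelationshipAdded and AggregateC-AggregateB-RelationshipAdded events are just two calls to add_relationship, each guarded like any other state change.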

A process manager or saga is only needed when there are several subsequent steps to be managed after a defined start event.
I hope that helps and that I didn’t misunderstand your question.

Best regards. J.