I’ve been trying to teach myself some of the DDD concepts, along with event sourcing and CQRS.
I’ve made a million bad decisions over the past months that I was able to identify myself and change my design to fit a little better. I’ve been trying to wrap my head around what gurus in the DDD community such as Eric Evans, Greg Young, Martin Fowler, and Udi Dahan mean when they talk about DDD. Some of their thoughts are simple enough to understand; others are too broad to apply to the specific context of my problems.
Now I’m stuck, and I understand that I’m probably about to repeat a question that has been asked before. Funnily enough, though, having someone answer my version of it helps a lot, not just by saving the time I would otherwise spend researching it, but also because direct criticism of my claims is powerful.
I will try my best to list a few assumptions my brain makes while programming, and I would really appreciate it if anyone could point out problems I might run into if I keep following them. It’s a scenario where I don’t know whether my thinking is valid.
- Aggregate

I think of an aggregate as a process whose sole purpose is to go from point A to point B; its lifecycle is driven by the commands sent to it, be they method invocations or command handlers.
Each action on such an aggregate is guarded by invariants/rules that can be asserted within its boundary; everything the command handler needs to validate the state is inside the aggregate.
Each action emits events, which are the ledger of the aggregate’s lifecycle; they are what mutate the aggregate’s state.
I think of an aggregate as an Order: an order can be Created, Validated, Completed. These are the steps the whole Order needs to go through to be useful.
I think of an aggregate as a Wallet: a wallet can be Created, Credited, Debited. Each of those steps is guarded by an invariant: you cannot create two wallets with the same id, you cannot credit the wallet twice for the same transaction, and you cannot debit the wallet when it does not have enough funds.
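To make my Wallet description concrete, here is a minimal sketch of what I mean, with hypothetical event and command names (WalletCredited, seen_transactions, etc. are my own inventions, not from any framework): commands check the invariants, then record events, and only the events mutate state.

```python
from dataclasses import dataclass

# Hypothetical event types for the Wallet example above.
@dataclass(frozen=True)
class WalletCredited:
    transaction_id: str
    amount: int

@dataclass(frozen=True)
class WalletDebited:
    transaction_id: str
    amount: int

class Wallet:
    """Event-sourced aggregate sketch: command handlers guard invariants
    and emit events; applying events is the only way state changes."""

    def __init__(self):
        self.balance = 0
        self.seen_transactions = set()
        self.pending_events = []

    # -- command handlers: guard invariants, then record an event --
    def credit(self, transaction_id, amount):
        if transaction_id in self.seen_transactions:
            raise ValueError("cannot credit twice for the same transaction")
        self._record(WalletCredited(transaction_id, amount))

    def debit(self, transaction_id, amount):
        if transaction_id in self.seen_transactions:
            raise ValueError("cannot debit twice for the same transaction")
        if amount > self.balance:
            raise ValueError("cannot debit without enough funds")
        self._record(WalletDebited(transaction_id, amount))

    # -- event application: the ledger mutates the state --
    def _record(self, event):
        self.apply(event)
        self.pending_events.append(event)

    def apply(self, event):
        if isinstance(event, WalletCredited):
            self.balance += event.amount
        elif isinstance(event, WalletDebited):
            self.balance -= event.amount
        self.seen_transactions.add(event.transaction_id)

wallet = Wallet()
wallet.credit("tx-1", 100)
wallet.debit("tx-2", 30)
print(wallet.balance)  # 70
```

Replaying `pending_events` through `apply` on a fresh instance rebuilds the same state, which is how I understand rehydration from the event stream to work.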
- Entity

I think an entity is just a way to separate a big aggregate into sub-processes: when I have a big aggregate, I can distinguish a few parts of it that can be split off and handled in their own way, which spreads the logic across multiple parts that share consistent state driven by the aggregate’s events. They are all persisted together.
- Process Manager
I think a process manager is a component that automates the aggregate: every time an Order is created, it should listen for the creation event, do its best to validate that Order, and send a command back confirming that it was indeed validated.
I think of the process manager as a mutable state machine whose state is computed similarly to the aggregate’s; the difference is that the events it receives can come from other aggregates.
For example, it collects (aggregates) various data from the surrounding aggregates, so by the time something interesting such as OrderCreated is handled, it already has the data to say whether that order is valid.
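This is roughly how I picture it, as a sketch with hypothetical event and command names (ProductListed, ValidateOrder, and the injected `dispatch` callback are all my own assumptions): the manager accumulates state from one aggregate’s events so it can react to another aggregate’s event.

```python
class OrderValidationManager:
    """Stateful process manager sketch: builds up state from several
    aggregates' events, then dispatches a command when OrderCreated arrives."""

    def __init__(self, dispatch):
        self.dispatch = dispatch          # callback: command -> None
        self.known_products = set()       # state built from Catalog events

    def handle(self, event_type, payload):
        if event_type == "ProductListed":            # from a Catalog aggregate
            self.known_products.add(payload["product_id"])
        elif event_type == "OrderCreated":           # from the Order aggregate
            if payload["product_id"] in self.known_products:
                self.dispatch(("ValidateOrder", payload["order_id"]))
            else:
                self.dispatch(("RejectOrder", payload["order_id"]))

sent = []
pm = OrderValidationManager(sent.append)
pm.handle("ProductListed", {"product_id": "p-1"})
pm.handle("OrderCreated", {"order_id": "o-1", "product_id": "p-1"})
pm.handle("OrderCreated", {"order_id": "o-2", "product_id": "p-9"})
print(sent)  # [('ValidateOrder', 'o-1'), ('RejectOrder', 'o-2')]
```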
- Stateless Process Manager
It behaves similarly to the previous process manager, but it does not persist any state: it reacts to events emitted by an aggregate by dispatching the next command to that aggregate or another one, where the target aggregate is known from the event payload.
I have a lot of those in my application; most of them drive the automation of a specific aggregate. For example, when some aggregate has finished one part of its process, the stateless process manager sends a command telling it to continue with the next part.
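Because there is no persisted state, I think of these as pure event-to-command policies. A sketch, again with hypothetical names (OrderValidated, CompleteOrder, etc.): the target aggregate id comes straight out of the event payload.

```python
def order_policy(event_type, payload):
    """Stateless process manager sketch: map each event to the next
    command (or None); no state is kept between calls."""
    if event_type == "OrderValidated":
        # drive the Order aggregate to its next step
        return ("CompleteOrder", payload["order_id"])
    if event_type == "OrderCompleted":
        # hand over to a different aggregate named in the payload
        return ("CreditWallet", payload["wallet_id"], payload["amount"])
    return None

print(order_policy("OrderValidated", {"order_id": "o-1"}))
# ('CompleteOrder', 'o-1')
```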
- Connecting two distinct Aggregates into another.
Sometimes I have two aggregates that do not know of each other, but when I introduce a new feature I would like to create another aggregate once both of them have emitted specific events.
For example, when I have aggregates A and B, I would like to create a new aggregate C that drives some process involving both A and B.
The problem is that I do not understand how I would process the events and compose a state sufficient to create such an aggregate. From my understanding, when I use process managers I need to start the process and assign it a correlation id based on some event, and then succeeding events are only accepted when they carry that correlation id. So when I start a process manager by handling an A event, I do not know the filter that would match the B event.
My understanding is that I’m missing a process manager capability to accept any of the events and persist some state; every process manager with persistent state that I’ve read about used a correlation id, so it only accepted events it had previously predicted.
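One workaround I can imagine, sketched below under the assumption that both events already carry some shared business key (I use a hypothetical `customer_id`; the event names AHappened/BHappened and the CreateC command are also invented): instead of a correlation id minted at process start, key the process manager instances on that shared key, let whichever event arrives first create the instance, and dispatch the command that creates C once both halves are present.

```python
class JoinAandB:
    """Process manager sketch that joins events from two unrelated
    aggregates on a business key, in either arrival order."""

    def __init__(self, dispatch):
        self.dispatch = dispatch
        self.partial = {}  # customer_id -> partially collected state

    def handle(self, event_type, payload):
        key = payload["customer_id"]
        state = self.partial.setdefault(key, {})
        if event_type == "AHappened":
            state["a"] = payload["a_id"]
        elif event_type == "BHappened":
            state["b"] = payload["b_id"]
        if "a" in state and "b" in state:
            self.dispatch(("CreateC", state["a"], state["b"]))
            del self.partial[key]   # the join is done; drop the partial state

sent = []
pm = JoinAandB(sent.append)
pm.handle("BHappened", {"customer_id": "c-1", "b_id": "b-7"})  # B may come first
pm.handle("AHappened", {"customer_id": "c-1", "a_id": "a-3"})
print(sent)  # [('CreateC', 'a-3', 'b-7')]
```

If no such shared key exists, I suppose the process manager has to subscribe to one event type unconditionally and persist what it learns until the other side shows up, which is exactly the "accept any event and persist some state" capability I feel I am missing.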
- Event mess.
I have a lot of events. I mean, a lot.
Every subtle change is recorded; I think that’s part of event sourcing, and I try to make the events as intelligent as possible, but the number grows and grows, and it is sometimes hard to wrap my head around them within clear boundaries. I typically try to hide the subtle events and only expose those with bigger meaning to the services that may be using them. Sometimes this requires firing two events from the same change: one just for the local aggregate context that says, for example, Order Validated, and a second one that says a bigger part has just been completed, Order Placed.
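The split I describe between subtle local events and bigger public ones could be sketched as a translation step at the boundary of the context (all names here, OrderValidated, OrderPlaced, and the `publish` helper, are hypothetical): fine-grained events stay in the aggregate’s own stream, and only a mapped subset crosses into other services.

```python
# internal event name -> public event name; everything else stays local
PUBLIC = {"OrderValidated": "OrderPlaced"}

def publish(internal_events):
    """Filter/translate an aggregate's internal ledger into the
    coarser public events other services are allowed to see."""
    out = []
    for name, payload in internal_events:
        if name in PUBLIC:
            out.append((PUBLIC[name], payload))
        # events not listed in PUBLIC remain inside the bounded context
    return out

print(publish([("OrderPriceRecalculated", {"order_id": "o-1"}),
               ("OrderValidated", {"order_id": "o-1"})]))
# [('OrderPlaced', {'order_id': 'o-1'})]
```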
- Read model based on different bounded contexts
Recently I’ve realized that there are a lot of things I would like to include in a read model that already exist in other read models.
Read models are generated by projecting a stream of events, but some data the model should contain lives in a different bounded context. I have access to that context’s events, but the problem is that the data was produced before the other read model’s projection even started.
Let’s say a User was created with some username, and then the User’s wallet was created. I would like to include the username of the wallet’s owner, but I cannot make the wallet projection start on UserCreated, because that doesn’t really make sense. I would need to somehow ‘cache’ the username for a specific player id and then retrieve it when I project the wallet read model. This creates duplication, a kind of bookkeeping of the data, because I do not have access to the repository of the User service where I could grab the username. So I create a smaller representation of users inside the Wallet service, where I store only the usernames as I project the wallet data. I’m not sure what the better approach is; can the projection be more intelligent?
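This is the shape of the "username cache" I ended up with, as a sketch with hypothetical event names (UserCreated, WalletCreated) and fields: the wallet projector subscribes to both streams, UserCreated only feeds a small local lookup table, and WalletCreated produces the denormalized rows, enriched from that table.

```python
class WalletReadModel:
    """Projection sketch that keeps a local copy of just the User data
    it needs, instead of reaching into the User service's repository."""

    def __init__(self):
        self.usernames = {}   # user_id -> username (local cache)
        self.rows = {}        # wallet_id -> denormalized read-model row

    def project(self, event_type, payload):
        if event_type == "UserCreated":       # from the User context
            self.usernames[payload["user_id"]] = payload["username"]
        elif event_type == "WalletCreated":   # from the Wallet context
            owner = payload["owner_id"]
            self.rows[payload["wallet_id"]] = {
                "owner": owner,
                # fall back if the UserCreated event hasn't been seen yet
                "username": self.usernames.get(owner, "<unknown>"),
            }

rm = WalletReadModel()
rm.project("UserCreated", {"user_id": "u-1", "username": "alice"})
rm.project("WalletCreated", {"wallet_id": "w-1", "owner_id": "u-1"})
print(rm.rows["w-1"]["username"])  # alice
```

As far as I understand, this duplication is usually considered acceptable in CQRS, since each bounded context keeps its own copy of only the data it needs; but I would like to hear whether there is a smarter way.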
Well, that’s a lot of text. I will end here, and I apologize if it is too chaotic to respond to properly (English is not my native language, as you can tell, and neither is the DDD language). I would, however, be grateful for any insight or thought on the things I’ve mentioned, as well as links to online resources that could help me understand DDD a bit better. Also, if you strongly think my understanding is missing critical parts of DDD, please say so, because I don’t know what I don’t know.
Thank you so much in advance