Hi Christophe,
A lot of interesting questions. First, a few observations:
- Exploring the event log directly from the source is not something I would recommend. For starters, you would have to adapt your event store for reading, or limit the usability of that feature. What if you want to search through the events? Surely, you can't do a "like" on a blob of serialized Java objects and/or JSON. You would also have to apply any future upcasters in the application that reads the events. To me, this is a very big smell.
As such, I would recommend the standard approach, which is to use a projection (i.e., a read-side model). I think one of your best bets would be to index your events using a full-text search engine like Elasticsearch (which is backed by Lucene), indexing fields like the aggregate identifier and the sequence number.
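To make the projection idea concrete, here is a minimal, framework-free sketch (the event and projection types are hypothetical illustrations, not Axon or Elasticsearch APIs): a handler consumes events and maintains a separate, searchable read model keyed by aggregate identifier and sequence number, so queries never touch the serialized event blobs.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical event type, for illustration only.
record TaskAddedEvent(String aggregateId, long sequenceNumber, String description) {}

// A projection: a read-side model updated from events and queried
// independently of the event store. A real implementation would push
// these entries into a search engine such as Elasticsearch.
class TaskSearchProjection {
    record Entry(String aggregateId, long sequenceNumber, String description) {}

    private final List<Entry> index = new ArrayList<>();

    // In Axon this would typically be an @EventHandler method on a listener.
    public void on(TaskAddedEvent event) {
        index.add(new Entry(event.aggregateId(), event.sequenceNumber(), event.description()));
    }

    // The query runs against the read model, never against raw event blobs.
    public List<Entry> search(String term) {
        return index.stream()
                .filter(e -> e.description().toLowerCase().contains(term.toLowerCase()))
                .toList();
    }
}
```

The point is the separation: the event store only ever appends, while all searching happens against this independently maintained structure.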
- Rewriting the event log should, in almost all circumstances (except perhaps for regulatory requirements), be a big no-no. The event log is the absolute source of truth. Messing with it can have very dire consequences, many of which may only reveal themselves when it is too late.
- Something I would also like to stress (and which is implicit in my two previous observations) is that the pure, unprojected events are not meant, by design, to be used as part of the read model. The entire premise of CQRS is the separation between read and write models. As such, you should not design your requirements for the event store based on how the data will be read, but rather on the latency, durability, scaling and storage requirements of your WRITE side (i.e., the C, for Command, in CQRS).
- Although I am by no measure an expert in your domain, which means this observation could be entirely wrong, it seems strange to me that the project would be THE aggregate root. Since the write side needs to re-create the aggregate every time a command is processed, having very large aggregate roots is not recommended. Even though most of the entities in your application are probably meaningless without the "Project", which, according to canonical DDD, is a big hint that they should be part of the "Project" aggregate, modelling purity must be balanced with effectiveness.
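To illustrate why very large aggregates hurt, here is a bare-bones sketch of rehydration (the event and aggregate types are made up, not Axon classes): an event-sourced aggregate is rebuilt by replaying its entire history before each command, so the cost of every command grows with the size of that history.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical event hierarchy, for illustration only.
interface ProjectEvent {}
record TaskAdded(String taskName) implements ProjectEvent {}

// A simplified event-sourced aggregate: before handling any command,
// the framework must replay the full event history to restore state.
class Project {
    private final List<String> tasks = new ArrayList<>();

    static Project rehydrate(List<ProjectEvent> history) {
        Project project = new Project();
        for (ProjectEvent event : history) {
            project.apply(event);  // one apply per past event, every time
        }
        return project;
    }

    private void apply(ProjectEvent event) {
        if (event instanceof TaskAdded added) {
            tasks.add(added.taskName());
        }
    }

    int taskCount() {
        return tasks.size();
    }
}
```

If everything in the system hangs off one "Project" aggregate, that history loop runs over every event the project has ever seen, for every single command.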
And to answer a few of your questions:
- It is entirely possible to replay events, as long as your event store supports it. As far as I know, both the JPA and the Mongo implementations support it.
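Conceptually, a replay just walks the full log in order and hands each event to a handler, which can rebuild a projection from scratch. A minimal in-memory stand-in (the class and method names below are made up for illustration, not Axon's API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy stand-in for an event store that supports replay; real stores
// (JPA- or Mongo-backed) expose an equivalent "visit every event" operation.
class ReplayableEventStore {
    private final List<String> log = new ArrayList<>();

    void append(String event) {
        log.add(event);
    }

    // Hands every stored event, in order, to the handler, so a
    // projection can be rebuilt from an empty state.
    void replayAll(Consumer<String> handler) {
        log.forEach(handler);
    }
}
```

Since the log is the source of truth, any read model can always be thrown away and reconstructed this way.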
- Depending on your scaling requirements, which seem very small, I would probably recommend the Mongo event store, which requires almost no configuration out of the box and supports replay.
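For reference, wiring it up is short. The snippet below is an Axon 2-style sketch from memory; the exact class and constructor names vary by Axon version, so treat it as an assumption to check against the reference guide rather than copy-paste configuration:

```java
// Assumed Axon 2-era wiring; verify names against your Axon version's docs.
MongoClient mongoClient = new MongoClient("localhost", 27017);
MongoTemplate template = new DefaultMongoTemplate(mongoClient);
EventStore eventStore = new MongoEventStore(template);
```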
And a question of my own:
- What do you mean by "recreating" aggregate roots?
Hope this was helpful!