Hi all,
We’re using Axon Framework without Axon Server, currently with Postgres as our event store. The article “Why would I need a specialized Event Store?” says the following about Cassandra:
To guarantee sequence ordering in Cassandra you have to utilize its lightweight transaction feature which has considerable performance cost and, as described in the documentation, should be used sparingly.
and, regarding MongoDB, it states
MongoDB has recently announced support for multi-document transactions which somewhat mutes this point but this new feature currently comes with some limitations. We also have no easy method for pushing new events to clients so that we have optimal performance for processing events.
Finally, we run into trouble when generating sequence numbers for our events due to the eventual consistency in the cluster. MongoDB by design has no cluster-wide ACID transactions which would be required in order to achieve the sequencing guarantee.
Totally naive question, but I’ll ask nonetheless, because I think things may have changed since those articles were written. If we were to set our Cassandra consistency level to ALL, or set a MongoDB tag-based write concern whose tag value includes all nodes, would that provide enough isolation and consistency to use these databases to back an EventStore’s EventStorageEngine?
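For concreteness, here is a sketch of what I mean by those settings in the Java drivers. This is a configuration fragment, not a tested setup: the connection string, keyspace/collection names, CQL statement, and the "allNodes" tag set are all hypothetical, and the API names assume the DataStax Java driver 4.x and the MongoDB Java sync driver.

```java
// Sketch only: hypothetical names throughout; API calls assume
// DataStax Java driver 4.x and the MongoDB Java sync driver.
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.DefaultConsistencyLevel;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

class ConsistencySketch {

    void appendWithCassandraAll(CqlSession session) {
        // ALL: every replica must acknowledge the write before it returns.
        SimpleStatement stmt = SimpleStatement
                .newInstance("INSERT INTO events (aggregate_id, seq, payload) VALUES (?, ?, ?)")
                .setConsistencyLevel(DefaultConsistencyLevel.ALL);
        session.execute(stmt);
    }

    MongoCollection<Document> eventsWithAllNodesWriteConcern() {
        // "allNodes" is a hypothetical tag set defined in the replica-set
        // configuration; the write concern waits for every tagged member.
        return MongoClients.create("mongodb://localhost")
                .getDatabase("axon")
                .getCollection("domainevents")
                .withWriteConcern(new WriteConcern("allNodes"));
    }
}
```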
Further, if we were to run Postgres itself as a cluster, it seems the same consistency-versus-availability tradeoff enumerated above would apply, per the CAP theorem.
As I see it, the performance costs of a clustered database exist regardless of which database you pick; if you need a cluster, you have to pay them somewhere. Fortunately, the problem is lessened by the event store’s append-only semantics: as long as you can decide which of two concurrent events happened before the other, you’re fine. At nanosecond granularity, the chance of truly coincident events (two events occurring within the same nanosecond) is vanishingly small in practice. The remaining issue is ensuring that processing of a subsequent event on an aggregate waits until the prior event has been committed.
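To make that last point concrete, here is a minimal in-memory sketch (my own illustration, not Axon’s implementation) of the per-aggregate sequencing guarantee an event store needs: an append succeeds only if it carries the next sequence number for that aggregate, so of two concurrent writers exactly one wins and the other detects the conflict.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: per-aggregate optimistic sequencing. A real store would enforce
// this with a unique constraint on (aggregateId, sequenceNumber); the map
// here is just an in-memory stand-in for illustration.
public class SequenceGuard {
    private final Map<String, Long> lastSeq = new ConcurrentHashMap<>();

    /** Atomically append event number newSeq; returns false on a conflict. */
    public boolean tryAppend(String aggregateId, long newSeq) {
        if (newSeq == 0) {
            // First event for this aggregate: succeeds only if none exists yet.
            return lastSeq.putIfAbsent(aggregateId, 0L) == null;
        }
        // Succeeds only if the previous event (newSeq - 1) is the latest one.
        return lastSeq.replace(aggregateId, newSeq - 1, newSeq);
    }

    public static void main(String[] args) {
        SequenceGuard guard = new SequenceGuard();
        System.out.println(guard.tryAppend("order-1", 0)); // true: first event
        System.out.println(guard.tryAppend("order-1", 1)); // true: next in sequence
        System.out.println(guard.tryAppend("order-1", 1)); // false: concurrent writer lost
    }
}
```

The losing writer can then reload the aggregate and retry, which is exactly the behavior you want instead of silently interleaving two histories.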
I’d appreciate any input here.
Thanks,
Matthew