Just wondering if anyone has experience replaying events to AsynchronousCluster? It all seems to work "fine", but since our event handling is generally slower than reading from the event store (which is the whole reason to make it asynchronous), the replay will get ahead of the cluster and happily read the entire database of events into RAM and crash before the cluster catches up. Or at least, that's what it looks like to me because I can't see any kind of throttle or bounded BlockingQueue being configurable in AsynchronousCluster.
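For context, this is roughly how we have the replay wired up today (simplified; the constructor arguments are from memory so they may not be exact, and eventStore / transactionManager come from our Spring config):

import java.util.concurrent.Executors;

import org.axonframework.eventhandling.async.AsynchronousCluster;
import org.axonframework.eventhandling.async.SequentialPerAggregatePolicy;
import org.axonframework.eventhandling.replay.BackloggingIncomingMessageHandler;
import org.axonframework.eventhandling.replay.ReplayingCluster;

// Async cluster: a fixed pool of handler threads, ordered per aggregate.
AsynchronousCluster asyncCluster = new AsynchronousCluster(
        "replayCluster",
        Executors.newFixedThreadPool(8),
        new SequentialPerAggregatePolicy());

// Replay wrapper around it; eventStore is our customized EventStoreManagement.
ReplayingCluster replayingCluster = new ReplayingCluster(
        asyncCluster,
        eventStore,
        transactionManager,
        100,                                      // commit threshold (placeholder value)
        new BackloggingIncomingMessageHandler());

// Handlers are subscribed to replayingCluster elsewhere; then:
replayingCluster.startReplay();

The thread calling startReplay() does the reading, and nothing seems to slow it down when the async handlers fall behind.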
Maybe I'm misunderstanding AsynchronousCluster, so two questions: why does it use a custom scheduler instead of some kind of Disruptor/ring buffer (which would be bounded on the input side)? And is there a way to control the "batch size" for the async transactions?
As a workaround, I was thinking of customizing ReplayingCluster and/or EventStoreManagement to subscribe to the cluster itself, periodically publish a fake "flush" event, and block until that event is seen coming through the cluster. I haven't tried this yet -- I was hoping for an opinion before customizing too much.
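Roughly what I have in mind, completely untested -- FlushMarker and the listener are made-up names, and I'm assuming the marker would be published straight to the underlying AsynchronousCluster so it isn't backlogged by the replay wrapper:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.axonframework.domain.EventMessage;
import org.axonframework.eventhandling.EventListener;

// Fake event used purely for synchronization; carries no data.
public class FlushMarker {
}

// Subscribed to the same cluster as the real handlers. The replay thread arms
// the latch, publishes a FlushMarker, and blocks until the marker has worked
// its way through the cluster's async queue.
public class FlushMarkerListener implements EventListener {

    private volatile CountDownLatch latch = new CountDownLatch(0);

    public void expectFlush() {
        latch = new CountDownLatch(1);
    }

    public boolean awaitFlush(long timeout, TimeUnit unit) throws InterruptedException {
        return latch.await(timeout, unit);
    }

    @Override
    public void handle(EventMessage event) {
        if (event.getPayload() instanceof FlushMarker) {
            latch.countDown();
        }
    }
}

And then in the read loop, every N events, something like (GenericEventMessage.asEventMessage just wraps the payload):

flushListener.expectFlush();
asyncCluster.publish(GenericEventMessage.asEventMessage(new FlushMarker()));
if (!flushListener.awaitFlush(5, TimeUnit.MINUTES)) {
    throw new IllegalStateException("cluster didn't catch up within the timeout");
}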
We've already customized EventStoreManagement to use multiple threads for reading, since DB2 scales almost linearly for reads from DOMAIN_EVENT_ENTRY. We can easily read 15k-20k events/second on our test hardware, versus roughly 1,000/second for a single read thread -- reading is almost entirely limited by I/O and batch size/latency. We were hoping to treat this as a kind of streaming fork/join: we "join" before publishing to the cluster (or rather, before calling the visitor), because we didn't want to reimplement all of the other nice features of AsynchronousCluster and ReplayingCluster, including SequentialPerAggregatePolicy, ReplayAware, exception handling/retry, and backlogging. But obviously the other way to go for us is a fully custom multithreaded replay stack. Our goal is to be able to replay millions of events in a reasonable amount of time; we don't have that many events in production yet, but we're exploring the limits before we do.
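For what it's worth, the read side looks roughly like this (heavily simplified sketch, not our actual code; EventBatchReader is a made-up interface over our DOMAIN_EVENT_ENTRY queries, the batch/thread numbers are placeholders, and the Axon imports are from memory):

import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.axonframework.domain.DomainEventMessage;
import org.axonframework.eventstore.EventVisitor;

public class ParallelReplayReader {

    private static final int BATCH_SIZE = 1000;           // rows per SELECT
    private static final int READER_THREADS = 8;          // DB2 scales ~linearly for us
    private static final int MAX_PENDING_BATCHES = 32;    // bounds how far reads run ahead

    // Made-up interface over our custom DOMAIN_EVENT_ENTRY queries.
    public interface EventBatchReader {
        // Returns the events starting at 'offset' in store order; empty when exhausted.
        List<DomainEventMessage> readBatch(long offset, int batchSize);
    }

    public void replay(final EventBatchReader reader, EventVisitor visitor) throws Exception {
        ExecutorService readers = Executors.newFixedThreadPool(READER_THREADS);
        // "Fork": batches are read concurrently. "Join": futures are queued in
        // submission order, so draining the queue hands events to the visitor in
        // store order. The bounded queue doubles as the throttle on RAM usage.
        BlockingQueue<Future<List<DomainEventMessage>>> inFlight =
                new ArrayBlockingQueue<Future<List<DomainEventMessage>>>(MAX_PENDING_BATCHES);
        try {
            long offset = 0;
            boolean exhausted = false;
            while (!exhausted || !inFlight.isEmpty()) {
                // Keep up to MAX_PENDING_BATCHES reads in flight.
                while (!exhausted && inFlight.remainingCapacity() > 0) {
                    final long batchOffset = offset;
                    inFlight.put(readers.submit(new Callable<List<DomainEventMessage>>() {
                        public List<DomainEventMessage> call() {
                            return reader.readBatch(batchOffset, BATCH_SIZE);
                        }
                    }));
                    offset += BATCH_SIZE;
                }
                // Hand the next batch (in order) to the visitor on this thread.
                List<DomainEventMessage> batch = inFlight.take().get();
                if (batch.isEmpty()) {
                    exhausted = true;
                } else {
                    for (DomainEventMessage event : batch) {
                        visitor.doWithEvent(event);
                    }
                }
            }
        } finally {
            readers.shutdown();
        }
    }
}

The bounded queue of futures is the only throttle on the read side, which works fine until the visitor hands everything straight to the async cluster -- at that point we're back to the unbounded backlog described above.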