Truly Caching Repository

I hope to bounce an idea off the mailing list and receive some criticism.

I want to mimic online trading with event sourcing. Such a domain has clearly defined bounded contexts, and event sourcing fits like a glove for messaging, auditing, etc.

So I set up a very simple first stab at this idea: an Item and commands to Release a Bid and Release an Offer. All quite simple. But then I wanted to see how many orders I could process with this absolutely simple system.

The first test was to fire 500 bids, and the total time was less than half a second.

I then extended the test to fire 500 offers after the 500 bids. Performance went down the toilet, taking up to 3 seconds.

Then it dawned on me that, as I dispatched more and more order commands, I was constantly replaying every previously released order so the AggregateRoot could rebuild its state.

Snapshots don't really help when I'm slamming the system with orders. Plus, I need all the orders in memory for matching, so their lists are required anyway.

While I plan to use a JPA-backed event store, what do you think about always keeping my Aggregate cached in memory? The aggregate really only needs to be in memory during the trading session. Then, when its Repository is asked to load it, the aggregate is already in memory and doesn't need to reprocess prior events.
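
To make it concrete, here is a rough sketch of the kind of thing I have in mind (the names here are made up for illustration, not Axon API): the repository replays events only on a cache miss and otherwise hands back the live instance.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch only, not Axon API: keep live aggregates in memory
// for the duration of the trading session so load() doesn't replay events.
public class SessionCachingRepository<T> {

    private final ConcurrentMap<String, T> session = new ConcurrentHashMap<String, T>();
    private final EventSourcedLoader<T> fallback;

    public SessionCachingRepository(EventSourcedLoader<T> fallback) {
        this.fallback = fallback;
    }

    public T load(String aggregateId) {
        T aggregate = session.get(aggregateId);
        if (aggregate == null) {
            // Cache miss: rebuild from past events once, then keep the instance live
            aggregate = fallback.loadFromEvents(aggregateId);
            session.put(aggregateId, aggregate);
        }
        return aggregate;
    }

    public void endSession() {
        // Aggregates only need to live for the trading session
        session.clear();
    }

    // Hypothetical callback that performs the usual event-sourced rebuild
    public interface EventSourcedLoader<T> {
        T loadFromEvents(String aggregateId);
    }
}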

Thanks,
Randy

Hi Randy,

Did you try the CachingEventSourcingRepository? If you use the Axon namespace in Spring, you can simply configure a Cache to be used by the repository.
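
Roughly like this (attribute names written from memory, so please double-check the element and attribute names against the reference guide for your Axon version):

<!-- Sketch only: a repository with a cache reference; names may differ per version -->
<axon:event-sourcing-repository id="itemRepository"
                                aggregate-type="org.example.trading.Item"
                                event-bus="eventBus"
                                event-store="eventStore"
                                cache-ref="itemCache"/>

Here "itemCache" would be an ordinary Spring bean wrapping whatever cache implementation you use.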

Cheers,

Allard