JdbcEventStorageEngine much slower than JpaEventStorageEngine


I just tested using the JdbcEventStorageEngine with the axon-bank example and recognized
that events are much slower compared to the JpaEventStorageEngine.

In my case it was 500ms instead of 30ms per event. I think such a big difference can only point
to a basic problem, maybe with the connection pooling. But as far as I can tell the setup uses
the Tomcat connection pool and looks fine.

Anyone else had such issues?


What I’m basically doing is:

public void testMassiveModifySubAccount() {
    logger.info("Datasource is {}", dataSource);

    String bankAccountId = "MyBankAccountId2";

    // modify this big aggregate a lot of times
    long startTime = System.currentTimeMillis();
    Random random = new Random();
    for (int i = 0; i < maxModifications; i++) {
        commandGateway.sendAndWait(new AdjustSubAccountBalanceInCentsCommand(
                bankAccountId,
                random.nextInt(AxonBankApplicationCreationITest.MAX_SUB_ACCOUNT_TO_CREATE),
                random.nextInt(1000)));
    }

    long stopTime = System.currentTimeMillis();
    logger.info("Modifying {} subaccounts took {}ms on average",
            maxModifications, (stopTime - startTime) / maxModifications);
}

It turns out that the slow operation is loading the (big) aggregate:

Aggregate<BankAccount> bankAccountAggregate = repository.load(command.getBankAccountId());

The JPA caching seems to make the JPA engine much faster.
Is there no caching implemented when using the JdbcEventStorageEngine?


Hi Dirk,

Neither implementation makes explicit use of caches. The JPA cache probably improves performance significantly because you’re in a benchmark situation.
In Axon, caching can be applied on the Repository level.
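For reference, applying a cache at the Repository level could look roughly like this in Axon 3. This is a minimal sketch, assuming Axon 3’s `CachingEventSourcingRepository` and `WeakReferenceCache`; the bean method name and the `BankAccount` factory wiring are illustrative, not taken from the axon-bank example:

```java
import org.axonframework.commandhandling.model.Repository;
import org.axonframework.common.caching.Cache;
import org.axonframework.common.caching.WeakReferenceCache;
import org.axonframework.eventsourcing.CachingEventSourcingRepository;
import org.axonframework.eventsourcing.GenericAggregateFactory;
import org.axonframework.eventsourcing.eventstore.EventStore;

public class RepositoryConfig {
    // Wrap the event-sourcing repository in a cache so a hot aggregate is
    // served from memory instead of being re-hydrated from its full event
    // stream on every command.
    public Repository<BankAccount> bankAccountRepository(EventStore eventStore) {
        Cache cache = new WeakReferenceCache();
        return new CachingEventSourcingRepository<>(
                new GenericAggregateFactory<>(BankAccount.class), eventStore, cache);
    }
}
```

With this in place, only the first load of an aggregate replays its events; subsequent commands against the same (still-referenced) aggregate hit the cache.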



Hi Allard,

after digging a bit more into it, I found that

protected TrackedEventData<?> getTrackedEventData(ResultSet resultSet,
                                                  TrackingToken previousToken) throws SQLException {

is pretty slow. Inside this method, the following line causes the pain:

trackingToken = GapAwareTrackingToken.newInstance(globalSequence, LongStream
    .range(Math.min(lowestGlobalSequence, globalSequence), globalSequence).mapToObj(Long::valueOf)
    .collect(Collectors.toCollection(TreeSet::new)));

In my case I have a globalSequence that is at 90,000. The lowestGlobalSequence is configured as null and therefore falls back to the default of 1:
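To illustrate the cost, here is a small stand-alone sketch (plain JDK; the class and method names are mine) of the gap set that line builds. With the default lowestGlobalSequence of 1 and a store whose first event sits at sequence 90,000, every single read materializes a TreeSet of roughly 90,000 boxed Longs:

```java
import java.util.NavigableSet;
import java.util.TreeSet;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class GapSetCost {
    // Mirrors the token construction above: every sequence number from
    // lowestGlobalSequence up to (but excluding) globalSequence is treated
    // as a "gap" and collected into a sorted set.
    static NavigableSet<Long> gapsFor(long lowestGlobalSequence, long globalSequence) {
        return LongStream.range(Math.min(lowestGlobalSequence, globalSequence), globalSequence)
                .mapToObj(Long::valueOf)
                .collect(Collectors.toCollection(TreeSet::new));
    }

    public static void main(String[] args) {
        // Default lowestGlobalSequence = 1, first real event at 90,000:
        // ~90,000 boxed Longs are allocated and sorted per event read.
        System.out.println(gapsFor(1, 90_000).size());      // 89999
        // With lowestGlobalSequence set to the real minimum, no gaps at all.
        System.out.println(gapsFor(90_000, 90_000).size()); // 0
    }
}
```

That per-event allocation is consistent with the ~30ms per event observed below.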

return new JdbcEventStorageEngine(
    jacksonSerializer(), NoOpEventUpcaster.INSTANCE, sqlErrorCodesResolver(), 50,
    dataSource::getConnection, new SpringTransactionManager(platformTransactionManager), byte[].class, eventSchema(),
    null, null);

So in my case a huge set is created every time when doing:

protected DomainEventData<?> getDomainEventData(ResultSet resultSet) throws SQLException {
    return (DomainEventData<?>) getTrackedEventData(resultSet, null);
}

This takes about 30ms for each event, which causes the poor performance of the JdbcEventStorageEngine.
What am I missing? Surely this huge set shouldn’t be created for every single event?


Hi Dirk,

That method should only be used when opening a stream from the beginning. As soon as you continue reading from a stream, passing in a token, that token is used to calculate the next token’s state.
If the lowest globalSequence in your database is 90,000, I’d strongly recommend setting that value as the ‘lowestGlobalSequence’ in your EventStorageEngine configuration. That basically tells Axon that the missing values below 90,000 aren’t a ‘gap’. They’re simply not there.

We do have plans to optimize the identification of ‘gaps’ using ranges. So instead of reporting each gap individually, a larger range can be reported as a single entry.

By the way, which Axon version do you use? In my codebase, the getDomainEventData method is different: it doesn’t call getTrackedEventData. That was changed in January this year (probably released as part of 3.0.3 or maybe 3.0.2).



Hi Allard,

thanks for the hint. I was indeed working with an older version of the JdbcEventStorageEngine. Switching to v3.0.4 seems to fix the performance problems we had.
Again - thanks for your help!