JdbcSagaStore / JdbcTokenStore Configuration Issue


I am trying to set up a JdbcSagaStore / JdbcTokenStore for my command side and discovered the following WARNING in the logs for the Sagas:

Fetch Segments for Processor ‘MySagaProcessor’ failed: Failed to create a SQL statement. Preparing for retry in 2s

I tried to track down the problem and found out via code inspection that the Jdbc*Stores do not create the schema in the database. At least for the JdbcTokenStore there is a remark in the Javadoc that the token table is not created and needs to exist beforehand. For the JdbcSagaStore I did not find any hint in the Javadoc or the Reference Guide. I filed an issue for the reference guide to include a hint for this (https://github.com/AxonIQ/reference-guide/issues/49).

This behaviour is clear to me and seems to be intended, and I also know how to fix my configuration / application. Nevertheless, I want to ask the following two questions:

  • Is there a recommended way to initialize the schema? (If not, I would go with liquibase, since it is already included with Spring.)

  • Do I only need to create the table for the JdbcTokenStore without the columns, as suggested in the Javadoc of JdbcTokenStore, or do I need to create the full schema? (The same question applies to the JdbcSagaStore.)


Our environment:

  • Spring Boot 2.0.6

  • Axon Server 4.0.2

  • for the JdbcSagaStore, JdbcTokenStore and the QuartzDeadlineManager we use a local file-based HSQLDB (2.4.1)

Thanks and best Regards,

Since I encountered compatibility issues between the SQL schemas of JdbcTokenStore / JdbcSagaStore and HSQLDB, I gave the JpaTokenStore and JpaSagaStore a go. Since the Jpa*Stores rely on the right Hibernate dialect (in this case for HSQL), this works better.

I am now creating the schemas for both stores with liquibase and have turned off Hibernate's automatic DDL generation. If anyone is interested, I can provide the HSQLDB liquibase migrations for reference.

Best Regards,


The framework provides code to init the JDBC tables. That code doesn’t drop existing data, so it’s safe to call during your app’s initialisation.

Here’s the quick-and-dirty way we configure (using Spring+Kotlin) the sagaStore and tokenStore:

fun sagaStore(connectionProvider: ConnectionProvider, serializer: Serializer): SagaStore<Any> {
    val jdbcSagaStore = JdbcSagaStore(connectionProvider, PostgresSagaSqlSchema(), serializer)
            .also {
                try {
                    it.createSchema()
                } catch (e: SQLException) {
                    // PostgresSagaSqlSchema issues an unguarded CREATE TABLE,
                    // so this fails harmlessly once the tables already exist
                    logger.info("ignoring $e")
                }
            }

    // use the JCache API directly to find the managed caches
    val assocCache = Caching.getCachingProvider().cacheManager.getCache<Any, Any>(assocCacheName)
    val sagaCache = Caching.getCachingProvider().cacheManager.getCache<Any, Any>(sagaCacheName)
    return if (assocCache == null || sagaCache == null)
        jdbcSagaStore
                .also {
                    logger.warn("managed caches not found: $assocCacheName $sagaCacheName")
                    logger.info("using JdbcSagaStore without caching")
                }
    else CachingSagaStore(jdbcSagaStore, JCacheAdapter(assocCache), JCacheAdapter(sagaCache))
            .also { logger.info("using CachingSagaStore over JdbcSagaStore") }
}

fun jdbcTokenStore(connectionProvider: ConnectionProvider, serializer: Serializer) =
        JdbcTokenStore(connectionProvider, serializer)
                .also { it.createSchema(PostgresTokenTableFactory()) }
                .also { logger.info("using JdbcTokenStore") }

As you point out, we’ll need to switch to a liquibase migration if the table structures ever change.

If you use axon-server you shouldn’t need to worry about maintaining these tables?!

Hope this helps!

Hi Jakob, Steven,

Thanks for pointing that out Steven, very helpful!

And as a side note in regards to your last comment:

If you use axon-server you shouldn’t need to worry about maintaining these tables?!

Axon Server only stores the events and snapshots.
Your sagas, association values and tokens will still live in your own application's database.



As Steven (van Beelen) pointed out, in my case, using Axon Server, it is vital to have a database for the tokens and sagas in my application; otherwise the application starts from scratch on every restart.
The documentation also points out that Axon 4.0 defaults to an InMemoryTokenStore which is not recommended for production (see https://docs.axoniq.io/reference-guide/1.3-infrastructure-components/event-processing#token-store).

Regarding migrations, we used liquibase from the beginning, because I wanted to avoid relying on the schema initialization being attempted and failing on every application start.

We now use the JpaTokenStore and have configured Spring Boot with

spring.jpa.hibernate.ddl-auto=validate

to check that the schema generated by liquibase matches the schema expected by Axon. Thus we would also safely detect problems when upgrading Axon, since on the first startup Hibernate validation will fail if the schemas are incompatible. Then we could generate the respective liquibase migrations (e.g. with the Maven liquibase plugin).

Best Regards,

Hello again,

You can use JDBC as above without losing your data on every restart.

The SQL in GenericTokenTableFactory uses CREATE TABLE IF NOT EXISTS.
The SQL in PostgresSagaSqlSchema uses an unguarded CREATE TABLE statement, but your database will refuse to create the table if it already exists, which explains why the SQLException is logged and ignored.
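That create-then-ignore step can be factored into a small helper. This is just a sketch; `runIgnoringSqlException` is a hypothetical name of ours, not an Axon API:

```kotlin
import java.sql.SQLException

// Hypothetical helper: runs the block and swallows only SQLException,
// mirroring the "unguarded CREATE TABLE fails when the table exists" case.
// Any other exception still propagates.
inline fun runIgnoringSqlException(onIgnore: (SQLException) -> Unit = {}, block: () -> Unit) {
    try {
        block()
    } catch (e: SQLException) {
        onIgnore(e) // e.g. log and continue
    }
}

fun main() {
    var ignored = false
    // simulate the CREATE TABLE failing because the table already exists
    runIgnoringSqlException(onIgnore = { ignored = true }) {
        throw SQLException("relation \"SagaEntry\" already exists")
    }
    println("ignored=$ignored") // prints ignored=true
}
```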

So the real issue is that if the table definitions ever change, you are responsible for migrating the schema of your production data.
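If that ever happens, a liquibase changeset is one way to version the migration. A minimal sketch; the column change below is purely illustrative (check the actual schema shipped with your Axon version before writing a real migration):

```xml
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.5.xsd">

    <!-- illustrative only: widen the serialized token column after an upgrade -->
    <changeSet id="axon-upgrade-token-column" author="app">
        <modifyDataType tableName="TokenEntry" columnName="token" newDataType="BLOB"/>
    </changeSet>
</databaseChangeLog>
```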

Also, thanks SvB for the correction/clarification re Axon Server. We are still using 3.4.


We chose to create 2 separate DBs and use different DB technologies for each:

  1. Axon’s JDBC support for Axon managed entities: EventStorage, Saga, Token
  2. Spring Data JPA for our query model

We use Sagas frequently and so far JDBC+JCache appears to be performing well.

Configuring multiple database connections and Spring transaction managers, and wiring everything up correctly, took a couple of hours - thankfully the Spring docs are helpful.
However, it does appear to be trickier and more error prone than it should be.
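For reference, the split looks roughly like this in configuration. The `axon.datasource.*` prefix is our own invention: Spring Boot auto-configures only one DataSource, so the second one has to be bound manually via `@ConfigurationProperties` in your own `@Configuration` class:

```properties
# auto-configured DataSource: Spring Data JPA query model
spring.datasource.url=jdbc:postgresql://localhost:5432/query_model
spring.datasource.username=app

# custom prefix (illustrative), bound to a second DataSource bean
# for the Axon-managed tables: EventStorage, Saga, Token
axon.datasource.url=jdbc:postgresql://localhost:5432/axon
axon.datasource.username=app
```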

Does anyone have any production experience they can share about Axon's JDBC support?

Thanks again