It is recommended to store the tracking token in the same data store as your projection data. This allows you to either replay all events and re-create your projection (in-memory token store) or to continue where the projection left off the previous time (persisted token store).
Assume a µ-service architecture where some services don’t want to replay all events each time an instance starts. Each of those services maintains its own token store, so you might end up with a lot of database instances. If a service doesn’t have any projections or other data to persist, you need a database solely for the purpose of tracking.
Since this seems a bit weird to me I was wondering if I misunderstood some of the basic concepts while using Axon Server?
It depends a lot on what you need.
By default a streaming event processor will start at the beginning of the event stream. If it’s fine to miss events, you can also start at the end. When you combine this with an in-memory token store you don’t need additional databases, but you might lose some events.
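A minimal sketch of that combination, assuming Axon Framework 4.x (the processor name `"my-processor"` is just a placeholder): the processor gets an `InMemoryTokenStore` and an initial token at the head of the stream, so no database is needed, at the cost of missing events during downtime.

```java
import org.axonframework.config.EventProcessingConfigurer;
import org.axonframework.eventhandling.TrackingEventProcessorConfiguration;
import org.axonframework.eventhandling.tokenstore.inmemory.InMemoryTokenStore;
import org.axonframework.messaging.StreamableMessageSource;

public class TailTrackingConfig {

    public void configure(EventProcessingConfigurer configurer) {
        // No database: tokens live only in this instance's memory.
        configurer.registerTokenStore("my-processor", conf -> new InMemoryTokenStore());
        configurer.registerTrackingEventProcessorConfiguration(
                "my-processor",
                conf -> TrackingEventProcessorConfiguration
                        .forSingleThreadedProcessing()
                        // Start at the head (newest event) instead of replaying from the start.
                        .andInitialTrackingToken(StreamableMessageSource::createHeadToken));
    }
}
```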
Another important consideration is what should happen when you run two instances of the same service. As the in-memory token store is local to each instance, both will process all the events.
Please also note that you don’t necessarily need a database per event processor. It’s fine to put multiple token stores in the same database, or even the same table. If it helps, there is also a Mongo implementation of the token store.
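As a sketch of sharing one database across all processors (assuming Axon Framework 4.x with JPA on the classpath): registering a single `JpaTokenStore` without a processor name makes it the default for every processor, so all tokens land in the same `TokenEntry` table.

```java
import org.axonframework.common.jpa.EntityManagerProvider;
import org.axonframework.config.EventProcessingConfigurer;
import org.axonframework.eventhandling.tokenstore.jpa.JpaTokenStore;

public class SharedTokenStoreConfig {

    public void configure(EventProcessingConfigurer configurer) {
        // One JPA token store as the default for all processors in this
        // application; every processor's tokens end up in one table.
        configurer.registerTokenStore(conf -> JpaTokenStore.builder()
                .entityManagerProvider(conf.getComponent(EntityManagerProvider.class))
                .serializer(conf.serializer())
                .build());
    }
}
```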
Some wild idea I once had was an S3-based token store. While this is currently just a wild idea, would it be something you might use if it were available?
Very interesting point you make. Certainly something to keep in mind when running in e.g. K8s environments. Thanks for pointing that out!
I understand your point. Perhaps my post was confusing. Currently I use one database for each microservice (either in-memory or persisted) to track tokens (or, when needed, to store additional service-related data). This results in almost as many database instances as there are microservices, which seems a bit overkill and inefficient to me.
Would it make sense to use one common token store (either JPA or Mongo based) to manage all tokens for all microservices of an application, independent of the additional database instances that fulfil the services’ needs to store “other” data? Or will this introduce side effects that are difficult to tackle?
To be honest, I see more value in either the option to specify a common token store at the level of Axon Server, or some kind of K8s component that lets you delegate the responsibility of persisting the token store to the orchestrator in a declarative way. For development purposes, a local filesystem-based token store could be useful…
I don’t think one common store for all would be ideal.
For example, if you do need a projection, it’s best to store the tokens in the same place as the projection, since transactions (in the case of Mongo, from the to-be-released 4.7.0 version) will help keep the two consistent.
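A sketch of that co-location, assuming the axon-mongo extension is on the classpath and `mongoTemplate` points at the same Mongo database that holds the projection documents (both names here are assumptions, not from the thread):

```java
import org.axonframework.config.EventProcessingConfigurer;
import org.axonframework.extensions.mongo.MongoTemplate;
import org.axonframework.extensions.mongo.eventsourcing.tokenstore.MongoTokenStore;
import org.axonframework.serialization.Serializer;

public class ProjectionDbTokenStoreConfig {

    public void configure(EventProcessingConfigurer configurer,
                          MongoTemplate mongoTemplate,
                          Serializer serializer) {
        // Tokens go into the same Mongo database as the projection, so a
        // transaction can cover the projection update and the token update.
        configurer.registerTokenStore(conf -> MongoTokenStore.builder()
                .mongoTemplate(mongoTemplate)
                .serializer(serializer)
                .build());
    }
}
```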
Depending on how the processors are configured you might run into limitations. For example, the pooled streaming event processor defaults to 16 segments. For each segment it needs to at least extend the claim, by default every 5 seconds, or store the new offset after processing one or more events. If many services share one store, this can lead to too many open connections.
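One way to reduce that load is to lower the segment count when registering the processor. A sketch, assuming Axon Framework 4.x (the processor name and segment count are illustrative):

```java
import org.axonframework.config.Configuration;
import org.axonframework.config.EventProcessingConfigurer;

public class PooledProcessorConfig {

    public void configure(EventProcessingConfigurer configurer) {
        configurer.registerPooledStreamingEventProcessor(
                "my-processor",
                Configuration::eventStore,
                // Fewer segments means fewer claims to extend against the
                // shared token store (default is 16).
                (conf, builder) -> builder.initialSegmentCount(4));
    }
}
```

Note that `initialSegmentCount` only applies when the token store is initialized for the first time; after that, the stored segments win.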
While the idea of a token store inside Axon Server comes up a lot, it’s probably not going to happen, as Axon Server is mostly concerned with the write/command side, while a token store is only needed on the read/query side. It’s also not something that can be solved easily, because of the consistency boundaries.
I hope this helps you make a good choice of token store. You might also be interested in Synapse. With this solution you can configure an endpoint to be called for each event.