Update Token Entries

Hi all,

At the moment we have more than 10 axon services connected to our main database server with lots of token processors involved.

In the token_entry table, the timestamp is updated by the client every second (or faster?). All those update queries are written into the binlog of our MySQL server and need to be replicated to all the slave replication servers, some of which run in the cloud and some, for example, in Asia.

I understand this is necessary to have a good locking mechanism, especially for the situation where a certain client dies and cannot release the lock.

Is there anything we can do to reduce the query load?

Kind regards

Hi Koen,

Understandable concern, and you are not the only one to voice it.
Steven Grimm actually marked this as an issue on GitHub, which Allard provided a pull request for about a week ago.

There is no way to completely eliminate the need for token claims and updates, but Steven and Allard have both thought of a way to optimize it further.
The feature will, by the way, be part of Axon 4.2, which we hope to release soon.

Lastly, the way your MySQL server handles replication could, I assume, also be optimized.
However, I am not much of a database expert, so I would direct that question to another forum.

Anyhow, I hope the shared pull request will alleviate some of your concerns, Koen!


Thanks Steven. Keep me up to date :slight_smile:

Hi Steven,

I’ve just read that Axon 4.2 has been released.
Is this part of the release? If yes, what did change exactly?

Kind regards

Hello Koen,

yes, this was part of the 4.2 release. The change involves the timing of the token updates. Originally, the process first extended the token claim (thus obtaining a write lock) and then updated the token at the end of the batch with the new position.

Now, if there is a Transaction Manager configured, the process is slightly different. The token is updated to the new state immediately, then the events are handled, and the transaction is committed. If processing the events took more than 50% of the token claim time-out, another update is executed to extend the claim on the token. This means that, in the vast majority of cases, only a single round-trip is needed to update the tokens, rather than two.
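The flow above can be sketched roughly as follows. This is a hypothetical, heavily simplified illustration of the described behavior, not actual Axon internals; the class, method names, and the 10-second claim timeout are all made up for the example:

```java
import java.time.Duration;
import java.util.List;

// Simplified sketch of the 4.2 batch flow: one token write up front,
// plus a second one only when handling ate into the claim timeout.
public class TokenUpdateSketch {

    static final Duration CLAIM_TIMEOUT = Duration.ofSeconds(10); // illustrative value

    /** Returns the number of token-store round-trips this batch needed. */
    static int processBatch(List<String> events, Duration handlingTime) {
        int roundTrips = 0;
        roundTrips++; // 1) update the token to the new position immediately (claim + position in one query)
        // 2) handle the events in the batch (simulated; handlingTime stands in for the real work)
        if (handlingTime.compareTo(CLAIM_TIMEOUT.dividedBy(2)) > 0) {
            roundTrips++; // 3) handling took >50% of the claim timeout, so extend the claim once more
        }
        // 4) commit the transaction (omitted here)
        return roundTrips;
    }

    public static void main(String[] args) {
        System.out.println("fast batch: " + processBatch(List.of("e1", "e2"), Duration.ofSeconds(1)));
        System.out.println("slow batch: " + processBatch(List.of("e3"), Duration.ofSeconds(6)));
    }
}
```

Running this prints `fast batch: 1` and `slow batch: 2`, which mirrors the point above: the extra round-trip only happens for slow batches.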

However, if there is no activity, there will still be round-trips to extend the claim on the token. Prior to 4.2, this interval was hard-coded to 1 second. Since 4.2, it is configurable (defaulting to 1 second) using the “TrackingEventProcessorConfiguration.andEventAvailabilityTimeout” method. Make sure this timeout and the “claimTimeout” on the Jpa/JdbcTokenStore are in balance: you want to be able to perform approximately two round-trips to the token store before the claim times out, to make sure tokens don’t get “stolen” too often.
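As a sketch, the two timeouts might be tuned together roughly like this, assuming the Axon 4.x builder-style configuration API; the concrete values are purely illustrative, and the `entityManagerProvider` and `serializer` references are assumed to already exist in your configuration:

```java
// Assumed Axon imports:
// org.axonframework.eventhandling.TrackingEventProcessorConfiguration
// org.axonframework.eventhandling.tokenstore.jpa.JpaTokenStore
import java.time.Duration;
import java.util.concurrent.TimeUnit;

// Let an idle processor wait up to 2 seconds for new events
// before it extends its claim again...
TrackingEventProcessorConfiguration tepConfig =
        TrackingEventProcessorConfiguration.forSingleThreadedProcessing()
                .andEventAvailabilityTimeout(2, TimeUnit.SECONDS);

// ...and give the claim itself a 5-second timeout, leaving room for
// roughly two round-trips before the claim could be "stolen".
JpaTokenStore tokenStore = JpaTokenStore.builder()
        .entityManagerProvider(entityManagerProvider) // assumed to exist in your config
        .serializer(serializer)                       // assumed to exist in your config
        .claimTimeout(Duration.ofSeconds(5))
        .build();
```

The exact ratio will depend on your network latency to the token store; the point is simply to keep the availability timeout well under half the claim timeout.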

Hope that makes sense.