Proposed change in EventScheduler interface

I’d like to discuss / propose a change to the EventScheduler interface.

Instead of the ScheduleToken being created by the scheduler itself, let the scheduler’s client create it and pass it as a parameter to the schedule method. The reason is basically the same as why CQRS promotes entity IDs being generated by clients rather than returned by repositories: increased consistency. With the proposed API, an aggregate can fire an event that is handled by a scheduler service while already knowing the token, whereas now the token somehow needs to be passed back to the aggregate in a command.

I checked: both existing implementations should support this well. In fact, the implementation will become even simpler.
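To make the proposal concrete, here is a minimal sketch of what a client-created token could look like (the names and signatures below are illustrative, not Axon’s actual API):

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch: the client creates the token and passes it in,
// instead of receiving it as the return value of schedule().
public class ClientTokenScheduler {

    // A client-generated token, analogous to a client-generated aggregate ID.
    public record ScheduleToken(String id) {
        public static ScheduleToken newToken() {
            return new ScheduleToken(UUID.randomUUID().toString());
        }
    }

    private final Map<ScheduleToken, Object> schedules = new HashMap<>();

    // The caller already knows the token before this call returns,
    // so no token needs to be routed back to the aggregate afterwards.
    public void schedule(Instant triggerTime, Object event, ScheduleToken token) {
        schedules.put(token, event);
    }

    // Cancellation becomes a simple key lookup.
    public boolean cancelSchedule(ScheduleToken token) {
        return schedules.remove(token) != null;
    }
}
```

The aggregate would create the token up front (e.g. in the command handler that fires the scheduling event), store it in its own state, and later cancel by that same key.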

Hi Dmitry,

With client-generated identifiers in commands, there is a big performance benefit: you can route a command to the correct node when dispatching over the network. Or at least, you can send multiple commands in a row without the absolute requirement to wait until the create has succeeded just to learn the ID.
I don’t see how those benefits apply here. Could you elaborate?
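For context, the routing benefit described above works roughly like this (a simplified sketch using a plain modulo instead of a real consistent-hash ring; this is not the actual DistributedCommandBus implementation):

```java
import java.util.List;

// Sketch of routing by a client-generated identifier: because the sender
// already knows the aggregate ID, it can pick the target node locally,
// without a round trip to learn the ID first.
public class RoutingSketch {

    static String routeByKey(String aggregateId, List<String> nodes) {
        // Real implementations use consistent hashing; a modulo over the
        // node list is enough to show the idea.
        int bucket = Math.floorMod(aggregateId.hashCode(), nodes.size());
        return nodes.get(bucket);
    }
}
```

The same ID always maps to the same node, so consecutive commands for one aggregate can be dispatched immediately, one after another.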

We already have plans to improve on the scheduling (called the deadline API). See GitHub: your improvements could also become part of this API. We expect this API to eventually replace the Event Scheduling API.



The approach in issue #220 is not flexible enough, because it only allows handling deadline IDs known at compile time, but there are valid situations where you don’t know the deadline IDs until runtime. Also, it will require special handling to be added to both aggregates and sagas. What about the EventScheduler, which can schedule both events and commands, depending on the message type passed?

Hi Dmitry,

The name of the deadline is not the ID; it’s rather the type of deadline (or a description). The context object should contain anything you need to identify different instances of schedules (if multiple exist within the scope of the same aggregate). That could be an identifier, or a combination of fields describing the data necessary to handle the deadline expiry.

To cancel a scheduled deadline, you cancel the deadline with a given type and a predicate that must match the context object.
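A rough sketch of how type-plus-predicate cancellation could look (all names here are hypothetical; the eventual deadline API may differ):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of cancelling deadlines by type plus a predicate
// over the context object.
public class DeadlineSketch {

    record Deadline(String type, Object context) {}

    private final List<Deadline> deadlines = new ArrayList<>();

    public void schedule(String type, Object context) {
        deadlines.add(new Deadline(type, context));
    }

    // Cancels every deadline of the given type whose context matches.
    // Note this is a scan over the deadlines, not a key lookup.
    public void cancelSchedule(String type, Predicate<Object> contextMatcher) {
        deadlines.removeIf(d -> d.type().equals(type) && contextMatcher.test(d.context()));
    }

    public int count() {
        return deadlines.size();
    }
}
```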

This should allow you to do what you were planning, right?



Hello Allard,

I’d prefer to be able to pass an identifier to the scheduler and get it back in the handler. Needing to pass a context object and a predicate to cancel a schedule looks inefficient to me: the scheduler will have to apply the predicate to all existing contexts, rather than efficiently retrieving one entry by key. Also, I’m not sure Quartz supports something like this.

See for example

By the way, I think it would be convenient to extend Axon’s scheduler API to support groups. For example, when an aggregate has several jobs scheduled, deleting the aggregate should delete all of its jobs at once, so groups would be handy to avoid remembering and iterating over individual schedule tokens.

Then again, these are easy enough to implement outside of Axon.
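The group idea can indeed be sketched outside of Axon in a few lines (plain Java, illustrative only; Quartz has a comparable notion with job groups and its GroupMatcher):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: schedule tokens grouped under a key (e.g. the aggregate
// identifier), so deleting the aggregate cancels all of its schedules
// in one call instead of iterating individual tokens.
public class GroupedTokens {

    private final Map<String, Set<String>> tokensByGroup = new HashMap<>();

    public void register(String group, String token) {
        tokensByGroup.computeIfAbsent(group, g -> new HashSet<>()).add(token);
    }

    // Cancel every schedule belonging to the group at once; returns the
    // tokens that were removed.
    public Set<String> cancelGroup(String group) {
        Set<String> removed = tokensByGroup.remove(group);
        return removed == null ? Set.of() : removed;
    }
}
```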

There’s another problem with the scheduler and the distributed command bus. The simple scheduler implementation is transient, and the Quartz scheduler does not support consistent hashing for job distribution, so it cannot always schedule a job onto the node matched by the consistent hash; rather, Quartz schedules it onto an arbitrary node. Thus, currently any node can receive an event from the distributed scheduler, and the event handler can then use the distributed command bus to route it to the correct node. If you don’t plan to use the command bus to deliver commands from the scheduler to aggregates, then a similar mechanism should be implemented: either route the deadline to the correct node, or add persistence and recovery support to the simple scheduler, so that on recovery every node only schedules the jobs matching its consistent hash.
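The recovery idea in the last sentence could be sketched as follows: on startup, each node replays the persisted schedules but re-registers only those the hash assigns to itself (illustrative only, again with a plain modulo standing in for a real consistent-hash ring):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: on recovery, a node re-registers only the persisted schedules
// that the hash assigns to it, so each job ends up on the node that
// also owns the target aggregate.
public class RecoverySketch {

    static boolean ownedBy(String scheduleKey, String nodeId, List<String> nodes) {
        int bucket = Math.floorMod(scheduleKey.hashCode(), nodes.size());
        return nodes.get(bucket).equals(nodeId);
    }

    // Filter the persisted schedules down to the ones this node owns.
    static List<String> recover(List<String> persisted, String nodeId, List<String> nodes) {
        return persisted.stream()
                .filter(key -> ownedBy(key, nodeId, nodes))
                .collect(Collectors.toList());
    }
}
```

Run across all nodes, every persisted schedule is recovered by exactly one of them.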

Hi Dmitry,

I do think you have a valid point about the identifier. However, what I meant by the “context object” is the object you would receive in your handler as the “deadline message”. Since a deadline is a special type of event, I want to make that obvious from the message as well.

Thanks for your insights.