I have some suggestions about how Axon handles exceptions. Currently,
all exceptions are runtime exceptions and services throw them as such.
BUT if an aggregate root throws a RuntimeException, you can never tell
the difference between a bug in your code and a runtime exception from Axon.

So for example, if you handle a command and try to load a non-existent
aggregate root by UUID, you get a runtime exception from Axon. But if the
aggregate root itself throws a RuntimeException (e.g. a NoSuchElementException
for some reason), it gets handled in exactly the same way.

I would like to have an exception structure (checked or unchecked
exceptions, but preferably checked ones) that carries CQRS meaning.

What do you think?


Thanks for bringing up this discussion.

There are a few things going on here. First is the checked/unchecked discussion. I only use checked exceptions for exceptions that a caller should be able to expect and deal with during a normal flow of execution. The fact that an aggregate root could not be loaded is, in my eyes, not part of such a flow. Somehow, you must have obtained a UUID that does not match any aggregate in the repository. I don’t want to require explicit logic from a client for dealing with such a situation. If I am not mistaken, loading a nonexistent aggregate will result in an AggregateNotFoundException (or its subclass AggregateDeletedException), not a plain RuntimeException.
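To make the distinction concrete, here is a minimal, self-contained sketch. The exception class names match those mentioned above, but these are simplified stand-ins, not Axon’s actual classes, and the classify method is purely illustrative:

```java
// Simplified stand-ins for the framework's exception hierarchy.
class AggregateNotFoundException extends RuntimeException {
    AggregateNotFoundException(String message) { super(message); }
}

class AggregateDeletedException extends AggregateNotFoundException {
    AggregateDeletedException(String message) { super(message); }
}

public class ExceptionKindsSketch {
    // Callers can distinguish repository failures from domain bugs by type,
    // even though both hierarchies are unchecked.
    public static String classify(RuntimeException e) {
        if (e instanceof AggregateNotFoundException) {
            return "repository: no such aggregate";
        }
        return "domain or programming error";
    }

    public static void main(String[] args) {
        System.out.println(classify(new AggregateNotFoundException("unknown UUID")));
        System.out.println(classify(new java.util.NoSuchElementException("bug")));
    }
}
```

So even with unchecked exceptions, an instanceof check (or a dedicated catch block) is enough to tell the two apart.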

A second issue is your remark that exceptions coming from loading an aggregate cannot be distinguished from exceptions raised by the aggregate itself. How is that? If you have a command handler that loads an aggregate, then executes a method on that aggregate and saves it back, you should be able to detect the difference between exceptions raised by either of those calls, albeit by putting them in separate try-catch blocks. Generally, my command handlers look like the following:

public void handleSomeCommand(SomeCommand command) {
    SomeAggregate aggregate = aWiredRepository.load(command.getSomeId());
    try {
        aggregate.doSomething(command); // execute the business logic
    } finally {
        aWiredRepository.save(aggregate);
    }
}
The try-finally block is to ensure that even when an exception is raised, all state changes are persisted. This also guarantees that any locks maintained by the repository are released (if you use a locking mechanism, which is the default with the EventSourcingRepository).

The third thing I see is the statement that aggregates should throw runtime exceptions. This is not the case. You can declare any type of exception you like on your aggregates. However, putting them on the @EventHandler methods wouldn’t make any sense, because those methods should not do any validation. You are perfectly free to have your state-changing methods throw a checked exception if that change is invalid for the current state.
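A small self-contained sketch of that split (the Order aggregate, the event, and the checked exception are made-up names, and this local @EventHandler annotation stands in for Axon’s own):

```java
@interface EventHandler {}

// Checked on purpose: callers are expected to deal with this outcome.
class InvalidOrderStateException extends Exception {
    InvalidOrderStateException(String message) { super(message); }
}

class OrderCancelledEvent {}

public class Order {
    private boolean shipped;
    private boolean cancelled;

    // State-changing method: validates first, then applies an event.
    public void cancel() throws InvalidOrderStateException {
        if (shipped) {
            throw new InvalidOrderStateException("cannot cancel a shipped order");
        }
        apply(new OrderCancelledEvent());
    }

    // Event handler: only records the state change, never validates.
    @EventHandler
    void on(OrderCancelledEvent event) {
        cancelled = true;
    }

    // Simplified stand-in for the framework's apply(...): dispatches the
    // event back to the handler method.
    private void apply(OrderCancelledEvent event) {
        on(event);
    }

    public boolean isCancelled() { return cancelled; }
    public void markShipped() { shipped = true; }
}
```

All validation lives in cancel(); the @EventHandler method applies the change unconditionally.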

I hope this clarifies my view on exceptions in the framework.



Hi Lionel,

the finally block is something I am not too happy with myself, so I’m working on a solution for that. With optimistic locking, you won’t have a problem at all. Pessimistic locking, however, will lead to unusable aggregates if something goes wrong.

The scenario of applying multiple events and having an exception after one of them is not a regular case. It could only happen due to programming errors. The motivation for this is that validation should occur exclusively in the “regular” methods called on the aggregate. The @EventHandler methods should only apply state changes based on the event. They should never, ever do any validation. The reason is simple: what if rules change, and you load aggregate state based on existing events? Throwing an exception could make it impossible to load an aggregate.
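The replay argument can be sketched as follows (the aggregate and event names are invented, and the load method is a toy replay loop, not Axon’s repository): loading is just replaying history, so a validating handler could reject events that were perfectly legal when they were recorded.

```java
import java.util.List;

class AmountChangedEvent {
    final int amount;
    AmountChangedEvent(int amount) { this.amount = amount; }
}

public class ReplaySketch {
    private int amount;

    // Applies the state change unconditionally. If this method validated and
    // threw, events recorded under yesterday's rules could make today's
    // aggregate impossible to load.
    void on(AmountChangedEvent event) {
        this.amount = event.amount;
    }

    // Loading from an event store is just replaying past events in order.
    public static ReplaySketch load(List<AmountChangedEvent> history) {
        ReplaySketch aggregate = new ReplaySketch();
        for (AmountChangedEvent event : history) {
            aggregate.on(event);
        }
        return aggregate;
    }

    public int getAmount() { return amount; }
}
```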

I hope this helps. If you have another view on things, please let me know.




For the finally block, it might be worth looking at the "loan
pattern": http://scala.sygneca.com/patterns/loan (despite coming from
the Scala world and relying upon closures, maybe it can be a source of
inspiration; closures are also planned for JDK 7 :D).
An aspect (AOP, through an annotation) could be another way to tackle
the problem (although the implementation of this idea is less clear to me
than the loan pattern; maybe an annotation at the CommandHandler
level, but with which parameters...).
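Translated to pre-closures Java, the loan pattern is the execute-around idiom: the repository "loans" the aggregate to a callback and guarantees the save/release itself. All names below are illustrative, not Axon API, and the in-memory "storage" is a toy:

```java
// Callback interface standing in for a closure.
interface AggregateAction<T> {
    void execute(T aggregate);
}

class Account {
    private int balance;
    public void deposit(int amount) { balance += amount; }
    public int getBalance() { return balance; }
}

public class LoanPatternRepository {
    private final Account theOnlyAccount = new Account(); // toy in-memory storage

    private Account load(String aggregateId) { return theOnlyAccount; }

    private void save(Account aggregate) {
        // persist events and release the lock here
    }

    // The "loan": load the aggregate, hand it to the action, and always
    // save (and release locks) afterwards, even if the action throws.
    public void doWith(String aggregateId, AggregateAction<Account> action) {
        Account aggregate = load(aggregateId);
        try {
            action.execute(aggregate);
        } finally {
            save(aggregate);
        }
    }

    public Account peek() { return theOnlyAccount; }
}
```

A command handler would then call repository.doWith(id, new AggregateAction<Account>() { ... }) and never touch load/save or the finally block itself.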

Could you explain why, with optimistic locking, there would be no
problem with an exception between two calls to apply and a save in a
finally block? (It may be obvious, but I don't get it.)

You're right to point out that validation should occur exclusively in the
"regular" methods called on the aggregate (i.e. not the @EventHandler-
annotated ones); it's a "golden rule" we already follow (and I
remember you mentioned it at the Axon workshop too ;)).
The use case I was thinking of was not an exception in the
@EventHandler method but rather between calls to apply, e.g.:

public void myAggregateRegularDomainMethod(...) {
   // validations

   if (contextCondition1) {
      apply(new MyDomainEvent1(somethingBuggyWhichInSomeCasesThrowsARuntimeException, ...));
   }
   if (contextCondition2) {
      apply(new MyDomainEvent2(...));
   }
}

I admit this only seems to happen due to programming errors (and bad
testing), but if it does, the event store may silently end up with an
incomplete stack of events for some aggregates.


Hi Lionel,

your timing is quite excellent. This morning, I was thinking about adding a similar structure to the Repository interface. Instead of loading an aggregate and saving it again, you tell the repository what action you want to execute on an aggregate. Loading and saving is done for you.

Your problem with multiple actions is typically taken care of by the transaction manager. If you have a transactional boundary around your command handler, your events are either all saved, or none at all. My main focus with Axon is not really preventing errors through programming mistakes. If only that were possible… I think a “rollback” method on repositories is quite inevitable. It will release any locks held on the aggregate, but not commit any events.
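The all-or-none behaviour can be sketched in a few lines. This is not Axon’s event store; the staged list and commit/rollback steps are a made-up, self-contained illustration of what a transactional boundary buys you:

```java
import java.util.ArrayList;
import java.util.List;

// Events are staged during command handling and only appended to the store
// on commit. If an exception occurs between two apply() calls, nothing is
// persisted: either both events reach the store, or neither does.
public class TransactionalSketch {
    private final List<String> eventStore = new ArrayList<String>();

    public void handleCommand(boolean failHalfway) {
        List<String> staged = new ArrayList<String>();
        try {
            staged.add("EventOne");
            if (failHalfway) {
                throw new IllegalStateException("bug between two apply() calls");
            }
            staged.add("EventTwo");
            eventStore.addAll(staged);   // commit: both events at once
        } catch (RuntimeException e) {
            staged.clear();              // rollback: discard staged events
        }
    }

    public List<String> storedEvents() { return eventStore; }
}
```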

I am thinking about some “UnitOfWork” kind of mechanism, though. It would allow you to register all changes with the UnitOfWork and decide to either commit or revert the whole thing at once. It will make transactional processing with multiple aggregates, or multiple calls on a single aggregate, a bit easier. In that case, your code would look similar to this:

MyAggregate aggregate = myRepository.loadAggregate(uuid);
aggregate.doSomething(); // changes are registered with the current UnitOfWork

With this mechanism, explicitly saving the aggregate isn’t necessary anymore. Events are saved to the event store and published only after a UnitOfWork.commit(), and only after all locks have been verified against the repositories.

It would also be easier to declaratively use the UnitOfWork by adding an annotation to the command handler method. Then, units of work are committed upon successful execution, and rolled back when execution fails.

@CommandHandler // will automatically start a UnitOfWork
public void handleMyCommand(MyCommand command) {
    MyAggregate aggregate = repository.load(command.getMyId());
    aggregate.doSomething(command);
    // no need to save or roll back anymore
}
If you have any comments about this mechanism, please let me know.



Hi Allard,

I find this "UnitOfWork aspect", automatically declared with the help
of the @CommandHandler annotation, very neat!

In my opinion this mechanism feels intuitive and close to
transactional processing with Spring.
It would save us from explicitly calling the "save" method on each
aggregate loaded by each command handler.

I don't see any drawback (except having to re-throw an exception if
exceptions are caught in a @CommandHandler method, to let the
UnitOfWork do the rollback... but that's a 1% case, and it would be a
programming error if not done :p).