For every command an aggregate handles, it replays previous events

While developing an application with Axon, I observed a behaviour for which I need confirmation about how Axon works.

We have an aggregate class with handlers as shown below:

@Aggregate
public class BucketAggregate {

    private static final Logger LOGGER = LoggerFactory.getLogger(BucketAggregate.class);

    @AggregateIdentifier
    private String bucketId;

    private BucketState bucketState;

    private List<EligibleTradeData> bucketTrades;

    private Double netTrade;

    BucketAggregate() {
        // Required by Axon to reconstruct the aggregate from events
    }

    @CommandHandler
    public BucketAggregate(final PlaceBucketCommand placeBucketCommand) {
        LOGGER.info("PlaceBucketCommand received for bucketId - {}.", placeBucketCommand.getId());
        AggregateLifecycle.apply(new BucketPlaced(placeBucketCommand.getId(), /* … */));
    }

    @EventSourcingHandler
    public final void on(final BucketPlaced bucketPlaced) {
        LOGGER.info("A BucketPlaced event occurred for bucketId - {}.", bucketPlaced.getBucketMasterSyn().getBucketId());
        this.bucketId = bucketPlaced.getBucketMasterSyn().getBucketId();
        this.bucketTrades = new ArrayList<>();
        this.bucketState = BucketState.INIT;
    }

    @CommandHandler
    public final void handle(final TradeEligibleCommand tradeEligibleCommand) {
        LOGGER.info("TradeEligibleCommand received for bucketId - {}.", tradeEligibleCommand.getId());
        AggregateLifecycle.apply(new EligibleTrade(tradeEligibleCommand.getId(), /* … */));
    }

    @EventSourcingHandler
    public final void on(final EligibleTrade eligibleTrade) {
        LOGGER.info("An EligibleTrade event occurred for bucketId - {}.", eligibleTrade.getEligibleTradeData().getBucketId());
        this.bucketId = eligibleTrade.getEligibleTradeData().getBucketId();
        this.bucketState = BucketState.OPEN;
    }
}

The aggregate handles the PlaceBucketCommand perfectly, but when it receives the TradeEligibleCommand it always replays the previous BucketPlaced event for the same aggregate id. In a nutshell, for any command it replays the events previously stored while the application is running. I see logs from the BucketPlaced event handler when sending a TradeEligibleCommand, which I was not expecting, and it produces verbose logs.


  1. Is this the way Axon works, replaying previous events when an aggregate handles commands for the same aggregate identifier?
  2. For our use case we don't want a new command targeting the same aggregate id to replay the previous events for that id. Is there any configuration we can do on our end, or any other suggestion?
  3. In the event sourcing handlers we do some processing with the event and send events to Kafka. But for a new command, the previous events are replayed, and since the logic that sends data to Kafka lives there, the same event is sent to Kafka again. What would you suggest as the right place for my Kafka producer logic?


Hi Kundam,

Yes, this is how event sourcing works: every time the aggregate handles a command, Axon rebuilds its state by replaying the previously stored events for that aggregate identifier. Snapshotting or an aggregate cache can reduce how many events are replayed, but the event sourcing handlers will still be invoked while the aggregate is loaded, so they should not contain side effects.

In the command handler you should check whether the command is valid for the current state, and ideally not much more. If you need to send some data to Kafka based on the event the command handler produced, it's better to do this in a separate event processor. This way, you decouple validating the command from sending information to Kafka. If Kafka is down, for example, you could have the event processor retrying until Kafka is up again while the aggregate keeps handling commands.

The event handlers in the aggregate should be as light as possible, ideally only changing the aggregate state based on some information in the event.
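To make the decoupling concrete, here is a framework-free sketch in plain Java (no Axon API; `ReplayDemo`, `EVENT_STORE`, `KafkaPublisher`, and all other names are hypothetical stand-ins). The aggregate rebuilds its state by replaying history on every command, while a separate processor with its own position (analogous to a tracking token) publishes each event exactly once, so replays never cause duplicate sends:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only; these classes are not part of Axon.
public class ReplayDemo {

    // The "event store": the full history for one aggregate id.
    static final List<String> EVENT_STORE = new ArrayList<>();

    // Separate event processor with its own position, so each event is
    // sent to Kafka exactly once regardless of aggregate replays.
    static final KafkaPublisher PUBLISHER = new KafkaPublisher();

    // Event-sourced aggregate: state is rebuilt by replaying ALL past
    // events each time a command is handled (the behaviour observed
    // when no snapshotting or caching is configured).
    static class BucketAggregate {
        int replayedEvents = 0;

        void replay(List<String> history) {
            for (String event : history) {
                on(event); // state change only, no side effects
            }
        }

        void on(String event) {
            replayedEvents++;
        }
    }

    static class KafkaPublisher {
        final List<String> published = new ArrayList<>();
        int position = 0;

        void processNewEvents(List<String> store) {
            while (position < store.size()) {
                published.add(store.get(position)); // real code would produce to Kafka here
                position++;
            }
        }
    }

    public static void main(String[] args) {
        // Command 1: PlaceBucketCommand -> no history to replay yet
        BucketAggregate aggregate = new BucketAggregate();
        aggregate.replay(EVENT_STORE);
        EVENT_STORE.add("BucketPlaced");
        PUBLISHER.processNewEvents(EVENT_STORE);

        // Command 2: TradeEligibleCommand -> BucketPlaced is replayed first
        aggregate = new BucketAggregate();
        aggregate.replay(EVENT_STORE);
        EVENT_STORE.add("EligibleTrade");
        PUBLISHER.processNewEvents(EVENT_STORE);

        System.out.println("replayed=" + aggregate.replayedEvents); // prints replayed=1
        System.out.println("published=" + PUBLISHER.published);     // each event appears once
    }
}
```

The replay of BucketPlaced during the second command is expected and harmless here, because the event handler only mutates state; the publisher's position guarantees Kafka sees each event a single time.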

I think that answers the questions, but please let us know if anything is unclear.

Thanks Gerard for the clarifications and suggestions.