Inconsistent snapshot sequence numbers and failing snapshot creation

Hi,
Currently, I’m implementing snapshotting for an aggregate as follows:

    @Bean
    public SnapshotTriggerDefinition demoSnapshotTrigger(
            final Snapshotter snapshotter,
            @Value("${spring.snapshot.threshold:12}") final int threshold) {
        return new EventCountSnapshotTriggerDefinition(snapshotter, threshold);
    }

When I check the sequence numbers for these events in the MongoDB collection, they increase like this: … 22, 33, 34, 45, 56, 67, 70, 81, 83.
In the sequence above, after 33 a new snapshot was created at 34 instead of 44, and the same pattern repeats.
At sequence number 94, I get the error below:

11:15:05.634 WARN  org.axonframework.eventsourcing.AbstractSnapshotter -
                    An attempt to create and store a snapshot resulted in an exception. Exception summary: An event for aggregate [33591f70-da1a-49f5-9e46-87494e2570b6] at sequence [94] was already inserted

After this, no snapshot is created in our collection.

Note: All the processes are running on a single machine.
I followed this document.

How can I fix this?

Thank you

Hi Kuldeep, here are a couple of pointers I want to share:

  1. Axon Framework uses separate storage locations for Domain Events and Snapshots. Although a Snapshot is a particular type of event, I need to ask: when you’re talking about “these events in MongoDB collection,” are you talking about Snapshots or Domain Events?
  2. Why did you come up with “12” as the threshold? In my experience, a threshold below 100-150 doesn’t make much sense: the effort of constructing the snapshot and subsequently using it for sourcing does not outweigh simply reading 100-150 events. Hence, I recommend you first monitor the system to see how many events within an aggregate it takes for your performance criteria (e.g., the loading time of the aggregate) to no longer be met. That number should be the basis for the threshold.
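
To make the threshold’s effect concrete, here is a minimal, self-contained sketch (plain Java, not Axon’s actual `EventCountSnapshotTriggerDefinition` code) of how an event-count trigger decides when to schedule a snapshot. The point to notice: counting restarts after each snapshot, so with a healthy trigger the snapshot sequence numbers should advance by exactly the threshold each time — not by 1, as you are observing.

```java
// Illustrative sketch of an event-count snapshot trigger (NOT Axon's implementation).
final class EventCountTrigger {
    private final int threshold;
    private int eventsSinceLastSnapshot = 0;

    EventCountTrigger(int threshold) {
        this.threshold = threshold;
    }

    /** Returns true when a snapshot should be scheduled for this event. */
    boolean onEvent() {
        eventsSinceLastSnapshot++;
        if (eventsSinceLastSnapshot >= threshold) {
            eventsSinceLastSnapshot = 0; // counting restarts after a snapshot
            return true;
        }
        return false;
    }
}
```

With a threshold of 150, applying 450 events schedules exactly three snapshots, at event counts 150, 300, and 450.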

Now, to one of your pointers in the question:

> After this, no snapshot is created in our collection.

This sounds pretty unfeasible to me, to be honest. A failure to store one snapshot should not impact the creation of other snapshots. Granted, you’re apparently experiencing this, so let’s dive in.

One thing to validate is the message you’re receiving:

An attempt to create and store a snapshot resulted in an exception. Exception summary: An event for aggregate [33591f70-da1a-49f5-9e46-87494e2570b6] at sequence [94] was already inserted

As you can read from the message, there apparently already is an “event” at that position. The Mongo Extension for Axon Framework should store snapshots in a distinct collection.
You can check the AbstractMongoEventStorageStrategy#appendSnapshot method here, seeing that it replaces the previous snapshot. The message seems to suggest otherwise, though.
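
To illustrate what that message implies (this is an in-memory illustration, not the extension’s actual storage code): if the snapshot collection effectively enforces uniqueness on aggregate identifier plus sequence number, appending a second snapshot at the same sequence number fails with a duplicate-entry error much like the one in your log, whereas a snapshot at a newer sequence number simply supersedes the old one.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative in-memory stand-in for a snapshot collection with a unique
// constraint on (aggregateId, sequenceNumber). NOT the Mongo extension's code.
final class SnapshotStoreSketch {
    private final Map<String, Long> latestSequencePerAggregate = new HashMap<>();

    void appendSnapshot(String aggregateId, long sequenceNumber) {
        Long existing = latestSequencePerAggregate.get(aggregateId);
        if (existing != null && existing == sequenceNumber) {
            // mirrors the "was already inserted" failure from the log above
            throw new IllegalStateException(
                    "An event for aggregate [" + aggregateId + "] at sequence ["
                            + sequenceNumber + "] was already inserted");
        }
        // a snapshot at a newer sequence number replaces the previous one
        latestSequencePerAggregate.put(aggregateId, sequenceNumber);
    }

    Long latestSequence(String aggregateId) {
        return latestSequencePerAggregate.get(aggregateId);
    }
}
```

So for the error to occur, something must be trying to write a snapshot at a sequence number that is already occupied — which ties back to the odd “+1” snapshot positions you listed.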

All this makes me curious about your setup in more detail, to be honest.
As such, please construct a small sample project that consistently shows this behavior, @Kuldeep. That will make it much easier for us to figure out where the predicament lies.
Furthermore, by extracting the core of your configuration, you may well figure it out yourself.
Nonetheless, let’s figure out a solution!

Hi @Steven_van_Beelen
Here is the configuration

    @Bean
    public EmbeddedEventStore eventStore(
        final EventStorageEngine storageEngine,
        final AxonConfiguration configuration) { 
        return EmbeddedEventStore.builder()
            .storageEngine(storageEngine)
            .messageMonitor(configuration.messageMonitor(EventStore.class, "eventStore"))
            .build();
    }
   
    @Bean
    public EventStorageEngine storageEngine() {
        final MongoCredential credential =
            MongoCredential.createCredential(this.username, this.database, this.password.toCharArray());
        final MongoClientSettings settings =
            MongoClientSettings.builder().credential(credential).build();
        final MongoClient mongoClient = MongoClients.create(settings);

        // Register the allowed types with XStream's security framework.
        final XStream xStream = new XStream();
        xStream.allowTypesByWildcard(new String[]{"org.axonframework.**", "**"});

        return MongoEventStorageEngine.builder()
            .mongoTemplate(DefaultMongoTemplate.builder().mongoDatabase(mongoClient).build())
            .eventSerializer(JacksonSerializer.defaultSerializer())
            .snapshotSerializer(XStreamSerializer.builder().xStream(xStream).build())
            .build();
    }

Snapshot config:

    @Bean
    public SnapshotTriggerDefinition demoSnapshotTrigger(
            final Snapshotter snapshotter,
            @Value("${spring.snapshot.threshold:150}") final int threshold) {
        return new EventCountSnapshotTriggerDefinition(snapshotter, threshold);
    }

Can you please provide me with some resolution?

Hi @Kuldeep!

No offense, but this isn’t a usable sample project I can check out.
Let me be a bit clearer. I was hoping you could make a public sample project on GitHub (I wager your own GitHub) and share the link with us.
By doing so, it is easier for everyone reading this to help you identify the predicament.

FYI, I don’t see anything specific from the provided configuration that would point toward the issue you are describing.
This is another reason why having a complete reproducible project would help you further (as it means we do not have to go back and forth requesting setup specifics).

Cheers,
Steven

Hi @Steven_van_Beelen
We have found the root cause of this issue:

Caused by: org.bson.BsonMaximumSizeExceededException: Document size of 31822301 is larger than maximum of 16793600.

In our case, an aggregate has a state/field that, when a snapshot is created at the threshold, exceeds the maximum BSON document size.
But we are unable to find a fix for it.
Any help or suggestions would be really helpful.

Thanks

Again, that’s hard, or even impossible, without the whole project. It does seem like the aggregate might be too big, or you unintentionally store more information in the aggregate than needed.

For example, you already have the events stored, so you only need to keep data in the aggregate that is needed to make decisions on commands.

Almost 32 MB seems pretty big for one aggregate. You can check the database to see what’s in there currently.
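
One pragmatic safeguard, assuming you serialize the snapshot state yourself at some point (the guard class below is hypothetical, not an Axon API): check the serialized payload against MongoDB’s document size limit before it reaches the store, and treat a failure as a signal to slim down the aggregate’s state rather than to raise the limit. The 16793600-byte figure is taken from the exception quoted in this thread.

```java
// Hypothetical guard, not part of Axon: reject snapshot payloads that cannot
// fit in a single MongoDB document before attempting to store them.
final class SnapshotSizeGuard {
    // Maximum BSON document size, as reported by the BsonMaximumSizeExceededException above.
    static final int MAX_BSON_BYTES = 16_793_600;

    /** Returns true when the serialized snapshot fits within MongoDB's document limit. */
    static boolean fits(byte[] serializedSnapshot) {
        return serializedSnapshot.length <= MAX_BSON_BYTES;
    }
}
```

If the guard trips, the fix suggested above applies: drop derived or payload-style fields from the aggregate and keep only the state needed to validate incoming commands.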
