We're experiencing an issue after moving from MongoDB running as a Docker container to Azure's hosted MongoDB offering (Cosmos DB). Azure appears to impose a limit on the amount of memory a single query may use (40 MB) — see https://docs.microsoft.com/en-us/azure/cosmos-db/faq — which surfaces as error code 16501:
2018-06-04 06:33:09.579 - WARN --- [ense-gateway]-0] o.a.e.TrackingEventProcessor Error occurred. Starting retry mode. [-]
com.mongodb.MongoQueryException: Query failed with error code 16501 and error message 'Query exceeded the maximum allowed memory usage of 40 MB. Please consider adding more filters to reduce the query response size.' on server *********.azure.com:10255
	at com.mongodb.operation.FindOperation$1.call(FindOperation.java:521)
	at com.mongodb.operation.FindOperation$1.call(FindOperation.java:510)
	at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:435)
	at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:408)
	at com.mongodb.operation.FindOperation.execute(FindOperation.java:510)
	at com.mongodb.operation.FindOperation.execute(FindOperation.java:81)
	at com.mongodb.Mongo.execute(Mongo.java:836)
	at com.mongodb.Mongo$2.execute(Mongo.java:823)
	at com.mongodb.OperationIterable.iterator(OperationIterable.java:47)
	at com.mongodb.FindIterableImpl.iterator(FindIterableImpl.java:151)
	at org.axonframework.mongo.eventsourcing.eventstore.AbstractMongoEventStorageStrategy.findTrackedEvents(AbstractMongoEventStorageStrategy.java:170)
	at org.axonframework.mongo.eventsourcing.eventstore.MongoEventStorageEngine.fetchTrackedEvents(MongoEventStorageEngine.java:202)
	at org.axonframework.eventsourcing.eventstore.BatchingEventStorageEngine.lambda$readEventData$1(BatchingEventStorageEngine.java:123)
	at org.axonframework.eventsourcing.eventstore.BatchingEventStorageEngine$EventStreamSpliterator.tryAdvance(BatchingEventStorageEngine.java:161)
	at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.lambda$initPartialTraversalState$0(StreamSpliterators.java:294)
	at java.base/java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:206)
	at java.base/java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
	at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300)
	at java.base/java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
	at org.axonframework.eventsourcing.eventstore.EmbeddedEventStore$EventConsumer.peekPrivateStream(EmbeddedEventStore.java:380)
	at org.axonframework.eventsourcing.eventstore.EmbeddedEventStore$EventConsumer.peek(EmbeddedEventStore.java:341)
	at org.axonframework.eventsourcing.eventstore.EmbeddedEventStore$EventConsumer.hasNextAvailable(EmbeddedEventStore.java:318)
	at org.axonframework.messaging.MessageStream.hasNextAvailable(MessageStream.java:38)
	at org.axonframework.eventhandling.TrackingEventProcessor.checkSegmentCaughtUp(TrackingEventProcessor.java:294)
	at org.axonframework.eventhandling.TrackingEventProcessor.processBatch(TrackingEventProcessor.java:246)
	at org.axonframework.eventhandling.TrackingEventProcessor.processingLoop(TrackingEventProcessor.java:209)
	at org.axonframework.eventhandling.TrackingEventProcessor$TrackingSegmentWorker.run(TrackingEventProcessor.java:620)
	at org.axonframework.eventhandling.TrackingEventProcessor$WorkerLauncher.run(TrackingEventProcessor.java:715)
	at org.axonframework.eventhandling.TrackingEventProcessor$CountingRunnable.run(TrackingEventProcessor.java:547)
	at java.base/java.lang.Thread.run(Thread.java:844)
The particular processor that is failing is a tracking event processor that uses an in-memory tracking token to rebuild state from the beginning of the event stream. As mentioned above, this worked fine against the dockerized MongoDB. We could consider changing our internal implementation to use a persisted tracking token (which would be far from ideal), but the problem would most likely resurface whenever we want to replay events from the start of the event store, since the stack trace shows the failure happens while BatchingEventStorageEngine fetches tracked events, i.e. a single fetch pulls back more than Cosmos DB's 40 MB cap allows.
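One direction we could try, per the error message's advice to "reduce the query response size", is shrinking the batch size the storage engine uses so each fetch stays comfortably under the 40 MB cap. As a rough back-of-the-envelope sketch (the 8 KB average serialized event size below is a made-up assumption; the 50% headroom factor is likewise just a guess to leave room for server-side sort/working memory):

```java
public class BatchSizeEstimate {

    /**
     * Estimates a batch size whose total response size should stay under
     * the given server-side limit, keeping 50% headroom.
     */
    static long safeBatchSize(long limitBytes, long avgEventBytes) {
        return (limitBytes / 2) / avgEventBytes;
    }

    public static void main(String[] args) {
        long cosmosLimitBytes = 40L * 1024 * 1024; // Cosmos DB's 40 MB query cap
        long avgEventBytes = 8 * 1024;             // hypothetical average event document size
        System.out.println(safeBatchSize(cosmosLimitBytes, avgEventBytes)); // prints 2560
    }
}
```

So with events averaging around 8 KB, a batch size in the low thousands (rather than an unbounded fetch) would be the target; we'd still have to verify against Cosmos DB whether smaller batches actually avoid error 16501, since the limit seems to apply to query working memory, not only to the response payload.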
Any suggestions?