Every time we do a full read of our event store we hit an OutOfMemoryError in our query store.
Our event store is a Postgres database with ~19 million events. A lot of events contain zipped data; a few of them (~100 events) contain zipped data larger than 2 MB. The largest event holds ~18 MB of data, the second largest ~9 MB.
Our query store is (for debugging purposes) a very simple Spring Boot application with a single event handler that only counts the number of handled events.
We use a tracking event processor with 32 threads, and a batch size of 2000.
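For reference, the processor is registered roughly like this (a sketch; the processor name is illustrative, the thread count and batch size are the values described above):

```java
import org.axonframework.config.EventProcessingConfigurer;
import org.axonframework.eventhandling.TrackingEventProcessorConfiguration;
import org.springframework.beans.factory.annotation.Autowired;

public class ProcessorConfig {

    @Autowired
    public void configure(EventProcessingConfigurer configurer) {
        configurer.registerTrackingEventProcessor(
                // illustrative processor name
                "countingProcessor",
                org.axonframework.config.Configuration::eventStore,
                c -> TrackingEventProcessorConfiguration
                        .forParallelProcessing(32) // 32 threads / segments
                        .andBatchSize(2000));      // batch size 2000
    }
}
```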
When the query store is halfway through reading the event store (~9M events), it throws an OutOfMemoryError that we cannot explain, especially because the query store doesn't do anything special except count events. When we analyse the heap dump, we see many threads keeping the same event alive: either the 9 MB event or the 18 MB event.
If we set the batch size to 1000, all events are processed without an OutOfMemoryError. We would really like to understand why increasing the batch size causes an OutOfMemoryError, and especially why it happens halfway through.
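To show the order of magnitude we are reasoning about, here is a back-of-the-envelope worst case, assuming every thread can hold a full batch of deserialized events at once (the ~50 KB average payload is an assumption for illustration, not a measured value):

```java
public class BatchMemoryEstimate {

    // Rough worst-case estimate of event payload held in memory at once,
    // assuming each tracking thread buffers one full batch.
    static double worstCaseMb(int threads, int batchSize, double avgEventMb) {
        return threads * batchSize * avgEventMb;
    }

    public static void main(String[] args) {
        // 32 threads, batch size 2000, assumed ~50 KB (0.05 MB) average payload.
        System.out.printf("Batch size 2000: ~%.0f MB%n",
                worstCaseMb(32, 2000, 0.05));
        // The same estimate with batch size 1000 is half as large, which may
        // be why the smaller batch stays under our heap limit.
        System.out.printf("Batch size 1000: ~%.0f MB%n",
                worstCaseMb(32, 1000, 0.05));
    }
}
```

This does not yet explain why many threads pin the *same* 9 MB or 18 MB event, but it shows how quickly 32 threads times a 2000-event batch adds up.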
If more info is needed, we're happy to share it.