Parallel processing of sagas

It seems we have an issue with the parallel processing of sagas.

The first event (StartSaga) is matched to a segment based on the association value (as there is no saga yet), but the second and following events are matched on the saga id.
That way, if the first event is in a segment which is handled later, it can happen that the saga has not been created yet when the other events are handled.

Has anyone else experienced this?
Can you assign a SequencingPolicy to the saga event processor?
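To illustrate what I'm asking about, registering a policy for the saga's processing group would presumably look something along these lines (the processing group name "MySagaProcessor" is just a placeholder, not our actual configuration):

@Autowired
public void configureSequencingPolicy(EventProcessingConfigurer configurer) {
    // Sketch only: assign a SequencingPolicy to the (placeholder) saga processing group.
    configurer.registerSequencingPolicy("MySagaProcessor",
            configuration -> SequentialPerAggregatePolicy.instance());
}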

Gerlo.

Hi Gerlo,

Indeed, the thread responsible for creating the Saga is identified using the association value. However, that thread will then generate a Saga Identifier that matches the current Segment, to ensure that the creating thread remains responsible for that instance.
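Conceptually, the creating thread does something along these lines (a simplified sketch of the idea, not the literal framework code):

// Keep generating identifiers until one hashes into the segment owned by the
// creating thread; later events for this saga are routed by this id, so they
// end up in the same segment.
String sagaId = IdentifierFactory.getInstance().generateIdentifier();
while (!segment.matches(sagaId.hashCode())) {
    sagaId = IdentifierFactory.getInstance().generateIdentifier();
}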

It sounds, however, like you are seeing otherwise. What symptoms do you see that make you believe there is a problem?

Kind regards,

Allard

Yes, I’ve examined the code you mentioned (repeatedly generating the saga id until it matches the segment), so I don’t understand it either.
What I see is that it goes OK when I use 1 thread (1 segment), but when I run parallel processors the saga sometimes gets stuck after the first event. Other events are not picked up.
Gerlo

Actually, I notice now it is not always after the first event, but sometimes after 2 or 3 events.
Could it be some sort of deadlock?
We are implementing a saga in component B which listens for events from an aggregate in component A, and also sends commands to component A. All via Axon Server…

Hi Gerlo,

I just had the opportunity to look at the difference between the two store implementations. Unfortunately, they both only do “basic” storage things. The interesting (concurrency and batching related) stuff is all located in the SagaRepository, which is independent of the SagaStore implementation used.
Are you sure the SagaStore is the only difference in the two scenarios?

Just to narrow down the search scope, given your batch size, do the first and following events (the ones you expect to trigger the saga) appear in the same batch, or different ones?
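For reference, the batch size and number of segments I mean are the ones set on the tracking processor, roughly like this (processor name and values are just examples):

@Autowired
public void configureSagaProcessor(EventProcessingConfigurer configurer) {
    // Example values: 4 parallel segments/threads, 10 events per batch.
    configurer.registerTrackingEventProcessorConfiguration("MySagaProcessor",
            configuration -> TrackingEventProcessorConfiguration
                    .forParallelProcessing(4)
                    .andBatchSize(10));
}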

Meanwhile, I will attempt to reproduce…

Kind regards,

Hi Allard,

Thanks for looking into this!
I was doubting myself about the JPA/JDBC difference, that’s why I deleted the post (I thought). I could not reproduce it with JPA, but it is very difficult to reproduce with JDBC as well. Still tweaking all the different parameters to get a really reproducible test setup, so I can rule things out after that.
The missing event handling is certainly in a different batch than the first ones… I will post an example below…
Could it be a conflict with the existing configuration? Axon is brought into this existing component. Some conflict, e.g. with the existing transaction handling? Have a look below at Spring’s DataSourceTransactionManager. Or could it have to do with the caching I have implemented? I have posted (part of) the new and existing configuration below…
It is also good to know that afterwards (no idea how long you have to wait) new events are handled normally again on the same (corrupted) saga.

Thanks,
Gerlo

Events (token, aggregateId, sequenceNumber, payloadType):
handled:

400,100000040,1,nl.kadaster.koers.dossier.events.DossierAangemaakt (Start of saga)
401,100000040,2,nl.kadaster.koers.dossier.events.stuk.StukIdAangeboden
402,100000040,3,nl.kadaster.koers.dossier.events.stuk.TerInschrijvingAangebodenStukAangeboden
not handled:
505,100000040,4,nl.kadaster.koers.dossier.events.adres.ObjectLocatieBinnenlandToegevoegd
506,100000040,5,nl.kadaster.koers.dossier.events.persoon.NatuurlijkPersoonGegevensToegevoegd
507,100000040,6,nl.kadaster.koers.dossier.events.adres.ObjectLocatieBinnenlandToegevoegd

New Axon saga configuration:

@Bean
public SagaStore sagaStore(@Qualifier("eventSerializer") Serializer eventSerializer, Cache sagaCache, DataSource kwrDataSource) {
    //use jaxb serializer for sagas
    return CachingSagaStore
        .builder()
        .associationsCache(sagaCache)
        .sagaCache(sagaCache)
        .delegateSagaStore(JdbcSagaStore.builder().serializer(eventSerializer).dataSource(kwrDataSource).build())
        .build();
}

@Bean
public ConnectionProvider connectionProvider(DataSource dataSource) {
    return new UnitOfWorkAwareConnectionProviderWrapper(new SpringDataSourceConnectionProvider(dataSource));
}

@Bean
public TokenStore tokenStore(ConnectionProvider connectionProvider) {
    return JdbcTokenStore.builder().connectionProvider(connectionProvider).serializer(JacksonSerializer.builder().build()).build();
}

// caching sagas

@Bean
public Cache sagaCache() {
    return new EhCacheAdapter(cacheManager().getCache(SAGA_CACHE));
}

@Bean
public CacheManager cacheManager() {
    return CacheManager.newInstance(getClass().getResource("/ehcache.xml"));
}

Part of the existing configuration:

@Bean
public DataSourceTransactionManager getTransactionManager() {
    return new DataSourceTransactionManager(dataSource);
}

It’s the caching!! I tried 1000 command requests (200 parallel users): 11 mistakes with caching, none without caching.
Attached is my ehcache.xml.
Any suggestions to get saga caching working correctly?

<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="ehcache.xsd"
         updateCheck="true"
         monitoring="autodetect"
         dynamicConfig="true">

    <cache name="sagaCache"
           maxEntriesLocalHeap="100"
           eternal="false"
           timeToIdleSeconds="60" timeToLiveSeconds="120"
           transactionalMode="off">
    </cache>

</ehcache>

We decided not to use the caching. We did some tests and we didn’t see a remarkable performance loss in our case.
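For completeness, dropping the caching layer just means returning the delegate store directly; based on the configuration above, that sketch would be roughly:

@Bean
public SagaStore sagaStore(@Qualifier("eventSerializer") Serializer eventSerializer, DataSource kwrDataSource) {
    // Plain JDBC saga store, without the CachingSagaStore wrapper.
    return JdbcSagaStore.builder()
            .serializer(eventSerializer)
            .dataSource(kwrDataSource)
            .build();
}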