Distributed command bus with Spring Boot + Spring Cloud + Spring Cloud Kubernetes on the OpenShift platform

Hi all,

I have configured Spring Boot so that the command bus is distributed; as the discovery client, I use the Spring Cloud Kubernetes client. Everything seems to be working well, with pods and endpoints being recognized.
Code-wise, I have a saga that begins with a REST call and ends with one as well. As the routing strategy I use AnnotationRoutingStrategy. Everything works well up to this point.
The first call, the one that starts the saga, succeeds, and so does the second one; the only problem is that the second call is not routed to the correct pod/endpoint.
All segments seem to be local, and all calls as well (I have an AOP logger that logs Member types).
The ConsistentHash class seems to be the problem here; the method is getMember(String routingKey, CommandMessage commandMessage), and selecting the segment seems to be based on the command name, not the routing key… or maybe I’m missing something…
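
For context, the wiring looks roughly like this (a sketch from memory; constructor signatures differ slightly between Axon 3.x versions, so treat the exact arguments as an assumption):

```java
import org.axonframework.commandhandling.SimpleCommandBus;
import org.axonframework.commandhandling.distributed.AnnotationRoutingStrategy;
import org.axonframework.commandhandling.distributed.DistributedCommandBus;
import org.axonframework.serialization.Serializer;
import org.axonframework.springcloud.commandhandling.SpringCloudCommandRouter;
import org.axonframework.springcloud.commandhandling.SpringHttpCommandBusConnector;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.web.client.RestTemplate;

@Configuration
public class DistributedCommandBusConfig {

    // Router: asks the DiscoveryClient (here backed by Spring Cloud Kubernetes)
    // for peer nodes and builds the ConsistentHash from them.
    @Bean
    public SpringCloudCommandRouter commandRouter(DiscoveryClient discoveryClient) {
        return new SpringCloudCommandRouter(discoveryClient, new AnnotationRoutingStrategy());
    }

    // Connector: forwards commands to remote members over HTTP;
    // local commands are handled by the SimpleCommandBus segment.
    @Bean
    public SpringHttpCommandBusConnector commandBusConnector(Serializer serializer) {
        return new SpringHttpCommandBusConnector(new SimpleCommandBus(), new RestTemplate(), serializer);
    }

    @Bean
    @Primary
    public DistributedCommandBus distributedCommandBus(SpringCloudCommandRouter router,
                                                       SpringHttpCommandBusConnector connector) {
        return new DistributedCommandBus(router, connector);
    }
}
```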

Hi again,

I now understand how the segments are chosen, but I still do not get why routing is not working.

Thanks,
Bogdan

On Wednesday, 14 March 2018 at 15:58:51 UTC+2, Timofciuc Bogdan wrote:

Hi Bogdan,

First off, which Axon Framework version are you running?

Selecting a segment is based on both the routingKey and the Command name.

The reason is that you can have several Axon nodes which are heterogeneous in command handling.

There is thus always a check if the segment at hand can handle that exact command or not.

The routing key is typically based off of the result of your RoutingStrategy.

Assuming you’re using the default AnnotationRoutingStrategy, that means the field annotated with @TargetAggregateIdentifier is your routing key.

Thus, routing is based on both the result of your RoutingStrategy and the Command name.
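
To illustrate, a hypothetical command class (the class name and field are made up for the example):

```java
import org.axonframework.commandhandling.TargetAggregateIdentifier;

// With the default AnnotationRoutingStrategy, the @TargetAggregateIdentifier
// field ("aggregateId") becomes the routing key, so all commands targeting the
// same aggregate hash to the same segment/member.
public class ConfirmOrderCommand {

    @TargetAggregateIdentifier
    private final String aggregateId;

    public ConfirmOrderCommand(String aggregateId) {
        this.aggregateId = aggregateId;
    }

    public String getAggregateId() {
        return aggregateId;
    }
}
```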

I’d be hard pressed to guess why all your members are local members, though… but I do believe that the Spring Cloud Kubernetes solution is still in its incubation phase.

I can also tell you I haven’t myself tested whether Axon’s Distributed Command Bus solution based on Spring Cloud works with the Spring Cloud Kubernetes solution.

Anyhow, if all your Members in the ConsistentHash are local, that’s due to a check performed in the SpringCloudCommandRouter based on the URI in the local ServiceInstance (Spring Cloud’s interface for a node) and the ServiceInstances received from the DiscoveryClient (Spring Cloud’s component for the discovery service).
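
Conceptually, that check boils down to something like the following (an illustrative sketch, not the framework’s actual code; the helper name is made up):

```java
import java.net.URI;
import java.util.Objects;

import org.springframework.cloud.client.ServiceInstance;

// Illustrative only: a discovered instance counts as "local" when its URI
// matches the URI of this node's own ServiceInstance. If the discovery client
// reports every instance with a URI equal to the local one (or with URIs that
// don't resolve properly), every Member ends up with local() == true.
final class LocalInstanceCheck {

    static boolean isLocalServiceInstance(ServiceInstance discovered, ServiceInstance local) {
        return Objects.equals(discovered.getUri(), local.getUri());
    }
}
```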

Let’s try to figure out why all your members are local members. I’m very interested to see this through.

Cheers,

Steven

Hi Steve,

The Axon version is 3.1.3.
I am now aware of how routing works, but I cannot be 100% sure that all segments are local. However, aspects on SpringCloudCommandRouter, SpringHttpCommandBusConnector, and DistributedCommandBus gave me the impression that most of them are: all Members have local() == true.
The Kubernetes discovery client (the same as the OpenShift client) discovers all endpoints in the OpenShift namespace, and the pods are recognized as well.

My plan is to start debugging the app in OpenShift to see exactly how the segments look; I will get back to you :slight_smile:
Either way, it is fun to debug this :wink:
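
The logging aspect I mentioned looks roughly like this (a sketch; note that advising a non-Spring-bean class such as ConsistentHash requires AspectJ load-time weaving rather than plain Spring AOP, and the pointcut expression is illustrative):

```java
import java.util.Optional;

import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;
import org.axonframework.commandhandling.distributed.Member;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch of the logging aspect: record which Member the ConsistentHash picks
// and whether it is local.
@Aspect
public class MemberLoggingAspect {

    private static final Logger log = LoggerFactory.getLogger(MemberLoggingAspect.class);

    @AfterReturning(
            pointcut = "execution(* org.axonframework.commandhandling.distributed.ConsistentHash.getMember(..))",
            returning = "member")
    public void logSelectedMember(Optional<Member> member) {
        member.ifPresent(m -> log.info("Selected member '{}' (local={})", m.name(), m.local()));
    }
}
```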

Regards,
Bogdan

On Thursday, 15 March 2018 at 09:54:07 UTC+2, Steven van Beelen wrote:

Hi Bogdan,

Thanks for going the extra mile to figure out whether it’s on the OpenShift side or the Axon Framework side!

Very much interested in your findings, keep us posted!

Cheers,

Steven

Hi Bogdan,

The SpringCloudCommandRouter expects command routing metadata to be part of the metadata it gets from the discovery server. Unfortunately, this is not supported by all implementations. We are already discussing possible solutions with the Spring team.
For now, it’s recommended to use the SpringCloudHttpBackupCommandRouter, which falls back to doing an HTTP request to the discovered nodes to find out which commands they can handle.
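
Wiring in the backup router could look roughly like this (a sketch; the exact constructor arguments vary a bit between 3.x versions, so verify against your version):

```java
import org.axonframework.commandhandling.distributed.AnnotationRoutingStrategy;
import org.axonframework.springcloud.commandhandling.SpringCloudHttpBackupCommandRouter;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class BackupRouterConfig {

    // Falls back to an HTTP call against each discovered node to ask which
    // commands it handles, for discovery servers that don't carry the
    // metadata the plain SpringCloudCommandRouter needs.
    @Bean
    public SpringCloudHttpBackupCommandRouter commandRouter(DiscoveryClient discoveryClient,
                                                            RestTemplate restTemplate) {
        return new SpringCloudHttpBackupCommandRouter(
                discoveryClient,
                new AnnotationRoutingStrategy(),
                restTemplate,
                "/message-routing-information"); // endpoint path is configurable
    }
}
```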

Also, we very recently announced our new product: AxonHub. We will be launching it in public beta very soon. AxonHub will take care of all the routing of messages for you, removing the need to configure all these components. It actually works very well in combination with Kubernetes/CloudFoundry/etc.

Hope this helps.
Cheers,

Allard

Sorry for bringing this up, but I had the same effect: the hashing function always hit local members only.
In my case, reducing the segment count from 100 to 1 solved it. This could be done by setting the property “loadFactor” in the
ServiceInstance metadata.

Before, I had 2 members, resulting in 200 segments.

Reducing the load factor to 1 resulted in just 2 buckets, and the MD5 hash now seems to address these 2 buckets more evenly.
Maybe this is due to my very small aggregate IDs: they look like “H1”, “H2”, “A2”, and so on.
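
For completeness, the load factor can also be set programmatically on the command bus (a minimal sketch; whether your router picks this up at registration time may depend on the Axon version):

```java
import org.axonframework.commandhandling.distributed.DistributedCommandBus;

public class LoadFactorTuning {

    // Each member is placed on the consistent hash ring "loadFactor" times:
    // 2 members x loadFactor 100 = 200 segments, while loadFactor 1 = 2 segments.
    // With very short routing keys ("H1", "A2", ...), fewer segments spread the
    // MD5 hash more evenly across the two nodes.
    public static void applyLoadFactor(DistributedCommandBus commandBus) {
        commandBus.updateLoadFactor(1);
    }
}
```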