Query Model Design for Distributed Services

Hi all,

When designing bounded contexts, entities from one context may appear as value objects (VOs) in another context. Generally, those will always include Ids.

Consider context A and B, I thought of these two ways:

  1. I include some fields of an entity from context A in a VO in context B.
    Display in B always has all the data available, but each update of the entity in context A also requires an update in B.

  2. I use only the Id of the entity from A in the VO in B, but each time I want to display data in B, I have to send an HTTP request to A to get the data.

Which way is more commonly used? Which is less costly? Is there a third way?


Hi Harvey,

be careful with the use of the term “Bounded Context”. A “context” defines an area where words have a specific meaning, which may differ from the meaning of the same word in another context. You would want to design systems or components to work within a single context, to avoid polluting your model.
From your description, I take it that you are designing two different components, which store information that may have some relationship. Essentially, the last section (strategic design) of the DDD book by Evans covers this. It’s a big section, so that’s a clear sign that there’s no easy answer to your question.

But to not leave you completely in the dark (with a blue book), here are some ideas to hopefully get you going.

If component B needs information that component A “owns”, there are a few approaches:

  • Have component A emit events, which component B handles to update its own model
  • Have component B perform a query to component A, each time it needs information
  • Have component B maintain a copy or derived model, which it queries from A, triggered by an event emitted by A.

Which choice is best depends on the relationship between A and B (same bounded context? same (sub)domain? same dev team?), their respective non-functional requirements (# updates, lifecycle, SLA) and probably many other factors as well. Note that you’re creating a form of (inevitable) coupling here, so it’s important to design how you want that coupling to be.
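The first and third approaches boil down to component B keeping its own read model, updated from A’s events. A minimal sketch of that idea, with made-up names (CustomerRenamedEvent, CustomerSummaryProjection — nothing here is a real Axon API, though in Axon the handler would typically be an @EventHandler method):

```java
import java.util.HashMap;
import java.util.Map;

// Emitted by component A whenever the entity changes.
class CustomerRenamedEvent {
    final String customerId;
    final String newName;
    CustomerRenamedEvent(String customerId, String newName) {
        this.customerId = customerId;
        this.newName = newName;
    }
}

// Lives inside component B: a local model updated from A's events.
class CustomerSummaryProjection {
    private final Map<String, String> namesById = new HashMap<>();

    // Handle the event by updating B's own model.
    void on(CustomerRenamedEvent event) {
        namesById.put(event.customerId, event.newName);
    }

    // B can now answer display queries locally, without an HTTP call to A.
    String nameOf(String customerId) {
        return namesById.get(customerId);
    }
}
```

The trade-off is exactly the one described above: B pays for handling every update, but queries are local and cheap.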



Thanks Allard. Yeah, I usually say “context” instead of “component”, where all of them are found in the same bounded context. In my case, however, each component is a standalone application.

I have an issue caused by the dependency between two components. Since one component depends on the other (shared events), when I activate Spring Security in one (the entry point component), it blocks access to the other when testing it (the second) separately.

Hi Harvey,

regarding events, note that the dependency is primarily on the contract of the events, not on the classes themselves. Of course, sharing event classes is easy and often convenient. When doing so, make sure to only share the relevant classes (events, commands and the value objects used in them), and not any implementation-specific classes.

A better, more pure, approach would be to only share the contract for the serialized form of events, and have each application generate/build their own event classes. This allows a client to only model the data that’s relevant to that application, ignoring any fields that it doesn’t need.
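To make that concrete, here is a hypothetical sketch of the idea: the implicit contract is the set of field names in the serialized payload, and each application defines its own event class that maps only the fields it cares about. The field names and class names are illustrative, and the deserialized payload is represented as a plain Map (standing in for parsed JSON):

```java
import java.util.Map;

// Application Y's view of an "order placed" event: it only needs orderId
// and amountInCents, and simply ignores any other fields in the payload.
class OrderPlacedForBilling {
    final String orderId;
    final long amountInCents;

    OrderPlacedForBilling(String orderId, long amountInCents) {
        this.orderId = orderId;
        this.amountInCents = amountInCents;
    }

    // Build the local event class from the deserialized payload.
    static OrderPlacedForBilling fromPayload(Map<String, Object> payload) {
        return new OrderPlacedForBilling(
                (String) payload.get("orderId"),
                ((Number) payload.get("amountInCents")).longValue());
    }
}
```

Because the only coupling is on the field names, the producer can add fields (say, a shipping address) without breaking this consumer.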



Hi Allard,
Regarding this,

A better, more pure, approach would be to only share the contract for the serialized form of events, and have each application generate/build their own event classes. This allows a client to only model the data that’s relevant to that application, ignoring any fields that it doesn’t need

I don’t understand what you mean by the ‘contract for the serialized form of the events’. How is the generation of event classes done? Is there any example of that?


Hi Harvey,

What Allard means is that when, for example, two separate applications are involved, where [Application X] applies [Event X] to which [Application Y] also listens, you don’t want to tie both applications to each other based on the actual implementation of the event.
The only actual ‘contract’ both applications have with each other is the serialized form of the event, instead of the concrete implementation.

Generating events based on a serialized event format (for example JSON Schema, or more concretely Protocol Buffers) is doable, although I haven’t tried something like this myself.
There also isn’t an example of this (that I’m aware of) on the Axon Framework GitHub, although there are definitely examples out there on how to generate Java classes from serialized message formats.

Hope this helps!


That’s indeed what I meant. Was out on a business trip, so unable to respond for a while.

However, I didn’t mean that you must generate classes from a schema. What I meant was that you must define classes (generated, shared or simply programmed ‘manually’) that are compatible with the schema (which is either implicit or explicit).

Sharing classes is easy and will work, but as soon as you (implicitly or explicitly) change the schema, your classes may become backwards-incompatible, forcing you to update the receiving end of the events as well.

Hope this helps.


Thanks Allard, Stephen.

I did it this way:
I created a JSONObject, added the fields that the other service needs, and applied it as a second event: apply(event).andThenApply(() -> jsonObject). The issue is that sometimes the first event gets ignored, rightfully, but the second (the JSONObject) doesn’t reach the queue for some reason.
I don’t know if this is recommended, but I actually don’t like applying a second event when the first could have been used.
I tried looking at some examples of eventual consistency, but those still use the method I was trying to avoid; that is, having a copy of the event class in both services and just deserializing into that object. I don’t think that will work with Axon anyway, as the type of the RabbitMQ message is the fully qualified class name of the event.

Anyway, I want this handled in a transaction, so that if the operation in Service B fails, the operation in Service A is rolled back. Any pointers on how to do distributed Sagas?

You mention distributed Sagas, but I don’t see the relationship with your problem. A Saga is a component that coordinates activity; it is not a transaction. Part of that coordination involves dealing with errors during the process. A common approach to dealing with errors is to perform compensating actions, to essentially undo what you had done before. A Saga doesn’t need to be distributed for this.
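The coordination-with-compensation idea can be sketched without any framework at all. This is a made-up, minimal coordinator, not Axon’s Saga API: each Step is an independent action (its own local transaction), and on failure, the previously completed steps are compensated in reverse order.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// A step in the process: a forward action plus its compensating action.
interface Step {
    void execute();     // may throw on failure
    void compensate();  // undoes a previously successful execute()
}

class CompensatingCoordinator {
    // Runs the steps in order. On failure, compensates the steps that
    // already completed, in reverse order. Returns true if all succeeded.
    boolean run(List<Step> steps) {
        Deque<Step> completed = new ArrayDeque<>();
        for (Step step : steps) {
            try {
                step.execute();
                completed.push(step);
            } catch (RuntimeException e) {
                while (!completed.isEmpty()) {
                    completed.pop().compensate();
                }
                return false;
            }
        }
        return true;
    }
}
```

Note that nothing here blocks or rolls back atomically: the “undo” is a new action, which is exactly the eventual-consistency trade-off discussed below.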



Thanks Allard. Maybe I misunderstood Sagas. So if I understand correctly, in my scenario, Application X applies event X1, which Application Y listens to; Y gets the required data and sends a command to the concerned aggregate, which then applies an event Y1 to update its model. If an error occurs in Application Y during the process, Application Y applies an undo operation (command + event Y2), which Application X listens to and “undoes” the previously successful operation.

If that’s correct, how does App X “wait” or block the data from being used until it confirms that App Y was successful in its operation?

Hi Harvey,

the premise of eventual consistency is that you don’t block. You assume things will be all right, eventually. It is still possible that certain scenarios require some clarity on what’s confirmed and what’s not. In that case, you can use the Reservation pattern (see http://freecontent.manning.com/reservation-pattern/). Basically, you say you’re “planning to do something” first, then perform whatever side effects you need, and then confirm. Using this pattern, each individual action is an ACID transaction, but the different actions as a whole apply eventual consistency.
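As an illustration of the Reservation pattern, here is a hypothetical sketch (the class and state names are invented). In a real system each state change would be committed in its own ACID transaction; the side effects happen between the reserve and confirm/cancel steps:

```java
// Step 1: record the intent (RESERVED). Step 2 happens elsewhere: perform
// the side effects (payment, notification, ...). Step 3: confirm on
// success, or cancel on failure to free the reservation again.
class SeatReservation {
    enum State { RESERVED, CONFIRMED, CANCELLED }

    private State state = State.RESERVED; // created = intent announced

    void confirm() {
        if (state != State.RESERVED) {
            throw new IllegalStateException("not in RESERVED state");
        }
        state = State.CONFIRMED;
    }

    void cancel() {
        if (state != State.RESERVED) {
            throw new IllegalStateException("not in RESERVED state");
        }
        state = State.CANCELLED;
    }

    State state() { return state; }
}
```

Until the reservation is confirmed, other readers can see that the resource is spoken for but not yet final, which answers the “how does App X wait” question without actually blocking.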

A warning, though: ACID transactions are way overrated. In many cases they sound very convenient, but aren’t necessary from a business perspective. The real world isn’t transactional either. Don’t over-engineer for errors that hardly ever occur, especially when the ‘damage’ is limited when they do.

Hope this helps.