One consideration for this kind of API design is who the consumer of the API is and how much control you have over their expectations.
Our service has a public API that originally was fully asynchronous in the way Allard recommends, returning only a “Got your request and will work on it shortly” response. We clearly documented the asynchronous behavior in our API documentation.
But we still got bug reports from customers who were alarmed when they sent us a command, then within milliseconds of our HTTP response sent a “get the status of request XYZ” query and got back a “request XYZ not found” response, because our system hadn’t yet had time to update the query model.
Clearly a case can be made that that’s broken behavior on the part of the client, but despite the documentation, it surprised some customers. So we eventually changed our code to populate the query model with an initial “request has been received” state before dispatching any commands to the command bus and before returning a response to the HTTP request. The code that updates the query model in response to domain events had to change a bit to cope with the fact that a row for a new request may already exist in the table, but may not (in the case of a replay).
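A minimal sketch of that pattern, using SQLite as a stand-in for the query store (the table, statuses, and function names here are illustrative, not our actual schema): the initial row is written synchronously before any command is dispatched, and the event handler upserts so it works whether or not the row already exists (the replay case).

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE request_status (id TEXT PRIMARY KEY, status TEXT)")

def accept_request(request_id: str) -> None:
    # Write an initial row *before* dispatching the command, so an
    # immediate status query never sees "not found".
    db.execute(
        "INSERT INTO request_status (id, status) VALUES (?, 'RECEIVED')",
        (request_id,),
    )
    db.commit()
    # dispatch_command(CreateRequest(request_id))  # asynchronous from here on

def on_request_processed(request_id: str) -> None:
    # Event handler for the query model. The row normally exists already,
    # but may not during a replay, so upsert instead of a plain UPDATE.
    db.execute(
        "INSERT INTO request_status (id, status) VALUES (?, 'PROCESSED') "
        "ON CONFLICT(id) DO UPDATE SET status = 'PROCESSED'",
        (request_id,),
    )
    db.commit()

def get_status(request_id: str):
    row = db.execute(
        "SELECT status FROM request_status WHERE id = ?", (request_id,)
    ).fetchone()
    return row[0] if row else None
```

A status query arriving between `accept_request` and the event handler now returns “RECEIVED” instead of “not found”.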
This turns out to have had a minor side benefit: where previously we could only detect duplicate IDs when the “create aggregate” command bombed out, we can now treat the query model as a reliable indicator of whether an aggregate ID is already in use. Our API is batch-based, with a separate unique ID for each item in the batch, and it’s much, much faster to do a single bulk insert into a query table and catch duplicate-key exceptions than it is to create aggregates one at a time and individually catch “aggregate already exists” exceptions. For batches in the thousands or tens of thousands of items, the difference becomes significant.
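The bulk-insert duplicate check can be sketched like this, again with SQLite standing in for the query store and with illustrative names. One transaction reserves every ID in the batch; any duplicate key raises once and rolls the whole batch back, instead of one round-trip per item.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE request_status (id TEXT PRIMARY KEY, status TEXT)")

def accept_batch(item_ids: list[str]) -> bool:
    """Reserve all IDs in a single bulk insert; reject the batch if any
    ID is already in use. Returns True if the whole batch was accepted."""
    try:
        with db:  # one transaction: all-or-nothing
            db.executemany(
                "INSERT INTO request_status (id, status) "
                "VALUES (?, 'RECEIVED')",
                [(item_id,) for item_id in item_ids],
            )
        return True
    except sqlite3.IntegrityError:
        # At least one ID was a duplicate; the transaction rolled back,
        # so none of the batch's rows were inserted.
        return False
```

Whether to reject the whole batch or skip just the duplicate items is a policy choice; the point is that the check is one statement rather than one aggregate creation per item.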
However, for APIs that are only used internally by our own code, we don’t bother with this because we know the consumer of the API is prepared to treat it as fully asynchronous. We know that duplicate aggregate IDs are more or less impossible due to the way our internal clients generate their IDs, so there’s less need to proactively check for that error condition early in the pipeline (though obviously we still have code to recover from it if it happens due to a client bug).