Does the Send endpoint wait for the consumer to complete? - MassTransit

I am starting with MassTransit and I have noticed that when I call Send<T> on ISendEndpoint, it seems the caller waits for the consumer to complete. Is this expected behavior? Since I am not using the mediator, shouldn't it just send the message to the endpoint and leave the API (from which I am calling) free to process other requests?

Calling Send on a bus send endpoint does not wait for the consumer to complete. The returned Task completes once the message has been acknowledged by the broker.
If you are using the request client and calling GetResponse, that will wait for a response before completing the returned Task<Response<TResponse>>.
When using the mediator, Send does wait for the consumer to complete.
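
For illustration, a minimal sketch of the difference, assuming MassTransit v8-style registration and hypothetical SubmitOrder/OrderAccepted contracts and queue name:

using System;
using System.Threading.Tasks;
using MassTransit;

// Hypothetical message contracts used only for illustration.
public record SubmitOrder(Guid OrderId);
public record OrderAccepted(Guid OrderId);

public class OrderService
{
    readonly ISendEndpointProvider _sendEndpointProvider;
    readonly IRequestClient<SubmitOrder> _client;

    public OrderService(ISendEndpointProvider sendEndpointProvider,
        IRequestClient<SubmitOrder> client)
    {
        _sendEndpointProvider = sendEndpointProvider;
        _client = client;
    }

    public async Task FireAndForget(Guid orderId)
    {
        var endpoint = await _sendEndpointProvider.GetSendEndpoint(new Uri("queue:submit-order"));

        // Completes when the broker acknowledges the message,
        // not when the consumer finishes processing it.
        await endpoint.Send(new SubmitOrder(orderId));
    }

    public async Task<OrderAccepted> RequestResponse(Guid orderId)
    {
        // The request client waits for the consumer's response (or a timeout)
        // before the awaited Task completes.
        Response<OrderAccepted> response =
            await _client.GetResponse<OrderAccepted>(new SubmitOrder(orderId));

        return response.Message;
    }
}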

Related

Request/response pattern: stop the consumer from reprocessing a message on application restart

I'm using the MassTransit request/response pattern, so I have a requester application and a consumer application, and it is working very well. I didn't configure any retry/redelivery because if an error happens in the consumer, the requester will handle it or perhaps send another request. So far so good.
But if the consumer application crashes and restarts in the middle of processing, the consumer takes the message from the queue and starts reprocessing it, which is not what I want, because the requester already got an error response (or a timeout) when the consumer application crashed. I know that message retry in MassTransit is entirely in-memory.
My question is: can we somehow stop the consumer from reprocessing the message on application restart? Or do we need to remove the pending message from the Service Bus queues?
Once sent, there is no connection between a message and the request client that sent it. Using the request timeout as the default, MassTransit sets the message's TimeToLive to that same value. The transport should remove the message once the TimeToLive has expired.
If the consumer application crashes consuming a message, that message will remain on the queue. If that message repeatedly causes your application to crash, you could check the Redelivered property that is on the ReceiveContext (a property on ConsumeContext) and possibly handle that message another way if you believe the message is causing the process to crash.
Of course, the real solution is to fix the consumer so it doesn't crash the process...
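
A minimal sketch of that Redelivered check, using hypothetical ProcessWork/WorkCompleted/WorkRejected contracts and assuming the consumer responds to the requester either way:

using System;
using System.Threading.Tasks;
using MassTransit;

// Hypothetical contracts used only for illustration.
public record ProcessWork(Guid WorkId);
public record WorkCompleted(Guid WorkId);
public record WorkRejected(Guid WorkId, string Reason);

public class ProcessWorkConsumer : IConsumer<ProcessWork>
{
    public async Task Consume(ConsumeContext<ProcessWork> context)
    {
        if (context.ReceiveContext.Redelivered)
        {
            // The message has already been delivered at least once; a previous
            // attempt may have crashed the process, so handle it differently
            // instead of blindly reprocessing it.
            await context.RespondAsync(new WorkRejected(context.Message.WorkId, "redelivered"));
            return;
        }

        // Normal processing path.
        await context.RespondAsync(new WorkCompleted(context.Message.WorkId));
    }
}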
UPDATE
You could configure the receive endpoint with a MaxDeliveryCount of 1 if you want Azure Service Bus to move the message to the dead-letter queue when the consuming process crashes.
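
A sketch of that configuration, assuming the Azure Service Bus transport and a hypothetical process-work receive endpoint:

using MassTransit;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

services.AddMassTransit(x =>
{
    x.AddConsumer<ProcessWorkConsumer>();

    x.UsingAzureServiceBus((context, cfg) =>
    {
        cfg.Host("<connection string>");

        cfg.ReceiveEndpoint("process-work", e =>
        {
            // With MaxDeliveryCount = 1, Azure Service Bus dead-letters the
            // message after the first failed delivery attempt, so a crashed
            // process does not trigger reprocessing when it restarts.
            e.MaxDeliveryCount = 1;

            e.ConfigureConsumer<ProcessWorkConsumer>(context);
        });
    });
});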

How to avoid gRPC async server streaming missing messages on start?

I've implemented an async server stream of messages, and I have tests that call the RPC, then cause a message to be sent from the server, and then wait for onNext().
Sometimes the tests fail because the RPC call arrives at the server after I trigger the message. If I add a sleep of 300 milliseconds on the server side, it fails consistently.
I've tried adding withWaitForReady() and it didn't help.
Is there a standard way to block an async stub until the method on the server side is finished?

NATS.io QueueSubscribe behavior on timeout

I'm evaluating NATS for migrating an existing message-based application.
I did not find documentation about message timeout exceptions and overload.
For example:
After a subscriber has been chosen, is it aware of the timeout set by the publisher? Is it possible to request an additional time extension?
If the elected subscriber knows that a DBMS connection is missing and it cannot complete the work, could it bounce the message so that the NATS server picks another subscriber and re-delivers the same message?
Ciao
Diego
For your first question: it seems to me that you are trying to publish a request message with a timeout (using nc.Request). If so, the timeout is managed by the client. Effectively, the client publishes the request message and creates a subscription on the reply subject. If the subscription doesn't get any messages within the timeout, it notifies you of the timeout condition and unsubscribes from the reply subject.
On your second question: are you using a queue group? A queue group in NATS is a subscription that specifies a queue group name. All subscriptions with the same queue group name are treated specially by the server: the server selects one of the queue group subscriptions to send each message to, rotating between them as messages arrive. However, the server's responsibility is simply to deliver the message.
To do what you describe, implement your functionality with request/reply, using a timeout and a maximum of one response message. If no response is received before the timeout, your client can resend the request after some delay or perform some other recovery logic. The reply message becomes your 'protocol' for knowing that the message was handled properly. Note that this gets into the design of your messaging architecture. For example, the timeout can trigger after the request recipient has received and handled the message but before the response could be published; in that case the request sender cannot tell the difference and will eventually republish. This hints that such interactions need to make requests idempotent to prevent duplicate side effects.
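
A minimal sketch of that request/reply flow, shown here with the NATS .NET client (NATS.Client) to keep the examples in one language (the question's nc.Request suggests the Go client, but the pattern is the same); the subject name and payload are hypothetical:

using System;
using System.Text;
using NATS.Client;

class Requester
{
    static void Main()
    {
        using IConnection nc = new ConnectionFactory().CreateConnection("nats://localhost:4222");

        try
        {
            // Publishes the request, subscribes to a reply inbox, and waits
            // up to 2000 ms for a single response from whichever queue-group
            // member the server picked.
            Msg reply = nc.Request("orders.process", Encoding.UTF8.GetBytes("payload"), 2000);
            Console.WriteLine($"Handled: {Encoding.UTF8.GetString(reply.Data)}");
        }
        catch (NATSTimeoutException)
        {
            // No reply within the timeout: back off and resend, or run other
            // recovery logic. Because the timeout can fire after the work was
            // actually done, the request should be idempotent.
        }
    }
}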

Spring AMQP synchronous reply and multithreading

I have a project where I send AMQP messages to a RabbitMQ server. These messages are synchronous (I use the sendAndReceive method). So I have a SimpleMessageListenerContainer configured with the RabbitTemplate as its MessageListener, and the reply queue is fixed (setReplyAddress on the RabbitTemplate).
If I have a multithreaded server (Tomcat) where it is possible to send several messages simultaneously, could there be a problem if the responses don't arrive in order, or if the application sends a message before the response to another message has arrived?
Responses are paired with requests using the correlationId header. The client has to set it to a unique value for every request, and the server has to set the same value on the corresponding response. That way the client can pair the messages even when they arrive in arbitrary order.
From the RabbitMQ tutorial:
In the method presented above we suggest creating a callback queue for every RPC request. That's pretty inefficient, but fortunately there is a better way - let's create a single callback queue per client.
That raises a new issue, having received a response in that queue it's not clear to which request the response belongs. That's when the correlationId property is used. We're going to set it to a unique value for every request. Later, when we receive a message in the callback queue we'll look at this property, and based on that we'll be able to match a response with a request. If we see an unknown correlationId value, we may safely discard the message - it doesn't belong to our requests.
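
Spring AMQP's RabbitTemplate does this correlation bookkeeping for you; the sketch below only illustrates the mechanism, using the RabbitMQ .NET client (v6-style API) to keep the examples in one language, with hypothetical queue names:

using System;
using System.Collections.Concurrent;
using System.Text;
using System.Threading.Tasks;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public class RpcClient
{
    readonly IModel _channel;
    readonly string _replyQueue;
    readonly ConcurrentDictionary<string, TaskCompletionSource<string>> _pending = new();

    public RpcClient(IConnection connection)
    {
        _channel = connection.CreateModel();

        // One fixed reply queue shared by all requests from this client.
        _replyQueue = _channel.QueueDeclare(queue: "", exclusive: true).QueueName;

        var consumer = new EventingBasicConsumer(_channel);
        consumer.Received += (_, ea) =>
        {
            // Match the reply to its request by correlationId, regardless of
            // the order in which replies arrive; unknown ids are ignored.
            if (_pending.TryRemove(ea.BasicProperties.CorrelationId, out var tcs))
                tcs.TrySetResult(Encoding.UTF8.GetString(ea.Body.ToArray()));
        };
        _channel.BasicConsume(_replyQueue, autoAck: true, consumer);
    }

    public Task<string> SendAndReceive(string message)
    {
        var correlationId = Guid.NewGuid().ToString();
        var tcs = new TaskCompletionSource<string>();
        _pending[correlationId] = tcs;

        var props = _channel.CreateBasicProperties();
        props.CorrelationId = correlationId;
        props.ReplyTo = _replyQueue;

        _channel.BasicPublish(exchange: "", routingKey: "rpc_queue",
            basicProperties: props, body: Encoding.UTF8.GetBytes(message));

        return tcs.Task;
    }
}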

Calling xpc_connection_send_message_with_reply_sync in an XPC event handler

In an XPC service I'm developing, I would like to call xpc_connection_send_message_with_reply_sync or xpc_connection_send_message_with_reply from within the service's event handler (it requests some additional data from the client).
Instead of sending the message to the client, it hangs. It seems the message is only sent after my event handler finishes.
Is there a way to communicate with the client without first returning from the event handler?
Apparently it only hangs when sending a message on the same connection whose event handler my code is running in. Sending a message on a different connection works fine.
