is there a gRPC "completion queue" for synchronous server-side streaming? - ruby

I am working on implementing a gRPC service and I would like to use the server-streaming functionality. But since I am working with a Ruby client, from what I understand, the request must be synchronous. Is there anything to know when working with synchronous streams? What happens if the client is slower to process the stream than the server is to produce it: is each message sent on the stream queued somewhere so the client can process it? If so, is there a size limit to that queue?
It seems I am able to process all the messages I receive, but what does that look like in a production environment? There is little information on what happens to a message after it is received by the client but before it is processed.
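For what it's worth, gRPC's backpressure mechanism here is HTTP/2 flow control: once the per-stream window fills up, the sender is paused rather than queueing messages without bound. The Ruby synchronous API doesn't expose this directly (the client just iterates an Enumerator), but a minimal grpc-java sketch shows the hooks gRPC gives the server side; the class name and the use of String messages are invented for illustration, not taken from the question:

    import io.grpc.stub.ServerCallStreamObserver;
    import io.grpc.stub.StreamObserver;
    import java.util.Iterator;

    public class FlowControlledStream {
        // 'responseObserver' would come from a generated service method signature.
        public static void stream(Iterator<String> chunks, StreamObserver<String> responseObserver) {
            ServerCallStreamObserver<String> call =
                    (ServerCallStreamObserver<String>) responseObserver;
            call.setOnReadyHandler(new Runnable() {
                private boolean done = false;
                @Override
                public void run() {
                    if (done) return;
                    // isReady() turns false once the HTTP/2 flow-control window
                    // to this client is full, i.e. the client has stopped reading
                    // fast enough; gRPC re-runs this handler when it catches up.
                    while (call.isReady() && chunks.hasNext()) {
                        call.onNext(chunks.next());
                    }
                    if (!chunks.hasNext()) {
                        call.onCompleted();
                        done = true;
                    }
                }
            });
        }
    }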

Related

Long Lived GRPC Calls

I am wondering about best practices for long-lived gRPC calls.
I have a typical Client --> Server call (both golang) and the server processing can take up to about 20-30 seconds to complete. I need the client to wait until it is completed before I move on. Options that I see (and I don't love any of them):
1. Set the timeout to an absurd length (e.g. 1 min) and just wait. This feels like a hack, and I also expect to run into strange behavior in my service mesh with things like this going on.
2. Use a stream - I still need to do option #1 here, and it really doesn't help me much, as my response is really just unary and a stream doesn't do me much good.
3. Polling (I implemented this and it works, but I don't love it) - I do most of the processing async and have my original gRPC call return a transactionID that is stored in Redis and holds the state of the transaction. I created a different gRPC endpoint to poll the status of the transaction in a loop.
4. Queue or stream (e.g. a Kafka stream) - set up the client to be a listener on something like a Kafka topic and have my server notify the (queue || stream) when it is done, so that my client would pick it up. I thought this would work, but it seemed way over-engineered.
Option #3 is working for me, but it sure feels pretty dirty, and I am also 100% dependent on Redis. Given that gRPC is built on HTTP/2, I would think that maybe there is some sort of server-push option, but I am not finding any.
I fear that I am overlooking a simple way to handle this problem.
Thanks
A long-lived gRPC channel is an important use case and fully supported. However, one gRPC channel may have more than one TCP connection, and TCP can get disconnected due to inactivity. You can use keep-alive or HTTP/2 pings to keep the TCP connection alive. See this thread for more details.
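As a minimal sketch of configuring client-side keep-alive pings, here is what it looks like with grpc-java (the question's code is Go, where the equivalent settings live in the keepalive package; the interval values below are arbitrary):

    import java.util.concurrent.TimeUnit;
    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    public class ChannelFactory {
        public static ManagedChannel longLivedChannel(String host, int port) {
            return ManagedChannelBuilder.forAddress(host, port)
                    .keepAliveTime(30, TimeUnit.SECONDS)     // send an HTTP/2 ping after this much inactivity
                    .keepAliveTimeout(10, TimeUnit.SECONDS)  // consider the connection dead if the ping gets no reply
                    .keepAliveWithoutCalls(true)             // ping even while no RPC is in flight
                    .usePlaintext()                          // assumption: no TLS in this sketch
                    .build();
        }
    }

Note that servers can police ping frequency, so aggressive keep-alive settings may need a matching server-side configuration that permits them.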
None of the options you mentioned address the issue that your server takes a while to respond. Unless there’s something I’m missing, nothing in your question is a gRPC issue.

Microservices asynchronous response

I have come across many blogs saying that using RabbitMQ improves the performance of microservices due to the asynchronous nature of RabbitMQ.
I don't understand, in that case, how the HTTP response is sent to the end user. I am elaborating on my question below more clearly:
1. The user sends an HTTP request to microservice1 (the user-facing service).
2. microservice1 sends it to RabbitMQ because it needs some service from microservice2.
3. microservice2 receives the request, processes it, and sends the response to RabbitMQ.
4. microservice1 receives the response from RabbitMQ.
Now, how is this response sent to the browser? Does microservice1 wait until it receives the response from RabbitMQ? If yes, then how is it asynchronous?
It's a good question. To answer it, you have to imagine the server running one thread at a time. Making a request to a microservice via RestTemplate is a blocking call. The user clicks a button on the web page, which triggers your Spring Boot method in microservice1. In that method, you make a request to microservice2, and microservice1 does a blocking wait for the response.
That thread is busy waiting for microservice2 to complete the request. Threads are not expensive, but on a very busy server, they can be a limiting factor.
RabbitMQ allows microservice1 to queue up a message to microservice2 and then release the thread. Your receive handler will be triggered by the system (Spring Boot / RabbitMQ) when microservice2 processes the message and provides a response. That thread in the thread pool can be used to process other users' requests in the meantime. When the RabbitMQ response comes, the thread pool uses an idle thread to process the remainder of the request.
Effectively, you're making the server running microservice1 have more threads available more of the time. It only becomes a problem when the server is under heavy load.
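A rough sketch of that hand-off with Spring AMQP (the queue and exchange names are made up for illustration; Spring manages the listener-container threads, so no request thread is parked while microservice2 works):

    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.stereotype.Service;

    @Service
    public class EnrichmentFlow {
        private final RabbitTemplate rabbit;

        public EnrichmentFlow(RabbitTemplate rabbit) {
            this.rabbit = rabbit;
        }

        // Called from the controller in microservice1: enqueue and return at
        // once, releasing the request thread.
        public void requestEnrichment(String orderId) {
            rabbit.convertAndSend("orders.exchange", "orders.enrich", orderId);
        }

        // Invoked later on a listener-container thread once microservice2 has
        // published its reply to the queue.
        @RabbitListener(queues = "orders.enriched")
        public void onEnriched(String payload) {
            // continue processing / notify the waiting client here
        }
    }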
Good question, let's discuss it one point at a time.
Synchronous behavior:
The client sends an HTTP (or any other) request and waits for the HTTP response.
Asynchronous behavior:
The client sends the request, and another thread waits on the socket for the response. Once the response arrives, the original sender is notified (usually using a callback-like structure).
Now we can talk about blocking vs. non-blocking calls:
With Spring's blocking REST calls, each call occupies a thread that sits waiting for the response; with non-blocking calls, requests are multiplexed over a small number of threads and the response is pushed back to the caller without tying up a thread per request.
Now, to come to your question:

    Using RabbitMQ improves the performance of microservices due to the asynchronous nature of RabbitMQ.

No. Performance depends on the throughput (TPS) you have to handle, and RabbitMQ is not, by itself, going to improve performance.
Messaging gives you two different messaging models:
Synchronous messaging
Asynchronous messaging
What messaging does give you is loose coupling and fault tolerance.
If your application needs a blocking call (i.e. it cannot move on without the response), use REST.
If you can proceed without waiting for the response, go ahead with non-blocking calls.
If you want to design your app to be loosely coupled, go with messaging.
In short, these are all architectural styles for how you want to architect your application; performance depends on scalability.
You can combine REST with messaging, and non-blocking calls with messaging.
In your scenario, microservice1 could make a blocking REST call to the other API (using RestTemplate or WebClient), or go through a messaging queue, and once it gets the response, return the JSON response to your web app.
I would take another look at your architecture. In general, with microservices - especially user-facing ones that must be essentially synchronous - it's an anti-pattern for ServiceA to have to make a call to ServiceB (which may, in turn, call ServiceC, and so on...) to return a response. That condition indicates those services are tightly coupled, which makes them fragile. For example: if ServiceB goes down or is overloaded in your example, ServiceA also goes offline through no fault of its own. So, probably one or more of the following should occur:
Deploy the related services behind a facade that encloses the entire domain - let the client interact synchronously with the facade and let the facade handle talking to multiple services behind the scenes.
Use MQTT or AMQP to publish data as it gets added/changed in ServiceB and have ServiceA subscribe to pick up what it needs so that it can fulfill the user request without explicitly calling another service
Consider merging ServiceA and ServiceB into a single service that can handle requests without having to make external calls
You can also send the HTTP request from the client to the service, set the application state to "waiting" or similar, and have the consuming application subscribe to an eventSuccess or eventFail integration message from the bus. The main point of this idea is that you let daisy-chained services (which, again, I don't like) take their turns, and whichever service "finishes" the job publishes an integration event to let anyone who's listening know. You can even do things like pass webhook URIs with the initial request to have services call the app back directly on completion (or use SignalR, or gRPC, or...).
The way we use RabbitMQ is to integrate services in real-time so that each service always has the info it needs to be responsive all by itself. To use your example, in our world ServiceB publishes events when data changes. ServiceA only cares about, and subscribes to a small subset of those events (and typically only a field or two of the event data), but it knows within seconds (usually less) when B has changed and it has all the information it needs to respond to requests. Each service literally has no idea what other services exist, it just knows events that it cares about (and that conform to a contract) arrive from time-to-time and it needs to pay attention to them.
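A minimal sketch of that subscription side with Spring AMQP; the queue, event, and field names are invented for illustration, and a JSON message converter is assumed to be configured:

    import java.math.BigDecimal;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.stereotype.Component;

    @Component
    public class ProductPriceCache {
        private final Map<String, BigDecimal> prices = new ConcurrentHashMap<>();

        // ServiceB publishes "price changed" events; ServiceA subscribes only to
        // the small subset (and the one or two fields) it cares about.
        @RabbitListener(queues = "servicea.product-price-changed")
        public void onPriceChanged(PriceChangedEvent event) {
            prices.put(event.productId, event.newPrice);
        }

        // Used while serving user requests - no cross-service call needed.
        public BigDecimal priceOf(String productId) {
            return prices.get(productId);
        }

        public static class PriceChangedEvent {
            public String productId;
            public BigDecimal newPrice;
        }
    }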
You could also use events and make the whole flow async. In this scenario, microservice1 creates an event representing the user request and then immediately returns a "request created" response to the user. You can then notify the user later, once the request has finished processing.
I recommend the book Designing Event-Driven Systems written by Ben Stopford.
I asked a similar question to Chris Richardson (www.microservices.io). The result was:
Option 1
You use something like websockets, so that microservice1 can send the response when it's done.
Option 2
microservice1 responds immediately (OK - request accepted). The client polls the server repeatedly until the state changes. The important part is that microservice1 stores some state about the request (i.e. an initial state of "accepted", so the client can show a spinner), which is modified when the response finally arrives (i.e. the state is updated to "complete").
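A bare-bones sketch of option 2 in Spring Boot; the endpoint paths, the in-memory state map, and the simulated work are all invented for illustration (real code would persist the state somewhere durable):

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.*;

    @RestController
    public class RequestStatusController {
        private final Map<String, String> state = new ConcurrentHashMap<>();

        @PostMapping("/requests")
        public ResponseEntity<String> create() {
            String id = UUID.randomUUID().toString();
            state.put(id, "accepted");                 // client can show a spinner
            CompletableFuture.runAsync(() -> {
                // ... the long-running processing would happen here ...
                state.put(id, "complete");
            });
            return ResponseEntity.accepted().body(id); // 202 plus the id to poll with
        }

        @GetMapping("/requests/{id}")
        public String status(@PathVariable String id) {
            return state.getOrDefault(id, "unknown");
        }
    }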

RabbitMQ keep messages in queue

I am streaming a tty's stdout and stderr to RabbitMQ (logs, to be exact). These logs can be viewed on a website; while the content is streamed to RabbitMQ, it is consumed by the webserver and forwarded to the client using WebSockets. Logs are persisted immediately after being sent to RabbitMQ.
When the user accesses the website the persisted logs are rendered and the consecutive parts are streamed using WebSockets. The problem is that there is a race condition as the persisted logs might be missing chunks of the log that occurred between rendering the site and receiving the first chunk via WebSocket.
My idea was to keep all chunks in the queue and send those via the WebSocket after connecting. Additionally, I would add a worker that listens for some kind of "finished" event and then takes everything in the queue and persists it at once.
The problem is that I don't know if this is possible using RabbitMQ or how. Any ideas or other solutions?
I don't think it really matters, but my stack is Ruby with Sinatra and the Bunny RabbitMQ client.
While I agree with your general idea of picking up where you left off after loading the initial page, what you're trying to do isn't something that should be done with RabbitMQ.
There are a lot of potential problems that this would cause, which I've outlined in a blog post, previously.
Instead of trying to do this w/ RMQ, I would do this from a database layer.
As you push things into the database, you have an ID - hopefully one that is sequential. If not, add a sequence to the entries.
When you load the page for the user, send the current ID that they are at down to the browser.
After the page finishes loading and you're setting up the websocket connection, send the user's current spot in the list of messages via the websocket. Then the websocket connection can use that ID to say "give me all the messages after this ID, and start streaming them".
Again, this is not done via RabbitMQ (see my article on why this is a bad idea), but via your database and sequential IDs.
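A sketch of the "give me everything after this ID" query, shown in Java/JDBC for concreteness even though the question's stack is Ruby/Sinatra; the table and column names are invented:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class LogReplay {
        // Called when the websocket connects with the last ID the page rendered;
        // everything newer is replayed before switching to the live feed.
        public static List<String> chunksAfter(Connection db, long lastSeenId) throws SQLException {
            String sql = "SELECT body FROM log_chunks WHERE id > ? ORDER BY id";
            try (PreparedStatement ps = db.prepareStatement(sql)) {
                ps.setLong(1, lastSeenId);
                try (ResultSet rs = ps.executeQuery()) {
                    List<String> chunks = new ArrayList<>();
                    while (rs.next()) {
                        chunks.add(rs.getString("body"));
                    }
                    return chunks;
                }
            }
        }
    }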

JMS consumer inside a Netty handler?

I'm designing a quite complicated system and was wondering what the best way is to put a JMS consumer (ActiveMQ, VM protocol, non-persistent) inside a Netty handler.
Let me explain: I have several clients connecting to my Netty server using WebSockets. For every client connection I create a JMS consumer that listens for interesting messages on one or more topics. If an interesting message arrives, I need to do an extra step (additional filtering) before sending the message to the client over the WebSocket.
Is the following a good way to do this:
Inside a SimpleChannelInboundHandler I declare a private, non-static consumer.
The consumer is initialized in channelActive.
The consumer is destroyed in channelInactive.
When a message is received by the consumer, I do the extra filtering and send it using ctx.channel().write().
In this setup I'm a bit worried that the consumer might turn into a slow consumer and slow everything down, because the WebSocket goes over the internet.
I came up with a more complex design to decouple the receiving of messages by the consumer from the sending of messages through the WebSocket:
Inside a SimpleChannelInboundHandler I declare a private, non-static consumer.
The consumer is initialized in channelActive.
The consumer is destroyed in channelInactive.
When a message is received by the consumer, I put it in a blocking queue.
Every minute I let a thread (created for every client) look in the queue and send any found messages to the client using ctx.channel().write().
At this point I'm a bit worried about the extra thread per client.
Or is there maybe a better way to accomplish this task?
This is a classic slow-consumer problem, and the first step to resolving it is to determine what the appropriate action is when a slow consumer is detected. If it is acceptable for the slow consumer to miss messages, then the solution is some variation on dropping messages or unsubscribing it from the feed. For example, if it's acceptable that the client misses messages then, when one is received from JMS, check whether the channel is writable. If it isn't, drop the message. If you want to give yourself a bit more of a buffer (although OS buffers are quite large), you can track the number of write-completion futures that haven't completed (i.e. messages that haven't yet been written to the OS send buffer) and drop messages if there are too many outstanding write requests.
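A sketch of that strategy, assuming the JMS listener holds a reference to the client's Netty channel (the class name, the MAX_PENDING constant, and the filtering stub are invented for illustration):

    import io.netty.channel.Channel;
    import io.netty.channel.ChannelFutureListener;
    import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;
    import java.util.concurrent.atomic.AtomicInteger;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    public class DroppingForwarder implements MessageListener {
        private static final int MAX_PENDING = 100; // tune to taste
        private final Channel channel;
        private final AtomicInteger pendingWrites = new AtomicInteger();

        public DroppingForwarder(Channel channel) {
            this.channel = channel;
        }

        @Override
        public void onMessage(Message message) {
            // Drop when the socket's buffers are full or too many writes are in flight.
            if (!channel.isWritable() || pendingWrites.get() >= MAX_PENDING) {
                return; // slow consumer: this client misses the message
            }
            pendingWrites.incrementAndGet();
            channel.writeAndFlush(toFrame(message))
                   .addListener((ChannelFutureListener) f -> pendingWrites.decrementAndGet());
        }

        // Stand-in for the "additional filtering" step; assumes TextMessages.
        private TextWebSocketFrame toFrame(Message m) {
            try {
                return new TextWebSocketFrame(((TextMessage) m).getText());
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }
    }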
If the client may not miss messages, and is consistently slow, then the problem is more difficult. One option might be to divert messages to a JMS queue with a specific header value, then open a new consumer that reads messages from that queue using a JMS selector. This will put more load on the JMS server but might be appropriate for temporary slowness, and hopefully it won't interfere with your main topic feeds. Alternatively, you might want to stash the messages in a different store, such as a database, so you can poll for messages when they can be sent. If you do this right, a single polling thread can cope with many clients (query for clients which have outstanding messages, then, for each client, load a batch of messages). However, this isn't as convenient as using JMS.
I wouldn't go with option 2, because the blocking queue only solves the problem temporarily, and you can achieve the same thing by tracking how many write operations are waiting to complete.

About JMS system structure

I'm writing a server/client game, and a typical scenario looks like this: one client (clientA) sends a message to the server, where a MessageDrivenBean handles such messages. After the MDB finishes its job, it sends the result message back to another client (clientB).
In my opinion I only need two queues for such communication: one for input, the other for output. Creating a new queue for each connection is not a good idea, right?
The input queue is relatively clear: if several clients send messages at the same time, the messages simply wait in the queue, and since there are multiple MDB instances on the server, that should not be a big performance issue.
But I am not quite clear about the output queue. Should I use a topic instead of a queue? With a queue, every client listens on the output queue; one of them gets the new message and checks a property to determine whether the message is addressed to it. If not, it rolls back the transaction and the message goes back into the queue, ready for another client... It should work, but it must be very slow. If I use a topic instead, every client gets a copy of the message and simply ignores the ones not addressed to it. That should be better, right?
I'm new to messaging systems. Are there any suggestions about my implementation? Thanks!
To begin with, choosing JMS as a gaming platform is, well, unusual - businesses use JMS brokers for delivery reliability and transaction support. Do you really need this heavy lifting in a game? Shouldn't you resort to your own HTTP-based protocol, for example?
That said, two queues are a standard pattern for point-to-point communication. Creating a queue for each new connection is definitely not OK - message-driven beans are attached to queues at deployment time, so you won't be able to respond to queue-creation events. Besides, queues are not meant to be created and destroyed in short cycles; they're designed to be long-lived entities. If you need to deliver a message to one specific client, have the client listen on the server response queue with a message selector set to filter only the messages intended for that client (see the javax.jms.Message API).
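A minimal sketch of that selector pattern; the clientId property name and the shared responses queue are invented for illustration:

    import javax.jms.JMSException;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class ResponseRouting {
        // Server side: tag each reply with the client it is intended for.
        static void sendTo(Session session, Queue responses, String clientId, String body)
                throws JMSException {
            TextMessage msg = session.createTextMessage(body);
            msg.setStringProperty("clientId", clientId);
            MessageProducer producer = session.createProducer(responses);
            producer.send(msg);
            producer.close();
        }

        // Client side: the selector means only messages addressed to this
        // client are ever delivered to the consumer.
        static MessageConsumer listenAs(Session session, Queue responses, String clientId)
                throws JMSException {
            return session.createConsumer(responses, "clientId = '" + clientId + "'");
        }
    }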
With topics it's exactly as you noted — each connected client will get a copy of the message — so again, it's not a good pattern to send to n clients a message that has to be discarded by n-1 clients.
MaDa,
You could stick with one output queue (or topic) and simply tag the message with a header that identifies the intended client. Then clients can listen on the queue/topic using a selector. Hopefully your JMS implementation has efficient server-side selector evaluation.
