I am wondering what the best practice is for long-lived gRPC calls.
I have a typical client --> server call (both golang), and the server-side processing can take 20-30 seconds to complete. I need the client to wait until it is complete before moving on. Options that I see (and I don't love any of them):
1. Set the timeout to an absurd length (e.g. 1 min) and just wait. This feels like a hack, and I also expect to run into strange behavior in my service mesh with calls like this going on.
2. Use a stream - I still need to do option #1 here, and it really doesn't help me much, as my response is really just unary and a stream doesn't do me much good.
3. Polling (I implemented this and it works, but I don't love it) - I do most of the processing async and have my original gRPC call return a transactionID that is stored in Redis and holds the state of the transaction. I created a different gRPC endpoint to poll the status of the transaction in a loop (a sketch follows this list).
4. Queue or stream (e.g. a Kafka topic) - set up the client to be a listener on something like a Kafka topic and have my server notify the (queue || stream) when it is done, so that my client would pick it up. I thought this would work, but it seemed way over-engineered.
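For concreteness, here is roughly what my option #3 client loop looks like (the service, message, and field names are simplified, hypothetical stand-ins for my generated grpc-go stubs):

    package main

    import (
        "context"
        "time"

        pb "example.com/myapp/proto" // hypothetical generated package
    )

    // waitForJob kicks off the async work and polls until it finishes.
    func waitForJob(ctx context.Context, client pb.JobServiceClient) error {
        // The server starts processing async and returns a transaction ID
        // (which it stores in Redis along with the transaction state).
        start, err := client.StartJob(ctx, &pb.StartJobRequest{})
        if err != nil {
            return err
        }
        // Poll the separate status endpoint until the transaction is done.
        for {
            st, err := client.GetStatus(ctx, &pb.StatusRequest{TransactionId: start.TransactionId})
            if err != nil {
                return err
            }
            if st.Done {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(time.Second): // fixed poll interval; backoff would be nicer
            }
        }
    }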
Option #3 is working for me, but it sure feels pretty dirty, and I am also 100% dependent on Redis. Given that gRPC is built on HTTP/2, I would think that maybe there is some sort of server push option, but I am not finding one.
I fear that I am overlooking a simple way to handle this problem.
Thanks
A long-lived gRPC channel is an important use case and fully supported. However, one gRPC channel may have more than one TCP connection, and TCP can get disconnected due to inactivity. You can use keepalive or HTTP/2 PING to keep the TCP connection alive. See this thread for more details.
None of the options you mentioned address the issue that your server takes a while to respond. Unless there’s something I’m missing, nothing in your question is a gRPC issue.
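For example, in grpc-go, client-side keepalive pings can be enabled when dialing (the address and timing values below are illustrative; note that the server's keepalive enforcement policy must permit pings without active streams, or it will close the connection):

    package main

    import (
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/keepalive"
    )

    func main() {
        // Send HTTP/2 pings on idle connections so intermediaries and the
        // OS don't silently drop the TCP connection underneath the channel.
        conn, err := grpc.Dial("server.example.com:443",
            grpc.WithInsecure(), // for illustration only; use real transport credentials
            grpc.WithKeepaliveParams(keepalive.ClientParameters{
                Time:                30 * time.Second, // ping after this much idle time
                Timeout:             10 * time.Second, // wait this long for the ping ack
                PermitWithoutStream: true,             // ping even with no active RPCs
            }),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
    }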
Related
Setup:
We have gRPC pods running in a k8s cluster. The service mesh we use is linkerd. Our gRPC microservices are written in Python (asyncio gRPC as the concurrency mechanism), with the exception of the entry point. That microservice is written in golang (using the gin framework). We have an AWS API GW that talks to an NLB in front of the golang service. The golang service communicates with the backend via NodePort services.
Requests on our gRPC Python microservices can take a while to complete. The average is 8s, up to 25s at the 99th percentile. In order to handle the load from clients, we've scaled horizontally, spawning more pods to handle concurrent requests.
Problem:
When we send multiple requests to the system, even sequentially, we sometimes notice that requests go to the same pod as an ongoing request. What can happen is that this new request ends up getting "queued" on the server side (not fully "queued"; some progress gets made when context switches happen). The issue with queueing like this is that:
The earlier requests can start getting starved, and eventually timeout (we have a hard 30s cap from API GW).
The newer requests may also not get handled on time, and as a result get starved.
The symptom we're noticing is 504s, which are expected given our hard 30s cap.
What's strange is that we have other pods available, but for some reason the load balancer isn't routing requests to those pods smartly. It's possible that linkerd's smarter load balancing doesn't work well for our high-latency situation (we need to look into this further; however, that will require a big overhaul of our system).
One thing I wanted to try is to stop this queueing up of requests. I want the service to immediately reject the request if one is already in progress, and have the client (meaning the golang service) retry. The client retry will hopefully hit a different pod (do let me know if that won't happen). In order to do this, I set "maximum_concurrent_rpcs" to 1 on the server side (Python server). When I sent multiple requests in parallel to the system, I didn't see any RESOURCE_EXHAUSTED exceptions (even when there was only 1 server pod). What I do notice is that the requests no longer happen in parallel on the server; they happen sequentially (I think that's a step in the right direction, as the first request doesn't get starved). That said, I'm not seeing the RESOURCE_EXHAUSTED error in golang. I do see a delay between the entry time in the golang client and the entry time in the Python service. My guess is that the queueing is now happening client-side (or potentially still server-side, but it's not visible to me)?
I then saw online that it may be possible for requests to get queued up on the client side as a default behavior in http/2. I tried to test this out in a custom Python client that mimics the golang one:

import grpc

channel = grpc.insecure_channel(
    "<some address>",
    options=[("grpc.max_concurrent_streams", 1)],
)
# create stub to server with channel…

However, I'm not seeing any change here either. (Note, this is a test dummy client; eventually I'll need to make this run in golang. Any help there would be appreciated as well.)
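(For reference, the golang retry logic I have in mind would look roughly like this; the helper is hypothetical, but status and codes are the real grpc-go packages:)

    package main

    import (
        "context"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // callWithRetry re-issues the RPC only when the server sheds load with
    // RESOURCE_EXHAUSTED. Whether the retry actually lands on a different
    // pod depends on the load balancer, not on gRPC itself.
    func callWithRetry(ctx context.Context, do func(context.Context) error, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            err = do(ctx)
            if status.Code(err) != codes.ResourceExhausted {
                return err // success, or an error we shouldn't retry
            }
        }
        return err
    }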
Questions:
How can I get the desired effect here? Meaning the server sends RESOURCE_EXHAUSTED if it is already handling a request, the golang client retries, and the retry hits a different pod?
Any other advice on how to fix this issue? I'm grasping at straws here.
Thank you!
I am running an Apache server that receives HTTP requests and connects to a daemon script over ZeroMQ. The script implements the Multithreaded Server pattern (http://zguide.zeromq.org/page:all#header-73): it successfully receives the request and dispatches it to one of its worker threads, performs the action, responds back to the server, and the server responds back to the client. Everything is done synchronously, as the client needs to receive a success or failure response to its request.
As the number of users grows into the thousands, I am looking into potentially improving this. The first thing I looked at is the different patterns of ZeroMQ, and whether what I am using is optimal for my scenario. I've read the guide, but I find it challenging to understand all the details and differences across patterns. I was looking, for example, at the Load Balancing Message Broker pattern (http://zguide.zeromq.org/page:all#header-73). It seems quite a bit more complicated to implement than what I am currently using, and if I understand things correctly, its advantages are:
Actual load balancing vs the round-robin task distribution that I currently have
Asynchronous requests/replies
Is that everything? Am I missing something? Given the description of my problem, and its synchronous requirement, what would you say is the best pattern to use? Lastly, how would the answer change if I wanted to make my setup distributed (i.e. having the Apache server load balance the requests across different machines)? I was thinking of doing that by simply creating yet another layer, based on the Multithreaded Server pattern, and having that layer bridge the communication between the web server and my workers.
Some thoughts about the subject...
Keep it simple
I would try to keep things simple and use "plain" ZeroMQ as long as possible. To increase performance, I would simply change your backend script to send requests out from a DEALER socket and move the request-handling code into its own program. Then you could just run multiple worker servers on different machines to get more requests handled.
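As a rough sketch (in Go, assuming the github.com/pebbe/zmq4 binding; socket types and endpoints are illustrative), the front tier becomes a simple ROUTER/DEALER proxy, and the workers live in their own programs on any machine:

    package main

    import (
        zmq "github.com/pebbe/zmq4"
    )

    func main() {
        // Error handling elided for brevity.
        // The web server's requests come in here.
        frontend, _ := zmq.NewSocket(zmq.ROUTER)
        defer frontend.Close()
        frontend.Bind("tcp://*:5559")

        // Worker programs, possibly on other machines, connect here.
        backend, _ := zmq.NewSocket(zmq.DEALER)
        defer backend.Close()
        backend.Bind("tcp://*:5560")

        // Shuttle messages both ways; the DEALER round-robins work
        // across whatever workers are currently connected.
        zmq.Proxy(frontend, backend, nil)
    }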
I assume this was the approach you took:
I was thinking of doing that by simply creating yet another layer, based on the Multithreaded Server pattern, and have that layer bridge the communication between the web server and my workers.
The only problem here is that there is no request retry in the backend. If a worker fails to handle a given task, it is lost forever. However, one could write the worker servers so that they handle all the requests they have received before shutting down. With this kind of setup it is possible to update backend workers without clients noticing any outages. It will not save requests that are lost if a server crashes.
I have the feeling that in common scenarios this kind of approach would be more than enough.
Mongrel2
Mongrel2 seems to handle quite a few of the things you have already implemented. It might be worthwhile to check it out. It probably does not completely solve your problems, but it provides tested infrastructure to distribute the workload. It could be used to deliver requests to multithreaded servers running on different machines.
Broker
One solution to increase the robustness of the setup is a broker. In this scenario, the broker's main role would be to provide robustness by implementing a queue for the requests. I understood that all the requests the workers handle are basically of the same type. If requests had different types, the broker could also do lookups to find the correct server for each request.
Using a queue provides a way to ensure that every request is eventually handled, even if worker servers crash. This does not come without a price: the broker is itself a single point of failure. If it crashes or is restarted, all messages could be lost.
These problems can be avoided, but it requires quite a lot of work: the requests could be persisted to disk, and the broker servers could be clustered. The need has to be weighed against the payoffs: do you want to spend your time writing a message broker or the actual system?
If a message broker seems like a good idea, the time required to implement one can be reduced by using an existing product (like RabbitMQ). The negative side effect is that there could be a lot of unwanted features, and adding new things is not as straightforward as with a self-made broker.
Writing your own broker can turn into reinventing the wheel. Many brokers provide similar things: security, logging, a management interface, and so on. It seems likely that these will eventually be needed in a home-made solution as well. But if not, then a single home-made broker that does one thing and does it well can be a good choice.
Even if a broker product is chosen, I think it is a good idea to hide the broker behind a ZeroMQ proxy: a dedicated piece of code that sends/receives messages from the broker. Then no other part of the system has to know anything about the broker, and it can easily be replaced.
Using a broker is somewhat heavy on developer time. You either need time to implement the broker or time to get used to some product. I would avoid this route until it is clearly needed.
Some links
Comparison between broker and brokerless
RabbitMQ
Mongrel2
We have a web app making use of WebSockets with netty. We are trying to push through a large stream of data and notice that once we hit about 300-400 responses a second, the channel begins to get marked as NOT_WRITABLE and we are unable to send some of the responses.
It seems as though netty is waiting for an ack on the socket. Is there any way to disable that, or any way to speed it up to handle more? The JavaScript implementation in Chrome is pretty bare-bones, and it doesn't seem like there's much we can do to increase performance on that side.
It seems like you write faster than the data can be sent out to the client. You can try to increase the write watermark. See [1].
Also, you may want to experiment with some TCP options like "tcpNoDelay" and "sendBufferSize". See [2].
[1] http://static.netty.io/3.5/api/org/jboss/netty/channel/socket/nio/NioChannelConfig.html#setWriteBufferHighWaterMark(int)
[2] http://static.netty.io/3.5/api/org/jboss/netty/channel/socket/SocketChannelConfig.html
Will WebSockets, when supported in all web browsers, make AJAX obsolete?
Because if I could use WebSockets to fetch and update data in real time, why would I need AJAX? Even if I use AJAX just to fetch data once when the application starts, I might still want to see whether that data has changed after a while.
And will WebSockets work cross-domain, or only against the same origin?
WebSockets will not make AJAX entirely obsolete, and WebSockets can be used cross-domain.
AJAX
AJAX mechanisms work with plain web servers. At its most basic level, AJAX is just a way for a web page to make an HTTP request. WebSockets is a much lower-level protocol and requires a WebSocket server (either built into the web server, standalone, or proxied from the web server to a standalone server).
With WebSockets, the framing and payload are determined by the application. You could send HTML/XML/JSON back and forth between client and server, but you aren't forced to. AJAX is HTTP. WebSockets has an HTTP-friendly handshake, but WebSockets is not HTTP. WebSockets is a bi-directional protocol that is closer to raw sockets (intentionally so) than it is to HTTP. The WebSockets payload data is UTF-8 encoded in the current version of the standard, but this is likely to be changed/extended in future versions.
So there will probably always be a place for AJAX-type requests, even in a world where all clients support WebSockets natively. WebSockets is trying to solve situations where AJAX is not capable or only marginally capable (because WebSockets is bi-directional and has much lower overhead). But WebSockets does not replace everything AJAX is used for.
Cross-Domain
Yes, WebSockets supports cross-domain. The initial handshake to setup the connection communicates origin policy information. The wikipedia page shows an example of a typical handshake: http://en.wikipedia.org/wiki/WebSockets
I'll try to break this down into questions:
Will web sockets when used in all web browsers make ajax obsolete?
Absolutely not. WebSockets are raw socket connections to the server. This comes with its own security concerns. AJAX calls are simply async HTTP requests that can follow the same validation procedures as the rest of the pages.
Cause if I could use web sockets to fetch data and update data in realtime, why would I need ajax?
You would use AJAX for simpler, more manageable tasks. Not everyone wants the overhead of securing a socket connection simply to allow async requests. That can be handled simply enough.
Even if I use ajax to just fetch data once when the application started I still might want to see if this data has changed after a while.
Sure, if that data is changing. You may not have the data changing or constantly refreshing. Again, this is code overhead that you have to account for.
And will web sockets be possible in cross-domains or only to the same origin?
You can have cross-domain WebSockets, but you have to code your WS server to accept them. You have access to the Origin header, which you can then use to accept or deny requests. This can, however, be spoofed by something as simple as nc. In order to truly secure the connection, you will need to authenticate it by other means.
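For example, in Go with the github.com/gorilla/websocket package (my assumption for the server library; the trusted origin is a placeholder), the origin check hooks into the handshake like this:

    package main

    import (
        "net/http"

        "github.com/gorilla/websocket"
    )

    var upgrader = websocket.Upgrader{
        // Called during the handshake; return true to accept the origin.
        // Remember that Origin can be spoofed by non-browser clients, so
        // authenticate the connection by other means for anything sensitive.
        CheckOrigin: func(r *http.Request) bool {
            return r.Header.Get("Origin") == "https://trusted.example.com"
        },
    }

    func wsHandler(w http.ResponseWriter, r *http.Request) {
        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            return // handshake failed (e.g. the origin was rejected)
        }
        defer conn.Close()
        // ... read/write messages on conn ...
    }

    func main() {
        http.HandleFunc("/ws", wsHandler)
        http.ListenAndServe(":8080", nil)
    }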
Websockets have a couple of big downsides in terms of scalability that AJAX avoids. Since AJAX sends a request/response and closes the connection (or shortly after), someone sitting idle on the web page doesn't consume server resources. Websockets are meant to stream data back to the browser, and they tie up server resources to do so. Servers have a limit on how many simultaneous connections they can keep open at one time. Not to mention that, depending on your server-side technology, they may tie up a thread to handle the socket. So websockets have more resource-intensive requirements on both sides per connection. You could easily exhaust all of your threads servicing clients, and then no new clients could come in if lots of users are just sitting on the page. This is where nodejs, vertx, and netty can really help out, but even those have upper limits.
There is also the issue of the state of the underlying socket, and of writing the code on both sides that carries on the stateful conversation, which isn't something you have to do with AJAX because it's stateless. Websockets require you to create a low-level protocol, which is solved for you with AJAX. Things like heartbeating, closing idle connections, reconnecting on errors, etc. are vitally important now. These are things you didn't have to solve with AJAX because it was stateless. State is very important to the stability of your app and, more importantly, to the health of your server. It's not trivial. Pre-HTTP we built a lot of stateful TCP protocols (FTP, telnet, SSH), and then HTTP happened. And no one did that stuff much anymore, because even with its limitations HTTP was surprisingly easier and more robust. Websockets bring back the good and the bad of stateful protocols. You'll learn soon enough if you didn't get a dose of that the last time around.
If you need streaming of realtime data, this extra overhead is warranted, because polling the server to get streamed data is worse. But if all you are doing is user interaction -> request -> response -> update UI, then AJAX is easier and will use fewer resources, because once the response is sent the conversation is over and no additional server resources are used. So I think it's a tradeoff, and the architect has to decide which tool fits the problem. AJAX has its place, and websockets have their place.
Update
So the architecture of your server is what matters when we are talking about threads. If you are using a traditional multi-threaded server (or processes), where each socket connection gets its own thread to respond to requests, then websockets matter a lot to you. For each connection we have a socket, and eventually the OS will fall over if you have too many of these; the same goes for threads (more so for processes). Threads are heavier than sockets (in terms of resources), so we try to conserve how many threads we have running simultaneously. That means creating a thread pool, which is just a fixed number of threads shared among all sockets. But once a socket is opened, the thread is used for the entire conversation. The length of those conversations governs how quickly you can repurpose those threads for new sockets coming in, and therefore how much you can scale. However, if you are streaming, this model doesn't work well for scaling. You have to break the thread/socket design.
HTTP's request/response model makes it very efficient at turning over threads for new sockets. If you are just going to use request/response, use HTTP; it's already built and much easier than reimplementing something like it in websockets.
Since websockets don't have to be request/response like HTTP and can stream data, if your server has a fixed number of threads in its thread pool and the same number of websockets are tying up all of your threads with active conversations, you can't service new clients coming in! You've reached your maximum capacity. That's where protocol design is important with websockets and threads: your protocol might allow you to loosen the thread-per-socket-per-conversation model, so that people just sitting there don't use a thread on your server.
That's where asynchronous single-threaded servers come in. In Java we often call this NIO, for non-blocking IO. It's a different API for sockets, where sending and receiving data doesn't block the thread performing the call.
Traditionally, with blocking sockets, when you call socket.read() or socket.write() they wait until the data is received or sent before returning control to your program. That means your program is stuck waiting for the socket data to come in or go out before it can do anything else. That's why we have threads: so we can do work concurrently (at the same time). Send this data to client X while I wait on data from client Y. Concurrency is the name of the game when we talk about servers.
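To make the blocking model concrete, here is a minimal echo server in Go (my choice of language for illustration; Go hides the thread pool behind goroutines, but the one-blocking-conversation-per-connection shape is the same):

    package main

    import (
        "io"
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":9000")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept() // blocks until a client connects
            if err != nil {
                continue
            }
            // One "thread of execution" per connection: this goroutine sits
            // in blocking reads/writes for the life of the conversation.
            go func(c net.Conn) {
                defer c.Close()
                io.Copy(c, c) // echo; blocks until the client disconnects
            }(conn)
        }
    }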
In a NIO server we use a single thread to handle all clients and register callbacks to be notified when data arrives. For example
socket.read(function (data) {
    // data is here! Now you can process it very quickly without waiting!
});
The socket.read() call returns immediately, without reading any data, but the function we provided will be called when data comes in. This design radically changes how you build and architect your code, because if you get hung up waiting on something, you can't receive any new clients. You have a single thread, so you can't really do two things at once! You have to keep that one thread moving.
NIO, asynchronous IO, event-based programming, as this is all known, is a much more complicated system design, and I wouldn't suggest you try to write it if you are just starting out. Even very senior programmers find it hard to build robust systems this way. Since you are asynchronous, you can't call APIs that block. Reading data from the DB or sending messages to other servers has to be performed asynchronously. Even reading/writing from the file system can slow your single thread down, lowering your scalability. Once you go asynchronous, it's asynchronous all the time if you want to keep the single thread moving. That's where it gets challenging, because eventually you'll run into an API, like a DB's, that is not asynchronous, and then you have to adopt more threads at some level. So hybrid approaches are common even in the asynchronous world.
The good news is that there are other solutions built on this lower-level API that you can use: NodeJS, Vertx, Netty, Apache Mina, Play Framework, Twisted Python, Stackless Python, etc. There might be some obscure library for C++, but honestly I wouldn't bother. Server technology doesn't require the very fastest language, because it is IO-bound more than CPU-bound. If you are a die-hard performance nut, use Java. It has a huge community of code to pull from, and its speed is very close to (and sometimes better than) C++. If you just hate it, go with Node or Python.
Yes, yes it does. :D
The earlier answers lack imagination. I see no more reason to use AJAX if websockets are available to you.
I'm working on a web application that submits tasks to a master/worker system that farms out the tasks to any of a series of worker instances. The work queue master runs as a separate process (on a separate machine altogether) and tasks are submitted to the master via HTTP/REST requests. Once tasks are submitted to the work queue, client applications can submit another HTTP request to get status information about tasks.
For my web application, I'd like it to provide some sort of progress bar that gives the user an indication of how far along task processing is. The obvious way to implement this would be an AJAX progress meter widget that periodically polls the work queue for status on the tasks that have been submitted. My question is: is there a better way to accomplish this without the frequent polling?
I've considered having the client web application open a server socket on which it could listen for notifications from the work master. Another similar thought I've had is to use XMPP or a similar protocol for the status notifications. (Of course, the master/worker system would need to be updated to provide notifications either way, but I own that code, so I can make any necessary updates myself.)
Any thoughts on the best way to set up a notification system like this? Is the extra effort involved worth it, or is the simple polling solution the way to go?
Polling
The client keeps polling the server to get the status of the response.
Pros
Being really RESTful, it is cacheable and scalable.
Cons
Not the best responsiveness if you do not want to poll your server too often.
Persistent connection
The server does not close its HTTP connection with the client until the response is complete. The server can send intermediate status through this connection using HTTP multiparts.
Comet is the most famous technique for implementing this behaviour.
Pros
Best responsiveness, almost real-time notifications from the server.
Cons
The number of connections a web server can keep open is limited; keeping connections open for too long might, at best, load your server and, at worst, open it to denial-of-service attacks.
Client as a server
Make the server post status updates and the final response to the client as if the client were another RESTful application (sketched below).
Pros
Best of both worlds: no resources are wasted waiting for the response, on either the server or the client side.
Cons
You need a full HTTP server and web application stack on the client.
Firewalls and routers, with their default "no incoming connections at all" policy, will get in the way.
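A minimal sketch of that callback style in Go (the endpoint, port, and payload are hypothetical): the client exposes an HTTP endpoint, and the work master POSTs status updates to it.

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    // Client side: a tiny HTTP server that receives status callbacks.
    func runClientListener() {
        http.HandleFunc("/task-status", func(w http.ResponseWriter, r *http.Request) {
            // Parse the body and update the progress display here.
            w.WriteHeader(http.StatusNoContent)
        })
        http.ListenAndServe(":8081", nil)
    }

    // Work master side: push a status update to the client's callback URL.
    func notifyClient(callbackURL string, percent int) error {
        body := bytes.NewBufferString(fmt.Sprintf(`{"percent": %d}`, percent))
        _, err := http.Post(callbackURL, "application/json", body)
        return err
    }

    func main() {
        go runClientListener()
        // In reality the two halves run on different machines; the master
        // would call notifyClient("http://client:8081/task-status", 42).
        select {} // keep the listener running
    }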
Feel free to edit to add your thoughts or a new method!
I guess it depends on a few factors:
How accurate the feedback can be (1 percent, 5 percent, 50 percent). Accurate feedback makes it worth pursuing some kind of progress bar and Comet-style push. If you can only say "Busy... hold on... almost there... done", then a simple AJAX "are we there yet?" poll is certainly easier to code.
How timely the Done message has to be seen by the client
How long each task takes (1 second, 10 seconds, 10 minutes)
1 second makes it a bit moot. 10 seconds makes it worth it. 10 minutes means you're better off suggesting the user goes for a coffee break :-)
How many concurrent requests there will be
Unless you've got a "special" server, live-push-style systems tend to eat connections, and you'll be maxed out pretty quickly. Having to throw more web servers in for a fancy progress bar might hurt the budget.
I've got some sample code on 871184 that shows a hand-rolled "forever frame" which seems to work out well. The project I developed it for isn't hammered all that hard though; the operations take a few seconds, and we can give a pretty accurate percentage. The code uses asp.net and jquery, but the general techniques will work with any server and JavaScript framework.
Edit: As John points out, status reporting probably isn't the job of the RESTful service. But there's nothing that says you can't open an iframe on the client that hooks to a page on the server that polls the service. Theory says the server and the service will at least be closer to one another :-)
Look into Comet. You make a single request to the server, and the server blocks, holding the connection open until an update in status occurs. Once that happens, the response is sent and committed. The browser receives this response, handles it, and immediately re-requests the same URL. The effect is that of events being pushed to the browser. There are pros and cons, and it might not be appropriate for all use cases, but it would provide the most timely status updates.
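A sketch of such a blocking handler in Go (illustrative only; the status source is stubbed with a channel): the handler holds the request open until an update arrives or a timeout fires, and the client re-requests either way.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // updates would be fed by the real task processor; stubbed here.
    var updates = make(chan string)

    func longPoll(w http.ResponseWriter, r *http.Request) {
        select {
        case status := <-updates:
            fmt.Fprint(w, status) // respond as soon as the status changes
        case <-time.After(25 * time.Second):
            w.WriteHeader(http.StatusNoContent) // timeout; client re-requests
        }
    }

    func main() {
        http.HandleFunc("/status", longPoll)
        http.ListenAndServe(":8080", nil)
    }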
My opinion is to stick with the polling solution, but you might be interested in this Wikipedia article on HTTP Push technologies.
REST depends on HTTP, which is a request/response protocol. I don't think you're going to get a pure HTTP server calling the client back with status.
Besides, status reporting isn't the job of the service. It's up to the client to decide when, or if, it wants status reported.
One approach I have used is:
When the job is posted to the server, the server responds with a PubNub channel ID (one could alternatively use Google's pub/sub kind of service).
The client in the browser subscribes to that channel and starts listening for messages.
The worker/task server publishes status updates on that PubNub channel to report progress (sketched below).
On receiving messages on the subscribed channel, the client updates the web UI.
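Roughly, the worker side of that flow looks like this in Go (a sketch with a hypothetical Publisher interface standing in for the PubNub or pub/sub SDK; real SDK calls will differ):

    package main

    import "fmt"

    // Publisher is a stand-in for whatever pub/sub SDK is in use (PubNub,
    // Google's pub/sub, etc.); this interface is hypothetical.
    type Publisher interface {
        Publish(channel string, message string) error
    }

    // reportProgress is called by the worker as the task advances; the
    // browser, subscribed to the same channel, updates its progress bar.
    func reportProgress(p Publisher, channelID string, percent int) error {
        return p.Publish(channelID, fmt.Sprintf(`{"percent": %d}`, percent))
    }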
You could also use a self-refreshing iframe, but an AJAX call is much better. I don't think there is any other way.
PS: If you opened a socket from the client, that wouldn't change much - the browser would show the page as still "loading", which is not very user-friendly (assuming you would push or flush the buffer to have other things displayed first).