Concurrency model in gRPC server in golang - go

I have created a sample gRPC client and server in golang (using protobufs). I understand the concurrency model in golang. However, I am trying to understand the concurrency model in a server accepting parallel requests from the same client (multiple goroutines on the client side) or from multiple clients.
More specifically:
When a new gRPC call comes, does the server create a new goroutine?
What data is shared by these goroutines? Does grpcServer.Serve set the boundary for data shared across goroutines, i.e. is everything set before it shared? (I am thinking of threads in Java, where the threads share global data)

When a new gRPC call comes, does the server create a new goroutine?
Yes, and it's highly likely that it creates a lot of concurrent goroutines to handle every connection and request (especially streaming requests).
What data is shared by these goroutines?
I think this question is too broad. There is too much code in both the net/http2 and google.golang.org/grpc packages to answer your question without deep investigation. However, we can be sure that these goroutines share at least the server itself, because ServeConn is not a free function, but a method defined on the http2.Server type.
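To make the sharing concrete, here is a minimal sketch (the KV service, the pb package, and the hits counter are hypothetical, not taken from the gRPC sources or the question): each RPC handler invocation may run in its own goroutine, so anything reachable from your server struct, i.e. anything set up before grpcServer.Serve, is shared and needs synchronization.

package main

import (
	"context"
	"net"
	"sync"

	"google.golang.org/grpc"

	pb "example.com/kv/proto" // hypothetical generated package, not from the question
)

type kvServer struct {
	pb.UnimplementedKVServer // hypothetical generated embedding

	mu   sync.Mutex
	hits map[string]int // shared by every handler goroutine
}

func (s *kvServer) Get(ctx context.Context, req *pb.GetRequest) (*pb.GetReply, error) {
	s.mu.Lock()
	s.hits[req.Key]++ // without the mutex this is a data race across concurrent RPCs
	s.mu.Unlock()
	return &pb.GetReply{Value: "stub"}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		panic(err)
	}
	srv := grpc.NewServer()
	// Everything wired up before Serve (the kvServer and its fields) is what the
	// per-connection/per-stream goroutines spawned by grpc end up sharing.
	pb.RegisterKVServer(srv, &kvServer{hits: make(map[string]int)})
	srv.Serve(lis)
}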

Related

How to reduce long WebSocket IO pauses?

I have a tool called Tendermint, which is written in Golang. It processes transactions and creates blocks (details are intentionally omitted). Transactions can be submitted through the WebSocket server. Blocks are configured to be created ~ every second.
Now, when I open two or more WS connections and submit more transactions than the application can handle, Tendermint periodically gets stuck.
During this time, it does not create any blocks, but instead spends a significant portion of its time handling WebSocket IO.
I still don't understand the exact nature of these pauses. Maybe someone here knows or can ask the right questions? Also, I'm wondering what the ways to limit the IO are? Throttle each connection?
NOTE: I'm using https://github.com/gorilla/websocket for WebSockets. Our WS server can be found here.
Thank you for your time!
UPD 1: I've managed to flatten the pauses by batching responses in our WS server (see https://github.com/tendermint/tendermint/issues/3905#issuecomment-684860429)
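For illustration only, here is a generic sketch of what batching responses can look like with gorilla/websocket (this is not the actual Tendermint change; the channel, tick interval, and newline joining are made-up choices): accumulate outgoing messages and flush them in one write per tick instead of one write per response.

package main

import (
	"bytes"
	"time"

	"github.com/gorilla/websocket"
)

// writeLoop drains a response channel and flushes accumulated messages in one
// write per tick instead of one write per response. Names and values here are
// illustrative only.
func writeLoop(conn *websocket.Conn, responses <-chan []byte) {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()

	var batch [][]byte
	for {
		select {
		case msg, ok := <-responses:
			if !ok {
				return
			}
			batch = append(batch, msg)
		case <-ticker.C:
			if len(batch) == 0 {
				continue
			}
			// One write per tick, regardless of how many responses piled up.
			if err := conn.WriteMessage(websocket.TextMessage, bytes.Join(batch, []byte("\n"))); err != nil {
				return
			}
			batch = batch[:0]
		}
	}
}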

Can I call the same RPC func in many servers at the same time?

I am trying to find a fast method of interprocess communication.
One thing I need is the ability to send one command to multiple application instances at the same time. I spent a day trying to find out whether I can start many instances of the same app (a local RPC server app) and call an RPC from one client. I use the ncalrpc protocol for this purpose.
I just want to start several instances of the server and one instance of the client, and then call the same RPC func once on the client so that it is evaluated on every running server.
Yes, you can either use multiple client threads (each making a separate server call) or modify the .acf and mark the call with the [async] attribute. If you go the latter route you can then make multiple calls on a single client thread. Note that asynchronous RPC is a fair bit more complicated than synchronous RPC due to needing to deal with call completions.
Making calls to multiple server instances (even local instances) is also made more complicated by the fact that you will have to somehow discover those endpoints, and the RPC namespace functions (RpcNs*) are no longer available as of Windows Vista.
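The answer above is specific to Windows RPC. Since most of this thread is about Go, here is a hedged Go analogue of the first suggestion (one concurrent call per server instead of one thread per call); the endpoints and the callServer function are placeholders, not a real RPC API:

package main

import (
	"fmt"
	"sync"
)

// callServer stands in for whatever RPC invocation you actually use (net/rpc, gRPC, ...).
func callServer(endpoint, command string) error {
	fmt.Printf("sending %q to %s\n", command, endpoint)
	return nil
}

func main() {
	// Placeholder endpoints; in practice you still have to discover them somehow.
	endpoints := []string{"127.0.0.1:9001", "127.0.0.1:9002", "127.0.0.1:9003"}

	var wg sync.WaitGroup
	for _, ep := range endpoints {
		wg.Add(1)
		go func(ep string) { // one goroutine per server, like one thread per call
			defer wg.Done()
			if err := callServer(ep, "do-work"); err != nil {
				fmt.Println("call to", ep, "failed:", err)
			}
		}(ep)
	}
	wg.Wait() // every running server has now been called concurrently
}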

How many elasticsearch client connections should we create in the application

I'm using Golang & elastic client.
Below is my client creation logic:
if client, err := elastic.NewClient(elastic.SetURL(ElasticsearchURL)); err != nil {
    // Handle error
    logger.Error.Println(err)
    return nil
} else {
    return client
}
What's the correct approach:
keep the client object singleton across the application?
create and close the clients for each request?
I am kind of confused by the seemingly contradictory answers in the links below:
where-to-close-an-elasticsearch-client-connection- suggests one connection per app
how-many-transport-clients-can-a-elasticsearch-cluster-have - suggests one connection per app
elasticsearch-how-to-query-for-number-of-connections -- kind of indicates connections quickly die after serving a request
That depends on the application.
In 99% of the use cases you have a normal, long-running application. Then you should create just one client with elastic.NewClient. You can pass it around in your code and it should always work, even in different goroutines. This will create a long-running client which has several benefits. E.g. it will run health checks in the background that prevent it from sending requests to unhealthy or dead nodes.
However, if you have a short-running application (something like AWS Lambda or Cloud Functions) you might need a "connection" on a request level. In that specific case you can use elastic.NewSimpleClient. It has a bit more overhead though, as you're creating a new client every time. And it won't do any health checks and other things.
DO NOT create a new client with elastic.NewClient for every request, as any call to NewClient will create a set of goroutines and you'll quickly run out of resources if you do that.
Please read the documentation and the wiki for further details.
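As a rough sketch of the recommended long-running setup, assuming the olivere/elastic package from the question (the URL is a placeholder and the import path may differ between client versions): create the client once at startup and hand it to whatever needs it.

package main

import (
	"log"

	elastic "github.com/olivere/elastic" // import path may differ by client version
)

var esClient *elastic.Client // created once at startup, shared by all goroutines

func main() {
	var err error
	// One long-running client for the whole application; it manages its own
	// connections and background health checks, and is safe for concurrent use.
	esClient, err = elastic.NewClient(elastic.SetURL("http://127.0.0.1:9200")) // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	// Pass esClient to handlers/goroutines instead of calling NewClient per request.
}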

How can I use Riak connection pool with Beego Framework

I'm developing a back-end using Beego and Riak. I'm searching for a way to keep the Riak connection pool alive, but I cannot find anything in the documentation besides SQL-related material.
I'm really a newcomer to the Go language (started learning 2 days ago) and I don't know if a connection pool is the right choice. As I understand it, each Go app should be designed to work independently, allowing easy scalability. If this is right, maybe a single connection would be a better choice. If this is the case, what is the best practice I can use?
I'm sorry in advance if my question seems noobish, but, with my Django background, I'm not used to managing db connections.
The riak connector I'm using is "github.com/tpjg/goriakpbc"
Whether or not to use a connection pool depends more on your usage pattern and workload than on your choice of data store or client library.
Each time a TCP connection is established, there is a three-way handshake:
client --syn--> server
client <--syn-ack-- server
client --ack--> server
This usually takes a very small amount of time and network bandwidth, and creates an entry in the conntrack table on each machine. If your application opens a new connection to the server for every request and will be sending many thousands of requests per second, you may overflow the conntrack table, blocking new connections until some previous connections close; or the overhead traffic of creating connections could limit how many requests you can handle per second.
If you decide to use a pool and use short-lived processes that handle a single request and then terminate, you will need some method of creating and maintaining connections separately from the request processes, and a method for the request processes to send requests and receive responses using a connection from the pool.
You may find that if your application does not generate a sufficient volume of traffic, the effort required to design your application to use a connection pool outweighs any benefits gained by using a pool.
There is no right or wrong answer; this is going to depend heavily on your use case, request volume, and network capabilities.
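If you do conclude that a pool is worth it, a common Go pattern, shown here generically rather than with goriakpbc's own API (which I won't guess at), is a fixed-size pool built on a buffered channel: borrow a connection per request, return it when done.

package main

import "errors"

// Conn stands in for whatever connection type your Riak client exposes.
type Conn struct{}

// Pool is a minimal fixed-size connection pool built on a buffered channel.
type Pool struct {
	conns chan *Conn
}

// NewPool dials size connections up front and parks them in the channel.
func NewPool(size int, dial func() (*Conn, error)) (*Pool, error) {
	p := &Pool{conns: make(chan *Conn, size)}
	for i := 0; i < size; i++ {
		c, err := dial()
		if err != nil {
			return nil, err
		}
		p.conns <- c
	}
	return p, nil
}

// Get blocks until a connection is free.
func (p *Pool) Get() (*Conn, error) {
	c, ok := <-p.conns
	if !ok {
		return nil, errors.New("pool closed")
	}
	return c, nil
}

// Put returns a connection so other goroutines can reuse it.
func (p *Pool) Put(c *Conn) { p.conns <- c }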

Web sockets make ajax/CORS obsolete?

Will web sockets when used in all web browsers make ajax obsolete?
Cause if I could use web sockets to fetch data and update data in realtime, why would I need ajax? Even if I use ajax to just fetch data once when the application started I still might want to see if this data has changed after a while.
And will web sockets be possible in cross-domains or only to the same origin?
WebSockets will not make AJAX entirely obsolete and WebSockets can do cross-domain.
AJAX
AJAX mechanisms can be used with plain web servers. At its most basic level, AJAX is just a way for a web page to make an HTTP request. WebSockets is a much lower level protocol and requires a WebSockets server (either built into the webserver, standalone, or proxied from the webserver to a standalone server).
With WebSockets, the framing and payload are determined by the application. You could send HTML/XML/JSON back and forth between client and server, but you aren't forced to. AJAX is HTTP. WebSockets has an HTTP-friendly handshake, but WebSockets is not HTTP. WebSockets is a bi-directional protocol that is closer to raw sockets (intentionally so) than it is to HTTP. The WebSockets payload data is UTF-8 encoded in the current version of the standard, but this is likely to be changed/extended in future versions.
So there will probably always be a place for AJAX-type requests, even in a world where all clients support WebSockets natively. WebSockets is trying to solve situations where AJAX is not capable or only marginally capable (because WebSockets is bi-directional and has much lower overhead). But WebSockets does not replace everything AJAX is used for.
Cross-Domain
Yes, WebSockets supports cross-domain. The initial handshake to setup the connection communicates origin policy information. The wikipedia page shows an example of a typical handshake: http://en.wikipedia.org/wiki/WebSockets
I'll try to break this down into questions:
Will web sockets when used in all web browsers make ajax obsolete?
Absolutely not. WebSockets are raw socket connections to the server. This comes with its own security concerns. AJAX calls are simply async HTTP requests that can follow the same validation procedures as the rest of the pages.
Cause if I could use web sockets to fetch data and update data in realtime, why would I need ajax?
You would use AJAX for simpler, more manageable tasks. Not everyone wants the overhead of securing a socket connection simply to allow async requests. That can be handled simply enough.
Even if I use ajax to just fetch data once when the application started I still might want to see if this data has changed after a while.
Sure, if that data is changing. You may not have the data changing or constantly refreshing. Again, this is code overhead that you have to account for.
And will web sockets be possible in cross-domains or only to the same origin?
You can have cross domain WebSockets but you have to code your WS server to accept them. You have access to the domain (host) header which you can then use to accept / deny requests. This can, however, be spoofed by something as simple as nc. In order to truly secure the connection you will need to authenticate the connection by other means.
Websockets have a couple of big downsides in terms of scalability that ajax avoids. Since ajax sends a request/response and closes the connection (or shortly after), if someone stays on the web page it doesn't use server resources while idling. Websockets are meant to stream data back to the browser, and they tie up server resources to do so. Servers have a limit on how many simultaneous connections they can keep open at one time. Not to mention that, depending on your server-side technology, they may tie up a thread to handle the socket. So websockets have more resource-intensive requirements on both sides per connection. You could easily exhaust all of your threads servicing clients, and then no new clients could come in if lots of users are just sitting on the page. This is where nodejs, vertx, netty can really help out, but even those have upper limits as well.
Also there is the issue of the state of the underlying socket, and writing the code on both sides that carries on the stateful conversation, which isn't something you have to do with ajax because it's stateless. Websockets require you to create a low-level protocol, which is solved for you with ajax. Things like heartbeating, closing idle connections, reconnection on errors, etc. are vitally important now. These are things you didn't have to solve when using AJAX because it was stateless. State is very important to the stability of your app and, more importantly, the health of your server. It's not trivial. Pre-HTTP we built a lot of stateful TCP protocols (FTP, telnet, SSH), and then HTTP happened. And no one did that stuff much anymore because, even with its limitations, HTTP was surprisingly easier and more robust. Websockets bring back the good and the bad of stateful protocols. You'll learn soon enough if you didn't get a dose of that last go around.
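To make the heartbeat point concrete, here is a generic server-side sketch using gorilla/websocket (the timeouts are made up, and a read loop is assumed to be running elsewhere, since pong handlers only fire while a read is in progress): ping periodically, extend the read deadline on each pong, and close connections that stop answering.

package main

import (
	"time"

	"github.com/gorilla/websocket"
)

const (
	pongWait   = 60 * time.Second // drop the peer if no pong arrives within this window
	pingPeriod = 50 * time.Second // must be shorter than pongWait
)

// keepAlive pings the client and relies on a concurrently running read loop
// (conn.ReadMessage in another goroutine) to deliver the pongs.
func keepAlive(conn *websocket.Conn) {
	conn.SetReadDeadline(time.Now().Add(pongWait))
	conn.SetPongHandler(func(string) error {
		// Each pong buys the client another pongWait of life.
		return conn.SetReadDeadline(time.Now().Add(pongWait))
	})

	ticker := time.NewTicker(pingPeriod)
	defer ticker.Stop()
	for range ticker.C {
		deadline := time.Now().Add(5 * time.Second)
		if err := conn.WriteControl(websocket.PingMessage, nil, deadline); err != nil {
			conn.Close() // peer is gone or too slow; free its resources
			return
		}
	}
}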
If you need streaming of realtime data, this extra overhead is warranted because polling the server for streamed data is worse; but if all you are doing is user interaction -> request -> response -> update UI, then ajax is easier and will use fewer resources, because once the response is sent the conversation is over and no additional server resources are used. So I think it's a tradeoff and the architect has to decide which tool fits their problem. AJAX has its place, and websockets have their place.
Update
So the architecture of your server is what matters when we are talking about threads. If you are using a traditional multi-threaded server (or processes) where each socket connection gets its own thread to respond to requests, then websockets matter a lot to you. For each connection we have a socket, and eventually the OS will fall over if you have too many of these, and the same goes for threads (more so for processes). Threads are heavier than sockets (in terms of resources), so we try to conserve how many threads are running simultaneously. That means creating a thread pool, which is just a fixed number of threads shared among all sockets. But once a socket is opened, the thread is used for the entire conversation. The length of those conversations governs how quickly you can repurpose those threads for new sockets coming in. The length of your conversation governs how much you can scale. However, if you are streaming, this model doesn't work well for scaling. You have to break the thread/socket design.
HTTP's request/response model makes it very efficient at turning over threads for new sockets. If you are just going to use request/response, use HTTP; it's already built and much easier than reimplementing something like it over websockets.
Since websockets don't have to be request/response like HTTP and can stream data, if your server has a fixed number of threads in its thread pool and the same number of websockets is tying up all of your threads with active conversations, you can't service new clients coming in! You've reached your maximum capacity. That's where protocol design is important too with websockets and threads. Your protocol might allow you to loosen the thread-per-socket-per-conversation model so that people just sitting there don't use a thread on your server.
That's where asynchronous single thread servers come in. In Java we often call this NIO for non-blocking IO. That means it's a different API for sockets where sending and receiving data doesn't block the thread performing the call.
So traditionally, with blocking sockets, when you call socket.read() or socket.write() they wait until the data is received or sent before returning control to your program. That means your program is stuck waiting for the socket data to come in or go out before it can do anything else. That's why we have threads: so we can do work concurrently (at the same time). Send this data to client X while I wait on data from client Y. Concurrency is the name of the game when we talk about servers.
In a NIO server we use a single thread to handle all clients and register callbacks to be notified when data arrives. For example:
socket.read( function( data ) {
    // data is here! Now you can process it very quickly without waiting!
});
The socket.read() call will return immediately without reading any data, but the function we provided will be called when data comes in. This design radically changes how you build and architect your code, because if you get hung up waiting on something, you can't receive any new clients. You have a single thread; you can't really do two things at once! You have to keep that one thread moving.
NIO, asynchronous IO, event-based programming, or whatever you want to call it, is a much more complicated system design, and I wouldn't suggest you try to write this if you are starting out. Even very senior programmers find it very hard to build robust systems. Since you are asynchronous you can't call APIs that block. Things like reading data from the DB or sending messages to other servers have to be performed asynchronously. Even reading/writing from the file system can slow your single thread down, lowering your scalability. Once you go asynchronous, it's all asynchronous all the time if you want to keep the single thread moving. That's where it gets challenging, because eventually you'll run into an API, like DBs, that is not asynchronous, and you have to adopt more threads at some level. So hybrid approaches are common even in the asynchronous world.
The good news is there are other solutions already built on this lower-level API that you can use: NodeJS, Vertx, Netty, Apache Mina, Play Framework, Twisted Python, Stackless Python, etc. There might be some obscure library for C++, but honestly I wouldn't bother. Server technology doesn't require the very fastest languages, because it's IO-bound more than CPU-bound. If you are a die-hard performance nut, use Java. It has a huge community of code to pull from and its speed is very close to (and sometimes better than) C++'s. If you just hate it, go with Node or Python.
Yes, yes it does. :D
The earlier answers lack imagination. I see no more reason to use AJAX if websockets are available to you.
