Relay data between two different TCP clients in Go

I'm writing a TCP server which simultaneously accepts multiple connections from mobile devices and some WiFi devices (IoT). The connections need to be maintained once established, with a 30-second timeout if no heartbeat is received. So it is something like the following:
// clientsMap map[string]net.Conn
func someFunction() {
	conn, err := s.listener.Accept()
	// I store the conn in clientsMap
	// so I can access it; for brevity not
	// shown here, then:
	go serve(conn)
}

func serve(conn net.Conn) {
	timeoutDuration := 30 * time.Second
	for {
		// reset the deadline each iteration so a heartbeat
		// keeps the connection alive for another 30 seconds
		conn.SetReadDeadline(time.Now().Add(timeoutDuration))
		msgBuffer := make([]byte, 2048)
		msgBufferLen, err := conn.Read(msgBuffer)
		// do something with the stuff
	}
}
So there is one goroutine for each client. And each client, once connected to the server, is pending on the read. The server then processes the stuff read.
The problem is that I sometimes need to read things off one client, and then pass data to another (Between a mobile device and a WiFi device). I have stored the connections in clientsMap. So I can always access that. But since each client is handled by one goroutine, shall I be passing the data from one client to another by using a channel? But if the goroutine is blocked waiting for a pending read, how do I make it also wait for data from a channel? Or shall I just obtain the connection for the other party from the clientsMap and write to it?

The documentation for net.Conn clearly states:
Multiple goroutines may invoke methods on a Conn simultaneously.
So yes, it is okay to simply Write to the connections from any goroutine. You should take care to issue a single Write call per message you want to send. If a message takes more than one Write call, you risk interleaving it with messages written concurrently by other goroutines. This implies calling Write directly and not via some other API (in other words, don't wrap the connection). For instance, the following would not be safe:
json.NewEncoder(conn).Encode(myValue) // marshal with json.Marshal(myValue), then one Write
io.Copy(conn, src)                    // read with io.ReadAll(src), then one Write

Related

How to deal with backpressure in Go gRPC?

I have a scenario where clients can connect to a server via gRPC, and I would like to implement backpressure on it, meaning that I would like to accept many simultaneous requests (say 10,000), but have only 50 simultaneous threads executing the requests (this is inspired by Apache Tomcat's NIO connector behaviour). I would also like the communication to be asynchronous, in a reactive manner, meaning that the client sends the request but does not wait on it; the server sends the response back later, and the client then executes some function registered to be executed.
How can I do that in GO GRPC? Should I use streams? Is there any example?
The Go gRPC API is a synchronous API; this is how Go usually works. You block in a loop until an event happens, and then you proceed to handle that event. With respect to having more simultaneous workers executing requests, we don't control that on the client side. On the client side, at the application layer above gRPC, you can fork more goroutines, each executing requests. The server side already forks a goroutine for each accepted connection, and even for each stream on the connection, so there is already inherent concurrency on the server side.
Note that there are no threads in Go's model here; Go uses goroutines.
The behavior described is already built into the gRPC server. For example, see this option:
// NumStreamWorkers returns a ServerOption that sets the number of worker
// goroutines that should be used to process incoming streams. Setting this to
// zero (default) will disable workers and spawn a new goroutine for each
// stream.
//
// # Experimental
//
// Notice: This API is EXPERIMENTAL and may be changed or removed in a
// later release.
func NumStreamWorkers(numServerWorkers uint32) ServerOption {
// TODO: If/when this API gets stabilized (i.e. stream workers become the
// only way streams are processed), change the behavior of the zero value to
// a sane default. Preliminary experiments suggest that a value equal to the
// number of CPUs available is most performant; requires thorough testing.
return newFuncServerOption(func(o *serverOptions) {
o.numServerWorkers = numServerWorkers
})
}
The workers are initialized at some point:
// initServerWorkers creates worker goroutines and channels to process incoming
// connections to reduce the time spent overall on runtime.morestack.
func (s *Server) initServerWorkers() {
s.serverWorkerChannels = make([]chan *serverWorkerData, s.opts.numServerWorkers)
for i := uint32(0); i < s.opts.numServerWorkers; i++ {
s.serverWorkerChannels[i] = make(chan *serverWorkerData)
go s.serverWorker(s.serverWorkerChannels[i])
}
}
I suggest you read the server code yourself, to learn more.

When making a Go RPC call, the return type is a channel

Firstly, here is an RPC server. Please notice that one of the return types is a chan:
func (c *Coordinator) FetchTask() (*chan string, error) {
// ...
return &reply, nil
}
Then the client makes an RPC call. Typically the caller will get back a channel whose type is *chan string.
call("Coordinator.FetchTask", &args, &reply)
Here is my question. If the server continuously write into the channel:
for i := 0; i < 100; i++ {
reply<- strconv.Itoa(i)
}
🎈🎈🎈 Can the client continuously read from the channel?
for {
var s string = <-reply
}
I guess the client can't, because the server and the client are not in the same memory; they communicate over the network. Therefore, even though the variable reply is a pointer, it points to different addresses on the server and on the client.
I'm not sure about it. What do you think? Thanks a lot!
🎈🎈 BTW, is there anyway to implement a REAL, stateful channel between server and client?
As you already mentioned, channels are in-memory values, and it is not possible to use them across different apps or systems. gRPC, on the other hand, passes and parses binary data, so passing a channel pointer would at best transmit an address in the server's memory. After the client receives that address, it would point into the local machine's memory, which can contain any sort of data, unfortunately.
If you want to push a group of data (let's say an array of strings) you can use server streaming or bidirectional streaming.
On the other hand, if you want to accomplish some sort of stable, keep-alive connection, you can consider WebSockets too.

Difference between NewChannel vs Request in ssh sftp server

I'm looking at the Go sftp server example code:
https://github.com/pkg/sftp/blob/master/examples/go-sftp-server/main.go
There are section of code which are unclear to me
_, chans, reqs, err := ssh.NewServerConn(nConn, config)
if err != nil {
log.Fatal("failed to handshake", err)
}
fmt.Fprintf(debugStream, "SSH server established\n")
// The incoming Request channel must be serviced.
go ssh.DiscardRequests(reqs)
// Service the incoming Channel channel.
for newChannel := range chans {
...
}
First: with ssh.NewServerConn, if NewChannel (chans) represents a new request for a channel, what is Request (reqs)? What is the difference between chans and reqs here?
Second: why the need for ssh.DiscardRequests(reqs)?
Looking at the documentation for ssh.NewServerConn it appears that it returns the following:
*ServerConn
<-chan NewChannel
<-chan *Request
error
The second returned value, NewChannel, "represents an incoming request to a channel".
The third returned value, Request, "is a request sent outside of the normal stream of data".
This doesn't really answer your questions but it does provide helpful clues as where to look.
So to answer you questions:
chans receives connections that are new to the server. Using the value received from chans, you can either accept and communicate with that connection or just reject it. This can be thought of as multiple people logging into a remote machine via ssh and running multiple sessions.
reqs holds global requests (which is defined here) sent to either the server or client that should not be sent to any specific channel. RFC4254 gives the example of a such a request as "start TCP/IP forwarding for a specific port".
You can see the internal usage of how the ssh package uses the incomingRequests channel here.
The documentation for ssh.NewServerConn explicitly states
The Request and NewChannel channels must be serviced, or the connection will hang.
In the event that this server does receive a global request it needs to be handled appropriately if the request is asking for a reply.
Apart from @will7200's answer, I just want to add a couple of things I found while reading around this.
SSH has global requests, called SSH_MSG_GLOBAL_REQUEST, and channel requests, called SSH_MSG_CHANNEL_REQUEST.
A channel is any specific terminal or session over which data is sent between the ssh server and client.
So reqs here carries the global requests, while all channel-specific requests are delivered on their channel.
Global requests are requests that are not specific to a channel, like TCPKeepAlive (as mentioned in ssh_config) or "start TCP/IP forwarding for a specific port".
And DiscardRequests essentially discards those requests, rejecting any that ask for a reply.

concurrent relaying of data between multiple clients

I am currently working on an application relaying data sent from a mobile phone via a server to a browser using WebSockets. I am writing the server in Go, and I have a one-to-one relation between the mobile phones and the browsers: each phone is paired with exactly one browser through the server.
However, I want multiple sessions to work simultaneously.
I have read that go provides concurrency models that follow the principle "share memory by communicating" using goroutines and channels. I would prefer using the mentioned principle rather than locks using the sync.Mutex primitive.
Nevertheless, I have not been able to map this information to my issue and wanted to ask you if you could suggest a solution.
I had a problem similar to yours: I needed multiple connections, each sending data to the others through multiple servers.
I went with the WAMP protocol
WAMP is an open standard WebSocket subprotocol that provides two application messaging patterns in one unified protocol:
Remote Procedure Calls + Publish & Subscribe.
You can also take a look at a project of mine which is written in go and uses the protocol at hand: github.com/neutrinoapp/neutrino
There's nothing wrong with using a mutex in Go. Here's a solution using a mutex.
Declare a map of endpoints. I assume that a string key is sufficient to identify an endpoint:
type endpoint struct {
	c          *websocket.Conn
	sync.Mutex // protects writes to c
}

var (
	endpoints   = map[string]*endpoint{}
	endpointsMu sync.Mutex // protects endpoints
)

func addEndpoint(key string, c *websocket.Conn) {
	endpointsMu.Lock()
	endpoints[key] = &endpoint{c: c}
	endpointsMu.Unlock()
}

func removeEndpoint(key string) {
	endpointsMu.Lock()
	delete(endpoints, key)
	endpointsMu.Unlock()
}

func sendToEndpoint(key string, message []byte) error {
	endpointsMu.Lock()
	e := endpoints[key]
	endpointsMu.Unlock()
	if e == nil {
		return errors.New("no endpoint")
	}
	e.Lock()
	defer e.Unlock()
	return e.c.WriteMessage(websocket.TextMessage, message)
}
Add the connection to the map with addEndpoint when the client connects. Remove the connection from the map with removeEndpoint when closing the connection. Send messages to a named endpoint with sendToEndpoint.
The Gorilla chat example can be adapted to solve this problem. Change the hub map to connections map[string]*connection, update channels to send a type with connection and key and change the broadcast loop to send to a single connection.

How can I orchestrate concurrent request-response flow?

I'm new to concurrent programming, and have no idea what concepts to start with, so please be gentle.
I am writing a webservice as a front-end to a TCP server. This server listens to the port I give it, and returns the response to the TCP connection for each request.
Here is why I'm writing a web-service front-end for this server:
The server can handle only one request at a time, and I'm trying to make it able to process several inputs concurrently by launching multiple processes and giving each a different port to listen on. For example, I want to launch 30 instances and tell them to listen on ports 20000-20029.
Our team uses PHP, and PHP cannot launch server instances and maintain them concurrently, so I'm trying to write an API they can just send HTTP requests to.
So, here is the structure I have thought of.
I will have a main() function. This function launches the processes concurrently, then starts an HTTP server on port 80 and listens.
I will have an http.Handler that adds the content of each request to a channel.
I will have goroutines, one per server instance, each in an infinite loop.
The code for the function mentioned in item three would be something like this:
func handleRequest(queue chan string) {
	for {
		request := <-queue
		conn, err := connectToServer()
		err = sendRequestToServer(conn, request)
		response, err := readResponseFromServer(conn)
	}
}
So, my http.Handler can simply do something like queue <- request to add the request to the queue, and handleRequest, which has been blocked waiting for the channel to have something, will simply get the request and continue. When done, the loop iterates, execution comes back to request := <-queue, and the same thing continues.
My problem starts in the http.Handler. It makes perfect sense to put requests in a channel, because multiple goroutines are all listening to it. However, how can these goroutines return the result to my http.Handler?
One way is to use a channel, let's call it responseQueue, that all of these goroutines would write to. The problem is that when a response is added to the channel, I don't know which request it belongs to. In other words, when multiple http.Handlers send requests, each executing handler will not know which response the current message in the channel belongs to.
Is there a best practice, or a pattern, for sending data to a goroutine from another goroutine and receiving the data back?
Create a per-request response channel and include it in the value sent to the worker. The handler receives from the channel; the worker sends the result to the channel.
