Golang ethclient.Client - how to make RPC calls?

How to call RPC endpoints using ethclient.Client (https://github.com/ethereum/go-ethereum)?
Some methods don't have wrappers, and, as far as I can see, calling them directly is impossible,
e.g.
client, err := ethclient.Dial(url)
// ok
client.BalanceAt(...)
// incorrect code, trying to access private field `c *rpc.Client`
client.c.Call("debug_traceTransaction", ...)
The only way I can think of is spinning up a totally separate RPC client and keeping both running at all times.
Is this the only way?

The ethclient.Dial function (which you mentioned) uses the rpc.DialContext function underneath, and the package also provides an ethclient.NewClient function to create a new ethclient.Client with an existing rpc connection.
A possible solution is to create the rpc connection yourself, then pass it to ethclient.NewClient, so you're using a single connection but can use both the RPC client itself and the eth client.
Something like this:
rpcClient, err := rpc.DialContext(ctx, url)
if err != nil {
    // handle the dial error
}
ethClient := ethclient.NewClient(rpcClient)

// use the eth client for the wrapped methods
ethClient.BalanceAt(...)

// and the rpc client directly for everything else
rpcClient.Call(...)
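For completeness, here is a minimal self-contained sketch of this pattern; the node URL and transaction hash are placeholders, and debug_traceTransaction must be enabled on the node you talk to:

package main

import (
    "context"
    "encoding/json"
    "log"

    "github.com/ethereum/go-ethereum/ethclient"
    "github.com/ethereum/go-ethereum/rpc"
)

func main() {
    ctx := context.Background()
    rpcClient, err := rpc.DialContext(ctx, "http://localhost:8545") // node URL is an assumption
    if err != nil {
        log.Fatal(err)
    }
    ethClient := ethclient.NewClient(rpcClient)
    _ = ethClient // use this for wrapped methods like BalanceAt

    // Raw call for a method ethclient doesn't wrap. Decoding into
    // json.RawMessage sidesteps defining the full trace result type.
    var trace json.RawMessage
    txHash := "0x..." // hypothetical transaction hash
    if err := rpcClient.CallContext(ctx, &trace, "debug_traceTransaction", txHash); err != nil {
        log.Fatal(err)
    }
    log.Printf("trace: %s", trace)
}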

Related

How can I write a poll or epoll server in Golang?

In Golang, if we want to write a socket server, we can write it like this:
listen, err := net.Listen("tcp", "****")
for {
    conn, err := listen.Accept()
    ...
}
net.Listen() covers creating the socket, binding, and listening, and uses epoll in its implementation.
In C++, if we want to write a server, we can freely choose between select, poll, and epoll, so my question is: in Golang, how can I write a server using select or poll instead of epoll?
If you are familiar with the system calls used during connection creation, you will find the syscall package useful for what you need to do: it contains all the system calls you will need, and you can choose which of them to use.
I also found this example gist which you can reference for making your own implementation using poll or select.
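A minimal sketch of that approach, assuming Linux and a hard-coded port; a real server would add poll/select on the listening descriptor and proper error handling:

//go:build linux

package main

import (
    "log"
    "syscall"
)

// A blocking TCP accept loop built directly on raw system calls.
// Swapping the blocking Accept for polling on the listening fd is
// left to syscall.Select or golang.org/x/sys/unix's Poll.
func main() {
    fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_STREAM, 0)
    if err != nil {
        log.Fatal(err)
    }
    defer syscall.Close(fd)

    addr := syscall.SockaddrInet4{Port: 8080} // 0.0.0.0:8080 is an assumption
    if err := syscall.Bind(fd, &addr); err != nil {
        log.Fatal(err)
    }
    if err := syscall.Listen(fd, syscall.SOMAXCONN); err != nil {
        log.Fatal(err)
    }
    for {
        nfd, _, err := syscall.Accept(fd)
        if err != nil {
            log.Fatal(err)
        }
        go func(fd int) {
            defer syscall.Close(fd)
            buf := make([]byte, 2048)
            n, _ := syscall.Read(fd, buf)
            _, _ = syscall.Write(fd, buf[:n]) // echo the data back
        }(nfd)
    }
}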

How to un-wedge go gRPC bidi-streaming server from the blocking Recv() call?

When serving a bidirectional stream in gRPC in golang, the canonical stream handler looks something like this:
func (s *MyServer) MyBidiRPC(stream somepb.MyServer_MyBidiServer) error {
    for {
        data, err := stream.Recv()
        if err == io.EOF {
            return nil // clean close
        }
        if err != nil {
            return err // some other error
        }
        // do things with data here
    }
}
Specifically, when the handler for the bidi RPC returns, that is the signal to consider the server side closed.
This is a synchronous programming model -- the server stays blocked inside this goroutine (created by the grpc library) while waiting for messages from the client.
Now, I would like to unblock this Recv() call (which ends up calling RecvMsg() on an underlying grpc.ServerStream) and return/close the stream, because the server process has decided that it is done with this client.
Unfortunately, I can find no obvious way to do this:
There's no Close() or CloseSend() or CloseRecv() or Shutdown()-like function on the bidi server interface generated for my service
The context inside the stream, which I can get at with stream.Context(), doesn't expose a user-accessible cancel function
I can't find a way to pass in a context on the "starting side" for a new connection accepted by the grpc.Server, where I could inject my own cancel function
I could close the entire grpc.Server by calling Stop(), but that's not what I want to do -- only this particular client connection (grpc.ServerStream) should be finished.
I could send a message to the client that makes the client in turn shut down the connection. However, this doesn't work if the client has fallen off the network, which would have to be solved with a timeout, and the timeout has to be pretty long to be generally robust. I want it now because I'm impatient, and, more importantly, at scale, dangling unresponsive clients can be a high cost.
I could (perhaps) dig through the grpc.ServerStream with reflection until I find the transportStream, and then dig out the cancel function out of that and call it. Or dig through the stream.Context() with reflection, and make my own cancel function reference to call. Neither of these seem well advised for future maintainers.
But surely these can't be the only options? Deciding that a particular client no longer needs to be connected is not magic space-alien science. How do I close this stream such that the Recv() call un-blocks, from the server process side, without involving a round-trip to the client?
Unfortunately I don't think there is a great way to do what you are asking. Depending on your goal, I think you have two options:
Run Recv in a goroutine and return from the bidi handler when you need it to return. Returning cancels the stream's context and unblocks Recv. This is obviously suboptimal, as it requires care because you now have code executing outside the scope of the handler's execution. It is, however, the closest answer I can seem to find; a sketch follows after these two options.
If you are trying to mitigate the impact of misbehaving clients by instituting timeouts, you might be able to offload the work of this to the framework with KeepaliveEnforcementPolicy and/or KeepaliveParams. This is probably preferable if this aligns with the reason you are hoping to close the connection, but otherwise isn't of much use.
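A minimal sketch of the first option, reusing the question's somepb placeholders (assuming Recv returns *somepb.Data); the s.done channel is a hypothetical signal the server closes when it decides it is finished, and status/codes come from google.golang.org/grpc/status and google.golang.org/grpc/codes:

func (s *MyServer) MyBidiRPC(stream somepb.MyServer_MyBidiServer) error {
    msgs := make(chan *somepb.Data)
    errc := make(chan error, 1) // buffered so the reader goroutine never blocks on it
    go func() {
        for {
            data, err := stream.Recv()
            if err != nil {
                errc <- err
                return
            }
            select {
            case msgs <- data:
            case <-stream.Context().Done():
                return // the handler has returned; stop pumping
            }
        }
    }()
    for {
        select {
        case data := <-msgs:
            _ = data // do things with data here
        case err := <-errc:
            if err == io.EOF {
                return nil // clean close from the client
            }
            return err
        case <-s.done: // the server is done with this client
            // Returning cancels the stream's context, which unblocks
            // the pending Recv in the goroutine above so it can exit.
            return status.Error(codes.Aborted, "server closing stream")
        }
    }
}

For the keepalive route, the server options live in google.golang.org/grpc and google.golang.org/grpc/keepalive, roughly:

srv := grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
    Time:    30 * time.Second, // ping an idle client after 30s
    Timeout: 10 * time.Second, // drop the connection if no ack within 10s
}))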

Relay data between two different tcp clients in golang

I'm writing a TCP server which simultaneously accepts multiple connections from mobile devices and some WiFi devices (IoT). The connections need to be maintained once established, with a 30 second timeout if no heartbeat is received. So it is something like the following:
// clientsMap map[string]net.Conn
func someFunction() {
    conn, err := s.listener.Accept()
    // I store the conn in clientsMap
    // so I can access it; for brevity not
    // shown here, then:
    go serve(conn)
}

func serve(conn net.Conn) {
    timeoutDuration := 30 * time.Second
    conn.SetReadDeadline(time.Now().Add(timeoutDuration))
    for {
        msgBuffer := make([]byte, 2048)
        msgBufferLen, err := conn.Read(msgBuffer)
        // do something with the stuff
    }
}
So there is one goroutine for each client. And each client, once connected to the server, is pending on the read. The server then processes the stuff read.
The problem is that I sometimes need to read data from one client and then pass it to another (between a mobile device and a WiFi device). I have stored the connections in clientsMap, so I can always access them. But since each client is handled by its own goroutine, should I pass the data from one client to another using a channel? And if the goroutine is blocked on a pending read, how do I make it also wait for data from a channel? Or should I just obtain the other party's connection from clientsMap and write to it?
The documentation for net.Conn clearly states:
Multiple goroutines may invoke methods on a Conn simultaneously.
So yes, it is okay to simply Write to the connections. You should take care to issue a single Write call per message you want to send. If you call Write more than once you risk interleaving messages from different mobile devices. This implies calling Write directly and not via some other API (in other words don't wrap the connection). For instance, the following would not be safe:
json.NewEncoder(conn).Encode(myValue) // use json.Marshal(myValue) instead
io.Copy(conn, src) // use io.ReadAll(src) instead
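A minimal sketch of the direct-write approach under the question's setup; the clientsMap, its guarding mutex, and the id key are assumptions standing in for the question's storage:

var (
    clientsMu  sync.Mutex
    clientsMap = map[string]net.Conn{}
)

// relayTo looks up the destination connection and sends the whole
// message with one Write call, so messages from concurrent senders
// cannot interleave.
func relayTo(id string, msg []byte) error {
    clientsMu.Lock()
    dst := clientsMap[id]
    clientsMu.Unlock()
    if dst == nil {
        return fmt.Errorf("no client %q", id)
    }
    _, err := dst.Write(msg) // a single Write per message
    return err
}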

concurrent relaying of data between multiple clients

I am currently working on an application that relays data sent from a mobile phone, via a server, to a browser using WebSockets. I am writing the server in Go, and there is a one-to-one relation between the mobile phones and the browsers.
However, I want multiple sessions to work simultaneously.
I have read that go provides concurrency models that follow the principle "share memory by communicating" using goroutines and channels. I would prefer using the mentioned principle rather than locks using the sync.Mutex primitive.
Nevertheless, I have not been able to map this information to my issue and wanted to ask you if you could suggest a solution.
I had a problem similar to yours: I needed multiple connections which each send data to each other through multiple servers.
I went with the WAMP protocol
WAMP is an open standard WebSocket subprotocol that provides two application messaging patterns in one unified protocol:
Remote Procedure Calls + Publish & Subscribe.
You can also take a look at a project of mine which is written in go and uses the protocol at hand: github.com/neutrinoapp/neutrino
There's nothing wrong with using a mutex in Go. Here's a solution using a mutex.
Declare a map of endpoints. I assume that a string key is sufficient to identify an endpoint:
type endpoint struct {
    c *websocket.Conn
    sync.Mutex // protects writes to c
}

var (
    endpoints   = map[string]*endpoint{}
    endpointsMu sync.Mutex // protects endpoints
)

func addEndpoint(key string, c *websocket.Conn) {
    endpointsMu.Lock()
    endpoints[key] = &endpoint{c: c}
    endpointsMu.Unlock()
}

func removeEndpoint(key string) {
    endpointsMu.Lock()
    delete(endpoints, key)
    endpointsMu.Unlock()
}

func sendToEndpoint(key string, message []byte) error {
    endpointsMu.Lock()
    e := endpoints[key]
    endpointsMu.Unlock()
    if e == nil {
        return errors.New("no endpoint")
    }
    e.Lock()
    defer e.Unlock()
    return e.c.WriteMessage(websocket.TextMessage, message)
}
Add the connection to the map with addEndpoint when the client connects. Remove the connection from the map with removeEndpoint when closing the connection. Send messages to a named endpoint with sendToEndpoint.
The Gorilla chat example can be adapted to solve this problem: change the hub map to connections map[string]*connection, update the channels to carry a type holding a connection and a key, and change the broadcast loop to send to a single connection. A sketch of the adapted hub follows.
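A rough sketch of that adaptation, assuming the hub loop looks like the Gorilla chat example's (the connection type with its send channel comes from that example; the message type here is invented for illustration):

type message struct {
    key  string
    data []byte
}

type hub struct {
    connections map[string]*connection // keyed endpoints instead of a set
    send        chan message           // replaces the broadcast channel
}

func (h *hub) run() {
    for m := range h.send {
        // Deliver to exactly one connection instead of broadcasting.
        if c, ok := h.connections[m.key]; ok {
            c.send <- m.data
        }
    }
}

The register/unregister plumbing from the original example carries over unchanged, apart from keying the map by string.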

Determining requester's IP address in RPC call

In Go, using the standard net/rpc functionality, I would like to determine the IP address that an inbound RPC request is coming from. The underlying HTTP functionality appears to provide this in the http.Request object, but I cannot see any way of getting at that from the default RPC handler (set using rpc.HandleHTTP).
Is there some hidden mechanism for getting at the underlying http.Request, or do I have to do something fancier with setting up a different HTTP responder?
As far as I know, it is not possible to grab the address from somewhere in the default server.
The service call method, which calls the request receiving function, does not provide any access to the remote data stored in the codec.
If http handlers could be registered twice (which they can't), you could have overwritten the DefaultRPCPath for the HTTP Handler setup by HandleHTTP. But that's simply not possible today.
What you can do, without much fuss, is to build an RPC server based on the default one with your own ServeHTTP method:
import (
"log"
"net"
"net/http"
"net/rpc"
)
type myRPCServer struct {
*rpc.Server
}
func (r *myRPCServer) ServeHTTP(w http.ResponseWriter, req *http.Request) {
log.Println(req.RemoteAddr)
r.Server.ServeHTTP(w, req)
}
func (r *myRPCServer) HandleHTTP(rpcPath, debugPath string) {
http.Handle(rpcPath, r)
}
func main() {
srv := &myRPCServer{rpc.NewServer()}
srv.HandleHTTP(rpc.DefaultRPCPath, rpc.DefaultDebugPath)
// ...http listen code...
}
The downside of this is, of course, that you can't use rpc.Register anymore. You have to write srv.Register.
Edit: I forgot that you'd need to write your own HandleHTTP as well. The reason for this is, that if you embed the RPC server and you write srv.HandleHTTP it is called on the embedded instance, passing the embedded instance to http.Handle(), ignoring your own definition of ServeHTTP. This has the drawback, that you won't have the ability to debug your RPC server using the debug path, as the server's HandleHTTP uses a private debug handler (rpc.debugHTTP) which you can't access.
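To round this out, a hedged usage sketch building on the myRPCServer type above; the Echo service and the port are invented for illustration:

type Echo struct{}

type Args struct{ Msg string }

// Hello is a hypothetical exported RPC method, just so there is
// something to register.
func (e *Echo) Hello(args *Args, reply *string) error {
    *reply = "hello, " + args.Msg
    return nil
}

func main() {
    srv := &myRPCServer{rpc.NewServer()}
    if err := srv.Register(new(Echo)); err != nil { // srv.Register, not rpc.Register
        log.Fatal(err)
    }
    srv.HandleHTTP(rpc.DefaultRPCPath, rpc.DefaultDebugPath)
    log.Fatal(http.ListenAndServe(":1234", nil)) // port is an assumption
}

Every request to DefaultRPCPath now passes through the custom ServeHTTP, logging req.RemoteAddr before the embedded server handles the call.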
You can also use https://github.com/valyala/gorpc instead of net/rpc, which passes client address to RPC server handler - see http://godoc.org/github.com/valyala/gorpc#HandlerFunc for details.
The net/rpc package is at a higher level of abstraction than TCP or HTTP. Since it can use multiple codecs, it doesn't make sense for it to offer a way to get at the IP address of the inbound RPC. It's theoretically possible someone could implement a codec that talks over unix sockets instead, or over radio transmitters.
If you want access to specifics of the transport layer, you will have to drop a level in the stack and use net or net/http directly to make your RPC service.
It seems that there is currently no way to do this from within the RPC method.
See this link for more info
Here is a summary.
Q:
Right now the RemoteAddr() method can be called to get the RPC client's address, but only on the net.Conn when the client dials to the server. Suppose your server has multiple clients connected and each of these clients is calling an RPC exported method: is there a way to get the caller's remote address from inside the RPC method?
func (t *Type) Method(args *Args, reply *string) error {
    // something like
    *reply = Caller.RemoteAddr().String()
    // who called the method now?
    return nil
}
A:
I'm skeptical. It would take an API change (not necessarily a backwards incompatible one, but still a bit of a redesign) to supply this.
