I was testing a client-server connection with a gRPC client written in C# and a bunch of servers (written in C++, C#, Rust, and Go). When testing locally everything went fine (average Go response around 0.12ms), but when I test it over the local network it gets really slow, like REALLY slow: the average time jumps to 40ms per request!
To be clear: I am using a simple HelloWorld proto with the simplest connection possible. The other servers take about 1ms per request, but Go takes around 40ms.
My Go server code:
package main

import (
	"context"
	pb "descriptions"
	"log"
	"net"

	"google.golang.org/grpc"
)

type server struct{}

// SayHello implements helloworld.GreeterServer
func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
	//log.Printf("Received: %v", in.Name)
	return &pb.HelloReply{Message: ""}, nil
}

func main() {
	lis, err := net.Listen("tcp", "0.0.0.0:50051")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	s := grpc.NewServer()
	log.Printf("Server listening on: %s", lis.Addr().String())
	pb.RegisterGreeterServer(s, &server{})
	if err := s.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}
I do not suspect a client-side issue, because the client works well with the other servers. Has anyone had the same issue with Go? Please let me know!
I also wondered whether it might be an HTTP/1.1 issue, but gRPC uses HTTP/2, so I assume that is already in effect when running this code.
According to the benchmark here, Go and C++ shouldn't show such a big difference (over Ethernet). Could you please file an issue in the grpc-go repo, https://github.com/grpc/grpc-go/issues/new? It would also be very helpful if you could provide more context about how the client side works. Thanks!
Related
I wrote this simple proxy server with the net package, which I expected to proxy connections from a local server at port 8001 to any incoming connections on port 8000. When I try it in the browser, I get a "refused to connect" error.
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	l, err := net.Listen("tcp", "localhost:8000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := l.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go proxy(conn)
	}
}

func proxy(conn net.Conn) {
	defer conn.Close()
	upstream, err := net.Dial("tcp", "localhost:8001")
	if err != nil {
		log.Print(err)
		return
	}
	defer upstream.Close()
	io.Copy(upstream, conn)
	io.Copy(conn, upstream)
}
But if I change the following lines in the proxy function
io.Copy(upstream, conn)
io.Copy(conn, upstream)
to
go io.Copy(upstream, conn)
io.Copy(conn, upstream)
then it works as expected. Shouldn't io.Copy(upstream, conn) block io.Copy(conn, upstream)? As I understand it, conn should be written to only after upstream has responded. And how does running io.Copy(upstream, conn) in a goroutine solve this?
Shouldn't io.Copy block?
Yes. "Copy copies from src to dst until either EOF is reached on src or an error occurs.". Since this is a network connection, this means it returns after the client closes the connection. If and when the client closes the connection depends on the application protocol. In HTTP it may never happen, for instance.
How does having a goroutine solve this?
Because then the second Copy can execute while the client is still connected, allowing the upstream to write its response. Without the goroutine nothing is reading from the upstream, so it is likely blocked on its write call.
The client (presumably) waits for a response, the proxy waits for the client to close the connection, and the upstream waits for the proxy to start reading the response: no one can make progress and you're in a deadlock.
I have recently started working in Go on a project where I have to use gRPC for push notifications from my server to an Android device.
I have created a simple multiplexer, mux := http.NewServeMux(), which works fine with my server code:
serverWeb := http.Server{
	Addr: constants.ServerIPWeb,
	//Handler: grpcHandlerFunc(grpcServer, mux),
	Handler: mux,
}
serverWeb.ListenAndServe()
Following the examples on gRPC.io, I have also created a simple gRPC client/server as a standalone project connected to my Android device, without any TLS configuration, and it's working fine.
type server struct{}

func (s *server) DeviceData(ctx context.Context, req *pb.GetDeviceRequest) (*pb.SetDeviceResponse, error) {
	util.P("Device is: ", req) // simple fmt.Printf(a ...interface{})
	return &pb.SetDeviceResponse{Message: "Success"}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatalf("Failed to listen: %v", err)
	}
	s := grpc.NewServer()
	pb.RegisterDeviceInfoServer(s, &server{})
	reflection.Register(s)
	if err := s.Serve(lis); err != nil {
		log.Fatalf("Failed to serve: %v", err)
	}
}
The problem is that I could not connect my current gRPC server to my serverWeb struct above. I have tried Handler: grpcHandlerFunc(grpcServer, mux), which routes requests through the following code, as explained here (gRPC with open API):
func grpcHandlerFunc(grpcServer *grpc.Server, otherHandler http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// TODO(tamird): point to merged gRPC code rather than a PR.
		// This is a partial recreation of gRPC's internal checks https://github.com/grpc/grpc-go/pull/514/files#diff-95e9a25b738459a2d3030e1e6fa2a718R61
		util.P("ResponseFunction: GRPC: ", r.URL) // simple fmt.Printf(a ...interface{})
		if r.ProtoMajor == 2 && strings.Contains(r.Header.Get("Content-Type"), "application/grpc") {
			grpcServer.ServeHTTP(w, r)
		} else {
			otherHandler.ServeHTTP(w, r)
		}
	})
}
The above code serves the handlers registered on my mux, i.e. the plain HTTP side works, but it won't serve the gRPC services, i.e. it won't connect to my Android device.
I believe it requires a TLS connection, but I don't want to get into securing my web and Android code, since that would require changing all the code on both the Android and web sides, which I want to avoid.
So I am looking for a way to connect this grpcServer to my current serverWeb struct without any TLS configuration.
More on my research:
I have also found a repo called cmux (connection mux) which should do the same job, but I don't understand how I would use it with my current serverWeb struct, as I have a fully functional web app running on it and I just need to add gRPC to my existing code.
A gRPC service can be served without TLS by running it on its own port in a goroutine.
Create a function that serves your gRPC server:
func RunGRPC() {
	grpcServer := grpc.NewServer()
	pb.RegisterGlassInfoServer(grpcServer, &GlassApkBean{})
	pb.RegisterDeviceInfoServer(grpcServer, &DeviceInfoBean{}) // for multiple services
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("Error: %v\n", err.Error())
	}
	grpcServer.Serve(lis)
}
Then call this function from main to receive client requests:
func main() {
	// ... your existing code ...

	go RunGRPC() // run the gRPC server in a goroutine

	serverWeb := http.Server{
		Addr:    "localhost:8080",
		Handler: mux,
	}
	serverWeb.ListenAndServe()
}
requestHandler := func(ctx *fasthttp.RequestCtx) {
	time.Sleep(time.Second * time.Duration(10))
	fmt.Fprintf(ctx, "Hello, world! Requested path is %q", ctx.Path())
}
s := &fasthttp.Server{
	Handler: requestHandler,
}
if err := s.ListenAndServe("127.0.0.1:82"); err != nil {
	log.Fatalf("error in ListenAndServe: %s", err)
}
With multiple requests, the total time is something like X*10s.
Is fasthttp single-process?
After two days...
I am sorry for this question; I described it poorly. My problem was caused by the browser: the browser issues requests for the same URL sequentially, and this misled me into thinking the fasthttp web server handles requests sequentially.
I think instead of "is fasthttp single-process?", you're really asking whether fasthttp handles client requests concurrently or not.
I'm pretty sure that any server package (including fasthttp) will handle client requests concurrently. You should write a test/benchmark instead of manually accessing the server through several browsers. The following is an example of such test code:
package main_test

import (
	"io/ioutil"
	"net/http"
	"sync"
	"testing"
	"time"
)

func doRequest(uri string) error {
	resp, err := http.Get(uri)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = ioutil.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	return nil
}

func TestGet(t *testing.T) {
	N := 1000
	wg := sync.WaitGroup{}
	wg.Add(N)
	start := time.Now()
	for i := 0; i < N; i++ {
		go func() {
			if err := doRequest("http://127.0.0.1:82"); err != nil {
				t.Error(err)
			}
			wg.Done()
		}()
	}
	wg.Wait()
	t.Logf("Total duration for %d concurrent request(s) is %v", N, time.Since(start))
}
And the result (on my machine) is
fasthttp_test.go:42: Total duration for 1000 concurrent request(s) is 10.6066411s
You can see that the answer to your question is no: it handles the requests concurrently.
UPDATE:
In case the requested URL is the same, your browser may perform the request sequentially. See Multiple Ajax requests for same URL. This explains why the response times are X*10s.
According to the gRPC documentation, deadlines can be specified by clients to determine how long the client will wait on the server before exiting with a DEADLINE_EXCEEDED error. The documentation mentions that different languages have different implementations and that some languages do not have default values.
Indeed, a quick CTRL+F for "deadline" on the Go gRPC documentation reveals no results. What I did discover was a WithTimeout on the dialer for the TCP connection.
Implemented as follows (from the helloworld example):
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/net/context"
	"google.golang.org/grpc"
	pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

const (
	address     = "localhost:50051"
	defaultName = "world"
	deadline    = 20
)

func main() {
	// Set up a connection to the server with a timeout.
	conn, err := grpc.Dial(address, grpc.WithInsecure(), grpc.WithTimeout(time.Duration(deadline)*time.Second))
	if err != nil {
		log.Fatalf("did not connect: %v", err)
	}
	defer conn.Close()
	c := pb.NewGreeterClient(conn)

	// Contact the server and print out its response.
	name := defaultName
	if len(os.Args) > 1 {
		name = os.Args[1]
	}
	r, err := c.SayHello(context.Background(), &pb.HelloRequest{Name: name})
	if err != nil {
		log.Fatalf("could not greet: %v", err)
	}
	log.Printf("Greeting: %s", r.Message)
}
The code raises an error only if the client cannot connect within 20 seconds. The output will look something like this:
2016/05/24 09:02:54 grpc: Conn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::1]:3265: getsockopt: connection refused"; Reconnecting to "localhost:3265"
2016/05/24 09:02:54 Failed to dial localhost:3265: grpc: timed out trying to connect; please retry.
2016/05/24 09:02:54 could not greet: rpc error: code = 2 desc = grpc: the client connection is closing
As noted in the question title, the system I'm working with is peer-to-peer, so there is no central, always-up server, and therefore the retry behavior that gRPC implements is wonderful. However, I'm actually looking for deadlines, because if the remote does connect but the server takes more than 20 seconds to respond, no error is raised under WithTimeout.
A complete win for me would be a timeout/deadline system where:
if the client cannot connect, an error is returned after timeout
if the client connects, but the server doesn't respond before timeout, an error is returned.
if the client connects, but the connection drops before timeout, an error is returned.
My feeling, though, is that I will need some combination of connection management and gRPC deadline management. Does anyone know how to implement deadlines in Go?
According to the WithTimeout example in the context package documentation:
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	// Pass a context with a timeout to tell a blocking function that it
	// should abandon its work after the timeout elapses.
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	select {
	case <-time.After(1 * time.Second):
		fmt.Println("overslept")
	case <-ctx.Done():
		fmt.Println(ctx.Err()) // prints "context deadline exceeded"
	}
}
You can change the helloworld example client code to use a 100ms per-call deadline:
ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
defer cancel()
r, err := c.SayHello(ctx, &pb.HelloRequest{Name: name})
if err != nil {
	log.Fatalf("could not greet: %v", err)
}
log.Printf("Greeting: %s", r.Message)
You should look at the context package more closely; gRPC was built with contexts as a fundamental part of it. You might need the grpc.Dial context and the client.SayHello context to be derived from related information, but that should be fairly straightforward.
I am playing with Go and OrientDB to test them. I have written a tiny web app which, upon a request, fetches a single document from a local OrientDB instance and returns it. When I benchmark this app with Apache Bench at a concurrency above 1, I get the following error:
2015/04/08 19:24:07 http: panic serving [::1]:57346: Get http://localhost:2480/document/t1/9:1441: EOF
When I benchmark OrientDB itself, it runs perfectly OK at any concurrency factor.
Also, when I change the URL from this document to anything else (another program written in Go, some internet site, etc.), the app runs OK.
Here is the code:
func main() {
	fmt.Println("starting ....")
	var aa interface{}
	router := gin.New()
	router.GET("/", func(c *gin.Context) {
		ans := getdoc("http://localhost:2480/document/t1/9:1441")
		json.Unmarshal(ans, &aa)
		c.JSON(http.StatusOK, aa)
	})
	router.Run(":3000")
}

func getdoc(addr string) []byte {
	client := new(http.Client)
	req, err := http.NewRequest("GET", addr, nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("admin", "admin")
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("oops", resp, err)
		panic(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	return body
}
Thanks in advance.
The keepalive connections are getting closed on you for some reason. You might be overwhelming the server, or going past the max number of connections the database can handle.
Also, the current http.Transport connection pool doesn't work well with synthetic benchmarks that make connections as fast as possible, and can quickly exhaust available file descriptors or ports (issue/6785).
To test this, I would set Request.Close = true to prevent the Transport from using the keepalive pool. If that works, one way to keep using keepalive is to specifically check for io.EOF and retry the request, possibly with some backoff delay.