How to handle GRPC Golang High CPU Usage - go

We are seeing suspiciously high CPU usage in the Go function we use to stream our transactions over gRPC. The function is simple: when we receive an order ID change message from the frontend, we consume it and stream the changes back.
Here is the code:
func (consumer OrderChangesConsumer) Serve(message string) {
	response := messages.OrderChanges{}
	if err := json.Unmarshal([]byte(message), &response); err != nil {
		logData := map[string]interface{}{
			"message": message,
		}
		seelog.Error(commonServices.GenerateLog("parse_message_error", err.Error(), &logData))
	}
	if response.OrderID > 0 {
		services.PublishChanges(response.OrderID, &response)
	}
}

// PublishChanges sends the order change message to the changes channel.
func PublishChanges(orderID int, orderChanges *messages.OrderChanges) {
	orderMutex.RLock()
	defer orderMutex.RUnlock()
	orderChan, ok := orderChans[orderID]
	if !ok {
		return
	}
	orderChan <- orderChanges
}
How can we improve this, and how can we test best practices for this case?

I would update your PublishChanges code to the following and see if that helps:
// PublishChanges sends the order change message to the changes channel.
func PublishChanges(orderID int, orderChanges *messages.OrderChanges) {
	orderMutex.RLock()
	orderChan, ok := orderChans[orderID]
	// Release the read lock before the (potentially blocking) channel send,
	// so writers to orderChans are not held up while we wait on a receiver.
	orderMutex.RUnlock()
	if !ok {
		return
	}
	orderChan <- orderChanges
}
You might also want to consider using sync.Map for an easier-to-use concurrent map.
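For illustration, a minimal sketch of what that could look like, assuming orderChans is currently a map[int]chan *messages.OrderChanges guarded by orderMutex:

var orderChans sync.Map // replaces map[int]chan *messages.OrderChanges + orderMutex

// PublishChanges sends the order change message to the changes channel.
func PublishChanges(orderID int, orderChanges *messages.OrderChanges) {
	v, ok := orderChans.Load(orderID)
	if !ok {
		return
	}
	v.(chan *messages.OrderChanges) <- orderChanges
}

// Wherever subscribers are registered/unregistered, use Store and Delete:
// orderChans.Store(orderID, ch)
// orderChans.Delete(orderID)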

Related

Reading/Writing set of records in different go routines - what datastructure to use

I have a chat application using two goroutines. I would like to add/remove records to/from the list in one goroutine and read the same list from the other.
As I am pretty new to Go, I am a bit puzzled about which data structure to use. I thought of slices, but I'm not sure I'm using them the right way:
func listener(addr *net.UDPAddr, clients *[]*net.UDPAddr, messages chan clientMessage) {
	for {
		*clients = append(*clients, otherAddr)
	}
}

func sender(messages chan clientMessage, clients *[]*net.UDPAddr) {
	for {
		message := <-messages
		for _, client := range *clients {
			fmt.Printf("Message %s sent to %s\n", message.message, client.String())
		}
	}
}

func main() {
	var clients []*net.UDPAddr
	go listener(s, &clients, messageCh)
	go sender(messageCh, &clients)
}
Since the listener only needs to write and the sender only needs to read, this is a good example of using channels to communicate. The flow would look like the following:
The listener posts the new client to the channel.
The sender receives the new client and updates its local slice of clients.
It will be a lot cleaner and safer this way, since the listener will not be able to "accidentally" read and the sender will not be able to "accidentally" write. The listener can also close the channel to indicate to the sender that it's done.
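As a minimal sketch of that flow, reusing the question's clientMessage type and assuming otherAddr comes from the listener's UDP read:

func listener(addr *net.UDPAddr, newClients chan<- *net.UDPAddr) {
	for {
		// ... read from the UDP socket, discover otherAddr ...
		newClients <- otherAddr // hand the new client over to the sender
	}
}

func sender(messages chan clientMessage, newClients <-chan *net.UDPAddr) {
	var clients []*net.UDPAddr // owned by this goroutine only, so no locking needed
	for {
		select {
		case client := <-newClients:
			clients = append(clients, client)
		case message := <-messages:
			for _, client := range clients {
				fmt.Printf("Message %s sent to %s\n", message.message, client.String())
			}
		}
	}
}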
A slice looks OK for the scenario, but a mutex is needed to prevent concurrent reads and writes to the slice.
Let's bundle the slice and mutex together in a struct and add methods for the two operations: add and enumerate.
type clients struct {
	mu     sync.Mutex
	values []*net.UDPAddr
}

// add adds a new client.
func (c *clients) add(value *net.UDPAddr) {
	c.mu.Lock()
	c.values = append(c.values, value)
	c.mu.Unlock()
}

// do calls fn for each client.
func (c *clients) do(fn func(*net.UDPAddr) error) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	for _, value := range c.values {
		if err := fn(value); err != nil {
			return err
		}
	}
	return nil
}
Use it like this:
func listener(addr *net.UDPAddr, clients *clients, messages chan clientMessage) {
	for {
		clients.add(otherAddr)
	}
}

func sender(messages chan clientMessage, clients *clients) {
	for {
		message := <-messages
		clients.do(func(client *net.UDPAddr) error {
			fmt.Printf("Message %s sent to %s\n", message.message, client.String())
			return nil
		})
	}
}

func main() {
	var clients clients
	go listener(s, &clients, messageCh)
	go sender(messageCh, &clients)
}

How to pass byte slice between go routines using channels

I have a function that reads data from a source and sends it to a destination. The source and destination could be anything; let's say for this example the source is a database (MySQL, PostgreSQL, ...) and the destination is a distributed queue (ActiveMQ, Kafka, ...). Messages are stored as bytes.
This is the main function. The idea is that it spins up a new goroutine and waits for messages to be returned for further processing.
type Message []byte

func (p *ProcessorService) Continue(dictId int) {
	level.Info(p.logger).Log("process", "message", "dictId", dictId)
	retrieved := make(chan Message)
	go func() {
		err := p.src.Read(retrieved, strconv.Itoa(p.dictId))
		if err != nil {
			level.Error(p.logger).Log("process", "read", "message", "err", err)
		}
	}()
	for r := range retrieved {
		go func(message Message) {
			level.Info(p.logger).Log("message", message)
			if len(message) > 0 {
				if err := p.dst.sendToQ(message); err != nil {
					level.Error(p.logger).Log("failed", "during", "persist", "err", err)
				}
			} else {
				level.Error(p.logger).Log("failed")
			}
		}(r)
	}
}
and this is the read function itself:
func (s *Storage) Read(out chan<- Message, opt ...string) error {
	// I just skip some basic database read operations here,
	// but the idea is simple: read data from the table / file row by row.
	for _, value := range dataFromDB {
		message, err := value.row
		if err == nil {
			out <- message
		} else {
			errorf("Unable to get data %v", err)
			out <- make([]byte, 0)
		}
	}
	close(out)
	return nil
}
As you can see, communication is done via the out chan<- Message channel.
My concern is in the Continue function, specifically here:
for r := range retrieved {
	go func(message Message) {
		// basically here message and r are pointing to the same underlying array
	}(r)
}
When data is received, r is a byte slice (the Message type). It is then passed to go func(message Message). Everything is passed by value in Go, so r is copied into the anonymous function's parameter, but that copy still points to the same underlying slice data. I am curious whether this could be a problem: while p.dst.sendToQ(message) is executing, the read function could send something to the out channel, causing the slice's underlying data to be overwritten with new information. Should I copy the byte slice r into a new byte slice before passing it to the anonymous function, so the underlying arrays are different? I tested it but couldn't actually trigger this behavior. I'm not sure if I'm being paranoid or really have to worry about it.
The message in p.dst.sendToQ(message) is the same slice as value.row when you get the data from the DB. So, as long as each value.row has a different underlying array, you should be good. I suggest you check the source and make sure it does not reuse a common byte array and keep rewriting into it.
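If you cannot verify that, a defensive copy before handing the slice to the goroutine removes the aliasing concern entirely. A minimal sketch against the question's Continue loop:

for r := range retrieved {
	msg := make(Message, len(r))
	copy(msg, r) // msg now has its own underlying array
	go func(message Message) {
		// safe to use message even if the producer reuses r's buffer
		if err := p.dst.sendToQ(message); err != nil {
			level.Error(p.logger).Log("failed", "during", "persist", "err", err)
		}
	}(msg)
}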

Graceful shutdown of gRPC downstream

Using the following protocol buffer definition:
syntax = "proto3";
package pb;
message SimpleRequest {
int64 number = 1;
}
message SimpleResponse {
int64 doubled = 1;
}
// All the calls in this serivce preform the action of doubling a number.
// The streams will continuously send the next double, eg. 1, 2, 4, 8, 16.
service Test {
// This RPC streams from the server only.
rpc Downstream(SimpleRequest) returns (stream SimpleResponse);
}
I'm able to successfully open a stream, and continuously get the next doubled number from the server.
My Go code for running this looks like:
ctxDownstream, cancel := context.WithCancel(ctx)
downstream, err := testClient.Downstream(ctxDownstream, &pb.SimpleRequest{Number: 1})
for {
	responseDownstream, err := downstream.Recv()
	if err != io.EOF {
		println(fmt.Sprintf("downstream response: %d, error: %v", responseDownstream.Doubled, err))
		if responseDownstream.Doubled >= 32 {
			break
		}
	}
}
cancel() // !!This is not a graceful shutdown
println(fmt.Sprintf("%v", downstream.Trailer()))
The problem I'm having is that using a context cancellation means my downstream.Trailer() response is empty. Is there a way to gracefully close this connection from the client side and still receive downstream.Trailer()?
Note: if I close the downstream connection from the server side, my trailers are populated. But I have no way of instructing my server side to close this particular stream. So there must be a way to gracefully close a stream client side.
Thanks.
As requested, some server code:
func (b *binding) Downstream(req *pb.SimpleRequest, stream pb.Test_DownstreamServer) error {
	request := req
	r := make(chan *pb.SimpleResponse)
	e := make(chan error)
	ticker := time.NewTicker(200 * time.Millisecond)
	defer func() { ticker.Stop(); close(r); close(e) }()
	go func() {
		defer func() { recover() }()
		for {
			select {
			case <-ticker.C:
				response, err := b.Endpoint(stream.Context(), request)
				if err != nil {
					e <- err
				}
				r <- response
			}
		}
	}()
	for {
		select {
		case err := <-e:
			return err
		case response := <-r:
			if err := stream.Send(response); err != nil {
				return err
			}
			request.Number = response.Doubled
		case <-stream.Context().Done():
			return nil
		}
	}
}
You will still need to populate the trailer with some information. I use the grpc.StreamServerInterceptor to do this.
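As a rough sketch of that idea (the metadata key and the server wiring here are placeholders, not something prescribed by gRPC):

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
)

// trailerStreamInterceptor attaches trailer metadata after every streaming handler returns.
func trailerStreamInterceptor(srv interface{}, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
	err := handler(srv, ss)
	ss.SetTrailer(metadata.Pairs("handler-status", status.Code(err).String()))
	return err
}

// Wired up when the server is created:
// s := grpc.NewServer(grpc.StreamInterceptor(trailerStreamInterceptor))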
According to the grpc go documentation
Trailer returns the trailer metadata from the server, if there is any.
It must only be called after stream.CloseAndRecv has returned, or
stream.Recv has returned a non-nil error (including io.EOF).
So if you want to read the trailer in the client, try something like this:
ctxDownstream, cancel := context.WithCancel(ctx)
defer cancel()
for {
	...
	// on error or EOF
	break
}
println(fmt.Sprintf("%v", downstream.Trailer()))
Break out of the infinite loop when there is an error and print the trailer; cancel will be called at the end of the function, since it is deferred.
I can't find a reference that explains it clearly, but this doesn't appear to be possible.
On the wire, grpc-status is followed by the trailer metadata when the call completes normally (i.e. the server exits the call).
When the client cancels the call, neither of these are sent.
It seems that gRPC treats call cancellation as a quick abort of the RPC, not much different from the socket being dropped.
Adding a "cancel message" via request streaming works; the server can pick this up and cancel the stream from its end and trailers will still get sent:
message SimpleRequest {
  oneof RequestType {
    int64 number = 1;
    bool cancel = 2;
  }
}
....
rpc Downstream(stream SimpleRequest) returns (stream SimpleResponse);
Although this does add a bit of complication to the code.
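On the client, that might look roughly like the sketch below. The SimpleRequest_Number / SimpleRequest_Cancel wrapper names are what protoc-gen-go typically generates for the oneof, so check them against your generated code, and the server handler would also need its own Recv loop to notice cancel and return:

stream, err := testClient.Downstream(ctx)
if err != nil {
	log.Fatal(err)
}
// Kick off the doubling stream with the initial number.
if err := stream.Send(&pb.SimpleRequest{RequestType: &pb.SimpleRequest_Number{Number: 1}}); err != nil {
	log.Fatal(err)
}
sentCancel := false
for {
	resp, err := stream.Recv()
	if err != nil {
		break // io.EOF once the server closes its side; trailers arrive with that close
	}
	if resp.Doubled >= 32 && !sentCancel {
		// Ask the server to end the stream instead of cancelling the context.
		stream.Send(&pb.SimpleRequest{RequestType: &pb.SimpleRequest_Cancel{Cancel: true}})
		stream.CloseSend()
		sentCancel = true
	}
}
fmt.Printf("%v\n", stream.Trailer())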

Go lang Redis PubSub in different go routes for publish and subscribe

I'm new to the Go programming language, and I have a requirement to create a Redis PubSub with websockets.
My reference code is here
https://github.com/vortec/orchestrate
I'm using the following libraries:
"golang.org/x/net/websocket"
"github.com/garyburd/redigo/redis"
Everything is working for me this way, but I don't understand what websocket.Handler(handleWSConnection) is here.
I need two different routes, /ws-subscribe and /ws-publish. Is there anything wrong with this concept?
Doubts
Can I do it this way: http.HandleFunc("/ws", handleWSConnection)? I tried, but I'm getting "not enough arguments in call to handleWSConnection".
Is there any way to call handleWSConnection() as a normal function?
Any suggestions on how to write /ws-publish as a separate route?
Following is my code
main function
func (wsc *WSConnection) ReadWebSocket() {
	for {
		var json_data []byte
		var message WSMessage
		// Receive data from WebSocket
		err := websocket.Message.Receive(wsc.socket, &json_data)
		if err != nil {
			return
		}
		// Parse JSON data
		err = json.Unmarshal(json_data, &message)
		if err != nil {
			return
		}
		switch message.Action {
		case "SUBSCRIBE":
			wsc.subscribe.Subscribe(message.Channel)
		case "UNSUBSCRIBE":
			wsc.subscribe.Unsubscribe(message.Channel)
		case "PUBLISH":
			wsc.publish.Conn.Do("PUBLISH", message.Channel, message.Data)
		}
	}
}

func handleWSConnection(socket *websocket.Conn) {
	wsc := &WSConnection{socket: socket}
	defer wsc.Uninitialize()
	wsc.Initialize()
	go wsc.ProxyRedisSubscribe()
	wsc.ReadWebSocket()
}

func serveWeb() {
	http.Handle("/ws", websocket.Handler(handleWSConnection)) // I need to call this route as a function
	if err := http.ListenAndServe(":9000", nil); err != nil {
		log.Fatal("ListenAndServe:", err)
	}
}
I have done it the following way, but I don't know whether it is the proper way to do this:
http.HandleFunc("/publish", publishHandler)
func publishHandler(conn http.ResponseWriter, req *http.Request) {
log.Println("PUBLISH HANDLER")
wsHandler := websocket.Handler(func(ws *websocket.Conn) {
handleWSConnection(ws)
})
wsHandler.ServeHTTP(conn, req)
}

How to handle HTTP timeout errors and accessing status codes in golang

I have some code (see below) written in Go which is supposed to "fan out" HTTP requests, and then collate/aggregate the details back.
I'm new to Go, so expect me to be a noob and my knowledge to be limited.
The output of the program is currently something like:
{
	"Status": "success",
	"Components": [
		{"Id": "foo", "Status": 200, "Body": "..."},
		{"Id": "bar", "Status": 200, "Body": "..."},
		{"Id": "baz", "Status": 404, "Body": "..."},
		...
	]
}
There is a local server running that is purposely slow (it sleeps for 5 seconds and then returns a response). But I have other sites listed (see code below) that sometimes trigger an error as well (if they error, that's fine).
The problem I have at the moment is how best to handle these errors, specifically the timeout-related ones: I'm not sure how to recognise whether a failure is a timeout or some other error.
At the moment I get a blanket error back all the time:
Get http://localhost:8080/pugs: read tcp 127.0.0.1:8080: use of closed network connection
Where http://localhost:8080/pugs will generally be the URL that failed (hopefully by timeout!). But as you can see from the code below, I'm not sure how to determine whether the error is related to a timeout, nor how to access the status code of the response (I'm currently just blanket-setting it to 404, but obviously that's not right: if the server errored I'd expect something like a 500 status code, and I'd like to reflect that in the aggregated response I send back).
The full code can be seen below. Any help appreciated.
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
	"sync"
	"time"
)

type Component struct {
	Id  string `json:"id"`
	Url string `json:"url"`
}

type ComponentsList struct {
	Components []Component `json:"components"`
}

type ComponentResponse struct {
	Id     string
	Status int
	Body   string
}

type Result struct {
	Status     string
	Components []ComponentResponse
}

var overallStatus string = "success"

func main() {
	var cr []ComponentResponse
	var c ComponentsList

	b := []byte(`{"components":[{"id":"local","url":"http://localhost:8080/pugs"},{"id":"google","url":"http://google.com/"},{"id":"integralist","url":"http://integralist.co.uk/"},{"id":"sloooow","url":"http://stevesouders.com/cuzillion/?c0=hj1hfff30_5_f&t=1439194716962"}]}`)
	json.Unmarshal(b, &c)

	var wg sync.WaitGroup

	timeout := time.Duration(1 * time.Second)
	client := http.Client{
		Timeout: timeout,
	}

	for i, v := range c.Components {
		wg.Add(1)
		go func(i int, v Component) {
			defer wg.Done()
			resp, err := client.Get(v.Url)
			if err != nil {
				fmt.Printf("Problem getting the response: %s\n", err)
				cr = append(cr, ComponentResponse{
					v.Id,
					404,
					err.Error(),
				})
			} else {
				defer resp.Body.Close()
				contents, err := ioutil.ReadAll(resp.Body)
				if err != nil {
					fmt.Printf("Problem reading the body: %s\n", err)
				}
				cr = append(cr, ComponentResponse{
					v.Id,
					resp.StatusCode,
					string(contents),
				})
			}
		}(i, v)
	}
	wg.Wait()

	j, err := json.Marshal(Result{overallStatus, cr})
	if err != nil {
		fmt.Printf("Problem converting to JSON: %s\n", err)
		return
	}
	fmt.Println(string(j))
}
If you want to fan out then aggregate results and you want specific timeout behavior the net/http package isn't giving you, then you may want to use goroutines and channels.
I just watched this video today and it will walk you through exactly those scenarios using the concurrency features of Go. Plus, the speaker Rob Pike is quite the authority -- he explains it much better than I could.
https://www.youtube.com/watch?v=f6kdp27TYZs
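As a rough sketch of that pattern applied to the question's own types (not the video's exact code): each goroutine sends its result on a channel and a single loop collects them, which also removes the unsynchronised append to cr:

results := make(chan ComponentResponse)

for _, v := range c.Components {
	go func(v Component) {
		resp, err := client.Get(v.Url)
		if err != nil {
			results <- ComponentResponse{v.Id, 0, err.Error()}
			return
		}
		defer resp.Body.Close()
		contents, _ := ioutil.ReadAll(resp.Body)
		results <- ComponentResponse{v.Id, resp.StatusCode, string(contents)}
	}(v)
}

var cr []ComponentResponse
for range c.Components {
	cr = append(cr, <-results) // single receiver: no locking or WaitGroup needed
}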
I am adding this for completeness, as the correct answer was provided by Dave C in the comments on the accepted answer.
We can try to assert the error to a net.Error and check whether it is a timeout:
resp, err := client.Get(url)
if err != nil {
	// if there is an error, check if it's a timeout error
	if e, ok := err.(net.Error); ok && e.Timeout() {
		// handle timeout
		return
	}
	// otherwise handle other types of error
}
The Go 1.5 release solved this issue by being more specific about the type of error it returns.
So if you look at this example https://github.com/Integralist/Go-Requester/blob/master/requester.go#L38 you'll see that I'm able to apply a regex pattern to the error message to decipher whether the error was indeed a timeout or not:
status := checkError(err.Error())

func checkError(msg string) int {
	timeout, _ := regexp.MatchString("Timeout", msg)
	if timeout {
		return 408
	}
	return 500
}
