How to pass a byte slice between goroutines using channels - go

I have a function that reads data from a source and sends it to a destination. Source and destination could be anything; let's say for this example the source is a database (MySQL, PostgreSQL, ...) and the destination is a distributed queue (ActiveMQ, Kafka, ...). Messages are stored as bytes.
This is the main function. The idea is that it spins up a new goroutine and waits for messages to be returned for further processing.
type Message []byte

func (p *ProcessorService) Continue(dictId int) {
    level.Info(p.logger).Log("process", "message", "dictId", dictId)
    retrieved := make(chan Message)
    go func() {
        err := p.src.Read(retrieved, strconv.Itoa(dictId))
        if err != nil {
            level.Error(p.logger).Log("process", "read", "err", err)
        }
    }()
    for r := range retrieved {
        go func(message Message) {
            level.Info(p.logger).Log("message", message)
            if len(message) > 0 {
                if err := p.dst.sendToQ(message); err != nil {
                    level.Error(p.logger).Log("failed", "persist", "err", err)
                }
            } else {
                level.Error(p.logger).Log("failed", "empty message")
            }
        }(r)
    }
}
And this is the read function itself:
func (s *Storage) Read(out chan<- Message, opt ...string) error {
    // Basic database read operations are skipped here, but the idea is
    // simple: read data from the table / file row by row and send each
    // row down the channel.
    for _, value := range dataFromDB {
        message, err := value.row()
        if err == nil {
            out <- message
        } else {
            errorf("Unable to get data %v", err)
            out <- make([]byte, 0)
        }
    }
    close(out)
    return nil
}
As you can see, communication is done via the out chan<- Message channel.
My concern is in the Continue function, specifically here:
for r := range retrieved {
    go func(message Message) {
        // basically here message and r point to the same underlying array
    }(r)
}
When data is received, the variable r is of type Message (a byte slice). It is then passed to go func(message Message). Everything is passed by value in Go, so r will be passed as a copy to the anonymous function; however, the copy still holds a pointer to the same underlying slice data. I am curious whether this could be a problem if, during the p.dst.sendToQ(message) execution, the read function sends something to the out channel, causing the slice's underlying data to be overwritten with new information. Should I copy the byte slice r into a new byte slice before passing it to the anonymous function, so the underlying arrays are different? I tested it, but couldn't actually trigger this behavior. Not sure if I am being paranoid or really have to worry about it.
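For background, copying a slice copies only its header (pointer, length, capacity), so both copies alias the same backing array. A quick standalone demo of that aliasing:

package main

import "fmt"

func main() {
    a := []byte("hello")
    b := a // copies the slice header only; same backing array
    b[0] = 'H'
    fmt.Println(string(a)) // prints "Hello" — the write through b is visible via a
}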

The message in p.dst.sendToQ(message) is the same slice as value.row when you get data from the DB. So, as long as each value.row has a different underlying array, you should be good. I suggest you check the source and make sure it does not use a common byte array that it keeps rewriting into.
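If you cannot verify the source, a defensive copy before handing the slice to the goroutine is cheap insurance. A minimal sketch of what that could look like in Continue, using the names from the question:

for r := range retrieved {
    msg := make(Message, len(r))
    copy(msg, r) // msg now has its own backing array
    go func(message Message) {
        // safe to use even if the producer reuses r's buffer
        if err := p.dst.sendToQ(message); err != nil {
            level.Error(p.logger).Log("err", err)
        }
    }(msg)
}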

Related

Why is data being pushed into the channel but never read from the receiver goroutine?

I am building a daemon and I have two services that will be sending data to and from each other. Service A is what produces the data, and Service B is a data buffer service, essentially a queue. From the main.go file, Service B is instantiated and started. The Start() method runs the buffer() function as a goroutine, because buffer waits for data to be passed on a channel and I don't want the main process to halt waiting for it. Then Service A is instantiated and started. It is then also "registered" with Service B.
I created a method called RegisterWithBufferService for Service A that creates two new channels. It stores those channels as its own attributes and also provides them to Service B.
func (s *ServiceA) RegisterWithBufferService(bufService *data.DataBuffer) error {
    newIncomingChan := make(chan *data.DataFrame, 1)
    newOutgoingChan := make(chan []byte, 1)
    s.IncomingBuffChan = newIncomingChan
    s.OutgoingDataChannels = append(s.OutgoingDataChannels, newOutgoingChan)
    bufService.DataProviders[s.ServiceName()] = data.DataProviderInfo{
        IncomingChan: newOutgoingChan, // our outgoing channel is their incoming
        OutgoingChan: newIncomingChan, // our incoming channel is their outgoing
    }
    s.DataBufferService = bufService
    bufService.NewProvider <- s.ServiceName() // the DataBuffer service listens for new services and creates a new goroutine for buffering
    s.Logger.Info().Msg("Registration completed.")
    return nil
}
Buffer essentially listens for incoming data from Service A, decodes it using Decode(), and then appends it to a slice called buf. If the slice grows longer than bufferPeriod, it sends the first item in the slice on the outgoing channel back to Service A.
func (b *DataBuffer) buffer(bufferPeriod int) {
    for {
        select {
        case newProvider := <-b.NewProvider:
            b.wg.Add(1)
            /*
                newProvider is a string.
                DataProviders is a map; the value it returns is a struct
                containing the Incoming and Outgoing channels for this service.
            */
            p := b.DataProviders[newProvider]
            go func(prov string, in chan []byte, out chan *DataFrame) {
                defer b.wg.Done()
                var buf []*DataFrame
                for {
                    select {
                    case rawData := <-in:
                        tmp := Decode(rawData) // custom decoding function; returns a *DataFrame
                        buf = append(buf, tmp)
                        if len(buf) > bufferPeriod {
                            b.Logger.Info().Msg("Sending decoded data out.")
                            out <- buf[0]
                            buf = buf[1:] // pop
                        }
                    case <-b.Quit:
                        return
                    }
                }
            }(newProvider, p.IncomingChan, p.OutgoingChan)
        case <-b.Quit:
            return
        }
    }
}
Now Service A has a method called record that periodically pushes data to all the channels in its OutgoingDataChannels attribute.
func (s *ServiceA) record() error {
    ...
    if atomic.LoadInt32(&s.Listeners) != 0 {
        s.Logger.Info().Msg("Sending raw data to data buffer")
        for _, outChan := range s.OutgoingDataChannels {
            outChan <- dataBytes // the receiver (Service B) is already listening and this doesn't hang
        }
        s.Logger.Info().Msg("Raw data sent and received") // the logger prints this, so I know it's not hanging
    }
}
The problem is that Service A seems to push the data successfully using record, but Service B never enters the case rawData := <-in: branch in the buffer sub-goroutine. Is this because I have nested goroutines? In case it's not clear: when Service B is started, it calls buffer, but because buffer would otherwise hang, I made the call to it a goroutine. So when Service A calls RegisterWithBufferService, the buffer goroutine creates a goroutine to listen for new data from Service A and push it back to Service A once the buffer is filled. I hope I explained it clearly.
EDIT 1
I've made a minimal, reproducible example.
package main

import (
    "fmt"
    "sync"
    "sync/atomic"
    "time"
)

var (
    defaultBufferingPeriod int   = 3
    DefaultPollingInterval int64 = 10
)

type DataObject struct {
    Data string
}

type DataProvider interface {
    RegisterWithBufferService(*DataBuffer) error
    ServiceName() string
}

type DataProviderInfo struct {
    IncomingChan chan *DataObject
    OutgoingChan chan *DataObject
}

type DataBuffer struct {
    Running       int32 // used atomically
    DataProviders map[string]DataProviderInfo
    Quit          chan struct{}
    NewProvider   chan string
    wg            sync.WaitGroup
}

func NewDataBuffer() *DataBuffer {
    var (
        wg sync.WaitGroup
    )
    return &DataBuffer{
        DataProviders: make(map[string]DataProviderInfo),
        Quit:          make(chan struct{}),
        NewProvider:   make(chan string),
        wg:            wg,
    }
}

func (b *DataBuffer) Start() error {
    if ok := atomic.CompareAndSwapInt32(&b.Running, 0, 1); !ok {
        return fmt.Errorf("Could not start Data Buffer Service.")
    }
    go b.buffer(defaultBufferingPeriod)
    return nil
}

func (b *DataBuffer) Stop() error {
    if ok := atomic.CompareAndSwapInt32(&b.Running, 1, 0); !ok {
        return fmt.Errorf("Could not stop Data Buffer Service.")
    }
    for _, p := range b.DataProviders {
        close(p.IncomingChan)
        close(p.OutgoingChan)
    }
    close(b.Quit)
    b.wg.Wait()
    return nil
}

// buffer creates goroutines for each incoming/outgoing data pair and decodes the incoming bytes into outgoing DataFrames
func (b *DataBuffer) buffer(bufferPeriod int) {
    for {
        select {
        case newProvider := <-b.NewProvider:
            fmt.Println("Received new Data provider.")
            if _, ok := b.DataProviders[newProvider]; ok {
                b.wg.Add(1)
                p := b.DataProviders[newProvider]
                go func(prov string, in chan *DataObject, out chan *DataObject) {
                    defer b.wg.Done()
                    var (
                        buf []*DataObject
                    )
                    fmt.Printf("Waiting for data from: %s\n", prov)
                    for {
                        select {
                        case inData := <-in:
                            fmt.Printf("Received data from: %s\n", prov)
                            buf = append(buf, inData)
                            if len(buf) > bufferPeriod {
                                fmt.Printf("Queue is filled, sending data back to %s\n", prov)
                                out <- buf[0]
                                fmt.Println("Data Sent")
                                buf = buf[1:] // pop
                            }
                        case <-b.Quit:
                            return
                        }
                    }
                }(newProvider, p.IncomingChan, p.OutgoingChan)
            }
        case <-b.Quit:
            return
        }
    }
}

type ServiceA struct {
    Active            int32 // atomic
    Stopping          int32 // atomic
    Recording         int32 // atomic
    Listeners         int32 // atomic
    name              string
    QuitChan          chan struct{}
    IncomingBuffChan  chan *DataObject
    OutgoingBuffChans []chan *DataObject
    DataBufferService *DataBuffer
}

// A compile time check to ensure ServiceA fully implements the DataProvider interface
var _ DataProvider = (*ServiceA)(nil)

func NewServiceA() (*ServiceA, error) {
    var newSliceOutChans []chan *DataObject
    return &ServiceA{
        QuitChan:          make(chan struct{}),
        OutgoingBuffChans: newSliceOutChans,
        name:              "SERVICEA",
    }, nil
}

// Start starts the service. Returns an error if any issues occur
func (s *ServiceA) Start() error {
    atomic.StoreInt32(&s.Active, 1)
    return nil
}

// Stop stops the service. Returns an error if any issues occur
func (s *ServiceA) Stop() error {
    atomic.StoreInt32(&s.Stopping, 1)
    close(s.QuitChan)
    return nil
}

func (s *ServiceA) StartRecording(pol_int int64) error {
    if ok := atomic.CompareAndSwapInt32(&s.Recording, 0, 1); !ok {
        return fmt.Errorf("Could not start recording. Data recording already started")
    }
    ticker := time.NewTicker(time.Duration(pol_int) * time.Second)
    go func() {
        for {
            select {
            case <-ticker.C:
                fmt.Println("Time to record...")
                err := s.record()
                if err != nil {
                    return
                }
            case <-s.QuitChan:
                ticker.Stop()
                return
            }
        }
    }()
    return nil
}

func (s *ServiceA) record() error {
    current_time := time.Now()
    ct := fmt.Sprintf("%02d-%02d-%d", current_time.Day(), current_time.Month(), current_time.Year())
    dataObject := &DataObject{
        Data: ct,
    }
    if atomic.LoadInt32(&s.Listeners) != 0 {
        fmt.Println("Sending data to Data buffer...")
        for _, outChan := range s.OutgoingBuffChans {
            outChan <- dataObject // the receivers should already be listening
        }
        fmt.Println("Data sent.")
    }
    return nil
}

// RegisterWithBufferService satisfies the DataProvider interface. It provides the bufService with new incoming and outgoing channels along with a polling interval
func (s ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
    if _, ok := bufService.DataProviders[s.ServiceName()]; ok {
        return fmt.Errorf("%v data provider already registered with Data Buffer.", s.ServiceName())
    }
    newIncomingChan := make(chan *DataObject, 1)
    newOutgoingChan := make(chan *DataObject, 1)
    s.IncomingBuffChan = newIncomingChan
    s.OutgoingBuffChans = append(s.OutgoingBuffChans, newOutgoingChan)
    bufService.DataProviders[s.ServiceName()] = DataProviderInfo{
        IncomingChan: newOutgoingChan, // our outgoing channel is their incoming
        OutgoingChan: newIncomingChan, // our incoming channel is their outgoing
    }
    s.DataBufferService = bufService
    bufService.NewProvider <- s.ServiceName() // the DataBuffer service listens for new services and creates a new goroutine for buffering
    return nil
}

// ServiceName satisfies the DataProvider interface. It returns the name of the service.
func (s ServiceA) ServiceName() string {
    return s.name
}

func main() {
    var BufferedServices []DataProvider
    fmt.Println("Instantiating and Starting Data Buffer Service...")
    bufService := NewDataBuffer()
    err := bufService.Start()
    if err != nil {
        panic(fmt.Sprintf("%v", err))
    }
    defer bufService.Stop()
    fmt.Println("Data Buffer Service successfully started.")
    fmt.Println("Instantiating and Starting Service A...")
    serviceA, err := NewServiceA()
    if err != nil {
        panic(fmt.Sprintf("%v", err))
    }
    BufferedServices = append(BufferedServices, *serviceA)
    err = serviceA.Start()
    if err != nil {
        panic(fmt.Sprintf("%v", err))
    }
    defer serviceA.Stop()
    fmt.Println("Service A successfully started.")
    fmt.Println("Registering services with Data Buffer...")
    for _, s := range BufferedServices {
        _ = s.RegisterWithBufferService(bufService) // ignoring error msgs for base case
    }
    fmt.Println("Registration complete.")
    fmt.Println("Beginning recording...")
    _ = atomic.AddInt32(&serviceA.Listeners, 1)
    err = serviceA.StartRecording(DefaultPollingInterval)
    if err != nil {
        panic(fmt.Sprintf("%v", err))
    }
    for {
        select {
        case RTD := <-serviceA.IncomingBuffChan:
            fmt.Println(RTD)
        case <-serviceA.QuitChan:
            atomic.StoreInt32(&serviceA.Listeners, 0)
            bufService.Quit <- struct{}{}
        }
    }
}
Running on Go 1.17. When running the example, it should print the following every 10 seconds:
Time to record...
Sending data to Data buffer...
Data sent.
But the data buffer never enters the inData := <-in case.
To diagnose this I changed fmt.Println("Sending data to Data buffer...") to fmt.Println("Sending data to Data buffer...", s.OutgoingBuffChans) and the output was:
Time to record...
Sending data to Data buffer... []
So you are not actually sending the data to any channels. The reason for this is:
func (s ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
As the receiver is not a pointer, when you do s.OutgoingBuffChans = append(s.OutgoingBuffChans, newOutgoingChan) you are changing OutgoingBuffChans in a copy of the ServiceA, which is discarded when the function exits. To fix this, change:
func (s ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
to
func (s *ServiceA) RegisterWithBufferService(bufService *DataBuffer) error {
and
BufferedServices = append(BufferedServices, *serviceA)
to
BufferedServices = append(BufferedServices, serviceA)
The amended version outputs:
Time to record...
Sending data to Data buffer... [0xc0000d8060]
Data sent.
Received data from: SERVICEA
Time to record...
Sending data to Data buffer... [0xc0000d8060]
Data sent.
Received data from: SERVICEA
So this resolves the reported issue (I would not be surprised if there are other issues, but hopefully this points you in the right direction). I did notice that the code you originally posted does use a pointer receiver, so that version might have suffered from another issue (but it's difficult to comment on code fragments in a case like this).
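The underlying pitfall is easy to reproduce in isolation. A minimal standalone sketch (the type S here is hypothetical, not from the question):

package main

import "fmt"

type S struct{ items []int }

// value receiver: the append happens on a copy that is thrown away
func (s S) addByValue(v int) { s.items = append(s.items, v) }

// pointer receiver: the append mutates the caller's struct
func (s *S) addByPointer(v int) { s.items = append(s.items, v) }

func main() {
    var s S
    s.addByValue(1)
    fmt.Println(len(s.items)) // 0 — the append was lost
    s.addByPointer(1)
    fmt.Println(len(s.items)) // 1
}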

How to handle multiple goroutines that share the same channel

I've been searching a lot but could not find an answer to my problem yet.
I need to make multiple calls to an external API concurrently, but with different parameters.
Then, for each call, I need to init a struct for each dataset and process the data I receive from the API call. Bear in mind that I read each line of the incoming request and immediately start sending it to the channel.
The first problem I encountered, which was not obvious at the beginning due to the large quantity of data I'm receiving, is that each goroutine does not receive all the data that goes through the channel. (Which I learned through research: each value sent on a channel is delivered to exactly one receiver, not broadcast to all of them.) So what I need is a way of requeuing/redirecting that data to the correct goroutine.
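A small standalone demo of that distribution behavior, for anyone unsure why data "goes missing" from the other goroutines:

package main

import (
    "fmt"
    "sync"
)

func main() {
    ch := make(chan int)
    var wg sync.WaitGroup
    for w := 1; w <= 2; w++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for v := range ch {
                fmt.Printf("worker %d got %d\n", id, v) // each value reaches one worker only
            }
        }(w)
    }
    for i := 0; i < 4; i++ {
        ch <- i
    }
    close(ch)
    wg.Wait()
}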
The function that sends the streamed response from a single dataset.
(I've cut useless parts of code that are out of context)
func (api *API) RequestData(ctx context.Context, c chan DWeatherResponse, dataset string, wg *sync.WaitGroup) error {
    defer wg.Done() // presumably among the parts cut for brevity
    for {
        line, err := reader.ReadBytes('\n')
        s := string(line)
        if err != nil {
            log.Printf("End of %s", dataset)
            return err
        }
        data, err := extractDataFromStreamLine(s, dataset)
        if err != nil {
            continue
        }
        c <- *data
    }
}
The function that will process the incoming data
func (s *StrikeStruct) Process(ch, requeue chan dweather.DWeatherResponse) {
    for {
        data, more := <-ch
        if !more {
            break
        }
        // data contains {dataset string, value float64, date time.Time}.
        // The s.Parameter needs to match the dataset.
        // IMPORTANT PART: check whether the received data belongs to this
        // struct's dataset. If not, I want to send it to another goroutine
        // until it gets to the correct one. There will be a max of 4
        // datasets, but still, this may not be the best approach.
        if !api.GetDataset(s.Parameter, data.Dataset) {
            requeue <- data
            continue
        }
        // Do stuff with the data from this point
    }
}
Now on my own API endpoint I have the following:
ch := make(chan dweather.DWeatherResponse, 2)
requeue := make(chan dweather.DWeatherResponse)
final := make(chan strike.StrikePerYearResponse)
var wg sync.WaitGroup
for _, s := range args.Parameters.Strikes {
strike := strike.StrikePerYear{
Parameter: strike.Parameter(s.Dataset),
StrikeValue: s.Value,
}
// I receive and process the data in here
go strike.ProcessStrikePerYear(ch, requeue, final, string(s.Dataset))
}
go func() {
for {
data, _ := <-requeue
ch <- data
}
}()
// Creates a goroutine for each dataset
for _, dataset := range api.Params.Dataset {
wg.Add(1)
go api.RequestData(ctx, ch, dataset, &wg)
}
wg.Wait()
close(ch)
//Once the data is all processed it is all appended
var strikes []strike.StrikePerYearResponse
for range args.Fetch.Datasets {
strikes = append(strikes, <-final)
}
return strikes
The issue with this code is that as soon as I start receiving data from more than one endpoint, the requeue channel blocks and nothing more happens. If I remove that requeue logic, data is lost if it does not land on the correct goroutine.
My two questions are:
Why is the requeue blocking if it has a goroutine always ready to receive?
Should I take a different approach on how I'm processing the incoming data?
This is not a good way to solve your problem; you should change your approach. I suggest an implementation like the one below:
package main

import (
    "fmt"
    "sync"
)

// answer for https://stackoverflow.com/questions/68454226/how-to-handle-multiple-goroutines-that-share-the-same-channel

var (
    finalResult = make(chan string)
)

// IData is used by the message dispatcher; every struct must implement its methods
type IData interface {
    IsThisForMe() bool
    Process(*sync.WaitGroup)
}

// MainData can be your main struct, like StrikePerYear
type MainData struct {
    // add any props
    Id   int
    Name string
}

type DataTyp1 struct {
    MainData *MainData
}

func (d DataTyp1) IsThisForMe() bool {
    // you can check your condition on the incoming data here
    if d.MainData.Id == 2 {
        return true
    }
    return false
}

func (d DataTyp1) Process(wg *sync.WaitGroup) {
    d.MainData.Name = "processed by DataTyp1"
    // send the result to the final channel; change this as you see fit
    finalResult <- d.MainData.Name
    wg.Done()
}

type DataTyp2 struct {
    MainData *MainData
}

func (d DataTyp2) IsThisForMe() bool {
    // you can check your condition on the incoming data here
    if d.MainData.Id == 3 {
        return true
    }
    return false
}

func (d DataTyp2) Process(wg *sync.WaitGroup) {
    d.MainData.Name = "processed by DataTyp2"
    // send the result to the final channel; change this as you see fit
    finalResult <- d.MainData.Name
    wg.Done()
}

// dispatcher runs a new goroutine for each request.
// You can implement a worker pool to prevent running too many goroutines.
func dispatcher(incomingData *MainData, wg *sync.WaitGroup) {
    // based on your requirements, you may remove this goroutine
    go func() {
        var p IData
        p = DataTyp1{incomingData}
        if p.IsThisForMe() {
            go p.Process(wg)
            return
        }
        p = DataTyp2{incomingData}
        if p.IsThisForMe() {
            go p.Process(wg)
            return
        }
    }()
}

func main() {
    dummyDataArray := []MainData{
        {Id: 2, Name: "this data #2"},
        {Id: 3, Name: "this data #3"},
    }
    wg := sync.WaitGroup{}
    for i := range dummyDataArray {
        wg.Add(1)
        dispatcher(&dummyDataArray[i], &wg)
    }
    result := make([]string, 0)
    done := make(chan struct{})
    // data collector
    go func() {
    loop:
        for {
            select {
            case <-done:
                break loop
            case r := <-finalResult:
                result = append(result, r)
            }
        }
    }()
    wg.Wait()
    done <- struct{}{}
    for _, s := range result {
        fmt.Println(s)
    }
}
Note: this is just meant to open your mind to finding a better solution; it is certainly not production-ready code.
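Another pattern worth considering, sketched below under assumed names (Response, routes): give each dataset its own channel and have the producer route by key, so no value ever lands on the wrong goroutine and nothing needs requeuing.

package main

import "fmt"

type Response struct {
    Dataset string
    Value   float64
}

func main() {
    datasets := []string{"wind", "rain"}
    routes := make(map[string]chan Response)
    done := make(chan struct{})
    for _, d := range datasets {
        ch := make(chan Response, 8)
        routes[d] = ch
        go func(name string, in <-chan Response) {
            for data := range in {
                fmt.Println(name, data.Value) // only this dataset's data arrives here
            }
            done <- struct{}{}
        }(d, ch)
    }
    // the producer dispatches by key instead of sharing one channel
    for i, d := range []string{"wind", "rain", "wind"} {
        routes[d] <- Response{Dataset: d, Value: float64(i)}
    }
    for _, ch := range routes {
        close(ch)
    }
    for range datasets {
        <-done
    }
}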

Reading/writing a set of records in different goroutines - what data structure to use

I have a chat application using two goroutines. I would like to add/remove records to/from the list in one thread and read the same list from the other thread.
As I am pretty new to Go, I am a bit puzzled about which data structure should be used. I thought of slices, but I'm not sure I'm using them the right way:
func listener(addr *net.UDPAddr, clients *[]*net.UDPAddr, messages chan clientMessage) {
    for {
        *clients = append(*clients, otherAddr)
    }
}

func sender(messages chan clientMessage, clients *[]*net.UDPAddr) {
    for {
        message := <-messages
        for _, client := range *clients {
            fmt.Printf("Message %s sent to %s\n", message.message, client.String())
        }
    }
}

func main() {
    var clients []*net.UDPAddr
    go listener(s, &clients, messageCh)
    go sender(messageCh, &clients)
}
Since the listener only needs to write and the sender only needs to read, this is a good example of using channels to communicate. The flow would look like the following:
The listener posts the new client to the channel.
The sender receives the new client and updates its local slice of clients.
It will be a lot cleaner and safer this way, since the listener will not be able to "accidentally" read and the sender will not be able to "accidentally" write. The listener can also close the channel to indicate to the sender that it's done.
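A rough sketch of that flow, reusing fragments from the question (otherAddr, clientMessage, and messageCh are placeholders there too): the sender owns the slice, and the listener only sends new clients over a channel.

func listener(newClients chan<- *net.UDPAddr) {
    for {
        // ... accept a connection, yielding otherAddr ...
        newClients <- otherAddr // post the new client to the sender
    }
}

func sender(newClients <-chan *net.UDPAddr, messages chan clientMessage) {
    var clients []*net.UDPAddr // owned by this goroutine only; no locking needed
    for {
        select {
        case c := <-newClients:
            clients = append(clients, c)
        case message := <-messages:
            for _, client := range clients {
                fmt.Printf("Message %s sent to %s\n", message.message, client.String())
            }
        }
    }
}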
A slice looks OK for this scenario, but a mutex is needed to prevent concurrent reads and writes to the slice.
Let's bundle the slice and mutex together in a struct and add methods for the two operations: add and enumerate.
type clients struct {
    mu     sync.Mutex
    values []*net.UDPAddr
}

// add adds a new client
func (c *clients) add(value *net.UDPAddr) {
    c.mu.Lock()
    c.values = append(c.values, value)
    c.mu.Unlock()
}

// do calls fn for each client
func (c *clients) do(fn func(*net.UDPAddr) error) error {
    c.mu.Lock()
    defer c.mu.Unlock()
    for _, value := range c.values {
        if err := fn(value); err != nil {
            return err
        }
    }
    return nil
}
Use it like this:
func listener(addr *net.UDPAddr, clients *clients, messages chan clientMessage) {
    for {
        clients.add(otherAddr)
    }
}

func sender(messages chan clientMessage, clients *clients) {
    for {
        message := <-messages
        clients.do(func(client *net.UDPAddr) error {
            fmt.Printf("Message %s sent to %s\n", message.message, client.String())
            return nil
        })
    }
}

func main() {
    var clients clients
    go listener(s, &clients, messageCh)
    go sender(messageCh, &clients)
}

Graceful shutdown of gRPC downstream

Using the following protocol buffer definition:
syntax = "proto3";

package pb;

message SimpleRequest {
  int64 number = 1;
}

message SimpleResponse {
  int64 doubled = 1;
}

// All the calls in this service perform the action of doubling a number.
// The streams will continuously send the next double, e.g. 1, 2, 4, 8, 16.
service Test {
  // This RPC streams from the server only.
  rpc Downstream(SimpleRequest) returns (stream SimpleResponse);
}
I'm able to successfully open a stream, and continuously get the next doubled number from the server.
My Go code for running this looks like:
ctxDownstream, cancel := context.WithCancel(ctx)
downstream, err := testClient.Downstream(ctxDownstream, &pb.SimpleRequest{Number: 1})
for {
    responseDownstream, err := downstream.Recv()
    if err != io.EOF {
        println(fmt.Sprintf("downstream response: %d, error: %v", responseDownstream.Doubled, err))
        if responseDownstream.Doubled >= 32 {
            break
        }
    }
}
cancel() // !!This is not a graceful shutdown
println(fmt.Sprintf("%v", downstream.Trailer()))
The problem I'm having is that using a context cancellation means my downstream.Trailer() response is empty. Is there a way to gracefully close this connection from the client side and still receive downstream.Trailer()?
Note: if I close the downstream connection from the server side, my trailers are populated. But I have no way of instructing the server side to close this particular stream. So there must be a way to gracefully close a stream client-side.
Thanks.
As requested, some server code:
func (b *binding) Downstream(req *pb.SimpleRequest, stream pb.Test_DownstreamServer) error {
    request := req
    r := make(chan *pb.SimpleResponse)
    e := make(chan error)
    ticker := time.NewTicker(200 * time.Millisecond)
    defer func() { ticker.Stop(); close(r); close(e) }()
    go func() {
        defer func() { recover() }()
        for {
            select {
            case <-ticker.C:
                response, err := b.Endpoint(stream.Context(), request)
                if err != nil {
                    e <- err
                }
                r <- response
            }
        }
    }()
    for {
        select {
        case err := <-e:
            return err
        case response := <-r:
            if err := stream.Send(response); err != nil {
                return err
            }
            request.Number = response.Doubled
        case <-stream.Context().Done():
            return nil
        }
    }
}
You will still need to populate the trailer with some information. I use the grpc.StreamServerInterceptor to do this.
According to the grpc-go documentation:
Trailer returns the trailer metadata from the server, if there is any.
It must only be called after stream.CloseAndRecv has returned, or
stream.Recv has returned a non-nil error (including io.EOF).
So if you want to read the trailer in the client, try something like this:
ctxDownstream, cancel := context.WithCancel(ctx)
defer cancel()
for {
    ...
    // on error or EOF
    break
}
println(fmt.Sprintf("%v", downstream.Trailer()))
Break out of the infinite loop when there is an error and print the trailer; cancel will be called at the end of the function, as it is deferred.
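As a rough illustration of the grpc.StreamServerInterceptor approach mentioned above (the trailer key "handler-error" is made up for this example), something along these lines attaches trailer metadata to every streaming call:

func trailerInterceptor(srv interface{}, ss grpc.ServerStream,
    info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
    err := handler(srv, ss)
    // attach trailer metadata before the RPC completes
    ss.SetTrailer(metadata.Pairs("handler-error", fmt.Sprintf("%v", err)))
    return err
}

// registered on the server with:
// grpc.NewServer(grpc.StreamInterceptor(trailerInterceptor))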
I can't find a reference that explains it clearly, but this doesn't appear to be possible.
On the wire, grpc-status is followed by the trailer metadata when the call completes normally (i.e. the server ends the call).
When the client cancels the call, neither of these is sent.
It seems that gRPC treats call cancellation as a quick abort of the RPC, not much different from the socket being dropped.
Adding a "cancel message" via request streaming works; the server can pick this up and cancel the stream from its end, and the trailers will still get sent:
message SimpleRequest {
  oneof RequestType {
    int64 number = 1;
    bool cancel = 2;
  }
}
...
rpc Downstream(stream SimpleRequest) returns (stream SimpleResponse);
Although this does add a bit of complication to the code.
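For illustration, a rough sketch of the server side of that approach (assuming the getters GetCancel and GetNumber generated from the oneof above): the server returns normally when it sees the cancel message, so grpc-status and trailers are sent.

func (b *binding) Downstream(stream pb.Test_DownstreamServer) error {
    for {
        req, err := stream.Recv()
        if err != nil {
            return err
        }
        if req.GetCancel() {
            return nil // normal completion: trailers reach the client
        }
        resp := &pb.SimpleResponse{Doubled: req.GetNumber() * 2}
        if err := stream.Send(resp); err != nil {
            return err
        }
    }
}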

Gin: if request body is bound in middleware, c.Request.Body becomes 0

My API server has middleware which gets a token from the request header. If the access is correct, it proceeds to the next function.
But after the request passes through the middleware and on to the next function, c.Request.Body has become 0 (empty).
The middleware:
func getUserIdFromBody(c *gin.Context) int {
    var jsonBody User
    length, _ := strconv.Atoi(c.Request.Header.Get("Content-Length"))
    body := make([]byte, length)
    length, _ = c.Request.Body.Read(body)
    json.Unmarshal(body[:length], &jsonBody)
    return jsonBody.Id
}
func CheckToken() gin.HandlerFunc {
    return func(c *gin.Context) {
        var userId int
        config := model.NewConfig()
        reqToken := c.Request.Header.Get("token")
        _, resBool := c.GetQuery("user_id")
        if resBool == false {
            userId = getUserIdFromBody(c)
        } else {
            userIdStr := c.Query("user_id")
            userId, _ = strconv.Atoi(userIdStr)
        }
        ...
        if ok {
            c.Next()
            return
        }
    }
}
The next function:
func bindOneDay(c *gin.Context) (model.Oneday, error) {
    var oneday model.Oneday
    if err := c.BindJSON(&oneday); err != nil {
        return oneday, err
    }
    return oneday, nil
}
bindOneDay returns an EOF error, probably because c.Request.Body is empty.
I want to get user_id from the request body in the middleware.
How do I do that without c.Request.Body becoming empty?
You can only read the Body from the client once. The data is streaming from the user, and they're not going to send it again. If you want to read it more than once, you're going to have to buffer the whole thing in memory, like so:
bodyCopy := new(bytes.Buffer)
// Read the whole body
_, err := io.Copy(bodyCopy, req.Body)
if err != nil {
    return err
}
bodyData := bodyCopy.Bytes()
// Replace the body with a reader that reads from the buffer
req.Body = ioutil.NopCloser(bytes.NewReader(bodyData))
// Now you can do something with the contents of bodyData,
// like passing it to json.Unmarshal
Note that buffering the entire request into memory means that a user can cause you to allocate unlimited memory -- you should probably either block this at a frontend proxy or use an io.LimitedReader to limit the amount of data you'll buffer.
You also have to read the entire body before Unmarshal can start its work -- this is probably no big deal, but you can do better using io.TeeReader and json.NewDecoder if you're so inclined.
Better, of course, would be to figure out a way to restructure your code so that buffering the body and decoding it twice aren't necessary.
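For completeness, a hedged sketch of the io.TeeReader variant mentioned above: decode the JSON while simultaneously capturing the bytes, then restore the body for later handlers.

bodyCopy := new(bytes.Buffer)
tee := io.TeeReader(req.Body, bodyCopy) // everything the decoder reads is also written to bodyCopy
var jsonBody User
if err := json.NewDecoder(tee).Decode(&jsonBody); err != nil {
    return err
}
// the decoder may stop before EOF; drain the rest into the buffer too
if _, err := io.Copy(bodyCopy, req.Body); err != nil {
    return err
}
req.Body = ioutil.NopCloser(bodyCopy)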
Gin provides a native solution that allows you to get data multiple times from c.Request.Body. The solution is to use c.ShouldBindBodyWith. Per the gin documentation:
ShouldBindBodyWith ... stores the
request body into the context, and reuse when it is called again.
For your particular example, this would be implemented in your middleware like so,
func getUserIdFromBody(c *gin.Context) int {
    var jsonBody User
    if err := c.ShouldBindBodyWith(&jsonBody, binding.JSON); err != nil {
        // return error
    }
    return jsonBody.Id
}
After the middleware, if you want to bind to the body again, just use ctx.ShouldBindBodyWith again. For your particular example, this would be implemented like so
func bindOneDay(c *gin.Context) (model.Oneday, error) {
    var oneday model.Oneday
    if err := c.ShouldBindBodyWith(&oneday, binding.JSON); err != nil {
        return oneday, err
    }
    return oneday, nil
}
The issue we're fighting against is that gin has set up c.Request.Body as an io.ReadCloser object, meaning that it is intended to be read from only once. So, if you access c.Request.Body in your code at all, the bytes will be read (consumed) and c.Request.Body will be empty thereafter. By using ShouldBindBodyWith to access the bytes, gin saves them into another storage mechanism within the context so that they can be reused over and over again.
As a side note, if you've consumed c.Request.Body and later want to access it, you can do so by tapping into gin's storage mechanism via ctx.Get(gin.BodyBytesKey). Here's an example of how you can obtain the gin-stored request body as []byte and then convert it to a string:
var body string
if cb, ok := ctx.Get(gin.BodyBytesKey); ok {
    if cbb, ok := cb.([]byte); ok {
        body = string(cbb)
    }
}
