Redigo multi requests - go

I have previously been using this:
data, err := redis.Bytes(c.Do("GET", key))
to make sure that the data returned is a slice of bytes.
However, I now need to add an extra command to the Redis request so I have something like this:
c.Send("MULTI")
c.Send("GET", key)
c.Send("EXPIRE", key)
r, err := c.Do("EXEC")
but now I can't seem to make the GET command return a slice of bytes. I've tried adding redis.Bytes like below but no luck.
c.Send("MULTI")
redis.Bytes(c.Send("GET", key))
c.Send("EXPIRE", key)
r, err := c.Do("EXEC")

In redis, the EXEC command returns an array containing the results of all the commands in the transaction.
redigo provides a Values function, which converts an array command reply to a []interface{}.
c.Send("MULTI")
c.Send("GET", key)
c.Send("EXPIRE", key)
r, err := redis.Values(c.Do("EXEC"))
r[0] now has the reply from your GET command as an interface{}, so you'll need to do a type assertion to get the slice of bytes you're expecting:
data := r[0].([]byte)
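If you'd rather avoid a panic on an unexpected reply type, note that redigo's helper functions accept a (reply, error) pair, so they can also be applied to individual elements of the EXEC reply. A minimal sketch of that variant (same key as above, with an illustrative TTL of 60 seconds):
c.Send("MULTI")
c.Send("GET", key)
c.Send("EXPIRE", key, 60)
r, err := redis.Values(c.Do("EXEC"))
if err != nil {
    // handle the error
}
// redis.Bytes converts a single reply and reports a type
// mismatch as an error instead of panicking like a bare
// type assertion would
data, err := redis.Bytes(r[0], nil)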
References
func Values: https://godoc.org/github.com/garyburd/redigo/redis#Values
Type assertions: https://golang.org/ref/spec#Type_assertions

MULTI is used to send several commands in an atomic way to Redis, by creating a transaction. This is not a pipeline at all.
None of the commands will actually be executed before the EXEC call, so it is impossible to obtain the value from GET while inside a transaction.
From the docs:
When a Redis connection is in the context of a MULTI request, all commands will reply with the string QUEUED (sent as a Status Reply from the point of view of the Redis protocol). A queued command is simply scheduled for execution when EXEC is called.
In redigo pipelining is done in a different way:
http://godoc.org/github.com/garyburd/redigo/redis#hdr-Pipelining
What you want to do is something like this (untested):
c.Send("GET", key)
c.Send("EXPIRE", key)
c.Flush()
v := redis.Bytes(c.Receive()) // reply from GET
_, err = c.Receive() // reply from EXPIRE
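For completeness, here is the same pipeline with the errors checked (still untested; it assumes a connection dialed with redis.Dial and a placeholder address):
c, err := redis.Dial("tcp", "localhost:6379")
if err != nil {
    log.Fatal(err)
}
defer c.Close()

// queue both commands, then flush them to the server in one write
c.Send("GET", key)
c.Send("EXPIRE", key, 60)
if err := c.Flush(); err != nil {
    log.Fatal(err)
}

// read the replies back in the order the commands were sent
v, err := redis.Bytes(c.Receive()) // reply from GET
if err != nil {
    log.Fatal(err)
}
if _, err := c.Receive(); err != nil { // reply from EXPIRE
    log.Fatal(err)
}
_ = v // use the GET payload here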

Related

Redis transaction pipeline is not carrying out all transactions and instead returning QUEUED commands

Go redis transaction pipeline is not carrying out all transactions and is instead returning QUEUED commands when pipe.Exec() is called.
(Redis client used: "github.com/go-redis/redis/v7" imported as red)
When the length of keys > 9, the cmders value returned from pipe.Exec() is a slice whose first element is of type *redis.StatusCmd, with the val "QUEUED" and the data "multi". The rest of the elements are of type *redis.StringStringMapCmd, as expected.
pipe := client.TxPipeline() // client is a *red.Client
for _, key := range keys {
    pipe.HGetAll(key)
}
cmders, err := pipe.Exec()
if err != nil {
    return err
}
Additional information / things I tried:
- All errors are nil.
- When len(keys) = 10, the last command in the pipeline is not executed and is instead QUEUED; the cmders array contained 1 *redis.StatusCmd and 9 *redis.StringStringMapCmd structs corresponding to the first 9 keys.
- Increasing the number of keys results in more commands being missed. For example, when the length of keys was 70, there were no *redis.StringStringMapCmd structs returned for the last 7 commands.
- The same problem was found to occur when using HDEL, HGET and HSET.
- Adding a time.Sleep() before and after the Exec did not change anything either.
- Finally, I tried rearranging the keys array and found the same problem: the last command added to the pipeline was QUEUED.
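For reference, this is roughly how the replies would normally be read back from such a pipeline (an untested sketch; it assumes a *red.Client value named client, following the import alias stated above):
pipe := client.TxPipeline()
for _, key := range keys {
    pipe.HGetAll(key)
}
cmders, err := pipe.Exec()
if err != nil {
    return err
}
for _, cmder := range cmders {
    // use the comma-ok form so an unexpected reply type (such as the
    // stray *red.StatusCmd described above) is skipped instead of panicking
    cmd, ok := cmder.(*red.StringStringMapCmd)
    if !ok {
        continue
    }
    fields, err := cmd.Result()
    if err != nil {
        return err
    }
    _ = fields // process the map[string]string for this key here
}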

Logging and decoding response body from golang net/http library

I'm writing a webhook in Go that parses a JSON payload. I'm attempting to log the raw payload and then decode it immediately after, but it fails when I try. If I perform the actions separately, they both work fine independently.
Can someone explain why I can't use ioutil.ReadAll and json.NewDecoder together?
func webhook(w http.ResponseWriter, r *http.Request) {
    body, _ := ioutil.ReadAll(r.Body)
    log.Printf("incoming message - %s", body)

    var p payload
    decoder := json.NewDecoder(r.Body)
    err := decoder.Decode(&p)
    if err != nil {
        // Returns EOF
        log.Printf("invalid payload - %s", err)
    }
    defer r.Body.Close()
}
Can someone explain why I can't use ioutil.ReadAll and json.NewDecoder together?
The request body is an io.ReadCloser that reads bytes, more or less, directly from a network connection. The contents of the Body aren't stored in memory by default. That's why, after the first time you've read the Body, the next time you try to read it you'll get EOF.
So if you need to process the request Body more than once, you yourself will have to store the contents into memory, which is what you are already doing with:
body, _ := ioutil.ReadAll(r.Body)
You can then reuse body as many times as you like, and since you have the Body contents at your disposal as a []byte value, you can use json.Unmarshal instead of json.NewDecoder(...).Decode.
This is unrelated to your question, but please do not ignore the error returned from ioutil.ReadAll.
Also you can drop the defer r.Body.Close() line, because you do not have to close the request body in your server handlers. (emphasis mine)
For server requests the Request Body is always non-nil but will return EOF immediately when no body is present. The Server will close the request body. The ServeHTTP Handler does not need to.
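Putting that advice together, a minimal sketch of the handler with the body read into memory once, logged, and then unmarshalled (same payload type as in the question):
func webhook(w http.ResponseWriter, r *http.Request) {
    body, err := ioutil.ReadAll(r.Body)
    if err != nil {
        http.Error(w, "cannot read body", http.StatusBadRequest)
        return
    }
    log.Printf("incoming message - %s", body)

    var p payload
    // the body is already in memory, so unmarshal it directly
    // instead of trying to decode from r.Body a second time
    if err := json.Unmarshal(body, &p); err != nil {
        log.Printf("invalid payload - %s", err)
        http.Error(w, "invalid payload", http.StatusBadRequest)
        return
    }
    // no defer r.Body.Close() needed: the server closes the request body
}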
r.Body is meant to be read exactly once.
When you use the ioutil.ReadAll function you do read all the data from the body. That's why the decoder, which also relies on r.Body, in fact gets nothing to decode.
Minor additional point about json.Decoder and json.Unmarshal: at first glance it looks like the only difference between the two is just that the former operates on a stream and the latter on a []byte, but they actually have different semantics.
json.Unmarshal will return an error if the data contains more than one JSON object. So, for example, it will parse {}, but it will not parse {}{}.
json.Decoder parses one complete object per call to Decode, so if you give it {}{}, it will parse those two objects and then the third call will return io.EOF and its More method will return false.
In a normal HTTP body, you probably only want a single object, so you'd want to use Unmarshal if you're not worried about loading all the data into memory at once. You can also use Decoder and manually check that there is only one object if you care to do so.
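A short illustration of that difference, using the {}{} stream from above:
dec := json.NewDecoder(strings.NewReader(`{}{}`))
for {
    var v interface{}
    if err := dec.Decode(&v); err == io.EOF {
        break // the third call returns io.EOF, as described above
    } else if err != nil {
        log.Fatal(err)
    }
}
// json.Unmarshal([]byte(`{}{}`), &v) would instead return an error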

How to read data from serial and process it when a specific delimiter is found

I have a device which continuously sends data over a serial port.
Now I want to read this data and process it.
The data ends with the delimiter "!", and as soon as this delimiter appears I want to pause reading to process the data that has already been received.
How can I do that? Is there any documentation or examples that I can read or follow?
For reading data from a serial port you can find a few packages on GitHub, e.g. tarm/serial.
You can use this package to read data from your serial port. In order to read until a specific delimiter is reached, you can use something like:
config := &serial.Config{Name: "/dev/ttyUSB", Baud: 9600}
s, err := serial.OpenPort(config)
if err != nil {
    // stops execution
    log.Fatal(err)
}

// golang reader interface
r := bufio.NewReader(s)

// reads until the delimiter '!' (0x21) is reached
data, err := r.ReadBytes('\x21')
if err != nil {
    // stops execution
    log.Fatal(err)
}

// or use fmt.Printf() with the right verb
// https://golang.org/pkg/fmt/#hdr-Printing
fmt.Println(data)
See also: Reading from serial port with while-loop
bufio's reader unfortunately did not work for me - it kept crashing after a while. This was a no-go since I needed a stable solution for a low-performance system.
My solution was to implement this suggestion with a small tweak. As noted, if you don't use bufio, the buffer gets overwritten every time you call
n, err := s.Read(buf0)
To fix this, append the bytes from buf0 to a second buffer, buf1:
if n > 0 {
    buf1 = append(buf1, buf0[:n]...)
}
Then parse the bytes stored in buf1. If you find the subset you're looking for, process it further (a sketch follows the notes below).
- make sure to clear the buffers in a suitable manner
- make sure to limit the frequency the loop is running with (e.g. time.Sleep)
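A rough sketch of that loop (untested; it assumes the same tarm/serial port s as in the first answer and '!' as the delimiter):
buf0 := make([]byte, 128)
var buf1 []byte
for {
    n, err := s.Read(buf0)
    if err != nil {
        log.Fatal(err)
    }
    if n > 0 {
        buf1 = append(buf1, buf0[:n]...)
    }
    // process every complete message ending in the delimiter
    for {
        i := bytes.IndexByte(buf1, '!')
        if i < 0 {
            break
        }
        message := buf1[:i]
        buf1 = buf1[i+1:] // clear the processed part of the buffer
        fmt.Printf("%s\n", message)
    }
    // limit the loop frequency, as suggested above
    time.Sleep(10 * time.Millisecond)
}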

is delete guaranteed to delete from a hash in golang?

I have a hash like this:
var TransfersInFlight map[string]string = make(map[string]string)
And before I send a file I make a key for it, store it, send the file, and then delete the key:
timeKey := fmt.Sprintf("%v",time.Now().UnixNano())
TransfersInFlight[timeKey] = filename
total, err := sendTheFile(filename)
delete(TransfersInFlight, timeKey)
i.e. during the time it takes to send the file, there is a key in the hash with a timestamp pointing to the filename.
The func sendTheFile always either works or returns an err, but it never panics with a stack trace and crashes the whole program, so the line:
delete(TransfersInFlight, timeKey)
should be called 100% of the time. And yet, I sometimes find cases where it's like this line was never called and the file is stuck in TransfersInFlight forever. How is this possible?
Maps are not safe for concurrent access. I would handle this either by using a mutex to moderate map access, or by having a goroutine read from a channel of "op" structs (or from an "add" channel and a "delete" channel).
You're probably safe having multiple read-only accesses concurrently, but once you have writes in the mix, you really want to ensure you only have one access at a time.
If you are set on using a goroutine to manage the count, one way would be something like:
import "sync/atomic"
var TransferChan chan int32
var TransfersInFlight int32
func TransferManager() {
TransfersInFlight = 0
for delta := range TransferChan {
// You're *probably* safe just using +=, but, you know...
atomic.AddInt32(&TransfersInFlight, delta)
}
}
That way, you only need to do go TransferManager() and then pass your increments and decrements over the TransferChan channel.
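For comparison, a minimal sketch of the mutex approach applied to the original map (the helper names trackTransfer and untrackTransfer are just for illustration):
var (
    mu                sync.Mutex
    TransfersInFlight = make(map[string]string)
)

func trackTransfer(timeKey, filename string) {
    mu.Lock()
    TransfersInFlight[timeKey] = filename
    mu.Unlock()
}

func untrackTransfer(timeKey string) {
    mu.Lock()
    delete(TransfersInFlight, timeKey)
    mu.Unlock()
}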

Modification of bufio in golang

I am reading a big file and sending it by HTTP POST.
I use bufio.
Now I want to modify one of the first lines of this file. How can I do that?
f := bufio.NewReaderSize(os.Stdin, 65536)
bufPart, err := f.Peek(65536)
// how to modify bufPart(f)?
...
req, err := http.NewRequest("POST", url, f)
Two ideas for how to do it:
- Create your own Reader implementation that wraps a bufio.Reader and implements the replacing logic (you will have to count the number of bytes read).
- Call io.Pipe, pass the returned PipeReader to NewRequest, and start a separate goroutine that reads data from the file, modifies it, and writes it to the returned PipeWriter.
Hope this makes sense.
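A sketch of the second idea (untested; replaceFirstLine is a hypothetical function standing in for whatever modification you need):
pr, pw := io.Pipe()

go func() {
    br := bufio.NewReaderSize(os.Stdin, 65536)
    // read just the first line, modify it, and write it to the pipe
    line, err := br.ReadString('\n')
    if err != nil {
        pw.CloseWithError(err)
        return
    }
    if _, err := io.WriteString(pw, replaceFirstLine(line)); err != nil {
        pw.CloseWithError(err)
        return
    }
    // stream the rest of the file through unchanged
    if _, err := io.Copy(pw, br); err != nil {
        pw.CloseWithError(err)
        return
    }
    pw.Close()
}()

req, err := http.NewRequest("POST", url, pr)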
