Batching Operations in BoltDB - Go

I'm currently using db.Update() to update a key-value pair in BoltDB:
err := db.Update(func(tx *bolt.Tx) error {
	b, err := tx.CreateBucket([]byte("widgets"))
	if err != nil {
		return err
	}
	if err := b.Put([]byte("foo"), []byte("bar")); err != nil {
		return err
	}
	return nil
})
How do I use db.Batch() from goroutines?

Just call db.Batch() from your goroutines; Batch() was created to be used this way. There is an example in the documentation.
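For illustration, a minimal sketch of several goroutines each calling db.Batch() (the bucket name and key format are assumptions, not from the original question); Bolt coalesces concurrent Batch calls into fewer read-write transactions:

var wg sync.WaitGroup
for i := 0; i < 10; i++ {
	i := i // capture the loop variable for the goroutine
	wg.Add(1)
	go func() {
		defer wg.Done()
		// This call may be combined with Batch calls from other
		// goroutines into a single disk transaction.
		err := db.Batch(func(tx *bolt.Tx) error {
			b, err := tx.CreateBucketIfNotExists([]byte("widgets"))
			if err != nil {
				return err
			}
			return b.Put([]byte(fmt.Sprintf("foo-%d", i)), []byte("bar"))
		})
		if err != nil {
			log.Println(err)
		}
	}()
}
wg.Wait()

Note that Bolt may call the function passed to Batch() more than once if parts of a batch fail and are retried, so the function should be idempotent.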

Related

gorilla websocket NextWriter and WriteMessage() difference

I have a function:
func write() {
	defer func() {
		serverConn.Close()
	}()
	for message := range msgChan {
		w, err := serverConn.NextWriter(websocket.TextMessage)
		if err != nil {
			return
		}
		bmessage, err := json.Marshal(message)
		if err != nil {
			return
		}
		_, err = w.Write(bmessage)
		if err != nil {
			fmt.Println(err)
		}
		if err := w.Close(); err != nil {
			return
		}
	}
}
And I got:
panic: concurrent write to websocket connection
I'm wondering how this is possible: only this function writes to the websocket, and it runs as a single instance. A second question: what is the point of using NextWriter instead of just conn.WriteMessage()? Is it possible that, with a large number of messages, NextWriter writers accumulate and can try to write at the same time?
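For reference, a minimal sketch of the two write styles being compared (conn is an established *websocket.Conn and payload a []byte; both names are illustrative). Per the gorilla/websocket docs, WriteMessage is a helper that obtains a writer via NextWriter, writes the message, and closes the writer:

// Style 1: one call per message.
if err := conn.WriteMessage(websocket.TextMessage, payload); err != nil {
	return err
}

// Style 2: stream a message in pieces; Close flushes the complete
// message to the network.
w, err := conn.NextWriter(websocket.TextMessage)
if err != nil {
	return err
}
if _, err := w.Write(payload); err != nil {
	return err
}
if err := w.Close(); err != nil {
	return err
}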

Atomically Execute commands across Redis Data Structures

I want to execute some Redis commands atomically (HDEL, SADD, HSET, etc.). I see the Watch feature in go-redis for implementing transactions; however, since I am not going to modify the value of a key (i.e., use SET, GET, etc.), does it make sense to use Watch to execute them as a transaction, or would just wrapping the commands in a TxPipeline be good enough?
Approach 1: Using Watch
func sampleTransaction() error {
	transactionFunc := func(tx *redis.Tx) error {
		// Queue the commands on the pipeliner; they are executed
		// together when the pipeline is submitted.
		_, err := tx.TxPipelined(context.Background(), func(pipe redis.Pipeliner) error {
			pipe.SAdd(context.Background(), "redis-set-key", "value1")
			pipe.HDel(context.Background(), "redis-hash-key", "value1")
			return nil
		})
		return err
	}
	retries := 10
	// Retry if a watched key has been changed.
	for i := 0; i < retries; i++ {
		fmt.Println("tries", i)
		err := redisClient.Watch(context.Background(), transactionFunc,
			"redis-set-key", "redis-hash-key")
		if err == nil {
			// Success.
			return nil
		}
		if err == redis.TxFailedErr {
			continue
		}
		return err
	}
	return redis.TxFailedErr
}
Approach 2: Just wrapping in TxPipelined
func sampleTransaction() error {
	_, err := redisClient.TxPipelined(context.Background(), func(pipe redis.Pipeliner) error {
		pipe.SAdd(context.Background(), "redis-set-key", "value1")
		pipe.HDel(context.Background(), "redis-hash-key", "value1")
		return nil
	})
	return err
}
As far as I know, plain pipelines do not guarantee atomicity. If you need atomicity, use a Lua script.
https://pkg.go.dev/github.com/mediocregopher/radix.v3#NewEvalScript
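The link above is for the radix library, but the same idea works with the question's go-redis client. A minimal sketch (key and member names are carried over from the examples above): an EVAL script executes atomically on the server, so both mutations happen together or not at all.

// The script runs as a single atomic unit on the Redis server.
var atomicScript = redis.NewScript(`
redis.call("SADD", KEYS[1], ARGV[1])
redis.call("HDEL", KEYS[2], ARGV[1])
return 1
`)

func runAtomically(ctx context.Context, rdb *redis.Client) error {
	_, err := atomicScript.Run(ctx, rdb,
		[]string{"redis-set-key", "redis-hash-key"}, // KEYS[1], KEYS[2]
		"value1", // ARGV[1]
	).Result()
	return err
}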

golang database/sql prepared statement in transaction

While reading the example of a prepared statement used in a transaction in Go's database/sql documentation, I noticed that one of the lines says "danger", yet the code example is given without an alternative.
I would like a clearer explanation of the code below, as not much information is provided on the wiki page at http://go-database-sql.org/prepared.html
tx, err := db.Begin()
if err != nil {
	log.Fatal(err)
}
defer tx.Rollback()
stmt, err := tx.Prepare("INSERT INTO foo VALUES (?)")
if err != nil {
	log.Fatal(err)
}
defer stmt.Close() // danger!
for i := 0; i < 10; i++ {
	_, err = stmt.Exec(i)
	if err != nil {
		log.Fatal(err)
	}
}
err = tx.Commit()
if err != nil {
	log.Fatal(err)
}
// stmt.Close() runs here!
The comment on defer stmt.Close() says it is dangerous, yet the line is not commented out for users to remove.
I see no issue in the above code, since defer runs the call at the end of the function. But do they mean the above code is wrong and should be replaced with the code below, or with some better alternative?
tx, err := db.Begin()
if err != nil {
	log.Fatal(err)
}
defer tx.Rollback()
stmt, err := tx.Prepare("INSERT INTO foo VALUES (?)")
if err != nil {
	log.Fatal(err)
}
// Commented out below line.
// defer stmt.Close()
for i := 0; i < 10; i++ {
	_, err = stmt.Exec(i)
	if err != nil {
		log.Fatal(err)
	}
}
err = tx.Commit()
if err != nil {
	log.Fatal(err)
}
// Comment removed from below line to close the stmt.
stmt.Close()
I see no difference between the two versions above, so I would appreciate expert advice on whether there is any difference or whether I am missing something.
A defer statement is a good way to make sure something runs no matter how you exit the function.
In this particular case, it seems not to matter, since all the error handlers use log.Fatal. If you replace the log.Fatals with return statements and remove the defers, you now have to clean up in many places:
tx, err := db.Begin()
if err != nil {
	return nil, err
}
stmt, err := tx.Prepare("INSERT INTO foo VALUES (?)")
if err != nil {
	tx.Rollback()
	return nil, err
}
for i := 0; i < 10; i++ {
	_, err = stmt.Exec(i)
	if err != nil {
		stmt.Close()
		tx.Rollback()
		return nil, err
	}
}
err = tx.Commit()
if err != nil {
	stmt.Close()
	tx.Rollback()
	return nil, err
}
stmt.Close()
return someValue, nil
If you use defer, it is harder to forget one of the places where you need to clean something up.
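For comparison, a sketch of the same logic keeping the defers while returning errors (the function name and return shape are illustrative); a deferred tx.Rollback() after a successful Commit is a harmless no-op:

func insertTen(db *sql.DB) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit has succeeded

	stmt, err := tx.Prepare("INSERT INTO foo VALUES (?)")
	if err != nil {
		return err
	}
	defer stmt.Close()

	for i := 0; i < 10; i++ {
		if _, err := stmt.Exec(i); err != nil {
			return err
		}
	}
	return tx.Commit()
}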

golang scp file using crypto/ssh

I'm trying to download a remote file over SSH.
The following approach works fine in a shell:
ssh hostname "tar cz /opt/local/folder" > folder.tar.gz
However, the same approach in Go produces a difference in output artifact size. For example, the same folder produces a 179B gz artifact with pure shell and a 178B one with the Go script.
I assume that something is missing from the io.Reader, or the session is closed too early. I kindly ask for your help.
Here is an example of my script:
func executeCmd(cmd, hostname string, config *ssh.ClientConfig, path string) error {
	conn, _ := ssh.Dial("tcp", hostname+":22", config)
	session, err := conn.NewSession()
	if err != nil {
		panic("Failed to create session: " + err.Error())
	}
	r, _ := session.StdoutPipe()
	scanner := bufio.NewScanner(r)
	go func() {
		defer session.Close()
		name := fmt.Sprintf("%s/backup_folder_%v.tar.gz", path, time.Now().Unix())
		file, err := os.OpenFile(name, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0644)
		if err != nil {
			panic(err)
		}
		defer file.Close()
		for scanner.Scan() {
			fmt.Println(scanner.Bytes())
			if err := scanner.Err(); err != nil {
				fmt.Println(err)
			}
			if _, err = file.Write(scanner.Bytes()); err != nil {
				log.Fatal(err)
			}
		}
	}()
	if err := session.Run(cmd); err != nil {
		fmt.Println(err.Error())
		panic("Failed to run: " + err.Error())
	}
	return nil
}
Thanks!
bufio.Scanner is for newline-delimited text. According to the documentation, the scanner removes the newline characters, stripping any 0x0A (10) bytes out of your binary file.
You don't need a goroutine to do the copy, because you can use session.Start to start the process asynchronously.
You probably don't need bufio either. You should be using io.Copy to copy the file; it has an internal buffer already, on top of any buffering done in the ssh client itself. If an additional buffer is needed for performance, wrap the session output in a bufio.Reader.
Finally, you return an error value, so use it rather than panicking on regular error conditions.
conn, err := ssh.Dial("tcp", hostname+":22", config)
if err != nil {
	return err
}
session, err := conn.NewSession()
if err != nil {
	return err
}
defer session.Close()
r, err := session.StdoutPipe()
if err != nil {
	return err
}
name := fmt.Sprintf("%s/backup_folder_%v.tar.gz", path, time.Now().Unix())
file, err := os.OpenFile(name, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0644)
if err != nil {
	return err
}
defer file.Close()
if err := session.Start(cmd); err != nil {
	return err
}
if _, err := io.Copy(file, r); err != nil {
	return err
}
if err := session.Wait(); err != nil {
	return err
}
return nil
You can try doing something like this:
r, _ := session.StdoutPipe()
reader := bufio.NewReader(r)
go func() {
	defer session.Close()
	// open file etc
	// 10 is the number of bytes you'd like to copy in one write operation
	p := make([]byte, 10)
	for {
		n, err := reader.Read(p)
		if n > 0 {
			// write what was read before checking for EOF, since Read
			// can return data together with io.EOF
			if _, werr := file.Write(p[:n]); werr != nil {
				log.Fatal(werr)
			}
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal("err", err)
		}
	}
}()
Make sure your goroutines are synchronized properly so the output is completely written to the file.
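One way to get that synchronization, sketched under the assumption that the copy happens in the goroutine above, is a sync.WaitGroup:

var wg sync.WaitGroup
wg.Add(1)
go func() {
	defer wg.Done()
	defer session.Close()
	// ... read from reader and write to file as above ...
}()
if err := session.Run(cmd); err != nil {
	return err
}
wg.Wait() // block until the file has been completely written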

How to query Redis db from golang using redigo library

I am trying to figure out the best way to query a Redis db for multiple keys in one command.
I have seen MGET, which can be used from redis-cli, but how do you do that using the Redigo library from Go code? Imagine I have a slice of keys and I want to fetch from Redis all the values for those keys in one query.
Thanks in advance!
Assuming that c is a Redigo connection and keys is a []string of your keys:
var args []interface{}
for _, k := range keys {
	args = append(args, k)
}
values, err := redis.Strings(c.Do("MGET", args...))
if err != nil {
	// handle error
}
for _, v := range values {
	fmt.Println(v)
}
The Go FAQ explains why you need to copy the keys, and the spec describes how to pass a slice to a variadic parameter.
http://play.golang.org/p/FJazj_PuCq
package main

import (
	"fmt"
	"log"

	"github.com/gomodule/redigo/redis"
)

func main() {
	// connect to localhost; make sure redis-server is running on the default port
	conn, err := redis.Dial("tcp", ":6379")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// add some keys
	if _, err = conn.Do("SET", "k1", "a"); err != nil {
		log.Fatal(err)
	}
	if _, err = conn.Do("SET", "k2", "b"); err != nil {
		log.Fatal(err)
	}
	// for fun, let's leave k3 non-existing
	// get many keys in a single MGET, ask redigo for a []string result
	strs, err := redis.Strings(conn.Do("MGET", "k1", "k2", "k3"))
	if err != nil {
		log.Fatal(err)
	}
	// prints [a b ]
	fmt.Println(strs)
	// now what if we want some integers instead?
	if _, err = conn.Do("SET", "k4", "1"); err != nil {
		log.Fatal(err)
	}
	if _, err = conn.Do("SET", "k5", "2"); err != nil {
		log.Fatal(err)
	}
	// get the keys, but ask redigo to give us a []interface{}
	// (at the time of writing it didn't have a redis.Ints helper)
	vals, err := redis.Values(conn.Do("MGET", "k4", "k5", "k6"))
	if err != nil {
		log.Fatal(err)
	}
	// scan the []interface{} slice into a []int slice
	var ints []int
	if err = redis.ScanSlice(vals, &ints); err != nil {
		log.Fatal(err)
	}
	// prints [1 2 0]
	fmt.Println(ints)
}
UPDATE March 10th 2015: redigo now has a redis.Ints helper.
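With that helper, the []interface{} detour above can be skipped. A short sketch, reusing the keys from the example:

// redis.Ints converts the MGET reply directly into a []int.
ints, err := redis.Ints(conn.Do("MGET", "k4", "k5", "k6"))
if err != nil {
	log.Fatal(err)
}
fmt.Println(ints) // [1 2 0]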
