I have what is essentially a counter that users can increment.
However, I want to avoid the race condition of two users incrementing the counter at once.
Is there a way to atomically increment a counter using GORM, as opposed to fetching the value from the database, incrementing it, and finally writing it back?
If you want to use the basic ORM features, you can add FOR UPDATE as a query option when retrieving the record; the database will then lock the record for that specific connection until that connection issues an UPDATE query to change it.
Both the SELECT and UPDATE statements must happen on the same connection, which means you need to wrap them in a transaction (otherwise Go may send the second query over a different connection).
Please note that this will make every other connection that wants to SELECT the same record wait until you've done the UPDATE. That is not an issue for most applications, but if you either have very high concurrency or the time between SELECT ... FOR UPDATE and the UPDATE after that is long, this may not be for you.
In addition to FOR UPDATE, the FOR SHARE option sounds like it could also work for you, with less lock contention (but I don't know it well enough to say so for certain).
Note: This assumes you use an RDBMS that supports SELECT ... FOR UPDATE; if it doesn't, please update the question to tell us which RDBMS you are using.
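A minimal sketch of what that could look like with GORM v2's locking clause (the Counter model and column name here are made up for illustration, and you'd need the gorm.io/gorm/clause import):

type Counter struct {
    ID    uint
    Count int
}

err := db.Transaction(func(tx *gorm.DB) error {
    var c Counter
    // SELECT ... FOR UPDATE: the row stays locked until the transaction ends
    if err := tx.Clauses(clause.Locking{Strength: "UPDATE"}).
        First(&c, "id = ?", 42).Error; err != nil {
        return err
    }
    // Safe read-modify-write while the lock is held
    return tx.Model(&c).Update("count", c.Count+1).Error
})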
Another option is to just go around the ORM and do db.Exec("UPDATE counter_table SET counter = counter + 1 WHERE id = ?", 42) (though see https://stackoverflow.com/a/29945125/1073170 for some pitfalls).
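Formatted and with the error checked, that raw-SQL variant could look like this (a sketch assuming GORM v2, where Exec returns a *gorm.DB whose Error field you can inspect):

// The increment happens entirely in SQL, so no prior SELECT (and no lock) is needed.
if err := db.Exec("UPDATE counter_table SET counter = counter + 1 WHERE id = ?", 42).Error; err != nil {
    // handle the error
}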
A possible solution is to use GORM transactions (https://gorm.io/docs/transactions.html).
err := db.Transaction(func(tx *gorm.DB) error {
    // Get the model if it exists
    var feature models.Feature
    if err := tx.Where("id = ?", c.Param("id")).First(&feature).Error; err != nil {
        return err
    }
    // Increment the counter
    if err := tx.Model(&feature).Update("Counter", feature.Counter+1).Error; err != nil {
        return err
    }
    return nil
})
if err != nil {
    c.Status(http.StatusInternalServerError)
    return
}
c.Status(http.StatusOK)
The Problem
Using the golang cloud.google.com/go/datastore package, I create a transaction and perform a series of GetMultis and PutMultis; on commit of this transaction I'm confronted with an entity write limit error.
2021/12/22 09:07:18 err: rpc error: code = InvalidArgument desc = cannot write more than 500 entities in a single call
The Question
My question is how do you create a transaction with more than 500 writes?
I want my operation to remain atomic, but I can't seem to get around this write limit for a transaction; the same set of writes runs just fine when I test on an emulator, writing in batches of 500.
What I've Tried
Please excuse the pseudocode, but I'm trying to get the gist of what I've done.
All in one
transaction, err := datastoreClient.NewTransaction(ctx)
transaction.PutMulti(allKeys, allEntities)
transaction.Commit()
// err too many entities written in a single call
Batched in an attempt to avoid the write limit
transaction, err := datastoreClient.NewTransaction(ctx)
transaction.PutMulti(first500Keys, first500Entities)
transaction.PutMulti(second500Keys, second500Entities)
transaction.Commit()
// err too many entities written in a single call
A simple, non-transactional PutMulti also fails
datastoreClient.PutMulti(ctx, allKeys, allEntities)
// err too many entities written in a single call
What Works
Non-atomic write to the datastore
datastoreClient.PutMulti(ctx, first500Keys, first500Entities)
datastoreClient.PutMulti(ctx, second500Keys, second500Entities)
Here's the real code I used for the write, either as a batched transaction or a regular PutMulti:
// Write in batches of at most 500 entities.
for i := 0; i < (len(allKeys)+499)/500; i++ {
    max := (i + 1) * 500
    if len(allKeys) < max {
        max = len(allKeys)
    }
    _, err = svc.dsClient.PutMulti(ctx, allKeys[i*500:max], allEntities[i*500:max])
    if err != nil {
        return
    }
}
Where I'm Lost
So, in an effort to keep my work atomic, is there any way to commit a transaction that writes more than 500 entities?
There is nothing you can do about it. This limit is enforced by the platform to ensure scalability and prevent performance degradation: you can't write more than 500 entities in a single transaction.
The limit could only be changed on Google's side; there is nothing you can do on yours.
I am using a raw query to create an entry in MySQL,
db.Exec("INSERT STATEMENT")
I want to retrieve the last insert ID. With the native database/sql package it is easy, but here in GORM I can't find anything for it.
I know that using db.Create would solve my problem, but I have to use a raw query and nothing else.
Why do you want to do it with GORM if you are not actually using GORM, but a raw query anyway? GORM exposes the generic database interface through the DB method. So you can do this:
sqlDB, err := db.DB()
res, err := sqlDB.Exec("INSERT STATEMENT")
lid, err := res.LastInsertId()
Of course, you should handle possible errors.
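For completeness, a minimal sketch with those error checks in place (LastInsertId depends on driver support; it works with MySQL, which is what the question uses):

sqlDB, err := db.DB()
if err != nil {
    return err
}
res, err := sqlDB.Exec("INSERT STATEMENT")
if err != nil {
    return err
}
lid, err := res.LastInsertId()
if err != nil {
    return err
}
fmt.Println("last insert id:", lid)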
From Cloud Bigtable's documentation:
// READING OP HERE
timestamp := bigtable.Now()
mut := bigtable.NewMutation()
mut.Set(columnFamilyName, "os_name", timestamp, []byte("android"))
filter := bigtable.ChainFilters(
    bigtable.FamilyFilter(columnFamilyName),
    bigtable.ColumnFilter("os_build"),
    bigtable.ValueFilter("PQ2A\\..*"))
conditionalMutation := bigtable.NewCondMutation(filter, mut, nil)
rowKey := "phone#4c410523#20190501"
if err := tbl.Apply(ctx, rowKey, conditionalMutation); err != nil {
    return fmt.Errorf("Apply: %v", err)
}
fmt.Println("Successfully updated row's os_name")
I wanted to know whether this also provides concurrency control, i.e. if we go by the sequence:
#1 - Read
#2 - Modify on Read
#3 - Write
If two threads try to modify the same row at the same time, will tbl.Apply fail?
The check-and-write of a conditional mutation is performed as a single, atomic action. If you send multiple mutations they may be executed in an arbitrary order, but since the conditional mutation is a single action it won't be affected by another mutation. Therefore, tbl.Apply should not fail in your case.
I use this driver to communicate with PostgreSQL from Go. Now when I issue an UPDATE query, I have no way of knowing whether it actually updated anything (it can update 0 rows if no row with that id is present).
_, err := Db.Query("UPDATE tags SET name=$1 WHERE id=1", name)
I tried to inspect the err variable (in the way the docs suggest for an INSERT statement):
if err == sql.ErrNoRows {
...
}
But even with a non-existent id, err is still nil.
I also tried to use QueryRow with a RETURNING clause:
id := 0
err := Db.QueryRow("UPDATE tags SET name=$1 WHERE id=1 RETURNING id", name).Scan(&id)
But this one fails to scan &id when id=1 is not present in the database.
So what is the canonical way to check whether my update updated anything?
Try using db.Exec() instead of db.Query() for queries that do not return results. Instead of returning a sql.Rows object (which doesn't have a way to check how many rows were affected), it returns a sql.Result object, which has a method RowsAffected() (int64, error). This returns the number of rows affected (inserted, deleted, updated) by any write operations in the query fed to the Exec() call.
res, err := db.Exec(query, args...)
if err != nil {
return err
}
n, err := res.RowsAffected()
if err != nil {
return err
}
// do something with n
Note that if your query doesn't affect any rows directly, but only does so via a subquery, the rows affected by the subquery will not be counted as rows affected for that method call.
Also, as the method comment notes, this doesn't work for all database types, but I know for a fact it works with pq, as we're using that driver ourselves (and using the RowsAffected() method).
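Applied to the query from the question, a minimal sketch could look like this:

res, err := db.Exec("UPDATE tags SET name=$1 WHERE id=1", name)
if err != nil {
    return err
}
n, err := res.RowsAffected()
if err != nil {
    return err
}
if n == 0 {
    // no row with id=1 exists, so nothing was updated
}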
Reference links:
https://golang.org/pkg/database/sql/#DB.Exec
https://golang.org/pkg/database/sql/#Result
I have a small Heroku app in which I print out the name and age from each row after query execution.
I want to avoid looping with rows.Next() and Scan(), and instead just show whatever the database returned after query execution, which may be some data or an error.
Can we directly dump data to a string for printing?
rows, err := db.Query("SELECT name FROM users WHERE age = $1", age)
if err != nil {
    log.Fatal(err)
}
for rows.Next() {
    var name string
    if err := rows.Scan(&name); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s is %d\n", name, age)
}
if err := rows.Err(); err != nil {
    log.Fatal(err)
}
Pretty much: No.
The Query method is going to return a pointer to a Rows struct:
func (db *DB) Query(query string, args ...interface{}) (*Rows, error)
If you print that (fmt.Printf("%#v\n", rows)) you'll see something such as:
&sql.Rows{dc:(*sql.driverConn)(0xc8201225a0), releaseConn:(func(error))(0x4802c0), rowsi:(*pq.rows)(0xc820166700), closed:false, lastcols:[]driver.Value(nil), lasterr:error(nil), closeStmt:driver.Stmt(nil)}
...probably not what you want.
Those correspond to the Rows struct from the sql package (you'll notice the fields are not exported):
type Rows struct {
    dc          *driverConn // owned; must call releaseConn when closed to release
    releaseConn func(error)
    rowsi       driver.Rows
    closed      bool
    lastcols    []driver.Value
    lasterr     error       // non-nil only if closed is true
    closeStmt   driver.Stmt // if non-nil, statement to Close on close
}
You'll see []driver.Value (an interface type from the driver package); that looks like where we can expect to find some useful, maybe even human-readable data. But when printed directly it doesn't appear useful, and it's even empty... So you have to get at the underlying information somehow. The sql package gives us the Next method to start with:
Next prepares the next result row for reading with the Scan method.
It returns true on success, or false if there is no next
result row or an error happened while preparing it. Err
should be consulted to distinguish between the two cases.
Every call to Scan, even the first one, must be preceded by a call to Next.
Next allocates a []driver.Value with one element per column, which is accessible (within the sql package) through driver.Rows (the rowsi field), and populates it with values from the query.
After calling rows.Next(), if you did the same fmt.Printf("%#v\n", rows) you should now see that []driver.Value is no longer empty, but it's still not going to be anything you can read, more likely something resembling: []driver.Value{[]uint8{0x47, 0x65...
And since the field isn't exported you can't even try and convert it to something more meaningful. But the sql package gives us a means to do something with the data, which is Scan.
The Scan method is pretty concise, with lengthy comments that I won't paste here, but the really important bit is that it ranges over the columns in the current row you get from the Next method and calls convertAssign(dest[i], sv), which you can see here:
https://golang.org/src/database/sql/convert.go
It's pretty long but actually relatively simple: it essentially switches on the types of the source and destination, converts where it can, and copies from source to destination. The function's comment tells us:
convertAssign copies to dest the value in src, converting it if possible. An error is returned if the copy would result in loss of information. dest should be a pointer type.
So now you have a method (Scan) which you can call directly and which hands you back converted values. Your code sample above is fine (except maybe the call to Fatal() on a Scan error).
It's important to realize that the sql package has to work with a specific driver, which is in turn implemented for specific database software, so there is quite some work going on behind the scenes.
I think your best bet, if you want to hide or generalize the whole Query() -> Next() -> Scan() idiom, is to drop it into another function which does it behind the scenes... write a package in which you abstract away that higher-level implementation, just as the sql package abstracts away some of the driver-specific details: the converting and copying, populating the Rows, and so on.
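As an illustration of that last point, here is a minimal sketch of such a wrapper, assuming a query that returns a single text column (the function name is made up):

// queryStrings hides the Query/Next/Scan idiom for queries that
// return a single text column, collecting the values into a slice.
func queryStrings(db *sql.DB, query string, args ...interface{}) ([]string, error) {
    rows, err := db.Query(query, args...)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var out []string
    for rows.Next() {
        var s string
        if err := rows.Scan(&s); err != nil {
            return nil, err
        }
        out = append(out, s)
    }
    return out, rows.Err()
}

For your example the call would then be something like: names, err := queryStrings(db, "SELECT name FROM users WHERE age = $1", age)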