Filters on Multiple Columns in Bigtable - Go

From Cloud Bigtable's documentation:
// READING OP HERE
timestamp := bigtable.Now()
mut := bigtable.NewMutation()
mut.Set(columnFamilyName, "os_name", timestamp, []byte("android"))
filter := bigtable.ChainFilters(
    bigtable.FamilyFilter(columnFamilyName),
    bigtable.ColumnFilter("os_build"),
    bigtable.ValueFilter("PQ2A\\..*"))
conditionalMutation := bigtable.NewCondMutation(filter, mut, nil)
rowKey := "phone#4c410523#20190501"
if err := tbl.Apply(ctx, rowKey, conditionalMutation); err != nil {
    return fmt.Errorf("Apply: %v", err)
}
fmt.Println("Successfully updated row's os_name")
I wanted to know if this also enables concurrency control, i.e. if we go by the sequence
#1 - Read
#2 - Modify on Read
#3 - Write
will tbl.Apply fail if two threads try to modify the same row at the same time?

The check-and-write of a conditional mutation is performed as a single, atomic action. If you send multiple mutations they may be executed in an arbitrary order, but because the conditional mutation is a single action it won't be affected by another mutation. Therefore, tbl.Apply should not fail in your case.
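If you also want to observe whether the condition actually matched (as opposed to whether the RPC itself succeeded), the Go client can report that through an ApplyOption. A small sketch building on the snippet above, using GetCondMutationResult from the cloud.google.com/go/bigtable package:
var matched bool
if err := tbl.Apply(ctx, rowKey, conditionalMutation,
    bigtable.GetCondMutationResult(&matched)); err != nil {
    return fmt.Errorf("Apply: %v", err)
}
if matched {
    // the filter matched, so mut was applied to the row
}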

Related

Is it possible to write-truncate a partition when inserting rows with the Go library for BigQuery?

dataset := s.client.DatasetInProject(someProjectId, someDatasetId)
inserter := dataset.Table(someTableName).Inserter()
for _, batch := range batches {
    if err := inserter.Put(ctx, batch); err != nil {
        return SomeCustomError
    }
}
This is how I currently use the Go client library to write rows to a BigQuery table. What I want to achieve, however, is to write-truncate a BigQuery partition.
https://pkg.go.dev/cloud.google.com/go/bigquery
This option is listed there, but for some reason it is only used on the Loader (LoaderFrom) path, which loads from GCS first.
I am wondering whether it's possible to set this truncating behavior (instead of just appending) for the standard insert from a slice of values in Go.
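For reference, this is roughly what that option looks like on the Loader path the package documentation describes. It is only a sketch: the bucket URI and the $20190501 partition decorator below are placeholders. (As far as I can tell, the streaming Inserter doesn't expose a write disposition, which is presumably why the option only appears on the load path.)
gcsRef := bigquery.NewGCSReference("gs://some-bucket/some-file.json")
gcsRef.SourceFormat = bigquery.JSON

// "$20190501" addresses a single partition of the table
loader := s.client.DatasetInProject(someProjectId, someDatasetId).
    Table(someTableName + "$20190501").
    LoaderFrom(gcsRef)
loader.WriteDisposition = bigquery.WriteTruncate // replace the partition's contents

job, err := loader.Run(ctx)
if err != nil {
    return err
}
status, err := job.Wait(ctx)
if err != nil {
    return err
}
if err := status.Err(); err != nil {
    return err
}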

Go Gorm Atomic Update to Increment Counter

I have what is essentially a counter that users can increment.
However, I want to avoid the race condition of two users incrementing the counter at once.
Is there a way to atomically increment a counter using Gorm as opposed to fetching the value from the database, incrementing, and finally updating the database?
If you want to use the basic ORM features, you can use FOR UPDATE as a query option when retrieving the record; the database will lock the record for that specific connection until that connection issues an UPDATE query to change it.
Both the SELECT and the UPDATE must happen on the same connection, which means you need to wrap them in a transaction (otherwise Go may send the second query over a different connection).
Please note that this will make every other connection that wants to SELECT the same record wait until you've done the UPDATE. That is not an issue for most applications, but if you have very high concurrency, or the time between the SELECT ... FOR UPDATE and the following UPDATE is long, this may not be for you.
In addition to FOR UPDATE, the FOR SHARE option sounds like it could also work for you, with less locking contention (but I don't know it well enough to say that for sure).
Note: This assumes you use an RDBMS that supports SELECT ... FOR UPDATE; if it doesn't, please update the question to tell us which RDBMS you are using.
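A minimal sketch of that pattern, assuming GORM v2 (gorm.io/gorm and gorm.io/gorm/clause) and an illustrative Feature model with a Counter field; with GORM v1 the rough equivalent of the locking clause is Set("gorm:query_option", "FOR UPDATE"):
err := db.Transaction(func(tx *gorm.DB) error {
    var feature Feature
    // SELECT ... FOR UPDATE: the row stays locked for this transaction
    if err := tx.Clauses(clause.Locking{Strength: "UPDATE"}).
        First(&feature, "id = ?", id).Error; err != nil {
        return err
    }
    // the UPDATE runs on the same connection; the commit releases the lock
    return tx.Model(&feature).Update("counter", feature.Counter+1).Error
})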
Another option is to just go around the ORM and do db.Exec("UPDATE counter_table SET counter = counter + 1 WHERE id = ?", 42) (though see https://stackoverflow.com/a/29945125/1073170 for some pitfalls).
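If you would rather stay inside GORM for that single-statement approach instead of dropping to raw SQL, gorm.Expr can express the same atomic increment. A sketch, assuming an illustrative Feature model mapped to a table with a counter column:
res := db.Model(&Feature{}).
    Where("id = ?", id).
    Update("counter", gorm.Expr("counter + ?", 1))
if res.Error != nil {
    return res.Error
}
if res.RowsAffected == 0 {
    // no row with that id
}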
A possible solution is to use GORM transactions (https://gorm.io/docs/transactions.html).
err := db.Transaction(func(tx *gorm.DB) error {
    // Get the model if it exists
    var feature models.Feature
    if err := tx.Where("id = ?", c.Param("id")).First(&feature).Error; err != nil {
        return err
    }
    // Increment the counter
    // (note: without a row lock such as FOR UPDATE on the query above, two
    // concurrent transactions can still read the same Counter value)
    if err := tx.Model(&feature).Update("Counter", feature.Counter+1).Error; err != nil {
        return err
    }
    return nil
})
if err != nil {
    c.Status(http.StatusInternalServerError)
    return
}
c.Status(http.StatusOK)

How can I dynamically populate a struct?

I want to dynamically populate my internal struct for an atomic insert. I am new to Go, so pointers and referencing them is something I am still learning. I cannot figure out why this for-each loop puts the same fields in twice. I tried removing the '&', but then I get a "cannot use type as *type" error. I checked to make sure my loop hits every object in tradeArray, and it does. It looks like it is overwriting the earlier objects with the last one it loops over. How can I fix this?
func createTrade(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    var tradeArray []Trade
    if err := json.NewDecoder(r.Body).Decode(&tradeArray); err != nil {
        e := Error{Message: "Bad Request - Improper Types Passed"}
        w.WriteHeader(http.StatusBadRequest)
        _ = json.NewEncoder(w).Encode(e)
        return
    }
    for _, trade := range tradeArray {
        internal := InternalTrade{
            Id:    strconv.Itoa(rand.Intn(1000000)),
            Trade: &trade,
        }
        submit := TradeSubmitted{
            TradeId:       internal.Id,
            ClientTradeId: trade.ClientTradeId,
        }
        submitArray = append(submitArray, submit)
        trades = append(trades, internal)
    }
    if err := json.NewEncoder(w).Encode(submitArray); err != nil {
        e := Error{Message: "Internal Server Error"}
        w.WriteHeader(http.StatusInternalServerError)
        _ = json.NewEncoder(w).Encode(e)
        return
    }
}
edit: I was able to fix my problem by creating a new variable to hold the trade and referencing that variable in the struct creation. I am not sure how this is different from what I was doing above by just referencing trade; if someone could explain that, I would greatly appreciate it.
for _, trade := range tradeArray {
    p := trade
    internal := InternalTrade{
        Id:    strconv.Itoa(rand.Intn(1000000)),
        Trade: &p,
    }
    submit := TradeSubmitted{
        TradeId:       internal.Id,
        ClientTradeId: trade.ClientTradeId,
    }
    submitArray = append(submitArray, submit)
    trades = append(trades, internal)
}
Let's look at just these parts:
var tradeArray []Trade
// code that fills in `tradeArray` -- correct, and omitted here
for _, trade := range tradeArray {
    internal := InternalTrade{
        Id:    strconv.Itoa(rand.Intn(1000000)),
        Trade: &trade,
    }
    submit := TradeSubmitted{
        TradeId:       internal.Id,
        ClientTradeId: trade.ClientTradeId,
    }
    submitArray = append(submitArray, submit)
    trades = append(trades, internal)
}
This for loop, as you have seen, doesn't work the way you want. Here's a variant of it that's kind of similar, except that the variable trade has scope that extends beyond the for loop:
var trade Trade
for i := range tradeArray {
    trade = tradeArray[i]
    internal := InternalTrade{
        Id:    strconv.Itoa(rand.Intn(1000000)),
        Trade: &trade,
    }
    // do correct stuff with `internal`
}
Note that each internal object points to a single, shared trade variable, whose value gets overwritten on each trip through the loop. The result is that they all point to the one from the last trip through the loop.
Your fix is itself OK: each trip through the loop, you make a new (different) p variable, and use &p, so that each internal.Trade has a different pointer to a different copy. You could also just do trade := trade inside the loop, to create a new unique trade variable. However, in this particular case, it may make the most sense to rewrite the loop this way:
for i := range tradeArray {
    internal := InternalTrade{
        Id:    strconv.Itoa(rand.Intn(1000000)),
        Trade: &tradeArray[i],
    }
    // do correct stuff with `internal`
}
That is, you already have len(tradeArray) different Trade objects: the slice header tradeArray gives you access to each tradeArray[i] instance, stored in the underlying array. You can just point to those directly.
There are various advantages and disadvantages to this approach. The big advantage is that you don't re-copy each trade at all: you just use the ones from the array that the slice header covers, that was allocated inside the json Decode function somewhere. The big disadvantage is that this underlying array cannot be garbage-collected as long as you retain any pointer to any of its elements. That disadvantage may have no cost at all, depending on the structure of the remaining code, but if it is a disadvantage, consider declaring tradeArray as:
var tradeArray []*Trade
so that the json Decode function allocates each one separately, and you can point to them one at a time without forcing the retention of the entire collection.
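A sketch of that variant, reusing the identifiers from the handler above (Trade, InternalTrade, TradeSubmitted, submitArray, trades):
var tradeArray []*Trade
if err := json.NewDecoder(r.Body).Decode(&tradeArray); err != nil {
    // handle the bad request as before
    return
}
for _, trade := range tradeArray {
    internal := InternalTrade{
        Id:    strconv.Itoa(rand.Intn(1000000)),
        Trade: trade, // already a *Trade; no & needed
    }
    submitArray = append(submitArray, TradeSubmitted{
        TradeId:       internal.Id,
        ClientTradeId: trade.ClientTradeId,
    })
    trades = append(trades, internal)
}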

How to read data from serial and process it when a specific delimiter is found

I have a device that continuously sends data over a serial port.
I want to read this data and process it.
The data uses "!" as a delimiter, and as soon as this delimiter appears I want to pause reading and process the data that has already been received.
How can I do that? Is there any documentation or example I can read or follow?
For reading data from a serial port you can find a few packages on GitHub, e.g. tarm/serial.
You can use this package to read data from your serial port. In order to read until a specific delimiter is reached, you can use something like:
config := &serial.Config{Name: "/dev/ttyUSB", Baud: 9600}
s, err := serial.OpenPort(config)
if err != nil {
    // stops execution
    log.Fatal(err)
}
// golang reader interface
r := bufio.NewReader(s)
// reads until the delimiter '!' (0x21) is reached
data, err := r.ReadBytes('\x21')
if err != nil {
    // stops execution
    log.Fatal(err)
}
// or use fmt.Printf() with the right verb
// https://golang.org/pkg/fmt/#hdr-Printing
fmt.Println(data)
See also: Reading from serial port with while-loop
bufio's reader unfortunately did not work for me - it kept crashing after a while. This was a no-go since I needed a stable solution for a low-performance system.
My solution was to implement this suggestion with a small tweak. As noted, if you don't use bufio, the buffer gets overwritten every time you call
n, err := s.Read(buf0)
To fix this, append the bytes from buf0 to a second buffer, buf1:
if n > 0 {
    buf1 = append(buf1, buf0[:n]...)
}
Then parse the bytes stored in buf1. If you find the subset you're looking for, process it further.
Make sure to clear the buffers in a suitable manner, and limit the frequency the loop runs at (e.g. with time.Sleep), as in the sketch below.
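A sketch of that loop, assuming the tarm/serial port s from the first answer and the standard bytes, log and time packages; process() is a hypothetical handler for one complete message:
buf0 := make([]byte, 128)
var buf1 []byte
for {
    n, err := s.Read(buf0)
    if err != nil {
        log.Fatal(err)
    }
    if n > 0 {
        // append what was just read to the accumulating buffer
        buf1 = append(buf1, buf0[:n]...)
    }
    // check whether a complete message (terminated by '!') has arrived
    if i := bytes.IndexByte(buf1, '!'); i >= 0 {
        process(buf1[:i]) // handle everything before the delimiter
        buf1 = buf1[i+1:] // keep any bytes that arrived after it
    }
    time.Sleep(10 * time.Millisecond) // limit how hot the loop runs
}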

How can I check whether Psql successfully updated the record in Go

I use this driver to communicate with PostgreSQL from Go. When I issue an update query, I have no way of knowing whether it actually updated anything (it may update 0 rows if no row with that id is present).
_, err := Db.Query("UPDATE tags SET name=$1 WHERE id=1", name)
I tried to inspect the err variable (in the way the docs suggest for Insert statements):
if err == sql.ErrNoRows {
    ...
}
But even with a non-existent id, err is still nil.
I also tried to use QueryRow with returning clause:
id := 0
err := Db.QueryRow("UPDATE tags SET name=$1 WHERE id=1 RETURNING id", name).Scan(&id)
But this one fails to scan &id when id=1 is not present in the database.
So what is the canonical way to check whether my update updated anything?
Try using db.Exec() instead of db.Query() for queries that do not return results. Instead of returning a sql.Rows object (which doesn't have a way to check how many rows were affected), it returns a sql.Result object, which has a method RowsAffected() (int64, error). This returns the number of rows affected (inserted, deleted, updated) by any write operations in the query fed to the Exec() call.
res, err := db.Exec(query, args...)
if err != nil {
    return err
}
n, err := res.RowsAffected()
if err != nil {
    return err
}
// do something with n
Note that if your query doesn't affect any rows directly, but only does so via a subquery, the rows affected by the subquery will not be counted as rows affected for that method call.
Also, as the method comment notes, this doesn't work for all database types, but I know for a fact it works with pq, as we're using that driver ourselves (and using the RowsAffected() method).
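Applied to the UPDATE from the question, that looks roughly like this (a sketch, assuming the same Db handle and the lib/pq driver):
res, err := Db.Exec("UPDATE tags SET name=$1 WHERE id=1", name)
if err != nil {
    return err
}
n, err := res.RowsAffected()
if err != nil {
    return err
}
if n == 0 {
    // no tag with id 1 existed, so nothing was updated
}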
Reference links:
https://golang.org/pkg/database/sql/#DB.Exec
https://golang.org/pkg/database/sql/#Result

Resources