Failed to update rows in "jinzhu/gorm" pkg - go

I need to update the value of fields in multiple rows.
I query for some of the database rows and update them in a loop, but it doesn't work.
DB.Where("is_send = ?", "0").Find(&artists)
for _, artist := range artists {
    if condition {
        artist.IsSend = 1
        ... (more updates)
        DB.Save(&artist)
    }
}

Change how you range over the slice, as in the example below:
for _, elem := range elems {
    elem = new_val // Won't work, because elem is a copy of
                   // the value from elems
}

for i := range elems {
    elems[i] = new_val // Works, because elems[i] dereferences
                       // the actual value in elems
}
Read: Gotchas
Also, if you're not modifying all the fields, you can use Update instead of Save. Refer: GORM CRUD Interface, UPDATE
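Applied to the loop above, a minimal sketch might look like this (condition stands in for the original check, and artists is assumed to be the slice loaded by the Find call):

DB.Where("is_send = ?", "0").Find(&artists)
for i := range artists {
    if condition {
        artists[i].IsSend = 1
        DB.Save(&artists[i])
    }
}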

Related

How to get columns data from golang apache-arrow?

I am using apache-arrow/go to read parquet data.
I can parse the data into a table using apache-arrow.
reader, err := ipc.NewReader(buf, ipc.WithAllocator(alloc))
if err != nil {
    log.Println(err.Error())
    return nil
}
defer reader.Release()

records := make([]array.Record, 0)
for reader.Next() {
    rec := reader.Record()
    rec.Retain()
    defer rec.Release()
    records = append(records, rec)
}
table := array.NewTableFromRecords(reader.Schema(), records)
Here, I can get the column info from table.Column(index), such as:
for i := range table.Schema().Fields() {
    a := table.Column(i)
    log.Println(a)
}
But the Column struct is defined as
type Column struct {
    field arrow.Field
    data  *Chunked
}
and the println result is like
["WARN" "WARN" "WARN" "WARN" "WARN" "WARN" "WARN" "WARN" "WARN" "WARN"]
However, this is not a string or slice. Is there any way that I can get the data of each column as a string type or []interface{}?
Update:
I found that I can get the element from the column like this:
log.Println(col.(*array.Int64).Value(0))
But I am not sure if this is the recommended way to use it.
When working with Arrow data, there are a couple of concepts to understand:
Array: Metadata + contiguous buffers of data
Record Batch: A schema + a collection of Arrays that are all the same length.
Chunked Array: A group of Arrays of varying lengths but all the same data type. This allows you to treat multiple Arrays as one single column of data without having to copy them all into a contiguous buffer.
Column: Is just a Field + a Chunked Array
Table: A collection of Columns allowing you to treat multiple non-contiguous arrays as a single large table without having to copy them all into contiguous buffers.
In your case, you're reading multiple record batches (groups of contiguous Arrays) and treating them as a single large table. There are a few different ways you can work with the data:
One way is to use a TableReader:
tr := array.NewTableReader(tbl, 5)
defer tr.Release()

for tr.Next() {
    rec := tr.Record()
    for i, col := range rec.Columns() {
        // do something with the Array
    }
}
Another way would be to interact with the columns directly as you were in your example:
for i := 0; i < int(table.NumCols()); i++ {
    col := table.Column(i)
    for _, chunk := range col.Data().Chunks() {
        // do something with chunk (an arrow.Array)
    }
}
Either way, you eventually have an arrow.Array to deal with, which is an interface containing one of the typed Array types. At this point you are going to have to switch on something; you could type switch on the type of the Array itself:
switch arr := col.(type) {
case *array.Int64:
    // do stuff with arr
case *array.Int32:
    // do stuff with arr
case *array.String:
    // do stuff with arr
...
}
Alternately, you could type switch on the data type:
switch col.DataType().ID() {
case arrow.INT64:
    // type assertion needed: col.(*array.Int64)
case arrow.INT32:
    // type assertion needed: col.(*array.Int32)
...
}
For getting the data out of the array, primitive types which are stored contiguously tend to have a *Values method which will return a slice of the type. For example array.Int64 has Int64Values() which returns []int64. Otherwise, all of the types have .Value(int) methods which return the value at a particular index as you showed in your example.
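For example, here is a minimal sketch of pulling plain Go values out of one chunked column, assuming the column holds either int64 or string data (other types would need more cases):

col := table.Column(0)
values := make([]interface{}, 0)
for _, chunk := range col.Data().Chunks() {
    switch arr := chunk.(type) {
    case *array.Int64:
        // contiguous primitive type: grab the whole slice at once
        for _, v := range arr.Int64Values() {
            values = append(values, v)
        }
    case *array.String:
        // no *Values helper here: read each value by index
        for i := 0; i < arr.Len(); i++ {
            values = append(values, arr.Value(i))
        }
    }
}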
Hope this helps!
Make sure you use v9
(import "github.com/apache/arrow/go/v9/arrow"), because it has implemented json.Marshaler (used with go-json).
Use "github.com/goccy/go-json" for the Marshaler (because of this).
Then you can use a TableReader to Marshal each column and Unmarshal it into a []any.
In your example it might look like this:
import (
    "github.com/apache/arrow/go/v9/arrow"
    "github.com/apache/arrow/go/v9/arrow/array"
    "github.com/apache/arrow/go/v9/arrow/memory"
    "github.com/goccy/go-json"
)

...

tr := array.NewTableReader(table, 6)
defer tr.Release()

// fmt.Printf("table.NumRows() = %+v\n", table.NumRows())
// fmt.Printf("table.NumCols() = %+v\n", table.NumCols())

// keySlice keeps the keys in the same order as the data source
keySlice := make([]string, 0, table.NumCols())
res := make(map[string][]any, 0)
var key string
for tr.Next() {
    rec := tr.Record()
    for i, col := range rec.Columns() {
        key = rec.ColumnName(i)
        if res[key] == nil {
            res[key] = make([]any, 0)
            keySlice = append(keySlice, key)
        }
        var tmp []any
        b2, err := json.Marshal(col)
        if err != nil {
            panic(err)
        }
        err = json.Unmarshal(b2, &tmp)
        if err != nil {
            panic(err)
        }
        // fmt.Printf("key = %s\n", key)
        // fmt.Printf("tmp = %+v\n", tmp)
        res[key] = append(res[key], tmp...)
    }
}
fmt.Println("res", res)

Add a index column to dataframe in gota

With the following sample, I can add a new column that is derived from the row values.
It works well.
package main

import (
    "fmt"
    "strings"

    "github.com/go-gota/gota/dataframe"
    "github.com/go-gota/gota/series"
)

func main() {
    csvStr := `accountId,deposit,Withdrawals
anil0001,50,10
vikas0002,10,10
ravi0003,20,10
user1111,NaN,20`
    df := dataframe.ReadCSV(strings.NewReader(csvStr))

    // Within a row, elements are indexed by their column index.
    indexDeposit := 1
    indexWithdrawals := 2

    // Rapply reads the data by rows.
    // You can access each element of the row using
    // s.Elem(index) or s.Val(index).
    // To browse by columns use Capply.
    s := df.Rapply(func(s series.Series) series.Series {
        deposit, err := s.Elem(indexDeposit).Int()
        if err != nil {
            return series.Ints("NAN")
        }
        withdrawal, err := s.Elem(indexWithdrawals).Int()
        if err != nil {
            return series.Ints("NAN")
        }
        return series.Ints(deposit - withdrawal)
    })

    // The new series is appended to
    // the data source via a call to Mutate.
    // You can print s to read its content.
    df = df.Mutate(s.Col("X0")).
        Rename("deposit_Withdrawals_diff", "X0")
    fmt.Println(df)
}
But the question is that I want to add an index (row counter) to each row (later on I want to join it with a subset of the data), so I need an index.
something like
index,accountId,deposit,Withdrawals
1,anil0001,50,10
2,vikas0002,10,10
3,ravi0003,20,10
4,user1111,NaN,20
I see there are no GetIndex or Index methods on series. How can I add this index?
I did it with a counter variable captured by the closure (but I'm not sure it's the best solution for gota; maybe for a pure Go developer it's a good solution :) )
index := 0
s := df.Rapply(func(s series.Series) series.Series {
    index++
    return series.Ints(index)
})
df = df.Mutate(s.Col("X0")).
    Rename("index", "X0")
fmt.Println(df)
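A shorter sketch, assuming a plain 1-based counter is enough: build the index slice yourself and Mutate it in as a new series (this relies on df.Nrow() and series.New from gota):

idx := make([]int, df.Nrow())
for i := range idx {
    idx[i] = i + 1 // 1-based row counter
}
df = df.Mutate(series.New(idx, series.Int, "index"))
fmt.Println(df)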

Append to golang slice passed as empty interface

How do I append to a slice passed as an empty interface (that has been verified to be a *[]struct)?
func main() {
    var mySlice []myStruct // myStruct can be any struct (dynamic)
    decode(&mySlice, "...")
}

func decode(dest interface{}, src string) {
    // assume dest has been verified to be *[]struct
    var modelType reflect.Type = getStructType(dest)
    rows, fields := getRows(src)

    for _, row := range rows {
        // create new struct of type modelType and assign all fields
        model := reflect.New(modelType)
        for _, field := range fields {
            fieldValue := getRowValue(row, field)
            model.Elem().FieldByName(field).Set(fieldValue)
        }
        castedModelRow := model.Elem().Interface()
        // append model to dest; how to do this?
        // dest = append(dest, castedModelRow)
    }
}
Things I've tried:
This simply panics with reflect: call of reflect.Append on ptr Value (since we pass &mySlice instead of mySlice):
dest = reflect.Append(reflect.ValueOf(dest), reflect.ValueOf(castedModelRow))
This works but doesn't set the value back to dest; in the main func, len(mySlice) remains 0 after decode is called.
func decode(dest interface{}, src string) {
    ...
    result := reflect.MakeSlice(reflect.SliceOf(modelType), rowCount, rowCount)
    for _, row := range rows {
        ...
        result = reflect.Append(result, reflect.ValueOf(castedModelRow))
    }
    dest = reflect.ValueOf(result)
}
Here's how to fix the second decode function shown in the question. The statement
dest = reflect.ValueOf(result)
modifies local variable dest, not the caller's value. Use the following statement to modify the caller's slice:
reflect.ValueOf(dest).Elem().Set(result)
The code in the question appends decoded elements after the elements created in reflect.MakeSlice. The resulting slice has len(rows) zero values followed by len(rows) decoded values. Fix by changing
result = reflect.Append(result, reflect.ValueOf(castedModelRow))
to:
result.Index(i).Set(model)
Here's the updated version of the second decode function in the question:
func decode(dest interface{}, src string) {
    var modelType reflect.Type = getStructType(dest)
    rows, fields := getRows(src)
    result := reflect.MakeSlice(reflect.SliceOf(modelType), len(rows), len(rows))
    for i, row := range rows {
        model := reflect.New(modelType).Elem()
        for _, field := range fields {
            fieldValue := getRowValue(row, field)
            model.FieldByName(field).Set(fieldValue)
        }
        result.Index(i).Set(model)
    }
    reflect.ValueOf(dest).Elem().Set(result)
}
Run it on the Playground.
You were very close with your original solution. You had to de-reference the pointer before calling the append operation. This solution is helpful if your dest already has some existing elements and you don't want to lose them by creating a new slice.
tempDest := reflect.ValueOf(dest).Elem()
tempDest = reflect.Append(tempDest, reflect.ValueOf(model.Interface()))
As I Love Reflection pointed out, you finally need to set the new slice back to the pointer.
reflect.ValueOf(dest).Elem().Set(tempDest)
Overall Decode:
var modelType reflect.Type = getStructType(dest)
rows, fields := getRows(src)
tempDest := reflect.ValueOf(dest).Elem()
for _, row := range rows {
    model := reflect.New(modelType).Elem()
    for _, field := range fields {
        fieldValue := getRowValue(row, field)
        model.FieldByName(field).Set(fieldValue)
    }
    tempDest = reflect.Append(tempDest, reflect.ValueOf(model.Interface()))
}
reflect.ValueOf(dest).Elem().Set(tempDest)
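To see the pattern in isolation, here is a small self-contained sketch (a hypothetical helper, not the asker's decode) showing that appending through an interface{} pointer only sticks once you Set the result back through Elem():

package main

import (
    "fmt"
    "reflect"
)

// appendValue appends v to the slice that dest points to (dest must be a *[]T).
func appendValue(dest interface{}, v interface{}) {
    sl := reflect.ValueOf(dest).Elem()             // de-reference the *[]T
    sl.Set(reflect.Append(sl, reflect.ValueOf(v))) // write the new slice back
}

func main() {
    var xs []int
    appendValue(&xs, 42)
    fmt.Println(xs, len(xs)) // [42] 1
}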

Optimal way to add or remove a slice element in Go without breaking the element order

Assume I have a []struct and I need to know whether an element with id = A exists in the slice. If it exists, the element will be removed or moved to index 0, depending on the user's request. So, how do I find an element in a Go slice in an optimal way without checking each element? Or is something like slice.contains(obj) enough? Then, if the element exists, I will act according to the request in the user input: if the request is remove, I will remove it without breaking the element order, but if the request is add, I will move the element to index 0.
Note: the function will be called often.
Thank you.
It is not difficult to write a function that finds an element by iterating over the slice:
func contains(s []your_struct, e int) (bool, int) {
    for idx, a := range s {
        if a.id == e {
            return true, idx
        }
    }
    return false, -1
}
If you are going to call the function often, it may be useful to sort the slice by the id field and implement binary search over the slice of your_struct.
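A minimal sketch of that approach using the standard sort package, assuming the slice is kept sorted by id and, as in the contains example above, id is an int:

// Keep the slice sorted by id, then binary search with sort.Search.
sort.Slice(s, func(i, j int) bool { return s[i].id < s[j].id })

idx := sort.Search(len(s), func(i int) bool { return s[i].id >= e })
if idx < len(s) && s[idx].id == e {
    // found: s[idx] is the element with id == e
}

Note that sorting rearranges the slice, so this only fits if you can keep it ordered by id rather than by insertion order.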
If the slice is not very big, you can create an additional data structure, a map[int]int, and keep the indexes of the slice elements in this map. But in this case you need to keep the contents of your slice and the map in sync when you modify either of them:
your_map := make(map[int]int)

if idx, ok := your_map[id]; ok {
    // ...
}
If you need to check many times, it's better to build a map[string]int from the id field once, and each time just check whether the map contains that id. Here, id is the key and the slice index is the value:
mp := make(map[string]int)
for idx, a := range yourStructSlice {
    mp[a.id] = idx
}

if idx, ok := mp[id]; ok {
    // remove the element using idx
}
If a new element is appended to the slice, update the map as well (the value is the new element's index):
mp[newElement.id] = len(yourStructSlice) - 1
If you want to remove the searched element, you can remove it by slice index:
func RemoveIndex(s []yourStruct, index int) []yourStruct {
    return append(s[:index], s[index+1:]...)
}

if idx, ok := mp[id]; ok {
    yourStructSlice = RemoveIndex(yourStructSlice, idx)
    delete(mp, id) // Remove from the map also, for the next search
}

How to loop through UUID items

How do I loop through a slice composed of UUIDs? My values come from the database via rows.Next().
Here's how I'm appending my UUID values to my slice (I really don't know if it's proper):
type Images struct {
    image_id uuid.UUID `gorm:"type:uuid;primary_key;"`
}

var new_images []Images
for olds.Next() {
    olds.Scan(&oldimages.image_id)
    new_images = append(new_images, Images{image_id: oldimages.image_id})
}
olds here is the rows I'm getting from gorm's Rows:
olds, err := db.Raw("SELECT images_received.image_id FROM old_pics").Rows()
defer olds.Close()
Here's the looping function I was given, but it's for int and I don't know how to use it for UUIDs:
func islice(s []int, n int, f func([]int)) {
    for i := 0; i < len(s); i += n {
        var section []int
        if i > len(s)-n {
            section = s[i:]
        } else {
            section = s[i : i+n]
        }
        f(section)
    }
}
Any idea how I do this? Currently for UUIDs I'm using the "github.com/satori/go.uuid" lib.
I got the function from another SO question. My goal is to iterate over the rows, but rows.Next() doesn't allow that directly, so I thought I needed to append them into a slice so I can take them four at a time.
Hence this question.
All you need to do is replace []int with []uuid.UUID everywhere in your islice function, including the parameter types. The functionality of islice() is not bound to []int, if that's what your problem is.
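A sketch of that adaptation, reusing new_images and image_id from the question (uuid.UUID comes from the satori package already imported):

func islice(s []uuid.UUID, n int, f func([]uuid.UUID)) {
    for i := 0; i < len(s); i += n {
        var section []uuid.UUID
        if i > len(s)-n {
            section = s[i:]
        } else {
            section = s[i : i+n]
        }
        f(section)
    }
}

// Collect the IDs from the scanned rows, then process them four at a time.
ids := make([]uuid.UUID, 0, len(new_images))
for _, img := range new_images {
    ids = append(ids, img.image_id)
}
islice(ids, 4, func(chunk []uuid.UUID) {
    // do something with each group of (up to) four IDs
})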
