High CPU usage on GORM - go

We are seeing a suspicious issue in the following function, which takes a GORM transaction:
func (*OrderRepositoryImpl) FindByIdAndUserWithTransaction(tx *gorm.DB, id int, userID string) (models.Order, error) {
	order := models.Order{}
	if tx == nil {
		tx = getDatabase()
	}
	err := tx.Where("orders.id = ? AND external_user_id = ?", id, userID).
		Joins("LEFT JOIN delivery_informations ON delivery_informations.order_id = orders.id").
		Preload("DeliveryInformation").
		First(&order).Error
	if err != gorm.ErrRecordNotFound {
		return order, err
	}
	return order, nil
}
When profiling with pprof, CPU usage is high in this function.
Any suggestions for how we can improve this query?
Thanks

Perhaps you should restrict the Preload
http://gorm.io/docs/preload.html
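To make that concrete: GORM issues a separate query for the Preload, selecting every column of delivery_informations. If the association is wide, restricting the preloaded columns can reduce the work. A rough sketch only; the column names below are illustrative, not from the original schema:

```go
// Sketch: limit the columns the Preload query selects.
// "id", "order_id", "address" are assumed column names.
err := tx.
	Where("orders.id = ? AND external_user_id = ?", id, userID).
	Preload("DeliveryInformation", func(db *gorm.DB) *gorm.DB {
		return db.Select("id, order_id, address")
	}).
	First(&order).Error
```

Note also that the LEFT JOIN in the original query is not referenced by the WHERE clause or the select list, so it may be doing nothing but extra work alongside the Preload.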

Related

Golang query scan not scanning query correctly into struct

I am having trouble scanning from a pgx query in Go. The id field is always that of the last record. If I un-comment the var person Person declaration at the top of the function, every id is 3 (there are 3 records with ids from 1 to 3 in my db). When I comment out that declaration and declare the variable inside the rows.Next() loop, I get the correct ids. I can't figure out why PersonId isn't being correctly overwritten.
output from marshalled JSON with the var declared at the top of the function.
[{"person_id":3,"first_name":"Mark","last_name":"Brown"},{"person_id":3,"first_name":"Sam","last_name":"Smith"},{"person_id":3,"first_name":"Bob","last_name":"Jones"}]
output after declaring person every iteration of the scan loop
[{"person_id":1,"first_name":"Mark","last_name":"Brown"},{"person_id":2,"first_name":"Sam","last_name":"Smith"},{"person_id":3,"first_name":"Bob","last_name":"Jones"}]
I have this struct
// Person model
type Person struct {
	PersonId  *int64  `json:"person_id"`
	FirstName *string `json:"first_name"`
	LastName  *string `json:"last_name"`
}
Here is my query function
func getPersons(rs *appResource, companyId int64) ([]Person, error) {
	// var person Person
	var persons []Person
	queryString := `SELECT
		user_id,
		first_name,
		last_name
	FROM users
	WHERE company_id = $1`
	rows, err := rs.db.Query(context.Background(), queryString, companyId)
	if err != nil {
		return persons, err
	}
	defer rows.Close()
	for rows.Next() {
		var person Person
		err = rows.Scan(
			&person.PersonId,
			&person.FirstName,
			&person.LastName)
		if err != nil {
			return persons, err
		}
		log.Println(*person.PersonId) // 1, 2, 3 for both var patterns
		persons = append(persons, person)
	}
	if rows.Err() != nil {
		return persons, rows.Err()
	}
	return persons, err
}
I believe that you have discovered a bug (or, at least, unexpected behaviour) in github.com/jackc/pgx/v4. When running Scan it appears that if the pointer (so person.PersonId) is not nil then whatever it is pointing to is being reused. To prove this I replicated the issue and confirmed that you can also fix it with:
persons = append(persons, person)
person.PersonId = nil
I can duplicate the issue with this simplified code:
conn, err := pgx.Connect(context.Background(), "postgresql://user:password@127.0.0.1:5432/schema?sslmode=disable")
if err != nil {
	panic(err)
}
defer conn.Close(context.Background())
queryString := `SELECT num::int FROM generate_series(1, 3) num`
var scanDst *int64
var slc []*int64
rows, err := conn.Query(context.Background(), queryString)
if err != nil {
	panic(err)
}
for rows.Next() {
	err = rows.Scan(&scanDst)
	if err != nil {
		panic(err)
	}
	slc = append(slc, scanDst)
	// scanDst = nil
}
if rows.Err() != nil {
	panic(rows.Err())
}
for _, i := range slc {
	fmt.Printf("%v %d\n", i, *i)
}
The output from this is:
0xc00009f168 3
0xc00009f168 3
0xc00009f168 3
You will note that the pointer is the same in each case. I have done some further testing:
Uncommenting scanDst = nil in the above fixes the issue.
When using database/sql (with the "github.com/jackc/pgx/stdlib" driver) the code works as expected.
If PersonId is *string (and query uses num::text) it works as expected.
The issue appears to boil down to the following in convert.go:
if v := reflect.ValueOf(dst); v.Kind() == reflect.Ptr {
	el := v.Elem()
	switch el.Kind() {
	// if dst is a pointer to pointer, strip the pointer and try again
	case reflect.Ptr:
		if el.IsNil() {
			// allocate destination
			el.Set(reflect.New(el.Type().Elem()))
		}
		return int64AssignTo(srcVal, srcStatus, el.Interface())
So this handles the case where the destination is a pointer (for some datatypes). The code checks if it is nil and, if so, creates a new instance of the relevant type as a destination. If it's not nil it just reuses the pointer. Note: I've not used reflect for a while so there may be issues with my interpretation.
As the behaviour differs from database/sql and is likely to cause confusion I believe it's probably a bug (I guess it could be an attempt to reduce allocations). I have had a quick look at the issues and could not find anything reported (will have a more detailed look later).
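The reuse is easy to reproduce without a database at all. This stdlib-only sketch imitates the convert.go logic above (allocate only when the inner pointer is nil, otherwise write through it) and shows why every slice element ends up sharing one pointer:

```go
package main

import (
	"fmt"
	"reflect"
)

// scanInt64 mimics the convert.go behaviour described above: when the
// destination is a pointer-to-pointer, it allocates a fresh int64 only
// if the inner pointer is nil; otherwise it writes through the existing
// pointer, reusing the same allocation for every row.
func scanInt64(src int64, dst interface{}) {
	el := reflect.ValueOf(dst).Elem() // el is the caller's *int64
	if el.IsNil() {
		el.Set(reflect.New(el.Type().Elem())) // allocate destination
	}
	el.Elem().SetInt(src) // write through the (possibly reused) pointer
}

// collect simulates scanning three rows into the same *int64 variable,
// appending the pointer to a slice after each "scan".
func collect(resetBetweenRows bool) []*int64 {
	var scanDst *int64
	var slc []*int64
	for _, n := range []int64{1, 2, 3} {
		scanInt64(n, &scanDst)
		slc = append(slc, scanDst)
		if resetBetweenRows {
			scanDst = nil // forces a fresh allocation on the next row
		}
	}
	return slc
}

func main() {
	for _, p := range collect(false) {
		fmt.Printf("%p %d\n", p, *p) // same address each time, all 3
	}
}
```

Setting `resetBetweenRows` replicates the `scanDst = nil` workaround: each row then gets its own allocation and the values 1, 2, 3 survive.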

Scylla gocqlx how to implement pagination similar to a cursor

I'm using Scylla to save parties created by users. The method below returns a list of parties created by a user. I currently return all parties without pagination, but I'm trying to implement pagination for this method, and I still don't quite understand how pagination is handled with Scylla.
My guess would be that a cursor can be passed to a query. Based on this example it looks like the PageState can be used to pass something similar to a cursor.
I would appreciate a short explanation what PageState is and if I should use it to accomplish token based pagination. It would also be great if an example could be provided that shows how a new PageState can be returned to the client and used to fetch a new page on a second request.
func (pq *partyQuery) GetByUser(ctx context.Context, uId string) ([]datastruct.Party, error) {
	var result []datastruct.Party
	stmt, names := qb.
		Select(TABLE_NAME).
		Where(qb.Eq("user_id")).
		ToCql()
	err := pq.sess.
		Query(stmt, names).
		BindMap(qb.M{"user_id": uId}).
		PageSize(10).
		Iter().
		Select(&result)
	if err != nil {
		log.Println(err)
		return []datastruct.Party{}, errors.New("no parties found")
	}
	return result, nil
}
Thanks in advance and I appreciate your time.
Edit
For anybody interested, this is how I transformed my function to allow paging:
func (pq *partyQuery) GetByUser(ctx context.Context, uId string, page []byte) (result []datastruct.Party, nextPage []byte, err error) {
	stmt, names := qb.
		Select(TABLE_NAME).
		Where(qb.Eq("user_id")).
		ToCql()
	q := pq.sess.
		Query(stmt, names).
		BindMap(qb.M{"user_id": uId})
	defer q.Release()
	q.PageState(page)
	q.PageSize(10)
	iter := q.Iter()
	err = iter.Select(&result)
	if err != nil {
		log.Println(err)
		return []datastruct.Party{}, nil, errors.New("no parties found")
	}
	return result, iter.PageState(), nil
}
Hi, gocqlx author here.
Please take a look at this example https://github.com/scylladb/gocqlx/blob/25d81de30ebcdfa02d3d849b518fc57b839e4399/example_test.go#L482
getUserVideos := func(userID int, page []byte) (userVideos []Video, nextPage []byte, err error) {
	q := videoTable.SelectQuery(session).Bind(userID)
	defer q.Release()
	q.PageState(page)
	q.PageSize(itemsPerPage)
	iter := q.Iter()
	return userVideos, iter.PageState(), iter.Select(&userVideos)
}
You need to send the page state back to the caller.
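The caller-side loop is the same for any token-style pagination: pass the previous page state back in, and stop when the returned state is empty. A stdlib-only sketch of that contract — fetchPage and its one-byte token are stand-ins for the gocqlx query, not real API:

```go
package main

import "fmt"

// fetchPage is a stand-in for the paged query: it returns one page of
// results plus an opaque token for the next page. Here the token is just
// the next offset encoded in a single byte (a toy encoding); an empty
// token means there are no more pages.
func fetchPage(data []string, page []byte, size int) (items []string, nextPage []byte) {
	start := 0
	if len(page) > 0 {
		start = int(page[0])
	}
	end := start + size
	if end >= len(data) {
		return data[start:], nil // last page: no further token
	}
	return data[start:end], []byte{byte(end)}
}

func main() {
	parties := []string{"a", "b", "c", "d", "e"}
	var page []byte
	for {
		items, next := fetchPage(parties, page, 2)
		fmt.Println(items) // [a b], then [c d], then [e]
		if len(next) == 0 {
			break
		}
		page = next
	}
}
```

On the HTTP side, the []byte from iter.PageState() would typically be base64-encoded into the response so the client can echo it back on the next request.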

How to write an effective save and update function in beego?

I have the following function in the BaseModel that I can use anywhere.
func (d *Dummy) Save() (int64, error) {
	o := orm.NewOrm()
	var err error
	var count int64
	if d.Id == 0 {
		count, err = o.Insert(d)
	} else {
		count, err = o.Update(d)
	}
	return count, err
}
I am using like this
d := models.Dummy{Id: 10}
d.SomeValue = "x"
d.Save()
The problem is that d.OtherValue already has a value in the DB. After executing this function, it gets overwritten with the zero value.
As this is a common model function shared by all models, how can I solve this? Basically, I want to do this in a single query, just like update/save in the Django ORM.
You need to load the record first. You are missing the Read(&struct) ORM method:
o := orm.NewOrm()
d := models.Dummy{Id: 10}
readErr := o.Read(&d)
// check if the record with Id of 10 exists and update...
if readErr != orm.ErrNoRows {
	if rowsAffected, err := o.Update(&d); err == nil {
		// record updated (rowsAffected indicates the number of affected rows)
	}
} else {
	// record does not exist, create a new one
	id, insertErr := o.Insert(&d)
	if insertErr == nil {
		// success
	}
}
Then you should check if a record is found by the ORM
For more details you can refer to the Read and Update methods.
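An alternative to read-then-write is to tell the ORM which columns to touch: beego's Update accepts a list of column names (o.Update(&d, "SomeValue")), so fields you did not set are left alone. The zero-value overwrite itself is easy to see without a database; this stdlib sketch imitates a column-filtered update (updateCols is a stand-in for illustration, not beego API):

```go
package main

import (
	"fmt"
	"reflect"
)

type Dummy struct {
	Id         int64
	SomeValue  string
	OtherValue string
}

// updateCols copies only the named fields from src into dst, mimicking
// what a column-filtered ORM update does at the SQL level: the UPDATE
// statement touches only the listed columns, leaving the rest intact.
func updateCols(dst, src interface{}, cols ...string) {
	d := reflect.ValueOf(dst).Elem()
	s := reflect.ValueOf(src).Elem()
	for _, c := range cols {
		d.FieldByName(c).Set(s.FieldByName(c))
	}
}

func main() {
	stored := Dummy{Id: 10, SomeValue: "old", OtherValue: "keep me"}
	incoming := Dummy{Id: 10, SomeValue: "x"} // OtherValue is "" here
	updateCols(&stored, &incoming, "SomeValue")
	fmt.Printf("%+v\n", stored) // OtherValue survives
}
```

A full-struct Update, by contrast, behaves like copying every field of incoming into stored, which is exactly how OtherValue ends up zeroed.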

Mapping Gorm queries to lists of structs

New to Go here, using gorm to help me with database stuff.
I have the following function which is working for me:
func (d DbPersister) FetchOrderById(orderId string) (Order, error) {
	order := &Order{}
	if err := d.GormDB.Table("orders").
		Select("orders.order_id, orders.quantity, addresses.line_1, addresses.state, addresses.zip").
		Joins("join addresses on addresses.address_id = orders._address_id").
		Where("orders.order_id = ?", orderId).
		First(order).Error; err != nil {
		return Order{}, err
	}
	return *order, nil
}
So, you give it an orderId, and it fetches that from the DB and maps it to an Order instance.
I now want to look up all a particular user's orders:
func (d DbPersister) FetchAllOrdersByCustomerId(userId string) ([]Order, error) {
	orders := []&Order{}
	if err := d.GormDB.Table("orders").
		Select("orders.order_id, orders.quantity, orders.status, addresses.line_1, addresses.state, addresses.zip").
		Joins("join addresses on addresses.address_id = orders.shipping_address_id").
		Joins("join users on users.user_id = orders.user_id").
		Where("users.user_id = ?", userId).
		First(orders).Error; err != nil {
		return []Order{}, err
	}
	return orders, nil
}
However, I'm getting compiler errors. I don't believe First(orders) is the correct function to be calling here. Basically, I want to join orders and users and get all of a particular user's orders as a list of Order instances. Can anyone spot where I'm going awry?
Firstly, orders := []&Order{} should be orders := make([]Order, 0).
Then use Find() instead of First() to get multiple rows.
For slices, maps, and other composite types, you can return nil when there is no data.
So your code should be
func (d DbPersister) FetchAllOrdersByCustomerId(userId string) ([]Order, error) {
	orders := make([]Order, 0)
	if err := d.GormDB.Table("orders").
		Select("orders.order_id, orders.quantity, orders.status, addresses.line_1, addresses.state, addresses.zip").
		Joins("join addresses on addresses.address_id = orders.shipping_address_id").
		Joins("join users on users.user_id = orders.user_id").
		Where("users.user_id = ?", userId).
		Find(&orders).Error; err != nil {
		return nil, err
	}
	return orders, nil
}

Would I be better by using a prepared statement here?

I still struggle to understand the benefits of prepared statements in Go / Postgres.
Let's assume I have a struct
type Brand struct {
	Id        int        `json:"id,omitempty"`
	Name      string     `json:"name,omitempty"`
	Issued_at *time.Time `json:"issued_at,omitempty"`
}
And some table brands, where id is a unique field. Now I want to retrieve an element from that table using an id.
I can write the following function using QueryRow.
func GetBrand1(id int) (Brand, error) {
	brand := Brand{}
	if err := Db.QueryRow("SELECT name, issued_at FROM brands WHERE id = $1", id).Scan(&brand.Name, &brand.Issued_at); err != nil {
		if err == sql.ErrNoRows {
			return brand, nil
		}
		return brand, err
	}
	brand.Id = id
	return brand, nil
}
and I can do the same (I hope it is the same) using prepared statement:
func GetBrand2(id int) (Brand, error) {
	brand := Brand{}
	stmt, err := Db.Prepare("SELECT name, issued_at FROM brands WHERE id = $1")
	if err != nil {
		return brand, err
	}
	defer stmt.Close()
	rows, err := stmt.Query(id)
	if err != nil {
		return brand, err
	}
	defer rows.Close()
	for rows.Next() {
		if err := rows.Scan(&brand.Name, &brand.Issued_at); err != nil {
			return brand, err
		}
		brand.Id = id
		return brand, nil
	}
	if err = rows.Err(); err != nil {
		return brand, err
	}
	return brand, err
}
Now in my application I am planning to execute the GetBrand* function many times (with different parameters). Is one of these implementations preferable to the other (in terms of SQL requests/memory/anything)? Or maybe they both suck and I would be better off doing something else.
I have read this and a followed up link and I saw that:
db.Query() actually prepares, executes, and closes a prepared
statement. That’s three round-trips to the database. If you’re not
careful, you can triple the number of database interactions your
application makes
but I think the prepared statement in the second case will be closed at the end of the function anyway.
In both of those examples, there's roughly the same database overhead. If you're going to use a statement a lot, prepare it once in a wider scope so it's reusable.
You would only be making one round trip to the database with that pattern.
If you're ever using databases in conjunction with user input, you should always use prepared (parameterized) statements.
If not, you run the risk of SQL injection.
