I still struggle to understand the benefits of prepared statements in Go / PostgreSQL.
Let's assume I have a struct
type Brand struct {
    Id        int        `json:"id,omitempty"`
    Name      string     `json:"name,omitempty"`
    Issued_at *time.Time `json:"issued_at,omitempty"`
}
And a table brands, where id is a unique column. Now I want to retrieve an element from that table using an id.
I can write the following function using QueryRow.
func GetBrand1(id int) (Brand, error) {
    brand := Brand{}
    if err := Db.QueryRow("SELECT name, issued_at FROM brands WHERE id = $1", id).Scan(&brand.Name, &brand.Issued_at); err != nil {
        if err == sql.ErrNoRows {
            return brand, nil
        }
        return brand, err
    }
    brand.Id = id
    return brand, nil
}
and I can do the same (I hope it is the same) using a prepared statement:
func GetBrand2(id int) (Brand, error) {
    brand := Brand{}
    stmt, err := Db.Prepare("SELECT name, issued_at FROM brands WHERE id = $1")
    if err != nil {
        return brand, err
    }
    defer stmt.Close()
    rows, err := stmt.Query(id)
    if err != nil {
        return brand, err
    }
    defer rows.Close()
    for rows.Next() {
        if err := rows.Scan(&brand.Name, &brand.Issued_at); err != nil {
            return brand, err
        }
        brand.Id = id
        return brand, nil
    }
    if err = rows.Err(); err != nil {
        return brand, err
    }
    return brand, nil
}
Now in my application I plan to execute the GetBrand* function many times (with different parameters). Is one of these implementations preferable to the other (in terms of SQL requests/memory/anything)? Or maybe they both suck and I'd be better off doing something else.
I have read this and a follow-up link, and I saw that:
db.Query() actually prepares, executes, and closes a prepared
statement. That’s three round-trips to the database. If you’re not
careful, you can triple the number of database interactions your
application makes
but I think that the prepared statement in the second case will be closed at the end of the function anyway.
In both of those examples, there's roughly the same database overhead. If you're going to use a statement a lot, prepare it once in a wider scope so it's reusable.
You would only be making one round trip to the database with that pattern.
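For illustration, here is a minimal sketch of that prepare-once pattern, reusing the Db variable and query from the question (GetBrand3 and InitStatements are made-up names, not part of the original code):

var getBrandStmt *sql.Stmt

// InitStatements prepares the reusable statements once, e.g. at startup.
func InitStatements() error {
    var err error
    getBrandStmt, err = Db.Prepare("SELECT name, issued_at FROM brands WHERE id = $1")
    return err
}

func GetBrand3(id int) (Brand, error) {
    brand := Brand{}
    // Reuses the already-prepared statement: one round trip per call.
    err := getBrandStmt.QueryRow(id).Scan(&brand.Name, &brand.Issued_at)
    if err != nil {
        if err == sql.ErrNoRows {
            return brand, nil
        }
        return brand, err
    }
    brand.Id = id
    return brand, nil
}

sql.Stmt is safe for concurrent use, and database/sql transparently re-prepares the statement on other pool connections as needed.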
If you're ever using databases in conjunction with user input, you should always prepare the statement (or at least use placeholders) beforehand. If not, you run the risk of SQL injection.
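Strictly speaking, it is the placeholder (parameterized query) that protects you, whether or not you call Prepare explicitly. A small sketch with a hypothetical findBrand helper, using the question's brands table:

// findBrand contrasts the two styles (illustrative only).
func findBrand(userInput string) (*sql.Rows, error) {
    // UNSAFE: user input is spliced into the SQL text and can change the query.
    // return Db.Query("SELECT name FROM brands WHERE name = '" + userInput + "'")

    // SAFE: the driver sends userInput as a bound parameter, never as SQL.
    return Db.Query("SELECT name FROM brands WHERE name = $1", userInput)
}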
I think if I keep using the method below, I'll have to write too much code.
I declared structs for all the tables, and I used the Go validator package for validation.
[types.go]
type TableA struct {
    Field1 string `json:"field1" validate:"required,max=10"`
    Field2 int    `json:"field2" validate:"number"`
}
type TableB struct {
    ...
}
And I initialized the router for each method and connected the handlers.
[tableA.go]
router.Get("/table-a", r.Get_tableA_Handler),
router.Post("/table-a", r.Post_tableA_Handler),
router.Patch("/table-a", r.Patch_tableA_Handler),
router.Delete("/table-a", r.Delete_tableA_Handler)
...
Each handler parses the JSON in the request body, validates the data and calls the DB function.
[tableA_router.go]
func (rt *tableARouter) Post_tableA_Handler(w http.ResponseWriter, r *http.Request) error {
    // JSON to struct
    req := new(types.TableA)
    if err := httputils.DecodeJsonBody(r, req); err != nil {
        return err
    }
    // Validation
    if err := validCheck(req); err != nil {
        return err
    }
    // DB function
    err := rt.insert_tableA_DB(r.Context(), req)
    if err != nil {
        return err
    }
    return rt.rd.JSON(w, http.StatusCreated, "Create Success")
}
...
func validCheck(data interface{}) error {
    validate := validator.New()
    err := validate.Struct(data)
    return err
}
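A side note on this helper: the go-playground validator docs intend a single Validate instance to be created once and cached, since it caches struct metadata. A minimal variant under that assumption:

// A single shared instance; Validate is safe for concurrent use.
var validate = validator.New()

func validCheck(data interface{}) error {
    return validate.Struct(data)
}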
This is a DB function called from the handler function above (using Gorm)
[tableA_db.go]
func (rt *tableARouter) insert_tableA_DB(ctx context.Context, data *types.TableA) error {
    // DB connect
    db, err := db.Open(rt.dbcfg)
    if err != nil {
        return err
    }
    defer db.Close()
    tx := db.Begin()
    defer tx.Rollback()
    // == INSERT ==
    query := `INSERT INTO table_a
        (field1, field2, ...)
        VALUES (?, ?, ...)`
    result := tx.WithContext(ctx).Exec(query,
        data.Field1, data.Field2, ...)
    // Result
    if result.Error != nil {
        ...
    }
There are too many tables now... If there are 100 tables, I have to write 100 handlers and 100 DB functions.
Is there any way to use something like /tables/{tableName}?
Please give me any advice.... Thank you.
You can use an ORM package like GORM to make your work easier.
Or you can write a universal handler and, with the reflect package, analyze your defined structs and build every SQL query dynamically. But it's not the best solution if any of your structs has inner slices or embedded structs, and if you need joined tables you have to deal with those manually as well. I have servers with more than 200 endpoints and 300-400 methods over 200+ SQL tables, and the whole server was written by hand. But I can say it's very rare that a handler and the DB func can be reused without modification.
Maybe you can wrap the error handling, rollback/commit, JSON parsing and response parts in a func, then use it to call the DB methods, as in the sketch below.
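A rough sketch of that idea, assuming Go 1.18+ generics and reusing the question's hypothetical httputils.DecodeJsonBody and validCheck helpers (the question's handlers return error, so the signature would need adapting to that router; names here are illustrative, not a definitive implementation):

// crudPostHandler builds a POST handler for any table struct T.
// Only the table-specific insert function varies; decoding and
// validation are shared across all tables.
func crudPostHandler[T any](insert func(context.Context, *T) error) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        req := new(T)
        if err := httputils.DecodeJsonBody(r, req); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        if err := validCheck(req); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        if err := insert(r.Context(), req); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.WriteHeader(http.StatusCreated)
    }
}

Registration then shrinks to roughly one line per table, e.g. router.Post("/table-a", crudPostHandler(rt.insert_tableA_DB)).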
I am having trouble with scanning from a pgx query in Go: the id field is always that of the last record. There are 3 records in my db, with ids from 1 to 3. If I un-comment the var person Person declaration at the top of the function below, every id comes back as 3. When I comment that declaration out and instead declare the variable inside the rows.Next() loop, I get the correct ids. I can't figure out why PersonId isn't being correctly overwritten.
Output from the marshalled JSON with the var declared at the top of the function:
[{"person_id":3,"first_name":"Mark","last_name":"Brown"},{"person_id":3,"first_name":"Sam","last_name":"Smith"},{"person_id":3,"first_name":"Bob","last_name":"Jones"}]
Output after declaring person on every iteration of the scan loop:
[{"person_id":1,"first_name":"Mark","last_name":"Brown"},{"person_id":2,"first_name":"Sam","last_name":"Smith"},{"person_id":3,"first_name":"Bob","last_name":"Jones"}]
I have this struct
// Person model
type Person struct {
    PersonId  *int64  `json:"person_id"`
    FirstName *string `json:"first_name"`
    LastName  *string `json:"last_name"`
}
Here is my query function
func getPersons(rs *appResource, companyId int64) ([]Person, error) {
    // var person Person
    var persons []Person
    queryString := `SELECT
        user_id,
        first_name,
        last_name
    FROM users
    WHERE company_id = $1`
    rows, err := rs.db.Query(context.Background(), queryString, companyId)
    if err != nil {
        return persons, err
    }
    for rows.Next() {
        var person Person
        err = rows.Scan(
            &person.PersonId,
            &person.FirstName,
            &person.LastName)
        if err != nil {
            return persons, err
        }
        log.Println(*person.PersonId) // 1, 2, 3 for both var patterns
        persons = append(persons, person)
    }
    if rows.Err() != nil {
        return persons, rows.Err()
    }
    return persons, err
}
I believe that you have discovered a bug (or, at least, unexpected behaviour) in github.com/jackc/pgx/v4. When running Scan it appears that if the pointer (so person.PersonId) is not nil then whatever it is pointing to is being reused. To prove this I replicated the issue and confirmed that you can also fix it with:
persons = append(persons, person)
person.PersonId = nil
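In context, the workaround would look like this (keeping var person Person at the top of the function; otherwise this is the question's loop unchanged):

for rows.Next() {
    err = rows.Scan(
        &person.PersonId,
        &person.FirstName,
        &person.LastName)
    if err != nil {
        return persons, err
    }
    persons = append(persons, person)
    // Resetting the pointer forces pgx to allocate a fresh int64 on the
    // next Scan instead of writing through the pointer already stored
    // in persons.
    person.PersonId = nil
}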
I can duplicate the issue with this simplified code:
conn, err := pgx.Connect(context.Background(), "postgresql://user:password@127.0.0.1:5432/schema?sslmode=disable")
if err != nil {
    panic(err)
}
defer conn.Close(context.Background())
queryString := `SELECT num::int FROM generate_series(1, 3) num`
var scanDst *int64
var slc []*int64
rows, err := conn.Query(context.Background(), queryString)
if err != nil {
    panic(err)
}
for rows.Next() {
    err = rows.Scan(&scanDst)
    if err != nil {
        panic(err)
    }
    slc = append(slc, scanDst)
    // scanDst = nil
}
if rows.Err() != nil {
    panic(rows.Err())
}
for _, i := range slc {
    fmt.Printf("%v %d\n", i, *i)
}
The output from this is:
0xc00009f168 3
0xc00009f168 3
0xc00009f168 3
You will note that the pointer is the same in each case. I have done some further testing:
Uncommenting scanDst = nil in the above fixes the issue.
When using database/sql (with the "github.com/jackc/pgx/stdlib" driver) the code works as expected.
If PersonId is *string (and query uses num::text) it works as expected.
The issue appears to boil down to the following in convert.go:
if v := reflect.ValueOf(dst); v.Kind() == reflect.Ptr {
    el := v.Elem()
    switch el.Kind() {
    // if dst is a pointer to pointer, strip the pointer and try again
    case reflect.Ptr:
        if el.IsNil() {
            // allocate destination
            el.Set(reflect.New(el.Type().Elem()))
        }
        return int64AssignTo(srcVal, srcStatus, el.Interface())
So this handles the case where the destination is a pointer (for some data types). The code checks whether it is nil and, if so, creates a new instance of the relevant type as the destination. If it's not nil, it just reuses the existing pointer. Note: I've not used reflect for a while, so there may be issues with my interpretation.
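Here is a standalone sketch (my own illustration, not pgx code) of why that reuse produces the observed aliasing:

package main

import (
    "fmt"
    "reflect"
)

// assign mimics the convert.go behaviour for a **int64 destination.
func assign(dst interface{}, src int64) {
    el := reflect.ValueOf(dst).Elem() // el is the *int64
    if el.IsNil() {
        el.Set(reflect.New(el.Type().Elem())) // allocate a fresh int64
    }
    el.Elem().SetInt(src) // write through the (possibly reused) pointer
}

func main() {
    var p *int64
    assign(&p, 1)
    q := p // q now aliases the same int64
    assign(&p, 2)
    fmt.Println(*q, *p) // prints "2 2": the allocation was reused
}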
As the behaviour differs from database/sql and is likely to cause confusion, I believe it's probably a bug (though I guess it could be an attempt to reduce allocations). I have had a quick look at the issues and could not find anything reported (I will have a more detailed look later).
Summary
I am trying to write data from several postgres tables into a nested Go struct for the purpose of returning a single json response to a GET request within my web app.
Questions
Is the way I'm declaring a nested struct reasonable from a Go best practices perspective, or is there a reason I should avoid this method and do it another way?
What am I doing wrong in Step 3 to prevent my code from working? (I fear the answer is 'everything')
What I've got so far
I've declared my struct of structs
type MainObject struct {
    SubObjects []struct {
        SpecificDetail string `json:"specific-detail"`
    } `json:"sub-object"`
    ...(other []structs)...
}
I've retrieved rows from the tables
func getMainObjectHandler(w http.ResponseWriter, r *http.Request) {
    ...(database connection)...
    MainObjectID := r.URL.Query().Get("moid")
    if MainObjectID != "null" {
        NewMainObject := MainObject{}
        SubObjectDetail_rows, err := db.Query("SELECT specific_detail from the_table WHERE moid= '" + MainObjectID + "'")
        if err != nil {
            log.Fatalf("could not execute query: %v", err)
        }
        ...(other db.Query rows)...
I've tried (and failed) to build the row data into the struct.
for SubObjectDetail_rows.Next() {
    SpecificDetail := NewMainObject.SubObject.SpecificDetail{}
    SubObjectDetail_rows.Scan(&SpecificDetail)
    SubObject = append(SubObject, SpecificDetail)
}
NewMainObject = append(MainObject, SubObject)
defer persona_rows.Close()
Finally, I've set up Marshal and write.
NMOListBytes, err := json.Marshal(NewMainObject)
if err != nil {
    fmt.Println(fmt.Errorf("Error: %v", err))
    w.WriteHeader(http.StatusInternalServerError)
    return
}
w.Write(NMOListBytes)
Firstly, please use placeholders when creating your SQL query, to avoid injections (note that PostgreSQL drivers use $1, $2, ... where MySQL uses ?):
// db.Query("SELECT specific_detail from the_table WHERE moid= '" + MainObjectID + "'") // not this
db.Query("SELECT specific_detail from the_table WHERE moid = $1", MainObjectID)
Unless you're using a framework like GORM, you can't Scan() into a single struct value.
From the docs:
Scan copies the columns in the current row into the values pointed at
by dest. The number of values in dest must be the same as the number
of columns in Rows.
It looks like you're pulling JSON from a DB query, as you're only querying one column, so you probably want:
var bs []byte // get raw JSON bytes
err = SubObjectDetail_rows.Scan(&bs)
if err != nil { /* always check errors */ }
and then unmarshal them to your struct:
err = json.Unmarshal(bs, &SpecificDetail)
if err != nil { /* ... */ }
Go newbie here, using GORM to help me with database stuff.
I have the following function which is working for me:
func (d DbPersister) FetchOrderById(orderId string) (Order, error) {
    order := &Order{}
    if err := d.GormDB.Table("orders").
        Select("orders.order_id, orders.quantity, addresses.line_1, addresses.state, addresses.zip").
        Joins("join addresses on addresses.address_id = orders.shipping_address_id").
        Where("orders.order_id = ?", orderId).
        First(order).Error; err != nil {
        return Order{}, err
    }
    return *order, nil
}
So, you give it an orderId, and it fetches that from the DB and maps it to an Order instance.
I now want to look up all a particular user's orders:
func (d DbPersister) FetchAllOrdersByCustomerId(userId string) ([]Order, error) {
    orders := []&Order{}
    if err := d.GormDB.Table("orders").
        Select("orders.order_id, orders.quantity, orders.status, addresses.line_1, addresses.state, addresses.zip").
        Joins("join addresses on addresses.address_id = orders.shipping_address_id").
        Joins("join users on users.user_id = orders.user_id").
        Where("users.user_id = ?", userId).
        First(orders).Error; err != nil {
        return []Order{}, err
    }
    return orders, nil
}
However, I'm getting compiler errors. I don't believe First(orders) is the correct function to be calling here. Basically, I want to join orders and users and get all of a particular user's orders, which should be a list of Order instances. Can anyone spot where I'm going awry?
Firstly, orders := []&Order{} is not valid Go; it should be orders := make([]Order, 0).
Then use Find() instead of First() to get multiple values.
For slices, maps, and other composite types, you can return nil when there is no data to return.
So your code should be
func (d DbPersister) FetchAllOrdersByCustomerId(userId string) ([]Order, error) {
    orders := make([]Order, 0)
    if err := d.GormDB.Table("orders").
        Select("orders.order_id, orders.quantity, orders.status, addresses.line_1, addresses.state, addresses.zip").
        Joins("join addresses on addresses.address_id = orders.shipping_address_id").
        Joins("join users on users.user_id = orders.user_id").
        Where("users.user_id = ?", userId).
        Find(&orders).Error; err != nil {
        return nil, err
    }
    return orders, nil
}
We're trying to use GORM with MySQL 8, to much frustration.
I have the following tables (simplified for brevity here)
type StoragePool struct {
    gorm.Model
    PoolId  string   `json:"id" gorm:"column:poolid;size:40;unique;not null"`
    Volumes []Volume `json:"volumes" gorm:"foreignkey:StorageId;association_foreignkey:PoolId"`
}
type Volume struct {
    gorm.Model
    StorageId string `json:"storageid" gorm:"column:storageid;size:40"`
}
Data insertions seem to work fine. Both tables get populated and no constraints are violated.
A query that expects a single record seems to work fine:
poolRecord := &StoragePool{}
if err := tx.Where("poolid = ?", pool.PoolId).First(&StoragePool{}).Scan(poolRecord).Error; err != nil {
    return err
}
The following query, however, only returns a single row; when I perform the equivalent query as raw SQL outside of Go, it returns all 31 records I expect:
var poolVolumes []Volume
if err := tx.Where("storageid = ?", pool.PoolId).Find(&Volume{}).Scan(&poolVolumes).Error; err != nil {
    return err
}
log.Debugf("found %d volumes belonging to %q [%s]", len(poolVolumes), pool.Name, pool.PoolId)
According to the docs, that second sql statement is the equivalent of "SELECT * FROM VOLUMES WHERE STORAGEID = 'poolid'". That is not the behavior I am getting.
Anyone have any ideas what I'm doing wrong here?
I rarely use an ORM when coding in Go, but following the docs from GORM, it seems like you are doing it the wrong way.
Scan is used for scanning a result into another struct, like this:
type Result struct {
    Name string
    Age  int
}
var result Result
db.Table("users").Select("name, age").Where("name = ?", 3).Scan(&result)
The correct way to get query results into a slice of structs should be:
var poolVolumes []Volume
if err := tx.Where("storageid = ?", pool.PoolId).Find(&poolVolumes).Error; err != nil {
    return err
}