GORM Recursive Preloading

Using Go 1.10, and CockroachDB via the Postgres pq driver.
I have a GORM model that looks like this:

type User struct {
	ID            string `gorm:"type:UUID;primary_key;NOT_NULL"`
	UserName      string
	... <other misc things here>
	EnclosedUsers []User `gorm:"many2many:enclosed_user;jointable_foreignkey:parent_id;association_jointable_foreignkey:child_id"`
}

where enclosed_user is (specifically defined, because reasons :) )

type EnclosedUsers struct {
	ParentID string `gorm:"type:UUID;default:'00000000-0000-0000-0000-000000000000'"`
	ChildID  string `gorm:"type:UUID;default:'00000000-0000-0000-0000-000000000000'"`
}
That is, each user can have zero or many enclosed users, and each user may have many parent users. I'm trying to preload all of the enclosed users for each user; however, GORM only preloads the first level. I.e.:

usr1
|-> usr2
|   |-> usr3
|
|-> usr4
    |-> usr6
usr5
|-> usr7

The only users that are loaded are usr1, usr2, usr4, usr5, and usr7; usr3 and usr6 aren't. I'm currently trying to recursively force the enclosed users to load with an AfterFind callback:
func (u *User) AfterFind(scope *gorm.Scope) error {
	var childUsers []User
	if err := scope.DB().Model(u).Related(&childUsers, "EnclosedUsers").Error; err != nil {
		return err
	}
	u.EnclosedUsers = childUsers
	return nil
}
However, this generates the following SQL:

SELECT "users".* FROM "users"
INNER JOIN "enclosed_users" ON "enclosed_users"."child_id" = "users"."id"
WHERE (user_name = 'test user 1') AND ("enclosed_users"."parent_id" IN ('<UUID HERE>'))

If the user_name = ... condition weren't there, the query would work perfectly.
Any ideas would be greatly appreciated.

I ended up forking the repository and removing line 331 of callback_query_preload.go. Works like a charm.
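If you'd rather not maintain a fork, one possible workaround (a sketch, assuming GORM v1's Scope API) is to run the related query from a clean session so the outer query's conditions aren't inherited; scope.NewDB() returns a *gorm.DB without the current search conditions:

func (u *User) AfterFind(scope *gorm.Scope) error {
	var children []User
	// scope.NewDB() starts from a clean session, so conditions like the
	// outer query's user_name filter are not carried into this query.
	if err := scope.NewDB().Model(u).Related(&children, "EnclosedUsers").Error; err != nil {
		return err
	}
	u.EnclosedUsers = children
	return nil
}

Because Related runs the query callbacks, AfterFind fires for each child as well, so the tree loads level by level, at the cost of one query per user.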

Related

How to use Go / GORM to print SELECT query output without pre-defined struct

I am developing an API using Go which connects to a MySQL database for query execution. I'm using GORM for database operations, but I'm stuck at printing the SELECT query output for tables whose column names I don't know.
My use case is that I need to run queries on multiple tables where I have no idea what the column names and types are, so I cannot pre-define a struct for all the current and future tables that might get added.
Is there a way to print/save the SELECT query output without a pre-defined struct?
I tried using an empty struct, but it didn't help me.
P.S.: I'm a beginner in Go.
type Testing struct{}

var test Testing
dsn := fmt.Sprintf("%v:%v@tcp(%v:%v)/%v", myds.DBuser, myds.DBpassword, myds.DBhost, myds.DBport, myds.DBname)
db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
if err != nil {
	fmt.Println(err)
}
tx := db.Raw(query).Scan(&test)
if tx.Error != nil {
	fmt.Println(tx.Error)
}
fmt.Println(test)
You can use an anonymous struct.
Let's say you have a struct:

type User struct {
	FirstName string
	LastName  string
}

Query:

SELECT CONCAT(first_name, last_name) AS full_name FROM users;

Notice the new column full_name. You can simply do:

var fullName = struct{ FullName string }{}

Notice the Pascal case: FullName has to be the field name. A capital letter in the middle represents a _ in the column name, and the field is exported (capitalized) so it can be accessed outside the package:

full_name (column) = FullName (field)

Pass this fullName object as a bucket to your Scan and it should work:

db.Raw(query).Scan(&fullName)
EDIT:
Your query will have some result, right? Let me assume that you have columns

column_one, column_two, ..., column_n

Now, to get the data from all the columns (or only selected ones, if you want), you simply have to define fields in the anonymous struct with the matching names. In our case:

struct{ ColumnOne, ColumnTwo, ..., ColumnN interface{} }{}

P.S. I have used interface{}; you can use concrete types depending on the data your columns return.
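Putting the pieces together, a minimal self-contained sketch of the anonymous-struct approach (the DSN, table, and column names here are hypothetical, and this assumes GORM v2):

package main

import (
	"fmt"

	"gorm.io/driver/mysql"
	"gorm.io/gorm"
)

func main() {
	dsn := "user:password@tcp(127.0.0.1:3306)/mydb" // hypothetical credentials
	db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
	if err != nil {
		panic(err)
	}

	// FullName maps to the full_name column alias in the query.
	var fullName struct{ FullName string }
	err = db.Raw("SELECT CONCAT(first_name, ' ', last_name) AS full_name FROM users LIMIT 1").
		Scan(&fullName).Error
	if err != nil {
		panic(err)
	}
	fmt.Println(fullName.FullName)
}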
It worked for me by using a map type with interface{}. This let me save the SELECT query results without a pre-defined struct or the column names.

dsn := fmt.Sprintf("%v:%v@tcp(%v:%v)/%v", myds.DBuser, myds.DBpassword, myds.DBhost, myds.DBport, myds.DBname)
db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
if err != nil {
	fmt.Println(err)
}
var result []map[string]interface{}
tx := db.Raw(query).Scan(&result)
if tx.Error != nil {
	fmt.Println(tx.Error)
	return
}
bytes, _ := json.Marshal(result)
fmt.Println(string(bytes))
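For reference, the same idea is available one level down, without GORM's map scanning: database/sql exposes the column names at runtime via rows.Columns(). A sketch (query is assumed to be the same SELECT as above):

rows, err := db.Raw(query).Rows() // *sql.Rows
if err != nil {
	fmt.Println(err)
	return
}
defer rows.Close()

cols, _ := rows.Columns()
for rows.Next() {
	// One slot per column, wrapped in pointers so Scan can fill them.
	values := make([]interface{}, len(cols))
	ptrs := make([]interface{}, len(cols))
	for i := range values {
		ptrs[i] = &values[i]
	}
	if err := rows.Scan(ptrs...); err != nil {
		fmt.Println(err)
		continue
	}
	row := map[string]interface{}{}
	for i, c := range cols {
		row[c] = values[i]
	}
	fmt.Println(row)
}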

Using sqlx to populate embedded structs from a table joined twice

My question in a nutshell: can I use sqlx's StructScan to populate two embedded structs with values sourced from the same SQL table joined twice?
The help files to the useful sqlx package state this:
A StructScan will set an id column result in Person.AutoIncr.ID, also accessible as Person.ID. To avoid confusion, it's suggested that you use AS to create column aliases in your SQL instead.
Suppose I have this SQL query (parent-child, people to phones):

func getSQL() string {
	return `SELECT *
		FROM person
		LEFT JOIN phones AS Phone1 ON Phone1.phone_id = person_phoneID1
		LEFT JOIN phones AS Phone2 ON Phone2.phone_id = person_phoneID2
		WHERE people_id = 1;`
}
Using sqlx and StructScan, I'd like to populate a struct full of embedded structs, something like this:

// Struct with embedded structs
type personHelper struct {
	Person
	Phone1 // Should I use the same name as the SQL table alias?
	Phone2
}

type Phone1 struct {
	Phone // Underlying struct
}

type Phone2 struct {
	Phone
}

// Base structs, with tags to match up fields
type Person struct {
	ID     int64  `db:"person_id"`
	Name   string `db:"person_name"`
	Phone1 int64  `db:"person_phoneID1"`
	Phone2 int64  `db:"person_phoneID2"`
}

type Phone struct {
	ID     int64  `db:"phone_id"`
	Number string `db:"phone_no"`
	// etc.
}
I might have a function something like this:

func getPeople() {
	parseRows := func(rows *sqlx.Rows) {
		for rows.Next() {
			var ph personHelper
			err := rows.StructScan(&ph)
			if err != nil {
				// etc.
			}
		}
	}
	sql := getSQL()
	sqlutils.GetRows(parseRows, sql) // GetRows executes the SQL query and returns rows for processing
}
I can populate one phone number, but not both. I'm not sure whether I'm understanding the aliasing instructions correctly.
I'd appreciate any insights.
Thanks.
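For what it's worth, here is a sketch of what the docs' aliasing advice appears to call for (the alias and tag names below are made up). With SELECT *, the two joined copies of phones come back with identical column names (phone_id, phone_no), so StructScan has no way to route them to different embedded structs; aliasing every column to a unique name and tagging matching fields disambiguates them:

SELECT person.person_id, person.person_name,
       Phone1.phone_id AS phone1_id, Phone1.phone_no AS phone1_no,
       Phone2.phone_id AS phone2_id, Phone2.phone_no AS phone2_no
FROM person
LEFT JOIN phones AS Phone1 ON Phone1.phone_id = person_phoneID1
LEFT JOIN phones AS Phone2 ON Phone2.phone_id = person_phoneID2
WHERE people_id = 1;

type personHelper struct {
	Person
	Phone1ID int64  `db:"phone1_id"`
	Phone1No string `db:"phone1_no"`
	Phone2ID int64  `db:"phone2_id"`
	Phone2No string `db:"phone2_no"`
}

sqlx flattens embedded structs into a single namespace of db tags, so two embedded copies of the same Phone struct will always collide; distinct tags (and therefore distinct fields, or wrapper types that re-declare the tags) are needed.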

Iterate over struct and perform database queries

So I'm new to Go, and I come from a JavaScript/Node background; for practice, I've been rewriting some of my JavaScript code in Go.
I have a situation where I have a struct (in Node it was an object) and I need to iterate over it and perform two database queries. I have something that works, but it seems costly and repetitive.
Struct:

type SiteUsers struct {
	Active struct {
		Moderators []string `json:"moderators"`
		Admins     []string `json:"admins"`
		Regulars   []string `json:"regulars"`
	} `json:"active"`
}
Then, in the function where I handle an API request that returns JSON bound to this struct, I use a for range loop for each role under Active. For each one I perform the same first query and then a second one that is specific to the role.
v := getSiteUsers(&usrs, website)
for _, moderator := range v.Active.Moderators {
	// Insert into user table
	// Insert into user table with role of moderator
}
for _, admin := range v.Active.Admins {
	// Insert into user table
	// Insert into user table with role of admin
}
for _, regular := range v.Active.Regulars {
	// Insert into user table
	// Insert into user table with role of regular
}
This method will work but it doesn't feel completely right and I would love to get some input from people experienced with go.
Would something like this be better?
v := getSiteUsers(&usrs, website)
insertUsers := func(users []string, role roleType) {
	for _, user := range users {
		// Insert into user table
		// Insert into user table with given role
	}
}
insertUsers(v.Active.Moderators, moderatorRole)
insertUsers(v.Active.Admins, adminRole)
insertUsers(v.Active.Regulars, regularRole)
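The closure version is the more idiomatic shape. For illustration, here is a sketch of what its body might look like with database/sql, wrapped in a transaction so a user row and its role row are inserted atomically (the schema, table, and column names here are hypothetical):

insertUsers := func(users []string, role string) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	for _, name := range users {
		// Hypothetical schema: users(name) and user_roles(user_name, role).
		if _, err := tx.Exec(`INSERT INTO users (name) VALUES ($1)`, name); err != nil {
			tx.Rollback()
			return err
		}
		if _, err := tx.Exec(`INSERT INTO user_roles (user_name, role) VALUES ($1, $2)`, name, role); err != nil {
			tx.Rollback()
			return err
		}
	}
	return tx.Commit()
}

Returning an error also lets the caller stop at the first failure instead of silently continuing.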

LoadRelated of a list in beego

I am wondering what the correct approach is to load related fields in Beego.
The doc explains it like this:
type User struct {
	Id    int
	Name  string
	Posts []*Post `orm:"reverse(many)"`
}

user := User{Id: 1}
err := dORM.Read(&user)
num, err := dORM.LoadRelated(&user, "Posts")
This makes sense as long as I only query one record. What is the correct way to fetch related fields when I query all users?
A possible solution would look like this:

var users []*User
o.QueryTable(new(User)).All(&users)
for _, user := range users {
	o.LoadRelated(user, "Posts")
}
However, this means I have to loop over the complete list every time and issue a separate DB query for each record to load its posts.
Any suggestions? Thanks!
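One way to avoid the per-record query (a sketch, assuming Post has a forward relation field named User and using Beego's __in filter; the grouping step is plain Go):

var users []*User
o.QueryTable(new(User)).All(&users)

ids := make([]int, 0, len(users))
for _, u := range users {
	ids = append(ids, u.Id)
}

// One query for all posts belonging to the loaded users.
var posts []*Post
o.QueryTable(new(Post)).Filter("user__id__in", ids).All(&posts)

// Group the posts by user ID and attach them.
byUser := make(map[int][]*Post)
for _, p := range posts {
	byUser[p.User.Id] = append(byUser[p.User.Id], p)
}
for _, u := range users {
	u.Posts = byUser[u.Id]
}

This runs two queries in total instead of N+1.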

Golang query with float value is not working correctly

I am making a Go query against a Postgres database, and the weird thing is that the query only works if I hard-code the value in. For instance, this query works:

db.QueryRow(`select json_build_object('Streams', array_to_json(array_agg(t)))
	from (select p.name FROM profiles as p INNER JOIN streams as s ON (s.profile_id = p.id)
	WHERE s.latitudes >= 28.1036 AND shared = false order by id desc limit 15) t`).Scan(&result)

The only part that I then change is the WHERE s.latitudes >= 28.1036 condition;
instead of hard-coding that value, I pass it in from a form, and the query now looks like this:

db.QueryRow(`select json_build_object('Streams', array_to_json(array_agg(t)))
	from (select p.name FROM profiles as p INNER JOIN streams as s ON (s.profile_id = p.id)
	WHERE s.latitudes >= $1 AND shared = false order by id desc limit 15) t`, LatMin).Scan(&result)

Now the query just comes back null, and I do know for a fact that the LatMin variable is being populated correctly, as this is my code:
func Auto_Location(w http.ResponseWriter, r *http.Request) {
	var result string
	if r.Method == "GET" {
	} else {
		r.ParseForm()
	}
	LatMin := r.Form["LatMin"]
	db, err := sql.Open("Postgres Connects")
	if err != nil {
		log.Fatal(err)
		println(err)
	}
	db.QueryRow(`select json_build_object('Streams', array_to_json(array_agg(t)))
		from (select p.name FROM profiles as p INNER JOIN streams as s ON (s.profile_id = p.id)
		WHERE s.latitudes >= $1 AND shared = false order by id desc limit 15) t`, LatMin).Scan(&result)
	defer db.Close()
	w.Header().Set("Content-Type", "application/json")
	fmt.Fprintf(w, result)
	fmt.Println("Value:", LatMin)
}
Again, as you can see from the code, I am printing with fmt, and LatMin has the correct value of 28.1036. Is there something I am missing here? The Postgres package I am using is _ "github.com/lib/pq". I am thinking it is an issue with float values, because if I change LatMin to 28 it works, but 28.1036 does not.
Hello everyone, I have found the answer. If you are collecting form information from a POST, and some of the values are of type decimal, float, or double, and you want to use that value in a database query, then use

r.FormValue("LatMin")

instead of

r.Form["LatMin"]

Both of those methods get the correct value, but r.Form["LatMin"] returns a []string (url.Values is a map[string][]string), and passing a slice as a query parameter does not work with the database driver; r.FormValue returns a plain string.
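To be strict about types, you can also parse the form value into a float64 before handing it to the driver; a sketch using the standard library (query stands for the json_build_object statement above):

latMin, err := strconv.ParseFloat(r.FormValue("LatMin"), 64)
if err != nil {
	http.Error(w, "invalid LatMin", http.StatusBadRequest)
	return
}
// Pass a typed value for the $1 placeholder instead of a raw string.
err = db.QueryRow(query, latMin).Scan(&result)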
