DB query commands are producing strange SQL queries - Go

I have my model as:
type Report struct {
    ID    int     `json:"id,omitempty" gorm:"primary_key"`
    Title *string `json:"title" gorm:"not null"`
}
I have initialized a variable report as var report Report. I have successfully auto-migrated this model to a database table and populated the database (via SQL INSERT) using GORM's db.Create(&report).
The problem I am facing is with query commands. Every query command supported by GORM, such as db.Find(&report) or db.First(&report, 1), results in queries like the following:
SELECT * FROM "reports" WHERE "reports"."deleted_at" IS NULL AND ((id = $1))
SELECT * FROM "reports" WHERE "reports"."deleted_at" IS NULL AND ((id = $1))
SELECT * FROM reports WHERE (reports.deleted_at IS NULL) AND ((id = $1))
SELECT * FROM reports WHERE (reports.deleted_at IS NULL) AND ((id = $1))
SELECT 0 done
I am unable to query the database. I am using GORM with CockroachDB. This works fine when using the Go pq driver and raw SQL commands.

The deleted_at column is part of GORM's base gorm.Model struct and its soft-delete feature. Are you using gorm.Model somewhere that we can't see in this example? This isn't supposed to happen unless you either define a field named DeletedAt or embed gorm.Model in your model struct.

Since the model has the field deleted_at, GORM applies its soft-delete behavior automatically. You can use Unscoped:
db.Unscoped().Find(&reports)
which is the same as running the raw query:
db.Raw("SELECT * FROM reports").Scan(&reports)
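For intuition, GORM's soft delete simply tacks a deleted_at IS NULL condition onto every query unless Unscoped is used. Below is a minimal plain-Go sketch of that behavior; no GORM is involved, and the helper function and its exact output format are illustrative only:

```go
package main

import "fmt"

// buildSelect mimics, in a very simplified way, how an ORM with soft
// delete builds a SELECT: when the session is scoped (the default),
// every query gets an extra "deleted_at IS NULL" condition.
func buildSelect(table, where string, unscoped bool) string {
	var conds []string
	if !unscoped {
		conds = append(conds, fmt.Sprintf("%s.deleted_at IS NULL", table))
	}
	if where != "" {
		conds = append(conds, where)
	}
	q := fmt.Sprintf("SELECT * FROM %s", table)
	for i, c := range conds {
		if i == 0 {
			q += " WHERE " + c
		} else {
			q += " AND " + c
		}
	}
	return q
}

func main() {
	fmt.Println(buildSelect("reports", "id = $1", false))
	// SELECT * FROM reports WHERE reports.deleted_at IS NULL AND id = $1
	fmt.Println(buildSelect("reports", "", true))
	// SELECT * FROM reports
}
```

This is why the queries in the question carry the deleted_at filter, and why Unscoped (or a model without a DeletedAt field) makes it disappear.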

Related

How to convert gorm.DB query to its string representation

Let's say I have a gorm.DB object in Go, and I want to extract and inspect the query I've built to check that it was built correctly.
How can I get the string representation of the query built on this object?
Make sure your gorm is up to date.
Using ToSQL
example:
sql := DB.ToSQL(func(tx *gorm.DB) *gorm.DB {
    return tx.Model(&User{}).Where("id = ?", 100).Limit(10).Order("age desc").Find(&[]User{})
})
sql //=> SELECT * FROM "users" WHERE id = 100 AND "users"."deleted_at" IS NULL ORDER BY age desc LIMIT 10
Using DryRun Mode
example:
stmt := db.Session(&gorm.Session{DryRun: true}).First(&user, 1).Statement
stmt.SQL.String() //=> SELECT * FROM `users` WHERE `id` = $1 ORDER BY `id`
stmt.Vars //=> []interface{}{1}
Using Debug
example:
db.Debug().Where("name = ?", "jinzhu").First(&User{})
Maybe you can try to use Debug() to print the raw SQL built by GORM to stdout:
db.Debug().Where("name = ?", "jinzhu").First(&User{})
FYI https://gorm.io/docs/logger.html#Debug

Go GORM Preload & Select only items matching on preload table condition

I'm trying to use GORM to select only items from the parent table that have a matching condition in a related table.
type Table1 struct {
    gorm.Model
    Name  string
    Email string
    Items Table2
}
type Table2 struct {
    gorm.Model
    Product    string
    otherfield string
}
I want to return all Table1 items that have Product in Table2 set to a specific value. So far I mostly get mssql: The multi-part identifier "visits.sign_out_time" could not be bound.
My command is
var items []Table2
db.Debug().Preload("Table2").Where("table2.product = ?", "xxx").Find(&items).GetErrors()
Not entirely sure where I'm going wrong but for whatever reason the .Where() cannot access the second, preloaded table. How do I go about using GORM to achieve what I am trying to do?
Thanks,
Alex
The Where("table2.product = ?", "xxx") cannot access the second (preloaded) table because Preload is not a JOIN; it is a separate SELECT query. Your code creates two separate queries, something like this:
// first query
SELECT * FROM table1 WHERE table2.product = 'xxx';
// second query
SELECT * FROM table2;
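Preload's two-query behavior can be pictured without GORM at all: fetch the parents, fetch the children, then stitch them together in memory by foreign key. A rough sketch of that stitching step (the types and field names here are illustrative, not the poster's actual schema):

```go
package main

import "fmt"

// Child and Parent stand in for Table2 and Table1.
type Child struct {
	ID       uint
	ParentID uint
	Product  string
}

type Parent struct {
	ID    uint
	Name  string
	Items []Child
}

// preload mimics what Preload does after its second SELECT: group the
// child rows by foreign key, then attach them to the already-fetched
// parents. No SQL-level filtering of parents happens here.
func preload(parents []Parent, children []Child) []Parent {
	byParent := map[uint][]Child{}
	for _, c := range children {
		byParent[c.ParentID] = append(byParent[c.ParentID], c)
	}
	for i := range parents {
		parents[i].Items = byParent[parents[i].ID]
	}
	return parents
}

func main() {
	parents := []Parent{{ID: 1, Name: "a"}, {ID: 2, Name: "b"}}
	children := []Child{{ID: 10, ParentID: 1, Product: "xxx"}}
	out := preload(parents, children)
	fmt.Println(len(out[0].Items), len(out[1].Items)) // 1 0
}
```

Since the stitching happens in application memory, a Where on the child table can never restrict which parent rows come back; that has to happen in the parent query itself.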
In order to return all Table1 records that have Product in Table2 set to a specific value, you have to do the following:
var t1 []Table1
err = db.
Where(`EXISTS(SELECT 1 FROM table2 t2 WHERE t2.product = ? AND table1.id = t2.table1_id)`, productValue).
Find(&t1).Error
Note that the AND table1.id = t2.table1_id part is just an example of how the two tables might be related; you may have a different relation, in which case you'll need to modify the query accordingly.
If you want GORM to populate t1.Items with the Table2 data, prepend Preload("Items") to the above query.
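The EXISTS subquery keeps a parent row only if at least one matching child row exists. For intuition, here is the in-memory equivalent of that filter as a plain-Go sketch (type and field names are illustrative):

```go
package main

import "fmt"

type Table2Row struct {
	Table1ID uint
	Product  string
}

type Table1Row struct {
	ID   uint
	Name string
}

// filterByChildProduct keeps only Table1 rows for which at least one
// Table2 row exists with a matching foreign key and the given product,
// mirroring the SQL EXISTS(...) subquery.
func filterByChildProduct(parents []Table1Row, children []Table2Row, product string) []Table1Row {
	var out []Table1Row
	for _, p := range parents {
		for _, c := range children {
			if c.Table1ID == p.ID && c.Product == product {
				out = append(out, p)
				break
			}
		}
	}
	return out
}

func main() {
	parents := []Table1Row{{ID: 1, Name: "a"}, {ID: 2, Name: "b"}}
	children := []Table2Row{{Table1ID: 1, Product: "xxx"}, {Table1ID: 2, Product: "yyy"}}
	fmt.Println(filterByChildProduct(parents, children, "xxx")) // [{1 a}]
}
```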
If you only want the Items, you can query Table2 directly; there is no need to preload Table1:
var items []Table2
db.Where("product = ?", "xxx").Find(&items).GetErrors()
Or, if you need all the data of Table1, join with table2 and then use the where clause:
db.Debug().Joins("JOIN table2 ON table1.id = table2.table1_id").
    Where("table2.product = ?", "xxxx").Find(&table1data)
Here I don't see any foreign key in Table2 for the join. You can add one:
type Table2 struct {
    gorm.Model
    Product  string
    Table1ID uint
}

Field Authorization level with Postgraphile

Currently, I'm working with Postgraphile and I need to enforce permissions at the data/field level.
For example, I have a carmodel table, and my query is something like:
{
  carmodel {
    id
    name
    description
  }
}
Well, in this table I have ids (1 = AUDI, 2 = GM, 3 = BMW).
The current user (based on roles/claims) has permission to see only 1 = AUDI and 3 = BMW.
Is there a way to enforce permissions based on field data, and return only the data allowed by the user's permissions?
Yes; row-level security can define this declaratively. Something like:
create policy audi_bmw on carmodel for select using (
  id in (1, 3)
);
I'm guessing this permission comes from another table though; so it might be more like:
create policy audi_bmw on carmodel for select using (
  id in (
    select car_model_id
    from user_carmodel_permissions
    where user_id = current_user_id()
  )
);
assuming you already have a current_user_id function, something like:
create function current_user_id() returns int as $$
select nullif(current_setting('jwt.claims.user_id', true), '')::int;
$$ language sql stable;
Check out our row level security cheatsheet.

entity framework lazy loading null properties in child

Using Entity Framework with lazy loading - I have the following question on loading related entities when those entities are null.
Say I have two tables, employee and employeedetails. Assume that not all employee entries have an entry in the employeedetails table.
If I want to look up a list of Employees
(from e in objectcontext.employees
 select new EmployeeEntity
 {
     EmpID = e.EmployeeID,
     FirstName = e.FirstName,
     Address = e.employeedetails.Address
 }).ToList();
EmployeeEntity is the data class into which we stuff the results.
The above code breaks if even one employee in the returned list does not have an entry in the employeedetails table. This is expected, since e.employeedetails will be null for those employees who do not have a details entry.
What is the best way to rewrite the above query?
Would something like this be acceptable ?
(from e in objectcontext.employees
 select new EmployeeEntity
 {
     EmpID = e.EmployeeID,
     FirstName = e.FirstName,
     Address = e.employeedetails == null ? "" : e.employeedetails.Address,
 }).ToList();
I am not clear on the efficiency of this query - would this statement do the null check at the DB level?
Should I instead do an explicit Include, like
objectcontext.Include("employeedetails")...
and then loop through the results to check for null?
Yes, this statement would indeed perform the null check in the SQL query that is generated. Most likely, it will simply be an NVL or COALESCE.
That's the way you should be doing it.

Change DataSet Designer parameter inference

I'm using the VS2010 DataSet designer to make some select queries with optional parameters similar to this:
SELECT CustomerID, FirstName, JoinDate, etc
FROM tblCustomers
WHERE (
    (@CustomerID IS NULL OR CustomerID = @CustomerID) AND
    (@FirstName IS NULL OR FirstName = @FirstName) AND
    (@JoinedBefore IS NULL OR JoinDate < @JoinedBefore) AND
    (@JoinedAfter IS NULL OR JoinDate > @JoinedAfter) AND
    .. etc ..
)
The inference of these parameters' data types and allow-DB-null settings is almost always wrong. I end up with string types set for datetime parameters and vice versa. Over half the fields are always marked as non-null.
That obviously wreaks havoc on my queries. I can manually change these inferences, but every time I have to update the TableAdapter, it resets them all to what it thinks is best! Does anyone know how to either a) get the inferences right, or b) override them permanently?
It seems VS infers the data type based on the first occurrence of the parameter in the query. Because I put my @Parameter IS NULL OR ... first, that confused the designer and caused it to infer wrongly a lot of the time. I swapped the order in my query and now it infers perfectly:
SELECT CustomerID, FirstName, JoinDate, etc
FROM tblCustomers
WHERE (
    (CustomerID = @CustomerID OR @CustomerID IS NULL) AND
    (FirstName = @FirstName OR @FirstName IS NULL) AND
    (JoinDate < @JoinedBefore OR @JoinedBefore IS NULL) AND
    (JoinDate > @JoinedAfter OR @JoinedAfter IS NULL) AND
    .. etc ..
)
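As an aside, the same optional-filter idea can be handled on the application side by only emitting conditions for parameters that were actually supplied, which sidesteps the designer's inference on the unused branch entirely. A small Go sketch of that pattern (the column list and helper function are hypothetical, not part of the DataSet designer):

```go
package main

import "fmt"

// buildOptionalFilter builds a WHERE fragment in which each nil
// parameter is simply skipped: the application-side equivalent of the
// "(@Param IS NULL OR Column = @Param)" pattern in the query above.
// Columns are walked in a fixed order so the output is deterministic.
func buildOptionalFilter(filters map[string]*string) (string, []interface{}) {
	clause := ""
	var args []interface{}
	for _, col := range []string{"CustomerID", "FirstName"} {
		v, ok := filters[col]
		if !ok || v == nil {
			continue // parameter not supplied: emit no condition at all
		}
		if clause != "" {
			clause += " AND "
		}
		clause += fmt.Sprintf("%s = ?", col)
		args = append(args, *v)
	}
	return clause, args
}

func main() {
	id := "42"
	clause, args := buildOptionalFilter(map[string]*string{"CustomerID": &id, "FirstName": nil})
	fmt.Println(clause, args) // CustomerID = ? [42]
}
```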
