How can I generate SQL code from a GORM struct model? - go

I'm using goose to manage my database migrations, but I need to write SQL statements directly in the migration files. Is there a way to generate the SQL directly from the GORM model?

Unfortunately, using the gorm.Session{DryRun: true} option doesn't make the migration SQL statement(s) available to the caller as it does with normal queries.
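For contrast, this is how DryRun exposes the SQL for a normal query (a minimal sketch based on the GORM session docs; the User model and the open *gorm.DB are assumed):

    // DryRun builds the statement without executing it; this works for
    // queries, but not for AutoMigrate.
    stmt := db.Session(&gorm.Session{DryRun: true}).First(&User{}, 1).Statement
    fmt.Println(stmt.SQL.String()) // e.g. SELECT * FROM `users` WHERE `users`.`id` = ? ...
    fmt.Println(stmt.Vars)         // [1]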
The only way I can see right now is to capture the SQL as it is logged while the migration runs, by implementing the gorm.io/gorm/logger.Interface interface, specifically its Trace method.
type Interface interface {
    LogMode(LogLevel) Interface
    Info(context.Context, string, ...interface{})
    Warn(context.Context, string, ...interface{})
    Error(context.Context, string, ...interface{})
    Trace(ctx context.Context, begin time.Time, fc func() (string, int64), err error)
}
Inside Trace you can call the fc function argument to get the SQL and RowsAffected, and do whatever you want with them.
For example:
import (
    "context"
    "time"

    "gorm.io/gorm/logger"
)

type RecorderLogger struct {
    logger.Interface
    Statements []string
}

func (r *RecorderLogger) Trace(ctx context.Context, begin time.Time, fc func() (string, int64), err error) {
    sql, _ := fc()
    r.Statements = append(r.Statements, sql)
}
Now use it as:
recorder := RecorderLogger{Interface: logger.Default.LogMode(logger.Info)}
session := db.Session(&gorm.Session{
    Logger: &recorder,
})
session.AutoMigrate(&Model{}, ...)
// or
session.Migrator().CreateTable(&Model{}, ...) // or any other Migrator method
// now recorder.Statements contains the statements run during the migration
This is very hacky, and you may run into problems, because AutoMigrate modifies the current state of the database, migrating it up to what your model requires (up to a point). For that to work, your current database must reflect the state of your production database (or whatever database you hope to migrate). So you could build a tool along these lines to help you get a migration script started, if you're careful, but to properly gain the advantages of a migration system like goose you'll need to get your hands dirty with the SQL :)
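If you do go down this road, a minimal sketch of dumping the recorded statements into a goose-style migration file might look like this (the helper name and file layout are my own; only the -- +goose Up/Down annotations come from goose's SQL migration format):

    import (
        "os"
        "strings"
    )

    // writeGooseMigration is a hypothetical helper that writes the
    // captured statements into a goose SQL migration file.
    func writeGooseMigration(path string, statements []string) error {
        var b strings.Builder
        b.WriteString("-- +goose Up\n")
        for _, stmt := range statements {
            b.WriteString(stmt)
            b.WriteString(";\n")
        }
        b.WriteString("\n-- +goose Down\n")
        b.WriteString("-- TODO: write the reverse migration by hand\n")
        return os.WriteFile(path, []byte(b.String()), 0o644)
    }

Usage would be something like writeGooseMigration("00001_create_models.sql", recorder.Statements).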

You can use this lib: https://github.com/sunary/sqlize
It allows you to create SQL from models, and it also supports migration by diffing your models against existing SQL.

I personally would use the migration functionality that is available inside GORM, but for your case we can do the following.
Firstly, there is a feature in GORM called Dry Run, and you can use it to see the SQL statements that get executed when performing queries. Unfortunately, I can't see that this is possible when using migrations. So what I suggest is to use github.com/DATA-DOG/go-sqlmock.
I would usually use this for testing purposes, but you could use it temporarily to get the SQL needed for your separate migrations.
package main

import (
    "database/sql"
    "time"

    "github.com/DATA-DOG/go-sqlmock"
    "gorm.io/driver/mysql"
    "gorm.io/gorm"
)

type Model struct {
    ID          uint64 `gorm:"primaryKey"`
    Name        string `gorm:"index"`
    Description string
    CreatedAt   time.Time
    LastLogin   sql.NullTime
}

func main() {
    // sqlmock stands in for a real MySQL connection.
    sqlDB, _, err := sqlmock.New()
    if err != nil {
        panic(err)
    }
    // SkipInitializeWithVersion avoids the version query GORM would
    // otherwise issue against the mock on startup.
    gormDB, err := gorm.Open(mysql.New(mysql.Config{
        Conn:                      sqlDB,
        SkipInitializeWithVersion: true,
    }), &gorm.Config{})
    if err != nil {
        panic(err)
    }
    defer sqlDB.Close()

    gormDB.AutoMigrate(&Model{})
}
This will give you a result like this
all expectations were already fulfilled, call to ExecQuery 'CREATE TABLE `models` (`id` bigint unsigned AUTO_INCREMENT,`name` varchar(191),`description` longtext,`created_at` datetime(3) NULL,`last_login` datetime(3) NULL,PRIMARY KEY (`id`),INDEX idx_models_name (`name`))' with args [] was not expected
[0.003ms] [rows:0] CREATE TABLE `models` (`id` bigint unsigned AUTO_INCREMENT,`name` varchar(191),`description` longtext,`created_at` datetime(3) NULL,`last_login` datetime(3) NULL,PRIMARY KEY (`id`),INDEX idx_models_name (`name`))
which contains the SQL statement required. This feels incredibly hacky, but it will give you the result you need.

Related

How to use pgtype.Numeric with gorm and sqlite3?

I need to store very large, high-precision numbers with GORM, and using a pgtype.Numeric seems like the best bet. However, I cannot, because I get an error: sql: Scan error on column index 4, name "foo": cannot scan int64
My model looks something like this:
type Model struct {
    gorm.Model
    Foo *pgtype.Numeric `gorm:"not null"`
}
Not sure if using pgtype.Numeric is the best choice (that's what I've seen everyone else use), or if I'm doing something wrong. Thanks!
The code that caused the error:
package main

import (
    "math/big"

    "github.com/jackc/pgtype"
    "gorm.io/driver/sqlite"
    "gorm.io/gorm"
)

type Model struct {
    gorm.Model
    Foo *pgtype.Numeric `gorm:"not null"`
}

func main() {
    db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
    if err != nil {
        panic("failed to connect database")
    }

    // Migrate the schema
    db.AutoMigrate(&Model{})

    // Create
    db.Create(&Model{Foo: &pgtype.Numeric{Int: big.NewInt(10000000), Status: pgtype.Present}})

    var m Model
    db.First(&m) // this line causes the error
}
SQLite3 does not support big integers, so there is no way you can accomplish that directly. I ran the code and the foo column is created as:
`foo` numeric NOT NULL
which in SQLite (https://www.sqlite.org/datatype3.html) means:
A column with NUMERIC affinity may contain values using all five storage classes... If the TEXT value is a well-formed integer literal that is too large to fit in a 64-bit signed integer, it is converted to REAL.
So your big int will be turned into a float64. It's a good thing it panicked instead of silently losing accuracy.
What you can do is convert the big int to a string or bytes first and store that.
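A minimal sketch of that idea, using a hypothetical wrapper type that persists a big.Int as text via the standard driver.Valuer and sql.Scanner interfaces:

    import (
        "database/sql/driver"
        "fmt"
        "math/big"
    )

    // BigIntString is a hypothetical wrapper that stores a big.Int as text.
    type BigIntString struct {
        big.Int
    }

    // Value implements driver.Valuer: the integer is written as its decimal string.
    func (b BigIntString) Value() (driver.Value, error) {
        return b.String(), nil
    }

    // Scan implements sql.Scanner: the text column is parsed back into a big.Int.
    func (b *BigIntString) Scan(src interface{}) error {
        var s string
        switch v := src.(type) {
        case string:
            s = v
        case []byte:
            s = string(v)
        default:
            return fmt.Errorf("cannot scan %T into BigIntString", src)
        }
        if _, ok := b.SetString(s, 10); !ok {
            return fmt.Errorf("invalid integer %q", s)
        }
        return nil
    }

The model field would then be declared as something like Foo *BigIntString `gorm:"not null;type:text"`.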
When debugging the sql.Scanner interface used for database deserialization, it is noticeable that the value from the database arrives either as int64 or as float64, which then leads to the corresponding error message.
A possible solution is to use a text data type in the database, by adding type:text to the field tag:
`gorm:"type:text;"`
Using the github.com/shopspring/decimal package, you can conveniently create a decimal number using the NewFromString function.
The adapted code to insert the data:
num, err := decimal.NewFromString("123456789012345.12345678901")
if err != nil {
    panic(err)
}
db.Create(&Model{Foo: &num})
The model structure might then look something like this:
type Model struct {
    gorm.Model
    Foo *decimal.Decimal `gorm:"not null;type:text;"`
}
This would result in a schema where the foo column is created as text NOT NULL (the original answer showed a screenshot of the schema here).
If you set a breakpoint in decimal.Scan, you can see that the value arrives from the database as a string, as expected, resulting in the creation of a decimal via NewFromString (see Decimal's Scan method).
If you add this line of code to the end of the main function
fmt.Println(m.Foo)
it would result in the following output in the debug console:
123456789012345.12345678901
Complete Program
Your complete program, slightly adapted to the above points, would then look something like this:
package main

import (
    "fmt"

    "github.com/shopspring/decimal"
    "gorm.io/driver/sqlite"
    "gorm.io/gorm"
)

type Model struct {
    gorm.Model
    Foo *decimal.Decimal `gorm:"not null;type:text;"`
}

func main() {
    db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
    if err != nil {
        panic("failed to connect database")
    }

    // Migrate the schema
    db.AutoMigrate(&Model{})

    // Create
    num, err := decimal.NewFromString("123456789012345.12345678901")
    if err != nil {
        panic(err)
    }
    db.Create(&Model{Foo: &num})

    var m Model
    db.First(&m)
    fmt.Println(m.Foo)
}
pgtype.Numeric and SQLite
If a PostgreSQL database is used, GORM can be used together with pgtype.Numeric to handle decimal numbers like 123456789012345.12345678901. You just need to use the numeric data type on the Postgres side with the desired precision (e.g. numeric(50,15)).
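For that case the model might look like this (a sketch, assuming the gorm.io/driver/postgres driver; the precision is just an example):

    type Model struct {
        gorm.Model
        Foo *pgtype.Numeric `gorm:"not null;type:numeric(50,15)"`
    }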
After all, this is exactly what pgtype is for, see the pgtype readme where it says:
pgtype is the type system underlying the https://github.com/jackc/pgx PostgreSQL driver.
However, if you use a text data type in SQLite for the reasons mentioned above, pgtype.Numeric will not work with SQLite. An attempt with the above number writes 12345678901234512345678901e-11 to the DB, and when reading it back the following error occurs:
sql: Scan error on column index 4, name "foo": 12345678901234512345678901e-11 is not a number

How to use (variable of type gorm.io/gorm.DB) as github.com/jinzhu/gorm.DB?

I have upgraded my gorm package to the new version, "gorm.io/gorm", but I am using a package (github.com/qor/admin) that uses the old version (github.com/jinzhu/gorm).
I need to pass a gorm.DB (new version) value to a function of the package github.com/qor/admin that takes a gorm.DB (old version) as a parameter.
package main

import (
    adminPkg "github.com/qor/admin"
    database "github.com/youssefsiam38/myfolder/db"
)

func main() {
    db, err := database.Connection() // returns db of type *gorm.io/gorm.DB
    if err != nil {
        panic(err)
    }
    admin := adminPkg.New(&adminPkg.AdminConfig{DB: db})
}
The error:
vet: ./main.go:14:50: cannot use db (variable of type *gorm.DB) as *gorm.DB value in struct literal
You can't. Those two types are not related, even though the names and implementations seem to indicate otherwise.
The github.com/qor/admin library has an issue open for this, so I'd stay tuned and/or contribute to the migration to the new version of gorm (and maybe roll back the gorm upgrade if github.com/qor/admin is critical for your operations :)
It's worth noting that if these libs used interfaces, this would be fixable by third parties. Stay in school kids, and use interfaces.
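To illustrate that last point with a hypothetical sketch: if the consuming library had accepted a small interface of its own instead of a concrete *gorm.DB, any value with the right methods could be passed in, regardless of which gorm package it came from.

    // Hypothetical: an interface the consuming library could have defined.
    type Execer interface {
        Exec(query string, values ...interface{}) error
    }

    // New accepts anything that satisfies the interface, old gorm or new,
    // without the two libraries having to import each other.
    func New(db Execer) { /* ... */ }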

godror SQL driver, and a slice of structs

I am using Go to open a file with multiple JSON entries, parse the file into a slice of a custom type, then insert the slice data into an Oracle database. According to the godror documentation at https://godror.github.io/godror/doc/tuning.html, I should be able to feed a slice into the insert command and have the database/sql Exec method iterate through the struct for me. I am at a loss for how to do this. I am sure there is a simple solution.
To slightly complicate things, I have a database column, not present in the struct, for the host name of the computer the app is running on. This column should be filled in for every row the app inserts. In other words, every row of this table needs a column filled in with the host name of the machine the app is running on. Is there a more elegant way to do this than to just add a 'hostname' field to my struct holding the running system's host name, over and over again?
What follows is a simplified version of my code.
package main

import (
    "database/sql"
    "log"
    "os"

    _ "github.com/godror/godror"
)

type MyType struct {
    Var1 string `json:"var1"`
    Var2 string `json:"var2"`
}

func main() {
    hostname, err := os.Hostname()
    if err != nil {
        //log.Println("Error when getting host name")
        log.Fatal(err)
    }
    mySlice := parseFile("/path/to/file", false)

    db, err := sql.Open("godror", "user/pass#oraHost/oraDb")
    query := `INSERT INTO mytable (var1, var2, host) VALUES (:1, :2, :3)`

    // this is the line where everything breaks down, and I am not sure
    // what should go here.
    _, err = db.Exec(query, mySlice[var1], mySlice[var2], hostname)
}

func parseFile(filePath string, deleteFile bool) []MyType {
    // a few lines of code that opens a text file, parses it into a slice
    // of type MyType, and returns it
}
Not sure if you already went through it, but does the test case TestExecuteMany in https://github.com/godror/godror/blob/master/z_test.go help? It has example usage for array insert:
res, err := tx.ExecContext(ctx,
    `INSERT INTO `+tbl+ //nolint:gas
        ` (f_id, f_int, f_num, f_num_6, F_num_5_2, F_vc, F_dt)
    VALUES
        (:1, :2, :3, :4, :5, :6, :7)`,
    ids, ints, nums, int32s, floats, strs, dates)
For batch insert of structs, see https://github.com/jmoiron/sqlx.
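Applying the array-insert pattern above to the question's code, a sketch might look like this: godror binds one slice per column, so the struct slice is projected into column slices, and the host name is simply repeated for every row (which also answers the second question without touching the struct):

    // Project the struct slice into one slice per bind parameter.
    var1s := make([]string, len(mySlice))
    var2s := make([]string, len(mySlice))
    hosts := make([]string, len(mySlice))
    for i, m := range mySlice {
        var1s[i] = m.Var1
        var2s[i] = m.Var2
        hosts[i] = hostname // same value for every row
    }

    // One Exec performs the whole batch insert.
    _, err = db.Exec(
        `INSERT INTO mytable (var1, var2, host) VALUES (:1, :2, :3)`,
        var1s, var2s, hosts,
    )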

convert go-pg query into plain sql

Is it possible to convert go-pg query
err = db.Model(story).
    Relation("Author").
    Where("story.id = ?", story1.Id).
    Select()
into plain SQL?
It would be helpful for debugging, so I could copy the plain SQL query and run it in a psql client.
Probably there is some kind of package for this?
This is listed in the project's wiki:
How to view queries this library generates?
You can set up a query logger like this:

type dbLogger struct{}

func (d dbLogger) BeforeQuery(c context.Context, q *pg.QueryEvent) (context.Context, error) {
    return c, nil
}

func (d dbLogger) AfterQuery(c context.Context, q *pg.QueryEvent) (context.Context, error) {
    fmt.Println(q.FormattedQuery())
    return c, nil
}

db := pg.Connect(&pg.Options{...})
db.AddQueryHook(dbLogger{})
Problem: since v10, there is no default way to get the raw SQL out of the ORM.
Well, that's too late I guess. Maybe someone (like me) will face this problem in 2021. Here are the steps to solve it:
[x] Read the docs.
[x] Check all structs.
[x] Implement all methods.
Solving the problem
This solution is "forked" from this issue, but I'll explain it step by step.
First of all, we need to read some of the go-pg hook source code.
As I said before, we need to check all the structs from this doc. But we're lucky: there is only one struct!
// QueryEvent ...
type QueryEvent struct {
    StartTime  time.Time
    DB         orm.DB
    Model      interface{}
    Query      interface{}
    Params     []interface{}
    fmtedQuery []byte
    Result     Result
    Err        error
    Stash      map[interface{}]interface{}
}
We don't really need to implement this struct completely.
But when you call db.AddQueryHook() (where db is a reference to our DB connection and AddQueryHook() is a method), AddQueryHook() expects this interface from you:

type QueryHook interface {
    BeforeQuery(context.Context, *QueryEvent) (context.Context, error)
    AfterQuery(context.Context, *QueryEvent) error
}

So, we have already read the docs and checked the structs. What's next? The answer is pretty easy: implement all methods.
TBH, I thought this would be harder than it is.
To implement it, you just need to create two methods on a new empty struct that provide the functionality of the methods above, like this:
Create the empty struct:

type dbLogger struct{}

Add the methods from the doc:

func (d dbLogger) BeforeQuery(c context.Context, q *pg.QueryEvent) (context.Context, error) {
    return c, nil
}

func (d dbLogger) AfterQuery(c context.Context, q *pg.QueryEvent) error {
    fq, _ := q.FormattedQuery()
    fmt.Println(string(fq))
    return nil
}
I hope this helps everyone who ever encounters this problem.
I've just been upgrading from go-pg v7 to v10 and had a problem where Query.AppendFormat(), which is what I was using to get the raw SQL, had been removed.
Using the comments in this post for inspiration, I managed to extract it with the code below:
import (
    "github.com/go-pg/pg/v10/orm"
)

func QueryToString(q *orm.Query) string {
    value, _ := q.AppendQuery(orm.NewFormatter(), nil)
    return string(value)
}
Hope this helps future viewers.
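For the query from the question, usage would look something like this (assuming the db and story1 values from the question):

    q := db.Model(story).
        Relation("Author").
        Where("story.id = ?", story1.Id)
    fmt.Println(QueryToString(q))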
// db is your *pg.DB
// q is your *orm.Query, e.g. db.Model(&yourModel)
qq := pg.QueryEvent{
    DB:    db,
    Model: q.TableModel(),
    Query: q,
}
fq, _ := qq.FormattedQuery()
fmt.Println(string(fq))
So in your case:

q := db.Model(story).
    Relation("Author").
    Where("story.id = ?", story1.Id)

fmt.Println("running SQL:")
qq := pg.QueryEvent{
    DB:    db,
    Model: q.TableModel(),
    Query: q,
}
fq, _ := qq.FormattedQuery()
fmt.Println(string(fq))

q.Select()

Get argument value in SQL query with go-sqlmock

I'm currently using go-sqlmock to mock my database.
Is there any way to get the values that have been passed to the SQL driver when performing a query, like the arg variable that is being passed as an argument here?
import (
    "database/sql"
)

func retrieveInfo() {
    // this function returns an initialized instance of type *sql.DB
    dbDriver := initDb()

    query := "my query"
    arg := 3
    rows, err := dbDriver.Query(query, arg)
    // ...
}
Then I want to test the function, and I would like to know the value of the variable arg during the test. I think it should be possible, since it is passed to the "fake" driver that go-sqlmock creates. My test logic looks like this:
import "github.com/DATA-DOG/go-sqlmock"
// Inits mock and overwrites the original driver
db, mock, err := sqlmock.New()
Db_driver = db
func TestRetrieveInfo(t *testing.T){
// query that matchs the one in retrieveInfo()
query := "..."
queryRows := sqlmock.NewRows([]string{"column1, column2"}).FromCSVString("value1, value2")
mock.ExpectQuery(query).WillReturnRows(queryRows)
}
You can wrap the real driver with instrumentation to log the values, using the package github.com/luna-duclos/instrumentedsql.
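A rough sketch of that approach, assuming the WrapDriver/WithLogger/LoggerFunc API from the instrumentedsql README and github.com/go-sql-driver/mysql as the wrapped driver:

    import (
        "context"
        "database/sql"
        "log"

        "github.com/go-sql-driver/mysql"
        "github.com/luna-duclos/instrumentedsql"
    )

    func init() {
        // Log every statement and its args as they reach the real driver.
        logger := instrumentedsql.LoggerFunc(func(ctx context.Context, msg string, keyvals ...interface{}) {
            log.Printf("%s %v", msg, keyvals)
        })
        sql.Register("instrumented-mysql",
            instrumentedsql.WrapDriver(mysql.MySQLDriver{}, instrumentedsql.WithLogger(logger)))
    }

    // then open connections with sql.Open("instrumented-mysql", dsn)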
You can capture an arg value by providing your own implementation of the Match method, e.g.:

type ArgCapture struct {
    CapturedValue driver.Value
}

// Match records the value and accepts it unconditionally.
func (a *ArgCapture) Match(value driver.Value) bool {
    a.CapturedValue = value
    return true
}
Then in your test you'd have something like:

capture := &ArgCapture{}
mock.ExpectQuery(query).WithArgs(capture).WillReturnRows(queryRows)

Then you can do assertions or whatever on capture.CapturedValue.
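For example (a sketch; note that database/sql normalizes integer arguments to int64 before they reach the driver):

    // After the code under test has run:
    if got := capture.CapturedValue; got != driver.Value(int64(3)) {
        t.Errorf("query arg = %v, want %v", got, int64(3))
    }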
