Problem Statement
Problem 1
I am getting a 0 when trying to retrieve the count.
The query expression looks perfect, so I assume that the query being built is correct.
However, the result is 0 irrespective of the query.
Problem 2
I am required to specify the database name in the Table clause, as in Table("dbname.tbname").
This isn't required in any other query; it is needed only when using Count().
What might I be missing?
package main

import (
    "fmt"
    "log"

    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/mssql"
    _ "github.com/jinzhu/gorm/dialects/mysql"
    _ "github.com/jinzhu/gorm/dialects/postgres"
    _ "github.com/jinzhu/gorm/dialects/sqlite"
)

var db *gorm.DB
var err error

func main() {
    db, err = gorm.Open("mysql", "gorm:gorm@tcp(localhost:9910)/gorm?charset=utf8&parseTime=True")
    if err != nil {
        panic(err)
    }
    db.LogMode(true)

    var count int
    var dbName = "db"
    var tbName = "tb"
    var columnName = "somecol"
    var date = "2014-01-02"

    err := db.Table(dbName+"."+tbName).Where("? >= ?", columnName, date+" 00:01:00").Count(&count)
    if err != nil {
        log.Println(err.Error.Error())
    }
    fmt.Println("The Count is \n", count)
}
Update 1
The following works.
This is, as per my understanding, using the result as an *sql.Row and then retrieving the value with Scan.
But I don't understand why ...Count(&count) is giving a runtime error.
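The working snippet isn't included above; based on the description, it was presumably something along these lines, using Raw/Row/Scan with the same placeholder names as the code at the top (this is a sketch, not the original code):

// A sketch of the *sql.Row approach described above; names are the same
// placeholders used in the snippet at the top of the question.
var count int
row := db.Raw("SELECT count(*) FROM "+dbName+"."+tbName+" WHERE "+columnName+" >= ?",
    date+" 00:01:00").Row()
if err := row.Scan(&count); err != nil {
    log.Println(err)
}
fmt.Println("The Count is", count)

As an aside, SQL placeholders bind values rather than identifiers, so Where("? >= ?", columnName, ...) ends up comparing two string literals, and Count in jinzhu/gorm returns a *gorm.DB rather than an error, which may be related to both the zero count and the runtime error described above.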
Related
I have a Gorm delete with a RETURNING result:
expirationDate := time.Now().UTC().Add(-(48 * time.Hour))
var deletedUsers Users
res := gormDB.WithContext(ctx).
Table("my_users").
Clauses(clause.Returning{Columns: []clause.Column{{Name: "email"}}}).
Where("created_at < ?", expirationDate).
Delete(&deletedUsers)
Now the test with Clauses always fails, e.g.:
sqlMock.ExpectExec(`DELETE`).
WithArgs(expirationDate).
WillReturnResult(sqlmock.NewResult(1, 1))
Receiving error:
"call to Query 'DELETE FROM "my_users" WHERE created_at < $1 RETURNING "email"' with args [{Name: Ordinal:1 Value:2023-01-18 06:15:34.694274 +0000 UTC}], was not expected, next expectation is: ExpectedExec => expecting Exec or ExecContext which:\n - matches sql: 'DELETE'\n - is with arguments:\n 0 - 2023-01-18 06:15:34.694274 +0000 UTC\n - should return Result having:\n LastInsertId: 1\n RowsAffected: 1"
I tried many other sqlMock expectations, but they have a similar issue.
Also, we don't have a return value in ExpectExec, only in ExpectQuery...
Has anyone managed to test a Gorm query that uses Clauses?
I was able to get what you need working. First, let me share the files I wrote, and then I'll walk you through all of the relevant changes: repo.go for the production code and repo_test.go for the test code.
repo.go
package gormdelete
import (
"context"
"time"
"gorm.io/gorm"
"gorm.io/gorm/clause"
)
type Users struct {
Email string
}
func Delete(ctx context.Context, gormDB *gorm.DB) error {
expirationDate := time.Now().UTC().Add(-(48 * time.Hour))
var deletedUsers Users
res := gormDB.WithContext(ctx).
Table("my_users").
Clauses(clause.Returning{Columns: []clause.Column{{Name: "email"}}}).
Where("created_at < ?", expirationDate).
Delete(&deletedUsers)
if res.Error != nil {
return res.Error
}
return nil
}
As you didn't provide the full file, I tried to guess what was missing.
repo_test.go
package gormdelete
import (
"context"
"database/sql/driver"
"testing"
"time"
"github.com/DATA-DOG/go-sqlmock"
"github.com/stretchr/testify/assert"
"gorm.io/driver/postgres"
"gorm.io/gorm"
)
// this is taken directly from the docs
// https://github.com/DATA-DOG/go-sqlmock#matching-arguments-like-timetime
type AnyTime struct{}
// Match satisfies sqlmock.Argument interface
func (a AnyTime) Match(v driver.Value) bool {
_, ok := v.(time.Time)
return ok
}
func TestDelete(t *testing.T) {
db, mock, err := sqlmock.New()
if err != nil {
t.Fatalf("an error was not expected: %v", err)
}
conn, _ := db.Conn(context.Background())
gormDb, err := gorm.Open(postgres.New(postgres.Config{
Conn: conn,
}))
if err != nil {
t.Fatalf("failed to open the gorm connection: %v", err)
}
row := sqlmock.NewRows([]string{"email"}).AddRow("test@example.com")
mock.ExpectBegin()
mock.ExpectQuery("DELETE FROM \"my_users\" WHERE created_at < ?").WithArgs(AnyTime{}).WillReturnRows(row)
mock.ExpectCommit()
err = Delete(context.Background(), gormDb)
assert.Nil(t, err)
if err = mock.ExpectationsWereMet(); err != nil {
t.Errorf("not all expectations were met: %v", err)
}
}
Here there are a few more changes worth mentioning:
I instantiated the AnyTime matcher as per the documentation (you can see the link in the comment).
Again, I guessed the setup of the db, mock, and gormDb, but I think it should be more or less the same.
I switched from ExpectExec to ExpectQuery, as we'll get a result set back thanks to the Clauses method in your repo.go file.
You have to wrap the ExpectQuery within an ExpectBegin and an ExpectCommit.
Finally, pay attention to the difference in how the driver expects the parameters in the SQL statement. In the production code you can choose to use ? or $1. However, in the test code only ? will match the expectation.
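As an alternative worth knowing about (a sketch, not part of the code above): sqlmock's default matcher treats the expected string as a regular expression, so you can also match the full statement, $1 included, by escaping it with regexp.QuoteMeta (this needs the regexp import):

// Escaping the expected SQL lets the default regex matcher treat "$1" literally.
mock.ExpectQuery(regexp.QuoteMeta(`DELETE FROM "my_users" WHERE created_at < $1 RETURNING "email"`)).
    WithArgs(AnyTime{}).
    WillReturnRows(sqlmock.NewRows([]string{"email"}).AddRow("test@example.com"))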
Hope this helps a little; otherwise, let me know!
I'm working on some code that's meant to dump MySQL data to a .csv file. I'd like to pass in a command line arg that lets the user specify which ID is used for the query, e.g. go run main.go 2 would run the query
SELECT * FROM table where id = 2;
I know that Go has the os package, with which I can do something like:
args := os.Args
if len(args) < 2 {
fmt.Println("Supply ID")
os.Exit(1)
}
testID := os.Args[1]
fmt.Println(testID)
Here's the code I'm currently working on. How can I pass a command line argument to the Query?
rows, _ := db.Query("SELECT * FROM table where id = ?;")
err := sqltocsv.WriteFile("table.csv",rows)
if err != nil {
panic(err)
}
columns, _ := rows.Columns()
count := len(columns)
values := make([]interface{}, count)
valuePtrs := make([]interface{}, count)
for rows.Next() {
for i := range columns {
valuePtrs[i] = &values[i]
}
rows.Scan(valuePtrs...)
for i, col := range columns {
val := values[i]
b, ok := val.([]byte)
var v interface{}
if ok {
v = string(b)
} else {
v = val
}
fmt.Println(col, v)
}
}
}
Just add your parameters to Query:
rows, _ := db.Query("SELECT * FROM table where id = ?;", os.Args[1])
testID needs to be an interface{} value and passed to Query:
var testID interface{}
testID = os.Args[1]
And pass it to Query:
rows, _ := db.Query("SELECT * FROM table where id = ?;", testID)
Edit:
Why interface{}?
The Query function accepts interface{} arguments. More info
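Putting it together, a minimal sketch of the whole flow might look like this; the DSN is a placeholder, and I'm assuming the sqltocsv in the question is github.com/joho/sqltocsv:

package main

import (
    "database/sql"
    "fmt"
    "os"

    _ "github.com/go-sql-driver/mysql"
    "github.com/joho/sqltocsv"
)

func main() {
    if len(os.Args) < 2 {
        fmt.Println("Supply ID")
        os.Exit(1)
    }
    testID := os.Args[1]

    // The DSN is a placeholder; adjust it for your environment.
    db, err := sql.Open("mysql", "user:password@tcp(localhost:3306)/dbname")
    if err != nil {
        panic(err)
    }
    defer db.Close()

    // The ID from the command line is bound to the ? placeholder.
    rows, err := db.Query("SELECT * FROM table where id = ?;", testID)
    if err != nil {
        panic(err)
    }
    defer rows.Close()

    if err := sqltocsv.WriteFile("table.csv", rows); err != nil {
        panic(err)
    }
}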
I am using go version go1.10.3 linux/amd64 and mysql 5.7.
It should be runnable with GORM's docker-compose config, or please provide your config.
package main
import (
"fmt"

"github.com/jinzhu/gorm"
_ "github.com/jinzhu/gorm/dialects/mssql"
_ "github.com/jinzhu/gorm/dialects/mysql"
_ "github.com/jinzhu/gorm/dialects/postgres"
_ "github.com/jinzhu/gorm/dialects/sqlite"
)
var db *gorm.DB
func init() {
var err error
db, err = gorm.Open("mysql", "gorm:gorm@tcp(localhost:9910)/gorm?charset=utf8&parseTime=True")
if err != nil {
panic(err)
}
db.LogMode(true)
}
type Res struct {
Id int `gorm:"column:id"`
Age int `gorm:"column:age"`
}
func main() {
var result []Res
db.Table("A").Select("A.id,A.age").Joins("left join B on A.id=B.id").
Where("A.age=28").
Not("B.id", []{2,3,4,5}).
Scan(&result)
fmt.Printf("%v", result)
}
The sql log is:
select A.id,A.age from A left join B on A.id=B.id where a.age=28 and A.B.id not in(2,3,4,5)
As can be seen, the NOT condition is prefixed with table A (A.B.id not in ...). How can it be applied to table B instead (B.id not in ...)?
First off, as per my comment: you're only using MySQL as a DB, and yet you're importing all the dialect packages (which do call their respective init functions; these register dialect-specific callbacks, e.g. the init func in the MsSQL package). Remove all the dialects you're not using from your imports:
// remove lines that I've commented out here...
import (
"github.com/jinzhu/gorm"
// _ "github.com/jinzhu/gorm/dialects/mssql"
_ "github.com/jinzhu/gorm/dialects/mysql"
// _ "github.com/jinzhu/gorm/dialects/postgres"
// _ "github.com/jinzhu/gorm/dialects/sqlite"
)
You could move the NOT IN part of the WHERE clause to the JOIN condition, based on the documentation.
I'd also check any errors that you may encounter, they might give you more debug information on top of the log:
err := db.Table("A").Select("A.id,A.age").
Joins("LEFT JOIN B on A.id = B.id AND B.id NOT IN (?)", []int{2, 3, 4, 5}).
Where("age = ?", 28).
Scan(&result).Error
if err != nil {
fmt.Fatalf("Failed to execute query: %+v", err)
}
fmt.Prinln(result)
Resolved: use
Where("B.id not in (?)", []int{2, 3, 4, 5}) instead of Not().
Does anybody have a better idea?
I have a question related to using the blob type as a partition key.
I use it because I need to save a hash value.
(A hash function returns binary data, usually represented as hexadecimal.)
I tried a SELECT query with gocql; however, it failed with the following error.
Is there any way to get a successful result for this kind of query?
Your advice is highly appreciated!
-- result
hash_value: [208 61 222 22 16 214 223 135 169 6 25 65 44 237 166 229 50 5 40 221]
/ hash_value: ?=??߇?A,???2(?
/ hash_value: 0xd03dde1610d6df87a90619412ceda6e5320528dd
string
2018/03/22 10:03:17 can not unmarshal blob into *[20]uint8
-- select.go
package main
import (
"fmt"
"log"
"crypto/sha1"
"reflect"
"github.com/gocql/gocql"
)
func main() {
cluster := gocql.NewCluster("10.0.0.1")
cluster.Keyspace = "ks"
cluster.Consistency = gocql.Quorum
cluster.ProtoVersion = 4
cluster.Authenticator = gocql.PasswordAuthenticator{
Username: "cassandra",
Password: "cassandra",
}
session, _ := cluster.CreateSession()
defer session.Close()
text := "text before hashed"
data := []byte(text)
hash_value := sha1.Sum(data)
hexa_string := fmt.Sprintf("0x%x", hash_value)
fmt.Println("hash_value: ", hash_value)
fmt.Println(" / string: ", string(hash_value[:]))
fmt.Println(" / column1: ", hexa_string)
fmt.Println(reflect.TypeOf(hexa_string))
// *** select ***
var column1 int
returned_hash := sha1.Sum(data)
//if err := session.Query(`SELECT hash_value, column1 FROM sample WHERE hash_value= ? LIMIT 1`,
// hexa_string).Consistency(gocql.One).Scan(&returned_hash, &column1); err != nil {
if err := session.Query(`SELECT hash_value, column1 FROM sample WHERE hash_value=0xd03dde1610d6df87a90619412ceda6e5320528dd`).Consistency(gocql.One).Scan(&returned_hash, &column1); err != nil {
//fmt.Println(err)
log.Fatal(err)
}
fmt.Println("comment: ", returned_hash, user_id)
}
-- table definition --
CREATE TABLE IF NOT EXISTS ks.sample (
hash_value blob,
column1 int,
...
PRIMARY KEY((hash_value), column1)
) WITH CLUSTERING ORDER BY (column1 DESC);
I fixed the problem by changing the type of variable: returned_hash.
returned_hash (var to store returned result) should be []byte.
I understand it as follows:
marshal: convert the data given in the code into a type Cassandra can handle.
unmarshal: convert the data Cassandra returned back into a type the Go code can handle.
The original error means the latter step failed, so the type of returned_hash must have been wrong.
Correct me if I'm wrong. Thanks.
package main
import (
"fmt"
"log"
"crypto/sha1"
"reflect"
"github.com/gocql/gocql"
)
func main() {
cluster := gocql.NewCluster("127.0.0.1")
cluster.Keyspace = "browser"
cluster.Consistency = gocql.Quorum
//cluster.ProtoVersion = 4
//cluster.Authenticator = gocql.PasswordAuthenticator{
// Username: "cassandra",
// Password: "cassandra",
//}
session, _ := cluster.CreateSession()
defer session.Close()
text := "text before hashed"
data := []byte(text)
hash_value := sha1.Sum(data)
hexa_string := fmt.Sprintf("0x%x", hash_value)
fmt.Println("hash_value: ", hash_value)
fmt.Println(" / string(hash_value): ", string(hash_value[:]))
fmt.Println(" / hexa(hash_value): ", hexa_string)
fmt.Println(reflect.TypeOf(hexa_string))
// *** select ***
var column1 int
//returned_hash := sha1.Sum(data)
//var returned_hash *[20]uint8
var returned_hash []byte
if err := session.Query(`SELECT hash_value, column1 FROM sample WHERE hash_value=? LIMIT 1`,
hash_value[:]).Consistency(gocql.One).Scan(&returned_hash, &column1); err != nil {
//if err := session.Query(`SELECT hash_value, column1 FROM sample WHERE hash_value=0xd03dde1610d6df87a90619412ceda6e5320528dd`).Consistency(gocql.One).Scan(&returned_hash, &column1); err != nil {
log.Fatal(err)
}
fmt.Printf("Returned: %#x %d \n", returned_hash, column1)
}
From what I've seen so far, converting database rows to JSON or to []map[string]interface{} is not simple. I have to create two slices and then loop through the columns and create keys every time.
...Some code
tableData := make([]map[string]interface{}, 0)
values := make([]interface{}, count)
valuePtrs := make([]interface{}, count)
for rows.Next() {
for i := 0; i < count; i++ {
valuePtrs[i] = &values[i]
}
rows.Scan(valuePtrs...)
entry := make(map[string]interface{})
for i, col := range columns {
var v interface{}
val := values[i]
b, ok := val.([]byte)
if ok {
v = string(b)
} else {
v = val
}
entry[col] = v
}
tableData = append(tableData, entry)
}
Is there any package for this? Or am I missing some basics here?
I'm dealing with the same issue; as far as my investigation goes, it looks like there is no other way.
All the packages that I have seen use basically the same method.
A few things you should know that will hopefully save you time:
the database/sql package converts all the data to the appropriate types
if you are using the mysql driver (go-sql-driver/mysql), you need to add a parameter to your connection string for it to return time.Time instead of a string (use ?parseTime=true, the default is false); see the DSN sketch after this list
You can use tools written by the community to offload the overhead:
sqlx, a minimalistic wrapper around database/sql, uses a similar approach internally with reflection.
If you need more functionality, try using an "orm": gorp, gorm.
If you're interested in diving deeper, check out:
Using reflection in sqlx package, sqlx.go line 560
Data type conversion in database/sql package, convert.go line 86
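Regarding the parseTime point above, a minimal sketch of what the connection string looks like (credentials, host, and database name are placeholders):

// parseTime=true tells go-sql-driver/mysql to return DATE/DATETIME/TIMESTAMP
// columns as time.Time instead of []byte.
db, err := sql.Open("mysql", "user:password@tcp(localhost:3306)/dbname?parseTime=true")
if err != nil {
    log.Fatal(err)
}
defer db.Close()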
One thing you could do is create a struct that models your data.
Note: I am using MS SQL Server.
So let's say you want to get a user:
type User struct {
ID int `json:"id,omitempty"`
UserName string `json:"user_name,omitempty"`
...
}
then you can do this
func GetUser(w http.ResponseWriter, req *http.Request) {
    var r User
    params := mux.Vars(req)
    db, err := sql.Open("mssql", "server=ServerName")
    if err != nil {
        log.Fatal(err)
    }
    err1 := db.QueryRow("select Id, UserName from [Your Database].dbo.Users where Id = ?", params["id"]).Scan(&r.ID, &r.UserName)
    if err1 != nil {
        log.Fatal(err1)
    }
    if err := json.NewEncoder(w).Encode(&r); err != nil {
        log.Fatal(err)
    }
}
Here are the imports I used
import (
"database/sql"
"net/http"
"log"
"encoding/json"
_ "github.com/denisenkom/go-mssqldb"
"github.com/gorilla/mux"
)
This allowed me to get data from the database and get it into JSON.
This takes a while to code, but it works really well.
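For context, this is roughly how such a handler gets wired up so that mux.Vars(req)["id"] is populated; the route path and port here are just assumptions, using the imports already listed above:

func main() {
    r := mux.NewRouter()
    // "{id}" is the route variable read by mux.Vars(req)["id"] inside GetUser.
    r.HandleFunc("/users/{id}", GetUser).Methods("GET")
    log.Fatal(http.ListenAndServe(":8080", r))
}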
Not in the Go distribution itself, but there is the wonderful jmoiron/sqlx:
import "github.com/jmoiron/sqlx"
tableData := make([]map[string]interface{}, 0)
// rows here is an *sqlx.Rows, e.g. obtained from db.Queryx(query)
for rows.Next() {
entry := make(map[string]interface{})
err := rows.MapScan(entry)
if err != nil {
log.Fatal("SQL error: " + err.Error())
}
tableData = append(tableData, entry)
}
If you know the data type that you are reading, then you can read into that data type without using a generic interface.
Otherwise, there is no solution regardless of the language used, due to the nature of JSON itself.
JSON does not carry a description of composite data structures. In other words, JSON is a generic key-value structure. When the parser encounters what is supposed to be a specific structure, there is no identification of that structure in JSON itself. For example, if you have a structure User, the parser would not know how a set of key-value pairs maps to your structure User.
The problem of type recognition is usually addressed with a document schema (e.g. XSD in the XML world) or by explicitly passing the expected data type.
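To illustrate: the same JSON document can be decoded generically into a map or into a known struct, but the mapping to User exists only because the Go code declares it, not because the JSON carries it. A minimal sketch, assuming encoding/json and log are imported:

data := []byte(`{"id": 1, "user_name": "alice"}`)

// Generic decoding: every JSON object becomes a map[string]interface{}.
var generic map[string]interface{}
if err := json.Unmarshal(data, &generic); err != nil {
    log.Fatal(err)
}

// Typed decoding: the struct and its tags act as the schema that JSON lacks.
type User struct {
    ID       int    `json:"id"`
    UserName string `json:"user_name"`
}
var u User
if err := json.Unmarshal(data, &u); err != nil {
    log.Fatal(err)
}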
One quick way to get an arbitrary, generic []map[string]interface{} out of these query libraries is to populate a slice of interface{} pointers with the same length as the number of columns in the query, and then pass that to the Scan function:
// For example, for the go-mssqldb lib:
queryResponse, err := d.pool.Query(query)
if err != nil {
return nil, err
}
defer queryResponse.Close()
// Holds all the end-results
results := []map[string]interface{}{}
// Getting details about all the fields from the query
fieldNames, err := queryResponse.Columns()
if err != nil {
return nil, err
}
// Creating interface-type pointers within an array of the same
// size of the number of columns we have, so that we can properly
// pass this to the "Scan" function and get all the query parameters back :)
var scanResults []interface{}
for range fieldNames {
var v interface{}
scanResults = append(scanResults, &v)
}
// Parsing the query results into the result map
for queryResponse.Next() {
// This variable will hold the value for all the columns, named by the column name
rowValues := map[string]interface{}{}
// Cleaning up old values just in case
for _, column := range scanResults {
*(column.(*interface{})) = nil
}
// Scan into the array of pointers
err := queryResponse.Scan(scanResults...)
if err != nil {
return nil, err
}
// Map the pointers back to their value and the associated column name
for index, column := range scanResults {
rowValues[fieldNames[index]] = *(column.(*interface{}))
}
results = append(results, rowValues)
}
return results, nil