GET blob from SQL database in Golang

import (
    "database/sql"
    "encoding/json"
    "fmt"

    _ "github.com/go-sql-driver/mysql"
)

type User struct {
    Name    string  `json:"name"`
    Picture []uint8 `json:"picture"`
}

func main() {
    // straight to the query (db is the opened *sql.DB; setup elided)
    rows, err := db.Query("SELECT name, picture FROM ms_users") // picture is a LONGBLOB column in the database
    checkErr(err)

    var usr User
    for rows.Next() {
        err = rows.Scan(&usr.Name, &usr.Picture)
        checkErr(err)
    }

    jsn, err := json.Marshal(usr)
    fmt.Printf("%v", string(jsn))
}
With the above code I only get the name value, but the picture is empty.
How do I store a BLOB value from the database in a struct?
Any answer will be appreciated, thank you!

I'm relatively new to Go. I encountered this question while searching for a solution to a similar problem, and I was able to find one.
When you get BLOB data from the database, you get it as type []byte, so your struct can look like this:
type User struct {
    Name    string `json:"name"`
    Picture []byte `json:"picture"`
}
You can then process the byte slice according to your needs. In my case I needed a JSON object, so I unmarshalled it into an interface{} variable. A minimal sketch of that approach follows.
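Here is a minimal sketch, assuming the go-sql-driver/mysql driver and a ms_users table whose picture LONGBLOB column happens to hold a JSON document (the DSN and table are placeholders):

package main

import (
    "database/sql"
    "encoding/json"
    "fmt"
    "log"

    _ "github.com/go-sql-driver/mysql"
)

type User struct {
    Name    string `json:"name"`
    Picture []byte `json:"picture"`
}

func main() {
    db, err := sql.Open("mysql", "user:password@/dbname") // hypothetical DSN
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    var usr User
    // Scan copies the LONGBLOB column directly into the []byte field.
    err = db.QueryRow("SELECT name, picture FROM ms_users LIMIT 1").
        Scan(&usr.Name, &usr.Picture)
    if err != nil {
        log.Fatal(err)
    }

    // If the blob itself is JSON, it can be unmarshalled further.
    var payload interface{}
    if err := json.Unmarshal(usr.Picture, &payload); err == nil {
        fmt.Println(payload)
    }
}

Note that json.Marshal encodes a []byte field as a base64 string, so a marshalled User carries the picture as base64 rather than raw bytes, which is usually what you want when transporting binary data in JSON.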

Related

How to use pgtype.Numeric with gorm and sqlite3?

I need to store very large, high-precision numbers with GORM, and pgtype.Numeric seems like the best bet. However, I can't, because I get an error: sql: Scan error on column index 4, name "foo": cannot scan int64
My model looks something like this:
type Model struct {
    gorm.Model
    Foo *pgtype.Numeric `gorm:"not null"`
}
Not sure if using pgtype.Numeric is the best approach (that's what I've seen everyone else use), or whether I'm doing something wrong. Thanks!
The code that caused the error:
package main

import (
    "math/big"

    "github.com/jackc/pgtype"
    "gorm.io/driver/sqlite"
    "gorm.io/gorm"
)

type Model struct {
    gorm.Model
    Foo *pgtype.Numeric `gorm:"not null"`
}

func main() {
    db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
    if err != nil {
        panic("failed to connect database")
    }

    // Migrate the schema
    db.AutoMigrate(&Model{})

    // Create
    db.Create(&Model{Foo: &pgtype.Numeric{Int: big.NewInt(10000000), Status: pgtype.Present}})

    var m Model
    db.First(&m) // this line causes the error
}
SQLite does not support big integers, so there is no way to accomplish this directly. I ran the code and the foo column is created as:
`foo` numeric NOT NULL
which in SQLite (https://www.sqlite.org/datatype3.html) means:
A column with NUMERIC affinity may contain values using all five storage classes... If the TEXT value is a well-formed integer literal that is too large to fit in a 64-bit signed integer, it is converted to REAL.
So your big int will be turned into a float64. The good thing is that it panicked instead of silently losing precision.
What you can do is convert the big int to a string or bytes first and store that, as sketched below.
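A minimal sketch of that workaround, storing the number as text and converting on the way in and out (the field and table names are illustrative):

package main

import (
    "fmt"
    "math/big"

    "gorm.io/driver/sqlite"
    "gorm.io/gorm"
)

type Model struct {
    gorm.Model
    // Stored as TEXT so SQLite never coerces it to a 64-bit float.
    Foo string `gorm:"not null;type:text"`
}

func main() {
    db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
    if err != nil {
        panic("failed to connect database")
    }
    db.AutoMigrate(&Model{})

    // Convert the big.Int to its decimal string form before storing.
    n := big.NewInt(10000000)
    db.Create(&Model{Foo: n.String()})

    var m Model
    db.First(&m)

    // Convert back when reading; the bool reports whether parsing succeeded.
    restored, ok := new(big.Int).SetString(m.Foo, 10)
    fmt.Println(restored, ok)
}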
When debugging the sql.Scanner interface used for database deserialization, you can see that the value arrives from the database either as an int64 or a float64, which then leads to the corresponding error message.
A possible solution is to use a text data type in the database, by adding type text to the field tag:
`gorm:"type:text;"`
Using the github.com/shopspring/decimal package, you can conveniently create a decimal number with the NewFromString function.
The adapted code to insert the data:
num, err := decimal.NewFromString("123456789012345.12345678901")
if err != nil {
    panic(err)
}
db.Create(&Model{Foo: &num})
The model structure might then look something like this:
type Model struct {
    gorm.Model
    Foo *decimal.Decimal `gorm:"not null;type:text;"`
}
This would result in the following schema:
`foo` text NOT NULL
If one inserts a breakpoint in decimal.Scan, one can see that the value comes back from the database as a string, as expected, resulting in the creation of a decimal via NewFromString (see Decimal's Scan method).
If you add this line of code to the end of the main function
fmt.Println(m.Foo)
it would result in the following output in the debug console:
123456789012345.12345678901
Complete Program
Your complete program, slightly adapted to the above points, would then look something like this:
package main

import (
    "fmt"

    "github.com/shopspring/decimal"
    "gorm.io/driver/sqlite"
    "gorm.io/gorm"
)

type Model struct {
    gorm.Model
    Foo *decimal.Decimal `gorm:"not null;type:text;"`
}

func main() {
    db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
    if err != nil {
        panic("failed to connect database")
    }

    // Migrate the schema
    db.AutoMigrate(&Model{})

    // Create
    num, err := decimal.NewFromString("123456789012345.12345678901")
    if err != nil {
        panic(err)
    }
    db.Create(&Model{Foo: &num})

    var m Model
    db.First(&m)
    fmt.Println(m.Foo)
}
pgtype.Numeric and SQLite
If a PostgreSQL database is used, GORM can be used together with pgtype.Numeric to handle decimal numbers like 123456789012345.12345678901. You just need to use the numeric data type on the Postgres side with the appropriate precision, e.g. numeric(50,15).
After all, this is exactly what pgtype is for, see the pgtype readme where it says:
pgtype is the type system underlying the https://github.com/jackc/pgx PostgreSQL driver.
However, if you use a text data type in SQLite for the reasons mentioned above, pgtype.Numeric will not work with SQLite. An attempt with the above number writes 12345678901234512345678901e-11 to the DB and when reading it out the following error occurs:
sql: Scan error on column index 4, name "foo": 12345678901234512345678901e-11 is not a number
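For the PostgreSQL case described above, a minimal model sketch might look like this (the DSN and the chosen precision are illustrative assumptions):

package main

import (
    "github.com/jackc/pgtype"
    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

type Model struct {
    gorm.Model
    // On Postgres, pgtype.Numeric maps onto a NUMERIC column directly.
    Foo *pgtype.Numeric `gorm:"not null;type:numeric(50,15)"`
}

func main() {
    dsn := "host=localhost user=postgres dbname=test" // hypothetical DSN
    db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
    if err != nil {
        panic("failed to connect database")
    }
    db.AutoMigrate(&Model{})
}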

db.First() not using primary key name

I am writing some sample code to understand GORM, but I appear to be having an issue with setting up the primary_key value; as a result, the generated SQL query is broken.
Please see the sample code:
package main

import (
    "fmt"
    "os"
    "time"

    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

type Post struct {
    id          int       `json:"id" gorm:"primary_key:id"`
    url         string    `gorm:"url"`
    content     string    `gorm:"content"`
    created_at  time.Time `gorm:"created_at"`
    normalized  string    `gorm:"normalized"`
    account_id  int       `gorm:"account_id"`
    posthash    []byte    `gorm:"posthash"`
    received_at time.Time `gorm:"received_at"`
}

func main() {
    dsn := "user=postgres dbname=localtest"
    db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
    if err != nil {
        fmt.Println("An error occurred", err)
        os.Exit(1)
    }

    db.AutoMigrate(&Post{})

    var post Post
    db.First(&post, 1)
    fmt.Println(post)
}
When I run this code, I receive the following error:
$ go run gorm_test.go
2020/12/01 00:34:06 /home/farhan/gorm_test.go:32 ERROR: syntax error at or near "=" (SQLSTATE 42601)
[0.128ms] [rows:0] SELECT * FROM "posts" WHERE "posts". = 1 ORDER BY "posts". LIMIT 1
{0 {0 0 <nil>} 0 [] {0 0 <nil>}}
The nature of this error suggests to me that the primary_key value is not being set. I tried primaryKey and primarykey, but neither appeared to work. Any ideas?
@Shubham Srivastava: your structure declaration is the problem; you need to export the fields so GORM can use them (see Declaring Models in the GORM docs).
❗️Use exported ID field
In Go, when a field name starts with a lowercase letter, the field is private (unexported, in Go parlance). GORM, being a third-party package, cannot see inside your struct to know there is an id field, or any other unexported field for that matter.
The solution is to make sure all the fields that need to come from the DB are exported:
type Post struct {
    ID uint `json:"id" gorm:"primaryKey"`
    ...
}
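For completeness, here is a sketch of the question's full model with every field exported (it assumes the time import from the original program; note also that a bare tag like gorm:"url" is not a valid column override, the column: key is required, though GORM derives these snake_case column names from the field names automatically anyway):

type Post struct {
    ID         uint      `json:"id" gorm:"primaryKey"`
    URL        string    `gorm:"column:url"`
    Content    string    `gorm:"column:content"`
    CreatedAt  time.Time `gorm:"column:created_at"`
    Normalized string    `gorm:"column:normalized"`
    AccountID  int       `gorm:"column:account_id"`
    Posthash   []byte    `gorm:"column:posthash"`
    ReceivedAt time.Time `gorm:"column:received_at"`
}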

sql.Rows to slice of strings

I'm using the standard Go database/sql package to interface with AWS Athena.
My query returns, for each record, a uuid (string) and an array of emails.
Here is the code:
package main

import (
    "database/sql"
    "fmt"

    _ "encoding/json"
    _ "github.com/segmentio/go-athena"
)

type Contact struct {
    userid string
    emails []string
}

func main() {
    fmt.Println("hello")
    db, err := sql.Open("athena", "db=example")
    if err != nil {
        panic(err)
    }

    rows, err := db.Query("SELECT userid, transform(value.emails, x -> x.value) from database LIMIT 10")
    // Returns
    // Row 1: "abc-123", ["email1@gmail.com", "email2@gmail.com"]
    // Row 2: "def-456", ["email3@gmail.com"]
    if err != nil {
        panic(err)
    }

    for rows.Next() {
        var contact Contact
        rows.Scan(&contact.userid, &contact.emails)
        fmt.Println(contact)
    }
}
However, I get this error in the for loop:
panic: unknown type `array` with value [email1@gmail.com]
I'm confused by the array type mentioned and can't make sense of the error.
How can I map the list of emails returned to a slice of strings in the Contact struct?
Athena supports structural data types, for example the structural type array:
Structural types
ARRAY < data_type >
From the message you get, I assume the email column is of type ARRAY<VARCHAR>. In addition, segmentio/go-athena panics on unsupported operations, such as Begin for a transaction (transactions are not supported in Athena). To read the data into a Go slice you have to put in some logic yourself; see read "SELECT *" columns into []string in go or Read a Postgresql array directly into a Golang Slice for a starter. As you can see with the pq driver, reading an array may be implemented differently from just scanning a row. A sketch of one possible approach follows.
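Here is a minimal sketch of a custom sql.Scanner that turns the driver's raw value into a []string. It assumes the Athena driver hands the array back as a string such as "[email1@gmail.com, email2@gmail.com]"; verify what your driver actually returns before relying on this parsing:

package main

import (
    "fmt"
    "strings"
)

// StringArray implements sql.Scanner so it can be passed to rows.Scan.
type StringArray []string

// Scan parses the raw driver value into a slice of strings.
func (a *StringArray) Scan(src interface{}) error {
    var raw string
    switch v := src.(type) {
    case string:
        raw = v
    case []byte:
        raw = string(v)
    case nil:
        *a = nil
        return nil
    default:
        return fmt.Errorf("unsupported type %T for StringArray", src)
    }
    // Assumed wire format: "[item1, item2]"; adjust to the driver's actual output.
    raw = strings.Trim(raw, "[]")
    if raw == "" {
        *a = StringArray{}
        return nil
    }
    parts := strings.Split(raw, ",")
    for i := range parts {
        parts[i] = strings.TrimSpace(parts[i])
    }
    *a = parts
    return nil
}

func main() {
    var emails StringArray
    // Simulate what rows.Scan would pass through from the driver.
    if err := emails.Scan("[email1@gmail.com, email2@gmail.com]"); err != nil {
        panic(err)
    }
    fmt.Println(emails)
}

With this in place, the struct field becomes emails StringArray, and rows.Scan(&contact.userid, &contact.emails) can populate it.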

What's the format of timestamp to write parquet file in go

I am trying to write a Go struct to a Parquet file and upload it to S3. What format and type do I specify for the timestamp field in the struct so that Athena displays the correct timestamp when reading from the Parquet file?
type example struct {
    ID        int64 `parquet:"name=id, type=INT64"`
    CreatedAt int64 `parquet:"name=created_at, type=TIMESTAMP_MILLIS"`
}

ex := example{}
ex.ID = int64(10)
ex.CreatedAt = time.Now().Unix()

fw, err := ParquetFile.NewLocalFileWriter("new.parquet")
pw, err := ParquetWriter.NewParquetWriter(fw, new(example), 1)
pw.Write(ex)

Then I upload the file new.parquet to S3.
Reference: https://github.com/xitongsys/parquet-go. I created a table in Athena with an int and a timestamp field for this and tried querying the table. The date shows up as something like 1970-01-18 21:54:23.751, which is nowhere near the current timestamp.
time.Now().Unix() returns seconds since the Unix epoch, but a TIMESTAMP_MILLIS value is interpreted as milliseconds, so Athena reads your value as a moment roughly 18 days after January 1, 1970. Convert to milliseconds instead. For example,
package main

import (
    "fmt"
    "time"
)

func main() {
    type example struct {
        CreatedAt int64 `parquet:"name=created_at, type=TIMESTAMP_MILLIS"`
    }

    ex := example{}
    ex.CreatedAt = time.Now().UnixNano() / int64(time.Millisecond)
    fmt.Println(ex.CreatedAt)
}
Playground: https://play.golang.org/p/ePOlUKiT6fD
Output:
1257894000000
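Putting it together, here is a sketch of the full write path with the millisecond conversion, based on the xitongsys/parquet-go examples; the import paths and the parallelism argument to NewParquetWriter are assumptions, so check them against the version you use:

package main

import (
    "log"
    "time"

    "github.com/xitongsys/parquet-go-source/local"
    "github.com/xitongsys/parquet-go/writer"
)

type example struct {
    ID        int64 `parquet:"name=id, type=INT64"`
    CreatedAt int64 `parquet:"name=created_at, type=TIMESTAMP_MILLIS"`
}

func main() {
    fw, err := local.NewLocalFileWriter("new.parquet")
    if err != nil {
        log.Fatal(err)
    }
    defer fw.Close()

    pw, err := writer.NewParquetWriter(fw, new(example), 1)
    if err != nil {
        log.Fatal(err)
    }

    ex := example{
        ID:        10,
        CreatedAt: time.Now().UnixNano() / int64(time.Millisecond), // milliseconds, not seconds
    }
    if err := pw.Write(ex); err != nil {
        log.Fatal(err)
    }

    // WriteStop flushes the footer; without it the file is unreadable.
    if err := pw.WriteStop(); err != nil {
        log.Fatal(err)
    }
}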

Couldn't convert <nil> into type ...?

I tried using database/sql to query a database row into a Go type; my code snippet follows:
type User struct {
    user_id     int64
    user_name   string
    user_mobile string
    password    string
    email       interface{}
    nickname    string
    level       byte
    locked      bool
    create_time string
    comment     string // convert <nil> to *string error
}

func TestQueryUser(t *testing.T) {
    db := QueryUser(driverName, dataSourceName)
    stmtResults, err := db.Prepare(queryAll)
    defer stmtResults.Close()

    var r *User = new(User)
    arr := []interface{}{
        &r.user_id, &r.user_name, &r.user_mobile, &r.password, &r.email,
        &r.nickname, &r.level, &r.locked, &r.create_time, &r.comment,
    }
    err = stmtResults.QueryRow(username).Scan(arr...)
    if err != nil {
        t.Error(err.Error())
    }
    fmt.Println(r.email)
}
MySQL:
As you can see, some fields have NULL values, so I set type interface{} in the Go User struct, which converts NULL to nil.
--- FAIL: TestQueryUser (0.00s)
user_test.go:48: sql: Scan error on column index 9: unsupported Scan, storing driver.Value type <nil> into type *string
Does somebody have a better way, or must I change the MySQL fields and set DEFAULT ''?
First, the short answer: there are types in the sql package for exactly this, for example sql.NullString (for nullable strings in your table; likewise sql.NullInt64, sql.NullBool, and so on), and you should use them in your struct.
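In the question's struct, comment string would become comment sql.NullString, and the scan target stays &r.comment. A self-contained sketch of how the type behaves (the values fed to Scan stand in for what the driver would deliver):

package main

import (
    "database/sql"
    "fmt"
)

func main() {
    var comment sql.NullString

    // Scan is what database/sql calls under the hood; nil models a NULL column.
    if err := comment.Scan(nil); err != nil {
        panic(err)
    }
    fmt.Println(comment.Valid) // false: the column was NULL

    if err := comment.Scan("hello"); err != nil {
        panic(err)
    }
    if comment.Valid {
        fmt.Println(comment.String) // "hello"
    }
}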
The long one: there are two interfaces available for this in Go. The first is Scanner and the other is Valuer; for any special type in the database (for example, I use this mostly with JSONB in Postgres) you create a type and implement one or both of these interfaces on it.
Scanner is used when you call the Scan function: the data from the database driver, normally a []byte, is the input, and you are responsible for handling it. Valuer is used when the value is an input in a query; the result is normally a slice of bytes (and an error). If you only need to read data, Scanner is enough, and vice versa: if you only need to write a parameter in a query, Valuer is enough.
For an example implementation, I recommend looking at the types in the sql package.
There is also an example of a type to use with the JSONB/JSON type in PostgreSQL:
// GenericJSONField is used to handle generic JSON data in Postgres.
type GenericJSONField map[string]interface{}

// Scan converts the raw database value into our type.
func (v *GenericJSONField) Scan(src interface{}) error {
    var b []byte
    switch s := src.(type) {
    case []byte:
        b = s
    case string:
        b = []byte(s)
    case nil:
        // A NULL column: leave the map nil rather than handing
        // json.Unmarshal empty input, which would return an error.
        *v = nil
        return nil
    default:
        return errors.New("unsupported type")
    }
    return json.Unmarshal(b, v)
}

// Value returns the JSON representation to store in the database.
func (v GenericJSONField) Value() (driver.Value, error) {
    return json.Marshal(v)
}
The driver value is often []byte, but string and nil are acceptable too, so this can handle nullable fields as well.
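A hypothetical usage sketch with the type above; the documents table, its payload jsonb column, and the lib/pq driver choice are all assumptions for illustration:

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/lib/pq"
)

func main() {
    // Hypothetical DSN; any Postgres driver that delivers []byte or string works.
    db, err := sql.Open("postgres", "dbname=example sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Valuer side: the map is marshalled to JSON on the way in.
    _, err = db.Exec("INSERT INTO documents (payload) VALUES ($1)",
        GenericJSONField{"some_key": "some_value"})
    if err != nil {
        log.Fatal(err)
    }

    // Scanner side: the JSONB column is unmarshalled on the way out.
    var payload GenericJSONField
    err = db.QueryRow("SELECT payload FROM documents ORDER BY id DESC LIMIT 1").Scan(&payload)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(payload["some_key"])
}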
