This is my first script using go-sql-driver.
My mysql table (PRODUCT) looks like:
id int
name varchar(255)
IsMatch tinyint(1)
created datetime
I want to simply load a row from a table, and bind it to a struct.
I have this so far:
package main
import (
"database/sql"
"fmt"
_ "github.com/go-sql-driver/mysql"
)
type Product struct {
Id int64
Name string
IsMatch ??????????
Created ?????
}
func main() {
fmt.Printf("hello, world!\n")
db, err := sql.Open("mysql", "root:#/product_development")
defer db.Close()
err = db.Ping()
if err != nil {
panic(err.Error()) // proper error handling instead of panic in your app
}
rows, err := db.Query("SELECT * FROM products where id=1")
if err != nil {
panic(err.Error()) // proper error handling instead of panic in your app
}
}
Now I need to:
1. What datatype in Go do I use for tinyint and datetime?
2. How do I map the rows to a Product struct?
What datatype in Go do I use for tinyint and datetime?
For a hint as to the types that the database/sql package will be using, have a look at the documentation for database/sql.Scanner, which lists the Go types used within database/sql itself:
int64
float64
bool
[]byte
string
time.Time
nil - for NULL values
This would lead you to try int64 for IsMatch and time.Time for Created. In practice you can use pretty much any sized int (maybe even bool; you'd have to check the driver source) for IsMatch, because the value can be stored "without loss of precision." The go-mysql-driver documentation explains that you need to add parseTime=true to your DSN for DATETIME columns to be parsed into a time.Time automatically, or use NullTime.
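Putting that together, the struct could look like this (a sketch; int64 per the list above, and note the parseTime=true DSN parameter required for time.Time):
type Product struct {
    Id      int64
    Name    string
    IsMatch int64     // tinyint(1); a bool typically works too
    Created time.Time // requires parseTime=true in the DSN
}
// DSN shape (placeholder credentials):
// db, err := sql.Open("mysql", "user:password@/product_development?parseTime=true")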
How do I map the rows to a Product struct?
It should be something pretty straightforward, using Rows.Scan, like:
var products []*Product
for rows.Next() {
    p := new(Product)
    if err := rows.Scan(&p.Id, &p.Name, &p.IsMatch, &p.Created); err != nil { ... }
    products = append(products, p)
}
if err := rows.Err(); err != nil { ... }
This scans the columns into the fields of a struct and accumulates them into a slice. (Don't forget to Close the rows!)
How do I map the rows to a Product struct?
You can use reflect to bind table rows to a struct and match values automatically, without the long hard-coded scan lists that are easy to get wrong.
Here is a light demo: sqlmapper
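The core idea is only a few lines of reflect; a minimal sketch (scanToStruct is a made-up helper name), assuming the selected columns line up one-to-one with the struct fields:
import (
    "database/sql"
    "reflect"
)

// scanToStruct points rows.Scan at every field of dest, in order.
// dest must be a pointer to a struct with exported fields.
func scanToStruct(rows *sql.Rows, dest interface{}) error {
    v := reflect.ValueOf(dest).Elem()
    targets := make([]interface{}, v.NumField())
    for i := range targets {
        targets[i] = v.Field(i).Addr().Interface()
    }
    return rows.Scan(targets...)
}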
type Book struct {
    tableName struct{} `pg:"book" json:"-"`
    Id        int      `pg:"id,pk" json:"id"`
    AuthorID  int      `pg:"author_id,notnull" json:"-"`
    Author    *Author  `pg:"fk:author_id" json:"author,omitempty"`
}
I want select book and author in one query.
If I try this:
var r []model.Book
_, err := dao.FusedDb.Query(&r, `SELECT * FROM book b INNER JOIN author a on a.id = b.author_id`)
I get an error
pg: can't find column=name in model=Book (try discard_unknown_columns)
I wrote down a piece of code that I always use when I've to deal with this scenario. First, let me show the code and then I'll comment on the relevant parts:
package main
import (
    "database/sql"
    "fmt"

    "gorm.io/gorm" // provides gorm.Model, embedded in the structs below
    _ "github.com/lib/pq"
)
type Book struct {
gorm.Model
Title string
Description string
AuthorID uint
Author Author
}
type Author struct {
gorm.Model
FirstName string
LastName string
Books []Book
}
type Result struct {
BookId int
AuthorId int
Title string
FirstName string
LastName string
}
func main() {
conn, err := sql.Open("postgres", "host=localhost user=postgres password=postgres dbname=postgres port=5432 sslmode=disable")
if err != nil {
panic(err)
}
defer conn.Close()
// query
var result []Result
rows, err := conn.Query("select b.id, a.id, b.title, a.first_name, a.last_name from authors a inner join books b on a.id = b.author_id")
if err != nil {
panic(err)
}
defer rows.Close()
for rows.Next() {
var record Result
if err := rows.Scan(&record.BookId, &record.AuthorId, &record.Title, &record.FirstName, &record.LastName); err != nil {
panic(err)
}
result = append(result, record)
}
if err := rows.Err(); err != nil {
panic(err)
}
fmt.Printf("%v", result)
}
Struct definitions
The Book and Author structs represent the tables defined in my database. Result holds the records fetched by the query shown below.
The query
The query is straightforward: we call the Query method on the SQL client opened at the beginning of main, then defer a call to Close on the rows variable to clean up.
Scanning
The for loop scans every row retrieved by Query. The Next method reports whether there is another row to fetch.
In the body of the loop we declare a loop-scoped variable to hold the current record; the Scan method assigns each column to the corresponding field of the struct.
Lastly, we check for errors by invoking the Err method on the rows variable and handle any that occurred.
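If you then need the nested shape back (each Book with its Author populated), one option is a small fold over the flat results; a sketch, reusing the structs above:
books := make([]Book, 0, len(result))
for _, r := range result {
    books = append(books, Book{
        Model:    gorm.Model{ID: uint(r.BookId)},
        Title:    r.Title,
        AuthorID: uint(r.AuthorId),
        Author: Author{
            Model:     gorm.Model{ID: uint(r.AuthorId)},
            FirstName: r.FirstName,
            LastName:  r.LastName,
        },
    })
}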
Let me know if this clarifies your question, thanks!
So I have this table schema that looks like the following:
CREATE TABLE artists (
id SERIAL PRIMARY KEY,
foo_user_id TEXT NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
time_span TIME_SPAN NOT NULL,
items artist [] NOT NULL
);
In addition, I also have the custom type created in PostgreSQL that is defined as follows
CREATE TYPE artist AS (
artist_name TEXT,
foo_artist_id TEXT,
artist_image_url TEXT,
artist_rank int
);
I am trying to query all rows that have the "foo_user_id" equal to what I pass into the function. Here is the sample code.
func GetHistoricalTopArtists(foo_user_id string) ([]TopArtists, error) {
    // connect to DB etc..
    // create prepared statement
    stmtStr := `SELECT * FROM artists WHERE foo_user_id=$1`
    // error check...
    // iterate through all rows to get an array of []TopArtists
    var topArtists []TopArtists
    defer rows.Close()
    for rows.Next() {
        topArtist := new(TopArtists)
        err := rows.Scan(&topArtist.Id, &topArtist.FooUserId, &topArtist.CreatedAt, &topArtist.TimeSpan, &topArtist.Artists)
        if err != nil {
            log.Fatalf("Something went wrong %v", err)
        }
        topArtists = append(topArtists, *topArtist)
    }
    return topArtists, nil
}
To represent this data in Go I created the following structs
// Represents a row
type TopArtists struct {
Id int64 `json:"id" db:"id"`
FooUserId string `json:"foo_user_id" db:"foo_user_id"`
CreatedAt string `json:"created_at" db:"created_at"`
TimeSpan string `json:"time_span" db:"time_span"`
Artists []Artist `json:"items" db:"items"`
}
// Represents the artist column
type Artist struct {
ArtistName string `json:"artist_name"`
ArtistId string `json:"foo_artist_id"`
ArtistImageURL string `json:"artist_image_url"`
ArtistRank int `json:"artist_rank"`
}
When I call the function that does the query (the one I described above), I get the following error:
Scan error on column index 4, name "items": unsupported Scan, storing driver.Value type []uint8 into type *[]database.Artist.
I have a Value() function, but I am unsure how to implement a Scan() function for the array of the custom struct I have made.
Here is my Value() function. I have attempted to read documentation and similar posts on scanning arrays of primitive types (strings, ints, etc.), but I could not apply the logic to custom PostgreSQL types.
func (a Artist) Value() (driver.Value, error) {
    // note: no spaces after the commas, so they don't end up inside the values
    s := fmt.Sprintf("(%s,%s,%s,%d)",
        a.ArtistName,
        a.ArtistId,
        a.ArtistImageURL,
        a.ArtistRank)
    return []byte(s), nil
}
@mkopriva - ...You need to declare a slice type, e.g. type ArtistSlice []Artist,
use that as the field's type, and implement the Value/Scan methods on
that.
The custom composite type Artist created in PostgreSQL has a strict textual representation, e.g.
{(david,38,url,1),(david2,2,"url 2",2)}
so you have to implement the Value/Scan methods with a custom marshal/unmarshal algorithm.
For example:
type Artists []Artist
func (a *Artists) Scan(value interface{}) error {
    // input example: {"(david,38,url,1)","(david2,2,\"url 2\",2)"}
    var source string
    switch v := value.(type) {
    case string:
        source = v
    case []byte: // drivers commonly hand composites over as []byte
        source = string(v)
    default:
        return errors.New("incompatible type")
    }
    var res Artists
    artists := strings.Split(source, "\",\"")
    for _, artist := range artists {
        // strip the array/composite decoration so only the raw fields remain
        for _, old := range []string{"\\\"", "\"", "{", "}", "(", ")"} {
            artist = strings.ReplaceAll(artist, old, "")
        }
        artistRawData := strings.Split(artist, ",")
        rank, err := strconv.Atoi(artistRawData[3]) // artist_rank is the fourth field
        if err != nil {
            return fmt.Errorf("parse ArtistRank raw data (%s) error: %v", artist, err)
        }
        res = append(res, Artist{
            ArtistName:     artistRawData[0],
            ArtistId:       artistRawData[1],
            ArtistImageURL: artistRawData[2],
            ArtistRank:     rank,
        })
    }
    *a = res
    return nil
}
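With that in place, declare the field on TopArtists as Artists instead of []Artist, so that database/sql dispatches to the custom Scan method, e.g.:
type TopArtists struct {
    Id        int64   `json:"id" db:"id"`
    FooUserId string  `json:"foo_user_id" db:"foo_user_id"`
    CreatedAt string  `json:"created_at" db:"created_at"`
    TimeSpan  string  `json:"time_span" db:"time_span"`
    Artists   Artists `json:"items" db:"items"`
}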
I've defined a data structure like so:
type Person struct {
Name string `firestore:"name,omitempty"`
}
When I query all the documents in a collection I'd like to be able to attach the ID to the documents for later reference, but not necessarily have the ID as an attribute stored in Firestore (unless it's the only way). In JavaScript or Python this is straightforward, as the data structures are dynamic: I can just read the ID after get() and add it as a dynamic key/value. myObj.id = doc.id
How would I do this with Go?
package main
import (
"fmt"
"cloud.google.com/go/firestore"
"context"
"google.golang.org/api/iterator"
"log"
)
type Person struct {
Name string `firestore:"name,omitempty"`
}
func main() {
ctx := context.Background()
c, err := firestore.NewClient(ctx, "my-project")
if err != nil {
log.Fatalf("error: %v", err)
}
var people []Person
iter := c.Collection("people").Documents(ctx)
for {
doc, err := iter.Next()
if err == iterator.Done {
break
}
if err != nil {
log.Fatalf("error: %v", err)
}
var p Person
err = doc.DataTo(&p) // DataTo needs a pointer to populate p
if err != nil {
log.Fatalf("error: %v", err)
}
// id := doc.Ref.ID
people = append(people, p)
}
fmt.Println(people)
}
output, no ID
>> [{John Smith}]
I believe that the firestore struct tags work the same way as the tags in the encoding/json package. So a tag with a value of "-" means the field is ignored.
So
type Person struct {
ID string `firestore:"-"`
Name string `firestore:"name,omitempty"`
}
should do the trick.
You can set ID yourself, but the firestore pkg will ignore it when reading/writing data.
If you want to store the firestore document ID on the Person type, your struct must have a declared field for it.
Golang firestore docs don't mention this explicitly, but since a firestore doc ID is not part of the document fields, the func (*DocumentSnapshot) DataTo does not populate the ID. Instead, you may get the document ID from the DocumentRef type and add it to Person yourself.
The doc also states that:
Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty
Therefore, if you want to omit the ID when marshaling back to firestore, you could use the tag firestore:"-"
The Person would look like this:
type Person struct {
ID string `firestore:"-"`
Name string `firestore:"name,omitempty"`
}
inside the loop:
var p Person
err := doc.DataTo(&p)
if err != nil {
// handle it
}
p.ID = doc.Ref.ID
Here is basically my table-creation script for BigQuery in Go:
type data_pix struct {
Id string
IdC string
Stamp int64
Tag []string
}
func createTable(client *bigquery.Client, datasetID, tableID string) error {
ctx := context.Background()
// [START bigquery_create_table]
schema, err := bigquery.InferSchema(data_pix{})
if err != nil {
return err
}
table := client.Dataset(datasetID).Table(tableID)
if err := table.Create(ctx, schema); err != nil {
return err
}
// [END bigquery_create_table]
return nil
}
For the moment I mainly use a timestamp stored as an int64,
but I am looking for an example of how to add a DATETIME field to my struct and, by extension, to my data.
Thanks and regards
I have not used BigQuery myself; however, I had a look at the godoc and source code.
It seems you have to use the data type civil.DateTime in the struct.
For example, as per the godoc and source code, the following should create a DATETIME field:
type data_pix struct {
Id string
IdC string
Stamp civil.DateTime
Tag []string
}
schema, err := bigquery.InferSchema(data_pix{})
// now schema should represent DateTime Field
There is a function to get a civil.DateTime from a time.Time. I would suggest you have a look at the Go source code to learn more.
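That function is civil.DateTimeOf in cloud.google.com/go/civil. A minimal sketch of populating a row with it (the field values here are made up for illustration):
import (
    "time"

    "cloud.google.com/go/civil"
)

func newRow() data_pix {
    return data_pix{
        Id:    "some-id",  // hypothetical values
        IdC:   "some-idc",
        Stamp: civil.DateTimeOf(time.Now()), // converts time.Time -> civil.DateTime
        Tag:   []string{"a", "b"},
    }
}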
I have a table in Cassandra defined as the following :
CREATE TABLE book.book
(
title text PRIMARY KEY,
amount decimal,
available int,
createdon timestamp
)
I am trying to select * from that table and return the values in json format. I am able to achieve that using
type Book struct {
Title string `json:"title"`
Amount inf.Dec `json:"amount"`
CreatedOn time.Time `json:"createdon"`
Available int `json:"available"`
}
with
func cassandraDisplay(query string, w http.ResponseWriter) {
    cluster := gocql.NewCluster("xxxxxxxx:xxxx")
    session, _ := cluster.CreateSession()
    defer session.Close()
    iter := session.Query("SELECT * FROM book.book").Iter()
    var book Book
    for iter.Scan(&book.Title, &book.Amount, &book.CreatedOn, &book.Available) {
        fmt.Println(book.Title, book.Amount, book.CreatedOn, book.Available)
        j, err := json.Marshal(&book)
        if err != nil {
            panic(err)
        }
        w.Write(j) // do things with j
    }
    if err := iter.Close(); err != nil {
        log.Fatal(err)
    }
}
But the requirement is that everything be dynamic, with no hard-coded column info, since it is an HTTP service and the query will be passed through the URL.
Any idea how to get this to work?
@Michael,
You may want to use MapScan: https://godoc.org/github.com/gocql/gocql#Iter.MapScan
This is as abstract as it can get.
From https://github.com/gocql/gocql/blob/master/cassandra_test.go:
...
testMap := make(map[string]interface{})
if !session.Query(`SELECT * FROM slice_map_table`).Iter().MapScan(testMap) {
t.Fatal("MapScan failed to work with one row")
}
...
And after that you'll need to inspect/explore the map content (e.g. via type switches or reflection), but that's a different topic.
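Building on that, here is a rough sketch of the dynamic case (dynamicQuery and the JSON-over-HTTP shape are illustrative assumptions, not part of the answer above):
// a sketch: run an arbitrary query and stream every row back as JSON
func dynamicQuery(session *gocql.Session, query string, w http.ResponseWriter) {
    iter := session.Query(query).Iter()
    var rows []map[string]interface{}
    for {
        row := make(map[string]interface{}) // fresh map per row
        if !iter.MapScan(row) {
            break
        }
        rows = append(rows, row)
    }
    if err := iter.Close(); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    _ = json.NewEncoder(w).Encode(rows) // marshal whatever columns came back
}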