http.ResponseWriter.Write with interface{} - go

I'm running a SQL query using the sample from denisenkom, but coupled with an http.ResponseWriter, and I'm struggling with interface{} type conversion. There are a few posts that are close to what I'm doing, but the solutions seem kind of heavy-handed and always use fmt (which I'm not using).
Note that my query works and returns a result. I'm just trying to display that result.
Here is my code that I think is relatively close but doesn't work. I've tried a couple other things but none even compile.
vals := make([]interface{}, len(cols))
for i := 0; i < len(cols); i++ {
vals[i] = new(interface{})
if i != 0 {
w.Write([]byte("\t"))
}
w.Write([]byte(cols[i]))
}
for rows.Next() {
err = rows.Scan(vals...)
if err != nil {
w.Write([]byte("Row problem: " + err.Error()))
continue
}
for i := 0; i < len(vals); i++ {
if i != 0 {
w.Write([]byte("\t"))
}
//THIS IS THE PART I'M STUCK ON
switch v := vals[i].(type) {
case int:
w.Write([]byte("int!\n" + string(v)))
case int8:
w.Write([]byte("int8!\n" + string(v)))
//etc, etc
//more types
//etc, etc
case float64:
w.Write([]byte("float64!\n" + string(v))) //This fails, can't convert, will need something else
case string:
w.Write([]byte("string!\n" + v))
default:
w.Write([]byte("something else!\n"))
}
}
}
But isn't there a better way to dynamically check the underlying type and convert it to something readable? All I want to do is spit out the results of a query; it feels like I'm doing something wrong.
Note that it always hits the default case, even when it's explicitly the same type.

The example at denisenkom prints to stdout using the fmt.Print family of functions. Change the example to print to a response writer by using the fmt.Fprint family of functions.
The fmt.Fprint functions write to the io.Writer specified as the first argument. The response writer satisfies the io.Writer interface. Here's the modified code from the original example where w is the http.ResponseWriter.
vals := make([]interface{}, len(cols))
for i := 0; i < len(cols); i++ {
vals[i] = new(interface{})
if i != 0 {
fmt.Fprint(w, "\t")
}
fmt.Fprint(w, cols[i])
}
fmt.Fprintln(w)
for rows.Next() {
err = rows.Scan(vals...)
if err != nil {
fmt.Fprintln(w, err)
continue
}
for i := 0; i < len(vals); i++ {
if i != 0 {
fmt.Fprint(w, "\t")
}
printValue(w, vals[i].(*interface{}))
}
fmt.Fprintln(w)
}
...
func printValue(w io.Writer, pval *interface{}) {
switch v := (*pval).(type) {
case nil:
fmt.Fprint(w, "NULL")
case bool:
if v {
fmt.Fprint(w, "1")
} else {
fmt.Fprint(w, "0")
}
case []byte:
fmt.Fprint(w, string(v))
case time.Time:
fmt.Fprint(w, v.Format("2006-01-02 15:04:05.999"))
default:
fmt.Fprint(w, v)
}
}
Regarding the code in the question: the expression string(v) is a string conversion. A string conversion does not convert numbers to their decimal representation, as the code assumes. See the spec for details on string conversions. Use either the fmt package as above or the strconv package to convert numbers to decimal strings. The type switch should be switch v := (*(vals[i].(*interface{}))).(type) {. That's a bit of a mess, and that leads to my next point.
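To make the string-conversion pitfall concrete, here is a small standalone sketch (my own illustration, not code from the answer): converting an integer with string() treats the number as a Unicode code point, while strconv and fmt produce the decimal digits you actually want.
package main

import (
    "fmt"
    "strconv"
)

func main() {
    n := 65
    fmt.Println(string(rune(n))) // "A": 65 is treated as a code point
    fmt.Println(strconv.Itoa(n)) // "65": decimal string via strconv
    fmt.Println(fmt.Sprint(n))   // "65": decimal string via fmt
}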
The original example uses more indirection than needed. Here's a simplified version ready to be called with a response writer:
func exec(w io.Writer, db *sql.DB, cmd string) error {
rows, err := db.Query(cmd)
if err != nil {
return err
}
defer rows.Close()
cols, err := rows.Columns()
if err != nil {
return err
}
if cols == nil {
return nil
}
vals := make([]interface{}, len(cols))
args := make([]interface{}, len(cols))
for i := 0; i < len(cols); i++ {
args[i] = &vals[i]
if i != 0 {
fmt.Fprint(w, "\t")
}
fmt.Fprint(w, cols[i])
}
fmt.Fprintln(w)
for rows.Next() {
err = rows.Scan(args...)
if err != nil {
fmt.Fprintln(w, err)
continue
}
for i := 0; i < len(vals); i++ {
if i != 0 {
fmt.Fprint(w, "\t")
}
printValue(w, vals[i])
}
fmt.Fprintln(w)
}
if rows.Err() != nil {
return rows.Err()
}
return nil
}
func printValue(w io.Writer, v interface{}) {
switch v := v.(type) {
case nil:
fmt.Fprint(w, "NULL")
case bool:
if v {
fmt.Fprint(w, "1")
} else {
fmt.Fprint(w, "0")
}
case []byte:
fmt.Fprint(w, string(v))
case time.Time:
fmt.Fprint(w, v.Format("2006-01-02 15:04:05.999"))
default:
fmt.Fprint(w, v)
}
}
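As a usage note (my own sketch, not part of the original answer): exec takes the response writer directly, so a handler only needs to pass w along. The handler name, route, and query below are made-up examples, and the database/sql and net/http imports are assumed.
func salesHandler(db *sql.DB) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "text/plain; charset=utf-8")
        // Hypothetical query; replace with whatever the application needs.
        if err := exec(w, db, "SELECT id, amount, client_name FROM sales"); err != nil {
            // If exec failed before writing anything, this sends a 500;
            // if rows were already written, the status can no longer change.
            http.Error(w, err.Error(), http.StatusInternalServerError)
        }
    }
}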

Related

Testing with Golang, redis and time

I was trying to test a bit with Redis for the first time and I bumped into some confusion with HGET/HSET/HGETALL. My main problem was that I needed to store time, and I wanted to use a hash as I'll continuously update the time.
At first I read about how a MarshalBinary function such as this would save me:
func (f Foo) MarshalBinary() ([]byte, error) {
return json.Marshal(f)
}
What that did was save the struct as a JSON string, but only as a string and not as an actual Redis hash. What I ended up doing instead was a fairly large chunk of boilerplate code that turns the struct I want to save into a map, which is then properly stored as a hash in Redis.
type Foo struct {
Number int `json:"number"`
ATime time.Time `json:"atime"`
String string `json:"astring"`
}
func (f Foo) toRedis() map[string]interface{} {
res := make(map[string]interface{})
rt := reflect.TypeOf(f)
rv := reflect.ValueOf(f)
if rt.Kind() == reflect.Ptr {
rt = rt.Elem()
rv = rv.Elem()
}
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
v := rv.Field(i)
switch t := v.Interface().(type) {
case time.Time:
res[f.Tag.Get("json")] = t.Format(time.RFC3339)
default:
res[f.Tag.Get("json")] = t
}
}
return res
}
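For context (an assumption on my part, since the question doesn't show this part), the map produced by toRedis can be written as a real hash roughly like this with the go-redis client; the address and key name are made up.
// Sketch only: assumes github.com/go-redis/redis/v8 and a local Redis.
ctx := context.Background()
rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

foo := Foo{Number: 42, ATime: time.Now().UTC(), String: "hello"}
if err := rdb.HSet(ctx, "foo:1", foo.toRedis()).Err(); err != nil {
    log.Fatal(err)
}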
Then, to parse the result back into my Foo struct after calling HGetAll(..).Result(), I get a map[string]string and create a new Foo with these functions:
func setRequestParam(arg *Foo, i int, value interface{}) {
v := reflect.ValueOf(arg).Elem()
f := v.Field(i)
if f.IsValid() {
if f.CanSet() {
if f.Kind() == reflect.String {
f.SetString(value.(string))
return
} else if f.Kind() == reflect.Int {
f.Set(reflect.ValueOf(value))
return
} else if f.Kind() == reflect.Struct {
f.Set(reflect.ValueOf(value))
}
}
}
}
func fromRedis(data map[string]string) (f Foo) {
rt := reflect.TypeOf(f)
rv := reflect.ValueOf(f)
for i := 0; i < rt.NumField(); i++ {
field := rt.Field(i)
v := rv.Field(i)
switch v.Interface().(type) {
case time.Time:
if val, ok := data[field.Tag.Get("json")]; ok {
if ti, err := time.Parse(time.RFC3339, val); err == nil {
setRequestParam(&f, i, ti)
}
}
case int:
if val, ok := data[field.Tag.Get("json")]; ok {
in, _ := strconv.ParseInt(val, 10, 32)
setRequestParam(&f, i, int(in))
}
default:
if val, ok := data[field.Tag.Get("json")]; ok {
setRequestParam(&f, i, val)
}
}
}
return
}
The whole code, in all its ungainly glory, is here.
I'm thinking there must be a saner way to solve this problem. Or am I forced to do something like this? The struct I need to store only contains ints, strings and time.Times.
*edit
The comment field is a bit short so doing an edit instead:
I did originally solve it the way 'The Fool' suggested in the comments and in an answer. The reason I changed to the approach above, even though it's a more complex solution, is that I think it's more robust against changes. If I go with a hard-coded map solution, I'd "have to" have:
Constants with the hash keys for the fields, since they'll be used in at least two places (to and from Redis); that's a place for silly mistakes the compiler won't pick up. I can of course skip that, but knowing my own spelling, mistakes are likely to happen.
If someone just wants to add a new field and doesn't know the code well, it will compile just fine but the new field won't be written to Redis. That's an easy mistake to make, especially for junior developers being a bit naive, or seniors with too much confidence.
I can put these helper functions in a library, and things will just magically work for all our code when a time or complex type is needed.
My intended question/hope, though, was: do I really have to jump through hoops like this to store time in Redis hashes with Go? Fair enough, time.Time isn't a primitive and Redis isn't a (no)SQL database, but I would consider timestamps in a cache a very common use case (in my case a heartbeat to keep track of timed-out sessions, together with enough metadata to store them permanently, hence the need to update them). But maybe I'm misusing Redis, and I should instead have two entries, one for the data and one for the timestamp, which would leave me with two simple get/set functions taking in and returning a time.Time.
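To make that last alternative concrete, here is a minimal sketch of the two-entry variant (the go-redis client and the key naming are my assumptions, not code from the question):
// Hypothetical "separate timestamp entry" approach: the heartbeat lives
// under its own key, so no hash-field conversion is needed.
func setHeartbeat(ctx context.Context, rdb *redis.Client, id string, t time.Time) error {
    return rdb.Set(ctx, "session:"+id+":heartbeat", t.Format(time.RFC3339Nano), 0).Err()
}

func getHeartbeat(ctx context.Context, rdb *redis.Client, id string) (time.Time, error) {
    s, err := rdb.Get(ctx, "session:"+id+":heartbeat").Result()
    if err != nil {
        return time.Time{}, err
    }
    return time.Parse(time.RFC3339Nano, s)
}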
You can use redigo's redis.Args.AddFlat to convert the struct into a Redis hash; the values are mapped using the redis struct tag.
package main
import (
"fmt"
"time"
"github.com/gomodule/redigo/redis"
)
type Foo struct {
Number int64 `json:"number" redis:"number"`
ATime time.Time `json:"atime" redis:"atime"`
AString string `json:"astring" redis:"astring"`
}
func main() {
c, err := redis.Dial("tcp", ":6379")
if err != nil {
fmt.Println(err)
return
}
defer c.Close()
t1 := time.Now().UTC()
var foo Foo
foo.Number = 10000000000
foo.ATime = t1
foo.AString = "Hello"
tmp := redis.Args{}.Add("id1").AddFlat(&foo)
if _, err := c.Do("HMSET", tmp...); err != nil {
fmt.Println(err)
return
}
v, err := redis.StringMap(c.Do("HGETALL", "id1"))
if err != nil {
fmt.Println(err)
return
}
fmt.Printf("%#v\n", v)
}
Then, to update just ATime, you can set that single hash field with HMSET (or HSET):
if _, err := c.Do("HMSET", "id1", "atime", t1.Add(-time.Hour * (60 * 60 * 24))); err != nil {
fmt.Println(err)
return
}
And to retrieve it back into the struct, we have to do some reflect magic:
func structFromMap(src map[string]string, dst interface{}) error {
dt := reflect.TypeOf(dst).Elem()
dv := reflect.ValueOf(dst).Elem()
for i := 0; i < dt.NumField(); i++ {
sf := dt.Field(i)
sv := dv.Field(i)
if v, ok := src[strings.ToLower(sf.Name)]; ok {
switch sv.Interface().(type) {
case time.Time:
format := "2006-01-02 15:04:05 -0700 MST"
ti, err := time.Parse(format, v)
if err != nil {
return err
}
sv.Set(reflect.ValueOf(ti))
case int, int64:
x, err := strconv.ParseInt(v, 10, sv.Type().Bits())
if err != nil {
return err
}
sv.SetInt(x)
default:
sv.SetString(v)
}
}
}
return nil
}
Final Code
package main
import (
"fmt"
"time"
"reflect"
"strings"
"strconv"
"github.com/gomodule/redigo/redis"
)
type Foo struct {
Number int64 `json:"number" redis:"number"`
ATime time.Time `json:"atime" redis:"atime"`
AString string `json:"astring" redis:"astring"`
}
func main() {
c, err := redis.Dial("tcp", ":6379")
if err != nil {
fmt.Println(err)
return
}
defer c.Close()
t1 := time.Now().UTC()
var foo Foo
foo.Number = 10000000000
foo.ATime = t1
foo.AString = "Hello"
tmp := redis.Args{}.Add("id1").AddFlat(&foo)
if _, err := c.Do("HMSET", tmp...); err != nil {
fmt.Println(err)
return
}
v, err := redis.StringMap(c.Do("HGETALL", "id1"))
if err != nil {
fmt.Println(err)
return
}
fmt.Printf("%#v\n", v)
if _, err := c.Do("HMSET", "id1", "atime", t1.Add(-time.Hour * (60 * 60 * 24))); err != nil {
fmt.Println(err)
return
}
var foo2 Foo
structFromMap(v, &foo2)
fmt.Printf("%#v\n", foo2)
}
func structFromMap(src map[string]string, dst interface{}) error {
dt := reflect.TypeOf(dst).Elem()
dv := reflect.ValueOf(dst).Elem()
for i := 0; i < dt.NumField(); i++ {
sf := dt.Field(i)
sv := dv.Field(i)
if v, ok := src[strings.ToLower(sf.Name)]; ok {
switch sv.Interface().(type) {
case time.Time:
format := "2006-01-02 15:04:05 -0700 MST"
ti, err := time.Parse(format, v)
if err != nil {
return err
}
sv.Set(reflect.ValueOf(ti))
case int, int64:
x, err := strconv.ParseInt(v, 10, sv.Type().Bits())
if err != nil {
return err
}
sv.SetInt(x)
default:
sv.SetString(v)
}
}
}
return nil
}
Note: structFromMap looks the hash keys up by the lowercased struct field name, so this only works because each lowercased field name matches its redis tag.

Golang: update slice in loop for empty interface

For example, we have 3 CSV files, and the column common to all of them is Email. The first file has Name and Email; the others have Email (plus different info) but no Name field. So I need to fill in the Name field in files 2 and 3 based on the Name/Email correspondence from the first file. I wrote code like this:
package main
import (
"fmt"
"io/ioutil"
"log"
"path/filepath"
"strings"
"github.com/jszwec/csvutil"
)
type User struct {
Name string `csv:"name"`
Email string `csv:"email"`
}
type Good struct {
User
Dt string `csv:"details"`
}
type Strange struct {
User
St string `csv:"status"`
Dt string `csv:"details"`
}
var lst map[string]string
func readCSV(fn string, dat interface{}) error {
raw, err := ioutil.ReadFile(fn)
if err != nil {
return fmt.Errorf("Cannot read CSV: %w", err)
}
if err := csvutil.Unmarshal(raw, dat); err != nil {
return fmt.Errorf("Cannot unmarshal CSV: %w", err)
}
return nil
}
func fixNames(fl string, in interface{}) error {
if err := readCSV(fl, in); err != nil {
return fmt.Errorf("CSV: %w", err)
}
switch in.(type) {
case *[]Good:
var vals []Good
for _, v := range *in.(*[]Good) {
v.Name = lst[strings.TrimSpace(strings.ToLower(v.Email))]
vals = append(vals, v)
}
in = vals
case *[]Strange:
var vals []Strange
for _, v := range *in.(*[]Strange) {
v.Name = lst[strings.TrimSpace(strings.ToLower(v.Email))]
vals = append(vals, v)
}
in = vals
}
b, err := csvutil.Marshal(in)
if err != nil {
return fmt.Errorf("Cannot marshal CSV: %w", err)
}
ext := filepath.Ext(fl)
bas := filepath.Base(fl)
err = ioutil.WriteFile(bas[:len(bas)-len(ext)]+"-XIAOSE"+ext, b, 0644)
if err != nil {
return fmt.Errorf("Cannot save CSV: %w", err)
}
return nil
}
func main() {
var users []User
if err := readCSV("./Guitar_Contacts.csv", &users); err != nil {
log.Fatalf("CSV: %s", err)
}
lst = make(map[string]string)
for _, v := range users {
lst[strings.TrimSpace(strings.ToLower(v.Email))] = v.Name
}
var usersGood []Good
if err := fixNames("./Guitar-Good.csv", &usersGood); err != nil {
log.Fatalf("fix: %s", err)
}
var usersStrange []Strange
if err := fixNames("./Guitar-Uknown.csv", &usersStrange); err != nil {
log.Fatalf("fix: %s", err)
}
fmt.Println("OK")
}
In this code I don't like the part in func fixNames with the switch:
switch in.(type) {
case *[]Good:
var vals []Good
for _, v := range *in.(*[]Good) {
v.Name = lst[strings.TrimSpace(strings.ToLower(v.Email))]
vals = append(vals, v)
}
in = vals
case *[]Strange:
var vals []Strange
for _, v := range *in.(*[]Strange) {
v.Name = lst[strings.TrimSpace(strings.ToLower(v.Email))]
vals = append(vals, v)
}
in = vals
}
because I just repeat the code in the *in.(SOME_TYPE) branches. I want one loop and one action for the different types, i.e. structs that have Name and Email fields...
I also had the idea to do it with reflection, something like this:
v := reflect.ValueOf(in)
v = v.Elem()
for i := 0; i < v.Len(); i++ {
fmt.Println(v.Index(i))
}
but I don't know what to do next, i.e. how to set the Name value on that v.
You don't need reflection for this particular case. You can clean the code up by realizing that you are only working on the User part of the structs, and that you can simplify the type switch:
fix:=func(in *User) {
in.Name = lst[strings.TrimSpace(strings.ToLower(in.Email))]
}
switch k:=in.(type) {
case *[]Good:
for i := range *k {
fix( &(*k)[i].User )
}
case *[]Strange:
for i := range *k {
fix( &(*k)[i].User )
}
}
You have to repeat the for loop, but the above code does the correction in place.
You can clean it up a bit more by not passing a reference to the slice, since a slice value already refers to its backing array, as sketched below.
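A sketch of that point (my reading of it, not code from the answer): a helper can take the slice itself and still fix the elements in place.
// Hypothetical helpers; Good, Strange and lst come from the question's code.
func fixGood(goods []Good) {
    for i := range goods {
        goods[i].Name = lst[strings.TrimSpace(strings.ToLower(goods[i].Email))]
    }
}

func fixStrange(strangers []Strange) {
    for i := range strangers {
        strangers[i].Name = lst[strings.TrimSpace(strings.ToLower(strangers[i].Email))]
    }
}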
With the reflect package, you can do it like this:
func fixNames(fl string, in interface{}) error {
//other code
v := reflect.ValueOf(in)
if v.Kind() == reflect.Ptr {
arr := v.Elem()
fmt.Println(arr.Len())
if arr.Kind() == reflect.Slice || arr.Kind() == reflect.Array {
for i := 0; i < arr.Len(); i++ {
elem := arr.Index(i)
f := elem.FieldByName("Name")
f.SetString("NameOfUser")
}
}
}
// other code
}
Also playground example: https://play.golang.org/p/KrGvLVprslH

How to Unmarshal float64 records in database to create insert

The project I'm working on is supposed to read a file, then generate inserts based on that file.
Right now, the script works (kind of).
I'm running into an issue when creating the structs for the tables, specifically for fields that are float64s.
The script I've made accounts for fields of type string and int. I'm able to insert into tables that have those field types, but if a field in my database is a float, the script doesn't insert any of the data.
Here's how my script is currently set up to check for types of int and string.
func UnmarshalCsvRecord(readerTest *csv.Reader, v interface{}) error {
recordtest, err := readerTest.Read()
if err != nil {
return err
}
s := reflect.ValueOf(v).Elem()
if s.NumField() != len(recordtest) {
return &CheckField{s.NumField(), len(recordtest)}
}
for i := 0; i < s.NumField(); i++ {
f := s.Field(i)
switch f.Type().String() {
case "string":
f.SetString(recordtest[i])
case "int":
ival, err := strconv.ParseInt(recordtest[i], 10, 0)
if err != nil {
return err
}
f.SetInt(ival)
default:
return &UnsupportedCheck{f.Type().String()}
}
}
return nil
}
type CheckField struct {
expected, found int
}
func (e *CheckField) Error() string {
return "CSV line fields mismatch. Expected " + strconv.Itoa(e.expected) + " found " + strconv.Itoa(e.found)
}
type UnsupportedCheck struct {
TypeCheck string
}
func (e *UnsupportedCheck) Error() string {
return "Unsupported type: " + e.TypeCheck
}
What can I do to account for float64 fields in my struct?
type ClientSalesTable struct {
ID int `csv:"id"`
Amount float64 `csv:"amount"`
ClientName string `csv:"clientName"`
}
Handle floats using the same pattern as integers. Replace strconv.ParseInt with strconv.ParseFloat. Replace f.SetInt(ival) with f.SetFloat(fval).
Bonus fix: Switch on f.Type().Kind() instead of the string.
func UnmarshalCsvRecord(readerTest *csv.Reader, v interface{}) error {
recordtest, err := readerTest.Read()
if err != nil {
return err
}
s := reflect.ValueOf(v).Elem()
if s.NumField() != len(recordtest) {
return &CheckField{s.NumField(), len(recordtest)}
}
for i := 0; i < s.NumField(); i++ {
f := s.Field(i)
switch f.Type().Kind() {
case reflect.String:
f.SetString(recordtest[i])
case reflect.Int:
ival, err := strconv.ParseInt(recordtest[i], 10, f.Type().Bits())
if err != nil {
return err
}
f.SetInt(ival)
case reflect.Float64:
fval, err := strconv.ParseFloat(recordtest[i], f.Type().Bits())
if err != nil {
return err
}
f.SetFloat(fval)
default:
return &UnsupportedCheck{f.Type().String()}
}
}
return nil
}
By adding more cases, the code above can be extended to handle all integer and float types:
switch f.Type().Kind() {
...
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
...
case reflect.Float64, reflect.Float32:
...
}
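If unsigned columns ever show up, the same pattern extends with ParseUint and SetUint; this extra case is my own addition, not part of the original answer:
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
    uval, err := strconv.ParseUint(recordtest[i], 10, f.Type().Bits())
    if err != nil {
        return err
    }
    f.SetUint(uval)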

golang leveldb get snapshot error

I am reading all of leveldb's key/value pairs into a map[string][]byte, but it is not working as I expect.
The code is below:
package main
import (
"fmt"
"strconv"
"github.com/syndtr/goleveldb/leveldb"
)
func main() {
db, err := leveldb.OpenFile("db", nil)
if err != nil {
panic(err)
}
defer db.Close()
for i := 0; i < 10; i++ {
err := db.Put([]byte("key"+strconv.Itoa(i)), []byte("value"+strconv.Itoa(i)), nil)
if err != nil {
panic(err)
}
}
snap, err := db.GetSnapshot()
if err != nil {
panic(err)
}
if snap == nil {
panic("snap shot is nil")
}
data := make(map[string][]byte)
iter := snap.NewIterator(nil, nil)
for iter.Next() {
Key := iter.Key()
Value := iter.Value()
data[string(Key)] = Value
}
iter.Release()
if iter.Error() != nil {
panic(iter.Error())
}
for k, v := range data {
fmt.Println(string(k) + ":" + string(v))
}
}
but the result is below
key3:value9
key6:value9
key7:value9
key8:value9
key1:value9
key2:value9
key4:value9
key5:value9
key9:value9
key0:value9
rather than key0:value0 and so on.
The problem is with the conversions between types ([]byte to string, etc.).
You are trying to print string values. To avoid unnecessary conversions, apply the following modifications:
Change the data initialization to data := make(map[string]string)
Assign values into data with data[string(Key)] = string(Value) (by the way, don't capitalize variables you don't intend to export)
Print data's values with fmt.Println(k + ":" + v)
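Applying those three changes, the iteration and printing part of main looks like this; note that converting to string also copies the bytes, so the map entries no longer alias the iterator's internal buffer, whose contents goleveldb is allowed to change on the next call to Next:
data := make(map[string]string)
iter := snap.NewIterator(nil, nil)
for iter.Next() {
    // string(...) copies the key and value out of the iterator's buffer.
    data[string(iter.Key())] = string(iter.Value())
}
iter.Release()
if iter.Error() != nil {
    panic(iter.Error())
}
for k, v := range data {
    fmt.Println(k + ":" + v)
}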
This should produce the following result:
key0:value0
key1:value1
key7:value7
key2:value2
key3:value3
key4:value4
key5:value5
key6:value6
key8:value8
key9:value9

What is the "idiomatic" version of this function?

I'm trying to understand the mentality of Go. I wrote the following function, which looks for *.txt files in a folder that have a date in the filename, finds the latest date, and returns it.
func getLatestDate(path string) (time.Time, error) {
if fns, e := filepath.Glob(filepath.Join(path, "*.txt")); e == nil {
re, _ := regexp.Compile(`_([0-9]{8}).txt$`)
max := ""
for _, fn := range fns {
if ms := re.FindStringSubmatch(fn); ms != nil {
if ms[1] > max {
max = ms[1]
}
}
}
date, _ := time.Parse("20060102", max)
return date, nil
} else {
return time.Time{}, e
}
}
What would be the more idiomatic version of this function, if there is one?
Here is my take
Use MustCompile to compile a static regexp. This will panic if it doesn't compile and saves an error check
Hoist compiling the regexp out of the function - you only need it compiled once. Note that I've called it with a lowercase initial letter so it won't be visible outside the package.
Use an early return when checking errors - this saves indentation and is idiomatic go
Use named return parameters for those early returns - saves defining zero values for types and typing in general (not to everyone's taste though)
return time.Parse directly which checks the errors (you weren't before)
The code
var dateRe = regexp.MustCompile(`_([0-9]{8}).txt$`)
func getLatestDate(path string) (date time.Time, err error) {
fns, err := filepath.Glob(filepath.Join(path, "*.txt"))
if err != nil {
return
}
max := ""
for _, fn := range fns {
if ms := dateRe.FindStringSubmatch(fn); ms != nil {
if ms[1] > max {
max = ms[1]
}
}
}
return time.Parse("20060102", max)
}
Here's how I would have written it. Don't ignore errors, use guard clauses for error handling, and don't recompile regexps on every call.
var datePat = regexp.MustCompile(`_([0-9]{8}).txt$`)
func getLatestDate(path string) (time.Time, error) {
fns, err := filepath.Glob(filepath.Join(path, "*.txt"))
if err != nil {
return time.Time{}, err
}
max := time.Time{}
for _, fn := range fns {
if ms := datePat.FindStringSubmatch(fn); ms != nil {
if t, err := time.Parse("20060102", ms[1]); err == nil && t.After(max) {
max = t
}
}
}
return max, nil
}
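A brief usage note (my own example; the directory name is made up): when no filenames match, this version returns the zero time.Time with a nil error, so callers may want to check IsZero.
func main() {
    latest, err := getLatestDate("./reports")
    if err != nil {
        log.Fatal(err)
    }
    if latest.IsZero() {
        fmt.Println("no dated .txt files found")
        return
    }
    fmt.Println(latest.Format("2006-01-02"))
}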

Resources