Can't store byte array in google's datastore - go

I'm using Google's datastore in my Go app. I have a Song struct, which has a uuid.UUID field.
type Song struct {
    ID    uuid.UUID
    Title string
    ...
}
This UUID is taken from github.com/satori/go.uuid and is defined as
type UUID [16]byte
It seems that the datastore can't handle byte arrays, only byte slices or strings. In the json package I can use a tag to interpret it as a string:
type Song struct {
    ID uuid.UUID `json:"id,string"`
    ....
}
Is there a way of telling the datastore to interpret the UUID as a slice/string, or do I have to either give up "type"-safety and just store a string, or use a custom PropertyLoadSaver?

Per Google's Documentation:
Valid value types are:
signed integers (int, int8, int16, int32 and int64),
bool,
string,
float32 and float64,
[]byte (up to 1 megabyte in length),
any type whose underlying type is one of the above predeclared types,
ByteString,
*Key,
time.Time (stored with microsecond precision),
appengine.BlobKey,
appengine.GeoPoint,
structs whose fields are all valid value types,
slices of any of the above.
So, you will have to use a byte slice or string. You could do some behind-the-scenes manipulation when you need to set or get the value, like this (Playground example):
package main

import "fmt"

type uuid [16]byte

type song struct {
    u []byte
}

func main() {
    var b [16]byte
    copy(b[:], "0123456789012345")
    var u uuid = uuid(b) // this would represent when you get the uuid
    s := song{u: []byte(u[:])} // store the array as a slice
    copy(b[:], s.u)            // copy it back into an array when reading
    u = uuid(b)
    fmt.Println(u)
}
This could also be done through methods. (Playground example)
Alternatively, you could have an entity specific to the datastore that carries the byte slice, and the transformers that go to and from that entity know how to do the conversion.
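A rough sketch of that second approach could look like the following; the songEntity type and the toEntity/fromEntity helpers are illustrative names I made up, not part of the answer above:
package main

import (
    "fmt"

    uuid "github.com/satori/go.uuid"
)

// Song is the application-level type, keeping the UUID type-safe.
type Song struct {
    ID    uuid.UUID
    Title string
}

// songEntity is the datastore-facing shape: the UUID is flattened
// to a []byte, which the datastore accepts.
type songEntity struct {
    ID    []byte
    Title string
}

// toEntity converts a Song into its storable form.
func toEntity(s Song) songEntity {
    return songEntity{ID: s.ID[:], Title: s.Title}
}

// fromEntity converts a stored entity back into a Song.
func fromEntity(e songEntity) (Song, error) {
    id, err := uuid.FromBytes(e.ID)
    if err != nil {
        return Song{}, err
    }
    return Song{ID: id, Title: e.Title}, nil
}

func main() {
    id, _ := uuid.FromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8")
    s := Song{ID: id, Title: "example"}
    back, err := fromEntity(toEntity(s))
    fmt.Println(back, err)
}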

How to convert map[string][]byte to map[string]interface{}

I have a function that accepts a parameter of type map[string]interface{}, but I have a variable of type map[string][]byte. My question is: how can I convert map[string][]byte to map[string]interface{} in Go?
This is a common misconception in Go. In this case each element of the map needs to be converted to an interface{} individually.
So here's a workaround with sample code:
package main

import "fmt"

func foo(arg map[string]interface{}) {
    fmt.Println(arg)
}

// msaToMsi converts a map[string][]byte to a map[string]interface{}.
func msaToMsi(msa map[string][]byte) map[string]interface{} {
    msi := make(map[string]interface{}, len(msa))
    for k, v := range msa {
        msi[k] = v
    }
    return msi
}

func main() {
    msa := map[string][]byte{"a": []byte("xyz")}
    foo(msaToMsi(msa))
}
The solution is similar for the following map and slice conversions as well (a sketch of the slice case follows below):
map[string]string to map[string]interface{}
[]string to []interface{}
etc.
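For the slice case, the element-by-element conversion would be a sketch along these lines:
// ssToSi converts a []string to a []interface{}; as with the map,
// each element has to be converted individually.
func ssToSi(ss []string) []interface{} {
    si := make([]interface{}, len(ss))
    for i, s := range ss {
        si[i] = s
    }
    return si
}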
OK, so to answer your question: an interface in Go can be used where you are passing or receiving an object/struct whose type you are not sure of.
For example:
type Human struct {
    Name   string
    Age    int
    School string
    DOB    time.Time
}

type Animal struct {
    Name    string
    HasTail bool
    IsMamal bool
    DOB     time.Time
}
func DisplayData(dataType interface{}, data []byte)
This DisplayData function can display any type of data: it takes the data and a struct, neither of which the function knows until we pass them in. The data could be a Human or an Animal, each having different fields, which get mapped depending on which type we pass to the function.
This means we can reuse the code to display any data, as long as we tell the function which type we want to map the data onto and display (i.e. Animal or Human).
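As a rough, self-contained sketch of what such a function might look like (the JSON decoding and the DisplayData body are my own assumptions; the answer above doesn't say how the bytes are decoded):
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "time"
)

type Human struct {
    Name   string
    Age    int
    School string
    DOB    time.Time
}

// DisplayData decodes the raw bytes into whichever struct pointer is
// passed in, then prints it; the same function works for Human,
// Animal, or any other type.
func DisplayData(dataType interface{}, data []byte) {
    if err := json.Unmarshal(data, dataType); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%+v\n", dataType)
}

func main() {
    raw := []byte(`{"Name":"Alice","Age":30}`)
    DisplayData(&Human{}, raw)
}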
In your case, the solution would be to define the structure of the data in the byte slice as a struct, and where you build the map, instead of
map[string][]byte
try changing it to
map[string]YourDefinedStructure
and pass that to the function that accepts map[string]interface{}.
Hopefully this helps. Although you supply the data types, the question is rather vague about the use case, and the nature of the function that accepts map[string]interface{} can affect the approach taken.
You don't really have to convert while passing your map[string][]byte to the function.
The conversion needs to happen at the point where you want to use the value from the map.
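For instance, a minimal sketch (the use function is just a stand-in for whatever consumes the value):
package main

import "fmt"

// use stands in for any function that takes a single interface{} value.
func use(v interface{}) {
    fmt.Printf("%T %v\n", v, v)
}

func main() {
    m := map[string][]byte{"a": []byte("xyz")}
    // No map-wide conversion: the []byte element is converted to
    // interface{} implicitly at the call site.
    use(m["a"])
}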

Converting time from DB to custom time fails

I need to read dates from a DB, convert them to a certain timestamp format, and convert them to JSON.
I have the following code:
package usages

import (
    "fmt"
    "time"
)

type SpecialOffer struct {
    PublishedDate  jsonTime `gorm:"column:publishing_date" json:"published_date"`
    ExpirationDate jsonTime `gorm:"column:expiration_date" json:"expiration_date"`
}

type jsonTime struct {
    time.Time
}

func (tt jsonTime) MarshalJSON() ([]byte, error) {
    jsonTime := fmt.Sprintf("\"%s\"", tt.Format("20060102"))
    return []byte(jsonTime), nil
}
When I run it like this I get the following error:
sql: Scan error on column index 8, name "publishing_date": unsupported Scan, storing driver.Value type time.Time into type *usages.trvTime
And the data is wrong:
{"published_date":"00010101","expiration_date":"00010101"}
If I change the SpecialOffer struct to use time.Time, it returns correctly, but obviously the format is wrong:
{"published_date":"2020-03-12T00:00:00Z","expiration_date":"2020-06-12T00:00:00Z"}
What am I doing wrong?
You should implement the sql.Scanner and driver.Valuer interfaces.
Something like this:
func (j *jsonTime) Scan(src interface{}) error {
    if t, ok := src.(time.Time); ok {
        j.Time = t
    }
    return nil
}

func (j jsonTime) Value() (driver.Value, error) {
    return j.Time, nil
}
This is necessary because the database/sql package, which is used by gorm and some other Go ORMs, if not all of them, provides out-of-the-box support for only a handful of types.
Most of the supported types are the language's basic builtin types like string, int, bool, etc.; by extension, any custom user-defined type whose underlying type is one of those basic types is also supported. Then there is support for the []byte type and the related sql.RawBytes type, and lastly the time.Time type is also supported out of the box.
Any other type that you want to write to or read from the database will need to implement the two interfaces above. The sql.Scanner's Scan method is invoked automatically after a column's value is decoded into one of the supported types (that's why you need to type-assert against time.Time rather than against, say, []byte). The driver.Valuer's Value method is invoked automatically before the driver encodes the value into a format that's valid for the target column (that's why you can return time.Time directly rather than having to do the encoding yourself).
And keep in mind that
type jsonTime struct {
    time.Time
}
or even
type jsonTime time.Time
declares a new type that is not equal to time.Time and that is why it's not picked up by the database/sql package.
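Putting the pieces of the question and the answer together, a self-contained version of the type might look like this (only the grouping and the imports are mine; the method bodies are as above):
package usages

import (
    "database/sql/driver"
    "fmt"
    "time"
)

// jsonTime wraps time.Time so custom JSON and SQL behaviour can be attached.
type jsonTime struct {
    time.Time
}

// MarshalJSON formats the date as "YYYYMMDD", as in the question.
func (tt jsonTime) MarshalJSON() ([]byte, error) {
    return []byte(fmt.Sprintf("\"%s\"", tt.Format("20060102"))), nil
}

// Scan implements sql.Scanner so database/sql can store a time.Time
// column value into a jsonTime field.
func (j *jsonTime) Scan(src interface{}) error {
    if t, ok := src.(time.Time); ok {
        j.Time = t
    }
    return nil
}

// Value implements driver.Valuer so the wrapped time.Time is written
// back to the database unchanged.
func (j jsonTime) Value() (driver.Value, error) {
    return j.Time, nil
}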

Go build with protocol buffer error: too few values in struct initializer

I have a proto file:
syntax = "proto3";
package main;
message Client {
int32 Id = 1;
string Name = 2;
string Email = 3;
}
The compiled Client struct looks like this:
type Client struct {
    Id                   int32    `protobuf:"varint,1,opt,name=Id,proto3" json:"Id,omitempty"`
    Name                 string   `protobuf:"bytes,2,opt,name=Name,proto3" json:"Name,omitempty"`
    Email                string   `protobuf:"bytes,3,opt,name=Email,proto3" json:"Email,omitempty"`
    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized     []byte   `json:"-"`
    XXX_sizecache        int32    `json:"-"`
}
When I try to initialize this Client struct like this:
client := &Client{123, "John", "john#aol.com"}
I get the build error: too few values in struct initializer. I found a way to fix it by adding values for XXX_NoUnkeyedLiteral, XXX_unrecognized, and XXX_sizecache. I don't know what these are, and I'm wondering if this is the right way to do it:
client := &Client{123, "John", "john#aol.com", struct{}{}, []byte{}, int32(0)}
In struct composite literals you may omit the names of the fields you list values for (this is called an unkeyed literal), but then you have to list initial values for all fields, and in their declaration order. Or you may use a keyed literal, where you explicitly state which fields you specify initial values for. In the latter case you are allowed to omit any of the fields; you may list just the ones you want to give an initial value different from the field's zero value.
You used an unkeyed composite literal, in which case you must list values for all fields, which you didn't. This is what the error message tells you: "too few values in struct initializer".
The field name (generated by protobuf) itself should give you the hint: XXX_NoUnkeyedLiteral. It suggests you should not use a composite literal without keys.
So use a composite literal with keys, like this:
client := &Client{
Id: 123,
Name: "John",
Email: "john#aol.com",
}
This form is more readable, and it is immune to struct changes: if the Client struct gets new fields, or fields get rearranged, this code will still be valid and will compile.
Adding the field names before the values solves the build error:
client := &Client{Id: 123, Name: "John", Email: "john#aol.com"}
I found this out by checking the grpc golang examples, but maybe somebody can explain why? ;)

Parse and validate "key1:value1; key2:value2" string to Go struct efficiently?

I have a "key1:value1; key2:value2" like string (string with key:value pattern concated by ;).
Now I wish to parse this string to a Go struct:
type CustomStruct struct {
    KeyName1 string `name:"key1" somevalidation:"xxx"`
    KeyName2 int    `name:"key2" somevalidation:"yyy"`
}
In the above example, the struct tag defines the name of the key in the string and can provide some validation for its corresponding value (it can set a default value if validation fails). For instance, KeyName2 is an int value, so I would like somevalidation to check whether KeyName2 is, say, greater than 30 and less than or equal to 100.
And in another scenario, I could define another struct CustomStruct2 for a string like key3:value3; key4:value4;
How can I achieve this kind of requirement efficiently and elegantly?
I'll assume that you can parse the data to a map[string]interface{}.
Use the reflect package to set the fields. Here's the basic function:
// set sets fields in the struct pointed to by pv to values in data.
func set(pv interface{}, data map[string]interface{}) {
    // pv is assumed to be a pointer to a struct.
    s := reflect.ValueOf(pv).Elem()
    // Loop through the fields.
    t := s.Type()
    for i := 0; i < t.NumField(); i++ {
        // Set the field if there's a data value for it.
        f := t.Field(i)
        if d, ok := data[f.Tag.Get("name")]; ok {
            s.Field(i).Set(reflect.ValueOf(d))
        }
    }
}
This code assumes that the values in the data map are assignable to the corresponding field in the struct and that the first argument is a pointer to a struct. The code will panic if these assumptions are not true. You can protect against this by checking types and assignability using the reflect package.
playground example
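To tie this back to the "key1:value1; key2:value2" string from the question, here is a minimal end-to-end sketch; the parse helper, the strconv-based int conversion, and the assignability guard are my own additions rather than part of the answer above:
package main

import (
    "fmt"
    "reflect"
    "strconv"
    "strings"
)

type CustomStruct struct {
    KeyName1 string `name:"key1"`
    KeyName2 int    `name:"key2"`
}

// parse splits "key1:value1; key2:value2" into a map, converting values
// that look like integers so they are assignable to int fields.
func parse(s string) map[string]interface{} {
    data := make(map[string]interface{})
    for _, pair := range strings.Split(s, ";") {
        kv := strings.SplitN(strings.TrimSpace(pair), ":", 2)
        if len(kv) != 2 {
            continue
        }
        if n, err := strconv.Atoi(kv[1]); err == nil {
            data[kv[0]] = n
        } else {
            data[kv[0]] = kv[1]
        }
    }
    return data
}

// set copies values from data into the struct pointed to by pv, matching
// map keys against the `name` tag and skipping values whose type is not
// assignable to the field.
func set(pv interface{}, data map[string]interface{}) {
    s := reflect.ValueOf(pv).Elem()
    t := s.Type()
    for i := 0; i < t.NumField(); i++ {
        d, ok := data[t.Field(i).Tag.Get("name")]
        if !ok {
            continue
        }
        dv := reflect.ValueOf(d)
        if dv.Type().AssignableTo(s.Field(i).Type()) {
            s.Field(i).Set(dv)
        }
    }
}

func main() {
    var cs CustomStruct
    set(&cs, parse("key1:value1; key2:42"))
    fmt.Printf("%+v\n", cs) // {KeyName1:value1 KeyName2:42}
}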

GetBSON method for individual elements [Go + mgo]

In Go, I understand how JSON marshalling/unmarshalling works: if a struct or type has a MarshalJSON method, then when calling json.Marshal on another struct that has the former as a field, that field's MarshalJSON method will be called. So from what I gather and have seen in practice...
type MyType struct has a MarshalJSON method to marshal itself to a string.
type MyDocument struct has MyType as a field.
When calling json.Marshal() on MyDocument, the MyType field will be marshalled as a string due to it implementing json.Marshaller.
I'm trying to make my system database-agnostic and am implementing a service for MongoDB using the mgo driver, which means implementing bson.Getter and bson.Setter on all structs and things I want marshalled in a specific way. This is where it gets confusing.
Because Go doesn't have native fixed-point arithmetic, I'm using Shopspring's decimal package (found here) to deal with currency values. Decimal marshals to JSON perfectly but I have a named type type Currency decimal.Decimal which I just can't get to marshal down to BSON.
These are my implementations which convert the decimal to a float64 and try marshalling that in the same way I've done for json:
/*
Implements the bson.Getter interface.
*/
func (c Currency) GetBSON() (interface{}, error) {
    f, _ := decimal.Decimal(c).Float64()
    return f, nil
}

/*
Implements the bson.Setter interface.
*/
func (c *Currency) SetBSON(raw bson.Raw) error {
    var f float64
    e := raw.Unmarshal(&f)
    if e == nil {
        *c = Currency(decimal.NewFromFloat(f))
    }
    return e
}
The only problem is this note in the documentation for the bson package:
Marshal serializes the in value, which may be a map or a struct value.
Because it's not a struct or a map it just produces an empty document.
I'm just trying to marshal one bit of data which will only need to be marshalled as part of larger structs, but the package will only let me do it on whole documents. What should I do to get the results I need?
