Coming from Python, I am used to being able to write a class with custom methods, instantiate it, save the object, and load it again. I am trying to accomplish something similar in Go. After reading about serialization, I tried the marshal/unmarshal approach.
But the code below does not work: after unmarshaling, method.Method is nil. I want to be able to call method.Method.Calculcate().
Below is my attempt.
package main

import (
    "encoding/json"
    "fmt"
    "log"
)

type Calculator struct {
    Method Method `json:"Method"`
}

type method1 struct {
    Input string `json:"input"`
}

func NewMethod1(input string) Method {
    return &method1{
        Input: input,
    }
}

type method2 struct {
    Input string `json:"input"`
}

func NewMethod2(input string) Method {
    return &method2{
        Input: input,
    }
}

type Method interface {
    Calculcate()
}

func (m *method1) Calculcate() {
}

func (m *method2) Calculcate() {
}

func main() {
    model := Calculator{
        Method: NewMethod1("inputData"),
    }
    model.Method.Calculcate()

    var jsonData []byte
    jsonData, err := json.Marshal(model)
    if err != nil {
        log.Println(err)
    }
    fmt.Println(string(jsonData))

    var method Calculator
    err = json.Unmarshal(jsonData, &method)
    if err != nil {
        log.Println(err)
    }
}
I want to load the struct and use the same code to call method1 or method2 when calculating. This means I don't know in advance whether I am loading method1 or method2. Below is a little bit of pseudocode explaining what I want.
Calculator := LoadCalculator()
model := Calculator{
    Method: NewMethod1("inputData"),
}
model.Method.Calculcate()
The problem is that when decoding JSON, the decoder cannot know which concrete type to use when a field has an interface type (multiple types may implement an interface). It might be obvious in your example, but think a little further: you send this JSON to another computer, which tries to decode it. The JSON representation does not record the concrete type of interface values (which in this case is *main.method1), so the other end does not know what to instantiate for the interface field Calculator.Method.
If you need this kind of serialization and deserialization, use the encoding/gob package instead. That package also writes type information, so the decoding side at least knows the name of the concrete type. Note that encoding/gob does not transmit the type definition itself, so if the other side has a different version of the type, the process can still fail. This is documented in the package doc:
Interface types are not checked for compatibility; all interface types are treated, for transmission, as members of a single "interface" type, analogous to int or []byte - in effect they're all treated as interface{}. Interface values are transmitted as a string identifying the concrete type being sent (a name that must be pre-defined by calling Register), followed by a byte count of the length of the following data (so the value can be skipped if it cannot be stored), followed by the usual encoding of concrete (dynamic) value stored in the interface value. (A nil interface value is identified by the empty string and transmits no value.) Upon receipt, the decoder verifies that the unpacked concrete item satisfies the interface of the receiving variable.
Also, for the encoding/gob package to be able to instantiate a type based on its name, you have to register these types, or more specifically values of these types, with gob.Register().
This is a working example using encoding/gob:
First let's improve the Calculcate() methods so we can see that they are called:
func (m *method1) Calculcate() {
    fmt.Println("method1.Calculate() called, input:", m.Input)
}

func (m *method2) Calculcate() {
    fmt.Println("method2.Calculate() called, input:", m.Input)
}
And now the serialization / deserialization process:
// Register the values we use for the Method interface
gob.Register(&method1{})
gob.Register(&method2{})

model := Calculator{
    Method: NewMethod1("inputData"),
}
model.Method.Calculcate()

buf := &bytes.Buffer{}
enc := gob.NewEncoder(buf)
if err := enc.Encode(model); err != nil {
    log.Println(err)
    return
}
fmt.Println(buf.Bytes())

var model2 Calculator
dec := gob.NewDecoder(buf)
if err := dec.Decode(&model2); err != nil {
    log.Println(err)
    return
}
model2.Method.Calculcate()
This will output (try it on the Go Playground):
method1.Calculate() called, input: inputData
[35 255 129 3 1 1 10 67 97 108 99 117 108 97 116 111 114 1 255 130 0 1 1 1 6 77 101 116 104 111 100 1 16 0 0 0 48 255 130 1 13 42 109 97 105 110 46 109 101 116 104 111 100 49 255 131 3 1 1 7 109 101 116 104 111 100 49 1 255 132 0 1 1 1 5 73 110 112 117 116 1 12 0 0 0 16 255 132 12 1 9 105 110 112 117 116 68 97 116 97 0 0]
method1.Calculate() called, input: inputData
The Go Postgres library defines a UUID type like this:
type UUID struct {
    UUID   uuid.UUID
    Status pgtype.Status
}

func (dst *UUID) Set(src interface{}) error {
    <Remainder Omitted>
My code uses this library:
import pgtype/uuid

string_uuid := uuid.New().String()
fmt.Println("string_uuid = ", string_uuid)

myUUID := pgtype.UUID{}
err = myUUID.Set(string_uuid)
if err != nil {
    panic(err)
}

fmt.Println("myUUID.Bytes = ", myUUID.Bytes)
fmt.Println("string(myUUID.Bytes[:]) = ", string(myUUID.Bytes[:]))
Here is the output:
string_uuid = abadf98f-4206-4fb0-ab91-e77f4380e4e0
myUUID.Bytes = [171 173 249 143 66 6 79 176 171 145 231 127 67 128 228 224]
string(myUUID.Bytes[:]) = ����BO����C���
How can I get back the original human-readable UUID string abadf98f-4206-4fb0-ab91-e77f4380e4e0 once it is stored in myUUID, which is of type pgtype.UUID?
The code in the question uses pgtype.UUID, not the gofrs UUID linked from the question's prose.
The pgtype.UUID type does not have a method to get the UUID string representation, but it's easy enough to do that in application code:
s := fmt.Sprintf("%x-%x-%x-%x-%x", myUUID.Bytes[0:4], myUUID.Bytes[4:6], myUUID.Bytes[6:8], myUUID.Bytes[8:10], myUUID.Bytes[10:16])
Do this if you want hex without the dashes:
s := fmt.Sprintf("%x", myUUID.Bytes)
If the application uses the gofrs UUID, then use:
s := myUUID.UUID.String()
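If the uuid package in the question is github.com/google/uuid (which the uuid.New() call suggests), a direct conversion also works, since both pgtype.UUID.Bytes and uuid.UUID are [16]byte arrays. A minimal sketch under that assumption:

// Sketch: assumes github.com/google/uuid. pgtype.UUID.Bytes is a [16]byte
// and uuid.UUID is defined as [16]byte, so a simple conversion recovers
// the canonical string form.
s := uuid.UUID(myUUID.Bytes).String()
fmt.Println(s) // abadf98f-4206-4fb0-ab91-e77f4380e4e0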
I have to handle multiple versions of a communication protocol. I have packages with all the packets and their IDs, which look like this:
package v1_8

//Clientbound packet IDs
const (
    KeepAliveClientbound byte = iota
    JoinGame
    ChatMessageClientbound
    TimeUpdate
    EntityEquipment
    SpawnPosition
    UpdateHealth
    Respawn
    PlayerPositionAndLookClientbound
    HeldItemChangeClientbound
    UseBed
    AnimationClientbound
    SpawnPlayer
    CollectItem
    ...
and
package v1_14_3

//Clientbound packet IDs
const (
    SpawnObject byte = iota //0x00
    SpawnExperienceOrb
    SpawnGlobalEntity
    SpawnMob
    SpawnPainting
    SpawnPlayer
    AnimationClientbound
    Statistics
    BlockBreakAnimation
    UpdateBlockEntity
    BlockAction
    BlockChange
    BossBar
    ServerDifficulty
    ChatMessageClientbound
    MultiBlockChange
    ...
There are slight differences between versions (different IDs for the same packets, or changed data types). How do I go about handling them? For now I am just using a switch statement to check which packet is being processed, but that works only for one version. I have the client's protocol version, so I just need to change the package that is used in the switch statement.
for {
    pkt, err := src.ReadPacket()
    if err != nil {
        ...handle error
    }
    switch pkt.ID {
    case v1_14_3.<Packet Const>:
        ...handle packet - there might be different data type assigned for different versions of protocol
    }
    err = dst.WritePacket(pkt)
    if err != nil {
        ...handle error
    }
}
Edit: providing more information
Packet struct returned by ReadPacket():
type Packet struct {
    ID   byte
    Data []byte
}
And some real-life data
{0 [234 3 9 108 111 99 97 108 104 111 115 116 99 156 1]}
This is one of the packets, defined here as Handshake.
I then use the Scan method of the Packet struct like this:
switch pkt.ID {
case v1_14_3.Handshake:
    var (
        protocolVersion packet.VarInt
        serverAddress   packet.String
        serverPort      packet.UnsignedShort
        nextState       packet.VarInt
    )
    err = pkt.Scan(&protocolVersion, &serverAddress, &serverPort, &nextState)
    if err != nil {
        ...handle error
    }
    ...process packet
}
As I said before, the types passed to the Scan method can be slightly different for different versions, as can the packet IDs. So the Handshake packet may have a different pkt.ID or a different payload between versions (and therefore the variables passed to Scan would differ). Scan is just a variadic function that populates the passed variables from pkt.Data. A sketch of one possible shape for this follows below.
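For illustration, one way to structure this (a sketch only, not from the question: the PacketHandler interface, the concrete handler types, and the version numbers used as map keys are made-up names building on the question's Packet type and the v1_8 / v1_14_3 packages) is to hide the version-specific IDs and Scan layouts behind a small interface and pick the implementation once, based on the protocol version read from the Handshake:

// Sketch: each protocol version gets its own handler, so differing packet
// IDs and payload layouts never mix, and the read/write loop stays
// version-agnostic.
type PacketHandler interface {
    Handle(pkt Packet) error
}

type v1_8Handler struct{}

func (v1_8Handler) Handle(pkt Packet) error {
    switch pkt.ID {
    case v1_8.ChatMessageClientbound:
        // scan with the v1_8 payload layout
    }
    return nil
}

type v1_14_3Handler struct{}

func (v1_14_3Handler) Handle(pkt Packet) error {
    switch pkt.ID {
    case v1_14_3.ChatMessageClientbound:
        // scan with the v1_14_3 payload layout
    }
    return nil
}

// Chosen once, after reading protocolVersion from the Handshake packet.
// The keys are placeholders for the real protocol version numbers.
var handlers = map[int32]PacketHandler{
    47:  v1_8Handler{},
    490: v1_14_3Handler{},
}

// In the read loop:
//   if err := handlers[int32(protocolVersion)].Handle(pkt); err != nil { ...handle error }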
I am starting A Tour of Go, but I am experimenting along the way.
I wrote a piece of code:
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func main() {
    actualTime := time.Now()
    fmt.Println(actualTime)
    var marshalledtime []byte
    marshalledtime, _ := actualTime.MarshalJSON()
    fmt.Println(marshalledtime)
    actualTime := (*time.Time).UnmarshalJSON(marshalledtime)
    fmt.Println(actualTime)
}
I just wanted to marshal a simple date and then unmarshal it, just to see the process.
But I am completely overwhelmed with problems. Up to today Go seemed so simple and logical, but now... I don't know, I am stuck.
./compile219.go:27:13: cannot use time.(*Time).UnmarshalJSON(marshalledtime) (type error) as type time.Time in assignment
./compile219.go:27:42: not enough arguments in call to method expression time.(*Time).UnmarshalJSON
have ([]byte)
want (*time.Time, []byte)
What does the last error mean? The documentation clearly says that UnmarshalJSON takes only one argument, []byte.
What is with the type conversion error?
actualTime.MarshalJSON() is a method call: it calls the Time.MarshalJSON() method, which returns the bytes of the JSON representation of the time. Since the printed bytes are not very readable, you should print the byte slice as a string, e.g.:
fmt.Println("Raw:", marshalledtime)
fmt.Println("String:", string(marshalledtime))
Which outputs:
Raw: [34 50 48 48 57 45 49 49 45 49 48 84 50 51 58 48 48 58 48 48 90 34]
String: "2009-11-10T23:00:00Z"
UnmarshalJSON() is also a method of time.Time, so you need a time.Time value to call it "on", for example:
var time2 time.Time
time2.UnmarshalJSON(marshalledtime)
(To be precise, UnmarshalJSON() requires a pointer of type *time.Time because it has to modify the time.Time value, but the Go compiler will rewrite time2.UnmarshalJSON() to take time2's address: (&time2).UnmarshalJSON()).
MarshalJSON() and UnmarshalJSON() also return an error which you should always check, for example:
var marshalledtime []byte
var err error

marshalledtime, err = actualTime.MarshalJSON()
if err != nil {
    panic(err)
}
And:
var time2 time.Time
err = time2.UnmarshalJSON(marshalledtime)
if err != nil {
    panic(err)
}
Try the fixed code on the Go Playground.
Also note that it's rare that someone calls Time.MarshalJSON() and Time.UnmarshalJSON() "by hand". They are to implement the json.Marshaler and json.Unmarshaler interfaces, so when you marshal / unmarshal time values, the encoding/json package will call these methods to do the JSON conversion.
This is how the same can be achieved using the encoding/json package (try it on the Go Playground):
Marshaling:
var marshalledtime []byte
var err error

marshalledtime, err = json.Marshal(actualTime)
if err != nil {
    panic(err)
}
Unmarshaling:
var time2 time.Time
err = json.Unmarshal(marshalledtime, &time2)
if err != nil {
    panic(err)
}
Although it's not simpler or shorter in this case, the encoding/json package is capable of marshaling / unmarshaling arbitrary complex data structures, not just simple time values.
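For example (a minimal sketch; the Event type is made up for illustration), a struct that contains a time.Time round-trips through encoding/json without calling MarshalJSON() / UnmarshalJSON() by hand:

package main

import (
    "encoding/json"
    "fmt"
    "time"
)

// Event is a made-up example type; its time.Time field is handled by
// encoding/json via Time's MarshalJSON / UnmarshalJSON methods.
type Event struct {
    Name string    `json:"name"`
    At   time.Time `json:"at"`
}

func main() {
    e := Event{Name: "demo", At: time.Now()}

    data, err := json.Marshal(e)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(data)) // e.g. {"name":"demo","at":"2009-11-10T23:00:00Z"}

    var e2 Event
    if err := json.Unmarshal(data, &e2); err != nil {
        panic(err)
    }
    fmt.Println(e2.At.Equal(e.At)) // should print true
}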
The documentation clearly says that UnmarshalJSON takes only one argument, []byte
UnmarshalJSON is the method to implement when you want to redefine how Unmarshal behaves for a type. That's not the case here. You don't have to use it unless you want to customize Unmarshal; then UnmarshalJSON will be called under the hood when you call Unmarshal.
By the way, if you look at the signature of UnmarshalJSON, you will see that it returns only an error.
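For reference, this is the method's signature in the time package:

func (t *Time) UnmarshalJSON(data []byte) error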
Here is a fix using json.Unmarshal
package main

import (
    "encoding/json"
    "fmt"
    "time"
)

func main() {
    actualTime := time.Now()
    fmt.Println(actualTime)
    var marshalledtime []byte
    marshalledtime, _ = actualTime.MarshalJSON()
    fmt.Println(marshalledtime)
    json.Unmarshal(marshalledtime, &actualTime)
    fmt.Println(actualTime)
}
I'm trying to read a UUID retrieved from Postgres, using github.com/jackc/pgx, into a variable of type uuid.UUID (from the github.com/google/uuid package).
An example code could be:
var dbId uuid.UUID
err = db.Pool.QueryRow("SELECT id FROM users WHERE objectname = $1;", objectUUID.String()).Scan(&dbId)
if err != nil {
    log.Printf("Failed to fetch from database: %v", err)
    return
}
The quick fix is to scan into a temporary variable and later convert that temporary variable into the correct type, but I have a feeling there is a better, more idiomatic way to do it.
The error I'm getting is:
2018/02/12 07:09:18 handlers.go:187: Failed to fetch from database: can't scan into dest[1]: cannot assign &{[127 122 68 237 130 120 65 78 159 189 9 188 27 48 117 88] 2} into *uuid.UUID
The error says that the driver is trying to scan a pgtype.UUID struct value into the *uuid.UUID pointer. The struct is defined here. So basically you should scan into that struct and then convert the byte array in the struct to whatever UUID type you are using.
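A minimal sketch of that approach, assuming pgx's pgtype package and github.com/google/uuid (the conversion works because both pgtype.UUID.Bytes and uuid.UUID are [16]byte):

// Scan into the driver's own UUID struct first...
var dbUUID pgtype.UUID
err = db.Pool.QueryRow("SELECT id FROM users WHERE objectname = $1;", objectUUID.String()).Scan(&dbUUID)
if err != nil {
    log.Printf("Failed to fetch from database: %v", err)
    return
}

// ...then convert its [16]byte array into a google/uuid value.
dbId := uuid.UUID(dbUUID.Bytes)
log.Printf("id = %s", dbId)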
I'm looking for a way to determine the struct type of the serialized data. Is it possible to do that without a trial-and-error approach (casting to a specific type and checking whether the cast succeeds)?
Please check the code:
package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "reflect"
)

type T struct {
    A int64
    B float64
}

type D struct {
    A int64
    B float64
    C string
}

func main() {
    // Create a struct and write it.
    t := T{A: 0xEEFFEEFF, B: 3.14}
    buf := &bytes.Buffer{}
    err := binary.Write(buf, binary.BigEndian, t)
    if err != nil {
        panic(err)
    }
    fmt.Println(buf.Bytes())
    out := getType(buf)
    fmt.Println(out)
}

func getType(v interface{}) (r string) {
    fmt.Println(reflect.TypeOf(v))
    switch t := v.(type) {
    case T:
        return "Is type T"
    case D:
        return "Is type D"
    default:
        _ = t
        return "unknown"
    }
}
Since the encoding/binary package does not write out type information, it is not possible to tell what type was written / serialized.
And you're in a worse position than you might originally think: even trying to decode into a value of a different type might succeed without errors, so there isn't even a reliable way to tell the type.
For example if you serialize a value of this type:
type T struct {
    A int64
    B float64
}
You can read it into a value of this type:
type T2 struct {
    B float64
    A int64
}
It will give no errors because the size of both structs is the same, but obviously you will get different numbers in the fields.
You are in a slightly better position if you use encoding/gob, as the gob package does transmit type information; encoding a value of type T and then decoding it into a value of type T2 would work: the order of fields does not matter, and extra or missing fields do not cause trouble either.
See this example:
// Create a struct and write it.
t := T{A: 0xEEFFEEFF, B: 3.14}
fmt.Println("Encoding:", t)
buf := &bytes.Buffer{}
fmt.Println(binary.Write(buf, binary.BigEndian, t))
fmt.Println(buf.Bytes())
fmt.Println(gob.NewEncoder(buf).Encode(t))
t2 := T2{}
fmt.Println(binary.Read(buf, binary.BigEndian, &t2))
fmt.Println(t2)
t2 = T2{}
fmt.Println(gob.NewDecoder(buf).Decode(&t2))
fmt.Println(t2)
Output (try it on the Go Playground):
Encoding: {4009750271 3.14}
<nil>
[0 0 0 0 238 255 238 255 64 9 30 184 81 235 133 31]
<nil>
<nil>
{1.9810798573e-314 4614253070214989087}
<nil>
{3.14 4009750271}
If you want to be able to detect the type before reading it, you have to take care of it yourself: you have to transmit type information (e.g. the name of the type). Or, even better, use a serialization method that already does this, for example Google's protocol buffers; here is the Go implementation for it: github.com/golang/protobuf.
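A minimal sketch of the do-it-yourself variant (the length-prefixed type-name convention below is just one possible scheme, not part of encoding/binary; it uses the standard bytes, encoding/binary and io packages):

// writeTyped prefixes the binary payload with a length-prefixed type name,
// so the reader can decide which struct to decode into.
func writeTyped(buf *bytes.Buffer, name string, v interface{}) error {
    if err := binary.Write(buf, binary.BigEndian, uint8(len(name))); err != nil {
        return err
    }
    buf.WriteString(name)
    return binary.Write(buf, binary.BigEndian, v)
}

// readTypeName reads the type name back; the caller can then switch on it
// (e.g. case "T", case "D") and call binary.Read with the matching struct.
func readTypeName(buf *bytes.Buffer) (string, error) {
    var n uint8
    if err := binary.Read(buf, binary.BigEndian, &n); err != nil {
        return "", err
    }
    name := make([]byte, n)
    if _, err := io.ReadFull(buf, name); err != nil {
        return "", err
    }
    return string(name), nil
}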