I have to handle multiple versions of a communication protocol. I have packages with all packets and their IDs, which look like
package v1_8
//Clientbound packet IDs
const (
KeepAliveClientbound byte = iota
JoinGame
ChatMessageClientbound
TimeUpdate
EntityEquipment
SpawnPosition
UpdateHealth
Respawn
PlayerPositionAndLookClientbound
HeldItemChangeClientbound
UseBed
AnimationClientbound
SpawnPlayer
CollectItem
...
and
package v1_14_3
//Clientbound packet IDs
const (
SpawnObject byte = iota //0x00
SpawnExperienceOrb
SpawnGlobalEntity
SpawnMob
SpawnPainting
SpawnPlayer
AnimationClientbound
Statistics
BlockBreakAnimation
UpdateBlockEntity
BlockAction
BlockChange
BossBar
ServerDifficulty
ChatMessageClientbound
MultiBlockChange
...
There are slight differences between versions (different IDs for the same packets, or changed data types), so how do I go about handling them? For now I am just using a switch statement to check which packet is being processed, but that only works for one version. I have the client's protocol version, so I just need to change the package that is used in the switch statement.
for {
pkt, err := src.ReadPacket()
if err != nil {
...handle error
}
switch pkt.ID {
case v1_14_3.<Packet Const>:
...handle packet - there might be different data type assigned for different versions of protocol
}
err = dst.WritePacket(pkt)
if err != nil {
...handle error
}
}
Edit: providing more information
This is the Packet struct returned by ReadPacket():
type Packet struct {
ID byte
Data []byte
}
And some real-life data
{0 [234 3 9 108 111 99 97 108 104 111 115 116 99 156 1]}
This is one of the packets defined here as Handshake
I then use the Scan method of the Packet struct like this:
switch pkt.ID {
case v1_14_3.Handshake:
var (
protocolVersion packet.VarInt
serverAddress packet.String
serverPort packet.UnsignedShort
nextState packet.VarInt
)
err = pkt.Scan(&protocolVersion, &serverAddress, &serverPort, &nextState)
if err != nil {
...handle error
}
...process packet
}
As I said before, the types passed to the Scan method can be slightly different for different versions, as can the packet IDs. So a Handshake packet may have a different pkt.ID between versions, or a different payload (and therefore different variables passed to Scan). Scan is just a variadic function that populates the passed variables from pkt.Data.
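One way to structure this is sketched below (my own sketch, not from the thread: the handler names are hypothetical, and 47 and 490 are the conventional protocol numbers for 1.8 and 1.14.3). Each version's IDs and payload layouts live behind a small interface, chosen once per connection after the Handshake has been scanned.

// packetHandler knows one protocol version's packet IDs and payload layouts.
type packetHandler interface {
    HandleClientbound(pkt Packet) error
}

type handlerV1_8 struct{}

func (handlerV1_8) HandleClientbound(pkt Packet) error {
    switch pkt.ID {
    case v1_8.ChatMessageClientbound:
        // Scan the 1.8 payload layout here.
    }
    return nil
}

type handlerV1_14_3 struct{}

func (handlerV1_14_3) HandleClientbound(pkt Packet) error {
    switch pkt.ID {
    case v1_14_3.ChatMessageClientbound:
        // Scan the 1.14.3 payload layout here; it may differ from 1.8.
    }
    return nil
}

// newPacketHandler is called once per connection, right after the
// protocol version has been scanned from the Handshake packet.
func newPacketHandler(protocolVersion int32) (packetHandler, error) {
    switch protocolVersion {
    case 47: // 1.8
        return handlerV1_8{}, nil
    case 490: // 1.14.3
        return handlerV1_14_3{}, nil
    default:
        return nil, fmt.Errorf("unsupported protocol version %d", protocolVersion)
    }
}

The read/write loop then stays version-agnostic: it calls HandleClientbound(pkt) and forwards the packet, and supporting a new version means adding one more implementation rather than touching the loop.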
Related
Assume I have the following structure
type Hdr struct{
Src uint16
Dst uint16
Priotity byte
Pktcnt byte
Opcode byte
Ver byte
}
I have two functions Marshal and Unmarshal that encode Hdr to and from a binary format of:
0 1
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Src |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Dst |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Prio | Cnt | Opcode| Ver |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
I'd like to use Go Fuzz to make random, valid Hdr instances, Marshal them to binary, Unmarshal the binary, and make sure the output matches the original input.
The main issue I am having is that I cannot figure out how to tell Go Fuzz that fields like Priotity cannot be greater than 15; otherwise they will get truncated when they are marshalled (only 4 bits). How do I set this constraint?
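(For reference, here is a minimal sketch of the kind of Marshal/Unmarshal pair under test. It is assumed, since the question does not include the real functions, and the big-endian byte order is a guess from the diagram.)

// Marshal packs Hdr into the 6-byte wire format above. Priotity,
// Pktcnt, Opcode and Ver each occupy 4 bits, so values above 15
// are silently truncated.
func Marshal(h Hdr) ([]byte, error) {
    b := make([]byte, 6)
    binary.BigEndian.PutUint16(b[0:2], h.Src)
    binary.BigEndian.PutUint16(b[2:4], h.Dst)
    b[4] = h.Priotity<<4 | h.Pktcnt&0x0f
    b[5] = h.Opcode<<4 | h.Ver&0x0f
    return b, nil
}

// Unmarshal is the inverse; it rejects payloads that are not exactly 6 bytes.
func Unmarshal(b []byte) (Hdr, error) {
    if len(b) != 6 {
        return Hdr{}, fmt.Errorf("want 6 bytes, got %d", len(b))
    }
    return Hdr{
        Src:      binary.BigEndian.Uint16(b[0:2]),
        Dst:      binary.BigEndian.Uint16(b[2:4]),
        Priotity: b[4] >> 4,
        Pktcnt:   b[4] & 0x0f,
        Opcode:   b[5] >> 4,
        Ver:      b[5] & 0x0f,
    }, nil
}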
Update
This is just a toy case. There are many times with protocols like the above where something like the opcode triggers secondary, more complex parsing/vetting. Fuzzing could still find very useful issues within a constraint (e.g. if Prio is 0x00 and Cnt is 0x2F, a secondary parser will error because the delimiter is \).
EDIT
I'm not sure fuzzing is a good fit here. Fuzzing is designed to find unexpected inputs: multi-byte UTF-8 inputs (valid and invalid), negative values, huge values, long lengths, etc. These try to catch "edge" cases.
In your case here:
the Unmarshal input payload must be 6 bytes (it should error otherwise)
you know precisely your internal "edges"
so vanilla testing.T tests may be a better fit here.
Keep it simple.
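For example, a plain round-trip test over the known edge values (a sketch, assuming the Marshal/Unmarshal signatures from the question):

func TestHdrRoundTrip(t *testing.T) {
    cases := []Hdr{
        {}, // zero value
        {Src: 0xffff, Dst: 0xffff, Priotity: 15, Pktcnt: 15, Opcode: 15, Ver: 15}, // 4-bit maxima
        {Src: 1, Dst: 2, Priotity: 3, Pktcnt: 4, Opcode: 5, Ver: 6},
    }
    for _, want := range cases {
        bs, err := Marshal(want)
        if err != nil {
            t.Fatalf("Marshal(%+v): %v", want, err)
        }
        got, err := Unmarshal(bs)
        if err != nil {
            t.Fatalf("Unmarshal(% x): %v", bs, err)
        }
        if got != want { // Hdr has only comparable fields
            t.Errorf("round trip: got %+v, want %+v", got, want)
        }
    }
}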
If you don't want to "waste" a Fuzz input and you know the input constraints of your code, you can try something like this:
func coerce(h *Hdr) (skip bool) {
    h.Priotity &= 0x0f // ensure priority is 0-15
    h.Opcode %= 20     // ensure opcode is 0-19
    return false       // optionally: return true to skip this input
}
and in your test the coerced value can be tested, or skipped (as @jch showed):
import "github.com/google/go-cmp/cmp"
f.Fuzz(func(t *testing.T, src, dst uint16, pri, count, op, ver byte) {
    h := Hdr{src, dst, pri, count, op, ver}
    if coerce(&h) {
        t.Skip()
        return
    }
    bs, err := Marshal(h)    // check err
    h2, err := Unmarshal(bs) // check err
    if !cmp.Equal(h, h2) {
        t.Errorf("Marshal/Unmarshal validation failed for: %+v", h)
    }
})
In order to skip uninteresting results, call t.Skip in your fuzzing function. Something like this:
f.Fuzz(func(t *testing.T, b []byte) {
a, err := Unmarshal(b)
if err != nil {
t.Skip()
return
}
c, err := Marshal(a)
if err != nil || !bytes.Equal(b, c) {
t.Errorf("Eek!")
}
})
I am trying to send and receive protobuf-encoded messages in Go over TCP, where the sender can cancel the write halfway through the operation, and the receiver can correctly handle partial messages.
Note that I use a single TCP connection to send messages of different user-defined types, indefinitely (this is not a one-message-per-connection case).
To explain my question concretely, first I will present how I implement the send/receive without partial writes.
In my program, there are multiple types of messages, defined in a .proto file. I will explain the mechanism for one such message type.
message MessageType {
int64 sender = 1;
int64 receiver = 2;
int64 operation = 3;
string message = 4;
}
Then I use the Go protobuf plugin to generate the stubs.
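(For reference, the generation step looks roughly like this; the .proto file name is an assumption:)

protoc --go_out=. message.proto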
Then on the sender side, the following is how I send.
func send(w *bufio.Writer, code uint8, oriMsg *MessageType) {
    err := w.WriteByte(code)
    data, err := proto.Marshal(oriMsg) // proto.Marshal needs the pointer type
    lengthWritten := len(data)
    var b [8]byte
    bs := b[:8]
    binary.LittleEndian.PutUint64(bs, uint64(lengthWritten))
    _, err = w.Write(bs)
    _, err = w.Write(data)
    err = w.Flush()
    _ = err // error handling elided for brevity
}
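A hedged usage sketch of send (the address, message-type code and field values are placeholders; the field names are the ones the protobuf plugin generates from the .proto above):

conn, err := net.Dial("tcp", "localhost:9000")
if err != nil {
    log.Fatal(err)
}
w := bufio.NewWriter(conn)
send(w, 1, &MessageType{Sender: 1, Receiver: 2, Operation: 3, Message: "hello"})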
Then on the receiver side, the following is how I receive.
var reader *bufio.Reader // wraps the TCP connection
for {
    msgType, err := reader.ReadByte()
    if err != nil {
        panic(err)
    }
    if msgType == 1 || msgType == 2 {
        var b [8]byte
        bs := b[:8]
        _, err := io.ReadFull(reader, bs)
        numBytes := binary.LittleEndian.Uint64(bs)
        data := make([]byte, numBytes)
        length, err := io.ReadFull(reader, data)
        msg := new(MessageType) // an empty message
        err = proto.Unmarshal(data[:length], msg)
        // do something with the message
    } else {
        // unknown message type handler
    }
}
Now my question is, what if the sender aborts his writes in the middle: more concretely,
Case 1: what if the sender writes the message type byte and then aborts? In this case the receiver reads the message type byte and waits for an 8-byte message length that the sender never sends.
Case 2: an extended version of case 1, where the sender sends only the message type byte, aborts sending the message length and marshalled message, and then sends the next message: type byte, length and encoded message. Now on the receiver side everything goes wrong, because the pre-agreed order of fields (type, length and encoded message) is violated.
So my question is, how can I modify the receiver such that it can continue to operate despite the sender violating the pre-agreed order of type:length:encoded-message?
Thanks
Why would the sender abort a message, but then send another message? You mean it's a fully byzantine sender? Or are you preparing for fuzz testing?
If your API contract says that the sender always needs to send a correct message, then the receiver can simply ignore wrong messages, or even close the connection if it sees a violation of the API contract.
If you really need it, here are some ideas for how you could make it work:
start with a unique preamble - but then you will have to make sure this preamble never comes up in the data
add a checksum to the message before sending it to the decoder. So the full packet would be: [msg_type : msg_len : msg : chksum ]. This allows the receiver to check whether it's a correct message or a misformed one.
Also, as the code currently stands, it is quite easy to crash the receiver by sending a length field near the 64-bit maximum. So you should also check that the size is in a useful range; I would limit it to 32 bits...
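Here is a hedged sketch of the checksum idea (the frame layout, function names and 1 MiB length cap are my own; it uses the standard library's hash/crc32):

// writeFrame sends [type:1][len:4][payload][crc32:4], where the
// checksum covers the type byte, the length and the payload.
func writeFrame(w *bufio.Writer, msgType byte, payload []byte) error {
    var hdr [5]byte
    hdr[0] = msgType
    binary.LittleEndian.PutUint32(hdr[1:], uint32(len(payload)))
    crc := crc32.NewIEEE()
    crc.Write(hdr[:])
    crc.Write(payload)
    if _, err := w.Write(hdr[:]); err != nil {
        return err
    }
    if _, err := w.Write(payload); err != nil {
        return err
    }
    if err := binary.Write(w, binary.LittleEndian, crc.Sum32()); err != nil {
        return err
    }
    return w.Flush()
}

// readFrame reads one frame and verifies the checksum; a mismatch means
// the stream is corrupt, and the safest reaction is to close the connection.
func readFrame(r *bufio.Reader) (byte, []byte, error) {
    var hdr [5]byte
    if _, err := io.ReadFull(r, hdr[:]); err != nil {
        return 0, nil, err
    }
    n := binary.LittleEndian.Uint32(hdr[1:])
    if n > 1<<20 { // cap the length so a bogus header cannot exhaust memory
        return 0, nil, fmt.Errorf("frame too large: %d", n)
    }
    payload := make([]byte, n)
    if _, err := io.ReadFull(r, payload); err != nil {
        return 0, nil, err
    }
    var sum uint32
    if err := binary.Read(r, binary.LittleEndian, &sum); err != nil {
        return 0, nil, err
    }
    crc := crc32.NewIEEE()
    crc.Write(hdr[:])
    crc.Write(payload)
    if crc.Sum32() != sum {
        return 0, nil, errors.New("checksum mismatch: stream corrupt")
    }
    return hdr[0], payload, nil
}

Note that a checksum lets the receiver detect a torn frame, but without a reliable preamble it still cannot resynchronize mid-stream, so closing the connection on a mismatch is the practical choice.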
Coming from Python, I am used to being able to write a class with custom methods, instantiate it, save the object, and load it again. I am trying to accomplish something similar in Go. After reading about serialization, I tried the marshal/unmarshal approach.
But the code below does not work: after unmarshalling, method.Method is nil. I want to be able to call method.Method.Calculcate().
Below is my attempt.
package main

import (
"encoding/json"
"fmt"
"log"
)
type Calculator struct {
Method Method `json:"Method"`
}
type method1 struct {
Input string `json:"input"`
}
func NewMethod1(input string) Method {
return &method1{
Input: input,
}
}
type method2 struct {
Input string `json:"input"`
}
func NewMethod2(input string) Method {
return &method2{
Input: input,
}
}
type Method interface {
Calculcate()
}
func (m *method1) Calculcate() {
}
func (m *method2) Calculcate() {
}
func main() {
model := Calculator{
Method: NewMethod1("inputData"),
}
model.Method.Calculcate()
var jsonData []byte
jsonData, err := json.Marshal(model)
if err != nil {
log.Println(err)
}
fmt.Println(string(jsonData))
var method Calculator
err = json.Unmarshal(jsonData, &method)
if err != nil {
log.Println(err)
}
}
I want to load the struct and use the same calling code whether the loaded value is method1 or method2. This means I don't know in advance which of the two I am loading. Below is a little bit of pseudocode explaining what I want.
Calculator :=Load Calculator()
model := Calculator{
Method: NewMethod1("inputData"),
}
model.Method.Calculcate()
The problem is that the JSON decoder does not know what concrete type to use when a field has an interface type (multiple types may implement an interface). It might be obvious in your example, but just think a little further: you send this JSON over to another computer that tries to decode it. The JSON representation does not record the concrete type for interface values (which in this case is *main.method1), so the other end does not know what to instantiate for the interface field Calculator.Method.
If you need this kind of serialization and deserialization, use the encoding/gob package instead. That package also writes type information, so the decoder side at least knows the type name. encoding/gob does not transmit type definitions though, so if the other side has a different version of the type, the process can still fail. This is documented in the package doc:
Interface types are not checked for compatibility; all interface types are treated, for transmission, as members of a single "interface" type, analogous to int or []byte - in effect they're all treated as interface{}. Interface values are transmitted as a string identifying the concrete type being sent (a name that must be pre-defined by calling Register), followed by a byte count of the length of the following data (so the value can be skipped if it cannot be stored), followed by the usual encoding of concrete (dynamic) value stored in the interface value. (A nil interface value is identified by the empty string and transmits no value.) Upon receipt, the decoder verifies that the unpacked concrete item satisfies the interface of the receiving variable.
Also for the encoding/gob package to be able to instantiate a type based on its name, you have to register these types, more specifically values of these types.
This is a working example using encoding/gob:
First let's improve the Calculate() methods to see that they are called:
func (m *method1) Calculcate() {
fmt.Println("method1.Calculate() called, input:", m.Input)
}
func (m *method2) Calculcate() {
fmt.Println("method2.Calculate() called, input:", m.Input)
}
And now the serialization / deserialization process:
// Register the values we use for the Method interface
gob.Register(&method1{})
gob.Register(&method2{})
model := Calculator{
Method: NewMethod1("inputData"),
}
model.Method.Calculcate()
buf := &bytes.Buffer{}
enc := gob.NewEncoder(buf)
if err := enc.Encode(model); err != nil {
log.Println(err)
return
}
fmt.Println(buf.Bytes())
var model2 Calculator
dec := gob.NewDecoder(buf)
if err := dec.Decode(&model2); err != nil {
log.Println(err)
return
}
model2.Method.Calculcate()
This will output (try it on the Go Playground):
method1.Calculate() called, input: inputData
[35 255 129 3 1 1 10 67 97 108 99 117 108 97 116 111 114 1 255 130 0 1 1 1 6 77 101 116 104 111 100 1 16 0 0 0 48 255 130 1 13 42 109 97 105 110 46 109 101 116 104 111 100 49 255 131 3 1 1 7 109 101 116 104 111 100 49 1 255 132 0 1 1 1 5 73 110 112 117 116 1 12 0 0 0 16 255 132 12 1 9 105 110 112 117 116 68 97 116 97 0 0]
method1.Calculate() called, input: inputData
Why is my transaction processor not receiving the request I post via the rest API?
I have built a client and Transaction Processor (TP) in Go, not much different from the XO example. I have successfully got the TP running locally alongside the Sawtooth components, and I am sending batch lists from a separate CLI tool. Currently the apply method in the TP is never hit and does not receive any of my transactions.
EDIT: In order to simplify and clarify my problem as much as possible, I have abandoned my original source code and built a simpler client that sends a transaction for the XO SDK example.
When I run the tool I have built, the REST API successfully receives the request, processes it and returns a 202 response, but appears to omit the id of the batch from the batch statuses URL. Inspecting the logs, it appears the validator never receives the request from the REST API, as illustrated in the logs below.
sawtooth-rest-api-default | [2018-05-16 09:16:38.861 DEBUG route_handlers] Sending CLIENT_BATCH_SUBMIT_REQUEST request to validator
sawtooth-rest-api-default | [2018-05-16 09:16:38.863 DEBUG route_handlers] Received CLIENT_BATCH_SUBMIT_RESPONSE response from validator with status OK
sawtooth-rest-api-default | [2018-05-16 09:16:38.863 INFO helpers] POST /batches HTTP/1.1: 202 status, 213 size, in 0.002275 s
My entire command line tool that sends transactions to a local instance is below.
package main
import (
"bytes"
"crypto/sha512"
"encoding/base64"
"encoding/hex"
"flag"
"fmt"
"io/ioutil"
"log"
"math/rand"
"net/http"
"strings"
"time"
"github.com/hyperledger/sawtooth-sdk-go/protobuf/batch_pb2"
"github.com/hyperledger/sawtooth-sdk-go/protobuf/transaction_pb2"
"github.com/hyperledger/sawtooth-sdk-go/signing"
)
var restAPI string
func main() {
var hostname, port string
flag.StringVar(&hostname, "hostname", "localhost", "The hostname to host the application on (default: localhost).")
flag.StringVar(&port, "port", "8080", "The port to listen on for connection (default: 8080)")
flag.StringVar(&restAPI, "restAPI", "http://localhost:8008", "The address of the sawtooth REST API")
flag.Parse()
s := time.Now()
ctx := signing.CreateContext("secp256k1")
key := ctx.NewRandomPrivateKey()
snr := signing.NewCryptoFactory(ctx).NewSigner(key)
payload := "testing_new,create,"
encoded := base64.StdEncoding.EncodeToString([]byte(payload))
trn := BuildTransaction(
"testing_new",
encoded,
"xo",
"1.0",
snr)
trn.Payload = []byte(encoded)
batchList := &batch_pb2.BatchList{
Batches: []*batch_pb2.Batch{
BuildBatch(
[]*transaction_pb2.Transaction{trn},
snr),
},
}
serialised := batchList.String()
fmt.Println(serialised)
resp, err := http.Post(
restAPI+"/batches",
"application/octet-stream",
bytes.NewReader([]byte(serialised)),
)
if err != nil {
fmt.Println("Error")
fmt.Println(err.Error())
return
}
defer resp.Body.Close()
fmt.Println(resp.Status)
body, err := ioutil.ReadAll(resp.Body)
fmt.Println(string(body))
elapsed := time.Since(s)
log.Printf("Creation took %s", elapsed)
resp.Close = true
}
// BuildTransaction will build a transaction based on the information provided
func BuildTransaction(ID, payload, familyName, familyVersion string, snr *signing.Signer) *transaction_pb2.Transaction {
publicKeyHex := snr.GetPublicKey().AsHex()
payloadHash := Hexdigest(string(payload))
addr := Hexdigest(familyName)[:6] + Hexdigest(ID)[:64]
transactionHeader := &transaction_pb2.TransactionHeader{
FamilyName: familyName,
FamilyVersion: familyVersion,
SignerPublicKey: publicKeyHex,
BatcherPublicKey: publicKeyHex,
Inputs: []string{addr},
Outputs: []string{addr},
Dependencies: []string{},
PayloadSha512: payloadHash,
Nonce: GenerateNonce(),
}
header := transactionHeader.String()
headerBytes := []byte(header)
headerSig := hex.EncodeToString(snr.Sign(headerBytes))
return &transaction_pb2.Transaction{
Header: headerBytes,
HeaderSignature: headerSig[:64],
Payload: []byte(payload),
}
}
// BuildBatch will build a batch using the provided transactions
func BuildBatch(trans []*transaction_pb2.Transaction, snr *signing.Signer) *batch_pb2.Batch {
ids := []string{}
for _, t := range trans {
ids = append(ids, t.HeaderSignature)
}
batchHeader := &batch_pb2.BatchHeader{
SignerPublicKey: snr.GetPublicKey().AsHex(),
TransactionIds: ids,
}
return &batch_pb2.Batch{
Header: []byte(batchHeader.String()),
HeaderSignature: hex.EncodeToString(snr.Sign([]byte(batchHeader.String())))[:64],
Transactions: trans,
}
}
// Hexdigest will hash the string and return the result as hex
func Hexdigest(str string) string {
hash := sha512.New()
hash.Write([]byte(str))
hashBytes := hash.Sum(nil)
return strings.ToLower(hex.EncodeToString(hashBytes))
}
// GenerateNonce will generate a random string to use
func GenerateNonce() string {
return randStringBytesMaskImprSrc(16)
}
const (
letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
letterIdxBits = 6 // 6 bits to represent a letter index
letterIdxMask = 1<<letterIdxBits - 1 // All 1-bits, as many as letterIdxBits
letterIdxMax = 63 / letterIdxBits // # of letter indices fitting in 63 bits
)
func randStringBytesMaskImprSrc(n int) string {
rand.Seed(time.Now().UnixNano())
b := make([]byte, n)
// A rand.Int63() generates 63 random bits, enough for letterIdxMax letters!
for i, cache, remain := n-1, rand.Int63(), letterIdxMax; i >= 0; {
if remain == 0 {
cache, remain = rand.Int63(), letterIdxMax
}
if idx := int(cache & letterIdxMask); idx < len(letterBytes) {
b[i] = letterBytes[idx]
i--
}
cache >>= letterIdxBits
remain--
}
return string(b)
}
There were a number of issues with this; hopefully I can explain each in isolation to help shed light on the ways in which these transactions can fail.
Transaction Completeness
As @Frank C. comments above, my transaction headers were missing a couple of values: the addresses and also the nonce.
// Hexdigest will hash the string and return the result as hex
func Hexdigest(str string) string {
hash := sha512.New()
hash.Write([]byte(str))
hashBytes := hash.Sum(nil)
return strings.ToLower(hex.EncodeToString(hashBytes))
}
addr := Hexdigest(familyName)[:6] + Hexdigest(ID)[:64]
transactionHeader := &transaction_pb2.TransactionHeader{
FamilyName: familyName,
FamilyVersion: familyVersion,
SignerPublicKey: publicKeyHex,
BatcherPublicKey: publicKeyHex,
Inputs: []string{addr},
Outputs: []string{addr},
Dependencies: []string{},
PayloadSha512: payloadHash,
Nonce: uuid.NewV4().String(),
}
Tracing
The next thing was to enable tracing in the batch.
return &batch_pb2.Batch{
Header: []byte(batchHeader.String()),
HeaderSignature: batchHeaderSignature,
Transactions: trans,
Trace: true, // Set this flag to true
}
With the above flag set, the REST API will decode the message and print additional logging information, and the Validator component will also output more useful logging.
400 Bad Request
{
"error": {
"code": 35,
"message": "The protobuf BatchList you submitted was malformed and could not be read.",
"title": "Protobuf Not Decodable"
}
}
The above was output by the REST API once tracing was turned on. This proved that something was awry with the received data. Why was this the case?
Following some valuable advice from the Sawtooth chat rooms, I tried to deserialise my batches using the SDK for another language.
Deserialising
To test deserialising the batches in another SDK, I built a web API in Python that I could easily send my batches to, which would attempt to deserialise them.
from flask import Flask, request
from protobuf import batch_pb2
app = Flask(__name__)
@app.route("/batches", methods=['POST'])
def deserialise():
received = request.data
print(received)
print("\n")
print(''.join('{:02x}'.format(x) for x in received))
batchlist = batch_pb2.BatchList()
batchlist.ParseFromString(received)
return ""
if __name__ == '__main__':
app.run(host="0.0.0.0", debug=True)
After sending my batch to this I received the below error.
RuntimeWarning: Unexpected end-group tag: Not all data was converted
This was obviously what was going wrong with my batches, but seeing as this was all being handled by the Hyperledger Sawtooth Go SDK, I decided to move to Python and built my application with that.
[Edit]
I managed to get the documentation updated with information on writing a client in Go: https://sawtooth.hyperledger.org/docs/core/nightly/master/app_developers_guide/go_sdk.html
[Original Answer]
Answering the question a bit late; I hope this will help others who are facing similar issues with a Go client.
I was recently trying out a sample Go client for Sawtooth and faced similar issues to the ones asked here; currently it is hard to debug what has gone wrong in the composed batch list. The problem is the lack of sample code and documentation for using the Go SDK in client application development.
Here's a link for the sample Go code that's working: https://github.com/arsulegai/contentprotection/tree/master/ContentProtectionGoClient
Please have a look at the file src/client/client.go, which composes a single Transaction and Batch, puts it into a batch list and sends it to the validator. The method I followed for debugging was to compose the expected batch list (for a specific transaction) in another language and compare each step with the result of the equivalent step in the Go code.
Coming to the question: along with the missing information in the composed transaction header, the other problem could be the way the protobuf messages are serialized. Please use the protobuf library for serializing, rather than the generated String() method, which produces a textual representation instead of the wire encoding.
Example (extending the answer from @danielcooperxyz):
transactionHeader := &transaction_pb2.TransactionHeader{
FamilyName: familyName,
FamilyVersion: familyVersion,
SignerPublicKey: publicKeyHex,
BatcherPublicKey: publicKeyHex,
Inputs: []string{addr},
Outputs: []string{addr},
Dependencies: []string{},
PayloadSha512: payloadHash,
Nonce: uuid.NewV4().String(),
}
transactionHeaderSerializedForm, _ := proto.Marshal(transactionHeader)
(protobuf library methods can be found in github.com/golang/protobuf/proto)
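Applying the same rule to the batch list itself, the batchList.String() call in the question's client would become a proto.Marshal call before posting (a sketch based on the question's code):

serialised, err := proto.Marshal(batchList)
if err != nil {
    // handle error
}
resp, err := http.Post(
    restAPI+"/batches",
    "application/octet-stream",
    bytes.NewReader(serialised),
)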
http://play.golang.org/p/RQXB-hCq_M
type Header struct {
ByteField1 uint32 // 4 bytes
ByteField2 [32]uint8 // 32 bytes
ByteField3 [32]uint8 // 32 bytes
SkipField1 []SomethingElse
}
func main() {
var header Header
headerBytes := make([]byte, 68) // 4 + 32 + 32 == 68
headerBuf := bytes.NewBuffer(headerBytes)
err := binary.Read(headerBuf, binary.LittleEndian, &header)
if err != nil {
fmt.Println(err)
}
fmt.Println(header)
}
I don't want to read from the buffer into the header struct in chunks. I want to read into the byte fields in one step but skip non-byte fields. If you run the program at the given link (http://play.golang.org/p/RQXB-hCq_M) you will find that binary.Read throws an error: binary.Read: invalid type []main.SomethingElse
Is there a way that I can skip this field?
Update:
Based on dommage's answer, I decided to embed the fields inside the struct instead, like this:
http://play.golang.org/p/i0xfmnPx4A
You can cause a field to be skipped by prefixing its name with _ (underscore).
But: binary.Read() requires all fields to have a known size. If SkipField1 is of variable or unknown length then you have to leave it out of your struct.
You could then use io.Reader.Read() to manually skip over the skip field portion of your input and then call binary.Read() again.
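A minimal sketch of both points (the 2-byte padding field and the 8-byte skip length are invented for illustration, since SkipField1's real encoding is not specified):

type FixedHeader struct {
    ByteField1 uint32    // 4 bytes
    _          [2]uint8  // blank field: binary.Read reads and discards these bytes
    ByteField2 [32]uint8 // 32 bytes
    ByteField3 [32]uint8 // 32 bytes
}

func readHeader(r io.Reader) (FixedHeader, error) {
    var h FixedHeader
    if err := binary.Read(r, binary.LittleEndian, &h); err != nil {
        return h, err
    }
    // A variable-length field like SkipField1 stays out of the struct;
    // skip its bytes manually before the next fixed-size read.
    if _, err := io.CopyN(io.Discard, r, 8); err != nil {
        return h, err
    }
    return h, nil
}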