gRPC context on the client side - Go

I am building a client/server system in Go, using gRPC and protobuf (and with a gRPC gateway to REST).
I use metadata in the context on the server side to carry authentication data from the client, and that works perfectly well.
Now, I'd like the server to set some metadata keys/values so that the client can get them along with the response. How can I do that? Using SetHeader and SendHeader? Ideally, I'd like every single response from the server to include that metadata (something like a UnaryInterceptor, but applied to the response rather than the request?)
Here is the code for the server and for the client.

I finally found my way: https://github.com/grpc/grpc-go/blob/master/Documentation/grpc-metadata.md
So basically, grpc.SetHeader(), grpc.SendHeader() and grpc.SetTrailer() are exactly what I was looking for. On the client side, the grpc.Header() and grpc.Trailer() call options need to be passed to the RPC call; their argument is a metadata.MD object to be filled in.
On the client side, define your receiving metadata:
var header, trailer metadata.MD
Then, pass them to the SomeRPCCall() unary RPC:
response, err := client.SomeRPCCall(
    context.Background(),
    &proto.MyMessage{},
    grpc.Header(&header),
    grpc.Trailer(&trailer),
)
And now, you can check what's in your metadata:
for key, value := range header {
    fmt.Printf("%s => %s\n", key, value)
}
for key, value := range trailer {
    fmt.Printf("%s => %s\n", key, value)
}
On the server side, you can:
force the header to be sent right after the RPC is received (but before it is processed):
grpc.SendHeader(ctx, metadata.New(map[string]string{"my-key": "my-value"}))
or set & send the metadata at the end of the RPC process (along with the Status):
grpc.SetTrailer(ctx, metadata.New(map[string]string{"my-key": "my-value"}))
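As for attaching the metadata to every single response, a unary server interceptor can do it in one place. Here is a minimal sketch (my own illustration, not from the linked docs; the key names are placeholders) that sets the same header and trailer on every unary RPC:
package main

import (
    "context"

    "google.golang.org/grpc"
    "google.golang.org/grpc/metadata"
)

// metadataInterceptor attaches the same header and trailer metadata to every
// unary response. The key/value pairs are placeholders.
func metadataInterceptor(
    ctx context.Context,
    req interface{},
    info *grpc.UnaryServerInfo,
    handler grpc.UnaryHandler,
) (interface{}, error) {
    // Header metadata is sent before the first response message.
    if err := grpc.SetHeader(ctx, metadata.Pairs("my-key", "my-value")); err != nil {
        return nil, err
    }
    // Trailer metadata is sent at the very end of the RPC, along with the status.
    if err := grpc.SetTrailer(ctx, metadata.Pairs("my-trailer-key", "my-trailer-value")); err != nil {
        return nil, err
    }
    return handler(ctx, req)
}

func main() {
    s := grpc.NewServer(grpc.UnaryInterceptor(metadataInterceptor))
    _ = s // register your services and call s.Serve(listener) as usual
}
Individual handlers registered on that server then no longer need to set the metadata themselves.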

Related

Setting status code in grpc server method call

How do I set the response status code in a gRPC method in Go? For example, let's say I have the following gRPC method:
func (i *ItemServerImp) Register(ct context.Context, it *item.RegisterItemRequest) (*item.RegisterItemReply, error) {
}
How do I set the response status to a 200 or a 400 based on the input or some processing? I had a look around and could not find a proper way to do this.
However, I did find the following, which says the status code can be set: https://chromium.googlesource.com/external/github.com/grpc/grpc/+/refs/heads/chromium-deps/2016-07-27/doc/statuscodes.md
You can return a gRPC error using the google.golang.org/grpc/status package as follows:
return nil, status.Error(codes.InvalidArgument, "Incorrect request argument")
The different status codes are available in the google.golang.org/grpc/codes package.
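For illustration, here is a minimal sketch of the Register handler from the question returning different codes; the GetName field is made up for the example, and returning a nil error yields codes.OK, the gRPC analogue of a 200:
func (i *ItemServerImp) Register(ctx context.Context, it *item.RegisterItemRequest) (*item.RegisterItemReply, error) {
    // Hypothetical validation: reject bad input with the gRPC analogue of a 400.
    if it.GetName() == "" {
        return nil, status.Error(codes.InvalidArgument, "item name must not be empty")
    }
    // ... do the actual registration ...
    // Returning a nil error yields codes.OK, the gRPC analogue of a 200.
    return &item.RegisterItemReply{}, nil
}
Note that gRPC status codes are not HTTP status codes; if the service is also exposed through grpc-gateway, codes.InvalidArgument is translated to an HTTP 400 Bad Request by the standard mapping.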

How do I encode the length-32 []byte nonce from Go to hit Rust's Jormungandr gRPC server?

If you look at how this go-cardano-client is making its handshake request payload:
https://github.com/gocardano/go-cardano-client/blob/master/shelley/handshake.go#L64
versionTable.Add(cbor.NewPositiveInteger8(1), cbor.NewPositiveInteger(764824073))
versionTable.Add(cbor.NewPositiveInteger16(32770), cbor.NewPositiveInteger(764824073))
versionTable.Add(cbor.NewPositiveInteger16(32771), cbor.NewPositiveInteger(764824073))
But the gRPC-generated struct is:
type HandshakeRequest struct {
    // Nonce for the server to authenticate its node ID with.
    Nonce []byte `protobuf:"bytes,1,opt,name=nonce,proto3" json:"nonce,omitempty"`
}
And this []byte needs to come through for the nonce referenced:
https://github.com/input-output-hk/jormungandr/blob/master/jormungandr/src/network/service.rs#L60
Its length is 32:
https://github.com/input-output-hk/jormungandr/blob/master/jormungandr/src/network/client/connect.rs#L58
https://github.com/input-output-hk/jormungandr/blob/6f324b706a13273afb6a0808e589735020bb59da/jormungandr/src/network/mod.rs#L73
So this line in the Go code:
versionTable.Add(cbor.NewPositiveInteger8(1), cbor.NewPositiveInteger(764824073))
can't be a length-32 []byte, right? How do I encode this:
req := HandshakeRequest{}
req.Nonce = []byte{}
for i := 0; i < 32; i++ {
    req.Nonce = append(req.Nonce, byte(rand.Intn(256)))
}
into this versionTable "params"?
see also and proto
Edit: You seem to assume that the handshake from gocardano/go-cardano-client and the one described in node.proto are somehow related to the same implementation. Actually, I don't think they are.
The TCP-based handshake follows the Shelley protocol specs and sends a payload with the encoded versionTable. The gRPC-based HandshakeRequest instead is, as you also considered, just a nonce. There's nothing in the proto schema that hints to the Shelley protocol. The comments on the Nonce field also say that quite explicitly: "Nonce for the server to authenticate its node ID with."
So it would be a bit strange to assume that this nonce and the versionTable payload have anything in common at all.
Edit 2: In addition, it seems the "Jormungandr" Rust node implementation does not support Shelley at all, so when you say you can't connect to the nodes in the relay topology, I think you shouldn't look for answers in the Jormungandr repository. Instead, I think the relays run the Haskell implementations of the Ouroboros network.
Now as for why you can't connect: the go-cardano client panics on some unchecked type assertions, because after the QueryTip Shelley message chainSyncBlocks.RequestNext, the relay servers respond with a different mini-protocol altogether, transactionSubmission.msgRequestTxIds, as shown by running the client with TCP and tracing the messages:
MiniProtocol: 4 / MessageMode: 1 / f1bb7f80800400058400f50003
Array: [4]
PositiveInteger8(0)
False
PositiveInteger8(0)
PositiveInteger8(3)
You'll also have the same result when sending a sync chain request with MiniProtocol 2 (ChainSyncHeaders). I checked the Shelley protocol specs but couldn't find an explicit indication about why the server would switch protocols... Unfortunately I'm not familiar enough with Haskell to gain further insight from the Ouroboros sources.
In the unexpected case that the nonce in the proto HandshakeRequest is indeed related to the Shelley protocol, its content might be the CBOR array in your linked Cardano client (speculation follows):
arr := cbor.NewArray()
arr.Add(cbor.NewPositiveInteger8(handshakeMessagePropose))
versionTable := cbor.NewMap()
arr.Add(versionTable)
versionTable.Add(...)
versionTable.Add(...)
versionTable.Add(...)
return []cbor.DataItem{arr}
By inspecting the client where the handshake request is used, we can see:
messageResponse, err := c.queryNode(multiplex.MiniProtocolIDMuxControl, handshakeRequest())
and then in queryNode:
sdu := multiplex.NewServiceDataUnit(miniProtocol, multiplex.MessageModeInitiator, dataItems)
...
c.socket.Write(sdu.Bytes())
The sdu.Bytes() method serializes the entire payload, in particular:
// EncodeList return CBOR representation for each item in the list
func EncodeList(list []DataItem) []byte {
    result := []byte{}
    for _, item := range list {
        result = append(result, item.EncodeCBOR()...)
    }
    return result
}
The EncodeCBOR() method is implemented by both the Array and the Map used in the handshakeRequest() []cbor.DataItem function. Note that the handshake function returns a slice []cbor.DataItem containing one Array item, which in turn contains (as documented) the handshakeMessagePropose and the versionTable map.
If you carefully follow how the serialization proceeds, you'll eventually obtain the breakdown of the byte array — hereafter in decimal:
[130 0 163 1 26 45 150 74 9 25 128 2 26 45 150 74 9 25 128 3 26 45 150 74 9]
Where:
130 is the array data item prefix
0 is the handshakeMessagePropose
163 is the map data item prefix
and the subsequent bytes are the versionTable map
It is 25 bytes in total. At this point, I don't know if the multiplex wrapper built in the queryNode function is part of the nonce or not. With the full wrapper, the length of the serialized byte array goes up to 33. So excluding some control bits or whatnot, this might be what you're supposed to write into the HandshakeRequest.Nonce.
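Purely as a speculative sketch of that last possibility, reusing the cbor package from the go-cardano-client (import path and constant value assumed from the breakdown above) together with the generated HandshakeRequest from the question, it would amount to something like:
// Speculative: mirrors the versionTable from handshake.go; 0 is the
// handshakeMessagePropose value seen in the byte breakdown above.
arr := cbor.NewArray()
arr.Add(cbor.NewPositiveInteger8(0))
versionTable := cbor.NewMap()
versionTable.Add(cbor.NewPositiveInteger8(1), cbor.NewPositiveInteger(764824073))
versionTable.Add(cbor.NewPositiveInteger16(32770), cbor.NewPositiveInteger(764824073))
versionTable.Add(cbor.NewPositiveInteger16(32771), cbor.NewPositiveInteger(764824073))
arr.Add(versionTable)

req := HandshakeRequest{
    Nonce: cbor.EncodeList([]cbor.DataItem{arr}), // the 25-byte payload described above
}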

Pushing data to websocket browser client in Lua

I want to use a NodeMCU device (Lua based top level) to act as a websocket server to 1 or more browser clients.
Luckily, there is code to do this here: NodeMCU Websocket Server
(courtesy of #creationix and/or #moononournation)
This works as described and I am able to send a message from the client to the NodeMCU server, which then responds based on the received message. Great.
My questions are:
How can I send messages to the client without it having to be sent as a response to a client request (standalone sending of data)? When I try to call socket.send(), socket is not found as a variable, which I understand, but I cannot work out how to do it! :(
Why does the decode() function output the extra variable? What is this for? I'm assuming it will be for packet overflow, but I can never seem to make it return anything, regardless of my message length.
In the listen method, why has the author added a queuing system? Is this essential, or is it for applications that may receive multiple simultaneous messages? Ideally, I'd like to remove it.
I have simplified the code as below:
(excluding the decode() and encode() functions - please see the link above for the full script)
net.createServer(net.TCP):listen(80, function(conn)
    local buffer = false
    local socket = {}
    local queue = {}
    local waiting = false
    local function onSend()
        if queue[1] then
            local data = table.remove(queue, 1)
            return conn:send(data, onSend)
        end
        waiting = false
    end
    function socket.send(...)
        local data = encode(...)
        if not waiting then
            waiting = true
            conn:send(data, onSend)
        else
            queue[#queue + 1] = data
        end
    end
    conn:on("receive", function(_, chunk)
        if buffer then
            buffer = buffer .. chunk
            while true do
                local extra, payload, opcode = decode(buffer)
                if opcode==8 then
                    print("Websocket client disconnected")
                end
                --print(type(extra), payload, opcode)
                if not extra then return end
                buffer = extra
                socket.onmessage(payload, opcode)
            end
        end
        local _, e, method = string.find(chunk, "([A-Z]+) /[^\r]* HTTP/%d%.%d\r\n")
        local key, name, value
        for name, value in string.gmatch(chunk, "([^ ]+): *([^\r]+)\r\n") do
            if string.lower(name) == "sec-websocket-key" then
                key = value
                break
            end
        end
        if method == "GET" and key then
            acceptkey=crypto.toBase64(crypto.hash("sha1", key.."258EAFA5-E914-47DA-95CA-C5AB0DC85B11"))
            conn:send(
                "HTTP/1.1 101 Switching Protocols\r\n"..
                "Upgrade: websocket\r\nConnection: Upgrade\r\n"..
                "Sec-WebSocket-Accept: "..acceptkey.."\r\n\r\n",
                function ()
                    print("New websocket client connected")
                    function socket.onmessage(payload,opcode)
                        socket.send("GOT YOUR DATA", 1)
                        print("PAYLOAD = "..payload)
                        --print("OPCODE = "..opcode)
                    end
                end)
            buffer = ""
        else
            conn:send(
                "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: 12\r\n\r\nHello World!",
                conn.close)
        end
    end)
end)
I can only answer one question; the others may be better suited for the library author. Besides, SO is a format where you normally ask one question at a time.
How can I send messages to the client without it having to be sent as a response to a client request (standalone sending of data)?
You can't. Without the client contacting the server first and establishing a socket connection, the server wouldn't know where to send the messages. Even with SSE (server-sent events), it's the client that first initiates a connection to the server.

How do I know which peer did the transaction in Hyperledger Fabric (Go)?

I am working on getting transaction ID info that will give the peer details for the transaction. Currently, I am able to retrieve the history for a key, which gives me the list of transactions committed to that key.
My code:
historyRes, err := stub.GetHistoryForKey(userNameIndexKey)
if err != nil {
    return shim.Error(fmt.Sprintf("Unable to get History key from the ledger: %v", err))
}
for historyRes.HasNext() {
    history, errIt := historyRes.Next()
    if errIt != nil {
        return shim.Error(fmt.Sprintf("Unable to retrieve history in the ledger: %v", errIt))
    }
    deleted := history.GetIsDelete()
    ds := strconv.FormatBool(deleted)
    fmt.Println("History TxId = "+history.GetTxId()+" -- Delete = "+ds)
}
Output
History TxId = 78c8d17c668d7a9df8373fd85df4fc398388976a1c642753bbf73abc5c648dd8 -- Deleted = false
History TxId = 102bbb64a7ca93367334a8c98f1f7be17e6a8d5277f0167c73da47072d302fa3 -- Deleted = true
But I don't know which peer did this transaction. Is there any API available in fabric-sdk-go to retrieve peer info for a transaction ID?
Please suggest a solution.
The call stub.GetHistoryForKey(userNameIndexKey) will query the state database and not the ledger (channel). The information about the identity that made the transaction is stored in the block.
I have implemented the same thing with the Node.js SDK. However, the Go SDK contains similar API calls too. The following steps worked for me:
Using your SDK, get the transactionId
Use the SDK function for querying block by transactionId. References here.
At this step, you'll get the block. Now the identity of the submitter is located within this block. Hints: Payload -> Header -> Signature Header -> Creator -> IdBytes.
These identity bytes are the X509 certs of the submitter. You can read the certificate info to find out which member submitted this transaction. The subject and OUs will indicate the organization of the peer that did the transaction.
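A rough Go sketch of steps 2 and 3, assuming a block obtained through the Go SDK's ledger client (for example QueryBlockByTxID) and the protobuf message types from fabric-protos-go (exact import paths differ between SDK versions):
import (
    "crypto/x509"
    "encoding/pem"
    "fmt"

    "github.com/golang/protobuf/proto"
    "github.com/hyperledger/fabric-protos-go/common"
    "github.com/hyperledger/fabric-protos-go/msp"
)

// printSubmitters walks each transaction in a block down to the creator identity:
// Envelope -> Payload -> Header -> SignatureHeader -> Creator -> IdBytes (PEM cert).
func printSubmitters(block *common.Block) error {
    for _, envBytes := range block.Data.Data {
        env := &common.Envelope{}
        if err := proto.Unmarshal(envBytes, env); err != nil {
            return err
        }
        payload := &common.Payload{}
        if err := proto.Unmarshal(env.Payload, payload); err != nil {
            return err
        }
        sigHdr := &common.SignatureHeader{}
        if err := proto.Unmarshal(payload.Header.SignatureHeader, sigHdr); err != nil {
            return err
        }
        creator := &msp.SerializedIdentity{}
        if err := proto.Unmarshal(sigHdr.Creator, creator); err != nil {
            return err
        }
        // IdBytes holds the PEM-encoded X509 certificate of the submitter.
        pemBlock, _ := pem.Decode(creator.IdBytes)
        if pemBlock == nil {
            return fmt.Errorf("creator identity is not PEM encoded")
        }
        cert, err := x509.ParseCertificate(pemBlock.Bytes)
        if err != nil {
            return err
        }
        fmt.Printf("MSP: %s, Subject: %s\n", creator.Mspid, cert.Subject)
    }
    return nil
}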

routing files with zeromq (jeromq)

I'm trying to implement a "file dispatcher" on zmq (actually jeromq, I'd rather avoid jni).
What I need is to load balance incoming files to processors:
each file is handled only by one processor
files are potentially large so I need to manage the file transfer
Ideally I would like something like https://github.com/zeromq/filemq but
with a push/pull behaviour rather than publish/subscribe
being able to handle the received file rather than writing it to disk
My idea is to use a mix of taskvent/tasksink and asyncsrv samples.
Client side:
one PULL socket to be notified of a file to be processed
one DEALER socket to handle the (async) file transfer chunk by chunk
Server side:
one PUSH socket to dispatch incoming file (names)
one ROUTER socket to handle file requests
a few DEALER workers managing the file transfers for clients and connected to the router via an inproc proxy
My first question is: does this seem like the right way to go? Anything simpler maybe?
My second question is: my current implementation gets stuck on sending out the actual file data.
clients are notified by the server, and issue a request.
the server worker gets the request, and writes the response back to the inproc queue but the response never seems to go out of the server (can't see it in wireshark) and the client is stuck on the poller.poll awaiting the response.
It's not a matter of sockets being full and dropping data, I'm starting with very small files sent in one go.
Any insight?
Thanks!
==================
Following raffian's advice, I simplified my code, removing the extra push/pull socket (it does make sense now that you say it).
I'm left with the "non-working" socket!
Here's my current code. It has many flaws that are out of scope for now (client ID, next chunk, etc.).
For now, I'm just trying to have both sides talking to each other roughly in this sequence:
Server
object FileDispatcher extends App
{
  val context = ZMQ.context(1)
  // server is the frontend that pushes filenames to clients and receives requests
  val server = context.socket(ZMQ.ROUTER)
  server.bind("tcp://*:5565")
  // backend handles clients requests
  val backend = context.socket(ZMQ.DEALER)
  backend.bind("inproc://backend")
  // files to dispatch given in arguments
  args.toList.foreach { filepath =>
    println(s"publish $filepath")
    server.send("newfile".getBytes(), ZMQ.SNDMORE)
    server.send(filepath.getBytes(), 0)
  }
  // multithreaded server: router hands out requests to DEALER workers via a inproc queue
  val NB_WORKERS = 1
  val workers = List.fill(NB_WORKERS)(new Thread(new ServerWorker(context)))
  workers foreach (_.start)
  ZMQ.proxy(server, backend, null)
}

class ServerWorker(ctx: ZMQ.Context) extends Runnable
{
  override def run()
  {
    val worker = ctx.socket(ZMQ.DEALER)
    worker.connect("inproc://backend")
    while (true)
    {
      val zmsg = ZMsg.recvMsg(worker)
      zmsg.pop // drop inner queue envelope (?)
      val cmd = zmsg.pop //cmd is used to continue/stop
      cmd.toString match {
        case "get" =>
          val file = zmsg.pop.toString
          println(s"clientReq: cmd: $cmd , file:$file")
          //1- brute force: ignore cmd and send full file in one go!
          worker.send("eof".getBytes, ZMQ.SNDMORE) //header indicates this is the last chunk
          val bytes = io.Source.fromFile(file).mkString("").getBytes //dirty read, for testing only!
          worker.send(bytes, 0)
          println(s"${bytes.size} bytes sent for $file: "+new String(bytes))
        case x => println("cmd "+x+" not implemented!")
      }
    }
  }
}
Client
object FileHandler extends App
{
  val context = ZMQ.context(1)
  // client is notified of new files then fetches file from server
  val client = context.socket(ZMQ.DEALER)
  client.connect("tcp://*:5565")
  val poller = new ZMQ.Poller(1) //"poll" responses
  poller.register(client, ZMQ.Poller.POLLIN)
  while (true)
  {
    poller.poll
    val zmsg = ZMsg.recvMsg(client)
    val cmd = zmsg.pop
    val data = zmsg.pop
    // header is the command/action
    cmd.toString match {
      case "newfile" => startDownload(data.toString) // message content is the filename to fetch
      case "chunk" => gotChunk(data.toString, zmsg.pop.getData) //filename, chunk
      case "eof" => endDownload(data.toString, zmsg.pop.getData) //filename, last chunk
    }
  }
  def startDownload(filename: String)
  {
    println("got notification: start download for "+filename)
    client.send("get".getBytes, ZMQ.SNDMORE) //command header
    client.send(filename.getBytes, 0)
  }
  def gotChunk(filename: String, bytes: Array[Byte])
  {
    println("got chunk for "+filename+": "+new String(bytes)) //callback the user here
    client.send("next".getBytes, ZMQ.SNDMORE)
    client.send(filename.getBytes, 0)
  }
  def endDownload(filename: String, bytes: Array[Byte])
  {
    println("got eof for "+filename+": "+new String(bytes)) //callback the user here
  }
}
On the client, you don't need PULL with DEALER.
DEALER is PUSH and PULL combined, so use DEALER only; your code will be simpler.
Same goes for the server: unless you're doing something special, you don't need PUSH with ROUTER; ROUTER is bidirectional.
"the server worker gets the request, and writes the response back to the inproc queue but the response never seems to go out of the server (can't see it in wireshark) and the client is stuck on the poller.poll awaiting the response."
Code Problems
In the server, you're dispatching files with args.toList.foreach before starting the proxy; this is probably why nothing is leaving the server. Start the proxy first, then use it. Also, once you call ZMQ.proxy(..), the call blocks indefinitely, so you'll need a separate thread to send the filepaths.
The client may have an issue with the poller. The typical pattern for polling is:
ZMQ.Poller items = new ZMQ.Poller(1);
items.register(receiver, ZMQ.Poller.POLLIN);
while (true) {
    items.poll(TIMEOUT);
    if (items.pollin(0)) {
        message = receiver.recv(0);
        // handle the message here
    }
}
In the above code: 1) poll until timeout, 2) then check for messages, and if available, 3) get them with receiver.recv(0). But in your code, you poll and then drop into recv() without checking. You need to check whether the poller has messages for that polled socket before calling recv(); otherwise, the receiver will hang if there are no messages.
