SMB protocol: linking a request to a response

I am implementing a SMB protocol decoder.
I don’t understand how, when reading a file/named pipe, the client understands that the response is associated with one of many open files/named pipes.
The client sends the file descriptor (file id), but the server does not send it back in the response. The server does not send any data linking the request to the response.
You can verify this by reading about SMB operations such as SMB_COM_READ and TRANS_READ_NMPIPE in the MS-CIFS specification.
If there were several read requests or even several files/named pipes open, then how does the client understand which request the server responded to?

One can correlate requests and responses by combining a map keyed on file_id with a queue of outstanding requests; each response is then decoded against the head of that queue.
(Binding requests) Each request is stored in a map keyed by file_id, so when a subsequent request for the same file arrives, all previous requests for that file can be looked up.
(Binding requests and responses) Each request is also placed in a queue together with its file_id; requests to the same file carry equal file_ids. When a response arrives, the element at the head of the queue is popped and its file_id is used to decode the response.
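A minimal sketch of this queue-based matching, assuming the server answers read requests on a connection in the order they were sent; the class and field names are invented for illustration and are not part of MS-CIFS:

```python
from collections import deque

class SmbReadMatcher:
    """Matches SMB read responses to requests. Assumes in-order replies
    on one connection; all names here are illustrative."""

    def __init__(self):
        self.pending = deque()   # FIFO of outstanding read requests
        self.by_file_id = {}     # file_id -> all requests seen for that file

    def on_request(self, file_id, offset, length):
        req = {"file_id": file_id, "offset": offset, "length": length}
        self.pending.append(req)
        self.by_file_id.setdefault(file_id, []).append(req)
        return req

    def on_response(self, data):
        # The response carries no file id, so attribute it to the
        # oldest outstanding request.
        req = self.pending.popleft()
        return req["file_id"], req["offset"], data

matcher = SmbReadMatcher()
matcher.on_request(0x4001, offset=0, length=512)
matcher.on_request(0x4002, offset=128, length=256)
fid, off, _ = matcher.on_response(b"...")   # attributed to the first request
```

If the server can reply out of order, FIFO matching like this breaks down, which is exactly why the decoder needs some correlation strategy in the first place.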

Related

Why Not Get Channel Info From HTTP Header At The Beginning When Using WebSocket

Recently I adopted "github.com/gorilla/websocket" as the underlying WebSocket implementation and the Gin web framework for my project.
I googled many examples and found that many people use handshakes for their WebSocket channels: they let the client register the channel after the connection is upgraded to WebSocket, and then get the client's identity information from the registration request.
My question is:
From the server side, the server is able to get registration information, such as userId, userName, etc., from the HTTP headers (before the connection is upgraded to WebSocket); the only thing to do is let the client put the identity information into the HTTP headers. Why don't people do it this way, instead of using the handshake, which is much more troublesome?
update
The client is able to open many channels, so we have to register every channel to track their usage. And because we use each channel to send multiple types of messages, distinguished by a cmd field in the body, we still need to get channel-usage information from WebSocket data for all communications other than the initial registration, even though we can get identity information from the headers. To keep consistency between all messages, we register the channel via WebSocket data rather than the HTTP header. Is that the reason?

Understanding HTTP2 server push and multiplexing

From what I understand, multiplexing means that the client only needs to create one TCP connection to the server and can send multiple requests at the same time, without having to wait for the response to one request before starting another. So if I send three requests at the same time, there are also three response streams.
And for server push, the client sends one request to the server; the server then guesses that the client needs other resources (also called promises) besides the one it requested, so it sends push-promise streams hinting to the client the URLs of the additional resources. The client may choose to request those additional resources or not.
My questions are:
For any response sent from the server to the client, does a request have to be initiated first? I mean, if I created a connection to the server and did not send any request, could I be getting responses from the server via server push? In multiplexing, I get the same number of responses as requests. In server push, I can get multiple responses for one request. So does there always have to be a request first?
In server push, when a promise stream is sent by the server to the client containing the URL of the additional resources, does that mean the server will only push the additional resources when the client accepts the promises?
[the server] sends push promise streams hinting the client with the URL of the additional resources. The client may choose to request those additional resources or not.
This is not correct. When the server sends the client a PUSH_PROMISE, the server will then send the resource content associated with that pushed resource.
The only thing that the client can do is to reset the pushed stream via a RST_STREAM frame, but it may well be that the whole pushed resource is already in-flight so resetting the pushed stream has no effect: the client will receive the pushed resource bytes and may discard them if not interested.
To answer your specific questions:
Yes, a response from the server is always client-initiated. If a client does not send any request to the server, the server cannot push to the client. Even in the case of server push, the client always initiated a stream by making a request, and server pushes are always associated with that "parent" request.
The PUSH_PROMISE frame is an indication from the server to the client of what resource the server is about to push. The client does not "accept" pushes; the server forces them on the client. The only thing the client can do is reset the stream associated with the pushed resource; as I said, the server may have already pushed the whole resource by the time it receives the RST_STREAM frame from the client.
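A toy model of this stream bookkeeping on the client side may make the "parent request" relationship concrete. It is deliberately not a protocol implementation; the class and the stream-id bookkeeping are simplified assumptions for illustration:

```python
class Http2ClientModel:
    """Toy model of HTTP/2 stream bookkeeping on the client side.
    Illustrates only that a pushed stream is always tied to a
    client-initiated parent stream."""

    def __init__(self):
        self.next_stream_id = 1   # client-initiated streams use odd ids
        self.streams = {}         # stream_id -> {"path": ..., "parent": ...}

    def request(self, path):
        sid = self.next_stream_id
        self.next_stream_id += 2
        self.streams[sid] = {"path": path, "parent": None}
        return sid

    def on_push_promise(self, parent_id, promised_id, path):
        # A PUSH_PROMISE is only valid on an existing client-initiated
        # stream; a push with no parent request is a protocol error.
        if parent_id not in self.streams:
            raise ValueError("push without a parent request")
        self.streams[promised_id] = {"path": path, "parent": parent_id}

client = Http2ClientModel()
parent = client.request("/index.html")
client.on_push_promise(parent, promised_id=2, path="/style.css")
```

The model rejects a push that is not anchored to a client-initiated stream, which mirrors the answer above: no request, no push.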

http2: order of push promise data

The spec says:
The server SHOULD send PUSH_PROMISE (Section 6.6) frames prior to sending any frames that reference the promised responses. This avoids a race where clients issue requests prior to receiving any PUSH_PROMISE frames.
For example, if the server receives a request for a document containing embedded links to multiple image files and the server chooses to push those additional images to the client, sending PUSH_PROMISE frames before the DATA frames that contain the image links ensures that the client is able to see that a resource will be pushed before discovering embedded links.
In the example given, I assume it would be okay for the server to send the image data before or after the "document containing embedded links to multiple image files".
Are all of these allowed?
Series A
client requests document
server sends PUSH_PROMISE of images
server sends document
server sends images
Series B
client requests document
server sends PUSH_PROMISE of images
server sends images
server sends document
Series C
client requests document
server sends PUSH_PROMISE of images
server sends images/document concurrently, i.e. frames are interspersed
(In all cases, when the client makes a request for the images, it blocks on them being received locally on the promised stream id.)
All three options are viable for a server.
For example, Jetty implements option C.
However, I would not assume that the client will wait just because it received the PUSH_PROMISE.
For example, if the client urgently needs one of the resources that have been promised, it may cancel the pushed resource and issue a request for that resource with a high priority.
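The SHOULD quoted from the spec can be checked mechanically: every resource referenced by response data must already have been announced in an earlier PUSH_PROMISE. The frame tuples below are a deliberate simplification of real HTTP/2 framing, and the URLs are invented:

```python
def promises_precede_references(frames):
    """Check that every resource referenced by a DATA frame was already
    announced in a PUSH_PROMISE frame. Frames are simplified tuples,
    not real HTTP/2 frames."""
    promised = set()
    for kind, payload in frames:
        if kind == "PUSH_PROMISE":
            promised.add(payload)        # payload: the promised URL
        elif kind == "DATA":
            for ref in payload:          # payload: URLs this data references
                if ref not in promised:
                    return False
    return True

# Series A: promises first, then the document, then the image data.
series_a = [
    ("PUSH_PROMISE", "/img1.png"),
    ("PUSH_PROMISE", "/img2.png"),
    ("DATA", ["/img1.png", "/img2.png"]),   # the document with embedded links
    ("DATA", []), ("DATA", []),             # the pushed images themselves
]
```

Series B and C reorder only the frames after the PUSH_PROMISEs, so they pass the same check; only promising after the referencing DATA would violate the rule.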

ZeroMQ Request/Response pattern with Node.js

I'm implementing a distributed system for a project and am a bit confused as to how I should properly implement the Req/Res pattern. Basically I have a few endpoints that will send a request to a client for processing tasks and responding.
So basically:
Incoming request is received
The endpoint opens a req and res socket type with the broker
Broker receives the request, proxies it to an available worker
The worker responds; the endpoint receives the processed value and reports it back to the original requester.
I've found a decent load balance broker script here: http://zguide.zeromq.org/js:lbbroker. There's also an async client/server pattern I'm interested in implementing: http://zguide.zeromq.org/js:asyncsrv which I might adapt into a load balanced implementation.
My question is perhaps a bit simplistic, but would each endpoint open a new socket on EVERY request, or maintain one open socket across requests? Opening a new socket each time would mean a new connection for every request made to the endpoint.
You'd keep the sockets open; there's no need to close them after each request. And there'd be a single socket on every endpoint (client and server). At the server end you read a request from the socket and write your response back to the socket; zmq takes care of ensuring that the response goes back to the right client.
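A minimal sketch of this with pyzmq (assuming the pyzmq package is available), using an inproc transport and a background thread to stand in for the server. Both sockets are created once and reused across several request/reply cycles; the endpoint name "inproc://work" is made up for the example:

```python
import threading
import zmq  # pyzmq, assumed installed

ctx = zmq.Context.instance()

rep = ctx.socket(zmq.REP)
rep.bind("inproc://work")          # bound once, before any client connects

def server():
    # Serve two requests on the same long-lived REP socket, then stop.
    for _ in range(2):
        msg = rep.recv()
        rep.send(b"done:" + msg)

t = threading.Thread(target=server)
t.start()

req = ctx.socket(zmq.REQ)
req.connect("inproc://work")       # connected once, reused below

replies = []
for task in (b"a", b"b"):
    req.send(task)
    replies.append(req.recv())     # ZeroMQ pairs each reply with this REQ

t.join()
req.close()
rep.close()
ctx.term()
```

The REQ/REP pair enforces strict send/recv alternation, so keeping the sockets open costs nothing and avoids a connect per request.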

ZMQ: Multiple request/reply-pairs

ZeroMQ's Pub/Sub pattern makes it easy for the server to reply to the right client. However, it is less obvious how to handle communication that cannot be resolved within two steps, i.e. protocols where multiple request/reply pairs are necessary.
For example, consider a case where the client is a worker which asks the server for new work of a specific type, the server replies with the parameters of the work, the client then sends the results and the server checks these and replies whether they were correct.
Obviously, I can't just call recv, send, recv, send sequentially and assume that the first and the second recv are from the same client. What would be the idiomatic way to use multiple recv/send pairs without having to handle messages from other clients in between?
Multiple Request/Reply pairs can be made through the use of ZMQ_ROUTER sockets. I recommend using ZMQ_REQ sockets on the clients for bidirectional communication.
If you want to have multiple clients accessing a single server you could use a router socket on the server and request sockets on the clients.
Check out the ZMQ guide's section on this pattern:
http://zguide.zeromq.org/php:chapter3#The-Asynchronous-Client-Server-Pattern
All the clients will interact with the server in the same pattern as in Pub/Sub, except that they will all point at a single ROUTER socket on the server.
The server, on the other hand, will receive three frames for every single message a client sends. These parts represent:
Part 0 = identity of the connection (an automatically generated identity indicating which client it is)
Part 1 = empty delimiter frame
Part 2 = data of the ZMQ message.
Reference:
http://zguide.zeromq.org/php:chapter3#ROUTER-Broker-and-REQ-Workers
The identity can be used to differentiate between clients accessing on a single port. Repacking the message in the same order and responding on the router socket (with a different data frame) will automatically route it to the client who sent the message.
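The envelope handling described above can be sketched without real sockets; the lists below stand in for the multipart frames a ROUTER socket delivers and expects back, and the helper name is invented for the example:

```python
def handle_at_router(frames, process):
    """Sketch of handling one message received on a ROUTER socket.
    `frames` models the multipart message [identity, b"", request_data]
    delivered when a REQ socket talks to a ROUTER socket."""
    identity, empty, data = frames
    if empty != b"":
        raise ValueError("expected the empty delimiter frame a REQ inserts")
    reply = process(data)
    # Repack in the same order with a different data frame; sending this
    # on the ROUTER socket routes it back to the client `identity`.
    return [identity, b"", reply]

out = handle_at_router([b"\x00\x11\x22", b"", b"ping"], lambda d: d.upper())
```

Keeping the identity and delimiter frames untouched is what makes the reply route back to the originating client automatically.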
