Resend frames of message without copy - zeromq

Question:
Can I initialize a new message with part of another message without copying? Modifying the message in place to drop the first few frames would also work.
Scenario:
I'm using the ROUTER-REQ pattern for a load balancing implementation. The REQ end sends a message to the ROUTER which prepends the identity and delimiter frame to the message. After my application uses that first frame to push the identity of the worker into an idle list, it needs to forward the final frame(s) of the message onto a PUB socket. Those final frames may be very large, and after extracting that first identity frame, I no longer need the rest of the received message, only to forward it on. This seems like a good place for zero-copy; I just need to drop those first two frames which were inserted by the ROUTER.

I haven't tried it, but I think you can. I would try it with zmq_msg_copy; according to the man page:
The implementation may choose not to physically copy the message content, rather to share the underlying buffer between src and dest.
I don't know why the man page says "may choose"; looking into the code, I think it does so all the time.
When I talk about a message here I mean a "message part", so when you build your outgoing multipart message you just zero-copy the parts (frames) you need.
Which binding do you use?
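For illustration, here is a minimal sketch of that forwarding step in Go, assuming the pebbe/zmq4 binding (adapt it to whichever binding you actually use). The binding hands you the frames as byte slices, so this only shows the "drop the first two frames and forward the rest" logic; the zero-copy behaviour discussed above (zmq_msg_copy sharing the underlying buffer) happens inside libzmq's C API.

package main

import zmq "github.com/pebbe/zmq4"

func main() {
    router, _ := zmq.NewSocket(zmq.ROUTER)
    defer router.Close()
    router.Bind("tcp://*:5555")

    pub, _ := zmq.NewSocket(zmq.PUB)
    defer pub.Close()
    pub.Bind("tcp://*:5556")

    var idle [][]byte // identities of workers that are ready again

    for {
        // frames = [identity, empty delimiter, payload frame(s)...]
        frames, err := router.RecvMessageBytes(0)
        if err != nil || len(frames) < 3 {
            continue
        }
        idle = append(idle, frames[0]) // push the worker's identity onto the idle list

        // Forward only the payload frames; identity and delimiter are dropped.
        parts := make([]interface{}, 0, len(frames)-2)
        for _, f := range frames[2:] {
            parts = append(parts, f)
        }
        pub.SendMessage(parts...)
    }
}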

Related

Jmeter - Websocket plugin - unable to find solution to constantly read incoming messages

Good afternoon.
I have implemented websockets in my project using the plugin "WebSocket Samplers by Peter Doornbosch" in Jmeter.
So far the requirements were simply to send and receive some payload, or simply send and forget.
However, I now have a new requirement where the server will constantly send WebSocket messages back to the client at a 5-second interval.
For example: I send ABC, XYZ, ABD every 3 seconds, and I need to read XYC, YTZ every 5 seconds; both should happen simultaneously.
I'm unable to use a Parallel Controller, as each item within the controller is a separate thread and thus I would lose the WebSocket connection for the second one.
Is there any way I can achieve this using some listeners or something?
Thanks for your response in advance.
As per documentation:
Fragmentation
WebSocket messages may be fragmented into several frames. In such cases the first frame is an ordinary text or binary frame, but it will have the final bit cleared. The succeeding frames will be continuation frames (whether they are text or binary is inferred by the first frame) and the last continuation frame will have the final bit set.
The plugin supports continuation frames, but as the plugin is frame-oriented, you'll have to read them yourself. In cases where the number of fragments is known beforehand, this is as easy as adding an extra WebSocketReadSampler for each continuation frame you expect.
If the number of continuation frames is not known, you need to create a loop to read all the continuation frames. For this purpose, the plugin provides a new JMeter variable called websocket.last_frame_final that indicates whether the last frame read was final. This enables you to write a simple loop with a standard JMeter While Controller; use the expression ${__javaScript(! ${websocket.last_frame_final},)} as condition. With a JMeter If Controller, the condition can be simplified to ! ${websocket.last_frame_final} because that controller automatically interprets the condition as JavaScript. See the sample Read continuation frames.jmx test plan for examples of using the While or the If controller to read continuation frames.
If you are unsure whether continuation frames are sent by your server or how much, switch on debug logging: samplers reading a frame will log whether the received frame is a "normal" single frame, a non-final frame (i.e. 1st fragment), a continuation frame or a final continuation frame (last fragment).

Spring-integration: keep a context for a Message through a chain

I am using spring-integration, and I have messages that go through an int:chain with multiple elements: int:service-activator, int:transformer, etc. In the end, a message is sent to another app's REST endpoint. There is also an errorHandler that will save any Exception in a text file.
For administration purposes, I would like to keep some information about what happened in the chain (e.g. "this DB call returned this", "during this transformation, this rule was applied", etc.). This would be equivalent to a log file, but bound to a Message. Of course there is already a logger, but in the end I need to create (either after the REST call is made, or when an error occurs) a file for this specific Message with that data.
I was wondering if there was some kind of "context" for the Message that I could access from any part of the chain, and where I could store stuff. I didn't find anything in the official documentation, but I'm not really sure what to look for.
I've been thinking about putting it all in the Message itself, but:
It's an immutable object, so I would need to rebuild it each time I want to add something to its header (or the payload).
I wouldn't be able to retrieve any new data from the error handler in case of Exception, because it takes the original message.
I can't really add it to the payload object because some native transformers/service-activators are directly using it (and that would also mean rewriting a lot of code ...)
I've also been thinking about some kind of "thread-bound" bean that would act as a context for each Message, but I see too many problems arising from this.
Maybe I'm wrong about some of these ideas. Anyway, I just need a way to keep data through multiple elements of a Spring Integration chain and also be able to access it in the error handler.
Add a header, e.g. a map or list, and add to it in each stage.
The framework does something similar when message history is enabled.

Golang http write response without waiting to finish

I'm building an application that builds a pdf file and returns it to the client whenever it receives a request.
Since some of these pdf files might take some time to generate, I would like to periodically send some sort of status update back to the client while it is running.
When it's finished building the pdf file, it should be returned to the client as well.
Something akin to:
func buildReport(writer http.ResponseWriter, request *http.Request) {
    // build the pdf file
    for { // for example purposes only
        writer.Write([]byte("building. Please wait."))
    }
    pdf.OutputFileAndClose("report.pdf")
    // set the header so that the client knows it's a PDF
    writer.Header().Set("Content-Type", "application/pdf")
    http.ServeFile(writer, request, "report.pdf")
}

func main() {
    http.HandleFunc("/", buildReport)
    http.ListenAndServe(":8081", nil)
}
Setting the header might not work, as the response headers can only be sent once (they go out with the first call to Write).
TL;DR: it cannot be implemented that way. You need to:
Provide an API that requests the PDF creation and queues a PDF-creation job in a task queue (so that too many PDF-creation requests won't exhaust the HTTP server's worker pool).
Provide an API that allows you to check where you are with the PDF rendering (I am assuming the job can provide interim status). This is going to be polled by the client on a regular basis.
Provide an API to pull the PDF once it is ready.
Hope this helps and best of luck with your project.
This is by no means comprehensive, but a reasonable example of how you might construct your API (which needs to be asynchronous, as the previous respondent pointed out) can be found here: https://www.adayinthelifeof.nl/2011/06/02/asynchronous-operations-in-rest/
The job queue model is a pretty common one. I would recommend you also write a basic API binding library (you'd want this for your own testing purposes in any case) so that your users can understand how you intend them to use the API, and in writing it, you'll get a better sense of how asynchronous REST interactions feel from the end user side.
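To make that model concrete, here is a rough Go sketch of the three endpoints. The paths, the in-memory job store and the fake rendering step are all made up for illustration; a real service would persist jobs and actually write the PDFs.

package main

import (
    "fmt"
    "net/http"
    "sync"
    "time"
)

type job struct {
    status string // "queued" or "done"
    file   string // path of the finished PDF
}

var (
    mu    sync.Mutex
    jobs  = map[string]*job{}
    seq   int
    tasks = make(chan string, 100) // bounded queue so requests can't flood the workers
)

func worker() {
    for id := range tasks {
        time.Sleep(5 * time.Second) // stand-in for the real PDF rendering
        mu.Lock()
        jobs[id].status = "done"
        jobs[id].file = "report-" + id + ".pdf" // nothing is actually written in this sketch
        mu.Unlock()
    }
}

// POST /reports        -> queue a rendering job, return its id
// GET  /reports/status -> poll the job status
// GET  /reports/fetch  -> download the PDF once the status is "done"
func createJob(w http.ResponseWriter, r *http.Request) {
    mu.Lock()
    seq++
    id := fmt.Sprint(seq)
    jobs[id] = &job{status: "queued"}
    mu.Unlock()

    select {
    case tasks <- id:
        w.WriteHeader(http.StatusAccepted)
        fmt.Fprintln(w, id)
    default:
        http.Error(w, "too many pending reports", http.StatusServiceUnavailable)
    }
}

func status(w http.ResponseWriter, r *http.Request) {
    mu.Lock()
    j, ok := jobs[r.URL.Query().Get("id")]
    mu.Unlock()
    if !ok {
        http.NotFound(w, r)
        return
    }
    fmt.Fprintln(w, j.status)
}

func fetch(w http.ResponseWriter, r *http.Request) {
    mu.Lock()
    j, ok := jobs[r.URL.Query().Get("id")]
    mu.Unlock()
    if !ok || j.status != "done" {
        http.Error(w, "not ready yet", http.StatusConflict)
        return
    }
    w.Header().Set("Content-Type", "application/pdf")
    http.ServeFile(w, r, j.file)
}

func main() {
    for i := 0; i < 4; i++ { // small fixed worker pool
        go worker()
    }
    http.HandleFunc("/reports", createJob)
    http.HandleFunc("/reports/status", status)
    http.HandleFunc("/reports/fetch", fetch)
    http.ListenAndServe(":8081", nil)
}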
Contrary to what others have said, what you want is in fact
directly possible, but it requires fulfillment of two preconditions:
HTTP/1.1 or above.
You'll be sending custom content to the clients (not PDF data
directly), and they must be prepared to accept and parse it.
You can then employ the so-called "chunked" payload encoding specifically
invented to handle "streamed" downloads where the server does not know how
many bytes it's about to send.
So you may invent some creative kind of payload where you first periodically
stream a "no op" / "progress" marker and then the actual payload.
Say, while the file is being prepared you periodically send a line of text
reading "PROCESSING" + LF; then, when the result is ready, you send
a line of text "READY " SIZE + LF, where SIZE is the size, in bytes,
of the immediately following PDF document. After the document is streamed,
the server signals the end of data.
Hence the stream would look like
PROCESSING
PROCESSING
…
PROCESSING
READY 8388608
%PDF-1.3
…
%%EOF
The clients have to be able to parse this information from the stream
they're receiving and have a simple FSM in place to switch from state to
state as they fetch your stream.
The server has to make sure it flushes the stream after each "informational" line otherwise the whole thing would not be "interactive".
If you have a good idea about the overall state of the processing of the
document, each "status update" line could include the percentage of the work done, as in "PROCESSING NN" + LF, where NN is that percentage.
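For completeness, a minimal Go sketch of that scheme. The PROCESSING/READY lines are the ad-hoc protocol invented above, not anything standard; the important parts are http.Flusher, without which the status lines would sit in the server's buffer, and the fact that no Content-Length is set, so Go falls back to chunked transfer encoding.

package main

import (
    "fmt"
    "net/http"
    "os"
    "time"
)

func buildReport(w http.ResponseWriter, r *http.Request) {
    flusher, ok := w.(http.Flusher)
    if !ok {
        http.Error(w, "streaming unsupported", http.StatusInternalServerError)
        return
    }
    // Custom payload, so don't claim it is a plain PDF.
    w.Header().Set("Content-Type", "application/octet-stream")

    done := make(chan []byte)
    go func() {
        time.Sleep(10 * time.Second) // stand-in for the real PDF build
        pdf, _ := os.ReadFile("report.pdf")
        done <- pdf
    }()

    for {
        select {
        case pdf := <-done:
            fmt.Fprintf(w, "READY %d\n", len(pdf))
            w.Write(pdf) // the client reads exactly len(pdf) bytes after the READY line
            return
        case <-time.After(2 * time.Second):
            fmt.Fprintln(w, "PROCESSING")
            flusher.Flush() // push the status line out immediately
        }
    }
}

func main() {
    http.HandleFunc("/", buildReport)
    http.ListenAndServe(":8081", nil)
}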

Progress bar on data transfer

I am sending files (up to 100 MB on my Android handheld) using the Channel API.
I decided to create a handler to update the progress of the transfer so that the user is aware of the progress.
I use the Message API to send the file size to my handheld, and I update the progress by checking the size of the file every x milliseconds.
The problem is that, first, I don't know if that's a good way of doing what I want, and second, because it's asynchronous, I have to wait until the file size has been correctly received in onMessageReceived before sending the file.
If you are using the Channel API, you can use the low-level version of the transfer (using an output stream and an input stream); then, on the sender side, you can update your progress bar with the amount that you are writing to the output stream. If you are using the sendFile() method, you don't have any view into the progress on the sender side, so you need to report it back using, say, the Message API as you are doing. Instead of doing that every x milliseconds, you may decide to make it a bit smarter: if you have the size of the whole file, you probably wouldn't want to send a message if the visual change in the progress bar is not going to be noticeable; in other words, try to reduce the number of communications as much as possible.
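The underlying pattern (count the bytes as the sender writes them, and only report when the change would actually be visible) is independent of the wearable APIs. Below is a small Go sketch, purely to illustrate that idea; the progressWriter type and the io.Discard destination are stand-ins, not part of the Channel API.

package main

import (
    "fmt"
    "io"
    "os"
    "strings"
)

type progressWriter struct {
    w        io.Writer // stand-in for the channel's output stream
    total    int64     // expected size in bytes
    written  int64
    lastPct  int
    onChange func(pct int) // e.g. send a message / update a progress bar
}

func (p *progressWriter) Write(b []byte) (int, error) {
    n, err := p.w.Write(b)
    p.written += int64(n)
    if pct := int(p.written * 100 / p.total); pct != p.lastPct {
        p.lastPct = pct
        p.onChange(pct) // only fire when the visible value actually changes
    }
    return n, err
}

func main() {
    src := strings.NewReader(strings.Repeat("x", 1<<20)) // pretend 1 MB file
    dst := &progressWriter{
        w:        io.Discard,
        total:    1 << 20,
        onChange: func(pct int) { fmt.Fprintf(os.Stderr, "\r%3d%%", pct) },
    }
    io.Copy(dst, io.LimitReader(src, 1<<20)) // LimitReader avoids the WriteTo fast path, so writes go through dst in chunks
    fmt.Fprintln(os.Stderr)
}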

How do you know when all the data has been received by the Winsock control that has issued a POST or GET to a Web Server?

I'm using the VB6 Winsock control. When I do a POST to a server I get back the response as multiple DataArrival events.
How do you know when all the data has arrived?
(I'm guessing it's when the Winsock_Close event fires)
I have used VB6 Winsock controls in the past, and what I did was format my messages in a certain way to know when all the data has arrived.
Example: Each message starts with a "[" and ends with a "]".
"[Message Text]"
When data comes in from the DataArrival event, check for the end-of-message marker "]". If it is there, you have received at least one whole message, and possibly the start of a new one. If part of a message is still on its way, store the data you have so far in a form-level variable and append to it the next time the DataArrival event fires.
In HTTP, you have to parse and analyze the reply data that the server is sending back to you in order to know how to read it all.
First, the server sends back a list of CRLF-delimited header lines, which are terminated by a blank CRLF-delimited line by itself. You then have to look at the actual values of the 'Content-Length' and 'Transfer-Encoding' headers to know how to read the remaining data.
If there is no 'Transfer-Encoding' header, or if it does not contain a 'chunked' item in it, then the 'Content-Length' header specifies how many remaining bytes to read. But if the 'Transfer-Encoding' header contains a 'chunked' item, then you have to read and parse the remaining data in chunks, one at a time, in order to know when the data ends (each chunk reports its own size, and the last chunk reports a size of 0).
And no, you cannot rely on the connection being closed after the reply has been sent, unless the 'Connection' header explicitly says 'close'. For HTTP 1.1, that header is usually set to 'keep-alive' instead, which means the socket is left open so the client can send more requests on the same socket.
Read RFC 2616 for more details.
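For illustration, here are the same framing rules written out in Go rather than VB6. This is a sketch only: it ignores chunk extensions and trailer headers, and a real client should use a proper HTTP library.

package httpread

import (
    "bufio"
    "io"
    "net"
    "strconv"
    "strings"
)

// readBody reads one HTTP reply from conn: the status line and the headers up
// to the blank line, then the body, framed either by Content-Length or by
// chunked Transfer-Encoding, as described above.
func readBody(conn net.Conn) ([]byte, error) {
    r := bufio.NewReader(conn)

    chunked, length := false, -1
    for {
        line, err := r.ReadString('\n') // status line and CRLF-delimited headers
        if err != nil {
            return nil, err
        }
        line = strings.TrimRight(line, "\r\n")
        if line == "" {
            break // blank line: end of the headers
        }
        low := strings.ToLower(line)
        switch {
        case strings.HasPrefix(low, "transfer-encoding:") && strings.Contains(low, "chunked"):
            chunked = true
        case strings.HasPrefix(low, "content-length:"):
            length, _ = strconv.Atoi(strings.TrimSpace(line[len("content-length:"):]))
        }
    }

    if chunked {
        var body []byte
        for {
            sizeLine, err := r.ReadString('\n') // each chunk starts with its size in hex
            if err != nil {
                return nil, err
            }
            size, err := strconv.ParseInt(strings.TrimSpace(sizeLine), 16, 64)
            if err != nil {
                return nil, err
            }
            if size == 0 {
                return body, nil // a zero-sized chunk marks the end of the data
            }
            chunk := make([]byte, int(size)+2) // chunk data plus its trailing CRLF
            if _, err := io.ReadFull(r, chunk); err != nil {
                return nil, err
            }
            body = append(body, chunk[:size]...)
        }
    }

    if length < 0 {
        return io.ReadAll(r) // no framing information: read until the server closes
    }
    body := make([]byte, length)
    _, err := io.ReadFull(r, body)
    return body, err
}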
No, the Close event doesn't fire when all the data has arrived, it fires when you close the connection. It's not the Winsock control's job to know when all the data has been transmitted, it's yours. As part of your client/server communication protocol implementation, you have to tell the client what to expect.
Suppose your client wants the contents of a file from the server. The client doesn't know how much data is in the file. The exchange might go something like this:
client sends request for the data in the file
the server reads the file, determines the size, attaches the size (let's say in 4 bytes) to the beginning of the data so the client knows how much to expect, and starts sending it
your client code knows to strip the first 4 bytes off any data that arrives after a file request and store it as the amount of data that is to follow, then accumulate the subsequent data, through any number of DataArrival events, until it has that amount
Ideally, the server would append a checksum to the data as well, and you'll have to implement some sort of timeout mechanism, figure out what to do if you don't get the expected amount of data, etc.
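As an illustration of that size-prefix scheme (in Go rather than VB6, and leaving out the checksum and timeout handling mentioned above), the receiving side boils down to reading a fixed 4-byte length and then exactly that many payload bytes:

package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "io"
)

// readMessage reads one length-prefixed message: a 4-byte big-endian size
// followed by exactly that many payload bytes.
func readMessage(r io.Reader) ([]byte, error) {
    var size uint32
    if err := binary.Read(r, binary.BigEndian, &size); err != nil { // the first 4 bytes are the length
        return nil, err
    }
    buf := make([]byte, size)
    if _, err := io.ReadFull(r, buf); err != nil { // keep accumulating until it is all there
        return nil, err
    }
    return buf, nil
}

func main() {
    // Fake "server" output: a 4-byte length prefix followed by the payload.
    payload := []byte("contents of the requested file")
    var frame bytes.Buffer
    binary.Write(&frame, binary.BigEndian, uint32(len(payload)))
    frame.Write(payload)

    msg, err := readMessage(&frame)
    fmt.Println(string(msg), err)
}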
