I understand that the small message size limit is predefined as 30 in the source code (C++), but when I change it to, say, 512, the message is still parsed out fine. However, when I try to send it, it does not show up on the wire when I look at the traffic with Wireshark (data length = 0).
Any suggestions? My messages are pretty much between 100 and 200 bytes. If I cannot change the limit, how can I send my messages via the pub-sub pattern?
Thanks a lot for any suggestion!
Why do you care at all? The small message limit is completely transparent to the user; it only controls whether ZeroMQ stores a message's data inline in the zmq_msg_t structure or allocates it on the heap.
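For example, a 150-byte message goes out over PUB/SUB without touching that constant at all. A minimal sketch using the libzmq C API (the endpoint and topic below are placeholders):

#include <zmq.h>
#include <string>

int main() {
    void *ctx = zmq_ctx_new();
    void *pub = zmq_socket(ctx, ZMQ_PUB);
    zmq_bind(pub, "tcp://*:5556");

    // A ~150-byte payload: above the small-message threshold ZeroMQ simply
    // stores the data on the heap instead of inline; nothing else changes.
    std::string msg = "topic " + std::string(144, 'x');
    zmq_send(pub, msg.data(), msg.size(), 0);

    zmq_close(pub);
    zmq_ctx_term(ctx);
    return 0;
}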
Best way to send & receive an array over websockets
On the client side I'm using JavaScript to send the data.
I'm sending to a microcontroller (ESP8266) that is programmed in C++ using the WebSocket library with the Arduino IDE.
At the moment I'm sending a variable which I build up on the client side.
It is then sent to the microcontroller and received into the payload buffer.
This is what I am sending from the client:
#,tank,pond,1537272000,1537272000,Normal,4789,12
I receive it here in the code:
case WStype_TEXT: Serial.printf("[%u] get Text: %s\n", num, payload);
This is the result of what I receive:
[0] here it is: #,tank,pond,1537272000,1537272000,Normal,4789,12
I am using a hash (#) to mark the start of the data.
I have been googling and searching forums for days but can't work out the best way to do this.
What is the fastest, most elegant way to split this up into different variables so that they can be compared?
Briefly, I prefer JSON. You can use the library below for ESP8266:
https://github.com/bblanchon/ArduinoJson
I think your websocket server is written in a high-level language, so neither JSON parsing nor composing will be an issue for you.
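Following that suggestion, here is a minimal sketch assuming ArduinoJson 6 and a client that sends a JSON object (for example {"source":"tank","dest":"pond","start":1537272000,"end":1537272000,"status":"Normal","value":4789,"count":12}) instead of the #-delimited string; the field names and the handleTextFrame() helper are made up for illustration:

#include <Arduino.h>
#include <ArduinoJson.h>

// Call this from the WStype_TEXT case of the websocket event handler.
void handleTextFrame(uint8_t num, uint8_t *payload, size_t length) {
  StaticJsonDocument<256> doc;  // fixed-size document, enough for ~200-byte messages
  DeserializationError err =
      deserializeJson(doc, reinterpret_cast<const char *>(payload), length);
  if (err) {
    Serial.printf("[%u] JSON parse failed: %s\n", num, err.c_str());
    return;
  }
  const char *status = doc["status"];   // "Normal"
  long start = doc["start"];            // 1537272000
  int value = doc["value"];             // 4789
  Serial.printf("[%u] status=%s start=%ld value=%d\n", num, status, start, value);
}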
Good luck.
I am looking to build an IM app using the Sinch SDK, but I wanted to know what the maximum number of characters I can send in a message is. I can't see that in the documentation.
There isn't a fixed limit on how many characters can be included in a message. Messages will be split into multiple messages if the size is too large; however, we discourage sending very large messages, such as encoding images into headers…
I want to know if it's possible to print something while an Ajax request is being processed. If so, please let me know, because I'm facing a problem where I want to print something between the time the Ajax request is sent and the time its response arrives.
Specifically, I want to read a CSV of 3000+ rows, and during this process I want to display the number of rows that have been read and copied into another CSV.
So I want to show something like a progress bar: out of 3000 rows, 50 have been copied completely, and so on, continuing until all 3000 rows are done.
If there is a way, please let me know!
You can use XHR progress events (if your browser supports them):
https://developer.mozilla.org/en-US/docs/DOM/XMLHttpRequest/Using_XMLHttpRequest#Monitoring_progress
https://dvcs.w3.org/hg/progress/raw-file/tip/Overview.html#interface-progressevent
How to check in JavaScript if XMLHttpRequest object supports W3C Progress Events?
But your question is about more than simply "I have 1000 bytes out of a possible 9999": you want to know exactly how many rows you have read up to that point.
I think you can read xhr.responseText on each progress event, but only if the request is parseable when incomplete (such as plain text), and also when not using an async request - https://stackoverflow.com/a/5319647/319878.
But for libraries like zip.js that use non-text binary data with arraybuffers when requesting zip content, this will not work, as these requests must be async (http://www.w3.org/TR/XMLHttpRequest/#the-responsetype-attribute). If you try to use xhr.responseText in this case, you will get an error like this (example from Firefox):
[Exception... "An attempt was made to use an object that is not, or is no longer, usable"
code: "11"
nsresult: "0x8053000b (InvalidStateError)"
location: "http://localhost/xhr-progress-test/test-xhr-on-progress.html Line: 1075"]
So for zipped content (using zip.js) I think you would have to wait for all of the response to arrive before using it. This is not what you want.
Much simpler would be to modify your server code to return a selection of rows at a time. E.g. rows 1-100, then 101-200, etc.
You could try one of several approaches to solve this problem:
JSONP Callback:
This is an apt solution for your problem. You can batch your response into chunks of 50 rows of CSV data. Each JSONP call passes a start parameter and gets 50 rows of data back, so to fetch 3000 rows you'll make 60 callbacks. The server returns rows start through start+49 (50 in total), and when start == 1 it can include the header too. On each successful callback you increase the progress bar by a proportionate amount.
Pros: Easy to show progress on no. of rows.
Cons: Data to return in the response has to be in JSON format; Need to implement callback function.
Calculating bytes:
Show progress based on the number of bytes transferred. One example is available on Stack Overflow. I'm not sure whether it would apply in your case, because you want to show progress based on the number of rows.
Pros: Easy to show progress on no. of bytes.
Cons: May not apply in your case.
Loading icon or bar:
This is the easiest approach of all. Just show a loading image or a continuous progress bar which keeps on repeating.
Pros: Easy to show continuous progress.
Cons: Just shows the user that something is happening, nothing on percentage basis.
There could be other solutions too based on your situation, so please take this as a starting point and explore further.
Since it does not seem to be possible to query/inspect the underlying ZeroMQ queues/buffers of a socket to see how full they are, is there some way to detect when a message is dropped due to full buffers on a publisher socket when it is sent/queued?
For example, if the publisher queue is full, the zmq_send operation will simply drop the message.
Basically, what I want to achieve is a way to detect situations where the queues are getting stressed and/or full, so that I can (later on) tune the solution to work better. One alternative would be to add a sequence number to each message and do a simple gap check in the subscriber, but I could never be sure whether a message was lost because of full buffers in the publisher or for some other reason.
There is an example for this in the ZeroMQ Guide (which you should read and digest if you want to use 0MQ happily): http://zguide.zeromq.org/page:all#Slow-Subscriber-Detection-Suicidal-Snail-Pattern
The mechanism is, as you answered yourself, to add a sequence number to each message and allow the subscriber to detect gaps and take appropriate action. For most pub-sub scenarios you can also raise the default HWM, which is 1,000, to something much higher; it depends on your average message size.
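A minimal sketch of that approach using the libzmq C API (the HWM value, endpoint, and topic below are just examples):

#include <zmq.h>
#include <cstdint>
#include <string>

int main() {
    void *ctx = zmq_ctx_new();
    void *pub = zmq_socket(ctx, ZMQ_PUB);

    int hwm = 100000;  // raise the default send HWM of 1,000
    zmq_setsockopt(pub, ZMQ_SNDHWM, &hwm, sizeof(hwm));  // must be set before bind
    zmq_bind(pub, "tcp://*:5556");

    for (uint64_t seq = 0; seq < 1000000; ++seq) {
        // Embed a sequence number so the subscriber can detect gaps.
        std::string msg = "topic " + std::to_string(seq) + " payload";
        zmq_send(pub, msg.data(), msg.size(), 0);
    }

    zmq_close(pub);
    zmq_ctx_term(ctx);
    return 0;
}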
I know this is an old post but here is what I did when recently facing the same issue.
I opted to use DEALER/ROUTER and set the ZMQ_SNDHWM option to 1. I also provided a timeout parameter on each zmq_send(). The timeout could be anything between 10 ms and 3 seconds, depending on your scenario (a local or a remote send).
If the message is not sent within the timeout, or the send buffer is full, zmq_send() will return false. That enabled me to set up a retry queue in front of ZeroMQ. I know it's not a perfect solution, but it worked just fine for me. What puzzles me, though, is the meaning of the true/false returned by the DEALER socket's zmq_send(). I have not been able to find the answer to that question; whether it indicates that the message has been buffered or that it has been delivered to the ROUTER has eluded me. In my case I got the results I needed anyway.
Just for the record, this was done using NetMQ, but I guess it applies to ZeroMQ as well.
I do agree with James, though. ZeroMQ (and NetMQ) should at least provide a way to inspect the queue (and get the messages out), and also a way to tell the various sockets not to drop messages. The best option would be to send messages that were not delivered in a timely fashion, according to the configured options, to some sort of dead-letter queue, which could then be handled separately.
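For reference, here is roughly the same idea using the libzmq C API (the answer above used NetMQ); the endpoint, timeout value, and retry handling are placeholder assumptions:

#include <zmq.h>
#include <cstdio>
#include <cstring>

int main() {
    void *ctx = zmq_ctx_new();
    void *dealer = zmq_socket(ctx, ZMQ_DEALER);

    int hwm = 1;           // keep at most one outstanding message in the send queue
    int timeout_ms = 100;  // zmq_send() blocks at most this long when the queue is full
    zmq_setsockopt(dealer, ZMQ_SNDHWM, &hwm, sizeof(hwm));
    zmq_setsockopt(dealer, ZMQ_SNDTIMEO, &timeout_ms, sizeof(timeout_ms));
    zmq_connect(dealer, "tcp://localhost:5557");

    const char *msg = "work item";
    if (zmq_send(dealer, msg, strlen(msg), 0) == -1 && zmq_errno() == EAGAIN) {
        // The message could not be queued within the timeout: park it in a
        // retry/dead-letter queue instead of silently losing it.
        printf("send timed out, requeueing\n");
    }

    zmq_close(dealer);
    zmq_ctx_term(ctx);
    return 0;
}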
Hi all ActiveMQ experts!
I had a look with Wireshark at what's happening under the hood when the ActiveMQ "/examples" producer sends messages, and it revealed that every TextMessage shorter than 1000 bytes is padded with spaces (' ', hex 0x20) until it fills exactly 1000 bytes.
(This is using ActiveMQ's "native" transport, TCP/OpenWire.)
I wonder what is doing that?
(I presume the ActiveMQ JMS client implementation.)
But WHY?
And most importantly, is there a way to optimize this so that sending short messages does not incur almost 1 KB of overhead in unnecessary spaces?
Thank you!
cheers,
O.K.
You are correct that the client implementation is adding the spaces to the messages (see the createMessageText method). This is done simply so that the messages are a uniform size. You can either change the size when running the example (use the arg -DmessageSize=<size>), or you can change the ProducerTool so that it does not pad each message, simply by editing the code and running it again (Ant will compile it before running it). I've done this many times to remove the spaces altogether, to add extra text to the message, to add message headers, to format the message body as XML, etc.
Bruce