Azure IoT Hub message fails if larger than 5000 bytes - ESP32

I have been testing the SDK with code based on the Azure-ESP-Starter code.
When I try to send a larger message body, the message fails. I am using an ESP32-WROOM with the SDK installed in VS Code.
A message length of 5438 bytes transmits OK, but 5458 fails.
Looking for a reason, I noticed that in the TCP settings the default send buffer size is 5744.
If I change this to 15744 I can send a 10,000-byte message.
The SDK seems to be failing when trying to fragment and send larger messages.
At first I thought it might have something to do with the TLS maximum outgoing fragment length setting (default 4096), but increasing this did not resolve the problem. The TCP buffer setting does allow larger messages.
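For reference, I believe these are the relevant sdkconfig entries (names taken from ESP-IDF menuconfig; they may differ between IDF versions, and 15744 is just the value that happened to work for me):

# Component config -> LWIP -> TCP: default send buffer size (bytes)
CONFIG_LWIP_TCP_SND_BUF_DEFAULT=15744
# Component config -> mbedTLS: maximum outgoing TLS fragment length (bytes)
CONFIG_MBEDTLS_SSL_OUT_CONTENT_LEN=4096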
I thought that Azure allows a 256 KB message. Has anyone else noticed this issue?

The IoT Hub quotas documentation lists the limits associated with the different service tiers (S1, S2, S3, and F1). Device-to-cloud messages can be at most 256 KB and can be grouped in batches to optimize sends; batches can also be at most 256 KB.
I would suggest trying a different Azure IoT library, a different device, or another protocol to see whether you can reproduce the same issue.
Throttling details
IoT Hub measures message size in a protocol-agnostic way, considering only the actual payload. The size in bytes is calculated as the sum of the following values:
The body size in bytes.
The size in bytes of all the values of the message system properties.
The size in bytes of all user property names and values.
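For example (illustrative numbers): a message with a 5,000-byte body, 300 bytes of system property values, and 100 bytes of user property names and values is metered by IoT Hub as 5,000 + 300 + 100 = 5,400 bytes.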
device-to-cloud messages can be at most 256 KB
The total message size, including the enrichments, can't exceed 256 KB. If a message size exceeds 256 KB, the IoT Hub will drop the message. You can use IoT Hub metrics to identify and debug errors when messages are dropped. For example, you can monitor the telemetry messages incompatible (d2c.telemetry.egress.invalid) metric in the routing metrics. To learn more, see Monitor IoT Hub.

Related

Pusher error: The data content of this event exceeds the allowed maximum (10240 bytes)

I'm working on a project where I want to live-display users' points whenever they gain or lose points.
I have an API that sends an event to the frontend:
public function test(Request $request) {
    $message = $request->message;
    $users = User::all()->where('company_id', $message);
    event(new MyEvent([$users]));
}
Whenever the API is called, I receive the following error:
Illuminate\Broadcasting\BroadcastException: Pusher error: The data content of this event exceeds the allowed maximum (10240 bytes).
How can this be solved?
The simple answer is to reduce the payload size - Pusher has a size limit on the body of an event - https://pusher.com/docs/channels/library_auth_reference/rest-api/#post-event-trigger-an-event
They also list some strategies for reducing payload size, such as:
sending multiple small events instead of one large event (chunking)
using compression to reduce payload size
sending the client a link from which it downloads the content, instead of transmitting it via Channels.
See https://support.pusher.com/hc/en-us/articles/4412243423761-What-Is-The-Message-Size-Limit-When-Publishing-an-Event-in-Channels- for more info
Source: https://support.pusher.com/hc/en-us/articles/4412243423761-What-Is-The-Message-Size-Limit-When-Publishing-an-Event-in-Channels-
The message size limit is 10KB.
There are several approaches to work around this limit:
Chunking. One approach is to split your large message into smaller chunks before publishing, then recombine them on receipt. This repository shows an example of a chunking protocol: https://github.com/pusher/pusher-channels-chunking-example (see also the sketch after this list).
Linking. Instead of sending a large message via Pusher Channels, you could store that message elsewhere and just send your clients a link to that content.
Compression, e.g.
Compression algorithms like gzip.
Removing unnecessary characters. Whitespace is one example.
Removing/shortening keys. Instead of {"user":"jim","email":"jim@example.com"}, you could use {"u":"jim","e":"jim@example.com"}, or ["jim","jim@example.com"].
A dedicated cluster. If none of the above are suitable and you really need to send larger single messages, we can set up a dedicated cluster. If you are interested in provisioning a dedicated cluster as part of an enterprise package, please contact Pusher sales.
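For illustration, here is a minimal chunking sketch in Python using the official pusher server library, loosely following the idea in the linked example repo (the event naming, chunk size and credentials are placeholders, not anything Pusher prescribes):

import json
import uuid
import pusher  # official Pusher HTTP server library

client = pusher.Pusher(app_id="APP_ID", key="KEY",
                       secret="SECRET", cluster="CLUSTER")

def trigger_chunked(channel, event, payload, chunk_size=8000):
    # Serialise once, then split into pieces that stay under the 10 KB event
    # limit, leaving headroom for JSON escaping and the envelope fields below.
    data = json.dumps(payload)
    msg_id = str(uuid.uuid4())
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for index, chunk in enumerate(chunks):
        client.trigger(channel, "chunked-" + event, {
            "id": msg_id,                       # lets the client group chunks of one message
            "index": index,                     # order for reassembly
            "chunk": chunk,
            "final": index == len(chunks) - 1,  # tells the client to reassemble and parse
        })

On the client side you buffer chunks by id and, once final arrives, concatenate them in index order and JSON-parse the result.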

HTTP2 ERR_CONNECTION_CLOSED (Too much overhead)

We are developing a project using Angular on the front end and Spring on the backend. Nothing new. But we have set up the backend to use HTTP/2, and from time to time we run into weird problems.
Today I started playing with "Network Log Export" from Chrome and I found this interesting piece of information in the HTTP2_SESSION line of the log.
t=43659 [st=41415] HTTP2_SESSION_RECV_GOAWAY
--> active_streams = 4
--> debug_data = "Connection [263], Too much overhead so the connection will be closed"
--> error_code = "11 (ENHANCE_YOUR_CALM)"
--> last_accepted_stream_id = 77
--> unclaimed_streams = 0
t=43659 [st=41415] HTTP2_SESSION_CLOSE
--> description = "Connection closed"
--> net_error = -100 (ERR_CONNECTION_CLOSED)
t=43661 [st=41417] HTTP2_SESSION_POOL_REMOVE_SESSION
t=43661 [st=41417] -HTTP2_SESSION
It looks like the root cause of the ERR_CONNECTION_CLOSED is that the server decides there is too much overhead from the same client and closes the connection.
The question is: can we tune the server to accept overhead up to a certain limit? How? I believe this is something we should be able to tune in Spring or Tomcat or somewhere in that stack.
Cheers
Ignacio
The overhead protection was put in place in response to a collection of CVEs reported against HTTP/2 in the middle of 2019. While Tomcat wasn't directly affected (the malicious input didn't trigger excessive load), we did take steps to block input that matched the malicious profile.
From your GitHub comment, you see issues with POSTs. That strongly suggests that the client is sending the POST data in many small packets rather than a smaller number of larger packets. Some clients (e.g. Chrome) are known to do this occasionally due to the way they buffer data.
A number of the HTTP/2 DoS attacks could be summarized as sending more overhead than data. While Tomcat wasn't directly affected, we took the decision to monitor for clients operating in this way and drop connections if any were found on the grounds that the client was likely to be malicious.
Generally, data packets reduce the overhead count, non-data packets increase the overhead count and (potentially) malicious packets increase the overhead count significantly. The idea is that an established, generally well-behaved, connection should be able to survive the occasional 'suspect' packet but any more than that will quickly trigger the connection to be closed.
In terms of small POST packets, the key configuration settings are:
overheadCountFactor
overheadDataThreshold
The overhead count starts at -10. For every DATA frame received it is reduced by 1. For every SETTINGS, PRIORITY and PING frame it is increased by overheadCountFactor. If the overhead count goes above 0, the connection is closed.
In addition, if the average size of a received non-final DATA frame and the previously received DATA frame (on that same stream) is less than overheadDataThreshold, then the overhead count is increased by overheadDataThreshold/(average size of current and previous DATA frames). In this way, the smaller the DATA frame, the greater the increase in the overhead count. A small number of small non-final DATA frames should be enough to trigger connection closure.
The averaging is there so buffering such as exhibited by Chrome does not trigger the overhead protection.
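For example, with the default overheadDataThreshold of 1024: a stream of 128-byte non-final DATA frames averages 128 bytes, so each such frame adds 1024/128 = 8 to the count (less the 1 that every DATA frame subtracts). Starting from -10, roughly two or three of those frames are enough to push the count above 0 and close the connection.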
To diagnose this problem you need to look at the logs to see what size non-final DATA frames are being sent by the client. I suspect that will show a series of non-final DATA frames with size less than 1024 (the default for overheadDataThreshold).
To fix the issue my recommendation is to look at the client first. Why is it sending small non-final DATA frames and what can be done to stop it?
If you need an immediate mitigation then you can reduce overheadDataThreshold. The information you get on DATA frame sizes sent by the client should guide you as to what to set this to. It needs to be smaller than DATA frames being sent by the client. In extremis you can set overheadDataThreshold to zero to disable the protection.
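For example, with a standalone Tomcat the threshold can be lowered on the HTTP/2 upgrade protocol element in server.xml (the 512 below is purely illustrative; pick a value based on the DATA frame sizes you actually observe):

<!-- inside the HTTPS <Connector> in server.xml -->
<UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"
                 overheadDataThreshold="512" />

With Spring Boot's embedded Tomcat there is no server.xml, but the same property should be settable programmatically on the Http2Protocol instance via a Tomcat connector customizer.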

Registered I/O Sockets and TCP Window Size

Windows Registered I/O (RIO) sockets don't have an internal buffer, and the SO_RCVBUF socket option doesn't apply. How is the TCP window calculated/advertised for them?
The RIO API extensions are most helpful in scenarios that transmit large numbers of small messages. They use a queueing technique to speed up receive and send operations.
For multiple RIOReceive calls, you can point to different sub-buffers within the registered buffer by using different Offset and Length values in the RIO_BUF structure.
The registered buffer does not affect the receive window size. Refer to the following if you want to change it:
TCP Receive Window Auto-Tuning Level feature in Windows
SIO_SET_COMPATIBILITY_MODE Control Code
SetTcpWindowSize method of the Win32_NetworkAdapterConfiguration class

Send a message larger than 1 MB using nats-streaming?

I am trying to send a file using the NATS messaging service. The size of the files may vary. Is there a way to send more than 1 MB of data in a message body, or possibly break up and rejoin the message body?
UPDATE 2022-09-19
According to the docs (https://docs.nats.io/reference/faq#is-there-a-message-size-limitation-in-nats), the default size is 1 MB and it can be increased up to 64 MB (also see the other answer).
OUTDATED INFO
According to the NATS FAQ, you cannot send a message whose size exceeds 1 MB (https://docs.nats.io/reference/faq#is-there-a-message-size-limitation-in-nats):
NATS does have a message size limitation that is enforced by the server and communicated to the client during connection setup. Currently, the limit is 1MB.
Messaging systems are not meant to be used for file transfer. Use a distributed storage service to hold the files and pass a file ID in the message.
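If you want to see the limit your server is actually advertising, the Python client exposes it after connecting (a minimal sketch using nats-py; the URL is a placeholder):

import asyncio
import nats  # nats-py client

async def main():
    nc = await nats.connect("nats://localhost:4222")  # placeholder URL
    # max_payload is the limit the server communicated during connection setup
    print("max_payload:", nc.max_payload)
    await nc.close()

asyncio.run(main())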
You can start the NATS server with a configuration file to define the maximum message size:
$ nats-server --config /path/to/nats.config
Configuration file example:
# Override message size limit (bytes):
max_payload: 100000000
See available options at https://docs.nats.io/nats-server/configuration#configuration-properties

ZeroMQ Message Size Length Limit?

Suppose that several machines are interacting together using Python's ZeroMQ client.
These messages are naturally formatted as strings.
Is there a limit to the length of a message (string)?
There is no limit to the size of messages being sent; however, small messages are handled differently from large messages (see here).
The max size of a small message is defined in the source code as 30 bytes (see here; look for ZMQ_MAX_VSM_SIZE).
There is the socket option ZMQ_MAXMSGSIZE which causes a peer sending an oversized message to be disconnected, but the default is "no limit".
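For example, with pyzmq the cap can be set per socket (a minimal sketch; the 1 MB value and endpoint are arbitrary):

import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PULL)
# Peers sending a message larger than this many bytes get disconnected;
# the default of -1 means no limit.
sock.setsockopt(zmq.MAXMSGSIZE, 1024 * 1024)
sock.bind("tcp://*:5556")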
No limit.
As for small messages transmitted within zmq_msg_t structures, their limit is 29 bytes (for ZeroMQ version 3.2.2): "max_vsm_size = 29," quoted from https://github.com/zeromq/libzmq/blob/master/src/msg.hpp
Some socket types support messages up to 2^64 bytes, but some support less than 2^31.
You should build a protocol that keeps chunks below that size anyway, but this is the real answer:
https://github.com/zeromq/libzmq/issues/1332
