I am trying to send a file using the NATS messaging service. The size of the files may vary. Is there a way to send more than 1 MB of data in a message body, or possibly to break up and rejoin the message body?
UPDATE 2022-09-19
According to the docs (https://docs.nats.io/reference/faq#is-there-a-message-size-limitation-in-nats), the default size is 1 MB and can be increased up to 64 MB (also see the other answer).
OUTDATED INFO
According to NATS FAQ you cannot send a message which size exceeds 1M (https://docs.nats.io/reference/faq#is-there-a-message-size-limitation-in-nats):
NATS does have a message size limitation that is enforced by the server and communicated to the client during connection setup. Currently, the limit is 1MB.
Messaging systems are not supposed to be used for file transfer. Use a distributed storage service to hold files and pass file ID in the message.
You can start the NATS server with a configuration file to define the maximum message size:
$ nats-server --config /path/to/nats.config
Configuration file example:
# Override message size limit (bytes):
max_payload: 100000000
See available options at https://docs.nats.io/nats-server/configuration#configuration-properties
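If raising max_payload is not an option, the break-and-join approach the question asks about can be sketched independently of any NATS client API: split the payload into chunks under the server limit, tag each with a sequence number, and reassemble on receipt. The function names and the (seq, total, chunk) framing below are illustrative assumptions, not part of NATS; a real protocol would also need to encode the sequence metadata inside each message body.

```python
def chunk_payload(data: bytes, max_payload: int = 1024 * 1024):
    """Split data into (seq, total, chunk) tuples, each chunk under max_payload."""
    total = (len(data) + max_payload - 1) // max_payload
    return [(i, total, data[i * max_payload:(i + 1) * max_payload])
            for i in range(total)]

def join_chunks(chunks):
    """Reassemble chunks (possibly received out of order) into the original bytes."""
    return b"".join(chunk for seq, total, chunk in sorted(chunks, key=lambda t: t[0]))
```

Each chunk would then be published as its own NATS message, and the receiver calls join_chunks once it has seen all `total` pieces.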
Related
I'm working on a project where I want to live-display users' points whenever they gain or lose points.
I have an API endpoint that sends an event to the frontend:
public function test(Request $request){
    $companyId = $request->message;
    // Filter in the database instead of loading every user into memory
    $users = User::where('company_id', $companyId)->get();
    event(new MyEvent($users));
}
Whenever the API is called, I receive the following error:
Illuminate\Broadcasting\BroadcastException: Pusher error: The data content of this event exceeds the allowed maximum (10240 bytes).
How can this be solved?
The simple answer is to reduce the payload size: Pusher enforces a size limit on the body of an event - https://pusher.com/docs/channels/library_auth_reference/rest-api/#post-event-trigger-an-event
They also list some strategies for reducing payload size, such as:
sending multiple small events instead of one large event (chunking)
using compression to reduce payload size
sending the client a link that downloads the content instead of transmitting it via Channels.
See https://support.pusher.com/hc/en-us/articles/4412243423761-What-Is-The-Message-Size-Limit-When-Publishing-an-Event-in-Channels- for more info
Source: https://support.pusher.com/hc/en-us/articles/4412243423761-What-Is-The-Message-Size-Limit-When-Publishing-an-Event-in-Channels-
The message size limit is 10KB.
There are several approaches to work around this limit:
Chunking. One approach is to split your large message into smaller chunks before publishing, then recombine them on receipt. This repository shows an example of a chunking protocol: https://github.com/pusher/pusher-channels-chunking-example.
Linking. Instead of sending a large message via Pusher Channels, you could store that message elsewhere and just send your clients a link to that content.
Compression, e.g.
Compression algorithms like gzip.
Removing unnecessary characters. Whitespace is one example.
Removing/shortening keys. Instead of {"user":"jim","email":"jim@example.com"}, you could use {"u":"jim","e":"jim@example.com"}, or ["jim","jim@example.com"].
A dedicated cluster. If none of the above are suitable and you really need to send larger single messages, we can set up a dedicated cluster. If you are interested in provisioning a dedicated cluster as part of an enterprise package, please contact Pusher sales.
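The compression and key-shortening strategies above can be sketched in a few lines. The event shape and the positional-array schema are illustrative assumptions; both sides must agree on the field order in advance.

```python
import gzip
import json

event = {"user": "jim", "email": "jim@example.com", "points": 1200}

# Drop the keys entirely: both sides agree the array is [user, email, points].
compact = json.dumps([event["user"], event["email"], event["points"]],
                     separators=(",", ":"))

# Compress the compact JSON before publishing; the client decompresses it.
compressed = gzip.compress(compact.encode("utf-8"))
restored = json.loads(gzip.decompress(compressed).decode("utf-8"))

assert restored == ["jim", "jim@example.com", 1200]
assert len(compact) < len(json.dumps(event))
```

Note that gzip only pays off on payloads with some redundancy; for very small events the positional-array trick alone is often the bigger win.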
I have been testing the SDK with code based on the Azure-ESP-Starter code.
When I try to send a larger message body the message fails. I am using an ESP32 Wroom and the SDK installed in VScode.
A message length of 5438 bytes transmits OK, but 5458 fails.
I was looking for a reason and noticed that in the TCP settings, the default send buffer size = 5744
If I change this to 15744 I can send a 10,000 byte message.
The SDK seems to be failing when trying to fragment & send larger messages.
At first I thought it may have something to do with the TLS maximum outgoing fragment length setting (default 4096), but increasing this did not resolve the problem. The TCP buffer setting does allow larger messages.
I thought that Azure allows a 256k message. Has anyone else noticed this issue?
The following document section lists the limits associated with the different service tiers S1, S2, S3, and F1. Device-to-cloud messages can be at most 256 KB, and can be grouped in batches to optimize sends. Batches can be at most 256 KB.
I would suggest trying a different Azure IoT library, or a different device, to see whether you can reproduce the same issue, or maybe trying another protocol.
Throttling details
IoT Hub measures message size in a protocol-agnostic way, considering only the actual payload. The size in bytes is calculated as the sum of the following values:
The body size in bytes.
The size in bytes of all the values of the message system properties.
The size in bytes of all user property names and values.
Device-to-cloud messages can be at most 256 KB.
The total message size, including the enrichments, can't exceed 256 KB. If a message size exceeds 256 KB, the IoT Hub will drop the message. You can use IoT Hub metrics to identify and debug errors when messages are dropped. For example, you can monitor the telemetry messages incompatible (d2c.telemetry.egress.invalid) metric in the routing metrics. To learn more, see Monitor IoT Hub.
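The size calculation described above can be sketched as a small helper. The function name and the dict-based property representation are assumptions for illustration; IoT Hub does this accounting server-side.

```python
MAX_D2C_BYTES = 256 * 1024  # device-to-cloud limit per message

def iot_hub_message_size(body: bytes, system_props: dict, user_props: dict) -> int:
    """Sum of: body bytes, the byte lengths of all system property *values*,
    and the byte lengths of all user property names *and* values."""
    size = len(body)
    size += sum(len(str(v).encode("utf-8")) for v in system_props.values())
    size += sum(len(k.encode("utf-8")) + len(str(v).encode("utf-8"))
                for k, v in user_props.items())
    return size
```

A device could call this before sending and refuse (or chunk) any payload where the result exceeds MAX_D2C_BYTES, rather than letting the hub silently drop it.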
I am using Apache.NMS.AMQP (v1.8.0) to connect to an AWS managed ActiveMQ (v5.15.9) broker, but am having problems setting the prefetch size for the connection/consumer/destination (I couldn't set a custom value on any of them).
While digging through source code I've found that default prefetch value (DEFAULT_CREDITS) is set to 200.
To test this behavior I wrote a test that enqueues 220 messages on a single queue, creates two consumers, and then consumes the messages. The result was, as expected, that the first consumer dequeued 200 messages and the second dequeued 20.
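The dequeue pattern observed in that test can be modelled with a tiny simulation. This is a deliberate simplification: it assumes the broker fills each consumer's link credit in connection order before dispatching to the next, which matches the observed result but glosses over real dispatch details.

```python
def distribute(messages: int, consumers: int, prefetch: int) -> list:
    """Grant each consumer up to `prefetch` messages in turn until the
    backlog is exhausted, mirroring credit-based dispatch."""
    counts = []
    remaining = messages
    for _ in range(consumers):
        take = min(prefetch, remaining)
        counts.append(take)
        remaining -= take
    return counts
```

With 220 messages, two consumers, and the default credit of 200, this yields [200, 20], matching the observed behavior.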
After that I was looking for a way to set prefetch size on my consumer without any success since LinkCredit property of ConsumerInfo class is readonly.
Since my use case requires me to set one prefetch size for the connection, that is what I tried next, following this documentation page, but with no success. These are the URLs that I tried:
amqps://*my-broker-url*.amazonaws.com:5671?transport.prefetch=50
amqps://*my-broker-url*.amazonaws.com:5671?jms.prefetchPolicy.all=50
amqps://*my-broker-url*.amazonaws.com:5671?jms.prefetchPolicy.queuePrefetch=50
After trying everything stated above, I tried setting the prefetch for my queue destinations by appending ?consumer.prefetchSize=50 to the queue name, resulting in something like this:
queue://TestQueue?consumer.prefetchSize=50
All of the above attempts resulted in an effective prefetch size of 200 (determined through the test described above).
Is there any way to set custom prefetch size per connection when connecting to broker using AMQP? Is there any other way to configure broker than through query parameters stated on this documentation page?
From a quick read of the code, there isn't any means of setting the consumer link credit in the NMS.AMQP client implementation at this time. This seems to be something that would need to be added, as the client currently just supplies a default value to the AmqpNetLite receiver link for auto-refill.
Their issue reporter is here.
In the following JBoss/HornetQ user manual page you can see how HornetQ provides a mechanism for streaming data to a Message for a Queue using a java.io.InputStream. A JMS version of the same code is given. Has anyone come across an equivalent using IBM MQSeries / WebSphere MQ?
Say I have a large amount of data to place in the JMS Message, which to me is just a stream of bytes. In the HornetQ example, the stream is only read when the message is sent, so if it is, say, a FileInputStream, then we only need enough memory to buffer a chunk of the bytes. I can use a javax.jms.BytesMessage to send chunks of bytes and use the BytesMessage to buffer them. The problem with this is that the IBM implementation of BytesMessage (com.ibm.msg.client.jms.internal.JmsBytesMessageImpl) has to cache them until the Message is sent, and if that is a large amount of data it is a problem. Worse, although I am only sending bytes, the IBM implementation appears to keep duplicate copies, one in a ByteArrayOutputStream and the other in a DataOutputStream.
In WebSphere MQ the closest thing to what you describe is a reference message. The method described in the Infocenter requires custom programming of channel exits to grab the filesystem object and put it into a message before it is transmitted over the channel. A complementary exit on the remote side saves the payload to a file and puts a reference to the file in the message that is returned to the app.
We also have programs in WMQ that take STDIN or a pipe at one end and put messages to a queue on the other end. A pair of these can act as a pipe through which line-oriented ASCII data flows between processes on separate machines. However, there's no JMS implementation of this and it doesn't work too well for binary data.
In WMQ, we have concept of Group and Segment.
Segmentation is supported on all operating systems except z/OS.
Check here for details: Segmentation In WMQ
Make use of GroupId, MsgSeqNumber, and Offset while putting the message.
While getting the message, if you specify MQGMO_COMPLETE_MSG in the GMO, all segments are joined automatically according to the MsgSeqNumber, and you will get a single message on the receiving application with a single GET.
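What MQGMO_COMPLETE_MSG does on the get side can be sketched in plain Python. The dict-based segment representation below is an illustrative assumption; in real WMQ these are MQMD fields, and the queue manager performs the reassembly itself.

```python
def reassemble(segments):
    """Group segments by GroupId, order each group by Offset, and
    concatenate the payloads, roughly what MQGMO_COMPLETE_MSG returns."""
    groups = {}
    for seg in segments:
        groups.setdefault(seg["GroupId"], []).append(seg)
    return {
        gid: b"".join(s["Data"] for s in sorted(segs, key=lambda s: s["Offset"]))
        for gid, segs in groups.items()
    }
```

The Offset field tells the receiver where each segment's bytes belong in the logical message, so ordering by it recovers the original payload even if segments arrive out of order.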
Suppose that several machines are interacting together using Python's ZeroMQ client.
These messages are naturally formatted as strings.
Is there a limit to the length of a message (string)?
There is no limit to the size of messages being sent; however, small messages are handled differently than large messages (see here).
The max size of a small message is defined in the source code as 30 bytes (see here, look for ZMQ_MAX_VSM_SIZE).
There is the socket option ZMQ_MAXMSGSIZE which causes a peer sending an oversized message to be disconnected, but the default is "no limit".
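Setting that option with pyzmq might look like the sketch below (an assumption that pyzmq is installed; the limit is enforced on messages received over network transports, and a peer that exceeds it gets disconnected). The message sent here stays under the limit, so it is delivered normally.

```python
import zmq

ctx = zmq.Context.instance()

receiver = ctx.socket(zmq.PULL)
receiver.setsockopt(zmq.MAXMSGSIZE, 1024)  # reject peers sending > 1 KiB
port = receiver.bind_to_random_port("tcp://127.0.0.1")

sender = ctx.socket(zmq.PUSH)
sender.connect("tcp://127.0.0.1:%d" % port)

sender.send(b"x" * 512)   # under the limit, delivered normally
msg = receiver.recv()
print(len(msg))

sender.close()
receiver.close()
ctx.term()
```

A message larger than 1024 bytes would instead cause the receiving side to drop the connection, so senders should know the limit in advance rather than discover it at runtime.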
No limit
As for small size messages transmitted within zmq_msg_t structures, their limit is 29 bytes (for zmq version 3.2.2)
"max_vsm_size = 29," quoted from https://github.com/zeromq/libzmq/blob/master/src/msg.hpp
Some socket types support messages up to 2^64 bytes, but some less than 2^31.
You should build a protocol that keeps chunks below that size anyway, but this is the real answer.
https://github.com/zeromq/libzmq/issues/1332