I deployed EMQ 3.0 on an AWS EC2 instance with mostly default configuration, but I changed the buffer sizes because my requirement is to send MQTT payloads of about 4 KB. It is not working: the EMQ broker is not receiving the message. Is there any restriction on packet size on the AWS side, and if so, how do I increase it? I verified that the EMQ configuration allows packets of up to 64 KB, and I increased the external buffer size to 4 KB without success.
Any suggestions or approaches to fix this issue? Note that I am able to send data up to 2 KB.
Thanks for the help.
Is there any restriction on packet size on the AWS side, and if so, how do I increase it?
There is no such packet-size restriction on the AWS (EC2) side. Under the shared responsibility model, AWS manages the underlying network infrastructure, while anything at the application layer, such as MQTT packet limits, is entirely under your control.
The problem is with the WebSocket support: messages with large payloads (or with particular payload sizes) never reach the EMQ code responsible for implementing the broker behaviour, so changing max_packet_size in the .conf has no effect in this case.
Bug fix link: https://github.com/emqx/emqx/issues/643
I have a Go app which is deployed to two 8-core pod instances on Kubernetes.
From it, I receive a list of ids that I later use to retrieve some data from another service by sending each id to a POST endpoint.
I am using a bounded-concurrency pattern to cap the number of simultaneous goroutines (and therefore of requests) to this external service.
I set the limit of concurrency as:
sem := make(chan struct{}, MAX_GO_ROUTINES)
With this setup I started playing around with the MAX_GO_ROUTINES number by increasing it. I usually receive around 20000 ids to check, so I have tried setting MAX_GO_ROUTINES to anywhere between 100 and 20000.
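For context, here is a stripped-down, self-contained sketch of the pattern; checkID, the ids and the constant below are simplified placeholders, not the real code:

package main

import (
	"fmt"
	"sync"
)

const MAX_GO_ROUTINES = 100 // the value I keep tuning

// checkID stands in for the real HTTP POST to the external service.
func checkID(id string) error {
	return nil
}

func main() {
	ids := []string{"id-1", "id-2", "id-3"} // normally ~20000 ids

	sem := make(chan struct{}, MAX_GO_ROUTINES) // bounded concurrency
	var wg sync.WaitGroup

	for _, id := range ids {
		wg.Add(1)
		sem <- struct{}{} // blocks once MAX_GO_ROUTINES requests are in flight
		go func(id string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			if err := checkID(id); err != nil {
				fmt.Println("request failed:", err)
			}
		}(id)
	}
	wg.Wait()
}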
One thing I notice is that, as I go higher and higher, some requests start to fail with "connection reset" errors from this external service.
So my questions are:
What is the blocker in this case?
What is the limit of concurrent HTTP POST requests a server with 8 cores and 4 GB of RAM can send? Is it a memory limit, or a file-descriptor limit?
Is the error I am getting coming from my server or from the external one?
What is the blocker in this case?
As the comment mentioned: HTTP "connection reset" generally means:
the connection was unexpectedly closed by the peer. The server appears
to have dropped the connection on the unsuspecting HTTP client before
sending back a response. This is most likely due to the high load.
Most webservers (like nginx) have a queue where they stage connections while they wait to be serviced. When that queue exceeds some limit, connections may be shed and "reset". So it is most likely that your upstream service is being saturated (i.e. your app sends more requests than the upstream can service, and its queue overflows).
What is the limit of concurrent HTTP POST requests a server with 8 cores and 4 GB of RAM can send? Is it a memory limit, or a file-descriptor limit?
All of them :) At some point your particular workload will hit a logical limit (like file descriptors) or a "physical" limit like memory. Unfortunately, the only way to truly understand which resource will be exhausted (and which constraints you run up against) is to run tests, profile, and benchmark your workload :(
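If you suspect file descriptors specifically, one cheap check is to print the limits the process actually runs with inside the pod; a minimal, Linux-only sketch using the standard syscall package:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		fmt.Println("getrlimit failed:", err)
		return
	}
	// Every in-flight request holds at least one descriptor (the socket),
	// so the soft limit is a ceiling on useful MAX_GO_ROUTINES values.
	fmt.Printf("file descriptors: soft limit %d, hard limit %d\n", rl.Cur, rl.Max)
}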
Is the error I am getting coming from my server or from the external one?
The connection reset most likely comes from the external service: it indicates that the connection peer (the upstream service) reset the connection.
Use case: I need to send huge files (multiples of 100 MB) over a queue from one server to another.
At present I am using an ActiveMQ Artemis server to send the huge file as a BytesMessage backed by an input stream over TCP, with the retry attempts set to -1 for unlimited failover retries. The main problem is that the consumer endpoint's connection is mostly unstable, i.e. it keeps dropping off the network because of its mobile nature.
So, while sending a file to the queue, if the connection is dropped and then reconnected, the broker should resume from where the transfer was interrupted. For example, while transferring a 300 MB file to the consumer queue, assume that 100 MB has been transferred when the connection drops; after reconnecting, the process should resume by transferring the remaining 200 MB, not the whole 300 MB again.
My question is: which protocol (TCP, STOMP, or OpenWire) and which approach (BlobMessage, or BytesMessage with an input stream) is best practice to achieve this in ActiveMQ Artemis?
Apache ActiveMQ Artemis supports "large" messages which will stream over the network (which uses Netty TCP). This is covered in the documentation. Note: this functionality is only for "core" clients; it doesn't work with STOMP or OpenWire. It also doesn't support "resume" functionality where the message transfer picks up where it left off in the case of a disconnection.
My recommendation would be to send the message in smaller chunks in individual messages that will be easier to deal with in the case of network slowness or disconnection. The messages can be grouped together with a correlation ID or something and then the end client can take the pieces of the message and assemble them together once they are all received.
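As a rough, library-agnostic sketch of that chunking scheme (shown in Go only to illustrate the idea; sendChunk is a hypothetical stand-in for whatever producer call your chosen client exposes, and only the metadata carried with each chunk matters):

package main

import "fmt"

const chunkSize = 1 << 20 // 1 MB per message; tune as needed

// Chunk is one piece of the file plus the metadata the consumer needs
// to reassemble it and to resume after a reconnect.
type Chunk struct {
	FileID string // correlation ID shared by all chunks of one file
	Index  int    // position of this chunk (0-based)
	Total  int    // total number of chunks for the file
	Data   []byte
}

// sendChunk is hypothetical; replace it with the real producer/send call.
func sendChunk(c Chunk) error {
	fmt.Printf("sending %s chunk %d/%d (%d bytes)\n", c.FileID, c.Index+1, c.Total, len(c.Data))
	return nil
}

// sendFile splits the payload into chunks; if a send fails after a
// disconnect, you can resume from chunk i instead of resending the file.
func sendFile(fileID string, data []byte) error {
	total := (len(data) + chunkSize - 1) / chunkSize
	for i := 0; i < total; i++ {
		end := (i + 1) * chunkSize
		if end > len(data) {
			end = len(data)
		}
		if err := sendChunk(Chunk{FileID: fileID, Index: i, Total: total, Data: data[i*chunkSize : end]}); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	payload := make([]byte, 3*chunkSize+1234) // stands in for a real file read in pieces
	if err := sendFile("file-42", payload); err != nil {
		fmt.Println("transfer failed:", err)
	}
}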
I am building a proxy server using Java. This application is deployed in a Docker container (multiple instances).
Below are the requirements I am working on:
Clients send HTTP requests to my proxy server.
The proxy server forwards those requests, in the order they were received, to the destination node server.
When the destination is not reachable, the proxy server stores those requests and forwards them once the destination becomes available again.
Similarly, when a request fails, the request will be retried after "X" time.
I implemented a per-node queue (a HashMap with the node name as the key and, as the value, the reachability status plus a queue of requests in the order they were received).
The above solution works well when there is only one instance, but how do I solve this when there are multiple instances? Is there any shared data structure I can use to solve this issue: ActiveMQ, Redis, Kafka, or something of that kind? (I am very new to shared memory/processing.)
Any help would be appreciated.
Thanks in advance.
Ajay
There is an open-source REST proxy for Kafka, based on Jetty, from which you might get some implementation ideas.
https://github.com/confluentinc/kafka-rest
This proxy doesn't store messages itself because Kafka clusters are highly available for writes, and there are typically a minimum of 3 Kafka nodes available for message persistence. The Kafka client in the proxy can be configured to retry if the cluster is temporarily unavailable for writes.
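The proxy above is Java, but the retry idea is the same in any client (the Java producer's equivalents are the retries and retry.backoff.ms settings). A minimal sketch with the Go sarama client, where the broker address and topic are placeholders:

package main

import (
	"log"
	"time"

	"github.com/IBM/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	cfg.Producer.RequiredAcks = sarama.WaitForAll       // wait for the in-sync replicas
	cfg.Producer.Retry.Max = 10                         // retry transient "cluster unavailable" errors
	cfg.Producer.Retry.Backoff = 500 * time.Millisecond // pause between retries
	cfg.Producer.Return.Successes = true                // required by the sync producer

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()

	_, _, err = producer.SendMessage(&sarama.ProducerMessage{
		Topic: "proxied-requests", // placeholder topic
		Value: sarama.StringEncoder(`{"example":"payload"}`),
	})
	if err != nil {
		log.Println("send failed after retries:", err)
	}
}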
We have a BatchJob application which is configured in WebSphere Application Server (8.0.0.7). The application processes the requests put on the source MQ queue. Through WAS, the MQ queue is polled to see if there are any new requests available for processing.
We have recently been notified by an MQ resource that there is high CPU utilization due to the MQ channel used by our application. Looking at the numbers, the MQGET and MQINQ counts are humongous. This is not a one-off incident; it has been like that since the day our application was installed. So I believe some configuration in WebSphere is causing this high volume of MQGET and MQINQ requests.
Can somebody give any pointers on which configs need to be checked? I am from the application development side, so I don't have in-depth knowledge of WAS.
Thanks in advance.
I want to create a UDP-based message broker service.
I have a few dozen sources, each transmitting at a different rate; some of them stream and some of them forward the data in batches.
I want all the data to go to one destination: a Cloudera Hadoop cluster (running Red Hat 6.6) that will use Kafka/Flume as its message broker.
I need to create the in-between message broker service. It has to be robust and fault tolerant. It can receive the data from the sources using any protocol, but it has to forward the messages using UDP (or any other one-way protocol; no ACK/SYN or any other response allowed).
For that reason it has to use a PUSH mechanism, and the data cannot be pulled by the Hadoop cluster.
As far as I know, Kafka and Flume use TCP to forward messages. I found "udp-kafka-bridge" and "flume-udp-source", but I do not have any experience with them.
The message broker has to be robust and fault tolerant. It has to be able to deal with changing rates of incoming data, and it should preferably be a near-real-time broker.
Do you have any recommendation for tools/architecture I should use?
Thank you!