Latency issue in messages passed through HiveMQ broker - performance

I have a HiveMQ broker deployed as a Docker container (with 8 GB of storage) on a CentOS machine.
Through a simulator, I am sending 10k messages sequentially to a fixed MQTT topic on this broker.
It takes 5.37s to send all messages and 9.62s until the last message is received on the Docker HiveMQ deployment.
The same 10k messages take 5.98s when HiveMQ is deployed directly on the machine (no Docker).
In both cases there is significant latency.
The same test against other MQTT brokers shows much lower latency: all messages are received in under 2s.
What could be the reason for so much latency with HiveMQ?
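For reference, a minimal paho-mqtt sketch of the kind of measurement described above (host and topic are placeholders): publish N messages sequentially and timestamp the last one received.

```python
# Minimal MQTT latency probe with paho-mqtt (pip install paho-mqtt).
# Host, port, and topic are placeholders; adjust for your deployment.
import time
import paho.mqtt.client as mqtt

N = 10_000
TOPIC = "bench/latency"
received = 0
t_last_recv = 0.0

def on_message(client, userdata, msg):
    global received, t_last_recv
    received += 1
    if received == N:
        t_last_recv = time.monotonic()

sub = mqtt.Client()   # paho-mqtt 1.x constructor; 2.x also needs a CallbackAPIVersion
sub.on_message = on_message
sub.connect("broker.example", 1883)
sub.loop_start()
sub.subscribe(TOPIC, qos=1)

pub = mqtt.Client()
pub.connect("broker.example", 1883)
pub.loop_start()

t_start = time.monotonic()
for i in range(N):
    pub.publish(TOPIC, payload=str(i), qos=1)
t_send_done = time.monotonic()

while received < N:          # wait until every message has arrived
    time.sleep(0.01)

print(f"send: {t_send_done - t_start:.2f}s, "
      f"last receive: {t_last_recv - t_start:.2f}s")
```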

Related

Increasing number of client connections on Mosquitto broker

I am working on a project where I want to use MQTT. My requirements are around 25k connected clients and a message rate of around 4,000 messages/sec. After looking at some open-source broker options, I have been running tests with Mosquitto.
I am using JMeter, which can simulate the clients and messages I need using threads.
The test machine has two Intel® Xeon® E5-2620 v3 processors (6 cores / 12 threads each, 15 MB cache, 2.40 GHz) and 9 GB of RAM, running Windows Server 2012 R2, so I have a reasonably capable machine to host a broker.
To monitor my test runs I use MQTT Explorer, a client often used with Mosquitto.
I have been testing with 2k clients (1k publishers sending 1,000 messages/second for 15 seconds, with the payload "Hola1", plus 1k subscribers). These numbers were the highest I could reach; every time I tried to go beyond this number of clients, Mosquitto simply died and I lost the connections.
I have been reading around the web: some people say Mosquitto can handle up to 100k connections, and some say you can configure the broker to support more connections, but I haven't figured out how. Is there any guide or documentation for this?
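For what it's worth, the connection cap on the Mosquitto side is set in mosquitto.conf via the max_connections option; a minimal sketch is below (values illustrative, not tuned recommendations). Note that the Windows build of Mosquitto has historically been limited to roughly 2048 concurrent connections by the socket layer regardless of this setting, so reaching 25k clients may require a Linux host.

```
# mosquitto.conf sketch -- values illustrative, not tuned recommendations.
listener 1883

# -1 means Mosquitto itself imposes no limit; the real ceiling is the
# OS socket/file-descriptor limit, which usually needs raising as well.
max_connections -1
```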

NIFI - ListenTCP Max Connections Setting - Scaling for IoT to 10k

Is it possible to scale the number of inbound TCP connections into NiFi to tens of thousands? Regarding scaling NiFi with TCP connections: the docs give the ListenTCP connection setting a default of 2.
We expect to handle between 10,000 and 25,000 long-running TCP connections (maximum connection duration of 4 hours). Deploying multiple redundant NiFi clusters to handle the load would not be a problem.
The domain is IoT with TCP devices. Devices can only send messages over TCP. Each device sends a message every 2 minutes. We are considering moving to a NiFi cluster/containerised solution on AWS if it can scale to handle our connection load.
Any similar challenges or experiences or workarounds?
Thanks!
You can increase the max connections to whatever your NiFi instances can handle. There will be a thread reading from each connection, so it would end up creating 10k threads if there were 10k concurrent connections to a single instance. You would likely want a TCP load balancer in front of a NiFi cluster, so that, say, a 10-node cluster would only see 1k connections per node.
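A quick back-of-envelope on the numbers above (the cluster size is the hypothetical 10 nodes from the answer):

```python
# Back-of-envelope sizing from the numbers in the question.
connections = 25_000                # worst-case concurrent TCP connections
nodes = 10                          # hypothetical NiFi cluster size
msgs_per_device_per_sec = 1 / 120   # one message every 2 minutes

per_node_conns = connections / nodes
total_msg_rate = connections * msgs_per_device_per_sec

print(f"{per_node_conns:.0f} connections/node -> ~{per_node_conns:.0f} reader threads/node")
print(f"aggregate message rate: ~{total_msg_rate:.0f} msg/s")
# -> 2500 connections/node and only ~208 msg/s overall: the per-node
#    thread count, not the message volume, is the scaling concern here.
```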

Best protocol to send huge file over unstable connection

Use case: I need to send huge files (multiples of 100 MB) over a queue from one server to another.
At present I am using an ActiveMQ Artemis server to send the file as a BytesMessage backed by an input stream over TCP, with the retry interval set to -1 for unlimited failover retries. The main problem is that the consumer endpoint's connection is mostly unstable, i.e., it frequently drops off the network because of its mobile nature.
So while sending a file through the queue, if the connection is dropped and later re-established, the broker should resume from where the transfer was interrupted. For example, while transferring a 300 MB file to the consumer, assume 100 MB has been transferred when the connection drops; after reconnecting, the process should resume with the remaining 200 MB rather than resending the whole 300 MB.
My question: which protocol (TCP/core, STOMP, or OpenWire) and which practice (BlobMessage, or BytesMessage with an input stream) is best to achieve this in ActiveMQ Artemis?
Apache ActiveMQ Artemis supports "large" messages which will stream over the network (it uses Netty TCP). This is covered in the documentation. Note: this functionality is only available to "core" clients; it doesn't work with STOMP or OpenWire. It also doesn't support "resume" functionality where the message transfer would pick up where it left off after a disconnection.
My recommendation would be to send the file in smaller chunks as individual messages, which are easier to deal with in the case of network slowness or disconnection. The chunks can be grouped together with a correlation ID or similar, and the receiving client can then assemble the pieces once they have all arrived.
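For illustration, a minimal sketch of that chunking approach using the stomp.py client (queue name, credentials, and header names are all illustrative, not an Artemis API):

```python
# Chunked file transfer sketch with stomp.py (pip install stomp.py).
# Host, queue, credentials, and header names are placeholders.
import os
import uuid
import stomp

CHUNK_SIZE = 1024 * 1024  # 1 MB per message

def send_file(path, host="artemis.example", port=61613):
    conn = stomp.Connection([(host, port)])
    conn.connect("user", "pass", wait=True)
    file_id = str(uuid.uuid4())  # correlation ID for reassembly
    total = (os.path.getsize(path) + CHUNK_SIZE - 1) // CHUNK_SIZE
    with open(path, "rb") as f:
        index = 0
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            conn.send(
                destination="/queue/file.transfer",
                body=chunk,
                headers={
                    "file-id": file_id,
                    "chunk-index": str(index),
                    "chunk-total": str(total),
                    "persistent": "true",  # survive a broker restart
                },
            )
            index += 1
    conn.disconnect()
```

On the consumer side, each chunk can be written at offset chunk-index × CHUNK_SIZE, so after a reconnect only the chunks that were never acknowledged need to be redelivered, not the whole file.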

Benchmark of mosquitto clustering?

I'm trying to build a Mosquitto cluster, because Mosquitto is single-threaded and seemingly cannot handle lots of QoS 2 messages.
MQTT server comparison: Benchmark of MQTT Servers
I found that Mosquitto can use bridging to form a cluster (Cluster forming with Mosquitto broker), but I'm wondering whether having each Mosquitto subscribe to all messages from every other broker causes high overhead in internal message traffic.
For example, if I have 10 Mosquitto brokers and each of them serves 1,000 messages, that is 10,000 messages in total. But messages are shared between brokers, so each message is forwarded to the other 9 brokers: 1,000 × 9 × 10 = 90,000 messages of internal traffic.
Is there a benchmark for Mosquitto clustering? Or what is the general solution for handling lots of QoS 2 messages?
Thanks
We once set up an MQTT service platform using Mosquitto as the broker: 8 brokers bridged together, about 20k clients subscribed to 20k topics, QoS 0, on average 1k published messages/sec of 100-2,000 bytes. The bridges subscribed to and published all topics, which introduced huge forwarding latency, sometimes more than 2 minutes.
So now we simply broadcast every publish to each broker, which does work.
But a bridge is not the same as a cluster: bridged brokers do not behave like one logical MQTT broker with cluster-wide sessions, load balancing, and no single point of failure.
So I implemented an autonomous Mosquitto cluster and ran some performance tests with Tsung. Generally speaking, in a scenario with 30k subscribers and 2.5k publishes/sec, payload length 744 bytes, QoS 1, the average response time was somewhat higher than with bridging (5.1 ms vs 2.32 ms), but no messages were lost and the load was balanced.
You can find the detailed test report under mosquitt-cluster-bridge-benchmark.
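For context, the bridging discussed above is configured per broker in mosquitto.conf; a minimal sketch (host name illustrative) of the "bridge everything" pattern that produces the all-to-all fan-out counted in the question:

```
# mosquitto.conf bridge sketch on broker1 -- repeat one "connection"
# block per peer broker; host names are illustrative.
connection bridge-to-broker2
address broker2.example:1883

# "topic # both 0" bridges every topic in both directions at QoS 0;
# this is exactly the N x (N-1) internal forwarding counted above.
topic # both 0
```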

How to create a udp-based message broker service

I want to create a UDP-based message broker service.
I have a few dozen sources, each transmitting at a different rate; some of them stream and some forward data in batches.
I want all the data to go to one destination: a Cloudera Hadoop cluster (on RedHat 6.6) that will use Kafka/Flume as its message broker.
I need to create the intermediate message broker service. It has to be robust and fault-tolerant. It can receive the data from the sources using any protocol, but it has to forward the messages using UDP (or any one-way protocol; no ACK/SYN or any other response is allowed).
For that reason it has to use a PUSH mechanism, and the data cannot be pulled by the Hadoop cluster.
As far as I know, Kafka and Flume use TCP to forward messages. I found "udp-kafka-bridge" and "flume-udp-source", but I have no experience with them.
The broker has to be robust and fault-tolerant, able to deal with changing rates of incoming data, and preferably near-real-time.
Do you have any recommendations for tools/architecture I should use?
Thank you!
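As a sketch of the forwarding leg only (host/port placeholders): the one-way UDP push itself is simple; the hard part is the buffering and fault tolerance around it, which this does not show.

```python
# Minimal one-way UDP forwarder sketch: receives messages on a local
# UDP port and pushes them onward with no ACK/response path at all.
# Addresses are placeholders; no retry, spooling, or buffering shown.
import socket

LISTEN_ADDR = ("0.0.0.0", 5140)
FORWARD_ADDR = ("hadoop-ingest.example", 5141)
MAX_DGRAM = 65507   # max UDP payload; each message must fit in one datagram

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(LISTEN_ADDR)
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    data, src = recv_sock.recvfrom(MAX_DGRAM)
    # Fire-and-forget push; durability would need a local spool/journal.
    send_sock.sendto(data, FORWARD_ADDR)
```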
