I'm trying to build a Mosquitto cluster, because Mosquitto is single-threaded and seems unable to handle a large volume of QoS 2 messages.
MQTT server comparison: Benchmark of MQTT Servers
I found that Mosquitto can form a cluster via bridges (Cluster forming with Mosquitto broker), but I'm wondering whether having each broker subscribe to all messages from every other broker causes high overhead from internal message traffic.
For example, if I have 10 Mosquitto brokers, each serving 1,000 messages, that's 10,000 messages in total. But because messages are shared between brokers, each message is forwarded to the other 9 brokers, for a total of 1,000 × 9 × 10 = 90,000 internal messages.
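For reference, the bridging I mean is configured per broker in mosquitto.conf; a minimal sketch of such a stanza (hostnames illustrative) would be:

```conf
# mosquitto.conf on broker A — bridge everything to broker B
connection bridge-to-b
address broker-b.example.com:1883
topic # both 2
```

The `topic # both 2` line bridges all topics in both directions at QoS 2, which is exactly what multiplies the internal traffic described above.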
Is there a benchmark for Mosquitto clustering? Or what's the general solution for handling a high volume of QoS 2 messages?
Thanks
We used to run an MQTT service platform with Mosquitto as the broker: 8 brokers bridged together, about 20k clients subscribed to 20k topics, QoS 0, averaging 1k published messages/sec of 100–2k bytes each. The bridges subscribed to and republished all topics, which introduced huge forwarding latency, sometimes more than 2 minutes.
So now we simply broadcast all publishes to each of the brokers, and that does work.
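The broadcast approach can be sketched broker-agnostically: fan each publish out to every broker endpoint and record per-broker failures. The publish callables below are stand-ins for real MQTT client publish calls (e.g. paho-mqtt's `client.publish`), not an actual broker API:

```python
def broadcast(publishers, topic, payload):
    """Send the same message to every broker; collect per-broker failures."""
    failures = []
    for name, publish in publishers.items():
        try:
            publish(topic, payload)
        except OSError as exc:  # e.g. a broker that is down
            failures.append((name, exc))
    return failures

# Stand-in publish functions recording into lists, one per "broker".
received = {"a": [], "b": []}
publishers = {
    "broker-a": lambda t, p: received["a"].append((t, p)),
    "broker-b": lambda t, p: received["b"].append((t, p)),
}
failures = broadcast(publishers, "sensors/1", b"42")
print(len(failures), received["a"] == received["b"])  # 0 True
```

The trade-off is the same as in the question above: every publish is duplicated once per broker, so total traffic grows linearly with cluster size.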
But a bridge is something different from a cluster: it does not act like one logical MQTT broker with cluster sessions, load balancing, no single point of failure, and so on.
So I implemented an autonomous Mosquitto cluster and ran some performance tests with Tsung. Generally speaking, with a scenario of 30k subscribers and 2.5k publishes/sec, payload length 744 bytes, QoS 1, the average request response time was a bit higher than with a bridge (5.1 ms vs. 2.32 ms), but no messages were lost and the load was balanced.
You can find the detailed test report under mosquitto-cluster-bridge-benchmark.
I am working on a project where I want to use MQTT. Some of my requirements are around 25k connected clients and a message rate of around 4,000 messages/sec. After looking at some open-source broker options, I have been running tests with Mosquitto.
I am using a tool called JMeter (it can simulate the clients and messages I need using threads).
The test machine has 2 Intel® Xeon® E5-2620 v3 processors (6 cores and 12 threads each, 15 MB cache, 2.40 GHz) and 9 GB of RAM, and the OS is Windows Server 2012 R2, so I have a good machine to host a broker.
To monitor my test runs I use MQTT Explorer, a client application that works well with Mosquitto.
I have been running tests with 2k clients (1k publishers sending 1,000 messages/second for 15 seconds, with "Hola1" as the message, plus 1k subscribers). These numbers were the highest I could reach; every time I tried to exceed this number of clients, Mosquitto simply died and I lost the connections.
I have been searching the web, and some people say that Mosquitto can handle up to 100k connections, while others say you can configure the broker to support more, but I haven't figured out how to configure mine. Is there any guide or documentation available for this?
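In case it helps, the per-listener connection cap lives in mosquitto.conf; a minimal sketch (values illustrative) looks like this. Note that the operating system also limits open sockets independently of this setting, and Windows builds of Mosquitto have historically had a fairly low hard ceiling on connections regardless of configuration:

```conf
# mosquitto.conf — illustrative values
listener 1883
max_connections -1   # -1 = no broker-imposed limit; OS limits still apply
```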
My experience with setting up Tibco infrastructure is minimal, so please excuse any misuse of terminology, and correct me where wrong.
I am a developer in an organization where I don't have access to how the backend is setup for Tibco. However we have bandwidth issues between our regional centers, which I believe is due to how it's setup.
We have a producer that sends a message to multiple "regional" brokers. However these won't always have a client who needs to subscribe to the messages.
I have 3 questions around this:
For destination bridges: https://docs.tibco.com/pub/ems/8.6.0/doc/html/GUID-174DF38C-4FDA-445C-BF05-0C6E93B20189.html
Is a bridge what would normally be used to have a producer send the same message to multiple brokers/destinations, or is there something else?
It's not clear in the documentation, if a bridge exists to a destination where there is no client consuming a message, does the message still get sent to that destination? I.e., will this consume bandwidth even with no client wanting it?
If the above is true (and messages are only sent to destinations with a consumer), does this apply to both Topics and Message Selectors?
Is a bridge what would normally be used to have a producer send the same message to multiple brokers/destinations, or is there something else?
A bridge can be used to send messages from one destination to multiple destinations (queues or topics).
Alternatively, topics can be used to send a message to multiple consumer applications. Topics are not the best solution if a high level of integrity is needed (no message loss, queuing, etc.).
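As a sketch, bridges are defined in the EMS server's bridges.conf. Something like the following (destination names hypothetical; see the linked documentation for the exact syntax) forwards every message published on one topic to a queue and to another topic, optionally filtered by a selector:

```conf
# bridges.conf — names are illustrative
[topic:T.orders.source]
  queue=Q.orders.archive
  topic=T.orders.eu selector="region='EU'"
```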
It's not clear in the documentation, if a bridge exists to a destination where there is no client consuming a message, does the message still get sent to that destination? I.e., will this consume bandwidth even with no client wanting it?
If the bridge destination is a queue, messages will be put in the queue.
If the bridge destination is a topic, messages will be distributed only if there are active consumer applications (or durable subscribers).
If the above is true (and messages are only sent to destinations with a consumer), does this apply to both Topics and Message Selectors?
This applies only to topics (when there is no durable subscriber).
An alternative approach would be to use routing between EMS servers. With routing, topic messages are sent to a remote EMS server only when a consumer is connected to that server (or when there is a durable subscriber).
https://docs.tibco.com/pub/ems/8.6.0/doc/html/GUID-FFAAE7C8-448F-4260-9E14-0ACA02F1ED5A.html
I have a HiveMQ MQTT server deployed as a Docker container (with 8 GB storage) on a CentOS machine.
Through a simulator, I am sending 10k messages sequentially on a fixed MQTT topic to this server.
It takes 5.37 s to send and 9.62 s to receive the last message on the Docker HiveMQ deployment.
The same 10k messages take 5.98 s when HiveMQ is deployed directly on the machine.
In both cases, there is a big latency issue.
The same latency, when measured with other MQTT servers, is very low: all messages are received in less than 2 s.
Please help me understand what could be the reason for so much latency with HiveMQ.
Is it possible to scale the number of inbound TCP connections into NiFi to tens of thousands? Regarding scaling NiFi with TCP connections: the docs state a maximum setting of 2.
We are expecting to handle between 10,000 and 25,000 long-running TCP connections (max connection duration would be 4 hours). Deploying multiple redundant NiFi clusters to handle the load would not be a problem.
The domain is IoT with TCP devices. Devices can only send messages over TCP. Each device sends a message every 2 minutes. We are considering moving to a NiFi cluster/containerized solution on AWS if it can scale to handle our connection load.
Any similar challenges or experiences or workarounds?
Thanks!
You can increase the max connections to whatever your NiFi instances can handle. There will be a thread reading from each connection, so a single instance would end up creating 10k threads for 10k concurrent connections. You would likely want a TCP load balancer in front of a NiFi cluster, so a 10-node cluster would give you roughly 1k connections per node.
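One quick way to get a feel for the thread-per-connection model before committing to a design is a small socket experiment: a threaded TCP listener (one thread per connection, like NiFi's listener) and a burst of concurrent clients. This is a generic sketch, not NiFi itself; host, port, and counts are illustrative:

```python
import socket
import socketserver
import threading

class AckHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Read one device message and acknowledge it.
        self.request.recv(1024)
        self.request.sendall(b"ack")

class Listener(socketserver.ThreadingTCPServer):
    # Larger accept backlog so a burst of connects isn't refused.
    request_queue_size = 128
    daemon_threads = True

server = Listener(("127.0.0.1", 0), AckHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Open many concurrent connections (100 here; scale toward the 10-25k target).
conns = [socket.create_connection(("127.0.0.1", port)) for _ in range(100)]
for c in conns:
    c.sendall(b"hello")
acks = sum(1 for c in conns if c.recv(16) == b"ack")
print(acks)  # 100
for c in conns:
    c.close()
server.shutdown()
```

Each open connection costs a thread on the server side, which is why spreading tens of thousands of connections across a load-balanced cluster matters.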
Use case: I need to send huge files (multiples of 100 MB) over a queue from one server to another.
At present, I am using an ActiveMQ Artemis server to send huge files as ByteMessages via an input stream over the TCP protocol, with the reattempt interval set to -1 for unlimited failover retries. The main problem is that the consumer endpoints' connections are mostly unstable, i.e., they disconnect from the network because of their mobile nature.
So while sending a file through the queue, if the connection is dropped and then reconnected, the broker should resume from where the transfer was interrupted. E.g., while transferring a 300 MB file to the consumer, assume 100 MB has been transferred when the connection drops; after reconnecting, the process should resume by transferring the remaining 200 MB, not the whole 300 MB again.
My question is: which protocol (TCP, STOMP, or OpenWire) and which practice (BlobMessage, or ByteMessage with an input stream) is best to achieve this in ActiveMQ Artemis?
Apache ActiveMQ Artemis supports "large" messages which are streamed over the network (using Netty TCP). This is covered in the documentation. Note: this functionality is only for "core" clients; it doesn't work with STOMP or OpenWire. It also doesn't support "resume" functionality, where the message transfer would pick up where it left off after a disconnection.
My recommendation would be to send the file in smaller chunks as individual messages, which are easier to deal with in the case of network slowness or disconnection. The messages can be grouped together with a correlation ID or similar, and the receiving client can then assemble the pieces back together once they are all received.
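A minimal sketch of that chunk-and-reassemble idea, in plain Python with dicts standing in for queue messages (field names like `chunk_index` are my own, not an Artemis API; in practice each dict would be one message with these values set as message properties):

```python
import uuid

CHUNK_SIZE = 1024 * 1024  # 1 MiB per message; tune to your network

def split_into_messages(data: bytes):
    """Split a payload into chunk messages sharing one correlation ID."""
    correlation_id = str(uuid.uuid4())
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [
        {
            "correlation_id": correlation_id,
            "chunk_index": idx,
            "chunk_count": len(chunks),
            "body": chunk,
        }
        for idx, chunk in enumerate(chunks)
    ]

def reassemble(messages):
    """Rebuild the payload once all chunks for a correlation ID have arrived."""
    messages = sorted(messages, key=lambda m: m["chunk_index"])
    assert len(messages) == messages[0]["chunk_count"], "chunks missing"
    return b"".join(m["body"] for m in messages)

# Simulated send/receive: a list stands in for the queue.
payload = bytes(3 * CHUNK_SIZE + 123)  # pretend this is the file
queue = split_into_messages(payload)   # producer side
restored = reassemble(queue)           # consumer side
print(restored == payload)             # True
```

After a disconnect, the consumer only needs to re-request the chunk indices it has not yet acknowledged, which gives you the resume behavior the broker itself does not provide.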