Increasing the number of client connections on a Mosquitto broker - Windows

I am working on a project where I want to use MQTT. My requirements are roughly 25k connected clients and a message rate of around 4,000 messages/sec. After looking at some open-source broker options, I have been running tests with Mosquitto.
I am using a tool called JMeter, which can simulate the clients and message load I need using threads.
The test machine has two Intel® Xeon® E5-2620 v3 processors (6 cores / 12 threads each, 15 MB cache, 2.40 GHz) and 9 GB of RAM, running Windows Server 2012 R2, so I have a good machine to host a broker.
To monitor my test runs I use MQTT Explorer, a tool designed to work with Mosquitto.
I have been testing with 2k clients (1k publishers sending 1,000 messages/second for 15 seconds with the payload “Hola1”, plus 1k subscribers). These are the highest numbers I could reach; every time I tried to go beyond this number of clients, Mosquitto simply died and I lost the connections.
I have been reading around the web: some people say that Mosquitto can handle up to 100k connections, and some say that you can configure the broker to support more connections, but I haven't figured out how to configure mine. Is there any guide or documentation available for this?
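For reference, the connection-related settings live in mosquitto.conf. A minimal sketch for a load-test listener (the values are assumptions to tune, not recommendations):

listener 1883              # listener that accepts MQTT clients
max_connections -1         # -1 means no limit imposed by the broker itself (the default)
allow_anonymous true       # convenient for load testing; not for production
persistence false          # avoid disk writes while benchmarking
max_queued_messages 1000   # per-client queue for QoS 1/2 messages

Note that the broker setting is usually not the real ceiling: the operating system's per-process socket/handle limits apply as well, and the stock Windows builds of Mosquitto have reportedly been limited to roughly 2048 concurrent connections by the way they poll sockets, which would match a ceiling near 2k clients regardless of configuration.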

Related

TCP connection limit/timeout in virtual machine and native macOS/ARM-based Mac gRPC Go client?

I am currently working on a gRPC microservice, which is deployed to a Kubernetes cluster. As usual, I am benchmarking and load-/stress-testing my service, testing different load balancing settings, impact of SSL and so forth.
Initially, I used my MacBook and my gRPC client written in Go and executed this setup either in Docker or directly in containerd with nerdctl. The framework I use for this is called Colima, which essentially builds on a lean Alpine VM to provide the container engine. Here I ran into issues with connection timeouts and refusals once I crossed a certain number of parallel sessions, which I suspect is a result of the container engine.
Therefore, I went ahead and ran my Go client natively on macOS. This setup somehow runs into the default 20s keepalive timeout for gRPC (https://grpc.github.io/grpc/cpp/md_doc_keepalive.html) the moment the number of parallel connections exceeds, by some margin, the amount of traffic my service can work through (#1).
When I run the very same Go client on an x86 Ubuntu 22 desktop, there are no such issues whatsoever and I can start way more sessions in parallel, which are then processed accordingly without any issues with the 20s keepalive timeout.
Any idea why this happens, and whether I could change my setup to be able to run my stress-test benchmarks from macOS?
#1: Let's say my service can process and reply to 1 request per second. For stress testing, I now start 20 parallel sessions and would expect them to be processed sequentially.
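If the 20s timeout being hit is the client-side keepalive, grpc-go exposes it through the keepalive package, and it can be relaxed when many sessions are expected to sit queued. A minimal sketch (the server address and durations are placeholders, not recommendations):

package main

import (
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/keepalive"
)

// dialWithRelaxedKeepalive opens a plaintext client connection whose
// keepalive is less aggressive than the 20s default discussed above.
func dialWithRelaxedKeepalive(serverAddr string) (*grpc.ClientConn, error) {
	kp := keepalive.ClientParameters{
		Time:                30 * time.Second, // send a ping only after 30s without activity
		Timeout:             60 * time.Second, // wait 60s for the ping ack before closing
		PermitWithoutStream: false,            // no pings while there are no active RPCs
	}
	return grpc.Dial(serverAddr,
		grpc.WithTransportCredentials(insecure.NewCredentials()), // plaintext, fine for a benchmark
		grpc.WithKeepaliveParams(kp),
	)
}

func main() {
	conn, err := dialWithRelaxedKeepalive("localhost:50051") // hypothetical address
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// ...create the generated service stubs on conn and fire the benchmark load...
}

Note that the server enforces a minimum ping interval via keepalive.EnforcementPolicy, so an overly aggressive client Time can get the connection closed with GOAWAY (ENHANCE_YOUR_CALM); client and server settings need to stay consistent.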

How many SSE connections can a web server maintain?

I'm experimenting with server-sent events (SSE) as an alternative to websockets for real-time data pushing (data in my application is primarily one-directional).
How scalable would this be? I know that each SSE connection uses an HTTP request -- does this mean that a web server can handle as many SSE connections as HTTP requests (something like this answer)? I feel as though this might be the case, but I'm not sure how an SSE connection works and whether it is substantially more complex/resource-hungry than a simple HTTP request.
I'm mostly wondering how this compares to the number of concurrent websockets a browser can keep open. This answer suggests that only ~1400-1800 sockets can be handled by a server at the same time.
Can someone provide some insight on this?
(To clarify, I am not asking about how many SSE connections can be kept open from the client; I am asking about how many can be reasonably kept open by a web server.)
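For intuition about the per-connection cost: an SSE connection is an ordinary HTTP response that is simply never closed, so each connected client holds one socket (file descriptor) plus whatever per-request state the server keeps, e.g. a goroutine and its buffers in Go. A minimal sketch of an SSE endpoint (the /events path and one-second tick are arbitrary choices for illustration):

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// sseHandler streams one timestamped event per second until the client
// disconnects. Each connected client costs one goroutine and one socket.
func sseHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")

	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}

	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-r.Context().Done(): // client closed the connection
			return
		case t := <-ticker.C:
			// "data: ...\n\n" is the SSE wire format for a single event.
			fmt.Fprintf(w, "data: %s\n\n", t.Format(time.RFC3339))
			flusher.Flush()
		}
	}
}

func main() {
	http.HandleFunc("/events", sseHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}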
Tomcat 8 and above (to take one web server as an example) uses the NIO connector for handling incoming requests. By default it can service a maximum of 10,000 concurrent connections (docs); the documentation does not say anything about an absolute maximum per se. Tomcat also provides another parameter, acceptCount, which is the fallback queue used once connections exceed 10,000.
Socket connections are treated as files. Every incoming connection to Tomcat opens a socket and, depending on the OS (e.g. Linux), is subject to the file-descriptor policy. When too many connections are open, or the maximum has been reached, you will commonly see an error like the following:
java.net.SocketException: Too many open files
You can change the number of open files by editing
/etc/security/limits.conf
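For example, entries like the following raise the open-file limit for all users (65535 is only an illustrative value; pick one that covers your expected connection count, and note it applies to new login sessions):

*   soft   nofile   65535
*   hard   nofile   65535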
It is not clear what the maximum allowed limit is. Some say the default for Tomcat is 1096, but the (default) one for Linux is 30,000, which can be changed.
In the article I shared, the LinkedIn team was able to reach 250K connections on one host.
That should give you a pretty good idea of the maximum number of SSE connections possible: it depends on your web server's max-connection configuration, OS capacity, and so on.

WebSocket vs. WebRTC server performance at large scale (1 million connections)

I saw this question about WebSocket performance. The conclusion of this question was that:
On today's systems, handling 1 million concurrent TCP connections is not an issue.
We had to demonstrate several times, to some of our customers, that 1 million connections can be reached on a single box (and not necessarily a super-monster machine)
With at least 30 GiB RAM you can handle 1 million concurrent sockets. The CPU needed depends on the data throughput you need.
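Taken at face value, 30 GiB spread across 1 million sockets works out to roughly 32 KiB of state per connection.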
I need to build a service that can connect to multiple peers at large scale. The traffic should be very minimal, mostly passing small messages between the server and the client in real time. Some connections may stay idle for a long time.
I wonder which protocol will give me better performance with fewer resources under those circumstances. I need to choose a protocol that has real-time capabilities but is also supported in web browsers, so I ended up with WebSockets and WebRTC (with WebRTC, the server would establish a DataChannel to each peer via some signaling service).
What is the performance of WebRTC at large scale compared to TCP sockets?
Can it handle a large number of connections with fewer resources than TCP sockets?

Benchmark of mosquitto clustering?

I'm trying to build a Mosquitto cluster, because Mosquitto is single-threaded and does not seem able to handle lots of QoS 2 messages.
MQTT server comparison: Benchmark of MQTT Servers
I found that Mosquitto brokers can be bridged together to form a cluster (Cluster forming with Mosquitto broker), but I'm wondering whether having each Mosquitto subscribe to every message from all the other servers causes high overhead from internal message forwarding (a sketch of such a bridge configuration follows below).
For example, if I have 10 Mosquitto brokers and each of them serves 1,000 messages, that is originally 10,000 messages in total. But messages are shared between brokers, so each message is sent to the other 9 brokers, which makes 1,000 x 9 x 10 = 90,000 messages of internal traffic.
Is there any benchmark for Mosquitto clustering? Or what is the general solution for handling lots of QoS 2 messages?
Thanks
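For reference, the bridging referred to above is configured per broker with Mosquitto's bridge options; a sketch for one peer (the connection name and address are made up, and a full mesh needs one such block per peer on every broker):

# in mosquitto.conf on broker-1
connection to-broker-2
address broker-2.example.com:1883
topic # both 0     # forward every topic in both directions at QoS 0
try_private true   # announce this connection as a bridge, which helps loop detection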
We used to run an MQTT service platform that used Mosquitto as the broker, with 8 brokers bridged together, about 20k clients subscribed to 20k topics, QoS 0, and an average publish rate of 1k messages/sec with payloads of 100-2k bytes. The bridges subscribed to and published all the topics, which introduced a huge forwarding latency, sometimes more than 2 minutes.
So now we simply broadcast all the publishes to each of the brokers, and this does work.
But a bridge is something different from a cluster: it does not behave like a single logical MQTT broker that supports cluster-wide sessions, load balancing, removal of the single point of failure, and so on.
So I implemented an autonomous Mosquitto cluster and ran some performance tests with Tsung. Generally speaking, in a scenario with 30k subscribers and 2.5k publishes/sec, payload length 744 bytes, QoS 1, the average request response time is a bit higher than with bridging (5.1 ms vs 2.32 ms), but no messages were lost and the load was balanced.
You can find the detailed test report under mosquitt-cluster-bridge-benchmark.

How many SNMP packets/sec can Windows Server 2003/2008/2012 handle?

We are monitoring more than 400 devices via SNMP. There is no limit on the number of nodes to monitor; we are licensed for unlimited nodes.
The problem is that alarms are malfunctioning, and the monitoring software team told us that Windows servers cannot handle more than 100 SNMP packets per second. Is that true?
Windows does not process the SNMP packets, it only hands them over to the monitoring software just like any other network packet. To say that Windows cannot handle 100 SNMP packets per second is saying that Windows cannot handle 100 packets of any kind per second.
That does not mean it is impossible for Windows to be the weakest link, but there are other more likely bottlenecks:
Your server hardware (mostly CPU and the network interface).
Your network (cabling, routers, switches, VPN connections, proxies, ...).
The devices you are monitoring. Devices like IP phones, printers, etc. do not have a lot of processing power and may not be able to keep up with the SNMP requests from the server.
The monitoring software itself.
