ULTIMATE GOAL: publish 100 "heartbeat" messages/second over MQTT using Mosquitto as a broker.
To send a single heartbeat, I can easily do mosquitto_pub -t "ems/heartbeat" -m 0.
I am aware of the watch utility on Unix systems, but it is not fast enough for the goal. My first approach was to scale this up with while sleep 0.01; do mosquitto_pub -t "ems/heartbeat" -m 0; done. By subscribing to the "ems/heartbeat" topic (mosquitto_sub -t "ems/heartbeat"), I am fairly sure that far fewer messages are published to the topic than the 100 expected in a 1-second timeframe. So here is my question: how can I run a shell script - the heartbeat above - 100 times a second, or even better, how can I publish 100 messages/sec on a certain MQTT topic?
The title might seem a bit misleading compared to my ultimate goal, but finding a way to run a shell script 100 times/sec should do the trick. If there are different ways to tackle the problem, they are of course welcome! Thanks!
EDIT & ADDITIONAL INFORMATION:
The receiver of these messages is just a microcontroller that needs to check that the connection with the Electronic Monitoring System (a laptop) is alive. The need for 100 messages/sec comes from the fact that the micro is controlling high-speed actuators, and when a connection loss occurs, everything needs to go into a safe state.
The basic assumption is that the 100 messages are spread over a 1-second time span and are coming from a single entity, the EMS.
By running the time command I get the output below, so there is no way to use mosquitto_pub to send 100 messages a second, as pointed out.
time mosquitto_pub -t "ems/heartbeat" -m 0
real 0m0.039s
user 0m0.006s
sys 0m0.011s
The short answer is don't.
Use a proper load-generating tool, e.g. JMeter has MQTT support.
Otherwise use a proper MQTT client library, connect once, and then just publish in a loop. That way you don't have the overhead of setting up and tearing down a new connection to the broker for each message (which is what driving mosquitto_pub this way does).
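As a sketch of that approach, here is a minimal Python rate-limited publish loop. The timing logic is generic; the commented paho-mqtt usage at the bottom is an assumption (any MQTT client library with a persistent connection would do):

```python
import time

def publish_at_rate(publish, rate_hz, duration_s):
    """Call publish() rate_hz times per second for duration_s seconds,
    scheduling against a fixed timeline so timing errors don't accumulate."""
    interval = 1.0 / rate_hz
    next_t = time.monotonic()
    end_t = next_t + duration_s
    count = 0
    while next_t < end_t:
        publish()
        count += 1
        next_t += interval                  # next slot on the fixed timeline
        delay = next_t - time.monotonic()
        if delay > 0:
            time.sleep(delay)
    return count

# Hypothetical usage with paho-mqtt (connect once, publish many times):
# import paho.mqtt.client as mqtt
# client = mqtt.Client()
# client.connect("localhost", 1883)
# client.loop_start()
# publish_at_rate(lambda: client.publish("ems/heartbeat", "0"), 100, 60)
```

Scheduling against a fixed timeline (rather than sleeping a constant 0.01s after each publish) keeps the average rate at 100/sec even though each publish call takes nonzero time.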
JMeter tests are run in master-slave fashion with around 8 slave machines. However, with the remote batching mode set to MODE_STRIPPED_BATCH, I am not able to run tests for more than 64 hours. Throughput is around 450 requests per minute, and per slave machine this results in the creation of jtl files that are around 1.5 GB. All 8 slaves are going to send this to the master (1.5 GB x 8), and probably the I/O gets too much for the master to handle. The master machine has 16 GB of RAM and around 250 GB of disk storage. I was wondering if the JMeter distributed architecture has any provision to make long-running soak tests possible without any unexplained stress on the master machine. Obviously I have the option to abandon the master-slave setup and go for 8 independent nodes; however, in that case I'll run into complications with serving data CSV files (which I currently serve using the Simple Table Server plugin from the master) and with aggregating result files. Any suggestions please. It would be great to be able to run tests for at least around 4 days (96 hours or so).
I would suggest to go for an independent JMeter workers + external data collector setup.
Actually, JMeter's right-out-of-the-box "distributed scaling" abilities are weak, way outdated and overall pretty ridiculous - as are its data collection/aggregation/processing abilities.
This situation actually puzzles me a lot - mind you, rivals are even worse, so there's literally NOTHING in the field (except for, perhaps, some SaaS solutions trying to monetize on this gap).
But it is what it is...
So that's about why-s, now to how-s.
If I were you, I would:
Containerize the JMeter worker
Equip each container with a watchdog to quickly restart the worker if things go south locally (or probably even on a schedule to refresh it entirely). Whether it's an internal one or an external one like cloud services have doesn't matter.
Set up a timeseries database - I recommend InfluxDB, it's an excellent product & it's free in basic version (which is going to be enough for your purposes).
Flow your test results/metrics into that DB - do not collect them locally! You can do it right from your tests with a pretty simple custom listener (the Influx line protocol is ridiculously simple & fast), or you can have an external agent watching the result files as they are written. I just suggest you not use the so-called Backend Listener for the job - it's garbage, it won't shape your data right, so you'd have to do additional ops to bring it into order.
If you shape your test result/metrics data properly, you get it already time-synced into a single set - and the further processing options are amazingly powerful!
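To illustrate how simple the line protocol is, here is a hedged Python sketch that formats one measurement point (simplified: the real protocol also has escaping rules and type suffixes such as `i` for integers, and the measurement/tag/field names below are purely illustrative):

```python
def influx_line(measurement, tags, fields, ts_ns):
    """Format one InfluxDB line-protocol point:
    measurement,tag1=v1,... field1=v1,... timestamp_ns"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# One JMeter sample as a point (names are assumptions, not a standard schema):
line = influx_line("jmeter", {"label": "login"}, {"elapsed": 42},
                   1700000000000000000)
# Batches of such lines can then be POSTed to InfluxDB's write endpoint.
```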
My expectation is that you're looking for the StrippedAsynch sampler sender mode.
As per the documentation:
Asynch
samples are temporarily stored in a local queue. A separate worker thread sends the samples. This allows the test thread to continue without waiting for the result to be sent back to the client. However, if samples are being created faster than they can be sent, the queue will eventually fill up, and the sampler thread will block until some samples can be drained from the queue. This mode is useful for smoothing out peaks in sample generation. The queue size can be adjusted by setting the JMeter property asynch.batch.queue.size (default 100) on the server node.
StrippedAsynch
remove responseData from successful samples, and use Async sender to send them.
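The queueing mechanism described above can be sketched in Python: a bounded queue drained by a worker thread, with producers blocking when the queue is full. This is an illustration of the idea, not JMeter's internals; queue size and names are placeholders:

```python
import queue
import threading

def bounded_async_sender(send, maxsize=100):
    """Return (put, close). put() enqueues a sample, blocking when the
    queue is full (the role asynch.batch.queue.size plays); a worker
    thread drains the queue and calls send() on each sample."""
    q = queue.Queue(maxsize=maxsize)

    def worker():
        while True:
            sample = q.get()
            if sample is None:          # sentinel: shut down the worker
                break
            send(sample)

    t = threading.Thread(target=worker, daemon=True)
    t.start()

    def put(sample):
        q.put(sample)                   # blocks if the queue is full

    def close():
        q.put(None)
        t.join()

    return put, close
```

The producing thread only stalls when samples arrive faster than they can be sent for long enough to fill the queue, which is exactly the smoothing behaviour the documentation describes.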
So on slave node add the following line to user.properties file:
mode=StrippedAsynch
and also define asynch.batch.queue.size (per the documentation above, it is read on the server node), high enough not to impact JMeter's throughput and low enough not to overwhelm the machine. I would start with 1000.
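Putting the two settings together, a user.properties fragment on each slave (server) node would look like this (1000 is the suggested starting point above, not a tuned value):

```
# user.properties on each slave (server) node
mode=StrippedAsynch
asynch.batch.queue.size=1000
```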
Another option is using StrippedDiskStore, but you will have to manually collect the serialized results after test completion (and make sure the slave processes do not shut down, because the results are deleted when a slave process finishes).
You could use JMeter PerfMon Plugin to monitor memory and network usage on master and slaves.
I'm trying to stress test my Spring Boot application, but when I run the following command, ab just pushes until it finds the maximum my application can handle. What I need is to check whether my application can hold a specific number of requests per second.
ab -p req.json -T application/json -k -c 1000 -n 500000 http://myapp.com/customerTrack/v1/send
The requests per second reported by the above command is 4000, but actually a lot of records are buffered in my application, which means it can't sustain that many rps. Could anyone tell me how to set a specific requests-per-second rate in the ab tool? Thanks!
I don't think you can get what you want from ab. There are a lot of other tools out there.
Here's a simple one that might do exactly what you want.
https://github.com/rakyll/hey
For rate limiting to 100 requests per second the below command should work.
hey -D req.json -T application/json -c 1000 -q 100 -n 500000 http://myapp.com/customerTrack/v1/send
Apache Bench is a single-threaded program that can only take advantage of one processor on your client's machine. In extreme conditions, the tool can misrepresent results if the parameters of your test exceed the capabilities of the environment it runs in. According to your description, the rps has already reached your hardware's limit.
A lot of records are buffered in my application which means it can't hold that much rps
It is very hard to control requests per second on a single machine.
You can find better performance testing tools here: HTTP(S) Benchmark Tools.
If you have budget, you can try goad, an AWS Lambda powered, highly distributed load testing tool built in Go for the 2016 Gopher Gala. Goad allows you to load test your websites from all over the world while costing you the tiniest fraction of a penny by using AWS Lambda in multiple regions simultaneously.
Is there any way to achieve high TPS while making minimal connections using LoadRunner?
I am using Java protocol to test MQ.
The current scenario achieves 30 TPS with a load of 15 Vusers.
Is there any way to use 2 or 3 Vusers and achieve 30 TPS?
My scenario looks like this,
init()-- Make connection to Qmgr
Action()-- sending message and getting the response
End()--- closing the connection.
So you're saying that currently each virtual user can only achieve 2 TPS.
If you have more than one iteration defined in your run time settings, then the 'Action' should be looping and reusing the current connection. If you're already doing this then that is as fast as you can go with a single thread.
Ensure the script is correctly re-using the connection within Action().
Otherwise the only way to speed things up is to optimise the code of the script.
Ensure that the messages aren't consumed too fast; I've found that trying to read from an empty IBM MQ can cause Vusers to stall.
I am running a JMeter load test for my application. I can successfully run a login-search-logout test for 500 users with a ramp-up of 150. I am unable to run a test for any higher number of users (800 users with 240 ramp-up, or 1000 with 300). I tried increasing the ramp-up time too. I don't see any system errors, nor do I see any connection pool errors. Even the JMeter log looks normal. However, my test gets killed once the full number of users is reached (for 800 users, the test is killed at 800). Any suggestions as to what I can check for? Thank you
As mentioned by CodeChimp and tried by you already, the first thing to check is the heap size.
Also verify with a smaller number of users.
You could also try to start two JMeter instances (probably with different user ID ranges?) instead of one.
If the problem still persists, there could be two possibilities:
1. Unable to send so many requests from client side.
2. Server(being tested) is unable to handle that many requests.
To verify if the problem is with client:
Check the ulimit value on the amazon instance. "ulimit -a".
Specifically, check if the max user processes value (ulimit -u) is higher than the maximum number of threads (users) you are trying to run from the Amazon instance.
Check if the client has ramped up the required number of users:
(In the case of jmeter, the number of threads is same as number of users.)
ps uH p <jmeter_pid> | wc -l (this should be equal to the number of users you have given in the script)
The JMeter script itself might be logging out the users by the time the other users are ramped up. Better to verify by running the thread group in a loop (maybe loop forever?).
Make sure your script is configured to "continue" after an error occurs. I think this can be configured at thread group level.
I want to load test a URL by hitting it a few hundred times in the same millisecond. I tried JMeter, but I could only hit 2 requests in the same millisecond. This seems to be because my machine can't create threads fast enough. Is there any solution to this issue?
In JMeter you can use a Synchronizing Timer set to 100; this way all threads will wait until there are 100 available and then hit the server:
http://jmeter.apache.org/usermanual/component_reference.html#Synchronizing_Timer
Another solution is to increase the number of threads so that you reach this throughput.
In the next version (2.8) of JMeter you will be able to create threads on demand (created once needed).
Anyway, hitting a few hundred times in the same millisecond is a high load, so you will have to tune JMeter correctly.
Regards
Philippe
JMeter uses a blocking HTTP client; in order to hit the server at the exact same time with 100 requests, you need 100 threads in JMeter. Even given that, you still don't have 100 cores to actually run such code at the same time. And even if you had 100 cores, it takes some time to start a thread, so you would have to start them in advance and synchronize them on some sort of barrier. That is not supported in JMeter.
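For what it's worth, the barrier idea can be sketched outside JMeter in a few lines of Python (the action and thread count are placeholders): threads are started in advance and then released together.

```python
import threading

def fire_burst(n, action):
    """Start n threads, hold them at a barrier, then release them all
    at once so each calls action() as close to simultaneously as the
    OS scheduler allows."""
    barrier = threading.Barrier(n)
    results = []
    lock = threading.Lock()

    def worker():
        barrier.wait()                  # all n threads released together
        r = action()
        with lock:
            results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

As the answer notes, even this only gets the requests close together: true simultaneity is still limited by core count and scheduling.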
Why do you really want to hit your server "at the same millisecond"? An ordinary load test just calls the server with as many connections as possible, but not necessarily at the same time. Moreover, sometimes you even add random sleeps between requests to simulate so-called think time.
As per Philippe's answer, JMeter does in fact support synchronous requests. But maybe for what you want something like Apache Bench using -c100 (or tune it to whatever works) is a better option? It's pretty basic stuff but then the overhead is a lot smaller which might help in this situation.
But I would also steal from Tomasz's answer and echo his concern that perhaps this is not really the best way to approach load testing. If you're trying to replicate real life traffic then do you really need such a high level of concurrency?
You need to use jmeter-server and a host of client machines for load generation. A single machine is not enough to generate the load by itself.