Is there any limit on the TPS in jPOS?

I am using JMeter to post ISO XML messages to port 8000, and the same is read using XMLChannel.
Once a message is received, I append a few fields and put the message into the context.
But the problem is that I cannot go beyond 100 TPS on my Windows XP machine. Are there any other settings to be done to increase the TPS?

jPOS per se is very fast; a simple auto-responder can get you over 4000 TPS of sustained load (I get 4,400 on my regular MBP). But once you add your business logic and database access, your mileage may vary. Take a look at http://jpos.org/tutorials and try the Gateway example; that one is quite fast.
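For context, the kind of auto-responder the answer mentions can be as small as an ISORequestListener that clones the incoming message, flips the MTI to a response and approves it. The sketch below is my own minimal illustration, not code from the jPOS tutorials; the class name and the assumption that it is wired as a request listener of the server component owning the XMLChannel on port 8000 are mine.

```java
import org.jpos.iso.ISOMsg;
import org.jpos.iso.ISORequestListener;
import org.jpos.iso.ISOSource;

// Hypothetical auto-responder: echoes each request back as an approved response.
// Typically registered as a request listener on the server that owns the channel.
public class AutoResponder implements ISORequestListener {
    @Override
    public boolean process(ISOSource source, ISOMsg request) {
        try {
            ISOMsg response = (ISOMsg) request.clone();
            response.setResponseMTI();   // e.g. 0200 -> 0210
            response.set(39, "00");      // approval / response code
            source.send(response);
        } catch (Exception e) {
            e.printStackTrace();         // real code would use jPOS logging
        }
        return true;                     // request handled
    }
}
```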

Related

Gatling performance testing: TPS is much lower than JMeter's TPS

I am currently using JMeter for API performance testing, but recently I started to look into Gatling as a potential replacement for JMeter. Below is the PoC I'm doing for Gatling, but I notice the performance results are very different.
Setup:
We hit an HTTPS endpoint with 10 concurrent users for 60 seconds.
Results:
JMeter: 10 threads (no ramp-up), 60 seconds
Result: 150 TPS
Gatling: 10 concurrent users, also 60 seconds
Result: 27 TPS (cnt/s?)
Question:
First, I want to confirm the Gatling terminology: in the Gatling results chart I see a column named "mean cnt/s"; when I hover over it, it says "count of events per second". I imagine that's the same thing as JMeter's TPS?
JMeter:
summary + 2386 in 00:00:16 = 153.1/s Avg
Gatling:
Mean cnt/s: 26.652
If the above assumption is correct, can someone share some insight on why Gatling's number is so much lower than JMeter's?
Thank you!
Gatling: 10 concurrent users, also 60 seconds
Do you understand what this does?
This is going to spawn a new user every time an existing one finishes, and hence create new connections. Assuming it takes 100 ms for a virtual user to complete the scenario, you're going to spawn 10 * 10 * 60 = 6,000 virtual users and as many connections.
Is that really what you want and is it the same thing as you do with JMeter?
If you actually want the same 10 users to loop for 60 seconds, you have to inject atOnceUsers(10) and add a during(60) loop in your scenario; see the sketch after the links below.
https://gatling.io/docs/gatling/reference/current/core/injection/#open-model
https://gatling.io/docs/gatling/reference/current/core/scenario/#loop-statements
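As an illustration of that suggestion, here is a minimal sketch using Gatling's Java DSL (available since Gatling 3.7; the linked docs show the Scala form). The simulation class name, base URL and request path are placeholders I made up, not values from the original question.

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

public class FixedUsersSimulation extends Simulation {

  HttpProtocolBuilder httpProtocol = http.baseUrl("https://example.com"); // placeholder endpoint

  // The same 10 virtual users loop for 60 seconds, instead of a new user
  // (and a new connection) being spawned for every completed iteration.
  ScenarioBuilder scn = scenario("fixed 10 users")
      .during(60).on(                        // loop each virtual user for 60 seconds
          exec(http("request").get("/api"))  // placeholder request
      );

  {
    setUp(scn.injectOpen(atOnceUsers(10))).protocols(httpProtocol);
  }
}
```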
Many things can cause deviations.
I assume you use the same setup for both in terms of load generator and target instance. You can start with a fixed number of requests first:
use loops in JMeter and repeat in Gatling,
sending, for example, 60 x 10 = 600 requests in total.
Gatling will be able to generate a much higher load than JMeter if properly used.

How much load is it?

I have tried, but I have a doubt about whether the below-mentioned specification is equivalent to a load of 4000 or not:
number of threads: 100,
ramp-up period: 10 seconds,
loop count: 40.
Which is equal to how much load?
You are loading 100 concurrent threads; the loops just add more execution time.
So it isn't equivalent to 4000 concurrent threads hitting your server.
I don't know what you mean by a "4000 load"; your test will send 100 threads x 40 loops = 4,000 requests for each Sampler in your Thread Group, as fast as it can. The actual test duration will depend on your application's response time, but will not be less than 10 seconds.
You might want to take a look at the Transactions per Second and Server Hits per Second charts to see how many requests your configuration delivers; both charts can be installed using the JMeter Plugins Manager.
Also, you can generate an HTML Reporting Dashboard, which will have a consolidated aggregate view of your test results.

How to get a high RPS with JMeter when load testing an HTTPS endpoint

I'm trying to test my HTTPS endpoint with JMeter. I want to make at least 10,000 requests per second, but when I set the number of threads to 10,000 I get far fewer RPS, around 500.
I've tried setting the number of threads to 1,000 and to 100; surprisingly, I get the same number of RPS. I'm using the HTTP Sampler and "Use Keep-Alive" is set to true. When I look at the statistics I see that, when using 100 threads, it makes use of Keep-Alive and connect_time is around 100 ms, but when the number of threads is higher, connect_time grows; it's as if it stops reusing the connections.
I know this isn't a server issue, because I've tried testing the same endpoint with Yandex.Tank and phantom, and it can easily sustain 10,000 requests per second. The problem is that it can't use response data to make further requests, which is why I have to use JMeter for this task.
This can be done by using the Stepping Thread Group plugin. It will allow you to send 10,000 requests per second for up to a specified time. Refer to the screenshot below.
[Screenshot: Stepping Thread Group configuration]
Download the JAR from the link below:
https://jmeter-plugins.org/wiki/SteppingThreadGroup/
I assume you are trying to achieve this using one machine. Try with multiple machines, or with JMeter's distributed mode:
https://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.pdf
https://www.blazemeter.com/blog/how-to-perform-distributed-testing-in-jmeter/
https://blazemeter.com/blog/3-common-issues-when-running-jmeter-scripts-and-how-solve-them/
I am assuming that the issue is with the machine, which is not able to generate that much load. Usually I have used at most 300 threads per machine, but it depends on the machine configuration. Just check whether the machine is the problem and whether multiple machines are able to generate more load, assuming the server itself does not have any issue.
Hope this helps.
Update: usually 200-500 threads can be handled by modern machines.
Please check the links below for some more info:
1. How do threads and number of iterations impact a test, and what is JMeter's max thread limit?
2. https://www.blazemeter.com/blog/what%e2%80%99s-the-max-number-of-users-you-can-test-on-jmeter/
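Beyond the answers above, a quick back-of-envelope check (my own illustration, not part of any answer) explains the plateau described in the question: sustained throughput is bounded by the number of busy threads divided by the response time (Little's Law), so once connect and response times grow, adding threads stops increasing RPS. The 200 ms response time below is a hypothetical figure.

```java
// Illustrative Little's Law arithmetic: throughput <= activeThreads / responseTime.
public class LittleLawEstimate {
    public static void main(String[] args) {
        double avgResponseSeconds = 0.2;   // hypothetical average response time (200 ms)

        // Ceiling for a fixed number of threads
        int threads = 100;
        System.out.printf("%d threads at %.0f ms can deliver at most ~%.0f requests/second%n",
                threads, avgResponseSeconds * 1000, threads / avgResponseSeconds);

        // Threads needed to sustain a target rate
        double targetRps = 10_000;
        System.out.printf("Sustaining %.0f requests/second needs at least ~%.0f busy threads%n",
                targetRps, targetRps * avgResponseSeconds);
    }
}
```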

How to limit requests per second in the Apache Benchmark tool

I'm trying to stress test my Spring Boot application, but when I run the following command, what ab does is try to report the maximum my application can hold. What I need instead is to check whether my application can hold a specific number of requests per second.
ab -p req.json -T application/json -k -c 1000 -n 500000 http://myapp.com/customerTrack/v1/send
The requests-per-second figure given by the above command is 4000, but actually a lot of records get buffered in my application, which means it can't really hold that much RPS. Could anyone tell me how to set a specific requests-per-second rate in the ab tool? Thanks!
I don't think you can get what you want from ab. There are a lot of other tools out there.
Here's a simple one that might do exactly what you want.
https://github.com/rakyll/hey
Note that hey's -q flag is a rate limit in queries per second per worker, so the total rate is roughly the -c worker count multiplied by -q. To stay around 100 requests per second overall, something like the command below should work (ab's -p implies POST while hey defaults to GET, so -m POST is added as well):
hey -m POST -D req.json -T application/json -c 10 -q 10 -n 500000 http://myapp.com/customerTrack/v1/send
Apache Bench is a single-threaded program that can only take advantage of one processor on your client's machine. In extreme conditions, the tool could misrepresent results if the parameters of your test exceed the capabilities of the environment the tool is running in. According to your description, the RPS has already reached your hardware's limitation.
"A lot of records are buffered in my application which means it can't hold that much rps"
It is very hard to control requests per second on a single machine.
You can find better performance testing tools here: HTTP(S) Benchmark Tools.
If you have the budget, you can try goad, which is an AWS Lambda-powered, highly distributed load-testing tool built in Go for the 2016 Gopher Gala. Goad allows you to load test your websites from all over the world whilst costing you the tiniest fractions of a penny, by using AWS Lambda in multiple regions simultaneously.

Web service testing using JMeter

I am new to JMeter and I have been tasked to do a POC where I need to load test a web service. I learnt the basics, like adding the test plan, adding threads, and adding a SOAP/RPC Request sampler, and I got the response as well. But I am not sure how to achieve the scenario below using JMeter.
I need 600 users to hit the service per request/second (this should run for 10 minutes), and the second scenario is about 2000 users hitting the service at 5 requests/second (again, this should run for 10 minutes).
Also, would it be possible for JMeter to handle this many threads/users?
Any inputs would be deeply appreciated.
Given you properly configure JMeter, it shouldn't be a problem to simulate 2k users; actually, you may need more, because if your web service's response time exceeds 1 second, you won't be able to achieve 2k requests per second.
Configuring JMeter:
Run the test in non-GUI mode
Disable all Listeners (if any)
Increase the JVM heap size
See 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure for detailed explanations and instructions.
Simulating 600 / 2000 requests per second:
Set "Loop Count" to Forever or -1 in the Thread Group
Tick the "Scheduler" box and set the desired duration (600 seconds)
Add a Constant Throughput Timer and specify the desired throughput in requests per minute (the conversion is sketched below)
It is recommended to use the HTTP Request sampler for web services testing; you can set the Content-Type and SOAPAction headers using an HTTP Header Manager.
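As a small illustration (not part of the original answer), the Constant Throughput Timer expects its target in samples per minute, so the 600 and 2000 requests-per-second figures from the answer translate as shown below; the code only restates that arithmetic.

```java
// Illustrative only: convert requests/second targets into the "samples per minute"
// value that JMeter's Constant Throughput Timer expects.
public class ConstantThroughputTimerValues {
    public static void main(String[] args) {
        int[] targetRps = {600, 2000};          // targets discussed in the answer
        for (int rps : targetRps) {
            int samplesPerMinute = rps * 60;    // 600 rps -> 36,000; 2,000 rps -> 120,000
            System.out.println(rps + " requests/second = " + samplesPerMinute + " samples/minute");
        }
    }
}
```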
