Limitations when using k6 (Load Impact) for load testing on APIs - performance

I ran a few tests using k6 (OSS) by Load Impact and found it great in terms of usability compared to JMeter.
I am doing a feasibility study to choose a load testing tool for API testing. I am inclined towards k6 because I believe it is developer-friendly, but I could not find resources that advise on the maximum load I can simulate using k6.
Would it be possible to simulate 1 million RPS (requests per second) using k6? If so, how should I go about achieving this?

In theory, yes: if you use multiple k6 instances, you can achieve however many requests per second you want. A single k6 instance can produce anywhere from thousands to tens of thousands of requests per second, depending on many different factors: machine specs, script complexity, number of VUs, sleep times, network conditions, etc.
Right now k6 doesn't have a native distributed execution mode, though, so you'd have to schedule the different instances yourself. There's a REST API (https://docs.k6.io/docs/rest-api), and you can output metrics to a centralized collector like InfluxDB (https://docs.k6.io/docs/results-output), but it would take some work to execute a single test on multiple machines. A native k6 distributed execution mode is planned, but work on it hasn't started yet.
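To get a feel for what a single instance can do, a minimal probe script along these lines is one way to measure your per-machine ceiling before planning instance counts. The URL, VU count, and duration below are placeholder assumptions, not recommendations:

```javascript
// rps-probe.js -- a rough single-instance throughput probe.
// The target URL, VU count, and duration are placeholder assumptions.
import http from "k6/http";

export let options = {
  vus: 1000,        // concurrent virtual users on this one instance
  duration: "60s",  // measure sustained RPS over a minute
};

export default function () {
  // No sleep() here, so each VU issues requests back-to-back and the
  // reported http_reqs rate approximates this machine's ceiling.
  http.get("https://test.example.com/api/health");
}
```

If one instance tops out at, say, 20,000 RPS with your script, reaching 1 million RPS would mean scheduling on the order of 50 such instances yourself.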

You can run k6 on the Load Impact (https://loadimpact.com) cloud (Cloud Execution mode) to get multiple k6 instances executing in parallel. Then, as noted, you can generate a large number of requests per second, with the achievable RPS being highly dependent on your script and other factors.

Related

Regarding the Chrome and FF multi-thread (process) model, how do I do a load test with the web protocol?

I wonder if someone has solved the issue of browser multi-threading with a request/response script for load testing.
If you are going to use real Chrome and FF browsers for load testing, you can consider the following options:
Selenium Grid
Selenium Grid allows you to run your tests on different machines against different browsers in parallel; that is, running multiple tests at the same time against different machines running different browsers and operating systems. Essentially, Selenium Grid supports distributed test execution.
Apache JMeter with the WebDriver Sampler plugin. This way you will be able to control concurrency and get performance metrics in the form of the HTML Reporting Dashboard.
Simulating web browser concurrency is something most load testing tools do very badly, if at all. We have tried to address this in k6 by letting each VU (virtual user) use multiple, concurrent TCP connections to fetch things in parallel. There is a special API function, http.batch(), that enables this functionality: it accepts multiple URLs as input parameters and fetches as many as possible in parallel.
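As a rough illustration of that API (the URLs below are placeholders), each VU can fetch a page and its assets in parallel like this:

```javascript
import http from "k6/http";
import { sleep } from "k6";

export default function () {
  // Fetch the page and its embedded assets over concurrent connections,
  // roughly the way a real browser parallelizes resource loading.
  http.batch([
    "https://test.example.com/",
    "https://test.example.com/static/app.css",
    "https://test.example.com/static/app.js",
    "https://test.example.com/static/logo.png",
  ]);
  sleep(3); // think time before the next "page load"
}
```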
Like Dmitri writes, JMeter has a plugin that provides concurrency - sort of. However, what it actually does (unless I'm misinformed) is spread out requests over multiple VUs. Each VU will still only use one concurrent connection, which means that if you want to simulate, say, 100 real browser users, you may need to fire up 1,000 VUs to do so realistically. This is not very different from what you would get with any other load testing tool, in my opinion: all of them allow you to start more VUs to create more concurrency and more traffic.
I'd say that apart from k6 and maybe one or two other tools (and JMeter is not one of them), your only option if you want to really simulate the way browsers behave is to use Selenium Grid or something similar to fire up a large number of real, actual browsers to do the load testing. The drawback is that real browsers are very expensive to run: they want lots of CPU and memory. But they provide by far the best browser "simulation".

Performance testing: JMeter vs Tsung

What are the differences between JMeter and Tsung? I have read that Tsung may generate more load than JMeter on the same hardware, but how close is that to reality?
Tsung is written in Erlang and is said to be capable of running an extreme number of simultaneous users (10,000+).
JMeter is written in Java and is quite capable of generating large amounts of load, assuming your test plan is good.
Here are some performance-related limitations in JMeter:
Every user in JMeter is an OS thread. This increases overhead when using a lot of concurrent users (the JMeter best practices, http://jmeter.apache.org/usermanual/best-practices.html, recommend using a low number of threads; in my experience you may run into problems when using more than 1,000 threads per JMeter instance, but that varies a lot depending on your test plan). You will also need to tweak JVM settings when running larger tests.
You can easily destroy performance and run out of PermGen memory if you have dynamic scripts (scripts that have to be recompiled every time because of JMeter variable expansion). Put your scripts in separate files or use a compilation cache key to avoid recompilation.
Using some test components (such as the View Results Tree listener, which keeps every request and response in memory) can wreak havoc on your load generator.
I've tested some VERY large sites with JMeter, and as long as you are able to reduce the number of threads (reducing user waits to keep throughput at your desired level), JMeter is fine. There is quite a large community around JMeter, with plugins for load testing a lot of protocols and for monitoring various systems. JMeter also has good scripting support - Java, JavaScript, basically anything that can be loaded into a JVM (including, for example, Groovy) - so it is very extensible.
At one time (using JMeter 2.6, I think) I ran some 30,000 database requests per second (Oracle JDBC) from a single load generator, and there have been optimizations since, so as long as you don't have extreme requirements, JMeter is fine. Pick the one that suits your needs and your experience.
Note: I have very little experience using Tsung.
Edit: Nowadays I use Locust (https://github.com/locustio/locust/). Tsung has not been updated since 2017, and Locust's user/threading model (greenlets) supports more concurrent users than JMeter. But most importantly, it has a more flexible workflow (actual Python code instead of configuration/clicking in a GUI).
It always depends on your scenario and the amount of data used for variability. The ratio is close to ten: where you are able to run 100 users/second with JMeter, Tsung will easily handle 1,000 users/second.

Running JMeter for 62,000 users

Our client asked us to run a stress test on their web application simulating 62,000 users (threads). The test consists of 13-15 HTTP requests with a 1-second delay between every HTTP request, and it should run for 10.5 hours continuously.
I have previous experience with JMeter running up to 10,000 users, but I have not tried a larger number.
Is there a limit to the number of threads that JMeter can handle, or is this limited by the hardware of the test server?
Agreed with Ray: when you run JMeter with this many threads, distributed testing is the best option, and the hardware and network must be capable of handling this kind of load. If you want a shortcut, go for http://blazemeter.com; they scale from 1,000 to 100,000 concurrent users.
In principle it is limited by several factors, including the configuration (CPU/memory) of your JMeter machine. That said, it is a VERY large number of threads for just one JMeter instance. To be honest, I wouldn't run even 10,000 threads on one machine. You might want to look into using JMeter distributed; see the manual (http://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.pdf).
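For a back-of-the-envelope sense of scale (the per-generator thread limit and timing figures below are assumptions, not JMeter facts), the distributed setup might size out roughly like this:

```javascript
// Back-of-the-envelope sizing for a 62,000-user distributed JMeter test.
// threadsPerGenerator is an assumed comfort limit, not a hard JMeter cap.
const totalUsers = 62000;
const threadsPerGenerator = 1000;
const generatorsNeeded = Math.ceil(totalUsers / threadsPerGenerator); // 62

// Approximate aggregate request rate, assuming each user completes one
// request per (1 s think time + ~1 s response time) -- both assumptions.
const secondsPerRequest = 1 + 1;
const aggregateRps = totalUsers / secondsPerRequest; // 31,000 req/s
console.log(generatorsNeeded, aggregateRps);
```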
Besides running the script remotely, you'll need to configure your script for minimal CPU and memory usage.
Use the JMeter best practices and JMeter tuning tips to get there.

How many threads/users can one Windows client simulate during my load test?

I'm planning to do a load test of our ASP/.NET web application and need to simulate about 600 concurrent users on our system.
Initially we'll just be running the load test tools (probably JMeter or WCAT/WAST) from our personal workstations, which are Windows 7 32-bit Dells (dual-core processors). I was wondering how many users I can expect to be able to simulate from one client.
If I can easily do 200 users per client, I'll need to identify 2-3 more clients for the test.
I wanted to ask the community, based on their experience, how many users I should expect per client on a standard Windows box.
Any help is appreciated!
This depends highly on the test plan itself and cannot be answered that easily.
If, for example, you have 500 users that each do one request and then wait on a timer for five minutes, this should work. If all users constantly issue requests without waiting, this will put much more load on your machine.
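To make that concrete, here is a quick calculation with illustrative numbers (the server response time is an assumption):

```javascript
// Effective client-side request rate for the two cases above.
const users = 500;

const thinkTimeSeconds = 5 * 60;           // five-minute wait between requests
const idleRps = users / thinkTimeSeconds;  // ~1.7 req/s -- trivial for one box

const responseTimeSeconds = 0.2;             // assumed ~200 ms server response
const busyRps = users / responseTimeSeconds; // 2,500 req/s -- a very different load
console.log(idleRps, busyRps);
```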
It depends on the samplers in use. HTTP requests are less costly than SOAP requests, for example.
It also depends on the listeners you have active.
For a normal load test I usually have around 100-300 threads active. I would suggest starting with such a number and monitoring the load (CPU, network) on your client to see how much headroom there is.
Without more details about the test scenarios and the hardware, it is hard to give specific answers. But our Load Tester product can (usually) handle this level of users pretty easily on a single machine (assuming relatively modern hardware). The testing tool should scale linearly up to a point, so you should be able to get a good estimate by running 50 users through a scenario similar to what you expect to test.
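One way to apply that linear-scaling advice, with all figures below being hypothetical placeholders for your own measurements:

```javascript
// Linear extrapolation from a hypothetical 50-user calibration run.
// Replace the observed figures with your own measurements.
const calibrationUsers = 50;
const cpuAtCalibration = 0.07;  // 7% client CPU observed -- hypothetical
const targetUsers = 600;

const projectedCpu = cpuAtCalibration * (targetUsers / calibrationUsers); // 0.84

// Keep a safety margin: past ~70% projected client CPU, add machines.
const clientsNeeded = Math.ceil(projectedCpu / 0.7); // 2 clients here
console.log(projectedCpu, clientsNeeded);
```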

Why the difference in output when using JMeter for load testing vs. HP LoadRunner?

Here is the scenario:
We are load testing a web application. The application is deployed on two VM servers with a hardware load balancer distributing the load.
There are two tools used here:
1. HP LoadRunner (an expensive tool).
2. JMeter - free.
JMeter was used by the development team to test with a huge number of users. It also does not have licensing limits like LoadRunner does.
How are the tests run?
A URL is invoked with some parameters; the web application reads the parameters, processes the results, and generates a PDF file.
When running the test, we found that for a load of 1,000 users spread over a period of 60 seconds, our application took 4 minutes to generate 1,000 files.
Now when we run the same URL through JMeter with 1,000 users and a ramp-up time of 60 seconds, the application takes 1 minute and 15 seconds to generate 1,000 files.
I am baffled as to why there is such a huge difference in performance.
LoadRunner has the rstat daemon installed on both servers.
Any clues?
You really have four possibilities here:
You are measuring two different things. Check your timing record structure.
Your request and response information is different between the two tools. Check with Fiddler or Wireshark.
Your test environment initial conditions are different yielding different results. Test 101 stuff, but quite often overlooked in tracking down issues like this.
You have an overloaded load generator in your LoadRunner environment, which is causing all virtual users to slow down. For example, you may be logging everything, resulting in your file system becoming a bottleneck for the test. Deliberately underload your generators, reduce your logging levels, and watch how you are using memory for correlations so you don't create a physical-memory oversubscription that results in heavy swap activity.
As to the comment above about JMeter being faster: I have benchmarked both, and for very complex code the C-based solution in LoadRunner executes faster from iteration to iteration than the Java-based solution in JMeter. (Method: a complex algorithm for creating data files on the fly for upload for batch mortgage processing. P3 800 MHz, 2 GB of RAM. LoadRunner: 1.8 million iterations per hour, ungoverned, for a single user. JMeter: 1.2 million.) Once you add in pacing, it is the response time of the server which is the determining factor for both.
It should be noted that LoadRunner tracks its internal API time to directly address accusations of the tool influencing the test results. If you open the results database (.mdb or Microsoft SQL Server instance, as appropriate) and take a look at the [event meter] table, you will find a reference to "Wasted Time." The definition of wasted time can be found in the LoadRunner documentation.
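For scale, converting the benchmark figures quoted above into per-second rates is simple arithmetic:

```javascript
// Converting the quoted benchmark figures to per-second rates.
const loadRunnerPerHour = 1800000;
const jmeterPerHour = 1200000;
const loadRunnerPerSecond = loadRunnerPerHour / 3600; // 500 iterations/s
const jmeterPerSecond = jmeterPerHour / 3600;         // ~333 iterations/s
// A ~1.5x per-VU gap that pacing then mostly erases, since the server's
// response time becomes the limiting factor for both tools.
console.log(loadRunnerPerSecond, jmeterPerSecond);
```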
Most likely the culprit is in HOW the scripts are structured.
Things to consider:
Think/wait time: when recording, JMeter does not automatically insert waits.
Items being requested: is JMeter ONLY requesting/downloading HTML pages while LoadRunner gets all embedded files?
Invalid responses: are all 1,000 JMeter responses valid? If you have 1,000 threads running from a single desktop, I would suspect you killed JMeter and not all your responses were valid.
Don't forget that the testing application measures itself, since the arrival of the response is timed by the testing machine's clock. So from this perspective, the answer could be that JMeter is simply faster.
The second thing to mention is the wait times mentioned by BlackGaff.
Always check results with the View Results Tree listener in JMeter.
And always put the testing application on separate hardware to see real results, since the testing application itself loads its host machine.
