I would like to run a test with 10 threads, each sending 100 requests (driven by a CSV file) to a server. Each thread should fire its 100 requests sequentially, while the threads themselves run in parallel. I have my main sampler plus sub-samplers for its subcomponents, and another sampler against which I want to compare my results, which comes to 7 samplers in total. The problem is that when I plot the throughput vs. threads graph in JMeter, the results for 1 user show more than 100 transactions/sec on the y axis. The same thing happens in the "View Results in Table" listener (i.e., for 1 user it shows 700 samples). How can I make the graphs/listeners count only the main samplers (mine and the comparison one) so that I get realistic numbers?
Am I doing the right thing?
Thanks
Put all the samples/sub-samples into a Transaction Controller for each response you want to measure.
Then plot the graph for the transaction results only, rather than for each individual sample/sub-sample. A sketch of the layout follows.
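A rough layout, assuming "Generate parent sample" is ticked on each Transaction Controller so listeners report one result per transaction instead of seven per iteration (the names below are only placeholders, not taken from your plan):

Thread Group (10 threads, loop count 100)
  CSV Data Set Config
  Transaction Controller "My flow" (Generate parent sample)
    Main sampler
    Sub-sampler 1 .. n
  Transaction Controller "Comparison flow" (Generate parent sample)
    Comparison sampler
  Graph Results / Summary Report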
Maybe this can help you solve it:
Apache JMeter - User's Manual: Elements of a Test Plan http://goo.gl/gIZwX
There I've read, regarding the Thread Group:
"Each thread will execute the test plan in its entirety and completely independently of other .... The Graph Results listener plots the response times on a graph."
Good luck.
Related
I have 250 users, and around 60000 samples should be hit in total across all the requests. The requests that are supposed to get a huge sample count I have put inside a loop, but the requests outside the loop are getting executed only 3-4 times, which is less than expected. How do I handle this?
It is not really possible to provide a comprehensive answer without knowing what you're trying to achieve and seeing your test plan, or at least your Thread Group configuration.
The easiest option is to move the requests which you want to execute more often into a separate Thread Group.
If the requests have to stay in one Thread Group you can control the execution frequency using a Throughput Controller (see the sketch after this list).
If the logic is more complex, consider using a Switch Controller or a Weighted Switch Controller.
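A minimal sketch of the Throughput Controller option in "Percent Executions" mode (the percentages and sampler names are only illustrative, not taken from your plan):

Thread Group (250 users)
  Throughput Controller (Percent Executions, 90.0)
    HTTP Request - high-volume endpoint
  Throughput Controller (Percent Executions, 10.0)
    HTTP Request - low-volume endpoint

With this layout roughly 90% of the iterations execute the first request and 10% the second, so the heavy endpoint accumulates most of the samples.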
While trying to understand the Concurrency Thread Group and the Ultimate Thread Group, I am confused by the results in the Summary/Aggregate Report when running them. For example, if I have 200 users and a ramp-up time of 60 seconds, I don't see every sampler reach 200 samples after the execution completes successfully; only a few samplers have 200 samples. When I use a normal Thread Group, the sample count is always the same for each sampler after the execution completes.
For realistic load testing with more users, could you please suggest which thread group I should prefer?
Could you please provide guidance, with some useful links/books, and also share standard performance benchmark criteria or key parameter details for load testing? (If any benchmark parameter value does not meet the standard, then we can say that there is a performance issue.)
Thanks in advance for giving your valuable time.
Thanks
amit
This is due to the fact that:
Your application response time is too high
Your test duration is too low
For example, I can see response times > 80 seconds: it means that if a single virtual user has a cumulative response time of > 160 seconds for 2 samplers and the test duration is 120 seconds, it will not be able to execute all the requests. Just increase your test duration to, e.g., 10 minutes and you should see more virtual users capable of executing all the Samplers you defined in the test plan.
Also, given that the first user is capable of executing all the requests successfully and in time, it looks like your application gets overloaded and cannot respond fast enough once the number of concurrent users reaches some "critical threshold". You can add listeners like Active Threads Over Time and Response Times vs Threads; this way you will be able to correlate the increasing load with the increasing response times.
It also makes sense to collect:
Baseline health metrics of your application (CPU, RAM, network, disk usage, etc.); this can be done using the JMeter PerfMon Plugin (a setup sketch follows below this list).
Lower-level details like the slowest methods, largest objects, heaviest database queries, etc. This kind of information can be obtained using profiling tools specific to your application's programming language(s).
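A hedged sketch of the PerfMon setup (the host name is a placeholder; 4444 is the agent's default port):

On the application server: start the PerfMon ServerAgent (startAgent.sh / startAgent.bat).
In the test plan:
  jp@gc - PerfMon Metrics Collector
    app-server-host:4444  CPU
    app-server-host:4444  Memory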
I need to run one HTTP Request sampler more times than the rest of the samplers in the Thread Group. For example, I need to run the test for 10 users, but for each of them I need to run one of the samplers multiple times, let's say 10. Is there a way to achieve this?
1) I set "Number of Threads (Users)" in the Thread Group to 10, so I have 10 total users (with data taken for every thread from a CSV file, with an equal number of rows and threads, so each thread gets a unique data set).
2) I make some requests after that, but for only one of the requests I need to make it about 100 times in parallel with the same data for every thread, so in total I will make 1000 requests (100 HTTP requests for each of 10 unique users/threads) to that endpoint.
Thanks in advance!
Edit: I found the Loop Controller, but it's not making the 100 HTTP requests at the same time for each thread in the Thread Group; it starts the next request only when the previous one ends.
If I understood your requirements correctly, to wit:
You need to execute one sampler more times than other samplers
The execution must occur at exactly the same moment
The most obvious choice would be either the Parallel Sampler or the Parallel Controller (depending on the nature of your requests). You can install both test elements using the JMeter Plugins Manager.
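A possible layout using the Parallel Controller; this is only a sketch, and it assumes the Parallel Controller runs its direct children concurrently, so the target sampler is simply duplicated under it:

Thread Group (10 users)
  CSV Data Set Config
  HTTP Request - other requests
  bzm - Parallel Controller
    HTTP Request - target endpoint (copy 1)
    HTTP Request - target endpoint (copy 2)
    ... 100 copies in total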
Is it possible to automate the load tests in JMeter and increase the number of threads until the first error is observed?
For example, I start by testing with 16 threads and then increase the number until I receive an error. Instead of doing this manually, can I let it run automatically?
Looking into the Pre-defined Properties section of JMeter's User Manual on Functions, there is a JMeterThread.last_sample_ok variable holding the result of the last sampler execution.
So if you build your test plan as follows:
Sampler which performs the test action
If Controller checking whether the previous sampler was successful
If not - relevant actions (stop the test, send an email, stop ramping up virtual users, etc.)
The value you need to put in the "Condition" input of the If Controller should look like:
"${JMeterThread.last_sample_ok}"=="false"
See How to use JMeter's 'IF' Controller and get Pie for more information on JMeter's If Controller.
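Put together, the structure could look something like this (the stop action is just one option; depending on the JMeter version the sampler is called Flow Control Action or Test Action):

Thread Group
  HTTP Request (the sampler under test)
  If Controller - Condition: "${JMeterThread.last_sample_ok}"=="false"
    Flow Control Action (Stop Test Now)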
Regarding threads in JMeter, you may find these two links interesting:
What is the highest number of threads that is reasonable to simultaneously run in Jmeter?
JMeter max. thread limit
Regarding your methodology, why not use a slow ramp-up and find the limit using what Dmitri T has provided?
I'm trying to understand the basics of JMeter. I've got a "plus1" Java servlet that adds one to a request parameter and returns the result, so it's a fast test servlet just so I can understand load testing.
Here's my test plan:
Thread Group: 1 thread, ramp up 1 s, loop count 10000
HTTP Request to localhost
Graph Results
Summary Report
When I run this, the summary report shows a throughput number of 200/sec or so.
The key question: with no controllers in the test plan, is JMeter running the test plan (sending a single request) and waiting for the response before looping?
When I introduce a more computationally intensive page for the request, the throughput number goes down as I would expect.
In short, yes.
There is an argument for having a sampler that would make a request and not wait for the response, but it's an edge case. In most cases you would want a testing tool to wait, see what happens and verify things. It's also more realistic: most users will wait for a response, in fact they generally have to, before making subsequent calls.
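As a rough sanity check (assuming a single thread and negligible JMeter overhead): throughput is approximately 1 / average response time, so an average of about 5 ms per request gives roughly 1 / 0.005 s = 200 requests/sec, which is consistent with the Summary Report figure you are seeing.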
If you want to run a capacity test then the best approach, I think, is to spread the load over multiple threads and to throttle the throughput of each one - you can do this using a Constant Throughput Timer. E.g. you could have 500 threads each running at 60 requests per minute, giving a total load of 500 reqs/sec. This way your test load is predictable and stable - it won't be tied to the speed of responses from the server. Note: with multiple threads you'll want a ramp-up period, and you might find you have to spread the test over multiple machines (known as 'distributed' testing if you're going to google it). A minimal configuration sketch follows.
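A minimal sketch of that setup (the numbers match the example above; the ramp-up and duration values are assumptions you would tune):

Thread Group (500 threads, ramp-up 300 s, loop forever, duration 600 s)
  Constant Throughput Timer (Target throughput: 60.0 samples per minute, Calculate throughput based on: this thread only)
  HTTP Request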