I know about performance testing for an "online" application or APIs, where we measure response time, throughput, and CPU/memory utilization.
My question is: what are the parameters to measure for performance testing of a "batch" job? The job I am talking about reads a file (in a nightly process) and updates the database (RDBMS) with new records. What are the criteria for performance testing of such batch processes?
In a batch scenario, the most important performance testing attributes, in my opinion, are the throughput of your tasks (i.e. per worker/thread) and the endurance of the system.
In terms of throughput, you want to identify how much throughput an individual worker can produce, so that you can accurately size your production batch jobs based on batch size. Also, if the throughput is not at an acceptable level, it means the system has room for improvement in its logic or I/O (e.g. query performance, indexes, connection pools and so forth).
In terms of endurance, you want to ensure that the batch jobs can run for a long time with consistent throughput, regardless of batch size. If performance degrades as the batch size grows, that means there are bottlenecks you need to fix before you feed large batches to your system.
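As a rough sketch of both measurements, you can time each batch and check that records-per-second stays flat as batches grow. The in-memory SQLite table and `staging` schema below are made-up stand-ins for the real RDBMS, just to keep the example self-contained:

```python
import sqlite3
import time

def measure_batch_throughput(conn, records):
    """Insert one batch and return records processed per second."""
    start = time.perf_counter()
    with conn:  # commit once per batch, as a nightly job typically would
        conn.executemany("INSERT INTO staging(id, payload) VALUES (?, ?)", records)
    elapsed = time.perf_counter() - start
    return len(records) / elapsed

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging(id INTEGER, payload TEXT)")

# Endurance check: throughput should stay roughly flat as the batch size grows.
for batch_size in (1_000, 10_000, 100_000):
    batch = [(i, "row-%d" % i) for i in range(batch_size)]
    print(f"{batch_size:>7} records: {measure_batch_throughput(conn, batch):,.0f} rec/s")
```

If the rec/s figure drops sharply at the larger batch sizes, that is the degradation the endurance test is meant to catch.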
Are there any clear ideas that define a scalability test? I have designed load, stress, spike, and soak tests using the JMeter Ultimate Thread Group, but I have no idea how a scalability test differs from these. How do I design a good scalability test with the Ultimate Thread Group in JMeter for a maximum user count of 500?
As per the Wikipedia article on Scalability testing:
Scalability testing, is the testing of a software application to measure its capability to scale up or scale out in terms of any of its non-functional capability.
So basically you can use the same approach as for stress testing: start with a low number of virtual users and gradually increase the load. Then you need to pay attention to the following charts/KPIs:
Active Threads Over Time - to show number of active virtual users
Transactions per Second - to show the system throughput
Charts of system resource consumption - to show usage of CPU, RAM, etc.
Ideally the charts should be more or less linear, i.e. if you increase the load by a factor of 2, the throughput should increase by the same factor and resource consumption should grow proportionally.
If the charts are not proportional, then at some point the system stops keeping up the threads-to-transactions-per-second ratio, and that point indicates the bottleneck.
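One way to make "proportional" concrete is to compute throughput per user at each load level and watch for the drop. The (users, transactions/sec) pairs below are invented for illustration, not real benchmark data:

```python
# Measured (virtual users, transactions/sec) pairs — made-up illustration data.
samples = [(100, 250.0), (200, 498.0), (400, 990.0), (800, 1100.0)]

def scaling_efficiency(samples):
    """Throughput-per-user at each load level, relative to the first level.

    A value near 1.0 means near-linear scaling; a sharp drop marks the
    point where the system stops keeping up (the bottleneck)."""
    base_users, base_tps = samples[0]
    base_rate = base_tps / base_users
    return [(users, (tps / users) / base_rate) for users, tps in samples]

for users, eff in scaling_efficiency(samples):
    print(f"{users:>4} users: efficiency {eff:.2f}")
```

With these sample numbers the efficiency stays near 1.0 up to 400 users and then collapses at 800, which is exactly the non-proportional pattern the answer describes.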
I am currently using the JMeter command line to trigger a load test with a master (2 GB memory & 1 core) and a slave machine (2 GB memory & 1 core).
How many threads does JMeter support with the above configuration?
Do we need to change anything in the heap size to get the maximum number of threads?
Can anyone help in this regard?
We don't know; it might be the case that even 1 thread is not supported, or that 2147483647 users are supported.
The number of virtual users you can simulate varies and depends on different factors like:
The nature of the test (what protocols are in scope, what exactly the test is doing, etc.). For a simple GET request with a small response you will be able to simulate more users; for a complex POST request with many calculated, encrypted parameters that uploads several large files, the number of users will be much lower
The size of request and response
The number of pre/post processors, assertions, etc.
So the only way of telling how many users you can simulate is to measure it:
Make sure monitoring of essential OS health metrics (CPU, RAM, etc. usage) is in place. If you don't have any solution in mind, you can consider using the JMeter PerfMon Plugin
Make sure to follow JMeter Best Practices
Start with 1 user and gradually increase the load at the same time looking at the CPU, RAM, Network, disk usage, etc.
When any of the monitored metrics starts exceeding, say, 80% of the maximum available capacity, note how many threads were online just before that moment, using e.g. the Active Threads Over Time listener
That is how many users you can simulate for this particular test on this particular hardware/software combination
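The last step above can be sketched as pairing user counts with a monitored metric and picking the last count seen before the threshold was crossed. The (active_users, cpu_percent) samples below are hypothetical; in practice they would come from the PerfMon Plugin and the Active Threads Over Time listener:

```python
# Hypothetical monitoring samples collected during the ramp-up:
# (active virtual users, CPU usage in percent).
samples = [(50, 22), (100, 41), (150, 58), (200, 74), (250, 86), (300, 95)]

def max_users_under_threshold(samples, threshold=80):
    """Return the highest user count observed before any sample crossed the threshold."""
    last_ok = 0
    for users, cpu in samples:
        if cpu >= threshold:
            break
        last_ok = users
    return last_ok

print(max_users_under_threshold(samples))  # 200 with the sample data above
```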
Can anybody explain how many concurrent users one JMeter instance can handle?
I want to run 2000 concurrent users for my project.
No one can "explain" this to you; you can only measure it.
The number of virtual users which can be simulated by JMeter depends on several factors:
machine hardware specifications (CPU, RAM, NIC, etc)
software specifications and versions (OS, JVM and JMeter version and architecture)
the nature of your test (number of requests, size of request/response, number of pre/post processors, assertions, etc)
So your actions should look like:
Make sure you're following JMeter Best Practices
Set up monitoring of baseline OS health metrics (CPU, RAM, disk and network usage). This can be done using e.g. the JMeter PerfMon Plugin
Start with 1 virtual user and gradually increase the load until resource consumption exceeds some reasonable threshold (e.g. 90% of maximum capacity)
Once you start running out of resources, note the number of virtual users that were active at that moment; this is the maximum you can simulate on this particular machine for this particular test. This can be done using e.g. the Active Threads Over Time listener
If the number is 2000 or more, you're good to go; if it's less, you will have to go for Distributed Testing
See the What’s the Max Number of Users You Can Test on JMeter? article for a more detailed explanation of the above points and a few more hints.
Well, it is the same as with any other software: one CPU core can handle exactly one operation (step) at a time. What JMeter does is ramp up x threads and then start them; no "magic" there. So in order to get good coverage with respect to collisions, you will want to dedicate a machine with a decent number of CPU cores (a server, not your local machine, which will occasionally be distracted by other tasks) and make sure your processes themselves take a decent amount of runtime. Additionally, run the same test several times to smooth out warm-up effects. How many concurrent users you will get (to answer the question) depends on your environment, and "it all depends": basically, it is not limited by JMeter but by the system you execute it on. You will need to try it out and fine-tune your test.
I'm developing an API and want to (of course) optimize performance in terms of number of concurrent users.
I have run some tests using Blitz (my app is on Appfog, PHP, 512MB, 1 instance) according to those tests my API can handle 11 concurrent users before response times get too high (>1000 ms).
To me this is surprisingly low. I can add more RAM and instances to improve the results, but I suspect that my code could be smarter.
I did some tests, always with the same hardware config. The result is the number of concurrent users before response time exceeds 1000 ms.
Using my actual API (with DB queries) --> 11 users
Using a script that just outputs text (minimal processing) --> 40 users
Using a script with a sleep(2) call to simulate a long response time --> 52 users (before exceeding 2000 + 1000 ms)
Using a memory-intensive script (building data in a for loop) --> 95 users
I really don't see any correlation in the results (each test has been run many times with similar results). The more processing the script does, the more concurrent users?
What is it that affects the number of concurrent users (apart from the hardware config)?
Generally there are two aspects you should think of:
bottlenecks like the database or external APIs; you are only as fast as your slowest component
look for locks that turn your concurrent code into sequential code. See: Amdahl's law
The second point is related to the first one. The database, or whatever else you use in your code, might be internally synchronized or might not cope well with concurrency.
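Amdahl's law quantifies the second point: if only a fraction p of the work can run concurrently, n workers give at most a speedup of 1 / ((1 - p) + p/n). A quick sketch of how hard a small sequential portion (say, a shared lock or a single DB connection) caps your concurrency:

```python
def amdahl_speedup(parallel_fraction, workers):
    """Maximum speedup by Amdahl's law: S = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / workers)

# Even with 16 workers, the speedup is dominated by the sequential part:
for p in (0.5, 0.9, 0.99):
    print(f"p={p}: 16 workers -> {amdahl_speedup(p, 16):.1f}x speedup")
```

If half your request handling is serialized behind a lock (p = 0.5), 16 workers give you less than a 2x speedup, which is consistent with the surprisingly low concurrency numbers in the question.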
We are re-implementing (yes, from scratch) a web application which is currently in production. We have decided to start doing some performance tests on the new app, to get some early information about its capabilities.
As the old application is currently in production and performs well, we would like to extract some performance parameters and then use these parameters as a reference, or baseline goal, for the performance of the new application.
Which do you think are the most relevant performance parameters we should be obtaining from the current production application?
Thanks!
Determine what pages are used the most.
Measure a latency histogram for the total time it takes to answer a request. Don't just measure the mean; measure a histogram.
From the histogram you can see what percentage of requests have which latency in milliseconds. You can choose key performance indicators by taking the values at 50% and 95%. This will tell you the median latency and the worst-case latency (for the worst 5% of requests).
Those two numbers alone will bring you great confidence regarding the experience your users will have.
Throughput does not matter to users, but it matters for capacity planning.
I also recommend that you track the performance values over time and review them twice a year.
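A minimal sketch of extracting those two KPIs from a list of latency samples. The log-normal data here is simulated, standing in for values you would pull from your production access logs:

```python
import random

# Simulated request latencies in milliseconds (stand-in for real log data).
random.seed(42)
latencies = [random.lognormvariate(4.5, 0.6) for _ in range(10_000)]

def percentile(data, pct):
    """Value below which `pct` percent of the samples fall."""
    ordered = sorted(data)
    index = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[index]

p50 = percentile(latencies, 50)
p95 = percentile(latencies, 95)
print(f"median latency: {p50:.0f} ms, 95th percentile: {p95:.0f} ms")
```

Tracking these two numbers over time (rather than a single mean) is what makes the histogram approach useful for the twice-a-year reviews suggested above.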
Just in case you need an HTTP client, there is weighttp, a multi-threaded client written by the guys from Lighttpd.
It has the same syntax as ApacheBench, but weighttp lets you use several client worker threads (AB is single-threaded, so it cannot saturate a modern SMP web server).
The answer from "usr" is valid, but you can also record the minimum, average, and maximum latencies (useful to see the range they span). Here is a public-domain C program to automate all this over a given concurrency range.
Disclaimer: I am involved in the development of this project.