I need to do performance testing for my project. I have learned how to use JMeter for performance testing from online material, but I am still unable to work out how to analyze the results from the report. Because I don't know how to analyze the results, I can't find the performance issues in my application or where the errors are occurring, so I don't know how to improve performance. Is there any article or video tutorial on how to analyze the results?
There are 2 possible test outcomes:
Positive: the performance of your application meets the SLA and/or NFRs. In this case you need to provide the report as proof, e.g. the HTML Reporting Dashboard.
Negative: the performance of your application falls short of expectations. In this case you need to investigate the reasons, which could include:
A simple lack of resources, i.e. your application doesn't have enough headroom to operate because of insufficient CPU, RAM, network or disk bandwidth. Make sure you are monitoring these resources on the application under test side; you can do it using e.g. the JMeter PerfMon Plugin.
The same, but on the JMeter load generator(s) side. If JMeter cannot send requests fast enough, the application won't receive more requests to serve, so if the JMeter machine doesn't have enough resources the outcome will look the same. Make sure to monitor the same metrics on the JMeter host(s) as well.
You have a software configuration problem (e.g. application server thread pool, DB connection pool, memory settings, caching settings, etc. are not optimal). In the majority of cases the default configuration of web, application and database servers is not suitable for high loads and needs to be tuned, so try different settings and observe the impact.
Your application code is not optimal. If there are plenty of free hardware resources and you are sure the infrastructure is set up properly (i.e. other applications behave fine), it might be a problem with the code of your application under test. In this case you will need to re-run the test with profiler telemetry to see which methods consume the most time and resources and how they can be optimised.
It might also be a networking-related problem, e.g. a faulty router or a bad cable.
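To make the configuration point above concrete: Little's law (N = X × R) gives a rough lower bound for sizing something like a DB connection pool. A minimal Python sketch; the throughput and hold-time figures are hypothetical:

```python
import math

def min_pool_size(throughput_per_sec: float, avg_hold_time_sec: float) -> int:
    """Little's law: concurrent connections N = X * R.
    Rounded up; this is a lower bound, not a tuned value."""
    return math.ceil(throughput_per_sec * avg_hold_time_sec)

# Hypothetical: 200 requests/s, each holding a DB connection for 50 ms
print(min_pool_size(200, 0.05))  # → 10
```

A pool smaller than this bound forces requests to queue for connections, which shows up as rising response times long before CPU or RAM are exhausted.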
There are too many possible reasons to enumerate; however, the approach should always be the same: the whole system acts at the speed of its slowest component, so you need to identify that component and determine why it is slow. See the Understanding Your Reports post series to learn how to read JMeter load test results and identify bottlenecks from them.
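For a first pass at analysing results yourself, JMeter's CSV result files (.jtl) can be post-processed with a few lines of code. A minimal sketch, assuming the default CSV header names (timeStamp, elapsed, label, responseCode, success); adjust the column names to match your jmeter.properties:

```python
import csv
import io
from statistics import mean

# A tiny inline sample in JMeter's default CSV format (real files are larger)
SAMPLE_JTL = """timeStamp,elapsed,label,responseCode,success
1620000000000,120,Login,200,true
1620000001000,340,Login,200,true
1620000002000,95,Search,200,true
1620000003000,2100,Search,500,false
"""

def summarize(jtl_text):
    """Group samples by label and report average/max elapsed time and error rate."""
    by_label = {}
    for row in csv.DictReader(io.StringIO(jtl_text)):
        by_label.setdefault(row["label"], []).append(row)
    summary = {}
    for label, rows in by_label.items():
        elapsed = [int(r["elapsed"]) for r in rows]
        errors = sum(1 for r in rows if r["success"] != "true")
        summary[label] = {
            "avg_ms": mean(elapsed),
            "max_ms": max(elapsed),
            "error_rate": errors / len(rows),
        }
    return summary

print(summarize(SAMPLE_JTL))
```

The label with the worst average/max times and the highest error rate is usually the first place to look for the slowest component.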
I want to use Apache JMeter to run an HTTP flood against a website that I have designed in Visual Studio 2017. My question is: if I start running the website and at the same time start the HTTP flood, is it "safe" for my PC? I mean, is it possible to cause damage because of the resource consumption? Or does it depend on the number of threads I use in the JMeter script parameters?
Thank you.
From the hardware perspective I don't think you will be able to damage your machine; most probably it will throttle or simply shut down when temperatures exceed the acceptable threshold.
From the test results point of view you might get inaccurate metrics, especially if you run JMeter on the same machine: under high load both JMeter and your website become very resource intensive and will "fight" for resources like CPU, RAM, etc.
So I would recommend deploying your website on a production-like environment and using separate JMeter load generator(s) to conduct the load. This way you can be confident that the test results are not impacted by mutual interference.
If you cannot set up a proper load testing environment, your test scenario unfortunately will not make a lot of sense. There are still some areas you can test on a scaled-down environment from a performance perspective, like:
integration test
soak test
interoperability test
How should I proceed with performance testing for an application that is not yet in production?
If the environment of the application under test is the same as it will be in production, the approach should not be different.
If the environment of the application under test is different, e.g. it has fewer servers or the servers have less memory, then in the absolute majority of cases you will not be able to calculate or predict the performance on more powerful hardware, as there are too many factors to consider. You can inform the client about this right away.
However there are still some types of testing you can perform, i.e.:
You could run an integration test to check whether your application is configured for high load
You could run a scalability test to check whether and how your application scales
You could run a soak test to check for possible memory leaks
You could use profiling tools to check bottlenecks in terms of code quality
You could identify slow DB queries and look for a way to optimise them
etc.
More information: Performance Testing in a Scaled Down Environment. Part Two: 5 Things You Can Test
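The soak-test item above boils down to watching for a steadily growing memory trend under constant load. A minimal sketch that fits a least-squares line through periodic heap samples and flags a leak-like slope; the growth threshold is an arbitrary assumption:

```python
def leak_suspected(heap_samples_mb, min_growth_mb_per_sample=1.0):
    """Fit a straight line through the samples (least squares) and flag
    a leak if the heap grows steadily faster than the threshold."""
    n = len(heap_samples_mb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(heap_samples_mb) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, heap_samples_mb))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var  # MB of growth per sampling interval
    return slope > min_growth_mb_per_sample

# Heap climbing ~5 MB per sample despite constant load: suspicious
print(leak_suspected([500, 506, 511, 515, 521, 526]))  # → True
# Heap oscillating around a stable baseline: fine
print(leak_suspected([510, 500, 515, 505, 512, 503]))  # → False
```

In practice you would feed in heap readings sampled every few minutes over the whole soak run; a short window like this only illustrates the idea.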
If you meant how to decide how much volume to test with when the application is not yet in production, the answer is simple: you have to predict it. The prediction can be based on surveys or on reports from your business analysts.
If none of the above are available, just test how much load your application can withstand in a test environment with a configuration similar to production. This will give you an idea of when you need to start worrying once the application is live.
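When a daily volume estimate is all you can predict, a common 80/20 heuristic (80% of the traffic arrives within 20% of the day) turns it into a peak throughput target. A sketch with hypothetical numbers; the heuristic itself is an assumption to validate against real traffic when available:

```python
def peak_tps(daily_transactions: int, busy_fraction: float = 0.8,
             busy_window_fraction: float = 0.2) -> float:
    """80/20 heuristic: assume busy_fraction of the daily traffic
    arrives within busy_window_fraction of the day."""
    seconds_per_day = 24 * 60 * 60
    return (daily_transactions * busy_fraction) / (seconds_per_day * busy_window_fraction)

# Hypothetical: 1,000,000 transactions predicted per day
print(round(peak_tps(1_000_000), 1))  # → 46.3
```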
I have to test my REST API such that 100k API calls are made simultaneously (within 500 ms). Any idea how to simulate this? Which utility should I use?
I would suggest using JMeter too; it lets your test use multiple concurrent JMeter servers. To be clear: you can control multiple remote JMeter engines from a single JMeter client and replicate a test across many computers, thus simulating a larger load on the target server.
To be honest, your target is quite high (100k API calls within 500 ms), i.e. you'll need a lot of JMeter servers. When you create stress tests there are no magical recipes, guides or manuals; trial and error is a fundamental method of solving this kind of problem.
In my experience, I first try with a few concurrent users and see how the server reacts, then increase the number of concurrent users until I reach an intolerable performance decrease or, worse, a bottleneck.
http://jmeter.apache.org/usermanual/remote-test.html
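A back-of-the-envelope calculation shows why this target needs many engines; the per-generator throughput below is a pure assumption, so measure what your own generators can actually sustain first:

```python
import math

def generators_needed(total_requests: int, window_sec: float,
                      per_generator_rps: float) -> int:
    """How many JMeter engines are needed to push total_requests
    within window_sec, given a measured per-generator throughput."""
    required_rps = total_requests / window_sec
    return math.ceil(required_rps / per_generator_rps)

# 100k calls in 500 ms = 200,000 req/s; assuming ~5,000 req/s per engine
print(generators_needed(100_000, 0.5, 5_000))  # → 40
```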
You will obviously need a load testing tool which can be run in a distributed mode, i.e. 1 controller and X load generators executing the same test.
Grinder - scripts are written in a Python dialect (Jython)
Apache JMeter - doesn't require any specific programming knowledge; you can create tests using a simple GUI
Tsung - written in Erlang, known for its capability to produce high loads even on low-end hardware
See the Open Source Load Testing Tools: Which One Should You Use? article for more information on the above tools.
JMeter
The Apache JMeter™ application is open source software, a 100% pure Java application designed to load test functional behavior and measure performance. It was originally designed for testing Web Applications but has since expanded to other test functions.
I have been tasked with looking for a performance testing solution for one of our Java applications running on a Weblogic server. The requirement is to record production requests (both GET and POST including POST data) and then run these requests in a performance test environment with a copy of the production database.
The reasons for using production requests instead of a test script are:
It is a large application with no existing test scripts, so it would be a large amount of work to write scripts covering the entire application.
Some performance issues only appear when users do a number of actions in a particular order.
To test using actual user interaction with the system rather than an estimate of how the users may interact with the system. We all know that users will do things we have not thought of.
I want to be able to fix performance issues and rerun the requests against the fixed code before releasing to production.
I have looked at using JMeter's Access Log Sampler with server access logs; however, the access logs do not contain POST data, and the Access Log Sampler only looks at the request URL, so it cannot simulate users submitting form data.
I have also looked at using the JMeter HTTP Proxy Server; however, this records the actions of only one user and requires the user to configure their browser to use the proxy. The same limitation exists with Tsung and The Grinder.
I have looked at using Wireshark and TCReplay but recording at the packet level is excessive and will not give any useful reports at a request level.
Is there a better way to analyze production performance considering I need to be able to test fixes before releasing to production?
That is going to be a hard ask. I work with Visual Studio Test Edition to load test my applications, and we are only able to "estimate" user activity on the site.
It is possible to look at the logs and gather information on the likelihood of certain paths through your app. You can then look at the production database to see the likely values entered in any POST requests. From that you will have to build load tests that approximate the usage patterns of your production site.
With any current tools I don't think it is possible to record and play back actual user interaction.
It is possible to alter your web app so that it records and logs every request and POST against session and datetime. This custom logging could then be used to generate load test requests against a test website. This would be a serious code change to your existing site and would likely have performance impacts.
That said, I have worked with web apps that do this level of logging and the ability to analyse the exact series of page posts/requests that caused an error is quite valuable to a developer.
So in summary: It is possible, but I have not heard of any off the shelf tools that do it.
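To illustrate the logging idea above: the application in question is Java on WebLogic, but the approach is language-agnostic, so here is a minimal WSGI-style analogue in Python (all names are illustrative, and a real version would write to a file rather than a list):

```python
import io
import time

class RequestRecorder:
    """WSGI middleware sketch: record method, path, and POST body with a
    timestamp and session marker, so the log can later drive a replay tool."""

    def __init__(self, app, log):
        self.app = app
        self.log = log  # any list-like sink

    def __call__(self, environ, start_response):
        body = b""
        if environ["REQUEST_METHOD"] == "POST":
            length = int(environ.get("CONTENT_LENGTH") or 0)
            body = environ["wsgi.input"].read(length)
            # Replace the consumed stream so the app can still read the body
            environ["wsgi.input"] = io.BytesIO(body)
        self.log.append({
            "ts": time.time(),
            "session": environ.get("HTTP_COOKIE", ""),
            "method": environ["REQUEST_METHOD"],
            "path": environ.get("PATH_INFO", "/"),
            "body": body.decode("utf-8", "replace"),
        })
        return self.app(environ, start_response)

# Tiny demo with a stub app
def _app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

captured = []
recorder = RequestRecorder(_app, captured)
env = {
    "REQUEST_METHOD": "POST",
    "PATH_INFO": "/order",
    "CONTENT_LENGTH": "7",
    "wsgi.input": io.BytesIO(b"a=1&b=2"),
}
recorder(env, lambda status, headers: None)
print(captured[0]["method"], captured[0]["path"], captured[0]["body"])  # → POST /order a=1&b=2
```

In the Java world the equivalent would be a servlet filter; the point is the same: one interception layer captures everything, including POST data that access logs miss.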
Please check out this whitepaper by Impetus Technologies on this page: http://www.impetus.com/plabs/sandstorm.html
Honestly, I'm not sure the task you're being asked to do is even possible, let alone a good idea. Depending on how complex the application's backend is, and how faithfully you can recreate its state (i.e. all the way down to external SOA services or the time/clock), it may not be possible to make those GET and POST requests reproduce the same behavior.
That said, performance testing against production data is always great, but it usually requires application-specific knowledge that will stress said data. Simply repeating HTTP GETs and POSTs will almost certainly not yield useful results.
Good luck!
I would suggest the following to capture the production requests and simulate an accurate workload:
1) Use Coremetrics: Coremetrics provides solutions with which you can learn the application usage patterns. This helps in building an accurate workload model, which can then be converted into test scripts and executed against a masked copy of the production database. This will give you accurate results about the application's performance.
2) Another option is to create a small utility using AOP (aspect-oriented programming), e.g. with a framework such as AspectJ, so that it traces all requests and the corresponding method calls. This helps in identifying the production usage pattern and, in turn, in simulating the workload accurately. It would not require any changes to the code, as the instrumentation can be done on the fly. Another benefit is that it can be enabled only for a specific time window and then turned off.
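As a rough illustration of the AOP idea (a Python analogue, not AspectJ itself): an "around"-style wrapper that records each call's name and duration without touching the function bodies:

```python
import functools
import time

TRACE = []  # collected (method name, duration in seconds) pairs

def traced(fn):
    """Record each call's name and wall-clock duration, like minimal
    around-advice: the wrapped function's body is untouched."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            TRACE.append((fn.__name__, time.perf_counter() - start))
    return wrapper

@traced
def place_order(item):
    return f"ordered {item}"

place_order("book")
print(TRACE[0][0])  # → place_order
```

AspectJ applies the same interception via bytecode weaving, so no decorator (or any source change) is needed; the collected trace then shows which methods dominate under production usage.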
Regards,
batterywalam
What are the various performance testing scenarios to consider for a website with huge traffic? Is there any way to identify the parts of the code that are adversely affecting site performance?
Please provide something like a checklist of generalised scenarios to test, to ensure proper performance testing.
It would be good to start with a load testing tool like JMeter or PushToTest and run it against your web application. JMeter simulates HTTP traffic and loads the server that way. You can also load test the AJAX parts of your application with PushToTest, because it can use Selenium scripts.
If you don't have the resources (computers to run load tests) you can always use a service like BrowserMob to run the scripts against a web accessible server.
It sounds like you need a test plan more than a suggestion of tools. In performance testing, it is best to start by looking at the users of the application:
How many will use the application on a light day? How many will use the app on a heavy day?
What type of users make up your user population?
What transactions will each of these user types perform?
Using this information, you can identify the major transactions and come up with different user levels (e.g. 10, 25, 50, 100) and percentages of user types (30% user A, 50% user B, ...) to test these transactions with. Time each of these transactions for each test you execute and examine how the transaction times change as compared to your user levels.
After gathering some metrics, and since you should be able to narrow transactions down to individual pieces of code, you will know where to focus your code improvements. If you still need to narrow things down further, finer-grained tests within each transaction can provide more granular results.
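Turning the user levels and percentages above into concrete thread counts is simple arithmetic; a sketch using the example mix from the answer:

```python
def thread_counts(total_users: int, mix: dict) -> dict:
    """Split a total user level across user types by percentage.
    Rounds down, then gives any remainder to the largest group."""
    counts = {t: int(total_users * p) for t, p in mix.items()}
    remainder = total_users - sum(counts.values())
    if remainder:
        biggest = max(mix, key=mix.get)
        counts[biggest] += remainder
    return counts

# 30% user A, 50% user B, 20% user C at the 25-user level
print(thread_counts(25, {"A": 0.30, "B": 0.50, "C": 0.20}))
```

Running this for each user level (10, 25, 50, 100) gives the thread-group sizes for each test run, so transaction times can be compared across levels on a consistent mix.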
Concurrency will kill you here, as you need to test your maximum projected concurrent users (plus wiggle room) hitting the database, website, and any other web services simultaneously. It really depends on the technologies you're using, but if you have a large mix of interacting web technologies, you may want to check out Neoload. I've had nothing but success with this web stress tool, and the support is top notch if you need to emulate specific, complicated behavior (such as mocking AMF traffic, or using responses from web pages to dictate request behavior).
If you have a DB layer then this should be the initial focus of your attention once the system is stable (i.e. no memory leaks or other resource issues). If the DB is not the bottleneck (or not relevant), then you need to correlate CPU/memory/disk I/O and network traffic with the increasing load and increasing response times. This gives you an idea of capacity and a correlation (but not a cause) with resource usage.
To find the cause of a given resource issue you need to set up a Six Sigma-style project where you define the problem and perform root cause analysis in order to pinpoint the piece of code (or resource configuration) that is the bottleneck. Once you have done this a couple of times in your environment, you will notice patterns of workload, resource usage and countermeasures (solutions) that will guide you in your future performance testing 'projects'.
To choose the correct performance scenarios you need to go through the following basic checklist:
High priority scenarios from the business logic perspective. For example: login/order transactions, etc.
Mostly used scenarios by end users. Here you may need information from monitoring tools like NewRelic, etc.
Search / filtering functionality (if applicable)
Scenarios which involve different user roles/permissions
A performance test is a comparison test, either against the previous release of the same application or against the existing players in the market.
Case 1 - Existing application
1) Carry out the test for the same scenarios as covered before to get a clear picture of how the application responds before and after the upgrade.
2) If you need to dig deeper, you can ask the database team which functionalities receive the most requests. Also ask them for the total number of requests on an average day so that you can decide what user load and test duration to use.
Case 2 - New application
1) Look at existing market players and design your test around the critical functions of the rival product (e.g. Gmail might support many functions, but what is used most often is launch -> login -> compose mail -> inbox -> outbox).
2) You can always ask your clients what they consider to be business-critical scenarios or scenarios that will be used most often.