I'm looking for a way to report NFR/performance quality metrics for code; these metrics come from the execution of unit tests and can include, for example, average, minimum and maximum response time statistics, the number of executions, and other custom metrics.
For this, I'm thinking of creating "virtual" resources that are not linked to source files, one per class and test method, and reporting these metrics on them so that every time the tests are executed I get an idea of the impact of the latest changes on performance.
I saw in another discussion that SonarQube is designed for static code quality, but these performance and NFR metrics are also part of code quality, so I think it makes sense for them to be reported and tracked in SonarQube.
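For illustration, here is a rough sketch of the kind of per-test timing collection I have in mind (JUnit 4 assumed; the class name, labels and callServiceUnderTest() are placeholders, not real code from my project):

    import java.util.HashMap;
    import java.util.LongSummaryStatistics;
    import java.util.Map;

    import org.junit.AfterClass;
    import org.junit.Test;

    // Collects average/min/max elapsed time and execution count per label.
    // Assumes tests run sequentially; callServiceUnderTest() is a placeholder.
    public class ResponseTimeStatsTest {

        private static final Map<String, LongSummaryStatistics> STATS = new HashMap<>();

        private static void timed(String label, Runnable action) {
            long start = System.nanoTime();
            action.run();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            STATS.computeIfAbsent(label, k -> new LongSummaryStatistics()).accept(elapsedMs);
        }

        private void callServiceUnderTest() {
            // placeholder for the real call being measured
        }

        @Test
        public void searchRespondsQuickly() {
            for (int i = 0; i < 10; i++) {
                timed("search", this::callServiceUnderTest);
            }
        }

        @AfterClass
        public static void reportStats() {
            // These per-method numbers are what I would like to push into SonarQube.
            STATS.forEach((label, s) -> System.out.printf(
                    "%s: count=%d avg=%.1fms min=%dms max=%dms%n",
                    label, s.getCount(), s.getAverage(), s.getMin(), s.getMax()));
        }
    }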
Because, as you noted, SonarQube is designed for static code analysis, you're going to have a hard time with the 'virtual' resources route. Analysis looks at the directories and files in the source directory. No file, no SonarQube resource, and nowhere to attach metrics.
If you're determined to do this, then you should consider attaching your metrics at some aggregate level: module or project. Note that metric history isn't kept below the project level.
Alternatively, you could attach these metrics to the test files themselves.
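If you go the test-file route, a custom plugin with a Sensor can save the numbers as measures on those files. A rough sketch, assuming the current Sensor API and an illustrative metric key and file path (exact signatures vary between SonarQube versions, so treat this as a starting point rather than a working plugin):

    import org.sonar.api.batch.fs.InputFile;
    import org.sonar.api.batch.sensor.Sensor;
    import org.sonar.api.batch.sensor.SensorContext;
    import org.sonar.api.batch.sensor.SensorDescriptor;
    import org.sonar.api.measures.Metric;

    // Sketch of a sensor that attaches a custom "average response time" measure
    // to a test file. The metric also has to be declared via a Metrics extension
    // and registered in the plugin class (omitted here).
    public class TestTimingSensor implements Sensor {

        // Illustrative custom metric; key and name are assumptions.
        public static final Metric<Integer> AVG_RESPONSE_MS =
                new Metric.Builder("avg_response_ms", "Avg response time (ms)", Metric.ValueType.INT)
                        .setDomain("Performance")
                        .create();

        @Override
        public void describe(SensorDescriptor descriptor) {
            descriptor.name("Test timing sensor");
        }

        @Override
        public void execute(SensorContext context) {
            // Locate the test file the measurement belongs to (path is an example).
            InputFile testFile = context.fileSystem().inputFile(
                    context.fileSystem().predicates().hasPath(
                            "src/test/java/com/example/OrderServiceTest.java"));
            if (testFile != null) {
                // The value would normally come from the test run output; hard-coded here.
                context.<Integer>newMeasure()
                       .forMetric(AVG_RESPONSE_MS)
                       .on(testFile)
                       .withValue(120)
                       .save();
            }
        }
    }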
Keeping the historical data is another issue. As a last option, I will keep the method-level metrics in a separate repository and write a SonarQube widget that displays them. I made a prototype using w2ui and it worked. I will lose some SonarQube capabilities, but for a first approach that will be fine.
For functional automation we usually create a framework which is reusable for automating applications. Is there any way to create a performance testing framework in JMeter, so that we can use the same framework for performance testing of different applications?
Please help if anyone knows, and provide more information regarding it.
You can consider JMeter itself as a "framework" which already comes with test elements to build requests via different protocols/transports, apply assertions, generate reports, etc.
It is highly unlikely that you will be able to re-use an existing script for another application, as JMeter acts at the protocol level and therefore the requests will differ from application to application.
There is a mechanism in JMeter for re-using pieces of a test plan as modules so you won't have to duplicate your code (check out Test Fragments and the Module Controller), however it is more applicable within a single application.
The only "framework-like" approach I can think of is adding your JMeter tests to your continuous integration process, so you have a build step which executes the performance tests and publishes reports. Basically, you will be able to re-use the same test setup and reporting routine, and the only thing which changes from application to application will be the .jmx test script(s). See the JMeter Maven Plugin and/or the JMeter Ant Task for more details.
You must first ask yourself: how dynamic is the conversation I am attempting to replicate? If you have a very stable services API where the exposed external interface is static, but the code handling it on the back end is changing, then you have a good shot at building something which has a long life.
But if you are like the majority of web sites in the universe, then you are dealing with developers who are always changing something: adding a resource, adding or deleting form values (hidden or not), headers, etc. In this case you should consider that your scripts are perishable, with a limited life, and you will need to rebuild them at some point.
Having noted the limited lifetime of a piece of code that tests a piece of code with a limited lifetime, are there some techniques you can use to insulate yourself? Yes. The rule of thumb is that the higher up the stack you go to build your test scripts, the more insulated you are from changes under the covers (assuming the layer you build to is stable). The trade-off is that the more intelligence sits under the covers of your test interface, the higher the resource cost for each individual virtual user, which then dictates more hosts for test execution and more skew from client-side code, which can distort the view of what is coming from the server. As an example, run a Selenium script instead of a plain JMeter script: a browser is invoked, you get the benefit of all the local JavaScript processing to handle the dynamic changes, and your script has a longer life.
I'm working on integrating performance tests with CI/CD infrastructure. My performance testing tool is JMeter and my CI server is Jenkins. Both can do their jobs, but when it comes to integrating performance tests into a CI/CD pipeline, things are no longer so trivial.
To have a proper deployment pipeline, the CI server needs to know when a performance test build should be considered passed or failed. Verifying the overall mean response time is not a good option: completely different SLAs can apply to the different types of transactions executed as part of the same JMX file. Asserting on the mean response time of a particular transaction type is a far better option, but it is still far from a perfect solution. It won't tell us, for example, whether response times for the same type of transaction are increasing (which can point to memory leaks) or decreasing (which can be a blessing of a server-side cache). For that reason, relying only on mean response times can create false confidence in software quality.
I analysed a couple of tools, including JMeter Maven Analysis Plugin and Jenkins Performance Plugin. None of them seems to offer what I'm looking for.
In the pre-CI era, performance tests were executed late in the development lifecycle and analysed by a human being. I wonder if anyone has come across a tool advanced enough to make it possible for the CI server to reliably determine whether a perf test build should be marked as passed or failed, without verification of the results by a human?
In the absence of tools offering what I'm looking for, I've decided to kick off an open source project and create one on my own, in my free time:
https://github.com/automatictester/lightning
It's still in its early days, but the core functionality is there. Now it's a matter of time to extend it with extras.
I am a Jenkins expert but only mildly skilled with JMeter. Can your JMeter results be processed by a script in order to tell when Transaction Type Z passes beyond the acceptable time limit for the run?
It looks like parsing the JMeter results with some additional logic is what you need to be able to bubble up an exit(1) (or any non-zero) value.
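As a sketch of that parsing idea, the following reads a CSV-format JTL file, averages the elapsed time per transaction label, and returns a non-zero exit code when an assumed SLA is breached. The SLA values, labels and file layout are illustrative; it expects the default CSV output with a header row containing label and elapsed columns:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Reads a JMeter CSV results file (JTL), averages elapsed time per label,
    // and fails the build (exit 1) if any label exceeds its SLA in milliseconds.
    public class JtlSlaCheck {

        public static void main(String[] args) throws IOException {
            // Illustrative SLAs; in practice these would come from a config file.
            Map<String, Long> slaMs = new HashMap<>();
            slaMs.put("Login", 800L);
            slaMs.put("Search", 1500L);

            List<String> lines = Files.readAllLines(Paths.get(args[0]));
            List<String> header = Arrays.asList(lines.get(0).split(","));
            int labelCol = header.indexOf("label");
            int elapsedCol = header.indexOf("elapsed");

            Map<String, long[]> totals = new HashMap<>(); // label -> {sum, count}
            for (String line : lines.subList(1, lines.size())) {
                // Naive CSV split; fine for typical JTLs without embedded commas.
                String[] cols = line.split(",");
                long[] t = totals.computeIfAbsent(cols[labelCol], k -> new long[2]);
                t[0] += Long.parseLong(cols[elapsedCol]);
                t[1]++;
            }

            boolean failed = false;
            for (Map.Entry<String, long[]> e : totals.entrySet()) {
                long avg = e.getValue()[0] / e.getValue()[1];
                Long sla = slaMs.get(e.getKey());
                System.out.printf("%s: avg %d ms (SLA %s)%n",
                        e.getKey(), avg, sla == null ? "n/a" : sla + " ms");
                if (sla != null && avg > sla) {
                    failed = true;
                }
            }
            // A non-zero exit code is what makes Jenkins mark the build as failed.
            System.exit(failed ? 1 : 0);
        }
    }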
I have my site running on a server behind my corporate firewall. I need some libraries that will give me some insights into load testing. Specifically I need:
Provide results for the maximum concurrent tests the platform can support at this time.
Steps to scale the application to 20 concurrent tests.
Provide results for the maximum tests per day the platform can support at this time.
Steps to scale the application to a minimum of 10,000 tests per day.
What is the report generation time to load ratio?
Are there some open source APIs that can give me some insights on the above points?
Your use cases include performance testing as well as tuning.
Performance engineering (testing + tuning + reporting) is a big task. It depends on the system under test, the SLAs defined, possible bottlenecks and optimizations, and it takes time/experience to do all these things.
For the use cases:
1. Provide results for the maximum concurrent tests the platform can support at this time.
2. Steps to scale the application to 20 concurrent tests.
3. Provide results for the maximum tests per day the platform can support at this time
Load testing and stress testing are required. JMeter is an open source tool written in Java which can be used for load testing and stress testing, and it will help you find the maximum capacity of the server today. Please have a look at JMeter and the tutorial on building a test plan, Building a Test Plan in JMeter.
4. Steps to scale the application to a minimum of 10,000 tests per day
Performance tuning is required to scale the application to 10,000 tests per day if it doesn't currently meet the required SLA. The performance testing results will help to identify possible bottlenecks in the system which you should then try to improve.
DB - add indexes, tune parameters, improve queries, partition the DB, etc.
App server - tune server configuration, caching, connection pooling, code improvements, logging, etc.
Similarly, the other tiers should be looked at and their performance improved.
5. What is the report generation time to load ratio?
Generally, testing and data setup take more time than analysis and resolution. Many free tools are available for JMeter result/log analysis.
In addition, commercial tools like LoadRunner and Rational Performance Tester, as well as Selenium-based setups, are also available for load testing. I prefer JMeter because of its simplicity and configurability (a lot of support is available through custom plugins).
I hope this was helpful.
I have been tasked with looking for a performance testing solution for one of our Java applications running on a Weblogic server. The requirement is to record production requests (both GET and POST including POST data) and then run these requests in a performance test environment with a copy of the production database.
The reasons for using production requests instead of a test script are:
It is a large application with no existing test scripts, so it would be a large amount of work to write scripts to cover the entire application.
Some performance issues only appear when users do a number of actions in a particular order.
To test using actual user interaction with the system, not an estimation of how users may interact with the system. We all know that users will do things we have not thought of.
I want to be able to fix performance issues and rerun the requests against the fixed code before releasing to production.
I have looked at using JMeter's Access Log Sampler with the server access logs; however, the access logs do not contain POST data and the Access Log Sampler only looks at the request URL, so it cannot simulate users submitting form data.
I have also looked at using the JMeter HTTP Proxy Server; however, this can record the actions of only one user and requires the user to configure their browser to use the proxy. The same limitation exists with Tsung and The Grinder.
I have looked at using Wireshark and Tcpreplay, but recording at the packet level is excessive and will not give any useful reports at the request level.
Is there a better way to analyze production performance considering I need to be able to test fixes before releasing to production?
That is going to be a hard ask. I work with Visual Studio Test Edition to load test my applications, and we are only able to "estimate" the users' activity on the site.
It is possible to look at the logs and gather information on the likelihood of certain paths through your app. You can then look at the production database to find the likely values entered in any POST requests. From that you will have to build load tests that approximate the usage patterns of your production site.
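For example, a small utility along these lines (assuming the common/combined access log format, with the log file path passed on the command line) could give you the request mix to base those load tests on:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    import java.util.stream.Stream;

    // Counts how often each request path appears in an access log so the mix of
    // transactions in the load test can mirror production. Assumes the request
    // line is quoted in the log, e.g. "GET /path HTTP/1.1".
    public class AccessLogPathStats {

        private static final Pattern REQUEST = Pattern.compile("\"(GET|POST) ([^ \"]+)");

        public static void main(String[] args) throws IOException {
            Map<String, Integer> counts = new HashMap<>();
            try (Stream<String> lines = Files.lines(Paths.get(args[0]))) {
                lines.forEach(line -> {
                    Matcher m = REQUEST.matcher(line);
                    if (m.find()) {
                        counts.merge(m.group(1) + " " + m.group(2), 1, Integer::sum);
                    }
                });
            }
            // Print the 20 most frequent method + path combinations.
            counts.entrySet().stream()
                  .sorted(Map.Entry.<String, Integer>comparingByValue(Comparator.reverseOrder()))
                  .limit(20)
                  .forEach(e -> System.out.println(e.getValue() + "\t" + e.getKey()));
        }
    }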
With any current tools I don't think it is possible to record and play back actual user interaction.
It is possible to alter your web app so that it records and logs every request and POST against session and datetime. This custom logging could then be used to generate load test requests against a test website. This would be a serious code change to your existing site and would likely have performance impacts.
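A rough sketch of what such a logging filter could look like, assuming the standard Servlet API (the class name is made up, and writing to stdout is only for illustration; a real deployment would use a dedicated, asynchronous log):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    // Logs every request with timestamp, session id, method, URI and submitted
    // parameters (which covers standard form POST data). Reading raw request
    // bodies (e.g. JSON) would require wrapping the request, omitted here.
    public class RequestRecordingFilter implements Filter {

        @Override
        public void init(FilterConfig filterConfig) { }

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            if (req instanceof HttpServletRequest) {
                HttpServletRequest http = (HttpServletRequest) req;
                StringBuilder line = new StringBuilder()
                        .append(System.currentTimeMillis()).append('|')
                        .append(http.getSession(true).getId()).append('|')
                        .append(http.getMethod()).append('|')
                        .append(http.getRequestURI()).append('|');
                http.getParameterMap().forEach((name, values) ->
                        line.append(name).append('=').append(String.join(",", values)).append('&'));
                System.out.println(line); // replace with a proper log appender
            }
            chain.doFilter(req, res);
        }

        @Override
        public void destroy() { }
    }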
That said, I have worked with web apps that do this level of logging and the ability to analyse the exact series of page posts/requests that caused an error is quite valuable to a developer.
So in summary: It is possible, but I have not heard of any off the shelf tools that do it.
Please check out this whitepaper by Impetus Technologies on this page: http://www.impetus.com/plabs/sandstorm.html
Honestly, I'm not sure the task you're being asked to do is even possible, let alone a good idea. Depending on how complex the application's backend is, and how perfectly you can recreate the state (i.e. all the way down to external SOA services or the time/clock), it may not be possible to make those GET and POST requests reproduce the same behavior.
That said, performance testing against production data is always great, but it usually requires application-specific knowledge that will stress said data. Simply repeating HTTP GETs and POSTs will almost certainly not yield useful results.
Good luck!
I would suggest the following to get the production requests and simulate an accurate workload:
1) Use Coremetrics: Coremetrics provides solutions with which you can learn the application's usage patterns. This would help in coming up with an accurate workload model. The model can then be converted into test scripts and executed against a masked copy of the production database. This will give you accurate results about the application's performance under realistic conditions.
2) Another option would be to create a small utility using AOP (aspect-oriented programming) so that it can trace all the requests and the corresponding method calls. This would help in identifying the production usage pattern and, in turn, in simulating the workload accurately. AOP frameworks such as AspectJ can be used. This would not require any changes to the code; the instrumentation can be done on the fly. The other benefit is that it can be enabled only for a specific time window and then turned off.
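As an illustration, a minimal AspectJ aspect along these lines could time and log every matched method; the pointcut expression and package are examples and would need to be adapted to your application:

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    // Times every method in an assumed package and logs the result, giving a
    // picture of which calls are actually exercised in production and how long
    // they take. Weave it load-time or compile-time with AspectJ.
    @Aspect
    public class RequestTraceAspect {

        // "com.example.web" is an illustrative package; adjust to your app.
        @Around("execution(* com.example.web..*(..))")
        public Object trace(ProceedingJoinPoint pjp) throws Throwable {
            long start = System.nanoTime();
            try {
                return pjp.proceed();
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                // Replace with a proper logger in a real setup.
                System.out.printf("%s took %d ms%n",
                        pjp.getSignature().toShortString(), elapsedMs);
            }
        }
    }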
Regards,
batterywalam
What are the various performance testing scenarios to be considered for a website with huge traffic? Is there any way to identify the elements of the code which are adversely affecting site performance?
Please provide something like a checklist of generalised scenarios to be tested to ensure proper performance testing.
It would be good to start with a load testing tool like JMeter or PushToTest and start running it against your web application. JMeter simulates HTTP traffic and loads the server that way. You can do that, as well as load test the AJAX parts of your application, with PushToTest because it can use Selenium scripts.
If you don't have the resources (computers to run load tests) you can always use a service like BrowserMob to run the scripts against a web accessible server.
It sounds like you need more of a test plan than a suggestion of tools to use. In performance testing, it is best to look at the users of the application -
How many will use the application on a light day? How many will use the app on a heavy day?
What type of users make up your user population?
What transactions will each of these user types perform?
Using this information, you can identify the major transactions and come up with different user levels (e.g. 10, 25, 50, 100) and percentages of user types (30% user A, 50% user B, ...) to test these transactions with. Time each of these transactions for each test you execute and examine how the transaction times change as compared to your user levels.
After gathering some metrics, you should be able to narrow transactions down to individual pieces of code and thus know where to focus your code improvements. If you still need to narrow things down further, finer-grained tests within each transaction can be created to provide more granular results.
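As a toy illustration of the user-level idea (not a replacement for a proper load tool), the sketch below drives a placeholder doTransaction() at several concurrency levels and prints the average time per level:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Runs a transaction at different concurrency levels and reports the average
    // elapsed time. doTransaction() is a stand-in for whatever the real
    // transaction driver is (an HTTP call, a service method, etc.).
    public class ConcurrencyProbe {

        static void doTransaction() throws Exception {
            Thread.sleep(50); // placeholder work
        }

        public static void main(String[] args) throws Exception {
            int[] userLevels = {10, 25, 50, 100};
            for (int users : userLevels) {
                ExecutorService pool = Executors.newFixedThreadPool(users);
                List<Future<Long>> timings = new ArrayList<>();
                for (int i = 0; i < users; i++) {
                    timings.add(pool.submit(() -> {
                        long start = System.nanoTime();
                        doTransaction();
                        return (System.nanoTime() - start) / 1_000_000;
                    }));
                }
                long total = 0;
                for (Future<Long> f : timings) {
                    total += f.get();
                }
                pool.shutdown();
                System.out.printf("%d users: avg %d ms per transaction%n", users, total / users);
            }
        }
    }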
Concurrency will kill you here, as you need to test your maximum projected concurrent users, plus wiggle room, hitting the database, website, and any other web services simultaneously. It really depends on the technologies you're using, but if you have a large mix of interacting web technologies, you may want to check out NeoLoad. I've had nothing but success with this web stress tool, and the support is top notch if you need to emulate specific, complicated behavior (such as mocking AMF traffic, or using responses from web pages to dictate request behavior).
If you have a DB layer then this should be the initial focus of your attention, once the system is stable (i.e. no memory leaks or other resource issues). If the DB is not the bottleneck (or not relevant) then you need to correlate CPU/memory/disk I/O and network traffic with the increasing load and increasing response times. This gives you an idea of capacity and of the correlation (but not causation) with resource usage.
To find the cause of a given resource issue you need to set up a Six Sigma style project where you define the problem and perform root cause analysis in order to pinpoint the piece of code (or resource configuration) that is the bottleneck. Once you have done this a couple of times in your environment, you will notice patterns of workload, resource usage and countermeasures (solutions) that will guide you in your future performance testing 'projects'.
To choose the correct performance scenarios you need to go through the following basic checklist:
High priority scenarios from the business logic perspective. For example: login/order transactions, etc.
The scenarios most used by end users. Here you may need information from monitoring tools like New Relic.
Search / filtering functionality (if applicable)
Scenarios which involve different user roles/permissions
A performance test is a comparison test, either with the previous release of the same application or with the existing players in the market.
Case 1- Existing application
1) Carry out the test for the same scenarios as covered before to get a clear picture of the application's response before and after the upgrade.
2) If you need to dig deeper, you can go back to the database team to understand which functionalities receive the most requests. Also ask them for the average total number of requests on a typical day so that you can decide what user load and test duration to use.
Case 2- New Application
1) Look at existing market players and design your test around the critical functions of the rival product (e.g. Gmail might support many functions, but what is used most often is launch -> login -> compose mail -> inbox -> outbox).
2) At any time you can go back to your clients and ask what they consider to be the business-critical scenarios or the scenarios that will be used most often.