For functional automation we usually create a framework that is reusable for automating applications. Is there any way to create a performance testing framework in JMeter, so that we can use the same framework for performance testing of different applications?
If anyone knows, please help and provide more information about it.
You can consider JMeter itself a "framework" which already comes with test elements for building requests via different protocols/transports, applying assertions, generating reports, etc.
It is highly unlikely you will be able to re-use an existing script for another application, as JMeter acts on the protocol level, so the requests will differ from application to application.
There is a mechanism in JMeter that lets you re-use pieces of a test plan as modules so you won't have to duplicate your code: check out Test Fragments and the Module Controller. However, it is more applicable within a single application.
The only "framework-like" approach I can think of is adding your JMeter tests to a continuous integration process, so you have a build step which executes the performance tests and publishes reports. That way you re-use the same test setup and reporting routine, and the only thing that changes from application to application is the .jmx test script(s). See the JMeter Maven Plugin and/or JMeter Ant Task for more details.
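As an illustration of that idea, the harness below runs whatever .jmx file the current application provides, using JMeter's embedded engine. It is a minimal sketch: the class name, argument handling and paths are assumptions, and it presumes a local JMeter installation whose properties can be loaded.

    import org.apache.jmeter.engine.StandardJMeterEngine;
    import org.apache.jmeter.save.SaveService;
    import org.apache.jmeter.util.JMeterUtils;
    import org.apache.jorphan.collections.HashTree;

    import java.io.File;

    public class RunPerformanceTest {
        public static void main(String[] args) throws Exception {
            String jmeterHome = args[0]; // e.g. the apache-jmeter-x.y directory (assumption)
            String jmxFile = args[1];    // the per-application test plan

            // Point the embedded engine at an existing JMeter installation.
            JMeterUtils.setJMeterHome(jmeterHome);
            JMeterUtils.loadJMeterProperties(jmeterHome + "/bin/jmeter.properties");
            JMeterUtils.initLocale();

            // Load the application-specific .jmx and run it with the shared setup.
            SaveService.loadProperties();
            HashTree testPlanTree = SaveService.loadTree(new File(jmxFile));

            StandardJMeterEngine engine = new StandardJMeterEngine();
            engine.configure(testPlanTree);
            engine.run();
        }
    }

In a CI build this harness (or the Maven/Ant plugins mentioned above) would be invoked once per application, with only the .jmx argument changing.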
You must first ask yourself: how dynamic is the conversation I am attempting to replicate? If you have a very stable services API, where the exposed external interface is static but the code handling it on the back end changes, then you have a good shot at building something with a long life.
But if you are like the majority of web sites in the universe, then you are dealing with developers who are always changing something: adding a resource, adding or deleting form values (hidden or not), headers, etc. In this case you should consider your scripts perishable, with a limited life, and accept that you will need to rebuild them at some point.
Having noted the limited lifetime of a piece of code that tests a piece of code with a limited lifetime, are there techniques you can use to insulate yourself? Yes. The rule of thumb is that the higher up the stack you build your test scripts, the more insulated you are from changes under the covers (assuming the layer you build to is stable). The trade-off is that the more intelligence sits under the covers of your test interface, the higher the resource cost for each virtual user, which in turn dictates more hosts for test execution and more skew from client-side code that can distort the view of what is coming from the server. As an example, run a Selenium script instead of a bare JMeter script: a browser is invoked, you get the benefit of all the local JavaScript processing to handle the dynamic changes, and your script has a longer life.
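A rough sketch of what that higher-level approach looks like with Selenium WebDriver in Java; the URL is a placeholder and the timing is deliberately crude, since the point is only that the browser, not the script, deals with the dynamic parts of the page.

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    import java.time.Duration;
    import java.time.Instant;

    public class BrowserLevelTimingSketch {
        public static void main(String[] args) {
            // Driving a real browser: local JavaScript handles the dynamic values,
            // so the script survives back-end changes longer than a raw HTTP script,
            // at the cost of far more client-side resources per virtual user.
            WebDriver driver = new ChromeDriver();
            try {
                Instant start = Instant.now();
                driver.get("https://example.com/dashboard"); // placeholder URL
                Duration elapsed = Duration.between(start, Instant.now());
                System.out.println("Page ready in " + elapsed.toMillis() + " ms");
            } finally {
                driver.quit();
            }
        }
    }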
I have a system that is due to undergo load testing. Liaising with the load testing team has been difficult and less than helpful, but to cut a long story short, they claim the load testing tool they will use cannot mock/simulate the functionality of the system's external dependencies (a web service and a database). As a result I am questioning their choice of tool.
In more detail, the system I have created receives an HTTP request; after some processing it makes a call to a third-party database (which we do not control) to retrieve more data; this then undergoes further processing, and finally a call is made to a client (which client depends on the data input).
So, all in all, we need to know how the server handles the load during the two processing stages, and the timing of the calls to the database and the final client.
My question is essentially this: can this be done with a load testing tool?
If so, which tools can do this?
The load testers plan to use NeoLoad and claim it is not possible; is this true?
We do load and resilience tests similar to what you described with the help of the Gatling load tool. It is a very flexible tool based on the Scala language and allows you to build different load scenarios. However, it requires coding skills from the performance engineers.
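For orientation only: recent Gatling releases also ship a Java DSL alongside the Scala one. The sketch below assumes that DSL, a placeholder base URL and an arbitrary injection profile, so treat it as the shape of a simulation rather than a drop-in script.

    import static io.gatling.javaapi.core.CoreDsl.*;
    import static io.gatling.javaapi.http.HttpDsl.*;

    import io.gatling.javaapi.core.ScenarioBuilder;
    import io.gatling.javaapi.core.Simulation;
    import io.gatling.javaapi.http.HttpProtocolBuilder;

    public class BasicLoadSimulation extends Simulation {

        HttpProtocolBuilder httpProtocol =
                http.baseUrl("https://example.com"); // placeholder system under test

        ScenarioBuilder scn = scenario("Basic load")
                .exec(http("home page").get("/"));

        {
            // Ramp 100 virtual users over 60 seconds.
            setUp(scn.injectOpen(rampUsers(100).during(60)))
                    .protocols(httpProtocol);
        }
    }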
I have to test my REST API such that 100k API calls are made simultaneously (within 500 ms). Any idea how to simulate this? Which utility should I use?
I would suggest using JMeter too; it lets you run your test from multiple concurrent JMeter servers. Just to be clear: you can control multiple remote JMeter engines from a single JMeter client and replicate a test across many computers, thus simulating a larger load on the target server.
To be honest, your target is quite high (100k API calls within 500 ms), i.e. you'll need a lot of JMeter servers. When you create stress tests there are no magical recipes, guides or manuals; trial and error is a fundamental method for solving this kind of problem.
In my experience, I first try with a few concurrent users and see how the server reacts, then increase the number of concurrent users until I reach an intolerable performance decrease or, worse, a bottleneck.
http://jmeter.apache.org/usermanual/remote-test.html
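To get a feel for how many calls a single load generator can realistically fire before you distribute the test, a quick sketch with Java's built-in HttpClient can help; the endpoint and the per-generator call count are assumptions, and this is a measurement aid, not a substitute for a proper JMeter plan.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;

    public class SingleGeneratorProbe {
        public static void main(String[] args) {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/api/resource")) // placeholder endpoint
                    .GET()
                    .build();

            int calls = 1_000; // one generator's share; 100k needs many machines
            List<CompletableFuture<HttpResponse<String>>> inFlight = new ArrayList<>();

            long start = System.nanoTime();
            for (int i = 0; i < calls; i++) {
                inFlight.add(client.sendAsync(request, HttpResponse.BodyHandlers.ofString()));
            }
            CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            System.out.println(calls + " calls completed in " + elapsedMs + " ms");
        }
    }

Measuring the elapsed time per generator tells you roughly how many machines you would need to fit 100k calls into the 500 ms window.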
You will obviously need a load testing tool which can be run in a distributed mode, i.e. 1 controller and X load generators executing the same test.
Grinder - scripts are written in Jython, a Python dialect
Apache JMeter - doesn't require any specific programming knowledge; you can create tests using a simple GUI
Tsung - written in Erlang, known for its capability to produce high loads even on low-end hardware
See the Open Source Load Testing Tools: Which One Should You Use? article for more information on the above tools.
JMeter
The Apache JMeter™ application is open source software, a 100% pure Java application designed to load test functional behavior and measure performance. It was originally designed for testing Web Applications but has since expanded to other test functions.
On our applications we have a lot of functional tests that run through Selenium.
We understand that it is good practice to have the server where the tests are run be as similar as possible to the production servers, and we try to follow that as much as possible.
But that is very hard to achieve 100%, so we have a different settings file for our server with some changes that we want in the staging environment (for example, we opt to turn e-mail sending off because of the additional architecture it would require).
In fact, lots of server frameworks recommend having an isolated front-controller (environment) for testing to easily achieve these small changes.
By default, most frameworks, such as ours, recommend that the testing environment have its cache turned off. Why?
If we want to emulate production as much as possible, what's the possible advantage of having the server's cache turned off when performing functional tests? There can be bugs that are only found with the cache on, and having it on might also have the benefit of speeding up our test execution!
Don't we just need to make sure that the cache is cleared before starting a new batch of functional tests, the same way we clear the cache when deploying a new version to production?
A colleague of mine suggests that the reason could be that the cache can generate false positives: errors that are caused not by badly implemented features (the main target of those tests) but by the cache system itself. But even if those really happen (I suppose it depends on how advanced the use of the cache is), why would they be false positives?
To best answer this question I will clarify some points.
(be aware that this is based on my experience)
Integration tests using the browser are typically "black box tests", which means that they are written without knowledge of the code; that is, without knowing whether the cache is being used or not.
These tests are usually designed around certain tasks that are performed during normal use of the system. However, these tasks are chosen for automation depending on certain conditions of use (mainly reusability and criticality/importance, but also the cost of implementation). So most of the time we will not need/want to test caching behaviour.
By convention, a test (any test) must be created with a single purpose and have as few dependencies as possible. Why?
When the test fails, we can quickly find the source of the failure.
Smaller tests are easier to extend, fix, remove...
We do not spend too much time debugging first the test code and then the system code.
Integration testing should follow this convention.
Answering the question:
If we want to check a particular task, we must isolate it as much as possible.
For example, if we want to verify that the user logs in correctly, we have to delete the cookies to be sure that they do not influence the result (because they may). If, on the other hand, we want to test the cookies, we have to somehow use an environment where they are not deleted.
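As a small illustration of that isolation, a Selenium WebDriver sketch in Java; the URL and element ids are assumptions, and the relevant point is clearing the cookies before the behaviour under test.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginIsolationSketch {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com/login");   // placeholder URL
                driver.manage().deleteAllCookies();        // isolate from any previous session
                driver.get("https://example.com/login");   // reload with a clean state

                driver.findElement(By.id("username")).sendKeys("testuser"); // ids are assumptions
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("loginButton")).click();
            } finally {
                driver.quit();
            }
        }
    }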
So, in short:
If there is a need to test the caching behaviour, then we need to create an "isolated" environment where this is possible.
The usual purpose of integration tests is to test the functionality, so the framework default is to have the cache disabled.
This does not mean that we shouldn't create our own environment to test the caching behaviour.
I have been tasked with looking for a performance testing solution for one of our Java applications running on a WebLogic server. The requirement is to record production requests (both GET and POST, including POST data) and then run these requests in a performance test environment with a copy of the production database.
The reasons for using production requests instead of a test script are:
It is a large application with no existing test scripts, so it would be a large amount of work to write scripts to cover the entire application.
Some performance issues only appear when users do a number of actions in a particular order.
To test using actual user interaction with the system, not an estimate of how the users may interact with it. We all know that users will do things we have not thought of.
I want to be able to fix performance issues and rerun the requests against the fixed code before releasing to production.
I have looked at using JMeter's Access Log Sampler with server access logs; however, the access logs do not contain POST data and the Access Log Sampler only looks at the request URL, so it cannot simulate users submitting form data.
I have also looked at using the JMeter HTTP Proxy Server; however, this can record the actions of only one user and requires the user to configure their browser to use the proxy. The same limitation exists with Tsung and The Grinder.
I have looked at using Wireshark and Tcpreplay, but recording at the packet level is excessive and will not give any useful reports at the request level.
Is there a better way to analyze production performance considering I need to be able to test fixes before releasing to production?
That is going to be a hard ask. I work with Visual Studio Test Edition to load test my applications, and we are only able to "estimate" the users' activity on the site.
It is possible to look at the logs and gather information on the likelihood of certain paths through your app. You can then look at the production database for the likely values entered in any POST requests. From that you will have to build load tests that approximate the usage patterns of your production site.
With any current tools I don't think it is possible to record and play back actual user interaction.
It is possible to alter your web app so that it records and logs every request and POST against session and datetime. This custom logging could then be used to generate load test requests against a test website. This would be a serious code change to your existing site and would likely have performance impacts.
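A servlet filter is one lightweight way to approximate that kind of recording; the sketch below (class name and log format are illustrative, and POST bodies are deliberately left out) just logs method, URI, session id and timestamp for every request.

    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    import java.io.IOException;
    import java.time.Instant;

    public class RequestRecordingFilter implements Filter {

        @Override
        public void init(FilterConfig filterConfig) {
        }

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest httpReq = (HttpServletRequest) req;
            // Log enough detail to replay the request later against a test site.
            System.out.printf("%s %s %s session=%s%n",
                    Instant.now(), httpReq.getMethod(), httpReq.getRequestURI(),
                    httpReq.getSession(true).getId());
            // Capturing POST bodies would additionally require wrapping the request
            // so its input stream can be read twice.
            chain.doFilter(req, res);
        }

        @Override
        public void destroy() {
        }
    }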
That said, I have worked with web apps that do this level of logging and the ability to analyse the exact series of page posts/requests that caused an error is quite valuable to a developer.
So in summary: It is possible, but I have not heard of any off the shelf tools that do it.
Please check out this whitepaper by Impetus Technologies on this page: http://www.impetus.com/plabs/sandstorm.html
Honestly, I'm not sure the task you're being asked to do is even possible, let alone a good idea. Depending on how complex the application's back end is, and how perfectly you can recreate the state (i.e. all the way down to external SOA services or the time/clock), it may not be possible to make those GET and POST requests reproduce the same behavior.
That said, performance testing against production data is always great, but it usually requires application-specific knowledge that will stress said data. Simply repeating HTTP GETs and POSTs will almost certainly not yield useful results.
Good luck!
I would suggest the following to get the production requests and simulate an accurate workload:
1) Use Coremetrics: Coremetrics provides solutions with which you can learn the application usage patterns. This helps in coming up with an accurate workload model. The model can then be converted into test scripts and executed against a masked copy of the production database. This will give you accurate results about the application's performance in real time.
2) Another option would be to create a small utility using AOP (an aspect-oriented approach) so that it can trace all the requests and the corresponding method calls. This would help in identifying the production usage pattern and, in turn, in simulating the workload accurately. AOP frameworks such as AspectJ can be used. This would not require any changes to the code; the instrumentation can be done on the fly. The other benefit is that it can be enabled only for a specific time window and then turned off.
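A sketch of what such an AspectJ-based tracer could look like; the package in the pointcut and the plain stdout logging are assumptions, and the point is that timing and call information is captured without touching the application code.

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class RequestTracingAspect {

        // Trace every public method under an assumed application package.
        @Around("execution(public * com.example.myapp..*.*(..))")
        public Object trace(ProceedingJoinPoint joinPoint) throws Throwable {
            long start = System.nanoTime();
            try {
                return joinPoint.proceed();
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println(joinPoint.getSignature() + " took " + elapsedMs + " ms");
            }
        }
    }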
We are setting up a Selenium test campaign on a big web application.
The first thing we did was build a framework which initializes SQL data in the database before the test, launches the test, archives the results and then clears the data.
We've integrated that into a Maven 2 process, run every day by TeamCity against a dedicated database.
We've set up several Selenium tests now, but they are not used as much as planned.
The reason is that tests sometimes break for reasons other than regressions (data may have changed, a stored procedure may have been recompiled, and so on).
I would like to know if there have been big successes in user interface testing and, if so, the reasons for them. Common errors would also interest me.
Testability helps a lot. The biggest win for testability in web apps is if all of the HTML elements you need to interact with on the page have unique and consistent attributes. If the attributes you are using to identify the HTML elements (for example via XPath locators in Selenium) are not consistent/reliable from build to build, or session to session, your test scripts will fail. Also, these attributes must be unique, so that the automation tool (in this case Selenium) can reliably find the element on the web page.
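To make the point concrete, a small Selenium sketch in Java contrasting a brittle structural XPath with a locator based on a stable, unique attribute; the page URL, the XPath and the id are all assumptions.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LocatorStabilitySketch {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com/checkout"); // placeholder page

                // Brittle: tied to page structure, breaks as soon as the layout changes.
                WebElement brittle = driver.findElement(
                        By.xpath("/html/body/div[2]/form/table/tbody/tr[3]/td[2]/input"));

                // Robust: depends only on a unique, stable attribute the developers control.
                WebElement stable = driver.findElement(By.id("submit-order"));

                stable.click();
            } finally {
                driver.quit();
            }
        }
    }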
If you want reliable unit tests, you need to have the same input, and the starting state of the database is part of that input. So you need to have the same starting database each time. Of course, if you wish to test with different input, you need to create another unit test (as the results will obviously not be the same).
When I do stuff like this, I always use the same database as a starting point. Of course, some of the tests might fail without modifying the database in the correct way, so some subsequent tests might fail as well even though they wouldn't otherwise. If your unit-test tool allows it, you should define dependencies between tests to make sure that those tests are not run at all when the 'parent' one fails.
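If the test tool is, for instance, TestNG (an assumption, since no tool is named here), declaring such a dependency is a one-line attribute; a minimal sketch:

    import org.testng.annotations.Test;

    public class OrderDependentTests {

        @Test
        public void createCustomer() {
            // Inserts the customer row that the next test relies on.
        }

        // Skipped (not reported as a failure) if createCustomer fails, so a broken
        // database state does not cascade into misleading failures further down.
        @Test(dependsOnMethods = {"createCustomer"})
        public void placeOrderForCustomer() {
            // Uses the customer created above.
        }
    }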
I use HttpUnit, which has the added benefit of working before any styling has been added to the page.
http://httpunit.sourceforge.net/
You can attach the tests to run in the integration-test phase of Maven 2.
From the site:
Written in Java, HttpUnit emulates the relevant portions of browser behavior, including form submission, JavaScript, basic http authentication, cookies and automatic page redirection, and allows Java test code to examine returned pages either as text, an XML DOM, or containers of forms, tables, and links.
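For a feel of the API, a minimal HttpUnit sketch; the URL and form field names are assumptions, and the interesting part is that the page and its forms are examined entirely from Java test code, with no browser involved.

    import com.meterware.httpunit.GetMethodWebRequest;
    import com.meterware.httpunit.WebConversation;
    import com.meterware.httpunit.WebForm;
    import com.meterware.httpunit.WebRequest;
    import com.meterware.httpunit.WebResponse;

    public class HttpUnitLoginSketch {
        public static void main(String[] args) throws Exception {
            WebConversation conversation = new WebConversation();

            // Fetch a page without launching a browser.
            WebRequest request = new GetMethodWebRequest("http://example.com/login"); // placeholder URL
            WebResponse response = conversation.getResponse(request);

            // Fill in and submit the first form on the page.
            WebForm form = response.getForms()[0];
            form.setParameter("username", "testuser"); // field names are assumptions
            form.setParameter("password", "secret");
            WebResponse afterLogin = form.submit();

            System.out.println("Title after login: " + afterLogin.getTitle());
        }
    }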