Using old JMeter versions - performance

Is there any impact from using an older version of JMeter? The JMeter best practices wiki page advises against using versions that are more than 3 versions behind the current one.
Please provide more input in this regard, since I am using a version that is more than 3 versions behind the current one.

It depends on the nature of your Test Plan and which Test Elements you're using. Newer JMeter releases normally contain bug fixes and performance improvements, so in theory you can achieve higher throughput with a newer JMeter version.
There is a JMeter Performance evolution across versions wiki page where you can see the trend of improving JMeter performance across versions, so if throughput is critical for you in terms of load test lab costs, it's better to consider upgrading as soon as possible.
On the other hand, if your test works fine you can continue using the earlier JMeter version in order to keep the test results consistent for regression testing purposes. Moreover, if your test relies on JMeter Plugins, some of them might simply stop working with a newer JMeter version due to JMeter API changes.
So for an existing project it is OK to keep the previous JMeter version, given the potential cost of migration, but for a new one it is highly recommended to stick to the latest JMeter version. In both cases, make sure to follow the JMeter Best Practices and the recommendations from the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article.
When migrating, always read the release notes so you know what to do:
https://jmeter.apache.org/changes.html#Incompatible%20changes
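
Before deciding, you can smoke-test an existing plan against a candidate release without touching your production test rig. Below is a minimal sketch using JMeter's Java API, assuming the candidate version's ApacheJMeter_core jar is on the classpath; the installation path and test plan name are placeholders.

import java.io.File;

import org.apache.jmeter.engine.StandardJMeterEngine;
import org.apache.jmeter.save.SaveService;
import org.apache.jmeter.util.JMeterUtils;
import org.apache.jorphan.collections.HashTree;

public class UpgradeSmokeTest {
    public static void main(String[] args) throws Exception {
        String jmeterHome = "/opt/apache-jmeter-5.6"; // placeholder installation path
        JMeterUtils.setJMeterHome(jmeterHome);
        JMeterUtils.loadJMeterProperties(jmeterHome + "/bin/jmeter.properties");
        JMeterUtils.initLocale();
        SaveService.loadProperties(); // required before deserialising a .jmx file

        System.out.println("Running against JMeter " + JMeterUtils.getJMeterVersion());

        // Load the existing test plan and run it on the candidate version.
        HashTree testPlan = SaveService.loadTree(new File("existing-test.jmx")); // placeholder plan
        StandardJMeterEngine engine = new StandardJMeterEngine();
        engine.configure(testPlan);
        engine.run(); // blocks until the test finishes; errors surface in jmeter.log
    }
}

If the plan loads and runs cleanly (plugins included), the migration risk is much lower; if it fails, the stack trace usually points at the incompatible element.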

Related

How to maintain JMeter scripts in an agile environment

We follow agile for our software development and get a new build every 2 days. How should I maintain my JMeter scripts, and which tool will help me maintain them so that I can see the gradual improvement or degradation of our product?
Most probably you're looking for a continuous integration tool. If you have a build server which compiles your product, runs unit tests, packages it, etc., you can add your JMeter tests as an extra step to act as a regression test.
If you don't have a specific continuous integration solution in mind, I'd recommend going for Jenkins: it's free, open source and Java-based, so you won't need to set up an "alien" runtime.
With regard to JMeter and Jenkins integration, there is the Performance Plugin, which is capable of displaying the performance trends you're looking for across builds and can also mark builds as unstable or failed depending on metric thresholds.
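
For illustration, here is a minimal sketch of what such an extra build step might look like if driven from Java code rather than a shell step; JMETER_HOME, test.jmx and result.jtl are placeholder names, and the resulting JTL file is what the Performance Plugin parses to chart trends across builds.

import java.io.File;

public class RunJMeterStep {
    public static void main(String[] args) throws Exception {
        String jmeterHome = System.getenv("JMETER_HOME"); // assumed to point at a JMeter install
        ProcessBuilder pb = new ProcessBuilder(
                jmeterHome + "/bin/jmeter",  // jmeter.bat on Windows
                "-n",                        // non-GUI mode, recommended for CI
                "-t", "test.jmx",            // the test plan kept under version control
                "-l", "result.jtl");         // results file for the Performance Plugin
        pb.inheritIO();                      // stream JMeter's output into the build log
        int exitCode = pb.start().waitFor();
        if (exitCode != 0) {
            throw new IllegalStateException("JMeter run failed with exit code " + exitCode);
        }
    }
}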

JMeter 2.13 vs JMeter 3.0: which is better for running a load test?

Currently we are using JMeter 2.13 for load testing. I am planning to migrate to JMeter 3.0.
I have not started working with JMeter 3.0 yet, so I don't know its pros and cons.
Please advise: should I upgrade my scripts to 3.0, or should I continue with JMeter 2.13?
Of course it's better to go with the newer version. Apart from the fact that new technologies show up every day and you need to be able to test them, there are always bug fixes, improvements, performance boosts, and so on.
For the full list of fixes, improvements and new features, refer to the change logs:
History of Previous Changes
By the way, if you are planning to move to a newer version, why not go to 3.2 directly?
3.1 to 3.2 Change Log
The general recommendation is to use the latest version of JMeter where possible.
Pros: newer JMeter versions normally come with new features, performance improvements, bug fixes, etc.
Cons:
there could be some incompatible changes (normally you will need to override some JMeter properties to revert JMeter's configuration to match the previous version's behaviour, restore deprecated elements if you are using them in your tests, etc.; see the sketch after this list)
results on a newer JMeter version might not be in line with what you used to have, i.e. due to the aforementioned performance improvements JMeter could generate higher loads
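
As an example of the first con, JMeter 3.0 changed the default results format from XML to CSV. The sketch below (the installation path is a placeholder) shows one way to restore the old behaviour by overriding the relevant property; the same override can be passed on the command line as -Jjmeter.save.saveservice.output_format=xml.

import org.apache.jmeter.util.JMeterUtils;

public class RestoreLegacyBehaviour {
    public static void main(String[] args) {
        // Load the stock properties first; the path is a placeholder.
        JMeterUtils.loadJMeterProperties("/opt/jmeter/bin/jmeter.properties");
        // Override the 3.0+ default (csv) with the pre-3.0 default (xml) so that
        // existing result parsers keep working after the upgrade.
        JMeterUtils.setProperty("jmeter.save.saveservice.output_format", "xml");
        System.out.println("Results format: "
                + JMeterUtils.getProperty("jmeter.save.saveservice.output_format"));
    }
}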

Suppressing bugs in next SonarQube analysis

We have started using SonarQube analysis for C# and JavaScript. Our application is an old one, so when we ran the analysis for the first time (for the first release) it reported bugs in the thousands. What we want now is to set a benchmark for bugs: when I run the next scan on the same project, I should not get the same thousands of defects again; instead it should report only the new bugs related to the current release (the second release). Is there something in SonarQube we can configure to set such a benchmark?
What you want is "fixing the leak". You can configure your quality gates to rely on the issues introduced during the leak period (instead of the absolute values).
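
If you also want to check the gate from a build script, SonarQube exposes the quality gate status over its web API. A minimal sketch; the server URL and project key are placeholders, and a real setup would also pass an authentication token.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class QualityGateCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder server and project key.
        String url = "https://sonar.example.com/api/qualitygates/project_status"
                + "?projectKey=my-project";
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        // The JSON response contains projectStatus.status (OK/WARN/ERROR) plus the
        // individual new-code conditions configured in the quality gate.
        System.out.println(resp.body());
    }
}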

JMeter with a software versioning tool

Is it possible to use JMeter with a software versioning tool such as Git, so that the test cases for a larger project can be worked on by a team?
Also, are there any other tools that provide the same functionality that SVN gives, but for JMeter test scripts?
JMeter JMX tests are basically XML files, and since XML is a textual format it is version-control-system friendly, so it should not be a problem to store them under SVN, Git, Mercurial, whatever.
Also, if you work in a team you can additionally consider implementing a high-level test architecture based on Test Fragments, which can be used in the main Test Plan via Module Controllers. See the How to Manage Large JMeter Scripts With JMeter Test Fragments article for more details on the approach.
VisualSVN is free. You can also create a private repository on GitHub (not free).
If more than one person is working on a JMeter script, then I would suggest you look into creating modular/reusable modules in JMeter.
Source: http://www.testautomationguru.com/jmeter-modularizing-test-scripts/
You can use any version control tool. Git works very well with JMeter scripts.
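
If you want to automate the bookkeeping from Java (for example, committing a regenerated JMX file from a build step), the JGit library can drive Git directly. A minimal sketch, where the repository directory, file name, and commit message are placeholders:

import java.io.File;

import org.eclipse.jgit.api.Git;

public class CommitJmxScript {
    public static void main(String[] args) throws Exception {
        File repoDir = new File("perf-tests"); // placeholder working directory
        File gitDir = new File(repoDir, ".git");
        // Open the repository if it exists, otherwise initialise a fresh one.
        try (Git git = gitDir.exists() ? Git.open(repoDir)
                                       : Git.init().setDirectory(repoDir).call()) {
            git.add().addFilepattern("login-test.jmx").call(); // JMX is plain XML, so it diffs cleanly
            git.commit().setMessage("Update login test plan").call();
        }
    }
}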

Automated Software Versioning integrated with Issue Control System

I decided to use the following pattern after reading about semantic versioning at http://semver.org/. However, I have some unresolved questions about automating and integrating the SDLC tools.
Version Pattern:
major.minor.revision.build
Such that:
Major: major changes; should be incremented manually.
Minor: minor changes; should be incremented automatically whenever a new feature or an enhancement to an existing feature is resolved in the issue tracking system.
Revision: changes not affecting the minor version; should be incremented automatically whenever a bug is resolved in the issue tracking system.
Assume that developers never commit source unless an issue has been resolved in the issue tracking system, and that the issue tracking system in this configuration is JIRA. This means there are bugs, improvements, and new features as issue types by default, apart from tasks.
Furthermore, I am adding a continuous integration tool to this configuration, and assume that it is Bamboo (by the way, I have never used Bamboo before; I used Hudson). I am using the Eclipse IDE with the Mylyn plugin, and the project is a Maven (web) project.
Now I want to clarify what I want to do by illustrating the following scenario. An analyst (A) opens an issue (I), which is a new feature, related to a Maven project (P). As a developer (D), I receive an email about the issue, and I open the task via the Mylyn interface in Eclipse. I analyse and develop the new feature related to issue (I). Assume I am a Test-Driven Development oriented developer, so I write the unit, DBUnit, and user-acceptance tests (for example, using Selenium) accordingly. Finally, I commit the changes to source control. I think the rest should be cycled automatically, but I don't know how I can achieve this. The auto-cycled part is the following:
The source control system should have a post-commit hook script that triggers the continuous integration tool to build the project (P). While building, the test code should be run in the proper phase and its reports generated. The user-acceptance tests should be performed on a dedicated server (for example, JBoss or Tomcat); the order should be: start the server, run the UA tests, generate the UA test reports, and stop the server. If all these steps complete successfully, the versioning should be performed.
In the versioning part, a Maven plugin, or whatever else, should take the number of issues resolved from the issue tracking system and increment the related version fragments (minor and revision), and finally append the build number. The version fragments may be saved in the manifest file in order to show them in the user interface. Last but not least, the CI tool should deploy the artifact to the test environment. That's the whole auto-cycled process I want.
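
The versioning part of this cycle can be sketched independently of Bamboo. Assuming a JIRA instance with the default issue types, something along the following lines could count the resolved issues and assemble the version string; the host, JQL, and the crude JSON handling are placeholder assumptions, and a real build step would use a JSON library and authentication.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VersionFromJira {
    // Count issues matching a JQL query via JIRA's REST search endpoint.
    static long countIssues(HttpClient http, String jql) throws Exception {
        String url = "https://jira.example.com/rest/api/2/search?maxResults=0&jql="
                + URLEncoder.encode(jql, StandardCharsets.UTF_8);
        HttpResponse<String> resp = http.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        // Crude extraction of {"total":N,...}; use a JSON parser in real code.
        Matcher m = Pattern.compile("\"total\"\\s*:\\s*(\\d+)").matcher(resp.body());
        return m.find() ? Long.parseLong(m.group(1)) : 0;
    }

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        int major = 1; // bumped manually, per the pattern above
        long minor = countIssues(http,
                "project = P AND issuetype in (\"New Feature\", Improvement) AND status = Resolved");
        long revision = countIssues(http,
                "project = P AND issuetype = Bug AND status = Resolved");
        String build = System.getenv().getOrDefault("BUILD_NUMBER", "0"); // set by the CI server
        System.out.println(major + "." + minor + "." + revision + "." + build);
    }
}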
Should the deployment of the artifact to the production environment be done automatically or manually?
Let's start with the side question: on automatic deployment to production, this requires the sign-off of "the business", whoever that is. How good do your tests need to be to push to production automatically? Are they good enough that you trust things to just go live? What's your downtime? Is that acceptable? If your tests miss something, can you roll back? Are you monitoring production so you know if you've introduced problems? Generally, the answers to enough of these questions are negative enough that you can't auto-deploy there as the result of a build/autotest event.
As for the tracking, you'll need a few things. You'll need all your assumptions to hold (which I doubt they do, but if you get there, that's awesome). You'll also need a build number that can be incremented after build time based on test results. You'll need source changes to be annotated with bug ids. You'll need the build system to parse the source changes and make associations with issues. You'll need an API into the build system so you can get the count of issues associated with the build. Finally, you'll need your own bit of scripting to do the query and update the build number accordingly.
That's totally doable, but is it really worth having? What value do you attach to the numbering scheme?
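
To make the "source changes annotated with bug ids" step concrete, here is a minimal sketch of extracting JIRA-style issue keys from commit messages so the build system can associate commits with issues; the sample messages are invented.

import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IssueKeyExtractor {
    // Matches JIRA-style keys such as PROJ-123.
    private static final Pattern ISSUE_KEY = Pattern.compile("\\b[A-Z][A-Z0-9]+-\\d+\\b");

    public static Set<String> extract(List<String> commitMessages) {
        Set<String> keys = new TreeSet<>();
        for (String message : commitMessages) {
            Matcher m = ISSUE_KEY.matcher(message);
            while (m.find()) {
                keys.add(m.group());
            }
        }
        return keys;
    }

    public static void main(String[] args) {
        System.out.println(extract(List.of(
                "PROJ-42: add login feature",
                "Fix NPE in checkout (PROJ-57)"))); // prints [PROJ-42, PROJ-57]
    }
}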

Resources