JMeter Report Making Approach

Usually I run my JMeter tests multiple times and pick a consistent result out of all the runs, then use its statistics to build the JMeter report.
But someone on my team says we should calculate the average of all runs and use that for the report.
If I do so, I cannot generate the built-in graphs that JMeter provides, and the statistics I present are no longer the original results; they are altered by the averaging.
Which is the better approach to follow?

I think you can use the Merge Results tool to join two or more result files into a single one and apply your analysis solution(s) to the aggregated result. Moreover, you will be able to compare the results of different test runs.
You can install the tool using the JMeter Plugins Manager.
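If you would rather script the merge yourself, CSV-format .jtl result files can simply be concatenated. A minimal sketch, assuming all runs were saved with the same CSV columns and a header row (the file names here are hypothetical):

```python
import csv
import glob

def merge_jtl_files(pattern, out_path):
    """Concatenate several CSV-format JMeter result files (.jtl) into one,
    keeping a single header row so listeners can load the merged file."""
    header_written = False
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for path in sorted(glob.glob(pattern)):
            with open(path, newline="") as f:
                reader = csv.reader(f)
                header = next(reader)  # each run's JTL starts with a header row
                if not header_written:
                    writer.writerow(header)
                    header_written = True
                writer.writerows(reader)

# Example: merge_jtl_files("run_*.jtl", "merged.jtl")
```

The merged file can then be fed to JMeter's listeners or the report generator as if it were a single run, which keeps the raw samples intact instead of averaging them away.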

I am currently developing a tool based on Python + Django + Postgres which helps to run/parse/analyze/monitor JMeter load tests and compare results. It is at an early stage but already not so bad (though sadly poorly documented):
https://github.com/v0devil/JMeter-Control-Center
There is also a static report generator based on Python + Pandas. Maybe you can modify it for your tasks :) https://github.com/v0devil/jmeter_jenkins_report_generator

How do I stress test CouchDB?

I am using CouchDB and I need to stress test the database to understand its performance limits.
I have never done any sort of performance testing, so I decided to write a piece of code to do so:
import couchdb

couch = couchdb.Server('xxxxxxx')
db = couch["performance_test"]

doc = {
    "test": "test",
    "db": "couch"
}
db.save(doc)
This is just a beginning, and I need to add a lot to make it useful, for example benchmarks and measurements; I am not sure what to do there. Before I keep going and complete the code, I want to make sure I am on the right path. Is this the best way to do it, or should I use a specific tool? Any insight is appreciated.
First you need to figure out what you're actually measuring -- concurrent read rates, write throughput etc.
There are lots of load testing frameworks around, for example Locust. Using an existing load testing tool gives you a lot of functionality you would otherwise have to implement yourself; Locust also makes nice graphs.
There are also plenty of tools and libraries that can help you generate random test data, such as faker in Python.
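Even before picking a framework, it helps to be explicit about the metric. A minimal sketch of measuring write throughput, using a stand-in for the real save call (the `fake_save` stub below is purely illustrative; swap in your actual CouchDB client call):

```python
import time

def measure_write_throughput(save_doc, n_docs=1000):
    """Time n_docs sequential writes and return the rate in docs/second."""
    start = time.perf_counter()
    for i in range(n_docs):
        save_doc({"_id": str(i), "test": "test"})
    elapsed = time.perf_counter() - start
    return n_docs / elapsed

# Stand-in for a real db.save; replace with your CouchDB client call.
store = {}
def fake_save(doc):
    store[doc["_id"]] = doc

rate = measure_write_throughput(fake_save, n_docs=1000)
print(f"{rate:.0f} docs/sec")
```

Sequential single-client writes are only one scenario; a framework like Locust adds the concurrent-client simulation, ramp-up, and reporting that a hand-rolled loop like this lacks.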

Find and run all scenarios where step is used?

I'm kind of new to SpecFlow, but I would like to find and run all scenarios where a given step is used. I know about the Ctrl+Shift+Alt+S option, but when a step is used 20+ times across many feature files it can be hard to test them all one after another. This question came to my mind when I updated a step and needed to retest it.
Specify a tag against the scenarios that contain that step -- these will then appear in the Test Explorer area if you filter based on 'Traits'. You can then run all scenarios with that tag.
So for example you would have
@TagHere
Scenario: Your Scenario
    Given
    When
    Then
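If tagging scenarios by hand is tedious, the lookup itself is easy to script. A rough sketch (the file layout and step text are assumptions) that lists every scenario in a directory of .feature files whose body contains a given step, so you know which ones to tag or rerun:

```python
import re
from pathlib import Path

def scenarios_using_step(feature_dir, step_text):
    """Return (file, scenario) pairs whose steps contain step_text."""
    hits = []
    for path in Path(feature_dir).glob("*.feature"):
        scenario = None
        for line in path.read_text().splitlines():
            stripped = line.strip()
            m = re.match(r"Scenario(?: Outline)?:\s*(.+)", stripped)
            if m:
                scenario = m.group(1)
            elif scenario and step_text in stripped:
                hits.append((path.name, scenario))
                scenario = None  # report each scenario only once
    return hits

# Example: scenarios_using_step("Features", "I log in")
```

This is a plain-text scan, not a real Gherkin parser, so it will not follow Scenario Outline examples or backgrounds, but it is usually enough to locate the affected scenarios quickly.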

How to highlight a particular ( important ) it block in the report while running protractor script?

I have around 10 it blocks in my spec. Two of these it blocks test important features, and I want to highlight them in my report so that I can immediately see whether those features are working or not. Is there any way to achieve this?
I am using jasmine reporter with protractor for reporting.
Try the reporter below for better reporting:
https://www.npmjs.com/package/protractor-html-reporter-2
For a better view of the report, put the important it blocks in a separate describe block.
Hope it helps.

How do I setup esrally to use with elassandra and my own tests?

I'm wondering whether others have attempted benchmarking Elassandra (more specifically, I'm using express-cassandra) using esrally. I'm hoping not to spend much more time on esrally if it is not a good solution for testing Elassandra.
Reading the documentation, it looks like Rally is capable of starting from scratch: download Elasticsearch, install the source, build it, run it, connect, create a full schema, then start testing with data filling the schema (possibly with some random data), run queries, and so on.
I already have everything in place, and the only things I really want to see are:
Which of 10 different memory setups is faster.
Which types of searches work, and whether options 1, 2 and 3 from my existing software create drastic slowdowns or not.
Whether inserting while doing searches has an effect on the speed of my searches.
I'm not going to change many parameters other than the memory (-Xmx, -Xms, maybe some others, like cached rows in a separate heap). For sure, I want to run all the tests with the latest Elassandra and not consider rebuilding or anything of the sort.
From reading the documentation, there is no mention of Elassandra. I found a total of TWO PAGES on Google about testing Elassandra with esrally, which did not boost my confidence that it's doable...
I would imagine that I have to use the benchmark-only pipeline. That at least removes all the gathering of the source, building, etc. I guess it also reduces the number of parameters I get in the resulting benchmark, but I don't need all the details...
Have you had any experience with such a setup? (Elassandra + esrally)
Yes, esrally works with Elassandra using the --pipeline=benchmark-only option.
To automate the creation of Elassandra clusters to benchmark, you can use either ecm or the k8s helm chart.
For instance, using ecm:
ecm create bench_cluster -v 6.2.3.10 -n 3 -s -e
esrally --pipeline=benchmark-only --target-hosts=127.0.0.1:9200,127.0.0.2:9200,127.0.0.3:9200
ecm remove bench_cluster
For testing specific scenarios, you can write custom tracks.
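A custom track is essentially a JSON file plus data files. A minimal sketch along the lines of the track format described in the Rally docs (the index name, file names, counts, and throughput figures below are all placeholders, not values from a real benchmark):

```json
{
  "version": 2,
  "description": "Minimal Elassandra benchmark sketch",
  "indices": [
    { "name": "bench", "body": "index.json" }
  ],
  "corpora": [
    {
      "name": "bench-docs",
      "documents": [
        { "source-file": "documents.json", "document-count": 1000 }
      ]
    }
  ],
  "schedule": [
    { "operation": { "operation-type": "delete-index" } },
    { "operation": { "operation-type": "create-index" } },
    { "operation": { "operation-type": "bulk", "bulk-size": 500 }, "clients": 4 },
    {
      "operation": {
        "operation-type": "search",
        "body": { "query": { "match_all": {} } }
      },
      "clients": 2,
      "iterations": 500,
      "target-throughput": 50
    }
  ]
}
```

With a track like this you can vary the bulk and search operations to cover the insert-while-searching scenario from the question, and point Rally at it via its track-path option together with --pipeline=benchmark-only; check the Rally custom-track documentation for the exact schema of your version.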

JMeter- graph generator plugin

I have a graph generator plugin. I want to create graphs after I input the users in GUI mode. Do I have to run the script in advance and then run it again in order to see the graphs? I'm asking because the plugin wants the 'JMeter Results File', which won't exist if I haven't run the test.
There are two ways to make graphs: at run-time, or from old results. If you want to do the former, put it in your test and make sure you follow the instructions here:
http://jmeter-plugins.org/wiki/GraphsGeneratorListener/#Generate-CSV-PNG-for-current-test-results
Note that, like many listeners, this has a fairly high performance cost, so the documentation suggests you avoid using it in GUI mode.
Alternatively, you can run your normal test without this listener, then run a second 'fake' test with it to generate your graphs:
http://jmeter-plugins.org/wiki/GraphsGeneratorListener/#Generate-CSV-PNG-for-existing-previous-test-results
