I'm looking for a way to measure the execution time of my Cucumber steps. Using the JUnit formatter I managed to get some data about the execution time of features and scenarios, but I would like to see the times of the individual steps inside the scenarios as well.
Use the built-in usage formatter:
cucumber --format usage
From the documentation: "Prints where step definitions are used. The slowest step definitions (with duration) are listed first."
For Cucumber-JVM, the same formatter can be enabled via system properties: -Dcucumber.options="-p usage"
This sorts the step definitions by their average execution time.
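If you want to keep the report around, the formatter can also write to a file via --out (a minimal sketch; the file name is illustrative):
cucumber --format usage --out usage.txt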
I'm about a week into learning JMeter and I've run a few test scripts which generate a summary.csv containing the standard columns: Samples, Average, Median, etc.
[My Question]
I was wondering if there is a way to add a threshold to the summary.csv, so that if the average time is higher than x milliseconds, the user is informed that that specific result was slower than expected. (Maybe this can be displayed in the summary.csv itself; to be honest, I'm not sure what my options are for outputting this.)
I am aware that we can use assertions (specifically the Duration Assertion) in the test script, but the issue I have with assertions is that the test stops once an assertion fails, which prevents it from generating a summary.csv.
Thank you for any input/opinions you guys have :) It is much appreciated!
Have a great day and stay safe everyone!
Such thresholds are already there, and they're controllable via the following JMeter properties:
jmeter.reportgenerator.apdex_satisfied_threshold
jmeter.reportgenerator.apdex_tolerated_threshold
There is also a property which can apply thresholds to specific Samplers or Transaction Controllers: jmeter.reportgenerator.apdex_per_transaction
Just declare the properties with the values of your choice in the user.properties file, and the next time you generate the dashboard its APDEX section will reflect the thresholds.
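For example, a minimal user.properties sketch. The thresholds are in milliseconds, the values and the Login/Checkout sampler names are illustrative, and the per-transaction entry follows the name:satisfied|tolerated format (semicolon-separated entries) described in the JMeter manual:
# user.properties - APDEX thresholds, in milliseconds
jmeter.reportgenerator.apdex_satisfied_threshold=1500
jmeter.reportgenerator.apdex_tolerated_threshold=3000
# per-sampler/transaction overrides: name:satisfied|tolerated
jmeter.reportgenerator.apdex_per_transaction=Login:1000|2000;Checkout:2000|4000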
More information: JMeter HTML Reporting Dashboard - General Settings
I started monitoring a running JMeter script with, for example, JConsole, and noticed that the number of loaded classes goes up gradually, then quickly drops, then gradually goes up again.
Is this typical behaviour for JMeter? Maybe even for the JVM in general? Or could particular plug-in code be involved? I'm concerned that GC is run to decrease the number of classes, and that this could affect the continuity of the generated load. I was not able to find the answer via web search.
ADDED: The test plan includes HTTP Request samplers, JSON Assertions, and a Concurrency Thread Group to increase load in steps, plus randomization (Random Controller, Random Variable config element).
ADDED 2:
Following Dmitri's advice, I ran the test with JVM_ARGS="-Xlog:class+unload -Xlog:class+load" jmeter ... for about 60 minutes (a 3600-second test). I got around 116,000 classes loaded and 68,000 classes unloaded. The table below shows all unloaded classes with the number of times they were unloaded (jdk.nashorn.internal.scripts.Script accounts for most of the occurrences, and its mean time confirms the unloading happened throughout the test, not only at the start or the end). From a Jupyter notebook:
classname                                              count    mean time (s)
java.lang.invoke.LambdaForm                              978        29.757680
jdk.nashorn.internal.runtime.Context                      11        17.486000
jdk.nashorn.internal.scripts.ModuleGraphManipulator      305        17.489377
jdk.nashorn.internal.scripts.Script                    66845      2308.991561
Any additional advice? What should I look into further?
I would say this is due to the randomization, as the other test elements you mention are unlikely to increase the number of loaded classes or trigger unloading.
This is normal behavior of the JVM, which can unload classes if/when they are no longer referenced by the program.
You can add the following JVM options to your JMeter startup command:
-XX:+TraceClassLoading -XX:+TraceClassUnloading
(on Java 9+ the unified-logging equivalents are -Xlog:class+load and -Xlog:class+unload, which is what you used in your update), so you will be able to see exactly which classes are being loaded or unloaded at any given moment.
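For example, using the standard JVM_ARGS environment variable honoured by the jmeter startup script (a sketch of a non-GUI run; the test plan and results file names are placeholders):
JVM_ARGS="-XX:+TraceClassLoading -XX:+TraceClassUnloading" jmeter -n -t test.jmx -l result.jtl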
Unfortunately it is not possible to provide more information without seeing your test plan and JVM arguments. Just make sure you're following the recommendations from the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article, to be confident that your test will not crash due to a memory leak, as JMeter gives you plenty of freedom to shoot yourself in the foot.
I am using Ruby and Cucumber to run my end-to-end tests. I have lots of tests which take a long time to run. I am using 'parallel_tests' to run my features in parallel, which has reduced the execution time significantly. But I wanted to know if there is a way to run scenarios in parallel.
Yes! There is.
Using the cukeforker library you can run either features or scenarios in parallel.
https://github.com/jarib/cukeforker
# parallelize per scenario, with one JUnit XML file per scenario
require "cukeforker"

CukeForker::Runner.run CukeForker::Scenarios.tagged(%W[@edition ~@wip]),
  :extra_args => %W[-f CukeForker::Formatters::JunitScenarioFormatter --out results/junit]
I have a test suite of rspec tests which are divided into different files.
Every file represents one test scenario with some number of test steps.
Now, in some particular tests, it can happen that a specific step fails, and it is too time-consuming and unnecessary to run the rest of the steps in that scenario.
I know there is a --fail-fast option in RSpec, but if I'm running the tests like rspec spec/*, that means the whole execution is aborted as soon as the first step fails in any script.
I'm just looking for a mechanism to abort execution of the specific test scenario (test script) in which the failure happens, while continuing execution of the other test scenarios.
Thanks for the help,
Bakir
Use the rspec-instafail gem.
According to its documentation, it:
Show failing specs instantly. Show passing spec as green dots as usual.
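A minimal sketch of wiring it in from the command line, per the gem's README (the same flags can also go into your .rspec file):
rspec --require rspec/instafail --format RSpec::Instafail spec/*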
I have a set of cucumber tests that use Capybara to access a website and perform certain tasks. The tests run fine and at the end they output accurate information about whether or not the tests and steps failed or passed. For example,
1 scenario (1 failed)
3 steps (1 failed, 2 passed)
However, if I try to write a custom formatter, or even use one of the built-in formatters (such as progress or pretty), it shows all of the steps as skipped.
Does anyone know why this could be? Again, I think all of the steps are executing properly, and Cucumber reports at the end whether they failed or passed (as I would expect), but the formatters seem to always think the steps are being skipped.
If you're using a scenario outline, there's a limitation in the parser that causes them to be reported as skipped: https://github.com/cucumber/cucumber/issues/316
You can run Cucumber with the --expand flag (or -x for short) to output each step in the scenario outline for every row in the example table. Then the steps should be reported as passed or failed, as expected.
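For example (the feature path is illustrative):
cucumber --expand --format pretty features/my_feature.feature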