Karate DSL - running parallel features and scenarios

In the new release (0.9.0), I saw that Karate DSL is able to run tests in parallel at the scenario level (each feature is broken down into scenarios, which run one scenario per thread).
So for example, I have 4 features: for features 1 and 2 I want to run parallel tests at the scenario level, and for features 3 and 4 at the feature level (some cases require this).
Is there any solution or suggestion for how I can do this?

For a feature you don't want to run at the scenario level, you can add the @parallel=false tag to your feature (note that Gherkin tags start with @, not #):
@parallel=false
Feature:
Refer to the Karate documentation on suppressing parallel execution at the scenario level.
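As a sketch of that answer (following the tag convention in Karate's parallel-execution docs, where the tag sits on the line above the `Feature:` keyword), features 3 and 4 would each begin like this:

```gherkin
@parallel=false
Feature: feature 3
  # Scenarios in this feature run serially on a single thread;
  # features without the tag are still parallelized per scenario.
```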

Related

Output flaky scenarios in Cucumber output

I'm running tests in Cucumber using the --retry N option to reattempt failed tests N times, to catch tests that fail inconsistently. Currently the summary after running these tests in the terminal looks something like this:
100 scenarios (2 failed, 5 flaky, 1 skipped, 98 passed)
588 steps (9 failed, 24 skipped, 555 passed)
11m45.859s
Failing Scenarios:
cucumber features/some_feature.feature:13 # Scenario: AC.1 Some scenario
cucumber features/some_feature.feature:54 # Scenario: AC.6 Some other scenario
This lets me know what's failing; however, I'd also like a list of the flaky scenarios to help me diagnose what is failing inconsistently. Is there a way to set up Cucumber to do this?
The scenarios listed are the ones that fail the build (making the exit code non-zero). If you use the "--strict" or "--strict-flaky" option, the flaky scenarios will also be listed in the summary ("--strict" also lists the pending and undefined scenarios).
It's currently not possible to see flaky scenarios in the summary.
To change this, someone would have to submit a pull request changing console_issues.rb, and possibly the associated tests.

Running Cucumber Scenarios in Parallel of Single Feature File

I have 2 feature files that include multiple scenarios, and I want to run the scenarios of each feature file in parallel.
For example:
example1.feature has 2 scenarios
example2.feature has 3 scenarios
How can I run the five scenarios in parallel?
Thanks all :)
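One way to do this with ruby-cucumber (an assumption on my part, since the thread names no tool) is the parallel_tests gem: recent versions of its `parallel_cucumber` runner accept a `--group-by scenarios` option, so the five scenarios can be distributed across processes rather than whole files:

```
parallel_cucumber features/ -n 5 --group-by scenarios
```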

Cucumber: filter out scenarios at runtime

I'm looking for a way to manage ruby cucumber scenarios at runtime. I'd like to filter some scenarios out using information about the SUT that I can gather at runtime.
For instance, I have the following scenarios:
@automated
Scenario: As a customer I want to run ... scenario 1
Given ...
@automated @debug
Scenario: As a debugger I want to run ... scenario 2
Given ...
@automated
Scenario: As a customer I want to run ... scenario 3
Given ...
@automated @release
Scenario: As a releaser I want to run ... scenario 4
Given ...
I'm able to determine at runtime whether a debug or a release application is being tested. For the debug one I want scenarios 1, 2, 3 to run, but for the release app I want 1, 3, 4 to run.
I know how to do this using rake or some other wrapper script, but it would be better to find a solution without such wrappers.
Also, cucumber profiles might not be a good choice here, because there are several parameters, each with a number of values, so it might require a crazy number of combinations.
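One wrapper-free approach is to compute the decision inside the test process itself. The selection rule described above (run @debug-tagged scenarios only against a debug build, @release only against a release build, plain @automated always) can be sketched in plain Ruby; the `app_type` values and tag names are taken from the question, everything else is illustrative:

```ruby
# Decide whether a scenario should run, given its tag list and the
# app type ("debug" or "release") detected at runtime.
def run_scenario?(tags, app_type)
  return false unless tags.include?("@automated")
  return app_type == "debug"   if tags.include?("@debug")
  return app_type == "release" if tags.include?("@release")
  true # plain @automated scenarios run against either build
end

scenarios = {
  "scenario 1" => %w[@automated],
  "scenario 2" => %w[@automated @debug],
  "scenario 3" => %w[@automated],
  "scenario 4" => %w[@automated @release],
}

debug_run   = scenarios.select { |_, t| run_scenario?(t, "debug") }.keys
release_run = scenarios.select { |_, t| run_scenario?(t, "release") }.keys
```

In cucumber-ruby this check could live in a `Before` hook in `support/env.rb`; older cucumber versions exposed `scenario.skip_invoke!` to skip a scenario from a hook at runtime, though the exact API varies between cucumber releases, so treat this as a sketch rather than a drop-in solution.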

JMeter: What is a good test structure for load testing REST APIs?

I am load testing (baseline, capacity, longevity) a bunch of APIs (e.g. user service, player service) using JMeter. Each of these services has several endpoints (e.g. create, update, delete). I am trying to figure out a good way to organize my test plans in JMeter so that I can load test all of these services.
1) Is it a good idea to create a separate JMeter test plan (.jmx) for each API, rather than creating one test plan and adding thread groups like "Thread Group for User Service", "Thread Group for Player Service", etc.? I was thinking of one test plan per API, each with several Thread Groups for the different types of load testing (baseline, capacity, longevity, etc.).
2) When JMeter calculates the Sample Time (Response Time), does it also include the time taken by BeanShell processors?
3) Is it a good idea to put a Listener inside each Simple Controller? I am using JMeter Plugins for reporting, and I wanted to view the reports for each endpoint.
Answers to any or all of the questions would be much appreciated :)
I am using a structure like the one below for creating a test plan in JMeter.
1) I like a test plan to look like a test suite. JMeter has several ways of separating components and test requirements, so it is hard to set a rule. One test plan is likely to be more efficient than several, and can be configured to satisfy most requirements. I find there can be a lot of repetition between plans, which often means maintaining the same code in different places; it is better to use modules and includes with test fragments on the same plan to reduce that duplication.
Thread Groups are best used as user groups, but can be used to separate tests any way you please. Consider the scaling you need for different pages/sites, i.e. user/administrator tests can be done in different Thread Groups, so you can simulate, say, 50 users and 2 admins testing concurrently. Or you may distinguish front-end/back-end, or even pages/sites.
2) It does not include BeanShell pre- and post-processing times. (If you use a BeanShell Sampler, however, it depends on the code.)
3) Listeners are expensive, so fewer is better. To separate the results, you can give each sampler a different title, and the listeners/graphs can then group these as required. You can include timestamps or indexes as part of a sampler title using variables, properties, ${__javaScript}, etc. This will cause more or less grouping depending on the implementation you choose.
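As an illustration of the sampler-naming idea in 3) (the service and endpoint names here are placeholders, not from the question), a title like the following lets listeners group results per endpoint and per thread:

```
UserService-Create-thread${__threadNum}-${__time(HHmmss)}
```

Note the trade-off mentioned above: adding ${__time} makes each label more unique, producing finer-grained but noisier grouping, while dropping it aggregates all results for the endpoint.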

Tiered Parallel Execution w/ Failure Recovery in Jenkins

In the figure below, I want each level of jobs to run in parallel (as many as can run simultaneously on executors), and if one arbitrary job fails, I want things to run normally again after the problem is fixed (as if the job had not failed). That is, if the failed job builds successfully after the fix, I want the jobs at lower levels to start automatically.
I have seen that the Build Flow Plugin cannot do this. I hope someone has some brilliant ideas to share.
Thanks for your time.
For Further Clarification:
All the jobs at level x must succeed before any job at level x+1 starts. If some job at level x fails, I do not want any job at level x+1 to start. After fixing the problem, I re-run the job, and if it succeeds (and all the other jobs at level x have also succeeded), I want level x+1 to start building.
Referencing your diagram, I'll restate the requirements of your question (to make sure I understand it).
At Level 1, you want all of the jobs to run in parallel (if possible)
At Level 2, you want all of the jobs to run in parallel
At Level 3, you want all of the jobs to run in parallel
Any successful build of a Level 1 job should cause all Level 2 jobs to build
Any successful build of a Level 2 job should cause all Level 3 jobs to build
My answer should work if you do not require "Any failure at Level 1 will prevent all Level 2 jobs from running."
I don't believe this will require any other plugins. It just uses what is built into Jenkins.
For each Level 1 job, configure a Post Build action of "Build other projects"
The projects to build should be all of your Level 2 jobs. (The list should be comma-separated.)
Check "Trigger only if build succeeds"
For each Level 2 job, configure a Post Build action of "Build other projects"
The projects to build should be all of your Level 3 jobs.
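An alternative, if the Jenkins Pipeline plugins are available (they postdate and supersede the Build Flow plugin), is to encode the levels as stages in a `Jenkinsfile`; the job names below are placeholders. Because a failed `build` step fails its stage, no Level 2 stage starts until every Level 1 job has succeeded, which also covers the stricter requirement in the clarification above:

```groovy
pipeline {
    agent any
    stages {
        stage('Level 1') {
            parallel {
                stage('A') { steps { build job: 'level1-job-a' } }
                stage('B') { steps { build job: 'level1-job-b' } }
            }
        }
        stage('Level 2') {
            // reached only when all Level 1 jobs succeeded
            parallel {
                stage('C') { steps { build job: 'level2-job-c' } }
                stage('D') { steps { build job: 'level2-job-d' } }
            }
        }
    }
}
```

After a failure, fixing the underlying job and re-running the pipeline replays the levels in order, which matches the recovery behaviour asked for.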
