How can we run the same feature file on multiple browsers sequentially? [duplicate] - user-interface

I am able to execute a WebUI feature file against a single browser (Zalenium) using the parallel runner and a driver defined in karate-config.js. How can we execute a WebUI feature file against multiple browsers (Zalenium) using the parallel runner or distributed testing?

Use a Scenario Outline and the parallel runner. Karate will run each row of an Examples table in parallel. But you will have to move the driver config into the Feature.
Just add a parallel runner to this sample project and try: https://github.com/intuit/karate/tree/master/examples/ui-test
Scenario Outline: <type>
  * def webUrlBase = karate.properties['web.url.base']
  * configure driver = { type: '#(type)', showDriverLog: true }
  * driver webUrlBase + '/page-01'
  * match text('#placeholder') == 'Before'
  * click('{}Click Me')
  * match text('#placeholder') == 'After'

  Examples:
    | type        |
    | chrome      |
    | geckodriver |
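For reference, the parallel runner itself can be a plain JUnit test. A minimal sketch, assuming Karate 1.x with JUnit 5 (the feature path here is hypothetical; point it at the feature containing the Scenario Outline above):

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class UiRunnerTest {

    @Test
    void testParallel() {
        // 2 threads, so the chrome and geckodriver rows of the
        // Examples table can each drive their own browser at the same time
        Results results = Runner.path("classpath:ui-test.feature").parallel(2);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}

The thread count caps how many Examples rows run concurrently; with a single thread the rows would still all execute, just one browser at a time.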
There are other ways you can experiment with. Here is another pattern: keep a normal Scenario in main.feature, which you can then call from a Scenario Outline in a separate "special" feature, used only when you want this kind of parallelization of UI tests.
Scenario Outline: <config>
  * configure driver = config
  * call read('main.feature')

  Examples:
    | config!                  |
    | { type: 'chromedriver' } |
    | { type: 'geckodriver' }  |
    | { type: 'safaridriver' } |
EDIT - also see this answer: https://stackoverflow.com/a/62325328/143475
And for other ideas: https://stackoverflow.com/a/61685169/143475
EDIT - it is possible to re-use the same browser instance for all tests and the Karate CI regression test does this, which is worth studying for ideas: https://stackoverflow.com/a/66762430/143475

Related

Can't get tagging to work with cypress-cucumber-preprocessor

I am currently having issues with tagging using the cypress-cucumber-preprocessor package. I know that cypress-tags has been removed and made redundant, so I'm trying to set up tagging using the new syntax, but to no avail.
Here is my feature:
Feature: duckduckgo.com

  Rule: I am on a desktop

    Scenario: visiting the frontpage
      When I visit <site>
      Then I should see a search bar

      @google
      Examples:
        | site       |
        | google.com |

      @duckduckgo
      Examples:
        | site           |
        | duckduckgo.com |
And my step definitions:
import { When, Then } from "@badeball/cypress-cucumber-preprocessor";

When(`I visit` + url, () => {
  if (url === 'duckduckgo.com') return cy.visit("https://www.duckduckgo.com");
  if (url === 'google.com') return cy.visit("https://www.google.com");
});

Then("I should see a search bar", () => {
  cy.get("input").should(
    "have.attr",
    "placeholder",
    "Search the web without being tracked"
  );
});
When I try to run my tests with npx cypress run --env tags="@google", it gives me an error saying url in my step definitions isn't defined. What am I doing wrong?
Try adding a script with this command to your package.json file, like this:
"scripts": {
  "open:test": "npm run clean:report && cypress open --env configFile=test,TAGS=@test"
}
(or any tag you need)
And then use it as:
npm run open:test
The main difference, besides wrapping it in a script, is not using quotes; maybe this will help you.

Run specific part of Cypress test multiple times (not whole test)

Is it possible to run a specific part of a test in Cypress over and over again without executing the whole test case? I get an error in the second part of a test case, and the first half takes 100s. That means I have to wait 100s every time just to reach the point where the error occurs. I would like to rerun the test case from a few steps before the error occurs. So once again, my question is: is this possible in Cypress? Thanks
Workaround #1
If you are using cucumber in Cypress, you can modify your scenario into a Scenario Outline that executes N times, selected by a scenario tag:
@runMe
Scenario Outline: Visit Google Page
  Given that google page is displayed

  Examples:
    | nthRun |
    | 1      |
    | 2      |
    | 3      |
    | 4      |
    | 100    |
After that, run the test in the terminal through tags:
./node_modules/.bin/cypress-tags run -e TAGS='@runMe'
Reference: https://www.npmjs.com/package/cypress-cucumber-preprocessor?activeTab=versions#running-tagged-tests
Workaround #2
Cypress does have retry capability, but it only retries a scenario on failure. You can force your scenario to fail so that it is retried N times, again selected with a scenario tag:
In your cypress.json add the following configuration:
{
  "retries": {
    // Configure retry attempts for `cypress run`
    // Default is 0
    "runMode": 99,
    // Configure retry attempts for `cypress open`
    // Default is 0
    "openMode": 99
  }
}
Reference: https://docs.cypress.io/guides/guides/test-retries#How-It-Works
Next, in your feature file, add an unknown step as the last step of your scenario to make it fail:
@runMe
Scenario: Visit Google Page
  Given that google page is displayed
  And I am an unknown step
Then run the test through tags:
./node_modules/.bin/cypress-tags run -e TAGS='@runMe'
For a solution that doesn't require changing the config file, you can pass retries as a parameter to specific tests that are known to be flaky for acceptable reasons.
https://docs.cypress.io/guides/guides/test-retries#Custom-Configurations
Meaning you can write (from the docs):
describe('User bank accounts', {
  retries: {
    runMode: 2,
    openMode: 1,
  }
}, () => {
  // The per-suite configuration is applied to each test
  // If a test fails, it will be retried
  it('allows a user to view their transactions', () => {
    // ...
  })

  it('allows a user to edit their transactions', () => {
    // ...
  })
})

I cannot run my simulation program using OMNeT++ on Windows 7

I am learning OMNeT++ to simulate a network. The code in package.ned is shown below:
package helloworld.simulations;

import inet.networklayer.configurator.ipv4.FlatNetworkConfigurator;
import inet.node.inet.Router;
import inet.node.inet.StandardHost;

@license(LGPL);

network Network
{
    @display("bgb=519,314");
    submodules:
        Client: StandardHost {
            @display("p=82,217");
        }
        router: Router {
            @display("p=218,117");
        }
        Server: StandardHost {
            @display("p=361,198");
        }
        flatNetworkConfigurator: FlatNetworkConfigurator {
            @display("p=296,46;b=45,44");
        }
    connections:
        Client.ethg++ <--> router.ethg++;
        router.ethg++ <--> Server.ethg++;
}
And the code in omnetpp.ini is shown as follows:
[General]
network = helloworld.simulations.Network
**.Client.numTcpApps = 1
**.Client.tcpApp[0].typename = "TCPBasicClientApp"
**.Client.tcpApp[0].connectAddress = "Server"
**.Client.tcpApp[0].connectPort = 80
**.Client.tcpApp[0].thinkTime = 0s
**.Client.tcpApp[0].idleInterval = 0s
**.Server.numTcpApps = 1
**.Server.tcpApp[0].typename = "TCPEchoApp"
**.Server.tcpApp[0].localPort = 80
**.ppp[*].queueType = "DropTailQueue"
**.ppp[*].queue.frameCapacity = 10
However, when I run this program, I encounter an error (the question included a screenshot of the error, not reproduced here).
I do not know how to solve this problem. Thank you for your help!
Have you built INET? If yes, go to the MinGW console and type:
opp_run -h nedfunctions -l /d/omnetpp-5.1.1/Projects/inet/src/inet | grep firstAvailableOrEmpty
After -l there is a path to your libINET.dll file. You should see something like:
firstAvailableOrEmpty : string firstAvailableOrEmpty(...)
Accepts any number of strings, interprets them as NED type names (qualified or unqualified), and returns the first one that exists and
its C++ implementation class is also available. Returns empty string
if none of the types are available.
Moreover, the instance of FlatNetworkConfigurator must be called configurator, not flatNetworkConfigurator.
EDIT
Go to INET Properties, then select OMNeT++ | Makemake | select src | Options... | Compile tab | More >> and make sure that Export include path for other projects and Force compiling object files for use in DLLs are set. In the Target tab, set Export this shared/static library for other projects. Then rebuild INET.
Then in your project:
In Properties | Project References make sure that inet is selected.
In Properties | OMNeT++ | Makemake | select directory with source files | Options... | Compile and make sure that the following options are checked:
Add include paths exported from referenced projects
Add include dir and other compile options from enabled project features
In Properties | OMNeT++ | Makemake | select directory with source files | Options... | Link and make sure that the following options are checked:
Link with libraries exported from referenced projects
Add libraries and other linker options from enabled project features
Rebuild your project.

SpecFlow Step Generation for Scenario Outline Generating Incorrect Methods

I'm new to Visual Studio. I'm using Visual Studio 2015 with SpecFlow. Below is the feature file:
@mytag
Scenario Outline: Successful Authentication
  Given I am a user of type <user>
  When I hit the application URL
  And I provide <email>
  And I click on Log In Button
  Then I will be landed to the Home Page
  And I will be able to see <name> on the Home Page

  Examples:
    | user      | email           | name  |
    | admin     | a.b@example.com | alpha |
    | non-admin | b.c@example.com | beta  |
When I generate the step definitions, I expect parameters in place of the variables; instead, the method is generated as below:
[Given(@"I am a user of type admin")]
public void GivenIAmAUserOfTypeAdmin()
{
    ScenarioContext.Current.Pending();
}
I was instead expecting a method like:
[Given(@"I am a user of type '(.*)'")]
public void GivenIAmAUserOfType(string p0)
{
    ScenarioContext.Current.Pending();
}
What am I missing?
Surrounding the <user> in the Given step with single quotes, like this:
Given I am a user of type '<user>'
will generate the desired result. The quotes are probably needed for the generator to recognize the value as a regular-expression parameter.

Integration tests for Rest API

I'd like to get different points of view about how to create integration tests for REST APIs.
The first option would be using Cucumber, as described in "The Cucumber Book":
Scenario: Get person
  Given The system knows about the following person:
    | fname | lname | address | zipcode |
    | Luca  | Brow  | 1, Test | 098716  |
  When the client requests GET /person/(\d+)
  Then the response should be JSON:
    """
    {
      "fname": "Luca",
      "lname": "Brow",
      "address": {
        "first": "1, Test",
        "zipcode": "098716"
      }
    }
    """
The second option would be (again) using Cucumber, but removing the technical detail, as described here:
Scenario: Get person
  Given The system knows about the following person:
    | fname | lname | address | zipcode |
    | Luca  | Brow  | 1, Test | 098716  |
  When the client requests the person
  Then the response contains the following attributes:
    | fname            | Luca    |
    | lname            | Brow    |
    | address :first   | 1, Test |
    | address :zipcode | 098716  |
And the third option would be using Spring as described here:
private MockMvc mockMvc;

@Test
public void findAll() throws Exception {
    mockMvc.perform(get("/person/1"))
        .andExpect(status().isOk())
        .andExpect(content().mimeType(IntegrationTestUtil.APPLICATION_JSON_UTF8))
        .andExpect(jsonPath("$.fname", is("Luca")))
        .andExpect(jsonPath("$.lname", is("Brow")))
        .andExpect(jsonPath("$.address.first", is("1, Test")))
        .andExpect(jsonPath("$.address.zipcode", is("098716")));
}
I really like the second option since it looks cleaner to business users and testers, but on the other hand, for a developer who will consume this API, the first option is more useful since it shows the JSON response.
The third option is the easiest one since it's just Java code, but the readability and cross-team interaction are not as good as with Cucumber.
You should use the third option, but not with JUnit; you should do it using Spock. This is the best of both worlds.
Spock tests are written like this:
def "description of what you want to test"() {
    given:
    // do whatever is a pre-requisite for the test

    when:
    def response = mockMvc.perform(get("/person/" + id)).andReturn().getResponse()

    then:
    checkResponse.each { c -> c(response) }

    where:
    id  | checkResponse
    1   | [ResponseChecker.correctPersonDetails()]
    100 | [ResponseChecker.incorrectPersonDetails()]
}
Integration tests are made to verify that the components of your application work together. For example, you test requests against the database and the MVC controller with integration tests. Integration tests are there to test your infrastructure.
On the other hand, BDD tests are made to facilitate communication between development and specification. The common idea is to write tests as specification by example. They are definitely not designed for writing integration tests.
I would recommend the third option.
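For completeness, here is roughly how the mockMvc instance from the third option can be wired up without starting a full Spring context. This is a minimal sketch assuming Spring Test's standalone MockMvc setup and JUnit 4; the PersonController stub is hypothetical and only included so the example is self-contained:

import org.junit.Before;
import org.junit.Test;
import org.springframework.http.MediaType;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

public class PersonIntegrationTest {

    // hypothetical controller stub, included only to make the sketch compile on its own
    @RestController
    static class PersonController {
        @GetMapping(value = "/person/{id}", produces = MediaType.APPLICATION_JSON_VALUE)
        public String findOne(@PathVariable long id) {
            return "{\"fname\": \"Luca\", \"lname\": \"Brow\"}";
        }
    }

    private MockMvc mockMvc;

    @Before
    public void setUp() {
        // standalone setup builds MockMvc around the controller under test,
        // with no application context or server involved
        mockMvc = MockMvcBuilders.standaloneSetup(new PersonController()).build();
    }

    @Test
    public void findOneReturnsPerson() throws Exception {
        mockMvc.perform(get("/person/1"))
                .andExpect(status().isOk())
                .andExpect(jsonPath("$.fname").value("Luca"));
    }
}

The same setup could just as well be driven from a Cucumber step definition or a Spock feature method, so it does not lock you into any of the three options.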
