HP UFT 12.02: How to set up a list of tests to run without the ALM management tool

I have UFT 12.02, and a number of tests are already created. Now I am trying to run all of these tests, one after another, without using ALM. Is this possible? If yes, then how?

Check out the Test Batch Runner that ships with UFT. It lets you add your existing tests to a batch and run them one after another without ALM.

Related

How can I group test cases to create (and run) test suites using Cypress 10 and Cucumber?

A lot has changed in terms of config in Cypress 10, so I've been dealing with a bunch of new stuff to create global variables, commands, etc., to be able to run Cypress with the Cucumber preprocessor in different environments. Cypress has also removed the option to run all specs from the runner, so I can't pick a folder or filter and run all the tests within it. I figured the only way to do it is through the command line, but it is still not working.
So I'm not sure whether the way to create test suites is still tags or there is a new, improved one. Anyway, my question is: how can I group test cases to run them separately now? A simple case would be running a #Smoke suite or a #Regression suite.
I've already tried adding the tags in the .feature files above Scenario, and I'm running the command with the --tags=#Tag parameter, but it doesn't seem to pick it up.
I'm not in need of a straight solution, but if you could point me in the right direction, that would help, because I haven't found a straight answer. Thank you!
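A minimal sketch of how tag filtering is usually wired up in Cypress 10+, assuming the maintained @badeball/cypress-cucumber-preprocessor fork together with the esbuild bundler (the question doesn't say which preprocessor package is in use, so the package choice and file layout below are assumptions):

```typescript
// cypress.config.ts
import { defineConfig } from "cypress";
import createBundler from "@bahmutov/cypress-esbuild-preprocessor";
import { addCucumberPreprocessorPlugin } from "@badeball/cypress-cucumber-preprocessor";
import { createEsbuildPlugin } from "@badeball/cypress-cucumber-preprocessor/esbuild";

export default defineConfig({
  e2e: {
    // Treat .feature files as specs
    specPattern: "**/*.feature",
    async setupNodeEvents(on, config) {
      // Register the cucumber preprocessor (reads tag filters from config.env)
      await addCucumberPreprocessorPlugin(on, config);
      // Bundle .feature files and step definitions with esbuild
      on(
        "file:preprocessor",
        createBundler({ plugins: [createEsbuildPlugin(config)] })
      );
      return config;
    },
  },
});
```

With this setup, tags are typically passed as an environment variable rather than a --tags flag, for example `npx cypress run --env tags="@Smoke"`; enabling the preprocessor's `filterSpecs` option additionally skips feature files that contain no matching scenarios.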

Replicate "Run all specs" Cypress Test Runner functionality via the command line

I have several cypress spec files running against a web app in CI/CD build pipelines.
For whatever reason, there is a gap in time between spec file runs in the pipeline, so the more spec files we add, the slower the build runs. I can see in the logs that it's about 30 seconds to a full minute between spec files (I turned off the video recording option to ensure that was not somehow related). Recently, the pipeline has begun to stall out completely and the build step fails from timing out.
To verify it wasn't related to the number of tests, I ran an experiment: I combined all the different tests into a single spec file and ran only that file. This worked perfectly; because there was only a single spec file to load, the build did not experience any of those long pauses between spec files.
Of course, placing all our tests into a single file is not ideal. I know that with the Cypress Test Runner there is a way to run all tests across multiple spec files as if they were in a single file, using the "Run all specs" button. From the Cypress docs:
"But when you click on "Run all specs" button after cypress open, the Test Runner bundles and concatenates all specs together..."
I want to accomplish the exact same thing through the command line. Does anyone know how to do this? Or accomplish the same thing in another way?
Using cypress run is not the equivalent. Although this command runs all tests, it still fires up each spec file separately (hence the delay issue in the pipeline).
It seems like the Cypress team doesn't want to support that from the command line. Sorry for not having a better answer.
https://glebbahmutov.com/blog/run-all-specs/
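The workaround described in the linked post amounts to creating a single "barrel" spec that imports every other spec, so Cypress bundles them into one run. A minimal sketch, assuming hypothetical spec file names (login.cy.ts, checkout.cy.ts and profile.cy.ts are placeholders, not from the original question):

```typescript
// cypress/e2e/all.cy.ts
// Importing the other spec files pulls their describe/it blocks into this
// single spec, so Cypress starts the browser context only once, much like
// the old "Run all specs" button did.
import "./login.cy";
import "./checkout.cy";
import "./profile.cy";
```

You would then point the CI job at just this file, e.g. `npx cypress run --spec cypress/e2e/all.cy.ts`. The trade-off is that state and hooks can leak between the combined specs, so test isolation is weaker than running each spec on its own.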

Is there a recommended debugging strategy for E2E automation tests?

What is the most elegant approach for debugging a large E2E test?
I am using the TestCafe automation framework, and I'm currently facing multiple tests that are flaky and require a fix.
The problem is that every time I modify something within the test code, I need to run the entire test from the start in order to see whether that new update succeeds or not.
I would like to hear ideas about strategies for debugging an E2E test without losing your mind.
Current debug methods:
Using the built-in TestCafe debugging mechanism in the problematic area of the code and trying to comment out everything before that line.
But that really doesn't feel like the best approach.
When there is prerequisite data such as user credentials, URL, etc., I manually declare it again just before the debug().
PS: I know that tests should be as focused and as small as possible, but this is what we have now.
Thanks in advance
You can try using the flag
--debug-on-fail
This pauses the test when it fails and allows you to view the tested page and determine the cause of the failure.
Also, use test.only to specify that only a particular test or fixture should run while all others are skipped.
https://devexpress.github.io/testcafe/documentation/using-testcafe/command-line-interface.html#--debug-on-fail
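A minimal sketch of how these two suggestions combine in practice; the fixture name, page URL, and selectors below are assumptions for illustration, not taken from the question:

```typescript
import { Selector } from "testcafe";

fixture("Checkout flow")
    .page("https://example.com/checkout");

// test.only tells TestCafe to skip every other test and fixture
// while you iterate on this one.
test.only("flaky step under investigation", async (t) => {
    await t
        .typeText(Selector("#coupon"), "SAVE10")
        .click(Selector("#apply"));

    // t.debug() pauses execution here so you can inspect the page state
    // in the browser before the assertion runs.
    await t.debug();

    await t.expect(Selector(".total").innerText).contains("$");
});
```

Running it with something like `testcafe chrome tests/checkout.ts --debug-on-fail` additionally pauses the test automatically at the point of failure.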
You can use the takeScreenshot action to capture the current state of the application during the test. TestCafe stores the screenshots inside the screenshots sub-directory and names the files with a timestamp. Alternatively, you can enable the takeOnFails screenshot option from the command line to automatically capture the screen whenever a test fails, i.e. at the point of failure.
Another option is to slow down the test so it's easier to observe while it is running. You can adjust the speed with the --speed command-line flag: 1 is the fastest and 0.01 the slowest. You can also record the test run with the --video command-line flag, but you need to set up FFmpeg for that.
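A short sketch of taking a screenshot mid-test to complement the flags above; the page and selectors are again placeholders:

```typescript
import { Selector } from "testcafe";

fixture("Screenshot example")
    .page("https://example.com/login");

test("capture state part-way through", async (t) => {
    await t.typeText(Selector("#user"), "demo");

    // Saves a timestamped PNG under the configured screenshots directory.
    await t.takeScreenshot();

    await t.click(Selector("#submit"));
});
```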

Visual Studio Test Explorer: requirements to run all tests

When I run tests in Test Explorer individually, they all run without errors.
When I click on "Run All", I get varying tests that fail.
My question:
Does anybody know the minimum requirements for tests to be runnable with "Run All"? Maybe they all have to be written in a certain way or with a certain command?
Whether you run tests manually one by one or use "Run All" has nothing to do with your tests failing. You need to check the failure reason for each test and solve it.
Maybe you can post the error messages here so we can help you better.
I discovered that I had a static variable that still contained a value other than the expected one. Now the outcome is always the same regardless of how I run the tests.
Sorry for troubling you all.
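For readers hitting the same symptom, the usual culprit is shared static (or module-level) state that one test mutates and another silently depends on, so results change with execution order. A generic sketch of the pattern, written in TypeScript purely for illustration (the original question concerns Visual Studio Test Explorer, and every name here is made up):

```typescript
import assert from "node:assert";

// Shared static state: both tests read and write the same field.
class Session {
    static currentUser: string | null = null;
}

function testLoginSetsUser(): void {
    Session.currentUser = "alice";
    assert.strictEqual(Session.currentUser, "alice");
}

function testNoUserByDefault(): void {
    // Green when run on its own, red after testLoginSetsUser has run,
    // because nothing reset Session.currentUser in between.
    assert.strictEqual(Session.currentUser, null);
}

// Running "all tests" makes the outcome depend on execution order:
testLoginSetsUser();
try {
    testNoUserByDefault();
} catch (e) {
    console.log("fails only under 'run all':", (e as Error).message);
}

// The fix is to reset shared state before each test, e.g. in a
// beforeEach / [TestInitialize]-style hook: Session.currentUser = null;
```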

Packaging/Hosting Selenium scripts

A certain test script of mine needs to be run by the "operations" team periodically. My script uses the following components:
1. TestNG
2. Excel (for the input specifications)
3. Selenium RC, of course.
It currently runs in Eclipse.
Is there a way I can package it and host it in an (ideally web-accessible) location that folks in operations can click on to run it and review the results?
Thanks.
I ended up writing an Ant script to control execution.
