In the Play Framework, how do I run just one Selenium test suite and have it run automatically?

I am doing Test Driven Development using the Play Framework and want to re-run the currently failing test quickly. I find clicking Start! too slow, since I have a mostly keyboard-driven workflow. Ideally I would like a way to just reload the page and have it rerun the currently failing test.

I ended up leaving play test running and reloading the following page (note the auto=yes parameter):
http://localhost:9000/#tests?select=Application.test.html&auto=yes


Cypress 10 - How to run all tests in one go?

I used to use Cypress 9 on previous projects.
By default, running cypress open or cypress open --browser chrome used to run all tests for all React components.
However I installed Cypress 10 for the first time on a project that didn't have e2e tests yet. I added test specs, but I don't see any option to run all tests altogether.
It seems I have to run the tests one by one, clicking on each of them.
Can anyone please suggest how to run all the tests automatically?
It's been removed in Cypress v10; here are the related change notes:
During cypress open, the ability to "Run all specs" and "Run filtered specs" has been removed. Please leave feedback around the removal of this feature here. Your feedback will help us make product decisions around the future of this feature.
The feedback page to register your displeasure is here
You can create a "barrel" spec to run multiple imported specs.
I can't vouch for it working the same as the v9 "Run all tests", but I can't see any reason why not.
// all.spec.cy.js
import './test1.spec.cy.js' // relative paths
import './test2.spec.cy.js'
// ...and so on for each remaining spec
As @Constance says, this was reinstated in v11.2.0.
But it is still a very handy technique if you want to run a pre-defined subset of your tests.
In Cypress version 11.2.0 the Run All button has been reinstated.
You need to set experimentalRunAllSpecs to true in cypress.config.js.
Please see Configuration - End-to-End Testing
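For reference, here is a minimal cypress.config.js with the flag enabled (a sketch assuming a standard e2e setup):

// cypress.config.js
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    // opt back in to the "Run all specs" option in open mode
    experimentalRunAllSpecs: true,
  },
})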
If the Cypress Test Runner is not a must, I suggest utilizing the CLI/Node command approach.
You can run all the tests with npx cypress run (video recordings and screenshots of failed steps are still saved to their respective folders), or add other Cypress flags to filter by specific spec files, browser, etc.
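For example (the spec glob below is hypothetical; adjust it to your own folder layout):

npx cypress run                                      # run every spec headlessly
npx cypress run --spec "cypress/e2e/smoke/*.cy.js"   # run only the specs matching a glob
npx cypress run --browser chrome                     # run in a specific browser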
As per the feedback discussion, there is a workaround, the same as @Fody's answer, that will achieve the same result as v9. Also worth noting is the section on Continuous Integration and Update 1, which includes a fix preventing this workaround from creating issues with the cypress run command.
Are there any current workarounds?
Yes. If you are impacted by the omission of this feature, it is possible to achieve the same level of parity as 9.x with a workaround Gleb Bahmutov explains here: https://glebbahmutov.com/blog/run-all-specs-cypress-v10/
This will still inherit the same problems as the previous implementation (which is why it was removed), but it will work in cases where the previous implementation was not problematic for your use case.
https://github.com/cypress-io/cypress/discussions/21628#discussion-4098510
It was removed because people used it wrong.
The Test Runner is for debugging single tests. When running all tests through it, performance quickly becomes a problem and can crash the entire suite.
Running all tests should only be performed from the CLI.
Sources
https://github.com/cypress-io/cypress/issues/681
https://github.com/cypress-io/cypress/discussions/21628

GUI testing coverage

I have two questions. My first question is: do applications exist which measure the coverage of GUI testing for web applications (not code coverage, but coverage of the GUI components on a web page)?
My second question is:
Is GUI testing with Selenium, for example, necessary if we also have tests for the JavaScript?
Thank you in advance.
You can write a custom utility that finds all DOM elements (see http://www.w3schools.com/js/js_htmldom_elements.asp), stores the list somewhere, and, after your test automation framework completes its run, checks that none of the elements were missed.
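A minimal sketch of such a utility, assuming it is run in the browser console or injected by your test framework (the identifier format is just an illustration):

// Collect a simple identifier for every element on the page, so a later
// run can diff this snapshot against the elements your tests exercised.
var seen = [];
var elements = document.querySelectorAll('*');
for (var i = 0; i < elements.length; i++) {
  var el = elements[i];
  seen.push(el.tagName.toLowerCase() + (el.id ? '#' + el.id : ''));
}
console.log(JSON.stringify(seen)); // persist this snapshot for later comparison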
GUI testing is required to make sure that all the integration points between your backend APIs are working. It also verifies that none of the UI elements are broken in the UI and that all your business use cases work as expected. Mostly, UI testing is done for acceptance testing, so we can show the customer that all their use cases work as expected. In the next release you can then make sure that you are not breaking any UI code. UI testing gives us confidence when releasing to end users.

Cucumber reporting and browser hiding

I'm using cucumber (ruby) + watir for running behavior-driven tests, and I'm trying to do a couple of things to integrate it with Hudson CI:
Currently, the browser window pops up on the build slave each time the job runs. Is there a way to hide the browser window while the tests run?
I am using the jUnit formatter and having Hudson parse the test results. However, I'd like to print a test summary with the feature/scenario names and the test result (pass/fail) on the console window so that the email notification sent out lists the test summary. How do I do that?
Currently, cucumber prints a lot of debug output including deprecated function warnings. The -q flag doesn't seem to do anything for me. Is there a way I can stop cucumber from printing anything on the screen?
Edit: I'm running the tests on a Windows 7 desktop.

Test Suite Execution in Jubula

I've been looking at Jubula's automated functional testing tool and following along with the tutorials, but I've become stuck before I ever even got off the ground with it. The user manual provided with the installation hasn't given any answers, and I can't find anything in blogs dedicated to Jubula.
My question: I have my test suite, complete with test cases & steps, all set up and ready to go. I've mapped my objects using the editor. I've started the AUT and connected to it. All I have to do is start the test execution.... I click start.... nothing happens.
The Java application is visible (it's a simple calculator) and I can interact with it. But I don't get any dialogue boxes when I press start, which is what is supposed to happen according to the tutorial.
Has anyone tried Jubula and had this problem?
Two things come to mind.
If the "Start Test Suite" button is disabled, then it means you still have some sort of a problem stopping the Test Suite being executed (e.g. missing data or object mapping).
If the "Start Test Suite" button is enabled, then it might just be that you need to select a Test Suite to execute from the drop-down menu (opened by clicking on the small arrow next to the green button).
I had the same problem but at least I got a report about the tests failing. After I specified the JRE for the AUT (this setting is only shown if you click on the advanced or expert button) my tests finally started to work.
I think it's none of the previous answers. If you get a failed-test report or your Start Test Suite button is disabled, then it's pretty obvious; you can find those mistakes mentioned in the documentation and blogs.
BUT! There are two errors which leave no trace: no error messages, nothing in the logs.
1.) There's a version incompatibility. If you installed Jubula from a standalone installer or from the Eclipse marketplace, it will work. But if you put it together yourself, you could have mixed up the components. I have an answer on these issues:
Jubula doesn't recognize running AUT after upgrade to 2.0
2.) You misled your AUT Agent by starting another .exe. This has exactly the symptoms mentioned in the question. It happens because the application has the Remote-Control (rc) plugin started in it, and the AUT Agent is notified about the start. The agent tries to identify the process among the AUT configs listed in the client's (testexec's) database and misidentifies it.
You can solve this by adding each run situation as a different AUT config in your database. It's mostly about location in the filesystem, i.e. where the exec process is launched from: debug-local (from the Eclipse launch bar), exported-local (for delta-pack exports), QA-local (if you have PDE in your build), etc.

Meteor with QUnit

I am trying to use QUnit with a Meteor app. Should this be possible? Any recommended patterns?
I was trying to make an app that was "self testing" by making a route for "/test" but it doesn't appear that QUnit is running my tests (no test output appears).
@Tom, sure, here ya go:
I've added a package for qunit with meteor here:
https://github.com/jpmec/meteor/commit/786b93153d94c0e2291ac210f64587dbbbad23d6
Some facts and disclaimers:
I didn't do the branch right; I branched from master, not devel.
I'm not spending much time trying to keep my Meteor branch up to date.
This Meteor branch is truly fubar'd with respect to the main Meteor project, so don't branch from it.
Your best bet is to download it and look in the packages folder for qunit. That part I think I did right. You'll probably just want to drop this into your Meteor packages folder and see if it helps you.
After trying it out some, here are my thoughts for other would-be QUnit-with-Meteor users:
I cannot figure out how to easily have a "test site" and a "production site" with Meteor. It seems to be all or nothing out of the box: you can have a self-testing site, but then all users get to run the tests. (What I would like is to serve one site on one port and another site on another port, while maintaining a consistent folder tree for my "app".)
The hot-push of Meteor is really cool with QUnit. As you write your tests you see them go from red to green in semi-real time, with no need to keep switching to the test page and refreshing. This is by far the coolest part of Meteor, and of using QUnit with Meteor.
The answer to this question was a little more involved for me.
I found no discernible difference between putting QUnit in a package and just including the QUnit sources in my /client files. My difficulty was that sometimes tests seemed to run, sometimes not at all, and frequently a mysterious "global error" would appear in my test results.
This was caused by QUnit attempting to automatically launch the test run before my own code had loaded the tests. I found no good way to prevent the automatic behavior. My eventual solution was to let QUnit finish its (empty) automatic test run, then call QUnit.init(), load the tests, and then call QUnit.start().
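A minimal sketch of that sequence, assuming the old QUnit 1.x globals that were current at the time:

// Let QUnit's automatic (empty) run finish, then reset, register tests, and restart.
QUnit.init();                       // reset QUnit's state after the empty auto-run

test('sanity check', function () {  // register tests now that the app code is loaded
  ok(true, 'tests are running');
});

QUnit.start();                      // kick off the real test run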
