I've been looking at the Jubula automated functional testing tool and following along with the tutorials, but I've become stuck before I ever got off the ground with it. The user manual provided with the installation hasn't given any answers, and I can't find anything in blogs dedicated to Jubula.
My question: I have my test suite, complete with test cases and steps, all set up and ready to go. I've mapped my objects using the editor. I've started the AUT and connected to it. All I have to do is start the test execution... I click start... nothing happens.
The Java application is visible (it's a simple calculator) and I can interact with it. But I don't get any dialog boxes when I press start, which is what is supposed to happen according to the tutorial.
Has anyone tried Jubula and had this problem?
Two things come to mind.
If the "Start Test Suite" button is disabled, then it means you still have some sort of a problem stopping the Test Suite being executed (e.g. missing data or object mapping).
If the "Start Test Suite" button is enabled, then it might just be that you need to select a Test Suite to execute from the drop-down menu (opened by clicking on the small arrow next to the green button).
I had the same problem, but at least I got a report about the tests failing. After I specified the JRE for the AUT (this setting is only shown if you click the Advanced or Expert button), my tests finally started to work.
I think it's none of the previous answers. If you get a failed-test report or your Start Test Suite button is disabled, then the cause is pretty obvious: you can find those mistakes mentioned in the documentation and blogs.
BUT! There are two errors that leave no trace; no error messages, nothing in the logs.
1.) A version incompatibility. If you installed Jubula from a standalone installer or from the Eclipse Marketplace, then it will work. But if you put it together yourself, then you could mix up the components. I have an answer on these issues:
Jubula doesn't recognize running AUT after upgrade to 2.0
2.) Misleading your AUT agent by starting another .exe. This has exactly the symptoms mentioned in the question. It happens because the application has the remote control (rc) plugin started in it, and the AUT agent is notified about the start. The agent tries to identify the process among the AUT configs listed in the client's (testexec's) database, and it misidentifies it.
You can solve this by adding each run situation as a different AUT config in your database. It's mostly about location in the filesystem: where the exec process is launched from, e.g. debug-local (from the Eclipse launch bar), exported-local (for delta-pack exports), QA-local (if you have PDE in your build), etc., as illustrated below.
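For instance (the paths here are hypothetical; the situations are the ones named above), the same AUT could end up with three configs that differ only in where the executable lives:
- debug-local: launched from the Eclipse launch bar, e.g. C:\workspace\myapp\bin\myapp.exe
- exported-local: a delta-pack export, e.g. C:\exports\myapp\myapp.exe
- QA-local: the PDE build output, e.g. C:\builds\qa\myapp\myapp.exe
With one AUT config per location, the AUT agent has a chance to match the starting process to the right config.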
I used to use Cypress 9 on previous projects.
By default, running cypress open or cypress open --browser chrome used to run all tests for all React components.
However, I installed Cypress 10 for the first time on a project that didn't have e2e tests yet. I added test specs, but I don't see any option to run all the tests together.
It seems I have to run the tests one by one, clicking on each of them.
Can anyone please suggest how to run all the tests automatically?
It was removed in Cypress v10; here are the related change notes:
During cypress open, the ability to "Run all specs" and "Run filtered specs" has been removed. Please leave feedback around the removal of this feature here. Your feedback will help us make product decisions around the future of this feature.
The feedback page to register your displeasure is here
You can create a "barrel" spec to run multiple imported specs.
I can't vouch for it working the same as the v9 "Run all tests", but I can't see any reason why not.
// all.spec.cy.js
import './test1.spec.cy.js' // relative paths
import './test2.spec.cy.js'
...
As #Constance says, reinstated in v11.2.0.
But still a very handy technique if you want to run a pre-defined subset of your tests.
In Cypress version 11.2.0 the Run All button has been reinstated.
You need to set experimentalRunAllSpecs to true in cypress.config.js.
Please see Configuration - End-to-End Testing
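A minimal sketch of that config (assuming Cypress 11.2.0+ with e2e testing set up; everything except the flag itself is stock boilerplate):

// cypress.config.js
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    // opt back in to the "Run All" behaviour removed in v10
    experimentalRunAllSpecs: true,
  },
})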
If the Cypress Test Runner is not a must, I suggest utilizing the CLI approach.
You can trigger all the tests with npx cypress run (the video recordings, and screenshots of failed steps, are still saved to their respective folders), or add other Cypress flags to narrow things down, e.g. npx cypress run --browser chrome, or npx cypress run --spec with a glob to pick out specific spec files.
As per the feedback discussion, there is a workaround (the same as #Fody's answer) that will achieve the same result as v9. Also worth noting is the section on Continuous Integration, and Update 1, which includes a fix to prevent this workaround from creating issues with the cypress run command.
Are there any current workarounds?
Yes. If you are impacted by the omission of this feature, it is possible to achieve the same level of parity as 9.x with a workaround Gleb Bahmutov explains here: https://glebbahmutov.com/blog/run-all-specs-cypress-v10/
This will still inherit the same problems as the previous implementation (which is why it was removed), but it will work in cases where the previous implementation was not problematic for your use case.
https://github.com/cypress-io/cypress/discussions/21628#discussion-4098510
It was removed because people used it wrong.
The Test Runner is for debugging single tests. When you run all tests through it, performance quickly becomes a problem and can crash the entire suite.
Running all tests should only be performed from the CLI.
Sources
https://github.com/cypress-io/cypress/issues/681
https://github.com/cypress-io/cypress/discussions/21628
I am hoping someone here can at least point me in the right direction toward solving this frustrating issue. The SmartBear community had no response.
I have a bunch of tests set up in TestComplete to test a web application. When I run them all at once, I consistently get an RPC Server Unavailable Error. I have no idea what this means. When I run the tests individually, there is no issue with the scripts.
I have tried running them in a script, calling them from a keyword test, and just using the project set up to call them all in order. No dice. Running each test manually completely defeats the purpose of automation.
Any ideas on how to fix this, or at least where the F*** to start? I did not have this problem with TC10, only when I upgraded to TC11.
Thanks
So, in each of my modules, I was opening and closing Chrome. I decided to try taking out the close statement. Boom, no error. It still refreshes the page, which is fine. I don't know why this was a problem, but I got all the modules to run in one pass; a sketch of the resulting structure is below.
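For illustration, here's a rough sketch of that structure as a TestComplete JavaScript unit. The module and function names and the URL are hypothetical; Browsers.Item(btChrome).Run is TestComplete's standard call for launching a browser.

// Launch Chrome once, then let each test module navigate or refresh
// the page instead of closing and relaunching the browser.
function RunAllModules()
{
  Browsers.Item(btChrome).Run("http://my-web-app.example/"); // hypothetical URL

  TestModuleOne();   // modules refresh/navigate, but never close Chrome
  TestModuleTwo();
  TestModuleThree();

  // If the browser needs closing at all, do it once here at the very end,
  // not inside every module.
}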
I want to implement unit testing in my Xcode project, and would like to run tests without requiring the application to be started.
The reasons: I have a Core Data based document app that also uses a CVDisplayLink to control continuous rendering in a background thread.
It strikes me that I do not need a running application to test Core Data model functionality; that should be distinct from the view stuff anyway. I would also like to isolate and performance-test my background rendering processes, something that seems very difficult with the app running but could easily be done without it: just instantiate the right classes and feed them the correct data.
I've seen other questions that have answers for Xcode versions before six, but the answers don't seem to work for the current version.
The docs now make a distinction between application and library tests. Library tests are run against library targets.
I'm not sure I want to reorganise my code into distinct libraries at the moment, and would prefer to avoid it or fake it somehow.
I saw an Open Radar somewhere relating to this on iOS, but I'm interested in OS X.
Has anyone any insight into this?
EDIT: Learning to cope with the existing setup for now, testing with the full app running: I run some checks on that, then I close all documents and shut down the display link.
I can then run tests that create my own persistent store coordinator, in-memory data store, and context, as well as test my rendering classes without fear of conflict with the other display thread.
I'm now running into trouble with linking sources. I just can't seem to get it right: I fiddle with settings, it seems to work for a bit, then it suddenly stops building again with "Undefined symbols for architecture x86_64" errors, or else with problems linking against 3rd-party private frameworks. I look through the web, change a few things, and it starts working again. Then I add some tests, importing more of my classes, and things stop working again... Infuriating.
EDIT 2: Pretty much all sorted now, though maybe not terribly efficiently. For each test case class, I either open or close documents and start or stop the display link in the + (void)setUp method. I don't do anything in + (void)tearDown, and let setUp decide how to proceed based on the current state.
Although this means it's possible to flow from one test class to another while minimizing document opens and closes, there doesn't seem to be a way to order the tests so that I could group them together.
BTW, I also solved the linking troubles I mentioned (XCode 6 Testing Target Troubles), though they're not really relevant to this question.
It sounds like you landed on the standard solution: give your app a way to tell when it's being stood up for testing rather than normal use, and then have applicationDidFinishLaunching: skip your usual launch-time behaviors, leaving it to specific tests to provide any setup they need.
You might benefit from creating multiple test suites to deal with different expected conditions, like all the tests that work around a specific document being open.
I'm trying to port Firefox OS to the Motorola G, but I don't understand how to write the device manifest. What should be specified in the manifest? Where do I start? Mozilla's official documentation isn't that helpful, actually.
The manifest is tricky, but it's like a bike: once you get the hang of it, it becomes second nature.
Here are the links I used to understand the manifest:
https://developer.mozilla.org/en-US/Apps/Developing/Manifest
https://developer.mozilla.org/en-US/Apps/Developing/About_app_manifests?redirectlocale=en-US&redirectslug=Web%2FApps%2FFAQs%2FAbout_app_manifests
The main point that helped me was understanding that only two fields are required: name and description. That makes the other options specific to your needs, so to start I stripped out all the other members, such as "locales" and "developer".
The primary config that I needed to get right was:
launch_path - I got it to work through trial and error, but then moved the app within my architecture and was surprised when the app went 404! I shouldn't have been surprised because... the path was incorrect. After updating the path, the app installed correctly.
For example:
/Apps/App1/app1.html
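Putting those pieces together, a minimal manifest.webapp sketch could look like this (the name and description values are made up; the launch_path is the example path above):

{
  "name": "App1",
  "description": "A simple test app",
  "launch_path": "/Apps/App1/app1.html"
}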
Final bit of advice on the manifest: the best way to understand it is to get a test app working from the mdn-app-template! That way you can see how it works and test its capabilities. I strongly recommend this as a first step. https://github.com/chrisdavidmills/mdn-app-template
Other suggestions:
- It took a while to get the workflow down. It is possible to just click a 'refresh' link in the App Manager, which makes for a rather immediate workflow.
- Uninstalling on Android was weird: the app is actually saved within Firefox, so you have to go to about:apps to uninstall it. Here is the link: https://developer.mozilla.org/en-US/Apps/Developing/Apps_for_Android
Hope it helps.
I use the latest stable SBT with Scala 2.10 and the latest Scala plugin in IntelliJ IDEA 12.x, and I have a very simple test Scala project.
I have a specs2 test that I want to start my debugging from. With several breakpoints set, I expect to step over lines (from one breakpoint to another, in my test and in my code), but instead the debugger goes somewhere inside library classes, stops there, and shows me some strange sources.
This is reproducible every time, and I have to click the next-arrow button (on the debug panel) 2, 3, sometimes 5 times to reach the next breakpoint (in the test or in the code).
I run my test with the SBT 'test-compile' action, as the IntelliJ pop-up suggests.
Also, I found the debug setting for Scala ("Do not step into specific Scala classes"), but I already have that check-box selected.
I've posted an issue on the IntelliJ IDEA site.
IntelliJ 15 now has support for adding breakpoints inside lambdas. See this blog post for details.