I am using Xcode Version 7.1 (7B91b) on my local Mac.
I am testing my app on the iPhone 6 Simulator (iOS 9).
I created unit tests for my app and found that code coverage was about 34%. I decided to create UI tests to increase the coverage, but unfortunately the code coverage did not increase.
I made a simple trial:
Create a project with a navigation controller, FirstTableViewController and SecondTableViewController, keeping the default unit tests.
Leave the unit tests as they are, and record one UI test that navigates from the first table view controller to the second. (I expect both the first and second table view controllers to show coverage in the final code coverage report, right?)
Make sure code coverage is enabled.
Run the tests, then check the code coverage: it is 40%.
Looking at the attached code coverage result, the coverage for SecondTableViewController is zero, although while watching the simulator I saw it navigate from the first view controller to the second. It can't be zero.
Is there anything I am missing here?
I searched Apple's official documentation but can't find anything saying that code coverage is not supported with UI testing. Any suggestions?
Make sure you have enabled Debug Executable in the Test section of your scheme settings.
It appears that without this option Xcode is unable to gather coverage data.
I used to use Cypress 9 on previous projects.
By default, running cypress open or cypress open --browser chrome used to run all tests for all React components.
However, I installed Cypress 10 for the first time on a project that didn't have e2e tests yet. I added test specs, but I don't see any option to run all the tests together.
It seems I have to run the tests one by one, clicking on each of them.
Can anyone please suggest how I can run all the tests automatically?
It's been removed in Cypress v10; here are the related change notes:
During cypress open, the ability to "Run all specs" and "Run filtered specs" has been removed. Please leave feedback around the removal of this feature here. Your feedback will help us make product decisions around the future of this feature.
The feedback page to register your displeasure is here
You can create a "barrel" spec to run multiple imported specs.
I can't vouch for it working exactly the same as the v9 "Run all specs" button, but I can't see any reason why not.
// all.spec.cy.js
import './test1.spec.cy.js' // relative paths
import './test2.spec.cy.js'
...
As @Constance says, this has been reinstated in v11.2.0.
But still a very handy technique if you want to run a pre-defined subset of your tests.
In Cypress version 11.2.0 the "Run all specs" button has been reinstated.
You need to set experimentalRunAllSpecs to true in cypress.config.js.
Please see Configuration - End-to-End Testing
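For reference, a minimal cypress.config.js sketch with this flag (assuming the standard e2e setup; adjust the rest of the config to your project) would look something like this:

// cypress.config.js
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    // Re-enables "Run all specs" in the open-mode runner (Cypress 11.2.0+)
    experimentalRunAllSpecs: true,
  },
})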
If the Cypress Test Runner is not a must, I suggest utilizing the CLI / Node module approach.
You can run all the tests with npx cypress run (video recordings and screenshots of failed steps are still saved to their respective folders), or add other Cypress flags such as --spec or --browser to filter down to specific spec files or a particular browser.
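If you prefer to drive this from Node rather than the shell, a rough sketch using the Cypress Module API could look like the following (the spec glob and browser are example values, not something from the question):

// run-all.js -- start a headless run of every matching spec programmatically
const cypress = require('cypress')

cypress
  .run({
    browser: 'chrome',              // equivalent to --browser chrome
    spec: 'cypress/e2e/**/*.cy.js', // example glob; adjust to your spec layout
  })
  .then((results) => {
    // results contains run totals such as totalTests, totalPassed, totalFailed
    console.log('Total tests run:', results.totalTests)
  })
  .catch((err) => console.error(err))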
As per the feedback discussion there is a workaround, the same as @Fody's answer, that will achieve the same result as v9. Also worth noting is the section on Continuous Integration and Update 1, which includes a fix to prevent this workaround from causing issues with the cypress run command.
Are there any current workarounds?
Yes. If you are impacted by the omission of this feature, it is possible to achieve the same level of parity as 9.x with a workaround Gleb Bahmutov explains here: https://glebbahmutov.com/blog/run-all-specs-cypress-v10/
This will still inherit the same problems as the previous implementation (which is why it was removed), but it will work in cases where the previous implementation was not problematic for your use case.
https://github.com/cypress-io/cypress/discussions/21628#discussion-4098510
It was removed because people used it wrong.
The Test Runner is meant for debugging single tests. When all tests are run together, performance quickly becomes a problem and can crash the entire suite.
Running all tests should only be performed from the CLI.
Sources
https://github.com/cypress-io/cypress/issues/681
https://github.com/cypress-io/cypress/discussions/21628
I have tried setting up a unit-test project to cover front-end code in TypeScript with Jasmine and Chutzpah, but I am having a hard time figuring out what I'm doing wrong.
I created a sample ASP.NET project in which I extracted and included the default Jasmine tests. Pressing F5 opens a browser and the tests pass.
I then converted all the tests to TypeScript and included the DefinitelyTyped definitions. Pressing F5 opens a browser and the tests pass.
Finally, I installed Chutzpah with its Visual Studio extension, but I am not able to make the tests pass using either the Visual Studio/ReSharper Unit Tests window or the default Chutzpah console. At this stage, pressing F5 still opens a browser and the tests pass.
That's the last step I'm struggling with. For clarity, I have set up a sample project on GitHub to reproduce my problem. I'm sure it must be something really simple, but I just cannot figure it out.
The project can be found at the following location:
https://github.com/springcomp/TypeScript.Jasmine.Chutzpah.Sample
This doesn't really answer the 'why'; I'm still trying to get my head around this myself. However, it might get you a bit further forward.
I saw the same behavior when I pulled down your project. I put the reference paths below at the top of PlayerSpec.ts:
///<reference path="../src/Player.ts"/>
///<reference path="../src/Song.ts"/>
///<reference path="../spec/SpecHelper.ts"/>
///<reference path="../Scripts/typings/jasmine/jasmine.d.ts"/>
After this, ReSharper runs the tests successfully and they pass.
I can't explain why; maybe ReSharper doesn't use the SpecRunner file for picking up references?
I have two questions. My first question is: Do applications exist which measure the coverage of GUI testing for web applications (not code coverage, but coverage of the GUI components on the web page)?
My second question is:
Is GUI testing with Selenium, for example, still necessary if we have tests for the JavaScript as well?
Thank you in advance.
You can write a custom utility that finds all the DOM elements (see http://www.w3schools.com/js/js_htmldom_elements.asp), store the result somewhere, and run the utility after your test automation completes to make sure that none of the elements were missed.
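As a rough sketch of that idea (the querySelectorAll approach and the inventory format are my own assumptions, not part of the original suggestion):

// element-inventory.js -- collect a simple inventory of every element on the page.
// Run it in the browser console, or inject it from your automation framework.
function collectElementInventory() {
  var inventory = [];
  var elements = document.querySelectorAll('*'); // every element in the DOM
  for (var i = 0; i < elements.length; i++) {
    inventory.push({
      tag: elements[i].tagName.toLowerCase(),
      id: elements[i].id || null,
      classes: elements[i].getAttribute('class') || null,
    });
  }
  return inventory;
}

// Store the inventory, then later diff it against the list of elements
// your GUI tests actually interacted with.
console.log(JSON.stringify(collectElementInventory()));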
A GUI test is required to make sure that all the integration points between your backend APIs are working. It also verifies that none of the UI elements are broken and that all your business use cases work as expected. UI testing is mostly done for acceptance testing, so you can show the customer that all of their use cases work as expected; in the next release you can then make sure you are not breaking any UI code. UI testing gives us confidence when releasing to end users.
Is it possible to automatically run unit tests while you work without compiling or running them manually? I am aware that NDepend allows you to do so, but I would prefer to use the ReSharper suite.
This has been available since dotCover 10. See the dotCover documentation for details.
This adds a new panel "Continuous Testing Session" as well as a new status icon in the gutter.
Note that Visual Studio also has this feature, known as Live Unit Testing.
This is not possible with ReSharper at the moment; you will need something like NCrunch, which runs your unit tests continuously in the background, highlighting code that breaks your tests as you write it.
Edit: At the time of my response it wasn't possible to do this with ReSharper, but now in version 10 it is; see Drew Noakes's answer. You could still give NCrunch a try, as it continuously runs your tests in the background even without an explicit save.
How can I do code coverage testing in Xcode while using Instruments for iOS automation?
Is there any tool that displays the percentage of code coverage achieved by the automation?
I would also like to know how it can be done when the test cases are written only for functionality.
No, there isn't. You could include logs and count the number of logs printed vs. the total number of logs. That would work, but it is a lot of manual work.
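As a rough sketch of that log-counting idea in an Instruments UI Automation script (the markCovered helper, the marker names and the total count are made up for illustration):

// Hypothetical helpers for an Instruments UI Automation (JavaScript) script.
var covered = {};
var coveredCount = 0;
var TOTAL_MARKERS = 2; // total number of markCovered() call sites placed in the script

function markCovered(name) {
  if (!covered[name]) {
    covered[name] = true;
    coveredCount++;
    UIALogger.logMessage("COVERED: " + name); // appears in the trace log
  }
}

// Sprinkle markers through the flows the script exercises, e.g.:
markCovered("login-screen");
markCovered("first-table-view");

// At the end, report a rough "coverage" percentage based on markers hit.
UIALogger.logMessage("Rough coverage: " + Math.round(100 * coveredCount / TOTAL_MARKERS) + "%");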