How to run Jasmine on multiple pages?

A lot of examples out there focus on writing tests, and pretty much everything is contained in a single HTML file.
I'm more interested in how to actually run Jasmine within a real-life app (which is not a SPA).
For the first test I included my SpecRunner.html at the very end of my app's framework:
include PATH_TOOLS.'/tests/jasmine/SpecRunner.html';
and this works: wherever I go in my app, I have the test results at the bottom of the page. Obviously, mixing test code with the framework's code like this is not the cleanest approach - every time I push my commits to the repository I'd have to remove this line.
On the other hand, if I open SpecRunner.html directly, I cannot navigate to my app from there, unless it were opened in an iframe - but is that common practice? I doubt it. I know I can always run Jasmine in the terminal, but I would prefer to see the results beside my app.
Perhaps I can somehow run Jasmine from the command line and force it to open my app in a real browser, like Selenium does?
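One option I'm considering is shipping a small guard with the app, so the runner only loads when a flag is present and there is nothing to remove before committing. A rough sketch, assuming the standalone Jasmine distribution (whose boot.js normally starts the runner on window.onload); the ?jasmine=1 flag and all paths below are made up:

(function () {
    // Do nothing on normal page views; the flag is an assumption.
    if (!/[?&]jasmine=1/.test(location.search)) return;

    var head = document.getElementsByTagName('head')[0];

    // Reporter styles from the standalone distribution (path is an assumption).
    var css = document.createElement('link');
    css.rel = 'stylesheet';
    css.href = '/tools/tests/jasmine/lib/jasmine-core/jasmine.css';
    head.appendChild(css);

    // Core, HTML reporter, boot file, then the specs - order matters.
    var files = [
        '/tools/tests/jasmine/lib/jasmine-core/jasmine.js',
        '/tools/tests/jasmine/lib/jasmine-core/jasmine-html.js',
        '/tools/tests/jasmine/lib/jasmine-core/boot.js',
        '/tools/tests/jasmine/spec/appSpec.js'
    ];

    (function loadNext(i) {
        if (i === files.length) {
            // boot.js hooked window.onload, which fired long ago on a live
            // page, so kick the runner off manually once everything loaded.
            window.onload();
            return;
        }
        var script = document.createElement('script');
        script.src = files[i];
        script.onload = function () { loadNext(i + 1); };
        head.appendChild(script);
    })(0);
})();

That way the loader could live in the repository permanently, and the results would still render at the bottom of whatever page I'm on.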

Related

Run all specs from command line in cypress

How can I run all specs from the command line in Cypress? I have 3 spec files which depend on each other, and the browser shouldn't reset after each test.
"But when you click on "Run all specs" button after cypress open, the Test Runner bundles and concatenates all specs together..."
I want to accomplish the exact same thing through the command line.
You might not like this answer, but you're running head first into a wall there.
One of the goals in pretty much any testing project is making your tests completely independent from one another, and there are plenty of reasons to do so, just a few being:
You don't have to worry that one failed test breaks a whole chain of others.
Similarly, changing or updating one test case doesn't break a chain.
You can run your tests in parallel, which is a serious point in any project that plans to scale.
As far as I know, this browser/runner reset after each spec file is desired behavior on Cypress's side to make parallelization possible (though I can't remember where I read that), so I don't think there's any workaround for your problem.
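To illustrate the independence point: each spec can build its own state instead of inheriting it from a spec that ran earlier. A minimal sketch - the /api/login endpoint, the token handling, and the selectors are all made up:

describe('dashboard', () => {
    beforeEach(() => {
        // Log in through the API instead of the UI, so this spec does not
        // depend on a login spec having run first in the same browser.
        cy.request('POST', '/api/login', { user: 'admin', pass: 'secret' })
            .its('body.token')
            .then((token) => {
                cy.visit('/dashboard', {
                    onBeforeLoad(win) {
                        // Hypothetical auth scheme: token in localStorage.
                        win.localStorage.setItem('token', token);
                    }
                });
            });
    });

    it('shows the widgets', () => {
        cy.get('[data-cy=widget]').should('have.length.greaterThan', 0);
    });
});

With setup like this, the reset between spec files stops mattering, because no spec relies on what a previous one left behind.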

Replicate "Run all specs" cypress test runner functionality via command line

I have several cypress spec files running against a web app in CI/CD build pipelines.
For whatever reason, there is a gap in time between each spec file being run in the pipeline, so the more spec files we add, the slower the build runs. I can see in the logs that it's about 30s to a full minute between spec files (I turned off the video recording option to make sure that wasn't somehow related). Recently, it has begun to stall out completely, and the build step fails from timing out.
To verify it wasn't related to the number of tests, I ran an experiment: I combined all the different tests into a single spec file and ran only that file. This worked perfectly - because there was only a single spec file to load, the build did not experience any of those long pauses between multiple spec files.
Of course, placing all our tests in a single file is not ideal. I know the Cypress Test Runner has a way to run all tests across multiple spec files as if they were in a single file, using the "Run all specs" button. From the Cypress docs:
"But when you click on "Run all specs" button after cypress open, the Test Runner bundles and concatenates all specs together..."
I want to accomplish the exact same thing through the command line. Does anyone know how to do this? Or accomplish the same thing in another way?
Using cypress run is not the equivalent. Although this command runs all tests, it still fires up each spec file separately (hence the delay issue in the pipeline).
Seems like they don't want to do it that way. Sorry for not having a better answer.
https://glebbahmutov.com/blog/run-all-specs/
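For what it's worth, that post works around the reset by creating one spec that imports all the others, which is roughly what the "Run all specs" button does internally. A minimal sketch with made-up spec names:

// cypress/integration/all.js - a single spec that pulls in every other
// spec, so they are bundled together like with "Run all specs".
import './login.spec'
import './cart.spec'
import './checkout.spec'

You can then point the CLI at just that file with cypress run --spec cypress/integration/all.js, and keep all.js out of the default spec pattern so the suites don't execute twice in a normal run.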

Is there a recommended debugging strategy for E2E automation tests?

What is the most elegant approach to debugging a large E2E test?
I am using the TestCafe automation framework, and I'm currently facing multiple tests that are flaky and require fixing.
The problem is that every time I modify something within the test code, I need to run the entire test from the start in order to see whether the update succeeds or not.
I would like to hear ideas about strategies for debugging an E2E test without losing your mind.
Current debug methods:
Using the built-in TestCafe debugging mechanism at the problematic area of the code, and trying to comment out everything before that line.
But that really doesn't feel like the best approach.
When there is prerequisite data such as user credentials, URL, etc., I manually declare it again just before the debug().
PS: I know that tests should be as focused as possible and relatively small, but this is what we have now.
Thanks in advance
You can try using the flag
--debug-on-fail
This pauses the test when it fails and allows you to view the tested page and determine the cause of the failure.
Also, use test.only to specify that only a particular test or fixture should run while all the others are skipped.
https://devexpress.github.io/testcafe/documentation/using-testcafe/command-line-interface.html#--debug-on-fail
You can use the takeScreenshot action to capture the current state of the application during the test. TestCafe stores the screenshots inside the screenshots sub-directory and names each file with a timestamp. Alternatively, you can add the takeOnFails command-line flag to capture the screen automatically whenever a test fails, i.e. at the point of failure.
Another option is to slow the test down so it's easier to observe while it runs. You can adjust the speed with the --speed command-line flag: 1 is the fastest and 0.01 the slowest. You can then record the test run with the --video command-line flag, but you need to set up FFmpeg for that.
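Putting a couple of those together, here is a rough sketch of test.only plus takeScreenshot in one test - the fixture, page URL, and selectors are all invented:

import { Selector } from 'testcafe';

fixture('Checkout')
    .page('https://example.com/checkout');

// test.only skips every other test and fixture while you iterate on this one.
test.only('applies a discount code', async (t) => {
    await t
        .typeText('#discount-code', 'SAVE10')
        .click('#apply')
        // Saved under ./screenshots with a timestamped file name.
        .takeScreenshot()
        .expect(Selector('#total').innerText).contains('$');
});

While iterating, you could run it with something like testcafe chrome checkout.js --debug-on-fail --speed 0.5 to combine the flags above.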

How do you write UI tests for a command line app in Xcode?

I'm making a small Zork-type game, and I don't want to have to type my way through the whole thing to test the game play every time I change something. Is there a way to use UI testing to do it for me?
I've tried looking around, but everyone just talks about running UI tests from the command line. But, I'd like to know how to do it for a console app.
Now, I don't know what your codebase looks like, but your best bet is probably to create test files that exercise the logic you want to test. You may want to make all of the logic input-independent, so that input can come either from your tests or from a user.

How to use rspec to test screen scraping?

I'm writing a site that is going to rely heavily on screen scraping. Because I know screen scraping is prone to breaking, I'd like to be notified somehow when there is a problem.
The solution that I think will work is to write an rspec test for each site I want to support. The test will open a few remote pages from each site and compare them with the output I expect from my scraper. I'd also like to run the same tests on locally cached copies, so I know whether my code changes broke the scraper or the remote site changed. I'd like to somehow run these tests once a day and be notified of any problems.
Eventually I'd like to make this a gem, because it's a recurring problem for me. I tend to do a lot of scraping, and it would be nice to know when things break.
So my problem is that I'm relatively new to writing tests for my code, and I have no clue what the best way to set this up is.
Take a look at the VCR gem, which lets you keep local copies of the various pages you want to test, refresh them every so often, and still test against live pages.
