Cucumber parse speed - ruby

We have been using Cucumber for some time and now have over 200 scenarios. Startup is getting very slow, which makes a big difference in our edit-test-commit cycle. The problem seems to be the parsing of the feature files. Is there a way we can speed this up?
NOTE: We are using IronRuby, which has a known slow startup time. However, that startup time (about 30 seconds) is small compared to the time spent parsing (2-3 minutes), which we can see from the side effects of our env.rb code.
EDIT: Running only specific tags doesn't help reduce parse time because Cucumber still has to parse all the files to read the tags in the first place.

It is possible to run only the feature files in a specific directory by passing the directory to cucumber. This causes only the features under that directory to run and, more importantly, nothing in other directories gets parsed. So you can reduce run time by organizing feature files into directories and only running the relevant directory.
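For example, if the features are grouped into sub-directories, you can point cucumber at just one of them (the directory name below is only an illustration):
cucumber features/billing   # "billing" is an example sub-directory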

You could just test the scenarios you're working with at the moment. If you set the tag @wip (work in progress) before a Scenario and run 'rake cucumber:wip', you will only run the scenarios that carry the @wip tag.
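As an illustration (the scenario name is just a placeholder), the tag goes on the line above the scenario in the feature file:
@wip
Scenario: Customer pays an invoice
and the same filter can be passed straight to the cucumber command as well:
cucumber --tags @wip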

Related

Replicate "Run all specs" cypress test runner functionality via command line

I have several cypress spec files running against a web app in CI/CD build pipelines.
For whatever reason, there is a gap in time between each spec file being run in the pipeline, so the more spec files we add, the slower the build runs. I can see in the logs that it's about 30s to a full minute between each spec file (I turned off the video record option to ensure that was not somehow related). Recently, it has begun to stall out completely and the build step fails from timing out.
To verify it wasn't related to the number of tests, I did an experiment by combining all the different tests into a single spec file and running only that file. This worked perfectly - because there was only a single spec file to load, the build did not experience any of those long pauses in between running multiple spec files.
Of course, placing all our tests into a single file is not ideal. I know the Cypress test runner has a way to run all tests across multiple spec files as if they were in a single file, using the "Run all specs" button. From the Cypress docs:
"But when you click on "Run all specs" button after cypress open, the Test Runner bundles and concatenates all specs together..."
I want to accomplish the exact same thing through the command line. Does anyone know how to do this? Or accomplish the same thing in another way?
Using cypress run is not the equivalent. Although this command runs all tests, it still fires up each spec file separately (hence the delay issue in the pipeline).
Seems like they don't want to do it that way. Sorry for not having a better answer.
https://glebbahmutov.com/blog/run-all-specs/

doing restartable downloads in ruby

I've been trying to figure out how to use the Down gem to do restartable downloads in ruby.
So the scenario is downloading a large file over an unreliable link. The script should download as much of the file as it can in the timeout allotted (say it's a 5GB file, and the script is given 30 seconds). I would like that 30 second progress (partial file) to be saved so that next time the script is run, it will download another 30 seconds worth. This can happen until the complete file is downloaded and the partial file is turned into a complete file.
I feel like everything I need to accomplish this is in this gem, but it's unclear to me which features I should be using and how much of it I need to code myself (streaming? or caching?). I'm a Ruby beginner, so I'm guessing I use the caching, save the progress to a file myself, and enumerate for as many times as I have time.
How would you solve the problem? Would you use a different gem/method?
You probably don't need to build that yourself. Existing tools like curl and wget already have that functionality.
If you really want to build it yourself, you could perhaps take a look at how curl and wget do it (they're open-source, after all) and implement the same in Ruby.
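If you do want to stay in Ruby, the trick curl and wget use is to keep a partial file and ask the server to resume from its current size with an HTTP Range header. Below is a minimal sketch of that idea using Down (the URL, file names and 30-second budget are made up; it assumes the server honors Range requests and that you are on Down's default Net::HTTP backend with its headers: option; it also skips edge cases such as re-running after the file is already complete):
require "down"

url      = "https://example.com/big-file.iso"  # placeholder URL
partial  = "big-file.iso.part"
final    = "big-file.iso"
deadline = Time.now + 30                       # stop after roughly 30 seconds

offset = File.exist?(partial) ? File.size(partial) : 0

# Ask the server to start where the previous run left off.
# Assumes Down's default Net::HTTP backend, which accepts a headers: option.
remote = Down.open(url, headers: { "Range" => "bytes=#{offset}-" })

File.open(partial, "ab") do |file|
  until remote.eof? || Time.now > deadline
    file.write(remote.read(1024 * 1024))       # copy in 1 MB chunks
  end
end

finished = remote.eof?
remote.close

# Once nothing is left to read, promote the partial file to the real name.
File.rename(partial, final) if finished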

Is there a recommended debugging strategy for E2E automation tests?

What is the most elegant approach for debugging a large E2E test?
I am using the TestCafe automation framework, and I'm currently facing multiple tests that are flaky and require a fix.
The problem is that every time I modify something within the test code I need to run the entire test from the start in order to see if that new update succeeds or not.
I would like to hear ideas about strategies for debugging an E2E test without losing your mind.
Current debug methods:
Using the built-in TestCafe debugging mechanism at the problematic area in the code, and trying to comment out everything before that line.
But that really doesn't feel like the best approach.
When there is prerequisite data such as user credentials, URLs, etc., I manually declare it again just before the debug().
PS: I know that tests should be as focused and as small as possible, but this is what we have now.
Thanks in advance
You can try using the flag
--debug-on-fail
This pauses the test when it fails and allows you to view the tested page and determine the cause of the failure.
Also, use test.only to specify that only a particular test or fixture should run while all others are skipped.
https://devexpress.github.io/testcafe/documentation/using-testcafe/command-line-interface.html#--debug-on-fail
You can use the takeScreenshot action to capture the current state of the application during the test. TestCafe stores the screenshots inside the screenshots sub-directory and names each file with a timestamp. Alternatively, you can use the takeOnFails option on the command line to automatically capture the screen whenever a test fails, i.e. at the point of failure.
Another option is to slow down the test so it's easier to observe while it is running. You can adjust the speed using the --speed command line flag: 1 is the fastest and 0.01 the slowest. You can then record the test run using the --video command line flag, but you need to set up FFmpeg for this.
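As a rough illustration, several of these flags can be combined in a single run (the browser and spec path below are placeholders):
testcafe chrome tests/checkout.js --debug-on-fail --speed 0.5   # example spec file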

How do I optionally slow down cucumber (ruby) tests?

I have tried googling, but had no luck. Maybe I just don't know the terms to search for...
Recently we made some changes to the environment, and now tests that used to run without issue kill the service. We are working to find out why that is happening... but in the meantime, is there a way I could pass a CLI command or something to slow down the tests on demand? (or vice versa, run at full speed on demand) Or maybe build something into a rake task?
I know I could easily add an after hook to sleep between scenarios, but I want to be able to run the tests full blast as well while we are trying to sort out the issue. Adding an after hook would require editing several files every time we wanted to turn the throttling on or off.
UPDATE:
I decided to try adding this to env.rb and I think it might work, although it feels crude. If you have other suggestions I would love to hear them. This is just a temporary fix, though; once we figure out what is up with the environment, we do need to go back and add a more elegant way of slowing tests down when needed, perhaps through the HTTP client.
After do
  if ENV['SLOW'].eql? 'yes'
    sleep(3)
    # logger.info '******* Waiting 3 seconds before running next scenario *******'
  end
end
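With that hook in place, SLOW=yes cucumber runs the suite throttled, while a plain cucumber run stays at full speed. If you also want the length of the delay to be tunable without editing the hook, a small variation along the same lines (the SLOW_SECONDS name is just an example) could be:
After do
  # SLOW_SECONDS is an example variable name; pick whatever fits your setup.
  delay = ENV.fetch('SLOW_SECONDS', '0').to_f
  sleep(delay) if delay > 0
end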

Is there a GUI for nosetests?

I've been using nosetests for the last few months to run my Python unit tests.
It definitely does the job but it is not great for giving a visual view of what tests are working or breaking.
I've used several other GUI-based unit test frameworks that provide a visual snapshot of the state of your unit tests, as well as drill-down features to get to detailed error messages.
Nosetests dumps most of its information to the console, leaving it to the developer to sift through the detail.
Any recommendations?
You can use the rednose plugin to color your console output. The visual feedback is much better with it.
I've used Trac + Bitten for continuous integration. It was a fairly complex setup and required a substantial amount of time to RTFM, set up and then maintain everything, but I could get nice visual reports with failed tests and error messages, plus graphs over time for failed tests, pylint problems and code coverage.
Bitten is a continuous integration plugin for Trac with a master-slave architecture. The Bitten master is integrated with and runs together with Trac. A Bitten slave can run on any system that can communicate with the master; it regularly polls the master for build tasks. If there is a pending task (somebody has committed something recently), the master sends a "build recipe", similar to ant's build.xml, to the slave; the slave follows the recipe and sends back the results. A recipe can contain instructions like "check out code from that repository", "execute this shell script", "run nosetests in this directory".
The build reports and statistics then show up in Trac.
I know this question was asked 3 years ago, but I'm currently developing a GUI to make nosetests a little easier to work with on a project I'm involved in.
Our project uses PyQt, which made it really simple to start with this GUI as it provides all you need to create interfaces. I've not been working with Python for long, but it's fairly easy to get to grips with, so if you know what you're doing it'll be perfect, providing you have the time.
You can convert .ui files created in the PyQt Designer to Python scripts with:
pyuic4 -x interface.ui -o interface.py
And you can get a few good tutorials to get a feel for PyQt here. Hope that helps someone :)
I like to open a second terminal, next to my editor, in which I just run a loop which re-runs nosetests (or any test command, e.g. plain old unittests) every time any file changes. Then you can keep focus in your editor window, while seeing test output update every time you hit 'save' in your editor.
I'm not sure what the OP means by 'drill down', but personally all I need from the test output is the failure traceback, which of course is displayed whenever a test fails.
This is especially effective when your code and tests are well-written, so that the vast majority of your tests only take milliseconds to run. I might run these fast unit tests in a loop as described above while I edit or debug, and then run any longer-running tests manually at the end, just before I commit.
You can also re-run tests on a timer using the 'watch' command (but this just runs them every X seconds, which is fine, but it isn't quite snappy enough to keep me happy).
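For example, re-running the whole suite every couple of seconds looks like this (assuming nosetests is on your PATH):
watch -n 2 nosetests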
Alternatively I wrote a quick python package 'rerun', which polls for filesystem changes and then reruns the command you give it. Polling for changes isn't ideal, but it was easy to write, is completely cross-platform, is fairly snappy if you tell it to poll every 0.25 seconds, doesn't cause me any noticeable lag or system load even with large projects (e.g. Python source tree), and works even in complicated cases (see below.)
https://pypi.python.org/pypi/rerun/
A third alternative is to use a more general-purpose 'wait on filesystem changes' program like 'watchdog', but this seemed heavyweight for my needs, and solutions like this which listen for filesystem events sometimes don't work as I expected (e.g. if Vim saves a file by saving a tmp somewhere else and then moving it into place, the events that happen sometimes aren't the ones you expect.) Hence 'rerun'.

Resources