Mocha in watch mode runs the suite twice - Windows

When I run mocha in watch mode and save a file, I often see mocha briefly flash "1 passing" before giving the correct result.
In the mocha logs, I can see that it starts an initial run, aborts, and then restarts.
To diagnose the problem further, I made a small Node.js script that uses fs.watch to log file system events. This revealed that when I save a file, 3 or 4 filesystem events are emitted. I assume this is the cause of the behavior, as the second event arrives after mocha has already started running the test suite.
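A minimal sketch of that kind of diagnostic watcher (the watched path is just a placeholder):

// watch-log.js - minimal sketch: log every filesystem event fs.watch reports
const fs = require('fs');

fs.watch('./src', { recursive: true }, (eventType, filename) => {
  console.log(new Date().toISOString(), eventType, filename);
});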
My initial thought was that this was an issue with Windows Defender, but placing the source code in a folder excluded from Windows Defender didn't seem to solve the problem (the actual configuration is outside my control; it's a company-managed Windows installation). There doesn't seem to be any other anti-virus software running.
Why does this happen, and is there any way to fix it, preferably at the source - why does saving one file cause multiple file system events?
And if it isn't fixable, is there a workaround, e.g. making mocha wait a few milliseconds before running the suite?
I tried creating a 10-millisecond delay in a global before block, but that just results in "0 passing" instead of "1 passing".
I also tried running mocha with the --delay option and placing this small piece of code in the suite file:
setTimeout(function () {
  run();
}, 10);
Same result as before: it just flashes "0 passing" instead of "1 passing".
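If the extra events can't be prevented at the source, a possible workaround is a small wrapper that debounces them before spawning mocha. A rough sketch (the watched path, delay, and command are placeholders):

// watch-and-run.js - collapse the burst of events one save can produce,
// then run mocha once after things have settled
const fs = require('fs');
const { spawn } = require('child_process');

let timer = null;

fs.watch('./src', { recursive: true }, () => {
  clearTimeout(timer);
  timer = setTimeout(() => {
    spawn('npx', ['mocha'], { stdio: 'inherit', shell: true });
  }, 100);
});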
Edit:
If I use the "touch" tool from ubuntu running in WSL tool to touch a file, I don't observe this behavior. If I use a minimal neovim in the same ubuntu (empty init file), I do observe this behavior.

Related

Replicate "Run all specs" cypress test runner functionality via command line

I have several cypress spec files running against a web app in CI/CD build pipelines.
For whatever reason, there is a gap in time between each spec file run in the pipeline, so the more spec files we add, the slower the build runs. I can see in the logs that it's about 30 seconds to a full minute between spec files (I turned off the video recording option to make sure that wasn't somehow related). Recently, it has begun to stall out completely, and the build step fails from timing out.
To verify it wasn't related to the number of tests, I did an experiment by combining all the different tests into a single spec file and running only that file. This worked perfectly - because there was only a single spec file to load, the build did not experience any of those long pauses in between running multiple spec files.
Of course, placing all our tests in a single file is not ideal. I know the cypress test runner has a way to run all tests across multiple spec files as if they were in a single file, via the "Run all specs" button. From the cypress docs:
"But when you click on "Run all specs" button after cypress open, the Test Runner bundles and concatenates all specs together..."
I want to accomplish the exact same thing through the command line. Does anyone know how to do this? Or accomplish the same thing in another way?
Using cypress run is not the equivalent. Although this command runs all tests, it still fires up each spec file separately (hence the delay issue in the pipeline).
Seems like they don't want to do it that way. Sorry for not having a better answer.
https://glebbahmutov.com/blog/run-all-specs/
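One way to approximate "Run all specs" from the command line is to create a single spec that imports all the others and point cypress run at just that file. A rough sketch (the file names are placeholders, and it assumes the specs don't rely on running in isolation):

// cypress/integration/all.spec.js - hypothetical "concatenated" spec
import './login.spec';
import './checkout.spec';
import './profile.spec';

Then run only that spec:

cypress run --spec cypress/integration/all.spec.js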

Global event hook that fires once when either Cypress run or open is called

Is there a Cypress event hook that is called one time when either cypress run or cypress open are used?
I know that the plugins file's on('before:browser:launch') hook runs before cypress open (https://docs.cypress.io/api/plugins/browser-launch-api.html#Modify-args-based-on-browser), but will it also run before cypress run?
I'm not sure which event I should hook into to fire off some Node code before either cypress open or cypress run is used.
Well, the support/index.js file is usually ideal for this kind of setup code. It runs before every spec file, not just once on startup, but you can generally put anything you need to set up in there.
It's also worth having a read of the CI docs if you haven't already, as they point out potential pitfalls with running processes before Cypress. If your code is a simple "run then exit successfully" scenario, you could always go with:
node foo.js && cypress run
It's when the node process is backgrounded that things can get a bit hairy.
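Another option for the "run some Node code exactly once, then kick off Cypress" case is the Module API: call cypress.run() from a small launcher script after your setup finishes. A sketch, where doSetup() is a hypothetical placeholder for your own code:

// run-cypress.js - launcher sketch using the Cypress Module API
const cypress = require('cypress');

async function main() {
  await doSetup();                      // hypothetical one-time setup
  const results = await cypress.run();  // runs the same specs as `cypress run`
  // non-zero exit if any spec failed (or Cypress itself failed to run)
  process.exit(results.totalFailed || results.failures ? 1 : 0);
}

main();

Because the setup happens in the launcher before Cypress starts, it runs exactly once, no matter how many spec files there are.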

Visual Studio 2017 locking files

Visual Studio Version 15.2 (26430.6) Release.
Having recently updated to the above version, I am running into continual issues with VS locking files when trying to build.
Could not copy "obj\Debug\projHype.dll" to "bin\Debug\projHype.dll". Exceeded retry count of 10. Failed.
I also tried running VS2017 with and without admin privileges.
I tried the suggestions for older versions of Visual Studio, but to no avail. Any ideas on how to get around this?
For anyone encountering this: updating to version 26430.12 will resolve it. It looks like the previous release contained a bug.
While there may be other causes, testhost.exe and testhost.x86.exe can both hold a lock that prevents the build from completing. The symptoms are bewildering: the Test Explorer churns indefinitely, timeout warnings sometimes appear in the build, and sometimes the files cannot be accessed even after VS is shut down.
If you are using NUnit or another test framework, make sure that test discovery does not hit any infinite loops or crashes in your code; if it does, it can hang the testhost executable. For example, NUnit TestCase and TestCaseSource sources are invoked before the tests are executed, so if any of them performs an action that can hang, lock up, or crash, it will do so during discovery.
This is the tricky bit: your tests haven't run yet, but your code can already lock up VS! While this may not be your problem, if you use test discovery in any way, check that it all completes.
One way to make sure discovery completes is to call the data-source functions from a test itself, temporarily disabling them as a TestCase or TestCaseSource (or the equivalent in other test frameworks). If that test hangs or crashes, you've found the culprit.
For me, it helped to run the program again and accept the "run last successful build" prompt. After that run, the locks were gone.

Jasmine specs never complete when run by Phantom JS on TeamCity

I am using PhantomJS to execute my Jasmine (v2.0) specs on TeamCity as part of my CI process. The problem is that the process never exits when it has finished running the specs. I have also seen it exit early with a timeout. I am using the TeamCity Jasmine reporter.
More detail: I have tried a basic setTimeout-based mechanism found in a blog post (will add when I find the URL) and the standard wait-for driven script from the PhantomJS.org page. In both cases, the specs never finish. When running through TeamCity, the build never finishes, though the log shows that all the specs have run; if I run it directly from the command line, I see one of two things: either it finishes and then sits there indefinitely, or I get a wait-for timeout message in the middle of running the specs (i.e. it does not finish).
Final detail: this only happens when I include two particular specs. One is a long spec containing dozens of tests, the other a spec for a simple custom collection type. The order in which I include them makes no difference. If I run either spec alone, Phantom exits correctly. If I include either of these specs with all the others, Phantom exits correctly. It only seems to hang when BOTH these two particular specs are included.
I have checked that no Jasmine clocks are left installed.
The quantity of JS involved is too large to post.
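For reference, the wait-for style runner described above is usually some variant of the sketch below; adding a hard exit timeout at least guarantees the process eventually dies if a spec hangs. The URL and timeout values are placeholders, and it assumes the default Jasmine 2 jsApiReporter is available on the page:

// run-jasmine.js - PhantomJS runner sketch with a hard exit timeout
var page = require('webpage').create();
var HARD_TIMEOUT_MS = 60000;

// safety net: never let a hung spec keep the process alive forever
setTimeout(function () {
  console.log('Hard timeout reached, exiting');
  phantom.exit(2);
}, HARD_TIMEOUT_MS);

page.open('http://localhost:8080/SpecRunner.html', function (status) {
  if (status !== 'success') {
    phantom.exit(1);
    return;
  }
  // poll until Jasmine 2 reports that it has finished
  var poll = setInterval(function () {
    var finished = page.evaluate(function () {
      return window.jsApiReporter && window.jsApiReporter.finished;
    });
    if (finished) {
      clearInterval(poll);
      var failed = page.evaluate(function () {
        return window.jsApiReporter.specs().filter(function (s) {
          return s.status === 'failed';
        }).length;
      });
      phantom.exit(failed ? 1 : 0);
    }
  }, 100);
});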

Is there a GUI for nosetests?

I've been using nosetests for the last few months to run my Python unit tests.
It definitely does the job, but it is not great at giving a visual overview of which tests are passing or failing.
I've used several other GUI-based unit test frameworks that provide a visual snapshot of the state of your unit tests, as well as drill-down features to get to detailed error messages.
Nosetests dumps most of its information to the console, leaving it to the developer to sift through the detail.
Any recommendations?
You can use the rednose plugin to color your console output. The visual feedback is much better with it.
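For example (assuming the plugin is installed from PyPI):

pip install rednose
nosetests --rednose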
I've used Trac + Bitten for continuous integration. It was a fairly complex setup and required a substantial amount of time to RTFM, set up, and then maintain everything, but I got nice visual reports with failed tests and error messages, plus graphs over time for failed tests, pylint problems, and code coverage.
Bitten is a continuous integration plugin for Trac with a master-slave architecture. The Bitten master is integrated with, and runs alongside, Trac. A Bitten slave can run on any system that can communicate with the master; it regularly polls the master for build tasks. If there is a pending task (somebody has committed something recently), the master sends the slave a "build recipe", similar to ant's build.xml; the slave follows the recipe and sends back the results. A recipe can contain instructions like "check out code from that repository", "execute this shell script", or "run nosetests in this directory".
The build reports and statistics then show up in Trac.
I know this question was asked 3 years ago, but I'm currently developing a GUI to make nosetests a little easier to work with on a project I'm involved in.
Our project uses PyQt, which made it really simple to start on this GUI, as it provides everything you need to create interfaces. I've not been working with Python for long, but it's fairly easy to get to grips with, so if you know what you're doing it'll be perfect, provided you have the time.
You can convert .ui files created in the PyQt Designer to Python scripts with:
pyuic4 -x interface.ui -o interface.py
And you can get a few good tutorials to get a feel for PyQt here. Hope that helps someone :)
I like to open a second terminal, next to my editor, in which I just run a loop which re-runs nosetests (or any test command, e.g. plain old unittests) every time any file changes. Then you can keep focus in your editor window, while seeing test output update every time you hit 'save' in your editor.
I'm not sure what the OP means by 'drill down', but personally all I need from the test output is the failure traceback, which of course is displayed whenever a test fails.
This is especially effective when your code and tests are well-written, so that the vast majority of your tests only take milliseconds to run. I might run these fast unit tests in a loop as described above while I edit or debug, and then run any longer-running tests manually at the end, just before I commit.
You can re-run the tests using the 'watch' command (but this just runs them every X seconds, which is fine, but isn't quite snappy enough to keep me happy).
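For example, with an arbitrary two-second interval:

watch -n 2 nosetests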
Alternatively, I wrote a quick Python package, 'rerun', which polls for filesystem changes and then re-runs the command you give it. Polling for changes isn't ideal, but it was easy to write, is completely cross-platform, is fairly snappy if you tell it to poll every 0.25 seconds, doesn't cause any noticeable lag or system load even with large projects (e.g. the Python source tree), and works even in complicated cases (see below).
https://pypi.python.org/pypi/rerun/
A third alternative is to use a more general-purpose "wait for filesystem changes" program like 'watchdog', but this seemed heavyweight for my needs, and solutions that listen for filesystem events sometimes don't behave as I expect (e.g. if Vim saves a file by writing a temp file somewhere else and then moving it into place, the events that fire aren't always the ones you expect). Hence 'rerun'.