Is there a gui for nosetests - user-interface

I've been using nosetests for the last few months to run my Python unit tests.
It definitely does the job, but it is not great at giving a visual view of which tests are working or breaking.
I've used several other GUI-based unit test frameworks that provide a visual snapshot of the state of your unit tests, as well as drill-down features to get to detailed error messages.
Nosetests dumps most of its information to the console, leaving it to the developer to sift through the detail.
Any recommendations?

You can use the rednose plugin to color up your console. The visual feedback is much better with it.
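If it helps, the usual way to enable it (package and flag names as documented by the rednose project; adjust to your own setup) looks like:
pip install rednose
nosetests --rednose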

I've used Trac + Bitten for continuous integration. It was a fairly complex setup and required a substantial amount of time to RTFM, set up and then maintain everything, but I got nice visual reports with failed tests and error messages, plus graphs over time for failed tests, pylint problems and code coverage.
Bitten is a continuous integration plugin for Trac with a master-slave architecture. The Bitten master is integrated with Trac and runs alongside it. A Bitten slave can run on any system that can communicate with the master, and it regularly polls the master for build tasks. If there is a pending task (somebody has committed something recently), the master sends the slave a "build recipe", similar to ant's build.xml; the slave follows the recipe and sends back the results. A recipe can contain instructions like "check out code from that repository", "execute this shell script", "run nosetests in this directory".
The build reports and statistics then show up in Trac.

I know this question was asked 3 years ago, but I'm currently developing a GUI to make nosetests a little easier to work with on a project I'm involved in.
Our project uses PyQt, which made it really simple to start on this GUI as it provides everything you need to create interfaces. I've not been working with Python for long, but it's fairly easy to get to grips with, so if you know what you're doing it'll be perfect, provided you have the time.
You can convert .ui files created in Qt Designer to Python scripts with:
pyuic4 -x interface.ui -o interface.py
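As a rough sketch of how the generated module gets used afterwards (this assumes the top-level object in interface.ui is a QMainWindow, so pyuic4 produces a class called Ui_MainWindow; the exact name depends on your .ui file):

import sys
from PyQt4 import QtGui
from interface import Ui_MainWindow  # generated by pyuic4; class name depends on your .ui file

app = QtGui.QApplication(sys.argv)
window = QtGui.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(window)   # builds the widgets designed in Qt Designer onto the window
window.show()
sys.exit(app.exec_())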
And you can get a few good tutorials to get a feel for PyQt here. Hope that helps someone :)

I like to open a second terminal, next to my editor, in which I just run a loop which re-runs nosetests (or any test command, e.g. plain old unittests) every time any file changes. Then you can keep focus in your editor window, while seeing test output update every time you hit 'save' in your editor.
I'm not sure what the OP means by 'drill down', but personally all I need from the test output is the failure traceback, which of course is displayed whenever a test fails.
This is especially effective when your code and tests are well-written, so that the vast majority of your tests only take milliseconds to run. I might run these fast unit tests in a loop as described above while I edit or debug, and then run any longer-running tests manually at the end, just before I commit.
You can get this re-running behaviour using the 'watch' command (e.g. watch -n 2 nosetests), but that just runs the tests every X seconds regardless of whether anything changed. Which is fine, but it isn't quite snappy enough to keep me happy.
Alternatively I wrote a quick python package 'rerun', which polls for filesystem changes and then reruns the command you give it. Polling for changes isn't ideal, but it was easy to write, is completely cross-platform, is fairly snappy if you tell it to poll every 0.25 seconds, doesn't cause me any noticeable lag or system load even with large projects (e.g. Python source tree), and works even in complicated cases (see below.)
https://pypi.python.org/pypi/rerun/
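If you want to roll something like this yourself, the core of the polling approach is only a few lines. This is just a minimal sketch of the idea (not the actual rerun source), with nosetests assumed as the test command:

import os
import subprocess
import time

def snapshot(root="."):
    # Record the modification time of every file under root.
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtimes[path] = os.stat(path).st_mtime
            except OSError:
                pass  # file disappeared between walk and stat
    return mtimes

previous = None
while True:
    current = snapshot()
    if current != previous:
        subprocess.call(["nosetests"])  # or any other test command
        previous = current
    time.sleep(0.25)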
A third alternative is to use a more general-purpose 'wait on filesystem changes' program like 'watchdog', but this seemed heavyweight for my needs, and solutions like this which listen for filesystem events sometimes don't work as I expected (e.g. if Vim saves a file by saving a tmp somewhere else and then moving it into place, the events that happen sometimes aren't the ones you expect.) Hence 'rerun'.

Related

New Visual Studio installation, tests not running in Test Explorer

This question is very similar to other questions, some of which literally have the text "tests not running in Test Explorer" in the title. But my context is a bit different: in those questions, there was a fair bit of investigation into what might be wrong with the tests themselves. I am fairly confident nothing is wrong with the tests in this case.
I am one of hundreds of developers working on a project, and this project has a large bank of automated tests (though perhaps not as large as it ought to be :-P). Everybody is frequently running tests, and triggers fire when pull requests are made and merged to automatically run them then too. Tests were working fine for me as well. But, I have just been given a new laptop with better hardware specs, and I am trying to get it set up. On the new laptop, the project builds just fine (and noticeably faster :-) ), but the automated tests just don't run. I can't figure out why, and I'm looking for suggestions about what to check in this context -- given that there are hundreds of places where the exact same code is working perfectly, I really don't think the tests or test projects themselves are at fault here.
I have observed that the build output, apparently randomly, sometimes does not contain the test adapter files:
Microsoft.VisualStudio.TestPlatform.MSTest.TestAdapter.dll
Microsoft.VisualStudio.TestPlatform.MSTestAdapter.PlatformServices.dll
Microsoft.VisualStudio.TestPlatform.MSTestAdapter.PlatformServices.Interface.dll
xunit.runner.visualstudio.testadapter.dll
If these files are missing, then VSTest.Console.exe also cannot run the test. But, usually rebuilding the project results in the files magically appearing, and then VSTest.Console.exe works just fine.
I haven't been able to ascertain a reason why the adapter files are sometimes put into the build output and sometimes not, and in either case, the Test Explorer within Visual Studio always fails to run the tests -- it discovers the tests just fine, puts several thousand items into the forest of trees, but when told to run tests, it just sits there for a minute or two and then returns to idle state with no output at all in the "Tests" output window.
This is a brand new installation of Visual Studio Enterprise 2019 Preview, the exact same version that is on my old laptop, but on my old laptop the tests run fine. What do?? I don't know what to check next. :-(
Well, I am thoroughly confused. I tried installing new features, I tried checking for system updates, I rebooted multiple times, and tests did not work. So, finally, I decided to make a cut-down minimal test project and see if I could observe any differences in Process Monitor between the two computers. I made a project with two tiny tests, one with NUnit and one with xUnit, and ... they worked. So, I opened up the big project again and hit Run Tests, and ... they worked. I am completely stumped, and the only advice I can offer to anyone finding this question with a similar problem is, just keep trying.

Confirming compile-time script execution in Xcode

I am downloading data from a remote server using curl in Build Phases > Run Script. Downloading takes 5-15s, not that much, but multiple times a day it consumes considerable time. Is there a better way to skip the script than commenting it out? Ideally, it would be some kind of confirmation at compile time (e.g. "Do you really need to download X?" y/n).
You can't make the run script interactive in the console, as far as I know. But you can use a shell conditional with an AppleScript interactive dialog, because AppleScript itself blocks while the dialog is shown. See for example https://cantina.co/adding-interactivity-to-the-xcode-build-process/.
However, introducing uncertainty into a build is dangerous. Plus you’d never be able to automate the build. In my view you’d be better off flipping a custom build setting / environment variable.
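A minimal sketch of the build-setting approach (SKIP_DATA_DOWNLOAD is a made-up name; define it as a user-defined build setting or export it in your environment), placed at the top of the Run Script phase:

if [ "${SKIP_DATA_DOWNLOAD}" = "YES" ]; then
    echo "Skipping data download"
    exit 0
fi
# ... existing curl commands follow ...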

CDash client setup

I'm trying to create a C++ CI environment by using CDash.
I've got CDash running on my computer and I can send some results to it from the CDash clients by running ctest manually.
I'm a bit lost on how to set up a client to automatically compile and test the code when the source code changes in the version control system (Subversion), or at specific times.
I have the Mastering CMake book, but it doesn't seem to say much on that topic.
Is there any way to do the continuous build without hacking around with scheduled tasks / cron?
Is there any good example that would be useful to check out?
Can I somehow trigger a build on some site from the dashboard? I kinda remember seeing this somewhere, but I'm unable to find it now.
Is CDash any good for CI environments? (use comments to answer this one)
CDash@Home might be the solution. Generally I have my continuous machines run a script from a nightly cron job that polls the repository every couple of minutes for ~24 hours.

GUI for Build Process

I've just implemented a build and deploy process which consists of Java files, an Ant script and cmd files. In the process, a release manager has to check out the source, run build.cmd and then carry a zip file over to a server.
I am wondering if it is worthwhile to make a GUI for it, so that the release manager does not need to check out the source manually, for example?
How do I start? I have quite limited knowledge of javax, but I would very much like to learn.
Thanks,
Sarah
This sounds like something that could be handled by Hudson. It can check out source, run Ant scripts, etc., saving you the trouble of maintaining a GUI. I'd give that a shot before rolling your own.
I have helped develop the build process at my current company. The way we currently do it is with a script file. It checks out the latest code from the stable branch of our repository, performs some steps to get some data from a database (such as static SQL data that needs to be loaded at deployment), then compresses everything. The file is then distributed to our production servers and then the setup routine is executed. Everything is automatic and the script is written in Python. Python is great for these types of things because of the sheer number of libraries it has to help the developer.
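For a flavour of what such a script can look like, here is a stripped-down sketch (the repository URL, paths and archive name are made up, and the database step is only indicated by a comment):

import subprocess
import shutil

REPO = "http://svn.example.com/project/branches/stable"  # made-up URL
CHECKOUT_DIR = "build/stable"

subprocess.check_call(["svn", "checkout", REPO, CHECKOUT_DIR])
# ... pull any static SQL data the deployment needs from the database here ...
shutil.make_archive("release", "zip", CHECKOUT_DIR)  # produces release.zip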
Perhaps it may be useful to build a GUI for your deployment procedure -- typically this would be useful if the deployment requires user interaction to make decisions, such as "Which server shall I deploy to?", etc. But, if it's just a matter of doing things automatically, then a script file's the way to go. Choose your favourite language and dive in -- I of course recommend Python.
If you'd like to learn how to make a simple GUI in Java (since that seems to be what your company is familiar with), you should check out the stuff at this site:
http://java.sun.com/docs/books/tutorial/uiswing/index.html
I learned everything I know about Java from that site. The section on GUI programming is great.
Best of luck!
Shad

Best way to test a Delphi application

I have a Delphi application that has many dependencies, and it would be difficult to refactor it to use DUnit (it's huge), so I was thinking about using something like AutomatedQA's TestComplete to do the testing from the front-end UI.
My main problem is that a bugfix or new feature sometimes breaks old code that was previously tested (manually), and used to work.
I have set up the application to use command-line switches to open up a specific form that can be tested, and I can create the set of values and clicks that need to be performed.
But I have a few questions before I do anything drastic... (and before purchasing anything)
Is it worth it?
Would this be a good way to test?
The results of the test should end up in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
I would need to setup a test database to do all the automated testing, would there be an easy way to automate re-setting the test db? Other than drop user cascade, create user,..., impdp.
Is there a way in testcomplete to specify command-line parameters for an exe?
Does anybody have any similar experiences?
I would suggest you plan to use both DUnit and something like TestComplete, as they each serve a different purpose.
DUnit is great for Unit Testing, but is difficult to use for overall application testing and UI testing.
TestComplete is one of the few automated testing products that actually has support for Delphi, and our QA Engineer tells me that their support is very good.
Be aware though that setting up automated testing is a large and time-consuming job. If you rigorously apply unit testing and automated UI testing, you could easily end up with more test code than production code.
With a large (existing) application you're in a difficult situation with regards to implementing automated testing.
My recommendation is to set up Unit Testing first, in conjunction with an automated build server. Every time someone checks anything in to source control, the Unit Tests get run automatically. DO NOT try to set up unit tests for everything straight up - it's just too big an effort for an existing application. Just remember to create unit tests whenever you are adding new functionality, and whenever you are about to make changes. I also strongly suggest that whenever a bug is reported that you create a unit test that reproduces the bug BEFORE you fix it.
I'm in a similar situation. (Large app with lots of dependencies). There is almost no automated testing. But there is a big wish to fix this issue. And that's why we are going to tackle some of the problems with each new release.
We are about to release the first version of the new product, and the first signs are good. But it was a lot of work, so for the next release we definitely need some way to automate the test process. That's why I'm already introducing unit tests. Although, due to the dependencies, these are not real unit tests, you have to start somewhere.
Things we have done:
Introduced a more OO approach, because a big part of the code was still procedural.
Moved stuff between files.
Eliminated dependencies where possible.
But there is far more on the list of requirements, ensuring enough work for the entire team until retirement.
And maybe I'm a bit strange, but cleaning up code can be fun. Refactoring without unit tests is a dangerous undertaking, especially if there are a lot of side effects. We used pair programming to avoid stupid mistakes. And lots of test sessions. But in the end we have cleaner code, and the amount of new bugs introduced was extremely low.
Oh, and be sure that you know this is an expensive process. It takes lots of time. And you have to fight the tendency to tackle more than one problem in a row.
I can't answer everything, as I've never used testcomplete, but I can answer some of those.
1 - Yes. Regression testing is worth it. It's quite embarrassing as a developer when the client comes back to you because you've broken something that used to work. Always a good idea to make sure everything that used to work still does.
4 - Oracle has something called Flashback, which lets you create a restore point in the database and jump back to it after you've done your testing. You can script it too, e.g. CREATE RESTORE POINT before_tests; and later FLASHBACK DATABASE TO RESTORE POINT before_tests;, etc.
We're looking at using VMWare to isolate some of our testing.
You can start from a saved snapshot, so you always have a consistent environment and local database state.
VMWare actions can be scripted, so you can automatically install your latest build from a network location, launch your tests and shut down afterwards.
Is it worth it?
Probably. Setting up and maintaining tests can be a big job, but when you have them, tests can be executed very easily and consistently. If your project is evolving, some kind of test suite is very helpful.
Would this be a good way to test?
I would say that a proper DUnit test suite is a better first step. However, if you have a large codebase which is not engineered for testing, setting up functional tests is an even bigger pain than setting up GUI tests.
The results of the test should end up in my database (Oracle); is there an easy
way in TestComplete to check these values (multiple fields in multiple tables)?
TestComplete has ADO and BDE interfaces. Or you can use the OLE interface from VBScript to access everything that's available.
Is there a way in testcomplete to specify command-line parameters for
an exe?
Yes.
One way to introduce unit testing in an (old) application could be to have a "start database" (like the "Flashback" feature described by Rich Adams).
The program then runs some unit tests using DUnit to control the GUI.
See "GUI testing with DUnit" at http://delphixtreme.com/wordpress/?p=181
Every test run is started by restoring the "start database", so that a known set of data can be used.
I would need to setup a test database to do all the automated
testing, would there be an easy way to
automate re-setting the test db?
Use transactions: perform a rollback when the test completes. This should revert everything to the initial state.
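The pattern looks roughly like this (sketched in Python's unittest with an in-memory SQLite database purely to keep the example self-contained; in DUnit you would do the same in SetUp/TearDown against your Oracle connection):

import sqlite3
import unittest

class OrdersRollbackTest(unittest.TestCase):
    def setUp(self):
        # An in-memory database stands in for the real test database here.
        self.connection = sqlite3.connect(":memory:")
        self.connection.execute("CREATE TABLE orders (customer_id INTEGER)")
        self.connection.commit()

    def tearDown(self):
        # Undo whatever the test wrote, then close the connection.
        self.connection.rollback()
        self.connection.close()

    def test_insert_order(self):
        self.connection.execute("INSERT INTO orders VALUES (42)")
        count = self.connection.execute(
            "SELECT COUNT(*) FROM orders WHERE customer_id = 42").fetchone()[0]
        self.assertEqual(count, 1)

if __name__ == "__main__":
    unittest.main()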
Recommended reading:
http://xunitpatterns.com/
