I am using python-nose to run some tests. The test code is arranged into modules, where each module's fixtures install some VMs in a new configuration, and the module's tests then check that the behaviour of those VMs is as expected.
I want to install a per-module failure handler that goes off and grabs the logs from the VMs if any test in the module fails. Is there a proper way of doing that? Is there some callback you can register with python-nose which will kick off custom code when a test fails?
Thanks,
Sounds like you should write a plugin. You may be interested in defining afterTest(), handleFailure() methods on your Plugin class. Hope that helps.
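A minimal sketch of such a plugin, assuming a hypothetical grab_vm_logs() helper that does the actual log collection:

from nose.plugins import Plugin

class VmLogGrabber(Plugin):
    # Enabled on the command line with --with-vm-log-grabber
    name = 'vm-log-grabber'

    def handleFailure(self, test, err):
        # Called whenever a test fails. Returning None (falsy) lets nose
        # carry on with its normal failure processing afterwards.
        grab_vm_logs(test)

def grab_vm_logs(test):
    # Hypothetical helper: fetch the logs from the VMs installed by the
    # failing test's module, e.g. over scp or your VM management API.
    pass

Register the plugin with a setuptools entry point, or pass it directly when invoking nose, e.g. nose.run(addplugins=[VmLogGrabber()]), and enable it with --with-vm-log-grabber.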
I'm writing xUnit unit test cases for a .NET Core application which uses DocumentDB (Cosmos DB) as storage. The unit tests are written to execute against the local Cosmos DB emulator. On the Azure DevOps build environment, I've set up the Azure Cosmos DB CI/CD task, which internally creates a container to install the emulator. However, I'm not able to figure out how the endpoint of the emulator can be passed to the xUnit fixture.
Is there any way for an xUnit fixture to read the .runsettings test parameters, or can parameters be passed in via some other source?
Update
Currently I have implemented the scenario using an environment variable, but I'm still not happy about defining the connection string as an environment variable using PowerShell in the build task and reading it in through code during unit test execution. I was wondering if there could be another way of achieving it.
The snapshot below shows how the build tasks are currently configured as a workaround to achieve the desired result:
And the code to read the value:
var serviceEndpoint = Environment.GetEnvironmentVariable("CosmosDbEmulatorEndpointEnvironmentVariable");
Since the unit test task provides the option to pass a .runsettings/.testsettings file, with the option to override the test run parameters, I was thinking something could be achieved using those options.
This is not supported in xUnit.
See the SO answers here and here, and this GitHub issue indicating that it is not something that will be supported in xUnit.
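That said, the environment-variable workaround can at least be wrapped up in a fixture so it lives in one place. A minimal sketch, assuming the variable name from your build task and the emulator's default local endpoint as a fallback:

using System;

public class CosmosDbFixture : IDisposable
{
    public string ServiceEndpoint { get; }

    public CosmosDbFixture()
    {
        // Falls back to the emulator's well-known local endpoint when the
        // variable is not set (e.g. when running on a developer machine).
        ServiceEndpoint =
            Environment.GetEnvironmentVariable("CosmosDbEmulatorEndpointEnvironmentVariable")
            ?? "https://localhost:8081/";
    }

    public void Dispose()
    {
        // Clean up any test databases/collections here if needed.
    }
}

Test classes can then consume it via IClassFixture<CosmosDbFixture>.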
My development server is my Windows computer, and I want to test the task I created before using it on my server with real users.
I know about the Windows Task Scheduler, but it's very limited, and I want to run my task right now, for example, and test it before uploading.
What's the best solution for making sure the task is all right before using it on the server?
You should always unit test your task: just invoke the methods in your task from within your test methods.
Task
$this->dispatch(new SendWelcomeEmail($user));
Test method
Mail::send(new SendWelcomeEmail($user, $view));
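A slightly fuller sketch of such a test, assuming a recent Laravel version (Mail::fake() is available from 5.3) and a hypothetical WelcomeEmail mailable sent by the job:

use Illuminate\Support\Facades\Mail;

class SendWelcomeEmailTest extends TestCase
{
    public function testWelcomeEmailIsSent()
    {
        Mail::fake();                            // intercept outgoing mail

        $user = factory(User::class)->create();  // hypothetical User factory
        dispatch(new SendWelcomeEmail($user));   // run the task synchronously

        Mail::assertSent(WelcomeEmail::class);   // verify the mail went out
    }
}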
You can also see this thread to find out which task scheduler you can use to test your command in some local integration tests.
The easiest approach altogether is to make a virtual machine that resembles your production server and just test the tasks in there.
I'm running some unit tests on Android, and I have to run an HTTP server locally to get images from it during my tests. Until now, I've been using a Python script that runs Python's SimpleHTTPServer, then calls Gradle, then kills the server at the end.
I've found that there is a SimpleHttpFileServer that could be used in Gradle.
But I can't make it work. I found absolutely nothing about the usage of this class, except the doc here:
https://gradle.org/docs/current/javadoc/org/gradle/plugins/javascript/envjs/http/simple/SimpleHttpFileServer.html
which doesn't say much. What is a Stoppable, for instance? There's no doc about it.
When I try to use it, I get this error:
Could not find matching constructor for:
org.gradle.plugins.javascript.envjs.http.simple.SimpleHttpFileServer
Has anyone here played with that? Or do you see any other way to do it? Another idea was to run the Python server from Gradle, then kill it at the end, probably using task.finalizedBy, but I didn't succeed with that either; roughly, I'm trying something like the sketch below.
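A sketch of the idea (the port, the served directory, and the 'test' task name are placeholders):

def serverProcess = null

task startImageServer {
    doLast {
        // Serve the test images from a local directory on port 8000
        serverProcess = new ProcessBuilder('python', '-m', 'SimpleHTTPServer', '8000')
                .directory(file('src/test/resources'))
                .start()
    }
}

task stopImageServer {
    doLast {
        // finalizedBy guarantees this runs even if the tests failed
        serverProcess?.destroy()
    }
}

test.dependsOn startImageServer
test.finalizedBy stopImageServer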
Any help on any of those methods would be appreciated.
Thanks,
Gérald
My group will be implementing CI using Jenkins. As such, I want to make sure that any unit and/or integration tests we create integrate easily with Jenkins. We have several different technologies in our stack we are using from C++ code to Oracle PL/SQL packages to Groovy code. We want to develop test drivers (code that wraps and tests these individual code units) that we can integrate with Jenkins so that these tests are automatically run when we perform commits (git) as well as on a nightly basis. My question is, what are the best practices for writing these test drivers so that they will easily integrate with Jenkins when we implement it?
For example, we have a PL/SQL stored procedure that we want to run tests against as part of our CI testing. I could write a bash shell script that wraps calls to it, or I could write a Java program that calls it. Basically I could wrap it in anything. Then the next question is: is there some sort of standard for outputting results so that Jenkins can easily determine if the test passed or failed?
…is there some sort of standard for outputting results so that Jenkins can easily determine if the test passed or failed?
If your test results are compliant with the JUnit format, Jenkins has a JUnit plugin which gives you a good way of tracking test reports (result trend graphs) as well as test result archiving. Converting an Ant test log to the JUnit format is also easy.
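For reference, the JUnit XML format the plugin consumes is small; a minimal result file looks roughly like this (all names are illustrative):

<testsuite name="plsql-tests" tests="2" failures="1">
  <testcase classname="plsql.MyPackage" name="test_happy_path"/>
  <testcase classname="plsql.MyPackage" name="test_edge_case">
    <failure message="expected 1, got 2">details or stack trace here</failure>
  </testcase>
</testsuite>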
Useful links:
http://nose2.readthedocs.org/en/latest/plugins/junitxml.html
https://wiki.jenkins-ci.org/display/JENKINS/JUnit+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/xUnit+Plugin
Jenkins and JUnit
Basically I could wrap it in anything.
Among your choices, I personally prefer to go with Java, because it gives you better APIs for creating XML files.
Use Python's unittest to wrap any of your tests.
Produce JUnit XML test results.
One easy way of getting any Python unittest to write out JUnit XML is from the command line.
yum install pytest
And call your test script like this:
py.test --junitxml result.xml testscript.py
And in the Jenkins build configuration, under Post-build Actions, add a "Publish JUnit test result report" action with result.xml and any other test result files you produce.
https://docs.python.org/2.7/library/unittest.html
This is just one way of producing JUnit XML results with Python. There are a good few other methods, either using the unittest module, junitxml, or others.
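For completeness, a minimal sketch of what the testscript.py above could look like (the test body is hypothetical):

# testscript.py
import unittest

class StoredProcedureTest(unittest.TestCase):
    def test_procedure_result(self):
        # Hypothetical: call your PL/SQL wrapper here and assert on the result
        result = 1 + 1
        self.assertEqual(result, 2)

if __name__ == '__main__':
    unittest.main()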
Question
Is there a way to have a method that will always run any time the test assembly is run through MSTest?
Similar to how the [TestInitialize] and [ClassInitialize] attributes work, but for the entire assembly. I do not want to have to add code to every test class's [ClassInitialize] method.
Reasoning
Some of my tests interact with the database. They delete data and do other things that would be very harmful to a production database. There is only a configuration file that tells my unit test project to run against the non-production database.
I would feel better if there was a method that would run on startup and say, "Okay, the database name is not 'production'."
Ideas
Log4Net uses an assembly attribute to configure itself.
using log4net.Config;
[assembly: XmlConfigurator()]
Perhaps I can do something similar?
[assembly: CheckDatabaseNameNot("production")]
Have you tried [AssemblyInitialize]?
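A minimal sketch of how that looks, assuming a hypothetical "DatabaseName" app setting; the [AssemblyInitialize] method runs once before any test in the assembly, and if it throws, every test fails immediately instead of touching the database:

using System;
using System.Configuration;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TestAssemblySetup
{
    // Must be static and take a single TestContext parameter.
    [AssemblyInitialize]
    public static void CheckDatabaseIsNotProduction(TestContext context)
    {
        var dbName = ConfigurationManager.AppSettings["DatabaseName"]; // hypothetical key
        if (string.Equals(dbName, "production", StringComparison.OrdinalIgnoreCase))
        {
            throw new InvalidOperationException(
                "Refusing to run destructive tests against the production database.");
        }
    }
}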