We use SpecFlow with NUnit to write our BDD tests.
Since we switched our test runs to run in parallel, we have an issue with some tests failing because a shared resource seems to be altered, which messes up the state we expect when running the scenario.
Unfortunately we cannot simply change this access to a global resource, so I'd like to automatically exclude scenarios from parallel execution when they use a step that modifies said resource.
I saw that there is a setting addNonParallelizableMarkerForTags:
"generator": {
"addNonParallelizableMarkerForTags": [
"NotParallel"
]
},
but unfortunately this seems to work only at feature level, not at scenario level. It also has to be added manually, so developers could forget to add it.
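For context, my understanding is that this setting simply makes the generator emit NUnit's NonParallelizable attribute on the generated code-behind class, which is why it applies to the whole feature. Roughly (feature/class names here are illustrative):

    [NUnit.Framework.TestFixture()]
    [NUnit.Framework.NonParallelizableAttribute()]
    public partial class MyFeatureFeature
    {
        // generated scenario test methods; the attribute sits on the class,
        // so the entire fixture is excluded from parallel execution
    }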
Is such a thing supported in SpecFlow, and if so, how could it be done?
I'm running tests in Visual Studio using Test Explorer with NUnit and a .runsettings file (specified by choosing the GUI option "Select Settings File").
My settings file (called mytests.runsettings) is:
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
<RunConfiguration>
<DisableAppDomain>True</DisableAppDomain>
</RunConfiguration>
<ForceListContent>true</ForceListContent>
<NUnit>
<DomainUsage>None</DomainUsage>
</NUnit>
</RunSettings>
I have verified that it is loading this file (by adding the framework node and setting it to a fake version, which then results in an error).
But no matter what I do, it doesn't run without an AppDomain!
Running from the command line does work:
nunit3-console.exe --domain=None --inprocess MyTests.dll
What do I need to do to get NUnit to use that setting?
I don't believe you can do that.
1. There is no recognized DomainUsage element under NUnit in the runsettings file. DomainUsage is an internally used property, which will be honored if set. But you can't set it that way. Your DomainUsage element is simply being ignored.
2. If the adapter received your DisableAppDomain setting, then it would set DomainUsage to None. However, I don't believe it is actually receiving it.
Point 2 requires some explanation. Note that I haven't worked on the adapter for a few years and I'm going from memory, but here it is...
The DisableAppDomain setting was added to allow Visual Studio to force NUnit to try to run without using an AppDomain. The Test Explorer is supposed to set things up so that it's possible to run that way, i.e. by making sure everything is already available in the current domain.
In order to prevent misuse of the feature, I believe that Test Explorer always overrides any user-provided setting. Again, this is from memory of work that was done a few years ago, but it seems as if the results you are seeing validate it.
The rationale for this past decision was that Test Explorer is completely responsible for setting up the Process and AppDomain used to run the tests. The user has no way to impact that and neither does NUnit. Of course, when using the console runner, that's not the case - control is in the hands of the user.
Something else to investigate is why you feel the need to run without a test AppDomain being created. But that's probably another question. :-)
I'll ask some other folks who may have a better memory than I to look at this as well.
UPDATE:
@Terje, who maintains the adapter now, replied and confirmed that there is no way to set DomainUsage in the runsettings file, or in any other way we know of, when running under the test adapter. The docs have been corrected to avoid the implicit suggestion that it's possible.
We believe, but have not confirmed experimentally, that Test Explorer creates its own AppDomain whenever it uses this setting to suppress its creation by the test adapter.
This answer is a follow-up to the answer from @Charlie above; I just need some more space here.
I've checked on the domains that are being created, based on whether DisableAppDomain is set or not.
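(A minimal way to observe this from inside a test; the fixture below is illustrative, not the actual NUnitCheckDomain code:)

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class DomainCheck
    {
        [Test]
        public void PrintCurrentAppDomain()
        {
            // Shows which AppDomain the test actually executes in.
            Console.WriteLine("FriendlyName:    " + AppDomain.CurrentDomain.FriendlyName);
            Console.WriteLine("ApplicationBase: " + AppDomain.CurrentDomain.SetupInformation.ApplicationBase);
        }
    }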
When DisableAppDomain is not set, the AppDomain is created by NUnit, with the ApplicationBase and the friendly name pointing at the test dll (NUnitCheckDomain, the test assembly in my check).
When DisableAppDomain is set, NUnit no longer sets up its own domain, and you then see the tests run under the testhost AppDomain, i.e. the process that runs all tests.
So this seems to work the way it should and can.
Do you need it to run under some other app domain than one of these?
BTW: if I run the NUnitCheckDomain test assembly using the NUnit3 console, it a) works when run directly, but b) crashes when used with the parameters you give above (--domain=None and --inprocess), being unable to load NUnit.Framework. @Charlie: any reason this should not work the same using nunit3-console?
I'm writing a unified project for 3 smart TVs, and I have 3 corresponding configurations created in Visual Studio. Now I want to execute some CLI scripts depending on the selected configuration.
The problem is that in the new ASP.NET 5 project type there is no editor for post-build events.
I know I have to do this in project.json. What I found is:
"scripts": {
"postbuild": ""
}
But using this one I can't create different CLI scripts for different configurations.
I also found:
"configurations": {
},
And I guess this is probably what I want, but... how do I use it? IntelliSense has no power here, and I wasn't lucky searching the web either...
[edit]
Maybe I should try with .xproj?
You'll need to build a master script which uses the available context and environment variables to switch and run the other scripts of your choice; see the sketch after the variable list below.
In addition to the variables available for compile-related scripts, you also get some for publish-related scripts and some that are available in every script block, as are environment variables returned by Environment.GetEnvironmentVariable.
The IntelliSense in VS2015 Update 3 RTM is misleading here, since you get different variables depending on the script block you're using.
So, your full list of context variables that you can use to control flow in your scripts is:
Every script block:
%project:Directory%
%project:Name%
%project:Version%
Compile specific:
%compile:TargetFramework%
%compile:FullTargetFramework%
%compile:Configuration%
%compile:OutputFile%
%compile:OutputDir%
%compile:ResponseFile%
%compile:RuntimeOutputDir% (only available if there is runtime output)
%compile:RuntimeIdentifier% (only available if there is runtime output)
%compile:CompilerExitCode% (only available in the postcompile script block)
Publish specific:
%publish:ProjectPath%
%publish:Configuration%
%publish:OutputPath%
%publish:TargetFramework%
%publish:FullTargetFramework%
%publish:Runtime%
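For example, one minimal approach is to let the configuration name pick the script, assuming %compile:Configuration% is exposed to the postbuild block as the compile-specific list above suggests (the script path and names are hypothetical):

    "scripts": {
      "postbuild": [ "%project:Directory%\\scripts\\postbuild-%compile:Configuration%.cmd" ]
    }

With configurations named, say, TvBrandA, TvBrandB and TvBrandC, you would keep one postbuild-<Configuration>.cmd per brand next to the project, and the matching one runs after each build.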
I investigated this a bit but did not really get to any good result.
There are some project variables that are exposed in scripts. Unfortunately, those are very limited:
%project:Name% gives you the project name
%project:Directory% gives you the project directory
%project:Version% gives you the project version
So there is no way to access the build configuration or the environment here.
The configurations option in the project.json is also limited to build configurations and only allows declaring compilation options there, so that also doesn’t work.
Unfortunately, there also doesn't seem to be another way to solve this, at least not right now. I would consider sending a pull request to DNX to add some additional project variables, but at the moment it doesn't really make sense to invest time into DNX: after all, it's being replaced by the dotnet CLI. We'll see if that one comes with functionality to access the environment; if not, I might end up submitting a pull request to add it. But until we get there, I'm afraid there is no solution for this.
I have a build environment with multiple agents.
I would like to set up an Agent Requirement in my build that detects if certain software is installed. In some cases I can look in the env.Path for a particular string. But some software doesn't modify the Path.
I realize that after I install the software I could edit the BuildAgent.properties file to set a particular property, but I'd like it to be more automatic.
The specific instance is I have a build that uses MSDeploy to deploy websites, but it won't work if MSDeploy isn't installed. How can I specify in my build that I need an Agent that has MSDeploy installed?
You can build a simple agent plugin. Here are a few suggestions:
Extend AgentLifeCycleAdapter and implement the agentInitialized method
Implement the logic for detecting the necessary application (for example, based on the existence of some file) inside the agentInitialized method
Use agent.getConfiguration().addConfigurationParameter() to report an agent parameter to the server
If your detection logic can be implemented through file detection, you can use FileWatcher to monitor specific files and report parameters based on them, even without restarting the agent
To my knowledge Agent Requirements work simply by validating either the existence of, or the value set in an Agent Parameter. As you say, this requires editing the <agent home>/conf/buildAgent.properties configuration file, either manually or in some automated way.
In terms of automation, you could take the approach of authoring a build configuration that acts as an agent bootstrapper; i.e. a build that runs on all agents (scheduled overnight / manually triggered) and maintains build agent parameters in the <agent home>/conf/buildAgent.properties file depending on certain conditions on the agent. Something like (pseudo):
if [ -e "/path/to/MSDeploy" ]; then
    echo "MSDeployExists=true" >> "<agent home>/conf/buildAgent.properties"
fi
This comes with a big disclaimer; I haven't tried this myself, and I believe that the agent will restart automatically based on changes to this file, so there may be issues with editing that file automatically. But it's a potential solution to maintaining your requirements in a centralised manner, and if it works then great. I use a similar approach to bootstrapping custom build scripts out to all agents to augment the already rich feature set in TeamCity.
I agree with the answer from SteveChapman. It is clear that you can test for environment variables (exists, contains, starts with, etc.). TeamCity is aware of the versions of .NET installed on the agent (as well as Visual Studio and VS SDKs). However, I can't find anything that would be equivalent to a "testpath" sort of capability.
The sure way to know is to add an agent parameter via the <agent home>/conf/buildAgent.properties configuration file.
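For example, if the install/bootstrap step appended a line like the following (the parameter name is made up), the build configuration could then declare an Agent Requirement that the parameter exists:

    # in <agent home>/conf/buildAgent.properties
    system.MSDeployInstalled=true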
Has anyone seen this very strange behaviour before?
I've got a solution with 70 unit tests. All of them pass on my dev machine.
Whenever I commit my changes, our continuous integration process kicks in and the build box will eventually run the same 70 unit tests.
There is only ONE test on the build box that fails all the time.
The error is in one line that only gets a record from our unit test db. (I know it sucks having unit tests rely on data, but please don't focus on that as it's not relevant now.)
The weirdest thing is that when I log on to the build box myself, open up the same Visual Studio solution, and manually kick off the unit tests, the result is: ALL PASS!
Has anyone ever had this weird situation? I'm guessing there is some weird thing going on with CruiseControl.NET and MSTest?
Surely your unit test runner produces a good log that shows the exact exception message or error? It's kinda pointless to guess at it, but an "access denied" kind of error would be an obvious candidate. Set up whatever database engine you use (you forgot to mention that too) to give the user account that runs the tests on the build box access to the tables.
As said in another answer, it doesn't make too much sense to guess about it when there are detailed logs around...
But because I had this situation several times, here's a guess anyway:
The account which is used by the CI server to run the tests may not have appropriate permissions in the database. This would also explain why the same test succeeds when you run it manually (then with your own user account)...
HTH!
Thomas
Thanks for your input, but it wasn't anything related to credentials at all.
I've found out that other tests that were running before that particular one were leaving my unit test database in an inconsistent state, therefore causing errors to the test in question.
It's not good practice to have your unit tests rely on data, so unless you are extremely bound to it like myself, this is what I recommend to everyone: DO NOT RELY ON DATA TO DO YOUR UNIT TESTING! Make sure you have all the good stuff in place, specifically a good IoC/dependency injection container, so your classes are loosely coupled and you can easily mock any interface you may want to unit test!
If you have system tests that you want to run on your build server, or that in general should run correctly on any machine, including your own, then you must make sure that their states are independent.
In your case, you should have each test init prepare the DB it uses (either by copying a file-based DB or by emptying/filling a service-based DB). Each test should also attempt to undo its changes (delete file or empty DB) but not assume that other tests have done so successfully.
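If the database is reachable from .NET test code, one common way to enforce that independence is an ambient transaction per test that is never committed. A minimal MSTest sketch, assuming the code under test enlists in the ambient transaction (SeedTestData is a hypothetical helper):

    using System.Transactions;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class CustomerQueryTests
    {
        private TransactionScope _scope;

        [TestInitialize]
        public void Init()
        {
            // Everything done on connections opened inside this scope is
            // rolled back in Cleanup, so no test can leak state to the next.
            _scope = new TransactionScope();
            SeedTestData();
        }

        [TestCleanup]
        public void Cleanup()
        {
            // Dispose without calling Complete() => rollback.
            _scope.Dispose();
        }

        private void SeedTestData()
        {
            // Hypothetical: insert exactly the rows this fixture expects.
        }
    }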
I have a Delphi application that has many dependencies, and it would be difficult to refactor it to use DUnit (it's huge), so I was thinking about using something like AutomatedQA's TestComplete to do the testing from the front-end UI.
My main problem is that a bugfix or new feature sometimes breaks old code that was previously tested (manually), and used to work.
I have setup the application to use command-line switches to open-up a specific form that could be tested, and I can create a set of values and clicks needed to be done.
But I have a few questions before I do anything drastic... (and before purchasing anything)
1. Is it worth it?
2. Would this be a good way to test?
3. The result of the test should be in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
4. I would need to set up a test database to do all the automated testing; would there be an easy way to automate resetting the test db? Other than drop user cascade, create user, ..., impdp.
5. Is there a way in TestComplete to specify command-line parameters for an exe?
Does anybody have any similar experiences?
I would suggest you plan to use both DUnit and something like TestComplete, as they each serve a different purpose.
DUnit is great for Unit Testing, but is difficult to use for overall application testing and UI testing.
TestComplete is one of the few automated testing products that actually has support for Delphi, and our QA Engineer tells me that their support is very good.
Be aware, though, that setting up automated testing is a large and time-consuming job. If you rigorously apply unit testing and automated UI testing, you could easily end up with more test code than production code.
With a large (existing) application you're in a difficult situation with regards to implementing automated testing.
My recommendation is to set up Unit Testing first, in conjunction with an automated build server. Every time someone checks anything in to source control, the Unit Tests get run automatically. DO NOT try to set up unit tests for everything straight up - it's just too big an effort for an existing application. Just remember to create unit tests whenever you are adding new functionality, and whenever you are about to make changes. I also strongly suggest that whenever a bug is reported that you create a unit test that reproduces the bug BEFORE you fix it.
I'm in a similar situation. (Large app with lots of dependencies). There is almost no automated testing. But there is a big wish to fix this issue. And that's why we are going to tackle some of the problems with each new release.
We are about to release the first version of the new product, and the first signs are good. But it was a lot of work, so for the next release we certainly need some way to automate the test process. That's why I'm already introducing unit tests. Due to the dependencies these are not real unit tests, but you have to start somewhere.
Things we have done:
Introduced a more OO approach, because a big part of the code was still procedural.
Moved stuff between files.
Eliminated dependencies where possible.
But there is far more on the list of requirements, ensuring enough work for the entire team until retirement.
And maybe I'm a bit strange, but cleaning up code can be fun. Refactoring without unit tests is a dangerous undertaking, especially if there are a lot of side effects. We used pair programming to avoid stupid mistakes. And lots of test sessions. But in the end we have cleaner code, and the amount of new bugs introduced was extremely low.
Oh, and be sure you know this is an expensive process. It takes lots of time. And you have to fight the tendency to tackle more than one problem at a time.
I can't answer everything, as I've never used TestComplete, but I can answer some of those.
1 - Yes. Regression testing is worth it. It's quite embarrassing as a developer when the client comes back to you because you've broken something that used to work. It's always a good idea to make sure everything that used to work still does.
4 - Oracle has something called Flashback, which lets you create a restore point in the database. After you've done your testing you can just jump back to this restore point. You can write scripts to use it too, e.g. FLASHBACK DATABASE TO TIMESTAMP TO_TIMESTAMP('2009-02-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS');
We're looking at using VMware to isolate some of our testing.
You can start from a saved snapshot, so you always have a consistent environment and local database state.
VMware actions can be scripted, so you can automatically install your latest build from a network location, launch your tests, and shut down afterwards.
Is it worth it?
Probably. Setting up and maintaining tests can be a big job, but when you have them, tests can be executed very easily and consistently. If your project is evolving, some kind of test suite is very helpful.
Would this be a good way to test?
I would say that a proper DUnit test suite is a better first step. However, if you have a large codebase which is not engineered for testing, setting up functional tests is an even bigger pain than setting up GUI tests.
The result of the test should be in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
TestComplete has ADO and BDE interfaces. Or you can use the OLE interface in VBScript to access everything that's available.
Is there a way in TestComplete to specify command-line parameters for an exe?
Yes.
One way to introduce unit testing in an (old) application could be to have a "start database" (like the "Flashback" feature described by Rich Adams).
The program then runs unit tests that use DUnit to control the GUI.
See "GUI testing with DUnit" at http://delphixtreme.com/wordpress/?p=181.
Every test run starts by restoring the "start database", so that a known set of data can be used.
I would need to set up a test database to do all the automated testing; would there be an easy way to automate resetting the test db?
Use transactions: perform a rollback when the test completes. This should revert everything to the initial state.
Recommended reading:
http://xunitpatterns.com/