I've been tasked with getting our coded UI tests to pass (the tests already exist; I only need to adjust the program code), but I'm running into an interesting problem with some consistently failing tests. When I run these "failed" tests individually, they pass without any problems. I want to change the order of the tests to see if that remedies the situation. The [TestInitialize] method is supposed to start the program from the beginning, but it isn't doing so when all the tests run together on another machine, a Windows Server 2012 box using vstest.console.exe. I think that if I can tinker with the order of the tests, I can at least bypass some of the failures. I see a *.orderedtest file, but it doesn't seem to contain all of the tests being run, so I'm not sure whether that's the right place to make changes. Any advice is greatly appreciated.
Thanks!
You can have something like this to make it ordered:
[TestMethod]
public void MyIntegrationTestLikeUnitTest()
{
ScenarioA();
ScenarioB();
....
}
private void ScenarioA()
{
// Do your Stuff
}
private void ScenarioB()
{
// Do your stuff
}
Or assign a priority to each test, like below:
[TestMethod]
[Priority(1)]
public void FirstTest()
{
    // code
}

[TestMethod]
[Priority(2)]
public void SecondTest()
{
    // code
}

[TestMethod]
[Priority(3)]
public void ThirdTest()
{
    // code
}
I have a Maven project, and running mvn test executes the test cases in a separate process.
My question is: is there any way to run the tests attached to an existing process, so that the tests can use that process's environment?
I found that Surefire has a JVM property (found here), but I could not work out how to set it, or whether that is even the correct understanding.
Update:
Since the project is Spring Boot based, I started the server (call it the test server) like this:
java -jar target/server-0.0.1-SNAPSHOT.jar
And I updated the JUnit test class like this:
@Before
public void setup() throws Exception {
VirtualMachine virtualMachine = attach(getTestServerVM());
System.out.println(virtualMachine.id());
}
@Test
public void emptyTest() {
Stack<String> stack = new Stack<String>();
assertTrue(stack.isEmpty());
}
private static VirtualMachine attach(VirtualMachineDescriptor virtualMachineDescriptor) {
VirtualMachine virtualMachine = null;
try {
virtualMachine = VirtualMachine.attach(virtualMachineDescriptor);
} catch (AttachNotSupportedException anse) {
System.out.println("Couldn't attach " + anse);
} catch (IOException ioe) {
System.out.println("Exception attaching or reading a jvm." + ioe);
}
return virtualMachine;
}
private static VirtualMachineDescriptor getTestServerVM() {
String name = "target/server-0.0.1-SNAPSHOT.jar";
List<VirtualMachineDescriptor> vms = VirtualMachine.list();
VirtualMachineDescriptor serverVM = vms.stream()
.filter(vm -> name.equals(vm.displayName()))
.findAny()
.orElse(null);
System.out.println("Test server id : " + serverVM.id());
return serverVM;
}
Is this sufficient, or do I need to load an agent?
I ask because, although the program shows that the JUnit test class attached successfully, I still see a separate process id for JUnit like this (from running the jcmd command):
3585 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner -version 3 -port 55632 -testLoaderClass org.eclipse.jdt.internal.junit4.runner.JUnit4TestLoader -loaderpluginname org.eclipse.jdt.junit4.runtime -classNames JunitRnD.J
In the end, you are asking: how can I inject arbitrary code into a running JVM?
This is probably technically doable, but it would require a lot of work and probably some dirty hacking. To gain ... what?
Long story short: I think you should step back, look carefully at why you want to do this, and then strive for another solution!
Given your comment: good unit tests should focus on small units. If you need your "whole stack" to be up and running, don't call them unit tests, because they aren't. They are functional/integration tests!
If you really want to look into unit tests, then learn about mocking frameworks ... and probably how to write testable code. These videos are a great start on that topic.
Seriously: attaching to a JVM to run unit tests is not a good answer. Instead, step back, look at your design, and find ways to test it at a "smaller" scope.
Another update, on your 2nd comment: if you really need that level of "dynamics", then I would suggest something completely different: integrate Jython into your product, at least for your "development" setup. That will allow you to send Jython scripts (Python code that can call and use all your Java classes) into the JVM.
But again: look into writing real unit tests; any application can benefit from that. I have coworkers who always tell me that they need to run "functional tests" when debugging problems. Me, I work on the same software stack; I typically need log files, and then I adapt my unit tests to reproduce the issue; I only need "system time" very late in the game, for a final test of the fix before it goes into the field.
But just for the record: we have Jython in our development systems, too. And yes, it is great to even have a "shell" running in your JVM.
I've written a program in Selenium WebDriver, but for my next project I would like to make it more maintainable by using better programming techniques. The main thing I want to focus on is launching the browser once (one session), running, say, 10 different tests, and then closing the browser, but I'm not sure how to do this. Using JUnit, this is how my project is currently laid out:
package example1;
public class TestBase { //main class
@Before
public void setup () {
//launch browser
}
@Test // all tests run here
public void test1(){
login();
homepage();
}
@After
public void teardown(){
//close browser
}
}
package example1;
public class login(){
//do some action
}
package example1;
public class homepage(){
//do some action
}
package example1;
public class storeMethods(){
//all methods are stored which are then called by different classes
}
I'm not sure whether the @Test annotation should even be in the main class or whether it should be in its own class (login(), homepage()), because I read somewhere that tests should not depend on each other. I don't have much experience in Java, but I'm more than willing to learn. I just need some guidance on best practices and how to write good, maintainable tests, so if someone could help me out or point me in the right direction I'd really appreciate it.
While what Robbie Wareham said is correct, that reusing the browser is not a good idea, you said that your overall goal is maintainability.
The technique I've found to increase maintainability is the Page Object pattern, with separate functions for interacting with each page.
The Page Object pattern separates the selectors from the rest of the code. That way, if an element on a page changes and your tests use that element 5 times, you only change your code in one spot. It is also standard to include isLoaded(), a function that checks whether you are already on the page you need, so that you don't reload it.
I would also recommend not having your tests deal directly with the Page you created. If you had a toolbar that you used to get to page X, and then the toolbar changed so that the link you wanted moved into a sub-menu, you would only have to change the one method that clicks that link instead of every test that used it. Creating sets of Selenium commands that interact with the page will keep your tests high-level and easy to read.
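As a rough sketch of the idea (shown here with Selenium's C# bindings; the same shape applies to the Java bindings used in the question, and the page and element names are just examples):

using OpenQA.Selenium;

// Page Object: only this class knows the selectors for the login page.
public class LoginPage
{
    private static readonly By UserField = By.Id("username");
    private static readonly By PassField = By.Id("password");
    private static readonly By LoginButton = By.Id("login");

    private readonly IWebDriver driver;

    public LoginPage(IWebDriver driver)
    {
        this.driver = driver;
    }

    // Standard isLoaded-style check: are we already on this page?
    public bool IsLoaded()
    {
        return driver.FindElements(UserField).Count > 0;
    }

    // High-level action the tests call; if the page changes,
    // only this method needs updating.
    public void LoginAs(string user, string password)
    {
        driver.FindElement(UserField).SendKeys(user);
        driver.FindElement(PassField).SendKeys(password);
        driver.FindElement(LoginButton).Click();
    }
}

A test then calls loginPage.LoginAs(...) instead of touching selectors directly, so a changed id or layout is fixed in one place.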
I would suggest that reusing the browser is not following better automation programming practice.
Reusing the browser will result in unstable and unreliable tests, with inter-test dependencies.
In my opinion, it is far better to have atomic, self-contained tests.
If test runtime is an issue, then look at parallelism and using Selenium Grid.
I have two questions regarding integration testing with VS 2010.
First, I really need a way to execute these test methods in the order I want. Note: I know that in unit testing, methods should run independently of everything else, but these are integration tests, and I do depend on which method runs first.
On the same note, is there a way to keep a variable's value across the tests? For example, the following code currently fails:
[TestClass]
public class UnitTest1
{
int i = 0;
[TestMethod]
public void TestMethod1()
{
i = 5;
}
[TestMethod]
public void TestMethod2()
{
Assert.AreEqual(5, i);
}
}
So is there a way to do any of these?
To execute tests in a specific order, I followed these steps.
In a test project with test1, test2, and test3:
1. Right-click the project and choose 'Add' -> 'New Test...'.
2. Select 'Ordered Test'.
3. Double-click the file that appears, "OrderedTest1.orderedtest".
4. Build the project if it was not built previously.
5. From the list of available tests, select the tests you want and order them.
From that point on, a new test appears in the Test List Editor.
It is an extra test that runs the enclosed tests in the correct order. However, if you carelessly run all the tests in the project, the tests included in the ordered list will be executed twice, so you need to manage test lists or test categories to avoid that (one option using a test category is sketched below).
I tried disabling the individual tests, but that also disables the ordered test; I don't know a better way to handle it.
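One way to do that, sketched below, is to tag the tests that belong to the ordered test with a test category and exclude that category from the normal run. The class, category, and assembly names here are only placeholders, and the filter syntax shown is for vstest.console.exe; adjust it to whatever runner you use.

[TestClass]
public class ScenarioTests
{
    [TestMethod]
    [TestCategory("InOrderedTest")]
    public void Test1() { /* ... */ }

    [TestMethod]
    [TestCategory("InOrderedTest")]
    public void Test2() { /* ... */ }
}

The ordered test is then run on its own, while the regular run excludes the enclosed tests:

vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory!=InOrderedTest"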
It is best practice to use functions to set up the tests and to clean them up, using the attributes [TestInitialize] and [TestCleanup], or [ClassInitialize] and [ClassCleanup].
http://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.unittesting(v=VS.100).aspx
The following code is an example of something similar to what you want:
[TestClass]
public class UnitTest1
{
int i=0;
[TestInitialize]
public void Setup()
{
i = 5;
}
[TestMethod]
public void TestMethod1()
{
Assert.AreEqual(5, i);
}
}
The Setup function will be called before each test is executed.
If you need to pass a value from one test to another, you might consider using a static variable (as sketched below), which is not recommended because the order of execution is non-deterministic.
Usually there is a way to avoid needing a specific order by using the setup/cleanup technique, although admittedly that might not hold for very complex integration tests.
If there is no way to avoid ordering them, you can consider merging them into a single test. That again breaks the best practice of having only one assert per test, but if they depend on each other that heavily it might even be better this way, since one test failing could compromise the results of the others anyway.
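A minimal sketch of the static-variable variant (class and member names are illustrative; it still relies on something like an ordered test to guarantee that TestMethod1 runs before TestMethod2):

[TestClass]
public class OrderDependentTests
{
    // Static so the value survives across tests: MSTest creates a new
    // instance of the test class for every test method, which is why
    // the instance field "int i" in the question is reset each time.
    private static int shared;

    [TestMethod]
    public void TestMethod1()
    {
        shared = 5;
    }

    [TestMethod]
    public void TestMethod2()
    {
        // Passes only if TestMethod1 has already run.
        Assert.AreEqual(5, shared);
    }
}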
EDIT:
Maybe using ordered tests answers question 1, and using static variables answers question 2:
http://msdn.microsoft.com/en-us/library/ms182631.aspx
I have an assembly that I've built on Windows, and I want to run the xUnit tests on Mono in Linux.
However, I have found that while 400 of these tests run (in order), certain tests either hang the xUnit runner or bring it down entirely.
I don't care if certain tests can't run on Linux; some of them deal with the DTC and some unmanaged gumph that we don't need to support there.
What I do want, however, is to apply an ignore to those tests and have the fact that a test was ignored flagged properly in the build output.
The question can be boiled down, I guess, to a number of possible solutions:
How do I run specific tests in xUnit via the console runner? (I haven't found documentation on this; maybe I'm just not looking hard enough.)
Is it possible to go the other way and say "here is an assembly, but please ignore these specific tests"?
Having an attribute on those tests has been suggested as a better way to formally document that they are platform-specific; is this possible?
If I could avoid modifying the original code too much that would be grand, as the code isn't really mine to change, and applying lots of cross-platform hacks probably won't go down too well.
xUnit v2.0 is now available, and it supports skippable tests directly. Use:
[Fact(Skip = "specific reason")]
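For example, on one of the platform-specific tests (the method name and reason are only illustrative):

[Fact(Skip = "Relies on the DTC, which we do not support on Mono/Linux")]
public void EnlistsInADistributedTransaction()
{
    // Never executed; the runner reports it as skipped in the output.
}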
I would avoid externalising the skipping of tests (i.e. via a config/command file), if possible. That somewhat works against making the tests easy to run and trustworthy. Marking the tests as ignored in code is the safest approach once other people get involved.
I can see a number of options; here are two that involve modifying the existing code.
Option 1 - most intrusive: compile-time platform detection
In the VS solution, define another configuration that defines a preprocessor flag MONOWIN (just so that it is explicitly a flag that says the code is compiled on Windows for use on Mono).
Then define an attribute that will mark the test as ignored when compiled for Mono:
public class IgnoreOnMonoFactAttribute : FactAttribute {
#if MONOWIN
public IgnoreOnMonoFactAttribute() {
Skip = "Ignored on Mono";
}
#endif
}
It's actually hard to find any advantages to this method, as it involves messing with the original solution and adds another configuration that needs to be supported.
Option 2 - somewhat intrusive: runtime platform detection
Here is a similar solution to option 1, except that no separate configuration is required:
public class IgnoreOnMonoFactAttribute : FactAttribute {
public IgnoreOnMonoFactAttribute() {
if(IsRunningOnMono()) {
Skip = "Ignored on Mono";
}
}
/// <summary>
/// Determine if runtime is Mono.
/// Taken from http://stackoverflow.com/questions/721161
/// </summary>
/// <returns>True if being executed in Mono, false otherwise.</returns>
public static bool IsRunningOnMono() {
return Type.GetType("Mono.Runtime") != null;
}
}
Note 1
The xUnit runner will run a method twice if it is marked with both [Fact] and [IgnoreOnMonoFact]. (CodeRush doesn't do that; in this case I assume xUnit is correct.) This means that any test methods must have [Fact] replaced with [IgnoreOnMonoFact].
Note 2
The CodeRush test runner still ran the [IgnoreOnMonoFact] test, but it did ignore the [Fact(Skip="reason")] test. I assume this is because CodeRush reflects over xUnit rather than actually running it with the xUnit libraries. This works fine with the xUnit runner.
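With that approach, a platform-specific test is then written with only the custom attribute (the method name is just an example):

[IgnoreOnMonoFact]
public void UsesTheDtc()
{
    // Runs normally on Windows; reported as skipped when running on Mono.
}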
There is a new option now.
Add the NuGet package SkippableFact, which allows you to use [SkippableFact] instead of [Fact]; you can then use Skip.<xyz> within a test to dynamically skip it at runtime.
Example:
[SkippableFact]
public void SomeTestForWindowsOnly()
{
Skip.IfNot(Environment.IsWindows);
// Test Windows only functionality.
}
[Fact(Skip="reason")]
works, but I prefer to use traits:
[Fact, Trait("type","unit")]
public void MyUnitTest(){
// given
// when
// then
}
[Fact, Trait("type","http")]
public void MyHttpIntegrationTest(){
// given
// when do things over HTTP
// then
}
Usage:
dotnet test --filter type=unit
This protects our builds from accidentally running integration tests that devs forgot to skip (e.g. with [Fact(Skip="Integration")]); however, it does require unit tests to "opt in" to CI by adding the correct trait, which admittedly isn't great.
Dominik's solution worked for me with this code:
[SkippableFact]
public void Get_WhenCall_ReturnsString()
{
// Arrange
Skip.IfNot(RuntimeInformation.IsOSPlatform(OSPlatform.Windows));
// Act
// Assert
}
To add to the previous answers regarding SkippableFact: note that each test class is still constructed; the constructor runs.
If you have time-consuming code in a base class constructor, an alternative is to gather environment-specific test cases in suitable files and run the environment check in the constructor:
if (!SupportsTemporalQueries())
throw new SkipException("This test class only runs in environments support temporal queries");
This can speed up the test run considerably. In our system we either extend a "generic" base test class (which runs in all environments) or an environment-specific base test class. I find this easier to maintain than filtering in pipelines or other solutions.
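A rough sketch of that layout, assuming the SkipException type from the SkippableFact package and a hypothetical SupportsTemporalQueries() check (replace it with whatever environment detection your system actually uses):

using System;
using Xunit;

// Base class for tests that only make sense where temporal queries work.
public abstract class TemporalQueryTestBase
{
    protected TemporalQueryTestBase()
    {
        // Throwing SkipException here reports every test in the derived
        // class as skipped instead of failed, before any expensive setup runs.
        if (!SupportsTemporalQueries())
            throw new SkipException("This test class only runs in environments that support temporal queries");
    }

    // Hypothetical environment check.
    protected static bool SupportsTemporalQueries() =>
        Environment.GetEnvironmentVariable("SUPPORTS_TEMPORAL_QUERIES") == "true";
}

public class TemporalAuditTests : TemporalQueryTestBase
{
    [SkippableFact]
    public void ChangeHistory_IsRecorded()
    {
        // ...test body...
    }
}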
This is now solved in 1.8: you can filter on traits. See this issue log.
Update: traits work with the console runner but not MSBuild; I've added a feature request for that support.
I wonder whether it is possible to run performance tests based on xUnit?
I'm not sure exactly what you're asking, but you can easily write your own custom attribute to do that. For example:
using System;
using System.Diagnostics;
using System.Reflection;
using Xunit.Sdk;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class TraceAttribute : BeforeAfterTestAttribute
{
    private readonly Stopwatch stopwatch = new Stopwatch();

    public override void Before(MethodInfo methodUnderTest)
    {
        // Start the timer just before the test method runs.
        stopwatch.Restart();
    }

    public override void After(MethodInfo methodUnderTest)
    {
        // Stop the timer and report the elapsed time.
        stopwatch.Stop();
        Trace.WriteLine(methodUnderTest.Name + " took " + stopwatch.ElapsedMilliseconds + " ms");
    }
}
Then decorate your unit test with this attribute. Also make sure you write the timing to some output ;)
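For instance (the test and its contents are just a placeholder):

[Fact]
[Trace]
public void ParseLargeInput_CompletesQuickly()
{
    // ...the code being timed by the Trace attribute...
}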
For JUnit, you might look at JUnitPerf.
If you're working in a different language with a different xUnit framework, the JUnitPerf design concept (decorating normal JUnit tests) and code might give you ideas on how to do the same in your language.
Now there is a project on GitHub: https://github.com/Microsoft/xunit-performance
It provides extensions over xUnit for authoring performance tests (benchmarks).