JUnit - How to attach tests to an existing process - Maven

I have a Maven project, and running mvn test executes the test cases in a separate process.
My question is: is there any way to run the tests by attaching to an existing process,
so that the tests can use that process's environment?
I found that Surefire has a JVM property, found here, but I could not figure out how to set it, if that is the correct understanding.
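From the Surefire docs, my understanding is that it would be set in the pom.xml roughly like this, though as far as I can tell it only picks the java executable used for the forked test JVM rather than attaching to an already running process (the path below is just an example):
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!-- example path; selects the java executable for the forked test JVM -->
        <jvm>/usr/lib/jvm/java-8-openjdk/bin/java</jvm>
    </configuration>
</plugin>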
Update:
As the project is Spring Boot based, I started the server (call it the test server) like this:
java -jar target/server-0.0.1-SNAPSHOT.jar
And updated the JUnit test class like this:
@Before
public void setup() throws Exception {
    VirtualMachine virtualMachine = attach(getTestServerVM());
    System.out.println(virtualMachine.id());
}

@Test
public void emptyTest() {
    Stack<String> stack = new Stack<String>();
    assertTrue(stack.isEmpty());
}

private static VirtualMachine attach(VirtualMachineDescriptor virtualMachineDescriptor) {
    VirtualMachine virtualMachine = null;
    try {
        virtualMachine = VirtualMachine.attach(virtualMachineDescriptor);
    } catch (AttachNotSupportedException anse) {
        System.out.println("Couldn't attach " + anse);
    } catch (IOException ioe) {
        System.out.println("Exception attaching or reading a jvm." + ioe);
    }
    return virtualMachine;
}

private static VirtualMachineDescriptor getTestServerVM() {
    // Match the JVM whose display name is the jar we launched with java -jar.
    String name = "target/server-0.0.1-SNAPSHOT.jar";
    List<VirtualMachineDescriptor> vms = VirtualMachine.list();
    VirtualMachineDescriptor serverVM = vms.stream()
            .filter(vm -> name.equals(vm.displayName()))
            .findAny()
            .orElse(null);
    if (serverVM == null) {
        throw new IllegalStateException("Test server JVM not found");
    }
    System.out.println("Test server id : " + serverVM.id());
    return serverVM;
}
Is this sufficient, or do I need to load an agent?
I am asking because, although the program shows that the JUnit test class attached successfully, I still see a separate process id for the JUnit run, like this (from running the jcmd command):
3585 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner -version 3 -port 55632 -testLoaderClass org.eclipse.jdt.internal.junit4.runner.JUnit4TestLoader -loaderpluginname org.eclipse.jdt.junit4.runtime -classNames JunitRnD.J
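My understanding so far is that attach() only gives a handle on the target JVM (e.g. for reading its system properties or loading an agent); actually executing anything inside it would need a Java agent jar, roughly like this (the jar path and options below are made up):
// hypothetical sketch: load an agent jar into the attached JVM, then detach
virtualMachine.loadAgent("/path/to/test-agent.jar", "optional=arguments");
virtualMachine.detach();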

In the end, you are asking: how can I inject arbitrary code into a running JVM?
This is probably technically doable, but it would require a lot of work, and probably some dirty hacking. To gain ... what?
Long story short: I think you should step back and carefully look into the "why" of wanting to do that. And then strive for another solution!
Given your comment: good unit tests should focus on small units. If you need your "whole stack" to be up and running, then don't call them unit tests, because they aren't. They are functional/integration tests!
When you really want to look into unit tests, then learn about mocking frameworks ... and probably: how to write testable code. These videos are a great start into that topic.
Seriously: attaching to a JVM to run unit tests is not a good answer. Instead, step back, look into your design, and find ways to test it at a "smaller" scope.
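To illustrate what I mean by mocking, here is a rough Mockito sketch (the repository and service classes below are made up for the example):
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class PriceServiceTest {

    // Hypothetical collaborator that would normally hit a database or remote service.
    public interface ProductRepository {
        double findPrice(String productName);
    }

    // Hypothetical unit under test: plain logic, dependency passed in via the constructor.
    public static class PriceService {
        private final ProductRepository repository;
        public PriceService(ProductRepository repository) { this.repository = repository; }
        public double totalFor(String productName, int quantity) {
            return repository.findPrice(productName) * quantity;
        }
    }

    @Test
    public void totalIsCalculatedFromTheMockedRepository() {
        // replace the real dependency with a mock instead of a running server
        ProductRepository repository = mock(ProductRepository.class);
        when(repository.findPrice("book")).thenReturn(10.0);

        PriceService service = new PriceService(repository);

        assertEquals(20.0, service.totalFor("book", 2), 0.001);
        verify(repository).findPrice("book");
    }
}
No server, no attach, no deployment - just the small unit and its (mocked) collaborators.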
Another update on your 2nd comment: if you really need such a level of "dynamics", then I would suggest something completely different: integrate Jython into your product, at least for your "development" setup. That will allow you to send Jython scripts (Python code that can call and use all your Java classes) into the JVM.
But again: look into writing real unit tests; any application can benefit from that. I have coworkers who always tell me that they need to run "functional tests" when debugging problems. Me, I work on the same software stack; I typically need log files, and then I adapt my unit tests to reproduce the issue; I only need "system time" very late in the game, for a final test of the fix before it goes into the field.
But just for the record: we have Jython in our development systems, too. And yes, it is great to even have a "shell" running in your JVM.
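Just to give an idea of what that looks like, embedding Jython boils down to something like this (a minimal sketch; the script content is only an example):
import org.python.util.PythonInterpreter;

public class JythonShellSketch {
    public static void main(String[] args) {
        PythonInterpreter interpreter = new PythonInterpreter();
        // expose an existing Java object to the script, then run Python against it
        interpreter.set("who", "JVM");
        interpreter.exec("print('hello from the ' + who)");
    }
}
In a real setup you would feed such scripts into the running JVM (for example over a small shell endpoint) instead of hard-coding them.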

Related

Changing order of Coded UI test in Visual Studio 2013

I've been tasked with making our coded UI tests pass (the tests are already created; I just needed to adjust the program code), but I'm having an interesting problem with some consistently failing tests. When I run these "failed" tests individually, they pass without any problems. I want to change the order of the tests to see if this can remedy the situation. The [TestInitialize] method is set to start the program from the beginning, but isn't doing so when all the tests run together on another machine with Windows Server 2012 using vstest.console.exe. I think if I can tinker with the order of the tests, I can at least bypass some of the failures that are happening. I see a *.orderedtest file, but it doesn't seem to have all the tests that are being tested in there, so I'm not sure if that's the area to mess with. Any advice is greatly appreciated.
Thanks!
You can have something like this to make them ordered:
[TestMethod]
public void MyIntegratonTestLikeUnitTest()
{
    ScenarioA();
    ScenarioB();
    ....
}
private void ScenarioA()
{
    // Do your stuff
}
private void ScenarioB()
{
    // Do your stuff
}
Or assign a priority to each test, like below:
[TestMethod]
[Priority(1)]
public void Test1()
{
    // Code.
}
[TestMethod]
[Priority(2)]
public void Test2()
{
    // Code.
}
[TestMethod]
[Priority(3)]
public void Test3()
{
    // Code.
}

Multiple JUnit tests in one browser session

I've written a program with Selenium WebDriver, but for my next project I would like to make it more maintainable by using better programming techniques. The main part I want to focus on is launching the browser once (one session), running, say, 10 different tests, and then closing the browser, but I'm not sure how to do this. Using JUnit, this is how I currently have my project laid out:
package example1;
public class TestBase { //main class
    @Before
    public void setup() {
        //launch browser
    }
    @Test //all tests run here
    public void test1() {
        login();
        homepage();
    }
    @After
    public void teardown() {
        //close browser
    }
}
package example1;
public class login {
    //do some action
}
package example1;
public class homepage {
    //do some action
}
package example1;
public class storeMethods {
    //all methods are stored here and are called by different classes
}
I'm not sure if the @Test annotation should even be in the main class or if it should be in its own class (login(), homepage()), because I read somewhere that tests should not depend on each other. I don't have much experience in Java, but I'm more than willing to learn. I just need some guidance on best practices and how to write good, maintainable tests, so if someone could help me out or point me in the right direction I'd really appreciate it.
While what Robbie Wareham said is correct, that reusing the browser is not a good idea, you said that your overall goal is maintainability.
The technique I've found to increase maintainability is the Page Object pattern, with separate functions to interact with the page.
The Page Object pattern separates the selectors from the rest of the code. That way, if an element on a page changes and your tests use that element 5 times, you only change your code in one spot. It is also standard to include isLoaded(), a function that can be used to identify whether you are already on the page you need, so you don't reload the page.
I would also recommend having your tests not deal directly with the Page you created. If you had a toolbar that you had to use to go to page X, and then the toolbar changed so the link you wanted was in a sub-menu, you would only have to change the method that clicks that link, rather than every test that used it. Creating sets of Selenium commands that interact with the page will make your tests high-level and easy to read.
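As a rough sketch (the class name and locators below are made up), a page object might look like this:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical page object for a login page: the selectors live here, not in the tests.
public class LoginPage {
    private final WebDriver driver;
    private final By usernameField = By.id("username");
    private final By passwordField = By.id("password");
    private final By loginButton = By.id("login");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Lets callers check whether this page is already displayed, so they don't reload it.
    public boolean isLoaded() {
        return !driver.findElements(loginButton).isEmpty();
    }

    // Performs the login interaction; tests stay high-level and readable.
    public void loginAs(String username, String password) {
        driver.findElement(usernameField).sendKeys(username);
        driver.findElement(passwordField).sendKeys(password);
        driver.findElement(loginButton).click();
    }
}
If an element's locator changes, only this class changes; the tests keep calling loginAs(...).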
I would suggest that reusing the browser is not following better automation programming practice.
Reusing the browser will result in unstable and unreliable tests, with inter-test dependencies.
In my opinion, it is far better to have atomic self contained tests.
If test runtime is an issue, then look at parallelism and using Selenium Grid.

What are the possible problems with unit testing ASP.NET MVC code in the following way?

I've been looking at the way unit testing is done in the NuGetGallery. I observed that when controllers are tested, service classes are mocked. This makes sense to me because while testing the controller logic, I didn't want to be worried about the architectural layers below. After using this approach for a while, I noticed how often I was running around fixing my mocks all over my controller tests when my service classes changed. To solve this problem, without consulting people that are smarter than me, I started writing tests like this (don't worry, I haven't gotten that far):
public class PersonController : Controller
{
    private readonly LESRepository _repository;
    public PersonController(LESRepository repository)
    {
        _repository = repository;
    }
    public ActionResult Index(int id)
    {
        var model = _repository.GetAll<Person>()
            .FirstOrDefault(x => x.Id == id);
        var viewModel = new VMPerson(model);
        return View(viewModel);
    }
}
public class PersonControllerTests
{
    public void can_get_person()
    {
        var person = _helper.CreatePerson(username: "John");
        var controller = new PersonController(_repository);
        controller.FakeOutContext();
        var result = (ViewResult)controller.Index(person.Id);
        var model = (VMPerson)result.Model;
        Assert.IsTrue(model.Person.Username == "John");
    }
}
I guess this would be integration testing because I am using a real database (I'd prefer an inmemory one). I begin my test by putting data in my database (each test runs in a transaction and is rolled back when the test completes). Then I call my controller and I really don't care how it retrieves the data from the database (via a repository or service class) just that the Model to be sent to the view must have the record I put into the database aka my assertion. The cool thing about this approach is that a lot of times I can continue to add more layers of complexity without having to change my controller tests:
public class PersonController : Controller
{
    private readonly LESRepository _repository;
    private readonly PersonService _personService;
    public PersonController(LESRepository repository)
    {
        _repository = repository;
        _personService = new PersonService(_repository);
    }
    public ActionResult Index(int id)
    {
        var model = _personService.GetActivePerson(id);
        if (model == null)
            return PersonNotFoundResult();
        var viewModel = new VMPerson(model);
        return View(viewModel);
    }
}
Now I realize I didn't create an interface for my PersonService and pass it into the constructor of my controller. The reason is 1) I don't plan to mock my PersonService and 2) I didn't feel I needed to inject my dependency since my PersonController for now only needs to depend on one type of PersonService.
I'm new at unit testing and I'm always happy to be shown that I'm wrong. Please point out why the way I'm testing my controllers could be a really bad idea (besides the obvious increase in the time my tests will take to run).
Hmm, a few things here, mate.
First, it looks like you're trying to test a controller method. Great :)
So this means that anything the controller needs should be mocked. This is because:
You don't want to worry about what happens inside that dependency.
You can verify that the dependency was called/executed.
OK, so let's look at what you did, and I'll see if I can refactor it to make it a bit more testable.
-REMEMBER- I'm testing the CONTROLLER METHOD, not the stuff the controller method calls/depends upon.
So this means I don't care about the service instance or the repository instance (whichever architectural way you decide to follow).
NOTE: I've kept things simple, so I've stripped lots of crap out, etc.
Interface
First, we need an interface for the repository. This can be implemented as an in-memory repo, an Entity Framework repo, etc. You'll see why soon.
public interface ILESRepository
{
    IQueryable<T> GetAll<T>() where T : class;
}
Controller
Here, we use the interface. This means it's really easy and awesome to use a mock ILESRepository or a real instance.
public class PersonController : Controller
{
    private readonly ILESRepository _repository;
    public PersonController(ILESRepository repository)
    {
        if (repository == null)
        {
            throw new ArgumentNullException("repository");
        }
        _repository = repository;
    }
    public ActionResult Index(int id)
    {
        var model = _repository.GetAll<Person>()
            .FirstOrDefault(x => x.Id == id);
        var viewModel = new VMPerson(model);
        return View(viewModel);
    }
}
Unit Test
Ok - here's the magic money shot stuff.
First, we create some Fake People. Just work with me here... I'll show you where we use this in a tick. It's just a boring, simple list of your POCOs.
public static class FakePeople
{
    public static IList<Person> GetSomeFakePeople()
    {
        return new List<Person>
        {
            new Person { Id = 1, Username = "John" },
            new Person { Id = 2, Username = "Fred" },
            new Person { Id = 3, Username = "Sally" },
        };
    }
}
Now we have the test itself. I'm using xUnit for my testing framework and Moq for my mocking. Any framework is fine here.
public class PersonControllerTests
{
    [Fact]
    public void GivenAListOfPeople_Index_Returns1Person()
    {
        // Arrange.
        var mockRepository = new Mock<ILESRepository>();
        mockRepository.Setup(x => x.GetAll<Person>())
                      .Returns(FakePeople.GetSomeFakePeople()
                                         .AsQueryable());
        var controller = new PersonController(mockRepository.Object);
        controller.FakeOutContext();
        // Act.
        var result = controller.Index(1) as ViewResult;
        // Assert.
        Assert.NotNull(result);
        var model = result.Model as VMPerson;
        Assert.NotNull(model);
        Assert.Equal(1, model.Person.Id);
        Assert.Equal("John", model.Person.Username);
        // Make sure we actually called the GetAll<Person>() method on our mock.
        mockRepository.Verify(x => x.GetAll<Person>(), Times.Once());
    }
}
OK, let's look at what I did.
First, I arrange my crap. I first create a mock of the ILESRepository.
Then I say: if anyone ever calls the GetAll<Person>() method, well .. don't -really- hit a database or a file or whatever .. just return a list of people, which we created in FakePeople.GetSomeFakePeople().
So this is what would happen in the controller ...
var model = _repository.GetAll<Person>()
    .FirstOrDefault(x => x.Id == id);
First, we ask our mock to hit the GetAll<Person>() method. I just 'set it up' to return a list of people .. so then we have a list of 3 Person objects. Next, we call FirstOrDefault(...) on this list of 3 Person objects .. which returns a single object or null, depending on the value of id.
Tada! That's the money shot :)
Now back to the rest of the unit test.
We Act and then we Assert. Nothing hard there.
For bonus points, I verify that we've actually called the GetAll<Person>() method on the mock .. inside the Controller's Index method. This is a safety check to make sure the controller logic (which we're testing) was done right.
Sometimes you might want to check for bad scenarios, like a person passing in bad data. This means you might never even get to the mock methods (which is correct), so you verify that they were never called.
Ok - questions, class?
Even when you do not plan to mock an interface, I strongly suggest you do not hide the real dependencies of an object by creating the objects inside the constructor; you are breaking the Single Responsibility Principle and writing untestable code.
The most important thing to consider when writing tests is: "There is no magic key to writing tests". There are a lot of tools out there to help you write tests, but the real effort should be put into writing testable code, rather than trying to hack your existing code to write a test, which usually ends up being an integration test instead of a unit test.
Creating a new object inside a constructor is one of the first big signals that your code is not testable.
These links helped me a lot when I was making the transition to writing tests, and let me tell you that after you start, it becomes a natural part of your daily work and you will love the benefits of writing tests. I cannot picture myself writing code without tests anymore.
Clean code guide (used in Google): http://misko.hevery.com/code-reviewers-guide/
To get more information read the following:
http://misko.hevery.com/2008/09/30/to-new-or-not-to-new/
and watch this video cast from Misko Hevery
http://www.youtube.com/watch?v=wEhu57pih5w&feature=player_embedded
Edited:
This article from Martin Fowler explains the difference between the Classical and Mockist TDD approaches:
http://martinfowler.com/articles/mocksArentStubs.html
As a summary:
Classic TDD approach: This means testing everything you can without creating substitutes or doubles (mocks, stubs, dummies), with the exception of external services like web services or databases. Classical testers use doubles for the external services only.
Benefits: When you test you are actually testing the wiring logic of your application and the logic itself (not in isolation)
Cons: If an error occurs you will see potentially hundreds of tests failing and it will be hard to find the code responsible
Mockist TDD approach: People following the Mockist approach will test all the code in isolation, because they create doubles for every dependency.
Benefits: You are testing in isolation each part of your application. If an error occurs, you know exactly where it occurred because just a few tests will fail, ideally only one
Cons: Well, you have to double all your dependencies, which makes tests a little bit harder to write, but you can use tools like AutoFixture to create doubles for the dependencies automatically.
This is another great article about writing testable code
http://www.loosecouplings.com/2011/01/how-to-write-testable-code-overview.html
There are some downsides.
First, when you have a test that depends on an external component (like a live database), that test is no longer really predictable. It can fail for any number of reasons - a network outage, a changed password on the database account, missing some DLLs, etc. So when your test suddenly fails, you cannot be immediately sure where the flaw is. Is it a database problem? Some tricky bug in your class?
When you can immediately answer that question just by knowing which test failed, you have the enviable quality of defect localization.
Secondly, if there is a database problem, all your tests that depend on it will fail at once. This might not be so severe, since you can probably realize what the cause is, but I guarantee it will slow you down to examine each one. Widespread failures can mask real problems, because you don't want to look at the exception on each of 50 tests.
And I know you want to hear about factors besides the execution time, but that really does matter. You want to run your tests as frequently as possible, and a longer runtime discourages that.
I have two projects: one with 600+ tests that run in 10 seconds, one with 40+ tests that runs in 50 seconds (this project does actually talk to a database, on purpose). I run the faster test suite much more frequently while developing. Guess which one I find easier to work with?
All of that said, there is value in testing external components. Just not when you're unit-testing. Integration tests are more brittle, and slower. That makes them more expensive.
Accessing the database in unit tests has the following consequences:
Performance. Populating the database and accessing it is slow. The more tests you have, the longer the wait. If you use mocking, your controller tests may run in a couple of milliseconds each, compared to seconds if they use the database directly.
Complexity. For shared databases, you'll have to deal with concurrency issues where multiple agents are running tests against the same database. The database needs to be provisioned, structure needs to be created, data populated etc. It becomes rather complex.
Coverage. You might find that some conditions are nearly impossible to test without mocking. Examples include verifying what to do when the database times out, or what to do if sending an email fails.
Maintenance. Changes to your database schema, especially if they are frequent, will impact almost every test that uses the database. In the beginning, when you have 10 tests, it may not seem like much, but consider when you have 2000 tests. You may also find that changing business rules and adapting the tests becomes more complex, as you'll have to modify the data populated in the database to verify the business rule.
You have to ask whether it is worth it for testing business rules. In most cases, the answer may be "no".
The approach I follow is:
Unit test classes (controllers, service layers etc.) by mocking out dependencies and simulating conditions that may occur (like database errors etc.). These tests verify business logic, and the aim is to cover as many decision paths as possible. Use a tool like PEX to highlight any issues you never thought of. You'll be surprised how much more robust (and resilient) your application will be after fixing some of the issues PEX highlights.
Write database tests to verify that the ORM I'm using works with the underlying database. You'll be surprised at the issues EF and other ORMs have with certain database engines (and versions). These tests are also useful for tuning performance and reducing the amount of queries and data being sent to and from the database.
Write coded UI tests that automate the browser and verify that the system actually works. In this case I would populate the database beforehand with some data. These tests simply automate the tests I would have done manually. The aim is to verify that critical pieces still work.

How to order methods of execution using Visual Studio to do integration testing?

I have 2 questions in regard to doing integration testing using VS 2010.
First, I'm really in need of finding a way to execute these test methods in the order I want. Note: I know that in unit testing, methods should run standalone from anything else, but these are integration tests, where I do depend on the order in which the methods run.
On the same note, is there a way to keep a local variable across the tests? For example, the following code currently fails:
[TestClass]
public class UnitTest1
{
    int i = 0;
    [TestMethod]
    public void TestMethod1()
    {
        i = 5;
    }
    [TestMethod]
    public void TestMethod2()
    {
        Assert.AreEqual(5, i);
    }
}
So is there a way to do any of these?
To execute tests in a specific order I followed these steps:
In a test project with test1, test2 and test3:
1. Right-click on the project, 'Add' -> 'New Test...'
2. Select 'Ordered Test'
3. Double-click the file that appears, "OrderedTest1.orderedtest"
4. Build the project if it was not built previously
5. From the list of available tests, select the tests you want and order them
From that point on, a new test appears in the test list editor.
It is an extra test that runs the enclosed tests in the correct order, but if you carelessly run all the tests in the project, the tests included in the ordered list will be executed twice, so you need to somehow manage lists or test categories to avoid that.
I tried disabling the individual tests, but that also disables the ordered test; I don't know a better way to do so.
It is best practice to use functions to set up the tests and to clean them up, by using the attributes [TestInitialize] and [TestCleanup] or [ClassInitialize] and [ClassCleanup].
http://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.unittesting(v=VS.100).aspx
The following code is an example of something similar to what you want:
[TestClass]
public class UnitTest1
{
    int i = 0;
    [TestInitialize]
    public void Setup()
    {
        i = 5;
    }
    [TestMethod]
    public void TestMethod1()
    {
        Assert.AreEqual(5, i);
    }
}
The Setup function will be called before each test is executed.
If you need to pass a value from one test to the other, you might consider using a static variable, which is not recommended due to the nondeterministic order of execution.
Usually there is a way to avoid needing a specific order by using the setup/cleanup technique, but it is true that this might not hold for very complex integration tests.
If there is no possible way to avoid reordering, you can consider merging them into one test, again breaking the best practice of having only one assert per test; but if they are so dependent on one another, it might even be better this way, as one test failing might compromise the results of the others.
EDIT:
Maybe using ordered tests answers question 1, and using static variables question 2:
http://msdn.microsoft.com/en-us/library/ms182631.aspx

How do I skip specific tests in xUnit based on current platform

I have an assembly that I've built on Windows.
I want to run the xUnit tests on mono in Linux.
However, I have found that while 400 of these tests can run (in order), certain tests either hang the xUnit runner or bring it down entirely.
I don't care if certain tests are not able to run on Linux, certain tests are to do with the DTC and some unmanaged gumph that we don't need to support there.
What I do want however, is to apply an ignore to those tests, and have the fact that the test was ignored flagged properly in the build output.
The question can be boiled down, I guess, to a number of possible solutions:
How do I run specific tests in xUnit via the console runner? (I haven't found documentation to this end, maybe I'm just not looking hard enough)
Is it possible to go the other way and say "Here is an assembly, please ignore these specific tests though"
Having an attribute on those tests has been suggested as a better way, to formally document that these tests are platform specific - is this possible?
If I could avoid modifying the original code too much that would be grand, as the code isn't really mine to change, and applying lots of cross-platform hacks probably won't go down too well.
xUnit v2.0 is now available. Skippable tests are supported by it directly. Use:
[Fact (Skip = "specific reason")]
I would avoid externalising the skipping of tests (i.e. via a config/command file) if possible. This somewhat goes against making the tests easy to run and trustworthy. Making the tests ignored in code is the safest approach when other people start to get involved.
I could see a number of options, here are two that involve modification of existing code.
Option 1 - Most intrusive, compile time platform detection
In the VS solution, define another configuration that defines a precompiler flag MONOWIN (just so that it's explicitly a flag that says it is for code compiled on Windows for use on Mono).
Then define an attribute that will make the test ignored when compiled for Mono:
public class IgnoreOnMonoFactAttribute : FactAttribute {
#if MONOWIN
    public IgnoreOnMonoFactAttribute() {
        Skip = "Ignored on Mono";
    }
#endif
}
It's actually hard to find any advantages to this method, as it involves messing with the original solution and adds another configuration that needs to be supported.
Option 2 - somewhat intrusive - runtime platform detection
Here is a similar solution to option 1, except no separate configuration is required:
public class IgnoreOnMonoFactAttribute : FactAttribute {
    public IgnoreOnMonoFactAttribute() {
        if (IsRunningOnMono()) {
            Skip = "Ignored on Mono";
        }
    }
    /// <summary>
    /// Determine if runtime is Mono.
    /// Taken from http://stackoverflow.com/questions/721161
    /// </summary>
    /// <returns>True if being executed in Mono, false otherwise.</returns>
    public static bool IsRunningOnMono() {
        return Type.GetType("Mono.Runtime") != null;
    }
}
Note 1
The xUnit runner will run a method twice if it is marked with both [Fact] and [IgnoreOnMonoFact]. (CodeRush doesn't do that; in this case I assume xUnit is correct.) This means that any test methods must have [Fact] replaced with [IgnoreOnMonoFact].
Note 2
The CodeRush test runner still ran the [IgnoreOnMonoFact] test, but it did ignore the [Fact(Skip="reason")] test. I assume this is due to CodeRush reflecting over xUnit and not actually running it with the aid of the xUnit libraries. This works fine with the xUnit runner.
There is a new option now.
Add the NuGet package SkippableFact, which allows you to use [SkippableFact] instead of [Fact], and you can use Skip.<xyz> within a test to dynamically skip it at runtime.
Example:
[SkippableFact]
public void SomeTestForWindowsOnly()
{
    Skip.IfNot(Environment.IsWindows);
    // Test Windows only functionality.
}
[Fact(Skip="reason")]
works, but I prefer to use traits:
[Fact, Trait("type","unit")]
public void MyUnitTest(){
    // given
    // when
    // then
}
[Fact, Trait("type","http")]
public void MyHttpIntegrationTest(){
    // given
    // when do things over HTTP
    // then
}
Usage:
dotnet test --filter type=unit
This protects our builds from accidentally running integration tests that devs forgot to skip, e.g. [Fact(Skip="Integration")]; however, it does require unit tests to "opt in" to CI by adding the correct traits, which admittedly isn't great.
Dominik's solution worked for me with this code:
[SkippableFact]
public void Get_WhenCall_ReturnsString()
{
    // Arrange
    Skip.IfNot(RuntimeInformation.IsOSPlatform(OSPlatform.Windows));
    // Act
    // Assert
}
To add to the previous answers regarding SkippableFact: note that each of the tests is still constructed - the constructor is run.
If you have time-consuming code in a base class constructor, an alternative is to gather environment-specific test cases in suitable files and run the environment check in the constructor:
if (!SupportsTemporalQueries())
    throw new SkipException("This test class only runs in environments that support temporal queries");
This can speed up the test run considerably. In our system we either extend a "generic" base test class (which runs in all environments) or an environment-specific base test class. I find this easier to maintain than filtering in pipelines or other solutions.
This is now solved in 1.8 - you can filter on Traits. See this issue log.
Update: Traits work with the console runner but not with MSBuild; I've added a feature request for this support.
