I have a test framework written in Scala+Cucumber. In the step definition classes, it resolves various dependencies to Spring beans like:
class TestSteps {
  val b1 = dependency("bean_b1")

  When("I do xxx") { () =>
    b1.doSomething()
  }
}
I'd like to run the test in various scenarios that require the Spring context to be initialized differently, so that bean_b1 would be initialized differently. I want to activate these scenarios during step execution and write something like:
Given I set test scenario to "scenario1"
When I do xxx
Then I should receive ZZZ
Given I set test scenario to "no_network_available"
When I do xxx
Then I should receive ConnectException
This could be resolved by launching the whole test suite with a different Spring context, but I need to execute the first steps with the network available and the last steps with the network down.
I was thinking that dependency("...") could resolve to a bean in a different Spring context (loaded based on the scenario). But how do I make the tests work, since I have a lot of step definition classes that use "val b1 = dependency("bean_b1")", and those vals are evaluated once and stay the same even if dependency("bean_b1") now returns something else?
Thanks!
A simple question: How do you differentiate between a feature, unit and integration test?
There are a lot of differing opinions, but I'm specifically trying to determine how to organise a Laravel test which touches a model's relationship. Here is an example of some PHP code which would require testing:
public function prices()
{
    return $this->hasMany(Prices::class);
}

public function getPriceAttribute()
{
    return $this->prices()->first() * 2;
}
The test descriptions as I understand them (feel free to correct me):
Unit test
Tests the smallest part of your code
Does not touch the database
Does not interact with any other part of the system
Integration test
Tests part of the system working together
e.g. controllers which call helper functions that need to be tested together
Feature test
Blackbox test
e.g. call an API endpoint, see that it has returned the correct JSON response
Here is my issue given those descriptions:
My Laravel model test needs to test the smallest unit of code - the calculated accessor of a model, which makes it feel like a Unit test
But, it touches the database when it loads the model's relationship
It doesn't feel like an Integration test, because it is only touching other related models, not internal or external services
Other property accessor tests in Laravel would fall under Unit tests when they do not touch the database or the model's relationships
Separating these types of tests into integration tests would mean that a single model's tests against its properties are fragmented between integration and unit tests
So, without mocking relationships between models, where would my test belong?
If I’m interpreting your original question correctly, I think the killer constraint here is:
So, without mocking relationships between models, where would my test belong?
If mocking isn't allowed and you're required to touch a DB, then by your definition (and Google's) it has to be an integration/medium-size test :)
The way I think of this is that the getPriceAttribute functionality is separate from the DB. Even though it lives in the model, the prices could come from anywhere. Right now it's an RDBMS, but what if your org got really big and it was split into another service? Basically, I believe that the capability of getPriceAttribute is distinct from the storage of the attributes:
public function getPriceAttribute()
{
    return $this->prices()->first() * 2;
}
If you buy into this reasoning, it creates a logical separation that supports unit tests. prices() can be mocked to return a collection of 0, 1 & many (2) results. This test can then be executed as a unit test, for orders-of-magnitude faster test execution (i.e. on the order of 1 ms vs potentially 10s or 100s of ms talking to a local DB).
I am not familiar with the PHP test ecosystem, but one way to do this could be with a test-specific subclass (not sure if the following is valid PHP :p ):
class PricedModel extends YourModel {
    private $stub_prices;

    function __construct($stub_prices_supporting_first) {
        $this->stub_prices = $stub_prices_supporting_first;
    }

    public function prices() {
        return $this->stub_prices;
    }
}
Tests:

function test_priced_model_0_prices() {
    $p = new PricedModel(new Prices(array()));
    assertEquals(null, $p->getPriceAttribute());
}

function test_priced_model_1_price() {
    $p = new PricedModel(new Prices(array(1)));
    assertEquals(2, $p->getPriceAttribute());
}

function test_priced_model_2_prices() {
    $p = new PricedModel(new Prices(array(5, 1)));
    assertEquals(10, $p->getPriceAttribute());
}
The above should hopefully allow you to fully control the input into the getPriceAttribute method, to support direct, IO-free unit testing.
——
Also, all the unit tests above can tell you is that you're able to process prices correctly; they don't give any feedback on whether you're able to query prices!
What distinguishes the tests is their respective goal:
Unit testing aims at finding those bugs that can be found in isolated small parts of the software. (Note that this does not say you must isolate - it only means your focus is on the isolated code. Isolation and mocking often enough are not needed to reach this goal: think of a call to a sin function - you almost never need to mock this, you just let your system under test call the original one.)
Integration testing aims at finding bugs in the interaction of two or more components, for example mutual misconceptions about an interface. These bugs cannot be found in the isolated software: if you test code in isolation, you also write your tests based on your (possibly wrong) understanding of the other components (a concrete sketch follows these descriptions).
Feature tests, as you describe them, will then have the goal of finding further bugs that the other tests so far could not detect. One example of such a bug could be that an old version of the feature was integrated (which was correct at the time, but lacked some functionality).
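To make the point about mutual interface misconceptions concrete, here is a small, contrived sketch in Java (all names invented for illustration): the consumer assumes a missing user comes back as null, while the real directory throws; a unit test written against the same wrong assumption stays green, and only a test that wires the real behaviour together exposes the bug.

import static org.junit.Assert.assertFalse;

import java.util.NoSuchElementException;

import org.junit.Test;

public class NotifierMisconceptionTest {

    // Hypothetical collaborator; the real implementation throws
    // NoSuchElementException for unknown users.
    public interface UserDirectory {
        String emailFor(String userId);
    }

    // Hypothetical consumer, written under the misconception that a missing
    // user comes back as null.
    public static class Notifier {
        private final UserDirectory directory;

        public Notifier(UserDirectory directory) {
            this.directory = directory;
        }

        public boolean notifyUser(String userId) {
            String email = directory.emailFor(userId);
            if (email == null) {
                return false;
            }
            // ... send the mail ...
            return true;
        }
    }

    @Test
    public void unitTestBuiltOnTheSameWrongAssumptionPasses() {
        // The stub encodes the very misconception the production code has,
        // so this isolated test is green and the bug stays hidden.
        Notifier notifier = new Notifier(userId -> null);
        assertFalse(notifier.notifyUser("unknown-user"));
    }

    @Test(expected = NoSuchElementException.class)
    public void wiringTheRealBehaviourTogetherExposesTheBug() {
        // A directory that behaves like the real component (throws for unknown
        // users) makes the mismatch visible only when both parts are exercised
        // together - which is exactly what an integration test does.
        UserDirectory realishDirectory = userId -> {
            throw new NoSuchElementException(userId);
        };
        new Notifier(realishDirectory).notifyUser("unknown-user");
    }
}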
The conclusion, although it may be surprising, is that it is not in the stricter sense forbidden to make database accesses in unit testing. Consider the following scenario: you start writing unit tests and mock the database accesses. Later, you realize you can be lazier and just use the database without mocking - but otherwise leave all the tests as they are. Your tests have not changed, and they will continue finding the bugs in the isolated code as before. They may run a bit slower now, and the setup may be more complex than with the mocked database, but the goal of the test suite is the same - with or without mocking the database.
This scenario simplifies things a bit, because there may be test cases that can only be done with a mock: for example, testing the case where the database gets corrupted in a specific way and your code handles this properly. With the real database, such test cases may be practically impossible to set up.
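And a small, contrived Java sketch of such a mock-only case (all names invented): the store is stubbed to fail the way a corrupted database would, something that is very hard to arrange against the real thing.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CorruptedStoreHandlingTest {

    // Hypothetical data-access interface used by the code under test.
    public interface PriceStore {
        int latestPrice(String sku) throws StoreCorruptedException;
    }

    public static class StoreCorruptedException extends Exception {
    }

    // Hypothetical code under test: falls back to a cached value when the
    // store reports corruption.
    public static class PriceReader {
        private final PriceStore store;
        private final int cachedFallback;

        public PriceReader(PriceStore store, int cachedFallback) {
            this.store = store;
            this.cachedFallback = cachedFallback;
        }

        public int priceOf(String sku) {
            try {
                return store.latestPrice(sku);
            } catch (StoreCorruptedException e) {
                return cachedFallback;
            }
        }
    }

    @Test
    public void fallsBackToCacheWhenTheStoreIsCorrupted() {
        // The stub simulates a corrupted store - a state that would be
        // practically impossible to set up against a real database.
        PriceStore corrupted = sku -> {
            throw new StoreCorruptedException();
        };
        assertEquals(42, new PriceReader(corrupted, 42).priceOf("ABC"));
    }
}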
Easymock 3.5.1
JUnit 4.12
Maven 3.5.0
Intellij Build #IU-181.5281.24, built on June 12, 2018
I have a unit test and contained within this unit test is my problem method:
@Test(expected = CheckoutException.class)
public void performCheckout_CheckoutException() throws Exception {
    // setup test data
    Order order = new OrderImpl();
    OMSOrder omsOrder = new OMSOrderImpl();
    Order omsOrderProxy = OMSOrderProxy.proxify(order, omsOrder, Logger.getRootLogger());
    omsOrderProxy.setId(1L);

    FulfillmentOrder fulfillmentOrder = new FulfillmentOrderImpl();
    FulfillmentGroup fulfillmentGroup = new FulfillmentGroupImpl();
    fulfillmentGroup.setType(FulfillmentType.DIGITAL);
    fulfillmentOrder.setFulfillmentGroup(fulfillmentGroup);
    ((OMSOrder) omsOrderProxy).getAllFulfillmentOrders().add(fulfillmentOrder);

    ProcessContext<CheckoutSeed> context = new DefaultProcessContextImpl<>();

    // create the expected flow
    expect(orderService.save(anyObject(Order.class), eq(false))).andReturn(order).times(2);
    replay(orderService);
    expect((ProcessContext<CheckoutSeed>) checkoutWorkflow.doActivities(anyObject(CheckoutSeed.class))).andReturn(context);
    replay(checkoutWorkflow);
    expect(fulfillmentService.fulfill(anyObject(FulfillmentOrder.class))).andThrow(new FulfillmentException());
    replay(fulfillmentService);

    // test
    checkoutService.performCheckout(omsOrderProxy);

    // check results
    verify(orderService);
    verify(checkoutWorkflow);
    verify(fulfillmentService);
}
orderService is a strict mock (defined in a @Before setup method):
orderService = createStrictMock(OrderService.class);
Each and every unit test class that uses this orderService creates this mock (whether strict or nice) in this @Before setup method.
Running this test method in IntelliJ (right-click, Run ...) succeeds. Running the tests at class level, again via right-click, Run ..., also succeeds. A mvn clean install (whether in IntelliJ or at the command line) produces the following error:
java.lang.Exception: Unexpected exception, expected<org.curtiscommerce.core.checkout.service.exception.CheckoutException> but was<java.lang.IllegalStateException>
at org.easymock.internal.ExpectedInvocation.createMissingMatchers(ExpectedInvocation.java:52)
at org.easymock.internal.ExpectedInvocation.<init>(ExpectedInvocation.java:41)
at org.easymock.internal.RecordState.invoke(RecordState.java:51)
at org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:40)
at org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:94)
at com.sun.proxy.$Proxy27.save(Unknown Source)
at com.central.core.checkout.service.TestCheckoutServiceImpl.performCheckout_CheckoutException(TestCheckoutServiceImpl.java:151)
Line 151 (detailed in the code line directly above) relates to:
expect(orderService.save(anyObject(Order.class), eq(false))).andReturn(order).times(2);
which is a line in this method.
Now, to get the exception details, I removed the 'expected' attribute from the @Test annotation, and the exception thrown is clearer:
java.lang.IllegalStateException: 2 matchers expected, 12 recorded.
This exception usually occurs when matchers are mixed with raw values when recording a method:
foo(5, eq(6)); // wrong
You need to use no matcher at all or a matcher for every single param:
foo(eq(5), eq(6)); // right
foo(5, 6); // also right
at org.easymock.internal.ExpectedInvocation.createMissingMatchers(ExpectedInvocation.java:52)
at org.easymock.internal.ExpectedInvocation.<init>(ExpectedInvocation.java:41)
at org.easymock.internal.RecordState.invoke(RecordState.java:51)
at org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:40)
at org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:94)
at com.sun.proxy.$Proxy27.save(Unknown Source)
at com.central.core.checkout.service.TestCheckoutServiceImpl.performCheckout_CheckoutException(TestCheckoutServiceImpl.java:151)
Also, when I run a suite of tests in IntelliJ, say at the level of the package where my unit test class resides (com.central.core.checkout.service), I get this same error. I have removed all other versions of EasyMock in .m2/repository to ensure there is no conflict.
The concern is: why does this error only occur upon a mvn clean install (in IntelliJ or at the command line) and during a package-level unit test run?
I suppose what really concerns me, aside from the differing results depending on how the test is run, is the exception that is thrown:
java.lang.IllegalStateException: 2 matchers expected, 12 recorded.
tells me that 2 matchers were expected and 12 were recorded. Does this include matchers created in other unit tests, almost as if spanning a test session? I find this difficult to believe, as a fresh mock is created before each unit test method invocation (in the @Before method).
Added July 6th @ 15:21
So, to expedite the current build process and get to no failing unit tests, I @Ignore'd this failing unit test and attempted a build. The build failed again, but this time the preceding method was the problem child, with a similar exception:
java.lang.IllegalStateException: 2 matchers expected, 12 recorded.
This exception usually occurs when matchers are mixed with raw values when recording a method:
foo(5, eq(6)); // wrong
You need to use no matcher at all or a matcher for every single param:
foo(eq(5), eq(6)); // right
foo(5, 6); // also right
I tried a little experiment: I @Ignore'd this newly failing method and tried another build, though I already had a feeling the next preceding method in the class would be the problem child. Lo and behold, it was.
Are you sitting comfortably? Then I will begin...
So I posted this question and, of course, I kept trying for a solution. I exhausted all channels in fixing the unit tests, so I decided to look at this from a different angle. The issue came to light when the development environment was built. I noticed there were two builds quite close together; the former succeeded but the latter failed with the aforementioned exception.
Now, going all Columbo, I needed to determine the differences between the two builds, and it turned out to be one little unit test. It was a straightforward unit test in that it needed no mocking of any sort. What was odd was that EasyMock.eq was imported. Strange indeed, and even stranger was that it was used within an assertEquals statement, where the 'expected' value was wrapped in this eq(). Yikes, and in the wise words of Han Solo, "I've got a bad feeling about this".
I removed this import, together with the eq(), ran the unit test method in isolation...success. I then invoked a build with a test run. Success.
So, what I have learned from this is that when you use the EasyMock.eq() method in the wrong context, in this case inside an Assert.assertEquals(), really strange things happen, though at that point I still wasn't sure why. Even stranger was that running the unit test in isolation inside my IDE succeeded. :-/
I'll give you a little under-the-hood EasyMock insight to help you understand.
When you use a matcher, it is stored in a ThreadLocal. So when the mocked method call actually occurs, a matcher for each parameter is in the ThreadLocal; EasyMock removes them from there and creates a call expectation.
So, when too many matchers are recorded, everything gets misaligned and weird things can happen. That's what the error message is saying: 12 matchers were recorded but only 2 were expected, since your method has 2 arguments.
Since Maven and IntelliJ are not forking a new VM between tests, the bad matchers were still there from one test to the other.
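Here is a minimal, contrived sketch of that leak (the OrderService interface and both tests are invented for illustration, and the exact matcher count will differ from the 12 in the question): a stray eq() inside a plain assertion leaves a matcher in the ThreadLocal, and the next test that records a two-argument expectation in the same JVM blows up with the "matchers expected/recorded" IllegalStateException, while either test run on its own stays green.

import static org.easymock.EasyMock.anyObject;
import static org.easymock.EasyMock.createStrictMock;
import static org.easymock.EasyMock.eq;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.junit.Assert.assertEquals;

import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class MatcherLeakTest {

    public interface OrderService {
        String save(String order, boolean priceOrder);
    }

    @Test
    public void test1_strayMatcherInPlainAssertion() {
        // eq() pushes a matcher onto EasyMock's ThreadLocal list even though no
        // mock method is being recorded here; assertEquals only compares the
        // values it is given, so nothing ever consumes the matcher and it leaks.
        assertEquals(eq((String) null), null);
    }

    @Test
    public void test2_recordingAfterTheLeak() {
        OrderService orderService = createStrictMock(OrderService.class);
        // save(..) has two parameters, so EasyMock expects exactly two matchers,
        // but it also finds the one leaked by test1 and fails with something like
        // "2 matchers expected, 3 recorded." - and only when both tests share a
        // JVM, which is why running this test in isolation looks green.
        expect(orderService.save(anyObject(String.class), eq(false))).andReturn("saved");
        replay(orderService);
    }
}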
I have a Maven project, and running mvn test executes the test cases in a separate process.
My question is: is there any way to run the tests by attaching to an existing process, so that the tests can use that process's environment?
I found that Surefire has a jvm property (found here), but I could not figure out how to set it, if my understanding is correct.
Update:
As the project is Spring Boot based, I started the server (call it the test server) like this:
java -jar target/server-0.0.1-SNAPSHOT.jar
And updated the JUnit test class like this:
import static org.junit.Assert.assertTrue;

import java.io.IOException;
import java.util.List;
import java.util.Stack;

import org.junit.Before;
import org.junit.Test;

import com.sun.tools.attach.AttachNotSupportedException;
import com.sun.tools.attach.VirtualMachine;
import com.sun.tools.attach.VirtualMachineDescriptor;

// Class name assumed; only the method bodies were shown originally.
public class AttachToServerTest {

    @Before
    public void setup() throws Exception {
        VirtualMachine virtualMachine = attach(getTestServerVM());
        System.out.println(virtualMachine.id());
    }

    @Test
    public void emptyTest() {
        Stack<String> stack = new Stack<String>();
        assertTrue(stack.isEmpty());
    }

    private static VirtualMachine attach(VirtualMachineDescriptor virtualMachineDescriptor) {
        VirtualMachine virtualMachine = null;
        try {
            virtualMachine = VirtualMachine.attach(virtualMachineDescriptor);
        } catch (AttachNotSupportedException anse) {
            System.out.println("Couldn't attach " + anse);
        } catch (IOException ioe) {
            System.out.println("Exception attaching or reading a jvm." + ioe);
        }
        return virtualMachine;
    }

    private static VirtualMachineDescriptor getTestServerVM() {
        String name = "target/server-0.0.1-SNAPSHOT.jar";
        List<VirtualMachineDescriptor> vms = VirtualMachine.list();
        VirtualMachineDescriptor serverVM = vms.stream()
                .filter(vm -> name.equals(vm.displayName()))
                .findAny()
                .orElse(null);
        System.out.println("Test server id : " + serverVM.id());
        return serverVM;
    }
}
Is this sufficient, or do I need to load an agent?
I am asking because, although the program shows that the JUnit test class attached successfully, I still see a separate process id for JUnit like this (by running the jcmd command):
3585 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner -version 3 -port 55632 -testLoaderClass org.eclipse.jdt.internal.junit4.runner.JUnit4TestLoader -loaderpluginname org.eclipse.jdt.junit4.runtime -classNames JunitRnD.J
In the end, you are asking: how can I inject arbitrary code into a running JVM?
This is probably technically doable, but it would require a lot of work, and probably dirty hacking. To gain ... what?
Long story short: I think you should step back and carefully look into the "why" behind wanting to do that. And then strive for another solution!
Given your comment: good unit tests should focus on small units. If you need your "whole stack" to be up and running, then don't call them unit tests, because they aren't. They are functional/integration tests!
When you really want to look into unit tests, then learn about mocking frameworks ... and probably: how to write testable code. These videos are a great start on that topic.
Seriously: attaching to a JVM to run unit tests is not a good answer. Instead, step back, look into your design, and find ways to test it at a "smaller" scope.
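To sketch what that "smaller scope" can look like (everything here - the PriceService interface, the CheckoutCalculator class, and the choice of EasyMock - is made up for illustration, not taken from your project): instead of attaching to the running Spring Boot server, inject the collaborator and replace it with a mock in a plain JUnit test.

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CheckoutCalculatorTest {

    // Hypothetical collaborator that, in production, would talk to the running
    // server you are trying to attach to.
    public interface PriceService {
        double priceFor(String sku);
    }

    // Hypothetical class under test: the dependency is injected rather than
    // looked up from a live process, which is what makes it unit-testable.
    public static class CheckoutCalculator {
        private final PriceService prices;

        public CheckoutCalculator(PriceService prices) {
            this.prices = prices;
        }

        public double totalFor(String sku, int quantity) {
            return prices.priceFor(sku) * quantity;
        }
    }

    @Test
    public void totalIsPriceTimesQuantity() {
        // The mock stands in for the whole server-side environment.
        PriceService prices = createMock(PriceService.class);
        expect(prices.priceFor("ABC")).andReturn(2.5);
        replay(prices);

        CheckoutCalculator calculator = new CheckoutCalculator(prices);

        assertEquals(7.5, calculator.totalFor("ABC", 3), 0.0001);
        verify(prices);
    }
}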
Another update, on your 2nd comment: if you really need such a level of "dynamics", then I would suggest something completely different: integrate Jython into your product, at least for your "development" setup. That will allow you to send Jython scripts (Python code that can call and use all your Java classes) into the JVM.
But again: look into writing real unit tests; any application can benefit from that. I have coworkers who always tell me that they need to run "functional tests" when debugging problems. Me, I work on the same software stack; I typically need log files, and then I adapt my unit tests to reproduce the issue; I only need "system time" very late in the game, for a final test of the fix before it goes into the field.
But just for the record: we have Jython in our development systems, too. And yes, it is great to even have a "shell" running in your JVM.
Recently, I started reading up on BDD and TDD and I got hooked. I got lost in the number of disorganized sources of information and differing opinions about what's best and what's not. In the end I settled on xBehave & xUnit. I like the fluent syntax and the ease of defining behaviors with Fluent Assertions and Fluent Validation.
I'm also trying to implement the onion architecture with a test project I'm working on for learning. Here's my scenario: the project, to keep it simple, is a product tracker. I can create products and track who owns them. I want to implement two specs:
when a new product is created without a name then an error should be displayed
when a new product is created without an owner assigned then an error should be displayed.
I created the spec, which instantiates a new Product and a new ProductService, which in turn creates the Product. The spec passes and the validation is occurring. Now the questions are:
How do I test my ProductRepository class? Do I test it next, or mock it and finish all the specs first, then come back and test the repository classes?
Should I have mocked the ProductService class in the first spec?
Is that done at the unit test level? Should I create a unit test class?
Wouldn't testing the repository make it an integration test?
So far, I don't have a UI and I'm writing my specs for the domain, service, and infrastructure layers.
Do I need to use WatiN for my UI tests?
Would switching to WatiN/SpecFlow make more sense and save effort in getting fully tested layers from top to bottom?
Here's one of the specs I worked on:
[Scenario]
public void creating_new_product_without_a_name_should_throw_error()
{
    var productService = default(IProductService);
    var action = default(Action);

    _
    .Given("a new product", () =>
        productService = new ProductService() as IProductService)
    .When("creating the new product without a name", () =>
        action = () => productService.Create(new Product()))
    .Then("it should display an error", () =>
        action.ShouldThrow<ValidationException>()
              .WithMessage("Name is required."));
}
Thank you in advance for your reply and, please, if you are answering this thread, back it up with some materials/articles/sample code on why your suggestion would be the better one to follow.
It sounds like you are testing small parts and then planning to glue them together into something big. This is (IMO, again) not TDD (and certainly not BDD): you are not letting the tests drive the design/architecture.
To start with, don't think that much about the design. Don't think onion architecture, don't think repositories, don't think services.
Start by writing a test that verifies the whole solution, from end to end. Make that test as small as possible. To start with, just verify that e.g. a label is displayed. Then write the smallest solution that you can think of to make the test pass. Then write another test and the solution for that.
After a while, have a look at the code (production, but also test) to find responsibilities. Is there an embryo of a service in there somewhere? Extract it! But don't do it prematurely. Let the code tell you what it wants to look like.
Start thinking about the domain (Product, Owner, etc.) up front and include it early in your code. Wait a little longer with persistence (repositories), but not too long.
Keep testing at this level (end-to-end). Add micro tests when necessary. So my answer to the question "how do I test my ProductRepository/Service class" is a question to you: do you need to? Or is it sufficiently covered by the end-to-end tests? If not, why?
I am facing an interesting situation. In my test assembly, I have folders containing specific test classes, i.e., TestFixtures. Consider, for example, the following hierarchy in VS:
Sol
  TestProject
    TestFolder1
      TestClass1
      TestClass2
    TestFolder2
      TestClass3
Now, when I run the following at command line:
nunit-console.exe /run:Sol.TestProject.TestFolder1.TestClass2 TestProject.dll
Things are running fine and all the tests are passing. But, if I run as below:
nunit-console.exe /run:Sol.TestProject.TestFolder1 TestProject.dll
In this case, some of the tests in TestClass2 are failing.
I have tried dumping the state of some of the relevant objects involved in the test, and the state seemed fine at the beginning of the test code in both cases. Also, TestClass1/2/3 do not have a superclass doing anything behind the scenes - so that is ruled out as well. Any ideas what else could be happening here?
I am using VS2010 / .NET 4.0 (4.0.30319.1) / NUnit 2.5.9.
Finally figured this out. I was using a singleton class for storing certain options. It looks like the singleton instance is retained between runs of different TestFixtures (i.e., test classes) when they are run together, e.g., for a folder or for a whole project. I did not dump the state of this object initially, because I thought the singleton class would get a new instance for each of the TestFixtures. Interesting finding; hope this helps someone.
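The same effect can be reproduced in any runner that keeps all test classes in one process. Here is an analogous sketch in Java/JUnit rather than NUnit (all names invented): a static singleton mutated by one test class still carries that state when the next class runs, so the second class fails only when both run in the same session.

// File: TestOptions.java - hypothetical options holder; the single static
// instance lives as long as the test process, so every test class shares it.
public class TestOptions {
    private static final TestOptions INSTANCE = new TestOptions();
    private String mode = "default";

    public static TestOptions instance() {
        return INSTANCE;
    }

    public String getMode() {
        return mode;
    }

    public void setMode(String mode) {
        this.mode = mode;
    }
}

// File: FirstFixture.java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FirstFixture {
    @Test
    public void switchesToOfflineMode() {
        TestOptions.instance().setMode("offline");
        assertEquals("offline", TestOptions.instance().getMode());
    }
}

// File: SecondFixture.java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SecondFixture {
    @Test
    public void assumesDefaultMode() {
        // Green when this class is run on its own, but red after FirstFixture
        // has run in the same process: the singleton still says "offline".
        assertEquals("default", TestOptions.instance().getMode());
    }
}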