I have imported a Maven project into IntelliJ 12.1.4.
One of the tests gives each object an id. When run from the command line (or in Eclipse or NetBeans), the unit tests all see ids starting at zero. From within IntelliJ the ids keep increasing without being reset to zero, causing my tests to fail.
import java.util.concurrent.atomic.AtomicLong;

public class CORE_C {
    // Shared across the whole JVM: every instance created anywhere increments this counter.
    static AtomicLong globalCounter = new AtomicLong();
    final long counter;

    public CORE_C() {
        counter = globalCounter.getAndIncrement();
    }
}
It is my understanding that JUnit invokes each test in its own classloader, which causes the behaviour I see outside IntelliJ. Is this a bug, or just an option I have not yet set correctly? If the latter, what is the proper fix?
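One workaround sketch (purely illustrative, not from the original post; the test class name and the @Before reset are hypothetical) is to reset the shared counter before each test, so the ids no longer depend on whether the JVM or classloader is reused between tests:

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CoreCTest {

    @Before
    public void resetCounter() {
        // Hypothetical reset: start the shared id sequence at zero for every test,
        // so it does not matter whether the IDE reuses the JVM between test classes.
        CORE_C.globalCounter.set(0);
    }

    @Test
    public void firstObjectGetsIdZero() {
        assertEquals(0L, new CORE_C().counter);
    }
}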
I'm trying to test an OSGi service annotated with Declarative Services annotations (org.osgi.service.component.annotations). I have generated my project based on the AEM Multi-Project Example.
public class PostServiceTest {

    @Rule
    public AemContext context = new AemContext((AemContextCallback) context -> {
        context.registerInjectActivateService(new PostService());
    }, ResourceResolverType.RESOURCERESOLVER_MOCK);

    @Test
    public void shouldFetchRandomPosts() {
        final PostService postsService = context.getService(PostService.class);
        final List<Post> posts = postsService.randomPosts(100);
        assertEquals(100, posts.size());
    }
}
Whenever I run this test in IntelliJ, OSGi Mocks complains about the absence of SCR metadata on the tested class:
org.apache.sling.testing.mock.osgi.NoScrMetadataException: No OSGi SCR metadata found for class com.example.PostServiceTest
at org.apache.sling.testing.mock.osgi.OsgiServiceUtil.injectServices(OsgiServiceUtil.java:381)
at org.apache.sling.testing.mock.osgi.MockOsgi.injectServices(MockOsgi.java:148)
at org.apache.sling.testing.mock.osgi.context.OsgiContextImpl.registerInjectActivateService(OsgiContextImpl.java:153)
at org.apache.sling.testing.mock.osgi.context.OsgiContextImpl.registerInjectActivateService(OsgiContextImpl.java:168)
at com.example.PostServiceTest.shouldReturnTheEnpointNamesWithValidConfigurationAsTheListOfAcceptableKeys(PostServiceTest.java:23)
Does this mean I can only test classes annotated with the older SCR annotations that come with Apache Felix? The documentation for OSGi Mocks says that Declarative Services annotations are supported in version 2.0.0 and higher, and the version I'm using meets this criterion.
Interestingly enough, this only happened when I ran the test directly from the IDE. It turns out that IntelliJ was not generating the SCR metadata when compiling my classes for tests.
When I compile the class under test with Gradle, the 'com.cognifide.aem.bundle' plugin is used to generate the SCR descriptor and place it in the resulting Java Archive. That's why unit tests executed by Gradle work fine. Just clicking the Run button in IntelliJ caused this step to be missed.
In order to get this working, I ended up setting up IntelliJ to allow me to run unit tests via Gradle.
I went to Settings > Build, Execution, Deployment > Gradle > Runner and used the dropdown menu so that I could decide whether to use Gradle on a test-by-test basis.
Using JBehave, with my runner class extending JUnitStories, I can generate the plain-style report with the following:
@Override
public Configuration configuration() {
    Class<? extends Embeddable> embeddableClass = this.getClass();
    return new MostUsefulConfiguration()
            .useStoryLoader(new LoadFromClasspath(embeddableClass))
            .useStoryControls(new StoryControls()
                    .doResetStateBeforeScenario(false)
                    .useStoryMetaPrefix("story_")
                    .useScenarioMetaPrefix("scenario_"))
            .useStoryReporterBuilder(new StoryReporterBuilder()
                    .withCodeLocation(CodeLocations.codeLocationFromClass(embeddableClass))
                    .withDefaultFormats()
                    .withFormats(CONSOLE, HTML)
                    .withFailureTrace(true)
                    .withFailureTraceCompression(true));
}
Now I want to integrate JBehave with Serenity for better-looking reports ^_^. So I changed my runner class to inherit from SerenityStories instead. After adding the dependencies and running via Maven, the tests pass. However, the Serenity-generated report always shows '0 test scenarios'.
I saw that SerenityStories inherits from JUnitStories and overrides the configuration() method as well.
How can I make Serenity see my test scenarios? Do I need to override the configuration() method differently? And how?
Thank you very much!
I was able to make it work by creating a new Maven project instead.
I used the archetype 'serenity-jbehave-archetype'.
The archetype generates an empty runner class that inherits from SerenityStories.
I then merged my files into this new project.
As for the runner class, I overrode the stepsFactory() and storyPaths() methods to match my steps and stories, roughly as sketched below.
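This is only an illustrative sketch, not the original poster's code; the class name AcceptanceTestSuite, the MySteps step class, and the story path pattern are assumptions:

import java.util.List;

import org.jbehave.core.io.CodeLocations;
import org.jbehave.core.io.StoryFinder;
import org.jbehave.core.steps.InjectableStepsFactory;
import org.jbehave.core.steps.InstanceStepsFactory;

import net.serenitybdd.jbehave.SerenityStories;

public class AcceptanceTestSuite extends SerenityStories {

    @Override
    public InjectableStepsFactory stepsFactory() {
        // Register whichever step classes the stories actually use (MySteps is a placeholder).
        return new InstanceStepsFactory(configuration(), new MySteps());
    }

    @Override
    public List<String> storyPaths() {
        // Pick up every *.story file on the classpath relative to this runner.
        return new StoryFinder().findPaths(
                CodeLocations.codeLocationFromClass(this.getClass()), "**/*.story", "");
    }
}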
Hope that made sense. Thanks!
My project has two different Cucumber tests and each one needs a different Spring context configuration.
The problem is that when I run each test individually from IntelliJ, it loads the right Spring context and the tests pass, but when I run all the tests together, none of them pass.
When run with Maven, both tests pass.
This is my code:
@RunWith(FCSCucumber.class)
@Cucumber.Options(strict = true
        // , tags = {"~@ignore"}
        // , tags = {"@Only"}
        , glue = "feature.scenario1"
        , features = "src/test/resources/feature/scenario1/"
)
@FCSApplicationProperties(commonProps = "config/scenario1/exec.common.properties",
        environmentProps = "src/test/resources/scenario1-test.properties")
public class TesScenario1Features {
}

@ContextConfiguration("/cucumber-scenario1.xml")
public class scenario1Steps {
    ......
}

@RunWith(FCSCucumber.class)
@Cucumber.Options(strict = true
        // , tags = {"~@ignore"}
        // , tags = {"@Only"}
        , glue = "feature.scenario2"
        , features = "src/test/resources/feature/scenario2/"
)
@FCSApplicationProperties(commonProps = "config/scenario2/exec.common.properties",
        environmentProps = "src/test/resources/scenario2-test.properties")
public class TesScenario2Features {
}

@ContextConfiguration("/cucumber-scenario2.xml")
public class scenario2Steps {
    ......
}
Thank you very much for your help
The issue is that the IntelliJ Cucumber plugin uses the Cucumber CLI to run the tests, without using the JUnit runner at all. This causes several limitations, such as requiring the Spring annotations on the step definition classes instead of on the runner, or by default requiring the step definitions to be in the same package as the scenario files.
In your example I would actually expect even a single test to fail when run this way, unless the correct application properties are also referenced by the /cucumber-scenario{1,2}.xml files.
The only option I see with the standard cucumber implementation would be to extract the tests into separate projects.
I'm actually working on an alternative implementation of cucumber with an improved spring integration that you might want to try. It's not fully integrated with IntelliJ yet though.
In my project, I have acceptance tests which take a long time to run. When I add new features to the code and write new tests, I want to skip some of the existing test cases for the sake of time. I am using Spring 3 and JUnit 4 with SpringJUnit4ClassRunner. My idea is to create an annotation (@Skip or something) for the test class. I am guessing I would have to modify the runner to look for this annotation and determine from system properties whether a test class should be included in the run. My question is: is this easily done? Or am I missing existing functionality somewhere which would help me?
Thanks.
Eric
Annotate your class (or unit test methods) with @Ignore in JUnit 4 or @Disabled in JUnit 5 to prevent the annotated class or unit test from being executed.
Ignoring a test class:
@Ignore
public class MyTests {

    @Test
    public void test1() {
        assertTrue(true);
    }
}
Ignoring a single unit test:
public class MyTests {

    @Test
    public void test1() {
        assertTrue(true);
    }

    @Ignore("Takes too long...")
    @Test
    public void longRunningTest() {
        ....
    }

    @Test
    public void test2() {
        assertTrue(true);
    }
}
mvn install -Dmaven.test.skip=true
lets you build your project without running the tests, and
mvn -Dtest=TestApp1 test
lets you run only the test class whose name you pass in.
I use Spring profiles to do this. In your test, autowire in the Spring Environment:
@Autowired
private Environment environment;
In tests you don't want to run by default, check the active profiles and return immediately if the relevant profile isn't active:
@Test
public void whenSomeCondition_somethingHappensButReallySlowly() throws Exception {
    if (Arrays.stream(environment.getActiveProfiles())
            .noneMatch(name -> name.equalsIgnoreCase("acceptance"))) {
        return;
    }
    // Real body of your test goes here
}
Now you can run your everyday tests with something like:
> SPRING_PROFILES_ACTIVE=default,test gradlew test
And when you want to run your acceptance tests, something like:
> SPRING_PROFILES_ACTIVE=default,test,acceptance gradlew test
Of course that's just an example command line assuming you use Gradle wrapper to run your tests, and the set of active profiles you use may be different, but the point is you enable / disable the acceptance profile. You might do this in your IDE, your CI test launcher, etc...
Caveats:
Your test runner will report the tests as run, instead of ignored, which is misleading.
Rather than hard code profile names in individual tests, you probably want a central place where they're all defined... otherwise it's easy to lose track of all the available profiles.
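A hedged variation on the same profile check (not part of the original answer) uses JUnit 4's Assume so that a skipped test is reported as ignored rather than passed, which addresses the first caveat above; the 'acceptance' profile name is carried over from the example, and the class name is a placeholder:

import java.util.Arrays;

import org.junit.Assume;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.env.Environment;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
// An @ContextConfiguration pointing at your test context is assumed here.
public class AcceptanceAssumptionTest {

    @Autowired
    private Environment environment;

    @Test
    public void whenSomeCondition_somethingHappensButReallySlowly() {
        // If the "acceptance" profile is not active, the assumption fails and
        // JUnit marks the test as skipped instead of silently passing it.
        Assume.assumeTrue(Arrays.stream(environment.getActiveProfiles())
                .anyMatch(name -> name.equalsIgnoreCase("acceptance")));
        // Real body of the test goes here
    }
}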
I've tried just about every configuration I can think of (and reviewed some answers on StackOverflow), but all of our tests show the 'Failed to load ApplicationContext' error when run through Hudson. What is interesting is that some tests appear to run and pass, while some run and fail (as expected), but regardless I'm always getting the errors list for all tests. Here is the basic configuration:
@ContextConfiguration(locations = "classpath:/MyTest-context.xml")
@RunWith(SpringJUnit4ClassRunner.class)
public class MyTest {

    @Autowired
    private ApplicationContext applicationContext;

    public MyTest() {}

    @Test
    public void doSomething() {
        // Implementation...
    }
}
UPDATE:
There appears to be a duplicate set of tests running, one for Emma coverage reporting, and the other the normal tests. It is when the tests run for Emma coverage that they are showing the errors. If I turn off the "emma:emma package" goal so those don't run then I don't get the errors, and the tests appear to run fine. I'm not sure if that helps any.
The answer ended up being close to what gontard was pointing to, but the issue was hidden by the way Emma's classloader works. Between my local JUnit runs, what was running in our DEV environment, and what was running in Hudson with Emma, each had a different classloader ordering when loading libraries and classes. I ended up reviewing the stack trace in the test results, and it turned out that locally a new version of a library was loaded via the POM, while in Hudson Emma was loading an old version of that library first. I had to find and remove the old version, and everything now works fine.
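For anyone hunting a similar version conflict, Maven's dependency plugin can show where duplicate versions of a library come from; this is a generic illustration rather than part of the original answer, and the group/artifact id is a placeholder:

mvn dependency:tree -Dverbose -Dincludes=com.example:old-library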