project-reactor - NullPointerException when loading UUIDUtils with a failsafe integration test

I am running into an NPE when running an integration test of the reactor EventBus with failsafe.
I think this is because the class UUIDUtils is being loaded by the bootstrap classloader, and hence the call to getClassLoader() returns null?
Caused by: java.lang.NullPointerException
at reactor.core.support.UUIDUtils.<clinit>(UUIDUtils.java:39)
IS_THREADLOCALRANDOM_AVAILABLE = null != UUIDUtils.class.getClassLoader().loadClass(
"java.util.concurrent.ThreadLocalRandom"
);

Would you mind creating a GitHub issue on this so we can track a fix? Sounds like maybe we just need to try/catch setting this boolean and let it be false in this case.
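A minimal sketch of that guard, assuming the flag is assigned in a static initializer like the snippet above (the exact fix is of course up to the maintainers):

// Sketch only: fall back to false instead of failing when the class loader is null
// (which is what getClassLoader() returns for bootstrap-loaded classes).
static {
    boolean available = false;
    try {
        ClassLoader cl = UUIDUtils.class.getClassLoader();
        if (cl == null) {
            cl = ClassLoader.getSystemClassLoader();
        }
        available = null != cl.loadClass("java.util.concurrent.ThreadLocalRandom");
    } catch (Throwable t) {
        // leave the flag false if ThreadLocalRandom cannot be resolved
    }
    IS_THREADLOCALRANDOM_AVAILABLE = available;
}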

Related

Failure running Spring-Boot + Camel tests in batch

For my Kotlin application with Spring Boot 2.7.0 and Apache Camel 3.17.0, I am running into a rather surprising issue: I have a set of JUnit 5 test cases that individually run fine (using mvn test -Dtest=MyTest), but when run in batch via mvn test or in IntelliJ IDEA, some test cases fail with org.apache.camel.FailedToCreateRouteException... because of Cannot set tracer on a started CamelContext.
The funny thing is that these test cases do not have tracing enabled. My test setup looks like the following for most of the tests:
@CamelSpringBootTest
@SpringBootTest(
    classes = [TestApplication::class],
    properties = ["camel.springboot.java-routes-include-pattern=**/SProcessingTestRoute"]
)
@TestConstructor(autowireMode = TestConstructor.AutowireMode.ALL)
@UseAdviceWith
internal class ProcessingTest(
    val template: FluentProducerTemplate,
    @Value("classpath:test-resource") private val TestResource: Resource,
    val camelContext: CamelContext
) {
    @EndpointInject("mock:result")
    lateinit var resultMock: MockEndpoint

    @Test
    fun `test my route`() {
        AdviceWith.adviceWith(camelContext, "processing-route") { route ->
            route.weaveAddLast().to("mock:result")
        }
        resultMock.expectedCount = 1
        camelContext.start()
        // ...
        // here comes the actual test
    }
}
There are a couple of tests where I do not advise routes; i.e., these test cases do not have the @UseAdviceWith annotation, and these test cases do not fail during the batch run.
Debugging this issue is hard; therefore, I would highly appreciate any pointers, hints, or hypotheses about potential causes, and ideas on what to try to narrow down the problem!
You probably need a fresh CamelContext for each test. Try adding @DirtiesContext to each test class. If that doesn't work, add it to each test method.
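For illustration, a minimal sketch of that placement, written in Java (the Kotlin class above would take the annotation in exactly the same way; class names are taken from the question):

// Hypothetical sketch: @DirtiesContext makes Spring close the cached application
// context (and with it the started CamelContext) after the class, or after every
// test method when AFTER_EACH_TEST_METHOD is used.
@CamelSpringBootTest
@SpringBootTest(classes = TestApplication.class)
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
class ProcessingTest {
    // ... tests as before ...
}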

AEM - after upgrading to JDK 11 I can no longer pass a class parameter to a scheduled job

After upgrading to JDK 11 I'm no longer able to run some of my AEM 6.5 Sling jobs. It seems there is some problem with the visibility of the class that is used to pass parameters to the job.
Here is how the job is prepared and scheduled:
final Map<String, Object> props = new HashMap<String, Object>();
props.put("stringParam", "something");
props.put("classParam", new Dto());
Job job = jobManager.addJob("my/special/jobtopic", props);
The job is not started; it seems there is a problem during job start, when the parameters are set up.
The stringParam is fine, but the classParam usage throws the following exception:
28.01.2022 17:28:25.978 *WARN* [sling-oak-observation-17] org.apache.sling.event.impl.jobs.queues.QueueJobCache
Unable to read job from /var/eventing/jobs/assigned/.../my.package.myJob/2022/1/27/15/50/...
java.lang.Exception: Unable to deserialize property 'classParam'
at org.apache.sling.event.impl.support.ResourceHelper.cloneValueMap(ResourceHelper.java:218)
at org.apache.sling.event.impl.jobs.Utility.readJob(Utility.java:181)
...
Caused by: java.lang.ClassNotFoundException: my.package.Dto
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
at org.apache.sling.launchpad.base.shared.LauncherClassLoader.loadClass(LauncherClassLoader.java:160)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
at org.apache.felix.framework.BundleWiringImpl.doImplicitBootDelegation(BundleWiringImpl.java:1817)
I'm pretty sure that the Dto class is visible and exported from my OSGi bundle; it can be used and consumed from other bundles. But for some reason the internal Sling logic is unable to resolve it. How can I make my Dto class accessible to the internal Sling logic?
Any idea why this happens and how to solve it?
The java.lang.ClassNotFoundException is misleading in this case.
The true reason for this problem is the Java serialization filtering that was added in JDK 9. It affects the object deserialization rules.
I tried to do the parameter serialization/deserialization myself and pass the serialized object as a Base64 string:
String serializedString = job.getProperty("dto", String.class);
byte [] serializedBytes = Base64.getDecoder().decode(serializedString);
ByteArrayInputStream bais = new ByteArrayInputStream(serializedBytes);
ObjectInputStream ois = new ObjectInputStream(bais);
dtoParam = (Dto)ois.readObject();
The job was scheduled and run; however, the result was java.io.InvalidClassException: filter status: REJECTED.
This helped to find the true cause:
The AEM implementation uses the internal deserialization filter com.adobe.cq.deserfw.impl.FirewallSerialFilter, which can be configured in the OSGi Felix console. The component name is com.adobe.cq.deserfw.impl.DeserializationFirewallImpl.name.
Add your class or package name there.
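For completeness, the producer side of that workaround would serialize the Dto to a Base64 String before adding it to the job properties; a minimal sketch using the standard java.io and java.util.Base64 APIs (the "dto" property name simply mirrors the getProperty call above):

// Hypothetical producer side: store the Dto as a plain String job property
// so the job payload never carries a custom serialized object.
Dto dto = new Dto();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
    oos.writeObject(dto);
}
props.put("dto", Base64.getEncoder().encodeToString(baos.toByteArray()));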

Spring test: strange behavior of context config caching between test classes?

I am writing tests for a Spring Integration project, and I am running into something strange: I've read about how Spring caches the context between tests and how we can force the cache to be cleaned with the @DirtiesContext annotation. However, I'm not able to explain the behavior I observe, and it makes me think it might be a bug...
I have 2 different tests:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:myInterface-core.xml",
        "classpath:myInterface-datasource-test.xml" })
public class PropertyConfigurerTest {

    @Test
    public void shouldResolvePropertyForOutPutFile() {
    }
}
(it does nothing, it simply loads the context, intentionally)
And another one, more complex, with actual tests in it (skipped in the snippet below):
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {
        "classpath:myInterface-core.xml",
        "classpath:myInterface-rmi.xml",
        "classpath:myInterface-datasource-test.xml" })
public class MontranMessagesFlowTest {
    ...
}
As you can see, these 2 tests don't load exactly the same config: the second test loads one extra config file that is not required for the first one.
When I run these 2 tests one after the other, the second test fails. In a nutshell, the goal of the test is to insert 2 rows into an in-memory DB, start my Spring Integration flow, and assert with a listener (inside a jms:listener-container) that I've received 2 JMS messages on the outbound side. I see in debug mode that the 2 messages don't actually go to the same listener, so I get one message instead of the 2 I expect. Somehow, the fact that I'm loading some elements of the context in the first test (even if I don't do anything with them) has an impact on the second test.
I have found 2 different workarounds:
adding @DirtiesContext(classMode=ClassMode.AFTER_CLASS) on my first test;
modifying the list of Spring files that I load in my first test, so that it matches exactly the one defined in the second test.
But still, I don't understand the rationale, and it looks like a bug to me.
I am using Spring Test 4.1.4.RELEASE. I've put the minimum code necessary in a separate project to be able to reproduce. I can share it if required.
Does anybody have an explanation for this? Bug or not?
Thanks
Vincent
@M. Deinum is correct in his comment.
For what it's worth, in Spring Integration framework tests themselves, we have started adding #DirtiesContext to all tests, to ensure any active components (such as inbound message-driven adapters) are always stopped after the tests complete.
This also has a performance/memory usage improvement for large test suites.
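Applied to the question's first test class, that first workaround looks like this (only the annotation is new; everything else is as posted):

// With AFTER_CLASS, the context cached for this set of config locations is closed once
// the class finishes, so its message-driven components stop consuming the JMS messages
// that the second test expects to receive.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:myInterface-core.xml",
        "classpath:myInterface-datasource-test.xml" })
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
public class PropertyConfigurerTest {

    @Test
    public void shouldResolvePropertyForOutPutFile() {
    }
}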

Spring @Value default property is not taken when running JUnit tests

This is confusing. I have a property outerParameter, which is optionally given among the VM options when starting Tomcat. I am using it in the following way in my logic:
#Value("${outerParameter:paused}")
private String featureStatus = "active";
public String getFeatureStatus() {
return featureStatus;
}
When starting Tomcat without the parameter, getFeatureStatus gives "paused", as expected. When starting with the parameter defined, it gives that parameter's value, as expected.
The confusing part is that when I am running JUnit tests for getFeatureStatus, it gives me "active" anyway, and not the default "paused". The context for the tests doesn't contain any <context:property-placeholder../> configuration.
I am trying to understand what I am missing; maybe somebody could give me a hand.
I found this:
Spring @Value annotation not using defaults when property is not present
which could be the answer for my case too. It says "Perhaps initialization of property placeholder configurer fails due to missed properties file, so that placeholders are not resolved".
But if so, why doesn't it fail when starting Tomcat without outerParameter defined?
Thanks
It means that the property source is not loaded in the test's context. Try loading the properties file in the context used for the test.
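One way to do that is to register a placeholder configurer in the test context; here is a minimal Java-config sketch (the class and bean names are illustrative, and an XML <context:property-placeholder/> in the test context would achieve the same):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

// Hypothetical test-only configuration: with a placeholder configurer present,
// @Value("${outerParameter:paused}") is resolved in tests too, so the "paused"
// default applies whenever the VM option is not set.
@Configuration
public class TestPropertyConfig {

    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyPlaceholder() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}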

JUnit Liferay services

I use Liferay Service Builder in my project and now I want to test the *Util classes. It should be easy, but I don't know a simple way to initialize the environment.
For example, in Ant testing with the Spring configuration from service.xml (auto-generated) I use InitUtil.initWithSpring() to initialize the beans, but I get the following error:
[junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 2,413 sec
[junit]
[junit] Testcase: testJournalArticleSearch(MTest): Caused an ERROR
[junit] BeanLocator has not been set for servlet context My-portlet
[junit] com.liferay.portal.kernel.bean.BeanLocatorException: BeanLocator has not been set for servlet context My-portlet
[junit] at com.liferay.portal.kernel.bean.PortletBeanLocatorUtil.locate(PortletBeanLocatorUtil.java:42)
[junit] at com.my.service.EntityLocalServiceUtil.getService(EntityLocalServiceUtil.java:70)
[junit] at MTest.setUp(MTest.java:21)
I've seen a few articles on this problem, but they don't work or I don't understand them...
Does somebody know a simple solution to this problem?
I'm writing this as an answer - it would be more a comment, but the formatting options and length of an answer are what I'm going after.
I frequently see that people have problems writing unit tests for generated code - and *Util, together with servicebuilder, sounds like generated code (*LocalServiceUtil) to me.
My advice is to rather test your *LocalServiceImpl code and trust that the code generator is correct (or trust that the code generator's tests will catch mistakes in there, but this is outside of your scope). After all, the functionality that Service Builder's *LocalServiceUtil classes deliver is an indirection that looks up the correct implementation (based on Spring configuration) and delegates to it. There's no business logic in *LocalServiceUtil classes - this is in *LocalServiceImpl classes.
The next point is: Sometimes even the *Impl-classes are hard to test, because they reach out to other services, which would need to be mocked. In this case - to keep the unit tests readable and independent of the database - I'm proposing to test a layer of code that doesn't reach out to other services. To pick on the code I stole from this answer, here's how I'd rather test it, excluding UserLocalService from the equation (caution: pseudocode, never saw a compiler, I'm editing in this input field)
The code we're about to test is:
class MyUserUtil {
    public static boolean isUserFullAge(User user) {
        Date birthday = user.getBirthday();
        long years = (System.currentTimeMillis() - birthday.getTime()) / ((long) 365 * 24 * 60 * 60 * 1000);
        return years >= 18;
    }
}
My test for this would rule out UserLocalService:
@Test
public void testIsUserFullAge() throws Exception {
    // setup (having it here for brevity of the code sample)
    SimpleDateFormat format = new SimpleDateFormat("yyyy_MM_dd");
    Date D2000_01_01 = format.parse("2000_01_01");
    Date D1990_06_30 = format.parse("1990_06_30");

    User mockUserThatIsFullAge = mock(User.class);
    when(mockUserThatIsFullAge.getBirthday()).thenReturn(D1990_06_30);
    User mockUserThatIsNotFullAge = mock(User.class);
    when(mockUserThatIsNotFullAge.getBirthday()).thenReturn(D2000_01_01);

    // run
    assertTrue(MyUserUtil.isUserFullAge(mockUserThatIsFullAge));
    assertFalse(MyUserUtil.isUserFullAge(mockUserThatIsNotFullAge));
}
The important part here is: your code works on a User object, not on a user id. Thus you don't need to test the lookup. If you desperately want to test the lookup as well (e.g. a test on a broader scale), call it an integration test. But don't complain if it breaks often because of some unrelated change, because now the reasons for your test to fail come from two different sources: the lookup fails OR your implementation is incorrect. You want your UNIT test to fail for exactly one reason, i.e. to immediately know what went wrong when the test fails, not to start debugging.
Oh, and yes, that test will start to fail in 2018; in real life I'd test more corner cases (e.g. someone who turns 18 tomorrow or did so yesterday), but this is a different topic.
I use Mockito and PowerMock for mocking the Liferay services. PowerMock allows you to mock static methods like XXXLocalServiceUtil. You can find a detailed description in the linked answer from Prakash K: Testing for custom plugin portlet: BeanLocatorException and Transaction roll-back for services testing
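As a rough sketch of what that looks like (UserLocalServiceUtil is only used here as an example of a generated static util; swap in the service your code actually calls):

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;
// plus the User / UserLocalServiceUtil imports matching your Liferay version

// Sketch: PowerMock prepares the class so its static methods can be stubbed, which lets
// code that calls *LocalServiceUtil run without a portal/Spring environment.
@RunWith(PowerMockRunner.class)
@PrepareForTest(UserLocalServiceUtil.class)
public class MyUserUtilTest {

    @Test
    public void testLookupViaStaticUtil() throws Exception {
        PowerMockito.mockStatic(UserLocalServiceUtil.class);
        User mockUser = Mockito.mock(User.class);
        Mockito.when(UserLocalServiceUtil.getUser(123L)).thenReturn(mockUser);

        // anything that now calls UserLocalServiceUtil.getUser(123L) gets the mock
        Assert.assertSame(mockUser, UserLocalServiceUtil.getUser(123L));
    }
}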
