AEM - after upgrade to JDK11 I can no longer pass class parameter to the scheduled job - osgi

After upgrading to JDK 11 I'm no longer able to run some of my AEM 6.5 Sling jobs. It seems there is a problem with the visibility of the class that is used to pass parameters to the job.
Here is how the job is prepared and scheduled:
final Map<String, Object> props = new HashMap<String, Object>();
props.put("stringParam", "something");
props.put("classParam", new Dto());
Job job = jobManager.addJob("my/special/jobtopic", props);
The job does not start; it seems there is a problem during job start, while the parameters are being set up.
The stringParam is fine, but the classParam usage throws the following exception:
28.01.2022 17:28:25.978 *WARN* [sling-oak-observation-17] org.apache.sling.event.impl.jobs.queues.QueueJobCache
Unable to read job from /var/eventing/jobs/assigned/.../my.package.myJob/2022/1/27/15/50/...
java.lang.Exception: Unable to deserialize property 'classParam'
at org.apache.sling.event.impl.support.ResourceHelper.cloneValueMap(ResourceHelper.java:218)
at org.apache.sling.event.impl.jobs.Utility.readJob(Utility.java:181)
...
Caused by: java.lang.ClassNotFoundException: my.package.Dto
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
at org.apache.sling.launchpad.base.shared.LauncherClassLoader.loadClass(LauncherClassLoader.java:160)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
at org.apache.felix.framework.BundleWiringImpl.doImplicitBootDelegation(BundleWiringImpl.java:1817)
I'm pretty sure that the Dto class is visible and exported from my OSGi bundle; it can be used and consumed from other bundles. But for some reason the internal Sling logic is unable to resolve it. How can I make my Dto class accessible to the internal Sling logic?
Any idea why this happens and how to solve it?

The java.lang.ClassNotFoundException is misleading in this case.
The true reason for this problem is the Java Serialization Filter, which was added in JDK 9. It affects object deserialization rules.
I tried to do the parameter serialization/deserialization myself and pass the serialized object as a Base64 string:
// when the job runs: read the Base64 string back and deserialize it
String serializedString = job.getProperty("dto", String.class);
byte[] serializedBytes = Base64.getDecoder().decode(serializedString);
ByteArrayInputStream bais = new ByteArrayInputStream(serializedBytes);
ObjectInputStream ois = new ObjectInputStream(bais);
dtoParam = (Dto) ois.readObject();
The job was scheduled and ran; however, the result was java.io.InvalidClassException: filter status: REJECTED.
This helped to find the true cause:
The AEM implementation uses an internal deserialization filter, com.adobe.cq.deserfw.impl.FirewallSerialFilter, which can be configured in the OSGi Felix console. The component name is com.adobe.cq.deserfw.impl.DeserializationFirewallImpl.name.
Add your class or package name there.
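For reference, here is a minimal sketch of the matching scheduling side for the Base64 workaround above. It is an assumption-laden sketch, not from the original post: it presumes Dto implements java.io.Serializable and reuses java.util.Base64, the property names, and the jobManager from the snippets above.
// Serialize the Dto ourselves and pass it as a plain String property,
// so Sling only ever stores and reads a String.
ByteArrayOutputStream baos = new ByteArrayOutputStream();
try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
    oos.writeObject(new Dto());
}
final Map<String, Object> props = new HashMap<>();
props.put("stringParam", "something");
props.put("dto", Base64.getEncoder().encodeToString(baos.toByteArray()));
Job job = jobManager.addJob("my/special/jobtopic", props);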

Related

Redis cannot serialize/deserialize inner Map objects

I am using a Redis repository with Spring. In the repository, Redis is not deserializing the data structure shown below.
class A {
    String id;
    Map<String, Map<String, Boolean>> someData;
}
It gives this error:
Caused by: java.lang.UnsupportedOperationException: No accessor to set property final float java.util.HashMap.loadFactor!
By default, Redis looks for a no-args constructor. If that is missing, it uses an all-args constructor; if neither exists, it falls back to a custom constructor. But I do not have control over Map, so can you help me resolve this problem? I have not instantiated any converters separately; the default configuration is in place.
Thanks.
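A hedged sketch of one possible workaround, assuming Spring Data Redis's default object mapping is in play: wrap the inner map in a small POJO of your own, so the mapper populates your class through its accessors instead of trying to reconstruct a nested java.util.HashMap field by field. The InnerData name below is purely illustrative and not from the original post.
// Hypothetical wrapper type replacing the inner Map<String, Boolean>
class InnerData {
    private Map<String, Boolean> flags = new HashMap<>();

    public Map<String, Boolean> getFlags() { return flags; }
    public void setFlags(Map<String, Boolean> flags) { this.flags = flags; }
}

class A {
    String id;
    Map<String, InnerData> someData;
}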

How to get Step Name and Step Execution ID in ItemReadListener?

My Spring Batch job uses a FlatFileItemReader to read .csv files. To implement error handling, I created a custom ItemReadListener and provided an overridden onReadError implementation.
Here, I'd like access to the step name and step execution id from which the error was thrown (i.e. at the reader level). Can I access the StepExecution in my custom listener? When I try to inject it into any method or constructor, I get a "No beans of type StepExecution found" error.
Thanks.
Try the following in your ItemReadListener:
@Value("#{stepExecution}")
private StepExecution stepExecution;
This works if the listener bean is step-scoped. Your ItemReadListener also needs to be a Spring bean.
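For illustration, a minimal sketch of such a step-scoped listener (the class name and the Object item type are placeholders, not from the original post):
import org.springframework.batch.core.ItemReadListener;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
@StepScope
public class LoggingReadListener implements ItemReadListener<Object> {

    // Late-bound from the running step because the bean is step-scoped
    @Value("#{stepExecution}")
    private StepExecution stepExecution;

    @Override
    public void beforeRead() { }

    @Override
    public void afterRead(Object item) { }

    @Override
    public void onReadError(Exception ex) {
        String stepName = stepExecution.getStepName();
        long stepExecutionId = stepExecution.getId();
        // log or persist the failure together with stepName / stepExecutionId
    }
}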

Spring Boot / Kafka Json Deserialization - Trusted Packages

I am just starting to use Kafka with Spring Boot & want to send & consume JSON objects.
I am getting the following error when I attempt to consume a message from the Kafka topic:
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition dev.orders-0 at offset 9903. If needed, please seek past the record to continue consumption.
Caused by: java.lang.IllegalArgumentException: The class 'co.orders.feedme.feed.domain.OrderItem' is not in the trusted packages: [java.util, java.lang]. If you believe this class is safe to deserialize, please provide its name. If the serialization is only done by a trusted source, you can also enable trust all (*).
at org.springframework.kafka.support.converter.DefaultJackson2JavaTypeMapper.getClassIdType(DefaultJackson2JavaTypeMapper.java:139) ~[spring-kafka-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.springframework.kafka.support.converter.DefaultJackson2JavaTypeMapper.toJavaType(DefaultJackson2JavaTypeMapper.java:113) ~[spring-kafka-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.springframework.kafka.support.serializer.JsonDeserializer.deserialize(JsonDeserializer.java:218) ~[spring-kafka-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:923) ~[kafka-clients-1.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.Fetcher.access$2600(Fetcher.java:93) ~[kafka-clients-1.0.1.jar:na]
I have attempted to add my package to the list of trusted packages by defining the following property in application.properties:
spring.kafka.consumer.properties.spring.json.trusted.packages = co.orders.feedme.feed.domain
This doesn't appear to make any difference. What is the correct way to add my package to the list of trusted packages for Spring's Kafka JsonDeserializer?
Since you have the trusted-packages issue solved, for your next problem you could take advantage of the overloaded constructor
DefaultKafkaConsumerFactory(Map<String, Object> configs,
                            Deserializer<K> keyDeserializer,
                            Deserializer<V> valueDeserializer)
and the JsonDeserializer "wrapper" of Spring Kafka:
JsonDeserializer(Class<T> targetType, ObjectMapper objectMapper)
Combining the above, for Java I have:
new DefaultKafkaConsumerFactory<>(properties,
        new IntegerDeserializer(),
        new JsonDeserializer<>(Foo.class,
                new ObjectMapper()
                        .registerModules(new KotlinModule(), new JavaTimeModule())
                        .setSerializationInclusion(JsonInclude.Include.NON_NULL)
                        .setDateFormat(new ISO8601DateFormat())
                        .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)));
Essentially, you can tell the factory to use your own deserializers and, for the JSON one, provide your own ObjectMapper. There you can register the Kotlin module as well as customize date formats and other things.
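If it helps, a sketch of wiring that factory into the listener container factory so that @KafkaListener methods pick up the custom deserializers (bean names and the Foo type are assumptions carried over from the example above):
@Bean
public ConcurrentKafkaListenerContainerFactory<Integer, Foo> kafkaListenerContainerFactory(
        ConsumerFactory<Integer, Foo> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Integer, Foo> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    // All @KafkaListener endpoints built from this factory will use the
    // IntegerDeserializer / JsonDeserializer pair configured above.
    factory.setConsumerFactory(consumerFactory);
    return factory;
}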
OK, I have read the documentation in a bit more detail and have found an answer to my question. I am using Kotlin, so the creation of my consumer factory looks like this:
@Bean
fun consumerFactory(): ConsumerFactory<String, FeedItem> {
    val configProps = HashMap<String, Any>()
    configProps[ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG] = bootstrapServers
    configProps[ConsumerConfig.GROUP_ID_CONFIG] = "feedme"
    configProps[ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG] = StringDeserializer::class.java
    configProps[ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG] = JsonDeserializer::class.java
    configProps[JsonDeserializer.TRUSTED_PACKAGES] = "co.orders.feedme.feed.domain"
    return DefaultKafkaConsumerFactory(configProps)
}
Now I just need a way to override the creation of the Jackson ObjectMapper in the JsonDeserializer so that it can work with my Kotlin data classes that don't have a zero-argument constructor :)

Use JSON deserializer for Batch job execution context

I'm trying to get a list of job executions which have been stored in Spring batch related tables in the database using:
List<JobExecution> jobExecutions = jobExplorer.getJobExecutions(jobInstance);
The above method call seems to invoke the ExecutionContextRowMapper.mapRow method in the JdbcExecutionContextDao class.
The ExecutionContextRowMapper uses the com.thoughtworks.xstream.XStream.fromXML method to deserialize the JSON string of the JobExecutionContext stored in the DB.
It looks like an incorrect (or default XML) deserializer is used for unmarshalling the JSONified JobExecutionContext.
Is there any configuration to use a JSON deserializer in this scenario?
The serializer/deserializer for the ExecutionContext is configurable in 2.2.x. We use the ExecutionContextSerializer interface (providing two implementations: one using Java serialization and one using the XStream implementation you mention). To configure your own serializer, you'll need to implement org.springframework.batch.core.repository.ExecutionContextSerializer and inject it into the JobRepositoryFactoryBean (so that the contexts are serialized/deserialized correctly) and the JobExplorerFactoryBean (to reserialize the previously saved contexts).
It is important to note that changing the serialization method will prevent Spring Batch from deserializing previously saved ExecutionContexts.
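For example, a hedged sketch of wiring one serializer into both factory beans with Java config. Jackson2ExecutionContextStringSerializer is just one possible choice and only exists in later Spring Batch versions; any ExecutionContextSerializer implementation can be substituted, and the bean names here are illustrative.
import javax.sql.DataSource;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.explore.support.JobExplorerFactoryBean;
import org.springframework.batch.core.repository.ExecutionContextSerializer;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.dao.Jackson2ExecutionContextStringSerializer;
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class BatchSerializerConfig {

    @Bean
    public ExecutionContextSerializer executionContextSerializer() {
        return new Jackson2ExecutionContextStringSerializer();
    }

    @Bean
    public JobRepository jobRepository(DataSource dataSource,
                                       PlatformTransactionManager transactionManager,
                                       ExecutionContextSerializer serializer) throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(dataSource);
        factory.setTransactionManager(transactionManager);
        factory.setSerializer(serializer);   // contexts are written with this serializer
        factory.afterPropertiesSet();
        return factory.getObject();
    }

    @Bean
    public JobExplorer jobExplorer(DataSource dataSource,
                                   ExecutionContextSerializer serializer) throws Exception {
        JobExplorerFactoryBean factory = new JobExplorerFactoryBean();
        factory.setDataSource(dataSource);
        factory.setSerializer(serializer);   // and read back with the same one
        factory.afterPropertiesSet();
        return factory.getObject();
    }
}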

Getting stack overflow error in hadoop

I am getting a stack overflow error while accessing a Hadoop file using Java code.
import java.io.InputStream;
import java.net.URL;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.io.IOUtils;

public class URLCat {

    static {
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
    }

    public static void main(String[] args) throws Exception {
        InputStream in = null;
        try {
            in = new URL(args[0]).openStream();
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}
I used Eclipse to debug this code and found that the line
in = new URL(args[0]).openStream();
is producing the error.
I am running this code by passing a Hadoop file path, i.e.
hdfs://localhost/user/jay/abc.txt
Exception (pulled from comments):
Exception in thread "main" java.lang.StackOverflowError
at java.nio.Buffer.<init>(Buffer.java:174)
at java.nio.ByteBuffer.<init>(ByteBuffer.java:259)
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:52)
at java.nio.ByteBuffer.wrap(ByteBuffer.java:350)
at java.nio.ByteBuffer.wrap(ByteBuffer.java:373)
at java.lang.StringCoding$StringEncoder.encode(StringCoding.java:237)
at java.lang.StringCoding.encode(StringCoding.java:272)
at java.lang.String.getBytes(String.java:946)
at java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
.. stack trace truncated ..
1) This is because of a bug in the FsUrlStreamHandlerFactory class provided by Hadoop. Note that the bug is fixed in the latest jar which contains this class.
2) This file is located in hadoop-common-2.0.0-cdh4.2.1.jar. To understand the problem completely we have to understand how the java.net.URL class works.
Working of the URL object
When we create a new URL using any of its constructors without passing a URLStreamHandler (either by passing null for its value or by calling a constructor that does not take a URLStreamHandler as a parameter), it internally calls a method named getURLStreamHandler(). This method returns a URLStreamHandler object and sets a member variable in the URL class.
This object knows how to construct a connection for a particular scheme like "http", "file" and so on. The URLStreamHandler is constructed by a factory called URLStreamHandlerFactory.
3) In the problem example given above, the URLStreamHandlerFactory was set to FsUrlStreamHandlerFactory by calling the following static method:
URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
So when we create a new URL, this FsUrlStreamHandlerFactory is used to create the URLStreamHandler object for the new URL by calling its createURLStreamHandler(protocol) method.
This method in turn calls the loadFileSystems() method of the FileSystem class. loadFileSystems() invokes ServiceLoader.load(FileSystem.class), which tries to read the binary names of the FileSystem implementation classes by searching the META-INF/services/*.FileSystem files of all jar files on the classpath and reading their entries.
4) Remember that each jar is handled as a URL object, meaning the ClassLoader internally creates a URL object for every jar. The class loader supplies the URLStreamHandler object when constructing the URLs for these jars, so those URLs are not affected by the FsUrlStreamHandlerFactory we set, because each URL already has its URLStreamHandler. Since we are dealing with jar files, the class loader sets the URLStreamHandler to be of type sun.net.www.protocol.jar.Handler.
5) Now, in order to read the entries inside the jar files for the FileSystem implementation classes, sun.net.www.protocol.jar.Handler needs to construct a URL object for each entry by calling the URL constructor without a URLStreamHandler object. Since we already defined the URLStreamHandlerFactory as FsUrlStreamHandlerFactory, it calls the createURLStreamHandler(protocol) method, which recurses indefinitely and leads to the StackOverflowError.
This bug is known to the Hadoop committers as HADOOP-9041. The link is https://issues.apache.org/jira/browse/HADOOP-9041.
I know this is somewhat complicated.
So in short the solution to this problem is given below.
1) Use the latest jar hadoop-common-2.0.0-cdh4.2.1.jar which has the fix for this bug
or
2) Put the following call in the static block before setting the URLStreamHandlerFactory (wrapped in try/catch because getFileSystemClass throws a checked exception):
static {
    try {
        FileSystem.getFileSystemClass("file", new Configuration());
    } catch (Exception e) { throw new RuntimeException(e); }
    URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
}
Note that the first call inside the static block doesn't depend on FsUrlStreamHandlerFactory and so uses the default handler for file:// to read the entries in the META-INF/services/*.FileSystem files.
I have a workaround.
It would be great if someone more familiar with the current state of the Hadoop world (Jan 2014) would enlighten us and/or explain the behavior.
I encountered the same StackOverflowError when trying to run URLCat from Hadoop: The Definitive Guide, Third Edition, by Tom White.
I have the problem with Cloudera QuickStart 4.4.0 and 4.3.0
Using both jdk1.6.0_32 and jdk1.6.0_45
The problem occurs during initialization/class loading of org.apache.hadoop.fs.FileSystem underneath java.net.URL.
There is some kind of recursive exception handling that is kicking in.
I did the best I could to trace it down.
The path leads to java.util.ServiceLoader which then invokes sun.misc.CompoundEnumeration.nextElement()
Unfortunately, the source for sun.misc.CompoundEnumeration is not included in the JDK src.zip ... perhaps an oversight, because it is in the Java package sun.misc.
In an attempt to trigger the error through another execution path I came up with a workaround ...
You can avoid the conditions that lead to StackOverflowError by invoking org.apache.hadoop.fs.FileSystem.getFileSystemClass(String, Configuration) prior to registering the StreamHandlerFactory.
This can be done by modifying the static initialization block (see original listing above):
static {
    Configuration conf = new Configuration();
    try {
        FileSystem.getFileSystemClass("file", conf);
    } catch (Exception e) {
        throw new RuntimeException(e.getMessage());
    }
    URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
}
This can also be accomplished by moving the contents of this static block to your main().
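For example, a sketch of the original URLCat listing with the workaround moved into main() (same classes as in the listing above, plus org.apache.hadoop.conf.Configuration and org.apache.hadoop.fs.FileSystem):
import java.io.InputStream;
import java.net.URL;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.io.IOUtils;

public class URLCat {
    public static void main(String[] args) throws Exception {
        // Trigger FileSystem's service discovery with the default URL handlers
        // before installing FsUrlStreamHandlerFactory.
        FileSystem.getFileSystemClass("file", new Configuration());
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());

        InputStream in = null;
        try {
            in = new URL(args[0]).openStream();
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}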
I found another reference to this error from Aug 2011 on Stack Overflow involving FsUrlStreamHandlerFactory.
I am quite puzzled that more Hadoop newbies have not stumbled onto this problem ... buy the Hadoop book ... download Cloudera QuickStart ... try a very simple example ... FAIL!?
Any insight from more experienced folks would be appreciated.
