MapReduce job: protobuf-related error - hadoop

I am getting an error while running a MapReduce job:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.yarn.proto.YarnProtos$LocalResourceProto.hashLong(J)I
at org.apache.hadoop.yarn.proto.YarnProtos$LocalResourceProto.hashCode(YarnProtos.java:11655)
at org.apache.hadoop.yarn.api.records.impl.pb.LocalResourcePBImpl.hashCode(LocalResourcePBImpl.java:62)
at java.util.HashMap.hash(HashMap.java:362)
at java.util.HashMap.put(HashMap.java:492)
A little Google searching turned up a thread suggesting this is a protobuf version mismatch: it says my application depends on proto3 while YARN uses proto2.
According to my pom.xml, protobuf-java-2.6.1.jar is being used.
Can anyone please help me understand the issue and how to fix it?

This error indicates that the jar file present at compile time differs from the jar file on the runtime classpath. You need to make sure you're running against exactly the same version of the protobuf jar that the code was compiled against. If you didn't compile the code yourself, you'll need to find out which version the provider used.
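To track this down, it may help to see which protobuf version Maven actually resolves, since transitive dependencies can override what the pom declares:
mvn dependency:tree -Dincludes=com.google.protobuf
Then pin the version your Hadoop was compiled against. As a sketch: Hadoop 2.x releases were built against protobuf-java 2.5.0, so that is usually the version to align on (check the protobuf jar shipped in your distribution's lib directory to be sure):
<dependency>
  <groupId>com.google.protobuf</groupId>
  <artifactId>protobuf-java</artifactId>
  <version>2.5.0</version>
</dependency>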

Related

How to solve Error starting application: Error creating query engine Drools when using SWRL API?

I get this error message:
Error starting application: Error creating query engine Drools. Exception: java.lang.NoSuchMethodError. Message: 'void org.semanticweb.owlapi.util.PriorityCollection.add(java.io.Serializable)'
I am using the SWRL API for Java to run SQWRL queries on OWL ontologies. I built the edu.stanford.swrl swrlapi-example project from Maven, updating in the pom file the swrlapi and swrlapi-drools-engine dependencies from version 1.0.3 to 2.0.9 and the OWL API from the default 4.2.3 to 5.1.17. I am running this code in the executable main:
OWLOntologyManager ontologyManager = OWLManager.createOWLOntologyManager();
File file = new File("C:\\Users\\Hugo\\Desktop\\Universidad\\SUPAERO\\S4\\SWRLapiTEST\\Prueba.owl");
OWLOntology ontology = ontologyManager.loadOntologyFromOntologyDocument(file);
ontologyManager.createOntology();
SQWRLQueryEngine queryEngine = SWRLAPIFactory.createSQWRLQueryEngine(ontology);
The program stops at the last statement. I don't know what I am doing wrong. If I use version 1.0.3 of swrlapi and swrlapi-drools-engine it works, but I wanted to use some commands included in the TBox and ABox libraries that are not implemented in that ancient version.
The exception you're seeing is a symptom of multiple incompatible OWLAPI versions on your classpath. Make sure only one version is in use (5.1.17, from what you say). If you cannot find the problem, please add details on how you're setting your classpath (you can print the java.class.path system property at the beginning of your code to get the exact classpath being used).
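For example, a minimal way to dump the classpath from inside the program:
// Print the exact classpath the JVM was started with
System.out.println(System.getProperty("java.class.path"));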

NoSuchAlgorithmException: DH KeyPairGenerator not available on camel-ftp

I'm using Apache Camel in a project, and when I needed to use the camel-ftp component to send some files to a remote server, I got this exception:
com.jcraft.jsch.JSchException: Session.connect: java.security.NoSuchAlgorithmException: DH KeyPairGenerator not available
I was wondering why this could be happening in my project, so I started a quick, small project with only the camel-core and camel-ftp components, pasted the route there, and it worked fine.
from("file:data/input?noop=true")
.log("Uploading file ${file:name}")
.to("sftp://www.mydestination.com:22/../opt/tmp?autoCreate=false&username=MyUser&password=MyPassword&passiveMode=true")
.log("Uploaded file ${file:name} complete.");
I'm using Apache Karaf to run OSGi bundles (my application is one of them). I've checked in different environments, but the result is still the same exception.
I really don't know what it could be. Does anyone have an idea about the possible cause?
The DH KeyPairGenerator is normally part of the JRE/JSE and should be included with your JDK (by the way, which exact JDK version are you using?).
Given that, your error is probably due to a wrong classpath.
I suggest you check the value of the "-Djava.ext.dirs" property (and the contents of the corresponding folders), for instance:
Windows:
java -Djava.ext.dirs="C:\Program Files\Java\jdk1.6.0_07\jre\lib\ext;C:\dir2"
Unix:
java -Djava.ext.dirs=$JAVA_HOME/jre/lib/ext:/dir2
You also need to specify/modify the Karaf security provider; take a look at:
https://karaf.apache.org/manual/latest/security
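As a quick sanity check outside Karaf, here is a minimal sketch (the class name is just illustrative) that verifies the DH algorithm is available in a given JVM:
import java.security.KeyPairGenerator;
import java.security.Provider;
import java.security.Security;

public class DhCheck {
    public static void main(String[] args) throws Exception {
        // List the security providers this JVM actually loaded
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName());
        }
        // Throws NoSuchAlgorithmException if DH is not available
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DH");
        System.out.println("DH available via provider " + kpg.getProvider().getName());
    }
}
If this passes on the same JVM that runs Karaf, the problem is more likely the OSGi/Karaf security configuration than the JDK itself.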

optaplanner with aws lambda

I am using OptaPlanner to solve a scheduling problem, and I want to invoke the scheduling code from AWS Lambda (I know that Lambda's max execution time is 5 minutes, and that's okay for this application).
To achieve this I have built a Maven project with two modules:
module-1: scheduling optimization code
module-2: aws lambda handler ( calls scheduling code from module-1)
When I run my tests in IntelliJ IDEA for module-1 (the one with the OptaPlanner code), everything runs fine.
When I invoke the lambda function, I get the following exception:
java.lang.ExceptionInInitializerError:
java.lang.ExceptionInInitializerError
java.lang.ExceptionInInitializerError
at org.kie.api.internal.utils.ServiceRegistry.getInstance(ServiceRegistry.java:27)
...
Caused by: java.lang.RuntimeException: Child services [org.kie.api.internal.assembler.KieAssemblers] have no parent
at org.kie.api.internal.utils.ServiceDiscoveryImpl.buildMap(ServiceDiscoveryImpl.java:191)
at org.kie.api.internal.utils.ServiceDiscoveryImpl.getServices(ServiceDiscoveryImpl.java:97)
...
I have included the following dependency in the Maven file: org.optaplanner optaplanner-core 7.7.0.Final
I also checked that the jar file contains drools-core, kie-api, kie-internal and drools-compiler. Does anyone know what might be the issue?
Sounds like a bug in Drools when running in a restricted environment such as AWS Lambda. Please create a JIRA and link it here.
I was getting the same error attempting to run a fat jar containing an example OptaPlanner project. A little debugging revealed that services was empty when ServiceDiscoveryImpl::buildMap was invoked: the assembly kept only the first META-INF/kie.conf found in the build, so the services declared in the other copies were missing. Naturally your tests work, because there the classpath contains all of the dependencies (that is, several distinct META-INF/kie.conf files) rather than the single assembly you were attempting to execute on the lambda.
Concatenating those files instead (using an appropriate merge strategy in the assembly) fixes the problem and appears appropriate given how they are loaded by ServiceDiscoveryImpl. The updated jar runs properly as an AWS lambda.
Note: I was using the default scoreDrl from the v7.12.0.Final Cloud Balancing example.
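For reference, if the fat jar is built with the maven-shade-plugin, the concatenation can be expressed with the standard AppendingTransformer (a sketch; maven-assembly offers similar merge handlers):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <transformers>
      <!-- Concatenate every META-INF/kie.conf on the classpath instead of keeping only the first -->
      <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
        <resource>META-INF/kie.conf</resource>
      </transformer>
    </transformers>
  </configuration>
</plugin>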

How do I fix Akka plugin is not registered in Play assembly?

I have built a Play assembly in Maven using
<plugin>
  <groupId>com.google.code.play2-maven-plugin</groupId>
  <artifactId>play2-maven-plugin</artifactId>
  <version>1.0.0-beta1</version>
</plugin>
via the maven-shade-plugin with entry point play.core.server.NettyServer. When I try to run it using
java -Dhttp-port=7000 -jar p3-users-1.0.0-SNAPSHOT-allinone.jar
I get
Play server process ID is 2924
Oops, cannot start the server.
java.lang.RuntimeException: Akka plugin is not registered.
at scala.sys.package$.error(package.scala:27)
at play.api.libs.concurrent.Akka$$anonfun$system$2.apply(Akka.scala:25)
at play.api.libs.concurrent.Akka$$anonfun$system$2.apply(Akka.scala:25)
at scala.Option.getOrElse(Option.scala:120)
at play.api.libs.concurrent.Akka$.system(Akka.scala:24)
at securesocial.core.UserServicePlugin.onStart(UserService.scala:129)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:88)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:88)
at scala.collection.immutable.List.foreach(List.scala:318)
at play.api.Play$$anonfun$start$1.apply$mcV$sp(Play.scala:88)
at play.api.Play$$anonfun$start$1.apply(Play.scala:88)
at play.api.Play$$anonfun$start$1.apply(Play.scala:88)
at play.utils.Threads$.withContextClassLoader(Threads.scala:18)
at play.api.Play$.start(Play.scala:87)
at play.core.StaticApplication.<init>(ApplicationProvider.scala:52)
at play.core.server.NettyServer$.createServer(NettyServer.scala:243)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:279)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:274)
at scala.Option.map(Option.scala:145)
at play.core.server.NettyServer$.main(NettyServer.scala:274)
at play.core.server.NettyServer.main(NettyServer.scala)
I am not sure what 'registered' is supposed to mean. Is this a missing dependency somewhere, or some other configuration problem?
As an aside, I previously built the assembly in SBT using the sbt-assembly-plugin, and it all seemed to work fine, so I know it's possible to build an uber-jar for a Play application. However, the sbt-assembly-plugin seems to have some serious algorithmic problems, causing it to take 20 times longer to build assemblies than the maven-shade-plugin.
OK, I seem to have manually fixed the problem.
According to https://www.playframework.com/documentation/2.4.x/ScalaPlugins plugins are registered in conf/play.plugins. I could see that Akka was missing from that, but was not sure why. Somehow the sbt-assembly-plugin did the right thing, but the maven-shade-plugin did not.
I searched my target directory for more play.plugins files, and found a couple under streams/... and one included a definition for
1000:play.api.libs.concurrent.AkkaPlugin
so I added that definition, and the others I found, to my conf/play.plugins file. I am not sure whether this is something the play2-maven-plugin should be handling. Likely SBT has built-in awareness of Play projects and handled the Play plugin registrations properly.
Anyway, this is more of a workaround than a solution.
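If the assembly is built with the maven-shade-plugin, a less manual fix might be to concatenate the plugin registration files from all dependencies instead of keeping only the first one found, analogous to merging other duplicated resources (a sketch; the resource path may need adjusting to wherever your dependencies ship their play.plugins):
<transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
  <resource>play.plugins</resource>
</transformer>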
There is no AkkaPlugin in Play! 2.4.x. I don't use 2.4.x, so I don't know whether it was moved somewhere or just removed from the play/api/libs/concurrent/Akka.scala file.
Could you provide a simple test project, or at least your dependencies and plugins configuration from the pom.xml file?

Understanding how to resolve "Inconsistent stackmap frames" exception

I get an exception on startup of the web application as Guice tries to construct the class mentioned in the trace.
java.lang.VerifyError: Inconsistent stackmap frames at branch target 2770 in method com.aptusi.apps.magazine.api.servlet.internal.EditorServlet.service(Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;Ljava/lang/String;Lcom/aptusi/persistence/runtime/framework/DboSession;)V at offset 200
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2483)
at java.lang.Class.getDeclaredConstructors(Class.java:1891)
at com.google.inject.spi.InjectionPoint.forConstructorOf(InjectionPoint.java:243)
at com.google.inject.internal.ConstructorBindingImpl.create(ConstructorBindingImpl.java:96)
at com.google.inject.internal.InjectorImpl.createUninitializedBinding(InjectorImpl.java:629)
at com.google.inject.internal.InjectorImpl.createJustInTimeBinding(InjectorImpl.java:845)
at com.google.inject.internal.InjectorImpl.createJustInTimeBindingRecursive(InjectorImpl.java:772)
at com.google.inject.internal.InjectorImpl.getJustInTimeBinding(InjectorImpl.java:256)
at com.google.inject.internal.InjectorImpl.getBindingOrThrow(InjectorImpl.java:205)
at com.google.inject.internal.InjectorImpl.getBinding(InjectorImpl.java:146)
at com.google.inject.internal.InjectorImpl.getBinding(InjectorImpl.java:66)
at com.google.inject.servlet.ServletDefinition.init(ServletDefinition.java:103)
at com.google.inject.servlet.ManagedServletPipeline.init(ManagedServletPipeline.java:82)
at com.google.inject.servlet.ManagedFilterPipeline.initPipeline(ManagedFilterPipeline.java:102)
at com.google.inject.servlet.GuiceFilter.init(GuiceFilter.java:172)
I know about the -XX:-UseSplitVerifier and -noverify JVM options, but I don't want to use them, because I want to ensure that all the code in this project is at least Java 7.
To do this, it would be useful to understand where exactly this is occurring in my code. It's not clear to me what the offset of 200 refers to; can it be related to a line number?
Also, does anyone know a way to find out the Java versions of all classes on my classpath? I am using Maven, so there are a lot of dependencies, and I'm looking for an automated way of finding any classes on the classpath that may have been compiled for a Java version lower than 1.7.
To find the version of a class file, just look at the major-version field in its header (in practice, the 8th byte): it is 51 for Java 7 classes. A framework like ASM can also read this for you.
As far as the error goes, it means your class file is malformed. How did you create these classes? Did you do any bytecode manipulation? If so, you probably have a bug in that code.
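For example, a minimal sketch in plain Java (no ASM required) that reads the version of a single .class file; extending it to walk the entries of every jar on the classpath via java.util.jar.JarFile is straightforward:
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassFileVersion {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            if (in.readInt() != 0xCAFEBABE) {   // bytes 1-4: magic number
                System.err.println("Not a class file: " + args[0]);
                return;
            }
            int minor = in.readUnsignedShort(); // bytes 5-6: minor version
            int major = in.readUnsignedShort(); // bytes 7-8: major version (51 = Java 7, 50 = Java 6)
            System.out.println(args[0] + ": major " + major + ", minor " + minor);
        }
    }
}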
