I built an Eclipse plugin for Hadoop 2.3.0. The Bundle-ClassPath is:
Bundle-ClassPath: classes/,
lib/hadoop-mapreduce-client-core-${hadoop.version}.jar,
lib/hadoop-mapreduce-client-common-${hadoop.version}.jar,
lib/hadoop-mapreduce-client-jobclient-${hadoop.version}.jar,
lib/hadoop-auth-${hadoop.version}.jar,
lib/hadoop-common-${hadoop.version}.jar,
lib/hadoop-hdfs-${hadoop.version}.jar,
lib/protobuf-java-${protobuf.version}.jar,
lib/log4j-${log4j.version}.jar,
lib/commons-cli-1.2.jar,
lib/commons-configuration-1.6.jar,
lib/commons-httpclient-3.1.jar,
lib/commons-lang-2.5.jar,
lib/commons-collections-${commons-collections.version}.jar,
lib/jackson-core-asl-1.8.8.jar,
lib/jackson-mapper-asl-1.8.8.jar,
lib/slf4j-log4j12-1.7.5.jar,
lib/slf4j-api-1.7.5.jar,
lib/guava-${guava.version}.jar,
lib/netty-${netty.version}.jar
I added the built jar, hadoop-eclipse-plugin.jar, to eclipse\plugins. I am using the Eclipse Kepler SR2 package. When I try to create an HDFS location and connect, it generates an internal error:
An internal error occurred during: "Map/Reduce location status updater". org/apache/commons/lang/StringUtils
What might have caused this error, and how can I resolve it? Any help is appreciated.
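For reference, a minimal check I can run outside Eclipse (just a sketch; the class name is taken from the error above, and it assumes the plugin's lib/ jars are put on the classpath) to confirm whether commons-lang is actually reachable and which jar it comes from:

// Hypothetical standalone check: run with the plugin's lib/ jars on the classpath.
// If the class loads, it prints which jar it came from; otherwise commons-lang
// is simply not being packaged into the plugin.
public class StringUtilsCheck {
    public static void main(String[] args) {
        try {
            Class<?> c = Class.forName("org.apache.commons.lang.StringUtils");
            System.out.println("Loaded from: "
                    + c.getProtectionDomain().getCodeSource().getLocation());
        } catch (ClassNotFoundException e) {
            System.out.println("commons-lang is not on the classpath");
        }
    }
}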
I'm having trouble using the GeoMondrian Workbench on my Ubuntu 18.04 LTS system. I've followed the installation instructions and have installed the following:
Oracle Java 8
PostgreSQL 9.5
PostGIS 2.5
I've downloaded the GeoMondrian Workbench and the "simple_geofoodmart.sql" file, created a database, and passed the necessary parameters to the workbench.
However, when I try to open the "simple_geofoodmart.xml" schema file, I get the following error:
Error: Schema file /home/tarik/workbench/demo/simple_geofoodmart.xml is invalid. org/opengis/referencing/NoSuchAuthorityCodeException java.lang.NoClassDefFoundError: org/opengis/referencing/NoSuchAuthorityCodeException
Additionally, when I try to use an MDX query, I get the following error:
"Exception in thread 'AWT-EventQueue-0' java.lang.NoClassDefFoundError: Could not initialize class mondrian.olap.fun.GlobalFunTable at mondrian.rolap.RolapSchema$RolapSchemaFunctionTable.defineFunctions(RolapSchema.java:1643)"
I've tried to resolve all the dependencies and have made sure that I have the necessary JAR files in my classpath, but I'm still getting the same errors. Can anyone help me figure out what's going wrong and how to fix it? Thank you!
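For reference, both failures are NoClassDefFoundError, so a small check along these lines (a rough sketch; it assumes it is run with the same classpath the workbench launch script builds) would show which of the two classes is actually missing:

import java.util.Arrays;

// Hypothetical check of the two classes named in the errors above: the first is
// part of GeoAPI (the org.opengis interfaces used by GeoTools), the second is
// part of the Mondrian core jar.
public class WorkbenchClasspathCheck {
    public static void main(String[] args) {
        for (String name : Arrays.asList(
                "org.opengis.referencing.NoSuchAuthorityCodeException",
                "mondrian.olap.fun.GlobalFunTable")) {
            try {
                Class.forName(name);
                System.out.println("OK       " + name);
            } catch (Throwable t) {
                System.out.println("MISSING  " + name + " (" + t + ")");
            }
        }
    }
}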
I successfully built Apache Ambari from the git repository, then installed and configured the ambari-server, but it just won't start. The log shows the following error:
Error injecting constructor, org.apache.ambari.server.AmbariException: Unable to
find stack definitions under stackRoot = /var/lib/ambari-server/resources/stacks
at org.apache.ambari.server.stack.StackManager.<init>(StackManager.java:149)
while locating org.apache.ambari.server.stack.StackManager annotated with @com.google.inject.internal.UniqueAnnotations$Internal(value=1)
at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:272)
at org.apache.ambari.server.api.services.AmbariMetaInfo.class(AmbariMetaInfo.java:131)
while locating org.apache.ambari.server.api.services.AmbariMetaInfo
for field at org.apache.ambari.server.controller.AmbariServer.ambariMetaInfo(AmbariServer.java:180)
at org.apache.ambari.server.controller.AmbariServer.class(AmbariServer.java:180)
while locating org.apache.ambari.server.controller.AmbariServer
What could be the problem?
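The exception itself only says that nothing usable was found under the stack root; a quick way to see what the server actually finds there (a rough sketch; the path is copied from the error message):

import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical check: list the contents of the stack root the error complains about.
public class StackRootCheck {
    public static void main(String[] args) throws Exception {
        Path stackRoot = Paths.get("/var/lib/ambari-server/resources/stacks");
        if (!Files.isDirectory(stackRoot)) {
            System.out.println("Stack root is missing: " + stackRoot);
            return;
        }
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(stackRoot)) {
            for (Path entry : entries) {
                System.out.println("Found stack definition: " + entry.getFileName());
            }
        }
    }
}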
I've got it running now, at least the ambari-server, but not the stack.
Thank you for the hint.
When running my Beam pipeline locally it all works as expected, but when I try to run it on the DataflowRunner I suddenly get the error below. Honestly, I don't even know where to start evaluating this, because the DataflowRunner seems to be a black box.
Jan 14, 2019 11:26:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 165 files. Enable logging at DEBUG level to see which files will be staged.
Exception in thread "main" java.lang.IncompatibleClassChangeError: Class org.apache.beam.model.pipeline.v1.RunnerApi$StandardPTransforms$Primitives does not implement the requested interface com.google.protobuf.ProtocolMessageEnum
at org.apache.beam.runners.core.construction.BeamUrns.getUrn(BeamUrns.java:27)
at org.apache.beam.runners.core.construction.PTransformTranslation.<clinit>(PTransformTranslation.java:58)
at org.apache.beam.runners.core.construction.UnconsumedReads$1.visitValue(UnconsumedReads.java:49)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:666)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$600(TransformHierarchy.java:311)
at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:245)
at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:458)
at org.apache.beam.runners.core.construction.UnconsumedReads.ensureAllReadsConsumed(UnconsumedReads.java:40)
at org.apache.beam.runners.dataflow.DataflowRunner.replaceTransforms(DataflowRunner.java:868)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:660)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:173)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:313)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:299)
at (my code: pipe.run().waitUntilFinish();)
Check the versions of Beam and related libraries, and upgrade your dependencies where possible.
I had the same error, and after seeing that you get it too, I figured it must be a dependency conflict, since it didn't exist before.
I'm using Scio to deploy to Dataflow and just referenced what they're using: https://github.com/spotify/scio/blob/v0.7.1/build.sbt
I also updated Guava and Protobuf.
I know you're using Java, but try updating Beam to 2.9.0, and maybe Guava and Protobuf as well.
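If you want to confirm it really is a clash before bumping versions, a throwaway check like this (just a sketch; the class names are copied from your stack trace) prints which jars the two classes involved are loaded from:

// Hypothetical diagnostic: an IncompatibleClassChangeError here usually means the
// Beam model classes were compiled against a different protobuf-java version than
// the one actually on the classpath. If ProtocolMessageEnum resolves to an
// unexpected (e.g. older, transitively pulled-in) protobuf-java jar, that's the
// conflict to fix.
public class ProtobufConflictCheck {
    public static void main(String[] args) {
        String[] classes = {
                "com.google.protobuf.ProtocolMessageEnum",
                "org.apache.beam.model.pipeline.v1.RunnerApi"
        };
        for (String name : classes) {
            try {
                Class<?> c = Class.forName(name);
                System.out.println(name + " -> "
                        + c.getProtectionDomain().getCodeSource().getLocation());
            } catch (Throwable t) {
                System.out.println(name + " -> " + t);
            }
        }
    }
}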
I am trying to deploy a web application using the wsadmin tool, but it is throwing an error.
The JACL script I am using is:
$AdminApp install /opt/www/temp/SampleApp.war {-nopreCompileJSPs -nodeployejb -server delivery -cell delivery_cell -node delivery_node -appname SampleApp -contextroot SampleApp -MapWebModToVH {{"SampleApp" SampleApp.war,WEB-INF/web.xml default_host}}}
The error I am getting is:
com.ibm.ws.scripting.ScriptingException: WASX7109E: Insufficient data for install task "MapResRefToEJB
ADMA0007E: A validation error occurred in task Mapping resource references to resources. The Java Naming and Directory Interface (JNDI) name is not specified for resource reference jdbc/app_DB in module SampleApp with EJB name.
From the error above, I understand that I need to configure the JNDI name with -MapResRefToEJB. I have tried to understand this option but am getting confused.
Can anyone help me resolve this issue?
These errors appear to be caused by the MapResRefToEJB option in the wsadmin command not being set correctly, or by the resource it points to not being defined correctly in the web.xml file.
Additional information on MapResRefToEJB
Options for the AdminApp object install, installInteractive, edit,
editInteractive, update, and updateInteractive commands
http://pic.dhe.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd.doc/info/ae/ae/rxml_taskoptions.html
Thank you
Note: Opinions are my own.
Does anyone know about the HPCC/HDFS connector? We are using both HPCC and Hadoop. There is a utility (the HPCC/HDFS connector) developed by HPCC that allows an HPCC cluster to access HDFS data.
I have installed the connector, but when I run the program to access data from HDFS, it gives an error saying libhdfs.so.0 doesn't exist.
I tried to build libhdfs.so using the command
ant compile-libhdfs -Dlibhdfs=1
but it gives me the error
target "compile-libhdfs" does not exist in the project "hadoop"
I tried one more command,
ant compile-c++-libhdfs -Dlibhdfs=1
and it gives this error:
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
[get] Error getting http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
to /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
BUILD FAILED java.net.ConnectException: Connection timed out
Any suggestion would be a great help.
Chhaya, you might not need to build libhdfs.so; depending on how you installed Hadoop, you might already have it.
Check for HADOOP_LOCATION/c++/Linux-<arch>/lib/libhdfs.so, where HADOOP_LOCATION is your Hadoop install location and <arch> is the machine's architecture (i386-32 or amd64-64).
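If it is easier to probe from Java than by hand, a throwaway sketch like this checks the usual prebuilt locations (the install path here is only an assumption based on your build output above; adjust it to your setup):

import java.io.File;

// Hypothetical probe for the libhdfs.so shipped with the Hadoop tarball.
public class LibhdfsProbe {
    public static void main(String[] args) {
        String hadoopLocation = "/home/hadoop/hadoop-0.20.203.0"; // adjust to your install
        for (String arch : new String[] {"Linux-i386-32", "Linux-amd64-64"}) {
            File lib = new File(hadoopLocation, "c++/" + arch + "/lib/libhdfs.so");
            System.out.println(lib + " -> " + (lib.exists() ? "present" : "not found"));
        }
    }
}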
Once you locate the lib, make sure the H2H connector is configured correctly (see page 4 here).
It's just a matter of updating the HADOOP_LOCATION var in the config file:
/opt/HPCCSystems/hdfsconnector.conf
Good luck.