Ambari Server fails to start because of missing stack definitions - hadoop

I successfully built Apache Ambari from the git repository, then installed and configured the ambari-server, but it just won't start. The log shows the following error:
Error injecting constructor, org.apache.ambari.server.AmbariException: Unable to
find stack definitions under stackRoot = /var/lib/ambari-server/resources/stacks
at org.apache.ambari.server.stack.StackManager.<init>(StackManager.java:149)
while locating org.apache.ambari.server.stack.StackManager annotated with #com.google.inject.internal.UniqueAnnotations$Internal(value=1)
at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:272)
at org.apache.ambari.server.api.services.AmbariMetaInfo.class(AmbariMetaInfo.java:131)
while locating org.apache.ambari.server.api.services.AmbariMetaInfo
for field at org.apache.ambari.server.controller.AmbariServer.ambariMetaInfo(AmbariServer.java:180)
at org.apache.ambari.server.controller.AmbariServer.class(AmbariServer.java:180)
while locating org.apache.ambari.server.controller.AmbariServer
What could be the problem?

I've got at least the ambari-server running now, but still not the stack.
Thank you for the hint.
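For anyone hitting the same error: the exception says the server found nothing under stackRoot, so a first check is whether that directory is actually populated. A minimal sketch, assuming a source checkout whose stack definitions sit under ambari-server/src/main/resources/stacks (adjust both paths to your build):
# Verify the stack root the server complains about
ls /var/lib/ambari-server/resources/stacks
# If it is empty, copy the stack definitions over from the source tree
sudo cp -r ~/ambari/ambari-server/src/main/resources/stacks/* /var/lib/ambari-server/resources/stacks/
sudo ambari-server restart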

Related

org.apache.kylin.job.exception.ExecuteException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/serde2/typeinfo/TypeInfo

I found a similar error at https://issues.apache.org/jira/browse/KYLIN-2511.
env:
hadoop-2.7.1
hbase-1.3.2
apache-hive-2.1.1-bin
apache-kylin-1.6.0-hbase1.x-bin
I've tried copying all the Hive libs to Kylin, but I get another error:
org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.NoClassDefFoundError: org/apache/hadoop/hive/serde2/typeinfo/TypeInfo
The missing class should be in the hive-exec jar. Check and debug bin/find-hive-dependency.sh to see why it wasn't able to locate this jar on your server. You can manually add it to the hive_exec_path variable.
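A rough sketch of that check, assuming KYLIN_HOME points at the Kylin install (the jar path below is only an illustration; use whatever your Hive actually ships):
# See what the dependency script resolves on this machine
bash -x $KYLIN_HOME/bin/find-hive-dependency.sh 2>&1 | grep -i hive_exec
# If hive-exec is not found, set the variable in the script by hand, e.g.:
hive_exec_path=/usr/local/apache-hive-2.1.1-bin/lib/hive-exec-2.1.1.jar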
BTW, Kylin 1.6 is quite old; try upgrading to a 2.x version.
Why not just try the method mentioned in https://issues.apache.org/jira/browse/KYLIN-2511? You'd better prepare the environment according to the v1.6 documentation. Better still, use the latest version of Kylin: it has more features and fixes many bugs.

Beam / DataFlow unexpected error ProtocolMessageEnum not implemented when using DataFlowRunner

When running my Beam pipeline locally, it all works as expected, but when trying to run it on the DataflowRunner I suddenly get the error below. Honestly, I don't even know where to start evaluating this, because the DataflowRunner seems to be a black box.
Jan 14, 2019 11:26:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 165 files. Enable logging at DEBUG level to see which files will be staged.
Exception in thread "main" java.lang.IncompatibleClassChangeError: Class org.apache.beam.model.pipeline.v1.RunnerApi$StandardPTransforms$Primitives does not implement the requested interface com.google.protobuf.ProtocolMessageEnum
at org.apache.beam.runners.core.construction.BeamUrns.getUrn(BeamUrns.java:27)
at org.apache.beam.runners.core.construction.PTransformTranslation.<clinit>(PTransformTranslation.java:58)
at org.apache.beam.runners.core.construction.UnconsumedReads$1.visitValue(UnconsumedReads.java:49)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:666)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$600(TransformHierarchy.java:311)
at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:245)
at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:458)
at org.apache.beam.runners.core.construction.UnconsumedReads.ensureAllReadsConsumed(UnconsumedReads.java:40)
at org.apache.beam.runners.dataflow.DataflowRunner.replaceTransforms(DataflowRunner.java:868)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:660)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:173)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:313)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:299)
at (my code: pipe.run().waitUntilFinish();)
Check the versions of Beam etc. and upgrade your dependencies where possible.
I had the same error, and after seeing you hit it too, I figured it must be a dependency conflict, as it didn't exist before.
I'm using Scio to deploy to Dataflow and just referenced what they're using: https://github.com/spotify/scio/blob/v0.7.1/build.sbt
I also updated guava and protobuf.
I know you're using Java, but try updating Beam to 2.9.0, and maybe guava and protobuf too.
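One way to confirm the conflict, sketched here with Maven (the coordinates are an assumption; adjust for your build tool):
# List which protobuf versions end up on the classpath
mvn dependency:tree -Dincludes=com.google.protobuf:protobuf-java
# If more than one version appears, pin Beam to a single release (e.g. 2.9.0)
# and let it pull its own protobuf instead of overriding the version yourself.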

Internal error while connecting to hadoop dfs

I built an Eclipse plugin for hadoop-2.3.0. The Bundle-ClassPath is:
Bundle-classpath: classes/,
lib/hadoop-mapreduce-client-core-${hadoop.version}.jar,
lib/hadoop-mapreduce-client-common-${hadoop.version}.jar,
lib/hadoop-mapreduce-client-jobclient-${hadoop.version}.jar,
lib/hadoop-auth-${hadoop.version}.jar,
lib/hadoop-common-${hadoop.version}.jar,
lib/hadoop-hdfs-${hadoop.version}.jar,
lib/protobuf-java-${protobuf.version}.jar,
lib/log4j-${log4j.version}.jar,
lib/commons-cli-1.2.jar,
lib/commons-configuration-1.6.jar,
lib/commons-httpclient-3.1.jar,
lib/commons-lang-2.5.jar,
lib/commons-collections-${commons-collections.version}.jar,
lib/jackson-core-asl-1.8.8.jar,
lib/jackson-mapper-asl-1.8.8.jar,
lib/slf4j-log4j12-1.7.5.jar,
lib/slf4j-api-1.7.5.jar,
lib/guava-${guava.version}.jar,
lib/netty-${netty.version}.jar
I added the built jar file hadoop-eclipse-plugin.jar to eclipse\plugins. I am using the Eclipse Kepler SR2 package. On trying to create an HDFS location and connect, it generates an internal error:
An internal error occurred during: "Map/Reduce location status updater". org/apache/commons/lang/StringUtils
What might have caused this error, and how can I resolve it? Any help is appreciated.
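The missing class (org/apache/commons/lang/StringUtils) lives in commons-lang-2.5.jar, which the Bundle-ClassPath above does list, so one hedged first check is whether that jar actually made it into the packaged plugin:
# A Bundle-ClassPath entry only helps if the jar is really inside the plugin
unzip -l eclipse/plugins/hadoop-eclipse-plugin.jar | grep commons-lang
# If it is missing, rebuild the plugin so the lib/ jars are bundled in.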

smslib configuration issue

I have been trying to configure smslib on my computer for two days now, and I always get the exception below while trying to execute the sample code (SendMessage) contained in the zip file:
log4j:WARN No appenders could be found for logger (smslib).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "Thread-3" java.lang.ExceptionInInitializerError
at org.smslib.modem.SerialModemDriver.connectPort(SerialModemDriver.java:69)
at org.smslib.modem.AModemDriver.connect(AModemDriver.java:114)
at org.smslib.modem.ModemGateway.startGateway(ModemGateway.java:189)
at org.smslib.Service$1Starter.run(Service.java:276)
Caused by: java.lang.RuntimeException: CommPortIdentifier class not found
at org.smslib.helper.CommPortIdentifier.<clinit>(CommPortIdentifier.java:76)
... 4 more
I have done everything asked on the smslib web site, read all the posts related to this error, and configured the JAVA_HOME path, but I still get the same error.
I am working on Windows 7, with Eclipse Juno and JDK 7.
Can someone please help me fix this issue?
And one more thing: is there another lib we can use instead of smslib?
Thanks
It's fine now. I guess it was due to the fact that my Eclipse was configured with the JRE path instead of the JDK path.
I put the required files in the JRE's folders and it works fine.
Thank you very much!
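For reference, smslib's serial support relies on a native comm library such as RXTX, and its files have to sit in the Java runtime Eclipse actually launches with. A sketch of the usual Windows placement (assuming RXTX and a default JDK layout):
copy RXTXcomm.jar "%JAVA_HOME%\jre\lib\ext"
copy rxtxSerial.dll "%JAVA_HOME%\jre\bin"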

HPCC/HDFS Connector

Does anyone know about the HPCC/HDFS connector? We are using both HPCC and Hadoop. There is a utility (the HPCC/HDFS connector) developed by HPCC Systems that allows an HPCC cluster to access HDFS data.
I have installed the connector, but when I run the program to access data from HDFS, it gives an error saying libhdfs.so.0 doesn't exist.
I tried to build libhdfs.so using the command
ant compile-libhdfs -Dlibhdfs=1
but it gives the error
target "compile-libhdfs" does not exist in the project "hadoop"
I tried one more command:
ant compile-c++-libhdfs -Dlibhdfs=1
and it gives the error
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
[get] Error getting http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
to /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
BUILD FAILED java.net.ConnectException: Connection timed out
Any suggestion would be a great help.
Chhaya, you might not need to build libhdfs.so; depending on how you installed Hadoop, you might already have it.
Check for HADOOP_LOCATION/c++/Linux-<arch>/lib/libhdfs.so, where HADOOP_LOCATION is your Hadoop install location and <arch> is the machine's architecture (i386-32 or amd64-64).
Once you locate the lib, make sure the H2H connector is configured correctly (see page 4 here).
It's just a matter of updating the HADOOP_LOCATION var in the config file:
/opt/HPCCSystems/hdfsconnector.conf
Good luck.
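Putting those two steps together, a minimal sketch (the Hadoop path is taken from the build output above; substitute your own):
# Locate the prebuilt library inside the Hadoop install
find /home/hadoop/hadoop-0.20.203.0 -name 'libhdfs.so*'
# Then point the connector at that install in /opt/HPCCSystems/hdfsconnector.conf:
HADOOP_LOCATION=/home/hadoop/hadoop-0.20.203.0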
