Unable to run Hadoop programs after HADOOP-7682 patch installation

I am new to Hadoop. I am using Windows 7 and Cygwin to work with Hadoop 1.2.1 on a single-node setup. I noticed that the TaskTracker was not starting whenever I ran a MapReduce script, so I applied the HADOOP-7682 patch from https://github.com/congainc/patch-hadoop_7682-1.0.x-win to solve the problem.
I added the patch jar to the lib folder and also modified the core-site.xml file. Now the TaskTracker starts. However, if I now run any program that uses MapReduce, for example this Mahout clustering command (or any other MapReduce job):
$MAHOUT_HOME/bin/mahout org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
I get the error below:
"Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException"
Here is the full output:
$ bin/mahout org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
hadoop binary is not in PATH,HADOOP_HOME/bin,HADOOP_PREFIX/bin, running locally
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/Mahout/trunk/examples/target/mahout-examples-0.9-SNAPSHOT-job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/Mahout/trunk/examples/target/dependency/slf4j-jcl-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.JCLLoggerFactory]
Aug 27, 2013 5:53:44 PM org.slf4j.impl.JCLLoggerAdapter warn
WARNING: No org.apache.mahout.clustering.syntheticcontrol.kmeans.Job.props found on classpath, will use command-line arguments only
Aug 27, 2013 5:53:45 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Running with default arguments
Aug 27, 2013 5:53:53 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Preparing Input
Aug 27, 2013 5:53:55 PM org.apache.hadoop.mapred.JobClient copyAndConfigureFiles
WARNING: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
Aug 27, 2013 5:53:55 PM org.apache.hadoop.mapred.JobClient$2 run
INFO: Cleaning up the staging area hdfs://localhost:9000/tmp/hadoop-USER/mapred/staging/USER/.staging/job_201308271750_0001
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException: com.conga.services.hadoop.patch.HADOOP_7682.WinLocalFileSystem
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:857)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1440)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:234)
at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1230)
at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1206)
at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1178)
at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:864)
at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:734)
at org.apache.hadoop.mapred.JobClient.access$400(JobClient.java:179)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:951)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
at org.apache.mahout.clustering.conversion.InputDriver.runJob(InputDriver.java:108)
at org.apache.mahout.clustering.syntheticcontrol.kmeans.Job.run(Job.java:130)
at org.apache.mahout.clustering.syntheticcontrol.kmeans.Job.main(Job.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:195)
Caused by: java.lang.ClassNotFoundException: com.conga.services.hadoop.patch.HADOOP_7682.WinLocalFileSystem
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:855)
... 29 more
Am I missing something here?
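For context, the "hadoop binary is not in PATH ... running locally" line means the Mahout launcher built its own local classpath, so a jar placed only in Hadoop's lib folder is not necessarily visible to the JVM that throws the ClassNotFoundException. As a hedged sketch (the property name and class come from the patch's GitHub README, not from stock Hadoop), the core-site.xml entry the patch expects looks roughly like:

<!-- core-site.xml excerpt, as the HADOOP-7682 patch README describes it -->
<property>
  <name>fs.file.impl</name>
  <value>com.conga.services.hadoop.patch.HADOOP_7682.WinLocalFileSystem</value>
</property>

With that mapping in place, every client JVM that reads this configuration also needs the patch jar on its classpath, e.g. via export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/path/to/patch-hadoop_7682-1.0.x-win.jar (the path here is illustrative).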

Related

Phoenix-5.0.0-HBase-2.0 installation problem

As described in the installation guide, I copied the file phoenix-5.0.0-HBase-2.0-server.jar into the HBase library path /opt/hbase/lib/. HBase then crashes at startup; if I remove the jar, HBase starts and runs fine again. What is the problem?
I am using this Hadoop version:
$ hadoop version
Hadoop 3.1.3
Source code repository https://gitbox.apache.org/repos/asf/hadoop.git -r ba631c436b806728f8ec2f54ab1e289526c90579
Compiled by ztang on 2019-09-12T02:47Z
Compiled with protoc 2.5.0
From source with checksum ec785077c385118ac91aadde5ec9799
This command was run using /opt/hadoop/share/hadoop/common/hadoop-common-3.1.3.jar
HBase version:
hbase(main):001:0> version
2.0.6, rd65cccb5fda039217954a558c65bda423e0d6df3, Wed Aug 14 15:44:48 UTC 2019
Took 0.0003 seconds
Error details in log file /opt/hbase/logs/hbase-master.log:
2020-06-06 14:54:48,454 ERROR [main] regionserver.HRegionServer: Failed construction RegionServer
java.lang.AbstractMethodError: org.apache.phoenix.trace.PhoenixMetricsSink.init(Lorg/apache/commons/configuration/SubsetConfiguration;)V
at org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:199)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(MetricsSystemImpl.java:529)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:501)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:480)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:189)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:164)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:54)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
at org.apache.hadoop.hbase.metrics.BaseSourceImpl$DefaultMetricsSystemInitializer.init(BaseSourceImpl.java:54)
at org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(BaseSourceImpl.java:116)
at org.apache.hadoop.hbase.io.MetricsIOSourceImpl.<init>(MetricsIOSourceImpl.java:46)
at org.apache.hadoop.hbase.io.MetricsIOSourceImpl.<init>(MetricsIOSourceImpl.java:38)
at org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.createIO(MetricsRegionServerSourceFactoryImpl.java:84)
at org.apache.hadoop.hbase.io.MetricsIO.<init>(MetricsIO.java:35)
at org.apache.hadoop.hbase.io.hfile.HFile.<clinit>(HFile.java:195)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:536)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:477)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3050)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:236)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3068)
2020-06-06 14:54:48,456 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster.
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3057)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:236)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3068)
Caused by: java.lang.AbstractMethodError: org.apache.phoenix.trace.PhoenixMetricsSink.init(Lorg/apache/commons/configuration/SubsetConfiguration;)V
at org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:199)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(MetricsSystemImpl.java:529)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:501)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:480)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:189)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:164)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:54)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
at org.apache.hadoop.hbase.metrics.BaseSourceImpl$DefaultMetricsSystemInitializer.init(BaseSourceImpl.java:54)
at org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(BaseSourceImpl.java:116)
at org.apache.hadoop.hbase.io.MetricsIOSourceImpl.<init>(MetricsIOSourceImpl.java:46)
at org.apache.hadoop.hbase.io.MetricsIOSourceImpl.<init>(MetricsIOSourceImpl.java:38)
at org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.createIO(MetricsRegionServerSourceFactoryImpl.java:84)
at org.apache.hadoop.hbase.io.MetricsIO.<init>(MetricsIO.java:35)
at org.apache.hadoop.hbase.io.hfile.HFile.<clinit>(HFile.java:195)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:536)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:477)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3050)
... 5 more
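The AbstractMethodError above usually means the sink class was compiled against a different commons-configuration API than the one the running metrics2 framework passes in. As a diagnostic sketch (not from the original post; the jar path matches the one quoted above), javap can show which SubsetConfiguration the Phoenix sink actually references:

# Check which commons-configuration package PhoenixMetricsSink was compiled against
javap -classpath /opt/hbase/lib/phoenix-5.0.0-HBase-2.0-server.jar \
      org.apache.phoenix.trace.PhoenixMetricsSink | grep -i Configuration

If the signature it prints does not match the org.apache.commons.configuration variant named in the stack trace, the server jar and this Hadoop/HBase combination are binary-incompatible.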

Import table exception in Apache HBase 1.2.6 from an older version

About two months ago, I exported a table from HBase 0.98.14. I have now set up version 1.2.6, which is working well, and I need to import that table. I used the following command to do the job:
hbase -Dhbase.import.version=0.98 org.apache.hadoop.hbase.mapreduce.Import blogs_webpage /tmp/blogs/tb2_webpage
After creating the table in HBase (with the same schema as the old table), the following exception occurred (the same exception happens even when I run it without the version info):
java.io.FileNotFoundException: /usr/local/hadoop/logs/userlogs/application_1512560494315_0001/container_1512560494315_0001_01_000001 (Is a directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.hadoop.yarn.ContainerLogAppender.activateOptions(ContainerLogAppender.java:55)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.apache.log4j.Logger.getLogger(Logger.java:104)
at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:262)
at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1025)
at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:844)
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:541)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:292)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:269)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.apache.hadoop.service.AbstractService.<clinit>(AbstractService.java:43)
Dec 06, 2017 4:42:11 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class
But if I repeat the same steps for a new table with sample content, created in the same version, the import succeeds. Where is the problem?
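One hedged observation (not from the original post): Import is a Tool-style program, and the HBase reference guide shows the generic -D options after the class name rather than before it, roughly:

hbase org.apache.hadoop.hbase.mapreduce.Import \
    -Dhbase.import.version=0.98 blogs_webpage /tmp/blogs/tb2_webpage

If the property sits where the hbase wrapper script does not forward it to the tool's argument parsing, the importer may never apply the old-version format translation at all.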

Getting error in Talend job with tHiveConnection component

I am getting this error while executing a Talend job that uses tHiveConnection.
I am using Java 1.7, Hadoop 2.2, and Talend Open Studio for Big Data 6.0.
Please help me identify this error.
The error details are below:
Starting job CH04_01_HIVE_PROCESSING_HASH_TAGS at 09:15 09/08/2015.
[statistics] connecting to socket on port 3662
[statistics] connected
Exception in component tHiveConnection_1
java.lang.ClassNotFoundException: org.apache.hadoop.hive.jdbc.HiveDriver
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:190)
at packt_big_data.ch04_01_hive_processing_hash_tags_0_1.CH04_01_HIVE_PROCESSING_HASH_TAGS.tHiveConnection_1Process(CH04_01_HIVE_PROCESSING_HASH_TAGS.java:689)
at packt_big_data.ch04_01_hive_processing_hash_tags_0_1.CH04_01_HIVE_PROCESSING_HASH_TAGS.runJobInTOS(CH04_01_HIVE_PROCESSING_HASH_TAGS.java:2084)
at packt_big_data.ch04_01_hive_processing_hash_tags_0_1.CH04_01_HIVE_PROCESSING_HASH_TAGS.main(CH04_01_HIVE_PROCESSING_HASH_TAGS.java:1833)
[statistics] disconnected
Job CH04_01_HIVE_PROCESSING_HASH_TAGS ended at 09:15 09/08/2015. [exit code=1]
org.apache.hadoop.hive.jdbc.HiveDriver was the driver used to connect to the original HiveServer.
With HiveServer2, use org.apache.hive.jdbc.HiveDriver -- and reading some documentation could do no harm, especially the comment that reads "we need the following jars in the classpath".
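A quick way to sanity-check the driver and URL outside Talend (a sketch; host, port, and database are placeholders) is beeline, which ships with Hive:

# HiveServer2: note the jdbc:hive2:// URL scheme and the org.apache.hive.jdbc package
beeline -d org.apache.hive.jdbc.HiveDriver -u "jdbc:hive2://localhost:10000/default"

The original HiveServer used jdbc:hive:// URLs with org.apache.hadoop.hive.jdbc.HiveDriver; HiveServer2 uses jdbc:hive2:// with org.apache.hive.jdbc.HiveDriver, and that driver jar (plus its dependencies) must be on the Talend job's classpath.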

ClassNotFoundException in Hazelcast plugin in Openfire in a cluster environment

I have installed Openfire on 3 EC2 instances and added the Hazelcast plugin to all three. All three nodes work fine, but sometimes I get the exception below in the stderr log. Please help me identify why this exception occurs and how I can avoid these error logs.
Apr 15, 2015 12:16:18 PM com.hazelcast.spi.OperationService
SEVERE: [172.31.22.121]:5701 [openfire] java.lang.ClassNotFoundException: com.jivesoftware.util.cache.ClusteredCacheFactory$CallableTask
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.jivesoftware.util.cache.ClusteredCacheFactory$CallableTask
at com.hazelcast.nio.serialization.DefaultSerializers$ObjectSerializer.read(DefaultSerializers.java:190)
at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:40)
at com.hazelcast.nio.serialization.SerializationServiceImpl.readObject(SerializationServiceImpl.java:276)
at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:431)
at com.hazelcast.executor.BaseCallableTaskOperation.readInternal(BaseCallableTaskOperation.java:91)
at com.hazelcast.spi.Operation.readData(Operation.java:295)
at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:105)
at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:36)
at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:59)
at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:218)
at com.hazelcast.spi.impl.NodeEngineImpl.toObject(NodeEngineImpl.java:156)
at com.hazelcast.spi.impl.OperationServiceImpl$RemoteOperationProcessor.run(OperationServiceImpl.java:724)
at com.hazelcast.util.executor.ManagedExecutorService$Worker.run(ManagedExecutorService.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at com.hazelcast.util.executor.PoolExecutorThreadFactory$ManagedThread.run(PoolExecutorThreadFactory.java:59)
Caused by: java.lang.ClassNotFoundException: com.jivesoftware.util.cache.ClusteredCacheFactory$CallableTask
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at com.hazelcast.nio.ClassLoaderUtil.loadClass(ClassLoaderUtil.java:83)
at com.hazelcast.nio.IOUtil$1.resolveClass(IOUtil.java:77)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1612)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at com.hazelcast.nio.serialization.DefaultSerializers$ObjectSerializer.read(DefaultSerializers.java:185)
... 16 more
It seems you have to deploy some Openfire jar inside your Hazelcast cluster. Hazelcast does not offer remote classloading for security reasons (otherwise anybody could inject anything).
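As a rough sketch of that idea (the jar name and path below are hypothetical, not from the original post): whichever JVM deserializes the clustered task needs the class on its classpath, for example:

# Hypothetical: expose the Openfire/Hazelcast plugin classes to the deserializing JVM
export CLASSPATH="$CLASSPATH:/opt/openfire/plugins/hazelcast/lib/plugin-hazelcast.jar"

In a plain Openfire-to-Openfire cluster the plugin normally provides this on every node, so intermittent ClassNotFoundExceptions often point at a node where the plugin was not yet loaded (e.g. during startup) when a clustered task arrived.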

Flume -- Could not find the main class: org.apache.flume.tools.GetJavaProperty

I am using Cloudera CDH 4.4. When I ran the Flume command
bin/flume-ng agent -n agentA -f conf/MultipleFlumes.properties -Dflume.root.logger=INFO,console
I got this error:
[cloudera#localhost Flume]$ bin/flume-ng agent -n agentA -f conf/MultipleFlumes.properties -Dflume.root.logger=INFO,console
Warning: No configuration directory set! Use --conf <dir> to override.
Info: Including Hadoop libraries found via (/usr/bin/hadoop) for HDFS access
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flume/tools/GetJavaProperty
Caused by: java.lang.ClassNotFoundException: org.apache.flume.tools.GetJavaProperty
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.apache.flume.tools.GetJavaProperty. Program will exit.
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.6.1.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar from classpath
Info: Excluding /usr/lib/hadoop-0.20-mapreduce/lib/slf4j-api-1.6.1.jar from classpath
Info: Including HBASE libraries found via (/usr/bin/hbase) for HBASE access
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flume/tools/GetJavaProperty
Caused by: java.lang.ClassNotFoundException: org.apache.flume.tools.GetJavaProperty
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.apache.flume.tools.GetJavaProperty. Program will exit.
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flume/node/Application
Caused by: java.lang.ClassNotFoundException: org.apache.flume.node.Application
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.apache.flume.node.Application. Program will exit.
I tried to echo HADOOP_HOME but it returned blank. What is the problem with the above command?
Please guide me.
First of all, add the -c parameter to the command like this:
bin/flume-ng agent -n agentA -c conf -f conf/MultipleFlumes.properties -Dflume.root.logger=INFO,console
Adding that parameter does not resolve the issue by itself, but if you don't include it you get another error because of the log4j configuration file.
As for your problem, check whether FLUME_HOME is defined, and if it is, unset it with
unset FLUME_HOME
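Putting it together (a sketch; the agent name and file paths are the ones from the question):

# Check what the launcher will see, clear a stale FLUME_HOME, then start the agent
echo "FLUME_HOME=${FLUME_HOME:-<unset>}"
unset FLUME_HOME
bin/flume-ng agent -n agentA -c conf -f conf/MultipleFlumes.properties \
    -Dflume.root.logger=INFO,console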
