I have installed Hadoop 2.6.0, HBase 0.99.0, Hive 1.2 and Kylin 1.5.0.
I have set up all of the above in standalone mode. When Kylin starts it checks for Hadoop, HBase and Hive early on. Everything is installed, but when I start Kylin it reports that the HBase common lib was not found.
Following is the log of Apache Kylin.
KYLIN_HOME is set to bin/../
16/03/24 18:02:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
kylin.security.profile is set to testing
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/kunalgupta/Desktop/kunal/Desktop/Backup/Kunal/Downloads/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/kunalgupta/Downloads/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/kunalgupta/Desktop/kunal/Desktop/Backup/Kunal/Downloads/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/kunalgupta/Downloads/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Logging initialized using configuration in jar:file:/Users/kunalgupta/Desktop/kunal/Desktop/Backup/Kunal/Downloads/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
cut: illegal option -- -
usage: cut -b list [-n] [file ...]
cut -c list [file ...]
cut -f list [-s] [-d delim] [file ...]
HIVE_CONF is set to: /Users/kunalgupta/Desktop/kunal/Desktop/Backup/Kunal/Downloads/apache-hive-1.2.1-bin/conf/, use it to locate hive configurations.
HCAT_HOME is set to: /Users/kunalgupta/Desktop/kunal/Desktop/Backup/Kunal/Downloads/apache-hive-1.2.1-bin/hcatalog, use it to find hcatalog path:
usage: dirname path
find: -printf: unknown primary or operator
hive dependency: /Users/kunalgupta/Desktop/kunal/Desktop/Backup/Kunal/Downloads/apache-hive-1.2.1-bin/conf/::/Users/kunalgupta/Desktop/kunal/Desktop/Backup/Kunal/Downloads/apache-hive-1.2.1-bin/hcatalog/share/hcatalog/hive-hcatalog-core-1.2.1.jar
cut: illegal option -- -
usage: cut -b list [-n] [file ...]
cut -c list [file ...]
cut -f list [-s] [-d delim] [file ...]
hbase-common lib not found
Could somebody please help me out?
The issue you encountered is that the cut command on macOS does not support the "--output-delimiter" option. I encountered the same error while installing kylin-1.5.1.
The easy way to solve it is to use the GNU binaries in your shell instead of the macOS ones.
Use brew to install coreutils (I changed all the commonly used shell utils to their GNU versions). Use the command below for that:
brew install coreutils findutils gnu-tar gnu-sed gawk gnutls gnu-indent gnu-getopt --default-names
Now, to make the shell use these instead of the macOS binaries, add the path to these utils to your shell profile file:
vi ~/.profile
and add the following line to this file:
PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
After this, open a new terminal window and run
echo $PATH
The result should have the path we set in the previous step at the front (prepended).
Now start Kylin; it should work smoothly.
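If you want to double-check that the shell is now picking up the GNU version of cut before retrying Kylin, a quick sanity check (assuming coreutils went into the default Homebrew gnubin prefix) is:
which cut          # should print /usr/local/opt/coreutils/libexec/gnubin/cut
cut --version      # should report "cut (GNU coreutils) ..."
echo "a,b" | cut -d, -f1,2 --output-delimiter=';'   # the option Kylin's scripts need; prints a;b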
Some reference links that helped me:
Mac forum link
Installation guide from apple.se
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera/
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH
export KYLIN_HOME=/usr/local/kylin
export CDH_HOME=/opt/cloudera/parcels/CDH
export SPARK_HOME=${CDH_HOME}/lib/spark
export HBASE_HOME=${CDH_HOME}/lib/hbase
export HIVE_HOME=${CDH_HOME}/lib/hive
export HADOOP_HOME=${CDH_HOME}/lib/hadoop
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
source /etc/profile
Having added ${HBASE_HOME} to /etc/profile, look at how the hbase launcher script builds its classpath:
cat /opt/cloudera/parcels/CDH/lib/hbase/bin/hbase
if [ -n "${shaded_jar}" ] && [ -f "${shaded_jar}" ]; then
  CLASSPATH="${CLASSPATH}:${shaded_jar}"
# fall through to grabbing all the lib jars and hope we're in the omnibus tarball
#
# N.B. shell specifically can't rely on the shaded artifacts because RSGroups is only
# available as non-shaded
#
# N.B. pe and ltt can't easily rely on shaded artifacts because they live in hbase-mapreduce:test-jar
# and need some other jars that haven't been relocated. Currently enumerating that list
# is too hard to be worth it.
#
else
  for f in $HBASE_HOME/lib/*.jar; do
    CLASSPATH=${CLASSPATH}:$f;
  done
  # make it easier to check for shaded/not later on.
  shaded_jar=""
fi

for f in "${HBASE_HOME}"/lib/client-facing-thirdparty/*.jar; do
  if [[ ! "${f}" =~ ^.*/htrace-core-3.*\.jar$ ]] && \
     [ "${f}" != "htrace-core.jar$" ] && \
     [[ ! "${f}" =~ ^.*/slf4j-log4j.*$ ]]; then
    CLASSPATH="${CLASSPATH}:${f}"
  fi
done
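As a quick diagnostic for the original "hbase-common lib not found" error, you can check whether an hbase-common jar actually shows up on the classpath the hbase script builds. This is just a sketch; the grep pattern assumes the standard jar naming:
hbase classpath | tr ':' '\n' | grep -i hbase-common
# if nothing is printed, Kylin's startup check has nothing to find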
I use CentOS 7, apache-kylin-3.0.0-alpha2-bin-hadoop3 and HBase 2.1.4.
I found the solution in these links.
https://issues.apache.org/jira/browse/KYLIN-3564
http://92072234.wiz03.com/share/s/2i1O8Q1L1k042IDoOy3h7BgH2K4G6J2SoQv42Xc4b01xpCrj
The second link solved my problem.
Edit the hbase file
$HBASE_HOME/bin/hbase
change
CLASSPATH=${CLASSPATH}:$JAVA_HOME/lib/tools.jar
to
CLASSPATH=${CLASSPATH}:$JAVA_HOME/lib/tools.jar:$HBASE_HOME/lib/*
(use the full path to your HBase lib directory if $HBASE_HOME is not set)
I solved it as follows:
1. export HBASE_CLASSPATH=/opt/cloudera/parcels/CDH-6.2.0-/hbase-common-2.0.0-cdh6.2.0.jar
2. then start kylin again.
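If your parcel version differs, a more general form of the same workaround (the ls pattern is an assumption about the CDH layout under $HBASE_HOME/lib) is:
export HBASE_CLASSPATH=$(ls ${HBASE_HOME}/lib/hbase-common-*.jar | head -n 1)
echo $HBASE_CLASSPATH   # verify it resolved to a real jar before restarting Kylin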
Are you running on Windows? Sorry, Kylin only runs on Linux as of version 1.5.
Related
I was trying to change M2_HOME in bash_profile to configure a new version of Maven. Earlier it was set to 2.2.1; now I'm trying to change the path to 3.3.3. This is my bash_profile:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home
export M2_HOME=/usr/local/apache-maven-3.3.3
#export M2_HOME=/usr/local/apache-maven-2.2.1
export PATH=$PATH:$JAVA_HOME/bin
export PATH=$PATH:$M2_HOME/bin
export CATALINA_HOME=/Library/Tomcat/apache-tomcat-7.0.68
When I run source ~/.bash_profile and then mvn -version, I get the following error:
$mvn -version
Error: Could not find or load main class org.codehaus.classworlds.Launcher
Any suggestions to solve this please?
PS: I'm on OS X El Capitan
A simpler alternative is to set up some bash aliases. I have added the following to my ~/.bash_profile for switching between maven versions and Java versions:
export BASE_PATH=$PATH
#alias java5="export JAVA_HOME=`/usr/libexec/java_home -v1.5 -a x86_64 -d64`"
alias java6="export JAVA_HOME=`/usr/libexec/java_home -v1.6`"
alias java7="export JAVA_HOME=`/usr/libexec/java_home -v1.7`"
alias java8="export JAVA_HOME=`/usr/libexec/java_home -v1.8`"
# maven versions
alias m30="PATH=~/tools/apache-maven-3.0.5/bin:$BASE_PATH"
alias m31="PATH=~/tools/apache-maven-3.1.1/bin:$BASE_PATH"
alias m32="PATH=~/tools/apache-maven-3.2.5/bin:$BASE_PATH"
alias m33="PATH=~/tools/apache-maven-3.3.9/bin:$BASE_PATH"
Note the use of /usr/libexec/java_home for setting up JAVA_HOME which is similar to linux alternatives for switching java versions.
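If you are not sure which JDKs are actually installed, /usr/libexec/java_home can list them for you:
/usr/libexec/java_home -V     # lists all installed JVMs with their paths
/usr/libexec/java_home -v1.8  # prints the home of the matching 1.8 JDK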
So, in a new terminal session the following:
[steve@steves-mbp ~]$ java8
[steve@steves-mbp ~]$ m33
[steve@steves-mbp ~]$
sets me up to use maven 3.3 and Java 8.
Please also take into account the comment by @khmarbaise regarding M2_HOME, and forget that this environment variable exists.
Adding a new symlink for mvn3 worked for me:
ln -s /usr/local/apache-maven-3.3.3/bin/mvn /usr/local/User/bin/mvn3
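Assuming /usr/local/User/bin is on your PATH, a quick check that the symlink resolves to the intended Maven:
ls -l /usr/local/User/bin/mvn3   # should point at /usr/local/apache-maven-3.3.3/bin/mvn
mvn3 -version                    # should report Apache Maven 3.3.3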
It is common, particularly in an environment where a product portfolio is quite large, to have to support multiple Maven versions. Similar to having to support multiple Java versions, you can create a script that will track and modify your environment accordingly. I use a Mac, so the notion of a jEnv type of mechanism is what I use for Java. On Windows, a similar concept can be used, although it would take some coding to properly adjust the path settings.
Here's a /usr/local/bin/mvnEnv bash script that I use to quickly change my Maven runtime. It's not nearly as comprehensive as jEnv, but it works for me so perhaps it can work for you. Adjust the various parameters to conform to your various Maven installs and update your PATH appropriately, if on Windows. (I know you're using a Mac, so the Windows comment is for others that may have this issue on Windows.)
Just update your ~/.bash_profile to call this script with the appropriate parameters if you need a default. Then, when you need a different version of Maven, you can just execute the script like
mvnEnv v33
And voila, you've just quickly changed your Maven version! If you don't know what versions of Maven are supported, simply execute the mvnEnv command and a list of valid versions will be printed. You will, however, have to add any new versions of Maven to the script for the new version to be available.
#!/bin/bash
echo "Setting the maven implementation version"
v22=/usr/local/Cellar/maven2/2.2.1/libexec/bin/mvn
v30=/usr/local/Cellar/maven30/3.0.5/libexec/bin/mvn
v31=/usr/local/Cellar/maven31/3.1.1/libexec/bin/mvn
v32=/usr/local/Cellar/maven32/3.2.5/libexec/bin/mvn
v33=/usr/local/Cellar/maven/3.3.9/libexec/bin/mvn

if [ -e /usr/local/bin/mvn ]
then
  echo "Remove the maven soft link."
  sudo rm /usr/local/bin/mvn
else
  echo "Maven soft link could not be found."
fi

maven=$v22

if [ $# == 0 ] || [ -z "${!1// }" ]
then
  echo "No Arguments supplied, using default $maven"
  echo "Available versions:"
  echo " v22 = 2.2.1"
  echo " v30 = 3.0.5"
  echo " v31 = 3.1.1"
  echo " v32 = 3.2.5"
  echo " v33 = 3.3.9"
elif [ -e ${!1} ]
then
  echo "Setting maven to use ${!1} via $1"
  maven=${!1}
else
  echo "Using the default maven setting, provided argument [$1] is not recognized."
fi

echo "Creating new soft link to $maven"
sudo ln -s $maven /usr/local/bin/mvn
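For this to be callable as mvnEnv, the script also has to be executable and on the PATH, for example:
sudo chmod +x /usr/local/bin/mvnEnv
mvnEnv v31       # switch the soft link to Maven 3.1.1
mvn -version     # confirm the active Maven is now the chosen one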
We have a linux hadoop cluster but for a variety of reasons have some windows clients connecting and pushing data to the linux cluster.
In Hadoop 1 we were able to run Hadoop via Cygwin.
However, in Hadoop 2, as stated on the website, Cygwin is not required and not supported.
Questions
What exactly has changed? Why would a client (only) not run under Cygwin, or could it? Apart from paths, what other considerations are at play?
Apart from the property below for job submissions, is there anything else that needs to be considered for a Windows client interacting with a Linux cluster?
conf.set("mapreduce.app-submission.cross-platform", "true");
Extracting hadoop-2.6.0-cdh5.5.2 and running it from Cygwin with the right configurations under $HADOOP_HOME/etc yields classpath-formation and class-not-found issues. For instance, the following run:
hdfs dfs -ls
Error: Could not find or load main class org.apache.hadoop.fs.FsShell
Looking at the classpath, it seems to contain Cygwin paths, so I attempted to convert them to Windows paths so that the jars can be looked up:
In $HADOOP_HOME/etc/hdfs.sh, locate the dfs command and change it to
elif [ "$COMMAND" = "dfs" ] ; then
  if $cygwin; then
    CLASSPATH=`cygpath -p -w "$CLASSPATH"`
  fi
  CLASS=org.apache.hadoop.fs.FsShell
This results in the following:
16/04/07 16:01:05 ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:378)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:393)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:386)
at org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:438)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:484)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:362)
16/04/07 16:01:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Warning: fs.defaultFs is not set when running "ls" command.
Found 15 items
-ls: Fatal internal error
java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
at org.apache.hadoop.util.Shell.run(Shell.java:478)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:831)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:814)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1100)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:582)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getOwner(RawLocalFileSystem.java:565)
at org.apache.hadoop.fs.shell.Ls.adjustColumnWidths(Ls.java:139)
at org.apache.hadoop.fs.shell.Ls.processPaths(Ls.java:110)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.Ls.processPathArgument(Ls.java:98)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:305)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:362)
For the above, my question is: should I go further and try to fix this so that I can reuse my existing client .sh scripts, or should I just convert them to .bat?
The problem is that Cygwin needs to return Windows paths rather than Cygwin paths. Also, winutils.exe needs to be installed in the path, as described here.
Simply fix the scripts to return the actual Windows paths and turn off a few commands which don't run under Cygwin:
#!/bin/bash
# fix $HADOOP_HOME/bin/hdfs
sed -i -e "s/bin=/#bin=/g" $HADOOP_HOME/bin/hdfs
sed -i -e "s#DEFAULT_LIBEXEC_DIR=\"\$bin\"/../libexec#DEFAULT_LIBEXEC_DIR=\"\$HADOOP_HOME\\\libexec\"#g" $HADOOP_HOME/bin/hdfs
sed -i "/export CLASSPATH=$CLASSPATH/i CLASSPATH=\`cygpath -p -w \"\$CLASSPATH\"\`" $HADOOP_HOME/bin/hdfs
# fix $HADOOP_HOME/libexec/hdfs-config.sh
sed -i -e "s/bin=/#bin=/g" $HADOOP_HOME/libexec/hdfs-config.sh
sed -i -e "s#DEFAULT_LIBEXEC_DIR=\"\$bin\"/../libexec#DEFAULT_LIBEXEC_DIR=\"\$HADOOP_HOME\\\libexec\"#g" $HADOOP_HOME/libexec/hdfs-config.sh
# fix $HADOOP_HOME/libexec/hadoop-config.sh
sed -i "/HADOOP_DEFAULT_PREFIX=/a HADOOP_PREFIX=" $HADOOP_HOME/libexec/hadoop-config.sh
sed -i "/export HADOOP_PREFIX/i HADOOP_PREFIX=\`cygpath -p -w \"\$HADOOP_PREFIX\"\`" $HADOOP_HOME/libexec/hadoop-config.sh
# fix $HADOOP_HOME/bin/hadoop
sed -i -e "s/bin=/#bin=/g" $HADOOP_HOME/bin/hadoop
sed -i -e "s#DEFAULT_LIBEXEC_DIR=\"\$bin\"/../libexec#DEFAULT_LIBEXEC_DIR=\"\$HADOOP_HOME\\\libexec\"#g" $HADOOP_HOME/bin/hadoop
sed -i "/export CLASSPATH=$CLASSPATH/i CLASSPATH=\`cygpath -p -w \"\$CLASSPATH\"\`" $HADOOP_HOME/bin/hadoop
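The winutils.exe mentioned above is not part of the Apache tarball; it has to be dropped into the Hadoop bin directory yourself. A rough sketch of one common approach (the download path is just an example, and passing hadoop.home.dir as a system property keeps the Cygwin scripts working with a POSIX-style $HADOOP_HOME):
# copy a winutils.exe built for your Hadoop version into $HADOOP_HOME/bin
cp /cygdrive/c/Downloads/winutils.exe "$HADOOP_HOME/bin/winutils.exe"
# give the JVM a Windows-style home so it can locate bin\winutils.exe
export HADOOP_OPTS="-Dhadoop.home.dir=$(cygpath -w "$HADOOP_HOME") $HADOOP_OPTS"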
I'm trying to set up Hadoop 2.4.1 on my machine using Cygwin, and I'm stuck when I try to run
$ hdfs namenode -format
which gives me
Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode
I think it's due to an undefined environment variable, since I can run
$ hadoop version
without a problem. I've defined the following:
JAVA_HOME
HADOOP_HOME
HADOOP_INSTALL
as well as adding the Hadoop \bin and \sbin (and Cygwin's \bin) to the Path. Am I missing an environment variable that I need to define?
OK, it looks like the file hadoop\bin\hdfs also has to be changed like the hadoop\bin\hadoop file described here.
The end of the file must be changed from:
exec "$JAVA" -Dproc_$COMMAND $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
to
exec "$JAVA" -classpath "$(cygpath -pw "$CLASSPATH")" -Dproc_$COMMAND $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
I assume I'll have to make similar changes to the hadoop\bin\mapred and hadoop\bin\yarn when I get to using those files.
I already have Hadoop 1.2 running on my Ubuntu VM which is running on Windows 7 machine. I recently installed Pig 0.12.0 on my same Ubuntu VM. I have downloaded the pig-0.12.0.tar.gz from the apache website. I have all the variables such as JAVA_HOME, HADOOP_HOME, PIG_HOME variables set correctly. When I try to start pig in local mode this is what I see:
chandeln@ubuntu:~$ pig -x local
pig: invalid option -- 'x'
usage: pig
chandeln@ubuntu:~$ echo $JAVA_HOME
/usr/lib/jvm/java7
chandeln@ubuntu:~$ echo $HADOOP_HOME
/usr/local/hadoop
chandeln@ubuntu:~$ echo $PIG_HOME
/usr/local/pig
chandeln@ubuntu:~$ which pig
/usr/games/pig
chandeln@ubuntu:~$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/jvm/java7/bin:/usr/local/hadoop/bin:/usr/local/pig/bin
Since I am not a Unix expert I am not sure if this is the problem, but the command which pig actually returns /usr/games/pig instead of /usr/local/pig. Is this the root cause of the problem?
Please guide.
I was able to fix the problem by changing the following line in my .bashrc. This gave precedence to the /usr/local/pig directory over /usr/games/pig:
BEFORE: export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$PIG_HOME/bin
AFTER: export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PIG_HOME/bin:$PATH
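After editing .bashrc, the fix can be confirmed in the same session:
source ~/.bashrc
which pig      # should now print /usr/local/pig/bin/pig instead of /usr/games/pig
pig -x local   # should start the Grunt shell in local mode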
I have built a Hadoop cluster in which one node acts as both master and slave and the other is a slave. Now I want to set up Flume on the master machine to collect all the logs of the cluster. However, when I try to install Flume from the tarball, I always get:
Error: Could not find or load main class org.apache.flume.node.Application
So please help me find the answer, or the best way to install Flume on my cluster.
Many thanks!
It is basically because of FLUME_HOME.
Try this command:
$ unset FLUME_HOME
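To see what it was pointing at (a stale value from an old install is usually the culprit) and to verify the fix afterwards:
echo "$FLUME_HOME"    # check the old value before unsetting it
bin/flume-ng version  # after the unset, this should resolve classes from the tarball's own lib dir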
I know it's been almost a year since this question, but I saw it!
When you set up your agent using sudo bin/flume-ng..., make sure to specify the file where the agent configuration is:
--conf-file flume_Agent.conf -> -f conf/flume_Agent.conf
This did the trick!
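Putting it together, a full launch command with the config file path spelled out might look like this (the agent name and file name are just examples):
bin/flume-ng agent \
  --name agent1 \
  --conf conf \
  --conf-file conf/flume_Agent.conf \
  -Dflume.root.logger=INFO,console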
It looks like you ran flume-ng from the /bin folder.
After building, Flume is in
/flume-ng-dist/target/apache-flume-1.5.0.1-bin/apache-flume-1.5.0.1-bin
Run flume-ng from there.
I suppose you are trying to run Flume from Cygwin on Windows? If that is the case, I had a similar issue. The problem might be with the flume-ng script.
Find the following line in bin/flume-ng:
$EXEC java $JAVA_OPTS $FLUME_JAVA_OPTS "${arr_java_props[@]}" -cp "$FLUME_CLASSPATH" \
  -Djava.library.path=$FLUME_JAVA_LIBRARY_PATH "$FLUME_APPLICATION_CLASS" $*
and replace it with this
$EXEC java $JAVA_OPTS $FLUME_JAVA_OPTS "${arr_java_props[@]}" -cp `cygpath -wp "$FLUME_CLASSPATH"` \
  -Djava.library.path=`cygpath -wp $FLUME_JAVA_LIBRARY_PATH` "$FLUME_APPLICATION_CLASS" $*
Notice that the paths have been converted to Windows directories. Java would not be able to find the library paths from the cygdrive paths, so we have to convert them to the correct Windows paths wherever applicable.
Maybe you are using the source files; you should first compile the source code and generate the binaries. Then, inside the binary files directory, you can execute: bin/flume-ng agent --conf ./conf/ -f conf/flume.conf -Dflume.root.logger=DEBUG,console -n agent1. You can follow all of this at: https://cwiki.apache.org/confluence/display/FLUME/Getting+Started
I got the same issue before; it's simply because FLUME_CLASSPATH is not set.
The best way to debug is to look at the java command being fired and make sure the Flume lib is included in the classpath (-cp).
In the following command it is looking for /lib/*, which is where the flume-ng-*.jar files should come from, but that is incorrect because there is nothing in /lib, in this line: -cp '/staging001/Flume/server/conf://lib/*:/lib/*'. It has to be ${FLUME_HOME}/lib.
/usr/lib/jvm/java-1.8.0-ibm-1.8.0.3.20-1jpp.1.el7_2.x86_64/jre/bin/java -Xms100m -Xmx500m $'-Dcom.sun.management.jmxremote\r' \
-Dflume.monitoring.type=http \
-Dflume.monitoring.port=34545 \
-cp '/staging001/Flume/server/conf://lib/*:/lib/*' \
-Djava.library.path= org.apache.flume.node.Application \
-f /staging001/Flume/server/conf/flume.conf -n client
So, if you look at the flume-ng script, there is the FLUME_CLASSPATH setup; if it is absent, it is set up based on FLUME_HOME:
# prepend $FLUME_HOME/lib jars to the specified classpath (if any)
if [ -n "${FLUME_CLASSPATH}" ] ; then
  FLUME_CLASSPATH="${FLUME_HOME}/lib/*:$FLUME_CLASSPATH"
else
  FLUME_CLASSPATH="${FLUME_HOME}/lib/*"
fi
So make sure either of those environment variables is set. With FLUME_HOME set (I'm using systemd):
Environment=FLUME_HOME=/staging001/Flume/server/
Here's the working java exec:
/usr/lib/jvm/java-1.8.0-ibm-1.8.0.3.20-1jpp.1.el7_2.x86_64/jre/bin/java -Xms100m -Xmx500m \
$'-Dcom.sun.management.jmxremote\r' \
-Dflume.monitoring.type=http \
-Dflume.monitoring.port=34545 \
-cp '/staging001/Flume/server/conf:/staging001/Flume/server/lib/*:/lib/*' \
-Djava.library.path= org.apache.flume.node.Application \
-f /staging001/Flume/server/conf/flume.conf -n client
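A quick way to confirm that FLUME_HOME points at a directory that really contains the Flume jars (same install path as above; adjust to your layout):
ls "$FLUME_HOME"/lib/flume-ng-*.jar | head   # should list flume-ng-core, flume-ng-node, flume-ng-sdk, ...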