Not able to set JAVA_HOME variable - hadoop

I am editing conf/hadoop-env.sh to define the JAVA_HOME variable, but I am getting these errors:
bin/hadoop: line 350: C:\Program: command not found
/bin/java: No such file or directoryes\Java\jdk1.6.0_26\bin
/bin/java: cannot execute: No such file or directoryk1.6.0_26\bin
I am using Cygwin to run Hadoop 1.2.1 on Windows.
My Java is installed in C:\Program Files\Java\jdk1.6.0_26\bin
I have set the environment variable and edited hadoop-env.sh this way:
export JAVA_HOME=C:\ProgramFiles\Java\jdk1.6.0_26\bin
When I echo JAVA_HOME, it prints only C:\Program.
It is still not working for me. Can anyone suggest what to do? I have tried everything I have read on the internet.
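For reference, a minimal sketch of the kind of value JAVA_HOME typically needs under Cygwin: a quoted POSIX-style path to the JDK root rather than its bin subdirectory, so the space in "Program Files" does not get word-split (the exact path below is an assumption based on the question):

```shell
# Hypothetical hadoop-env.sh fragment for Cygwin: quote the value and
# point at the JDK root, not the bin directory.
export JAVA_HOME="/cygdrive/c/Program Files/Java/jdk1.6.0_26"
# The scripts then resolve the java binary as:
echo "$JAVA_HOME/bin/java"
```

With the unquoted Windows path, the shell splits the value at the space, which matches the `C:\Program: command not found` symptom above.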

Related

How to get Kotlinc 1.8.0 to run with Ubuntu on Windows

I installed Kotlinc from the zip file kotlin-compiler-1.8.0.zip, extracted it, and moved the kotlinc\bin files to C:\Program Files. I then verified that I had installed it correctly by running the kotlinc -version command in the terminal and got back: kotlinc-jvm 1.5.21 (JRE 16.0.2+7-67). So I believe that part is working fine, and the error is arising from Java somehow.
When I go to run my simple hello world program with this:
kotlinc main.kt -include-runtime -d main.jar
I am met with the error:
/mnt/c/Program Files/kotlinc/bin/kotlinc: line 98: java: command not found
Previously I was just getting: kotlinc: command not found. I later read on Stack Overflow that someone else had the same problem, and the answer there was to add it to the PATH environment variable, which I did, but I then ran into this issue.
I have since tried everything I have come across on this issue: I've reinstalled the Kotlin compiler and Java, and put Java\jdk16.0.2\bin in my PATH environment variable as well. When I run the simple command kotlinc help in Ubuntu, I am met with the same line 98: java: command not found error.
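The /mnt/c/ prefix in the error suggests the script runs inside WSL Ubuntu, where a Windows JDK on the Windows Path is not visible as a Linux `java` binary. A sketch of the likely fix, assuming WSL Ubuntu (a setup fragment, not a verified solution for this exact machine):

```shell
# Inside the Ubuntu shell: install a Linux-side JDK so the kotlinc
# shell script can find `java` on its own PATH.
if ! command -v java >/dev/null; then
  sudo apt-get update
  sudo apt-get install -y default-jdk
fi
java -version   # should now resolve inside Ubuntu, not Windows
```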

Hadoop: bad execution compiling WordCount

I have installed Hadoop 3.1.1 and it is working. However, when I try to compile the WordCount example, I am receiving this error:
/usr/local/hadoop/libexec/hadoop-functions.sh: line 2358: HADOOP_COM.SUN.TOOLS.JAVAC.MAIN_USER: bad substitution
/usr/local/hadoop/libexec/hadoop-functions.sh: line 2453: HADOOP_COM.SUN.TOOLS.JAVAC.MAIN_OPTS: bad substitution
To compile, I used the following line:
hadoop com.sun.tools.javac.Main WordCount.java
I have the following variables in .bashrc:
#Hadoop variables
export HADOOP_HOME=/usr/local/hadoop
export CONF=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
#Java home
export JAVA_HOME=${JAVA_HOME}/java-8-oracle
#Path Java Tools
export HADOOP_CLASSPATH=$JAVA_HOME/lib/tools.jar
This time I am using Oracle Java 8 because apt-get on Ubuntu 18.04 LTS does not give me the option of installing OpenJDK 8. I have updated and upgraded Ubuntu.
I have read a lot of different posts and possible solutions, but I cannot solve it.
This is a viable solution I found at https://janzhou.org/2014/how-to-compile-hadoop.html:
Set the HADOOP_CLASSPATH:
export HADOOP_CLASSPATH=$(bin/hadoop classpath)
Compile:
javac -classpath ${HADOOP_CLASSPATH} -d WordCount/ WordCount.java
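As for why the original invocation trips those messages, a minimal sketch (an assumption inferred from the variable names in the error text): hadoop-functions.sh appears to build per-command option-variable names from the command word, and the dots in com.sun.tools.javac.Main are not legal in shell identifiers.

```shell
#!/usr/bin/env bash
# Rebuild the variable name the way the error text suggests the
# script does: uppercase the command word and wrap it in a prefix
# and suffix.
cmd="com.sun.tools.javac.Main"
varname="HADOOP_${cmd^^}_OPTS"
echo "$varname"   # HADOOP_COM.SUN.TOOLS.JAVAC.MAIN_OPTS, as in the error
# Dots are not allowed in shell variable names, hence "bad substitution":
if [[ "$varname" =~ ^[A-Za-z_][A-Za-z0-9_]*$ ]]; then
  echo "valid identifier"
else
  echo "invalid identifier -> bad substitution"
fi
```

Compiling with plain javac, as in the solution above, sidesteps the problem because no such variable name is ever constructed.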

start hadoop failed with hadoop-functions.sh

I tried to start Hadoop, but it failed with nothing started. The console log follows:
Mac:sbin lqs2$ sh start-all.sh
/Users/lqs2/Library/hadoop-3.1.1/libexec/hadoop-functions.sh: line 398:
syntax error near unexpected token `<'
/Users/lqs2/Library/hadoop-3.1.1/libexec/hadoop-functions.sh: line 398:
`done < <(for text in "${input[#]}"; do'
/Users/lqs2/Library/hadoop-3.1.1/libexec/hadoop-config.sh: line 70:
hadoop_deprecate_envvar: command not found
/Users/lqs2/Library/hadoop-3.1.1/libexec/hadoop-config.sh: line 87:
hadoop_bootstrap: command not found
/Users/lqs2/Library/hadoop-3.1.1/libexec/hadoop-config.sh: line 104:
hadoop_parse_args: command not found
/Users/lqs2/Library/hadoop-3.1.1/libexec/hadoop-config.sh: line 105:
shift: : numeric argument required
WARNING: Attempting to start all Apache Hadoop daemons as lqs2 in 10
seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
I have tried many ways to solve it, but nothing worked. I even reinstalled the latest version, but the error is the same. It almost drives me mad.
Any answer is helpful. Thanks.
Hadoop scripts require bash, not sh:
$ chmod +x start-all.sh
$ ./start-all.sh
Though I would suggest starting HDFS and YARN separately so that you can isolate other issues.
You also need to downgrade Hadoop to at least the latest 2.7 release for Spark to work.
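A sketch of the suggestion above, assuming the standard Hadoop 3.x sbin layout (a setup fragment; paths depend on the install):

```shell
# Invoke with bash explicitly: the scripts use bash-only features
# such as the process substitution flagged at line 398.
bash sbin/start-dfs.sh    # HDFS: NameNode, DataNode, SecondaryNameNode
bash sbin/start-yarn.sh   # YARN: ResourceManager, NodeManager
jps                       # check which daemons actually came up
```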
In my case, I was seeing this error on macOS after installing Hadoop with Homebrew. The solution was to do a fresh install after downloading the Hadoop 3.2.1 binary directly from the official website. While installing, I set the HADOOP_HOME and JAVA_HOME environment variables.
A word of caution: I found that the issue can occur if the following environment variables are defined in hadoop-env.sh:
export HDFS_NAMENODE_USER="root"
export HDFS_DATANODE_USER="root"
export HDFS_SECONDARYNAMENODE_USER="root"
export YARN_RESOURCEMANAGER_USER="root"
export YARN_NODEMANAGER_USER="root"
I had initially added these variables while trying to fix the issue. Ultimately I removed them and the error disappeared.
Note that I executed all the Hadoop commands and scripts as a non-root user, and also upgraded bash to version 5.0.17.

Cannot locate pig-core-h2.jar. do 'ant -Dhadoopversion=23 jar', and try again

I downloaded Pig 0.14.0 and I am running Hadoop 2.6.0 on macOS. I followed all the installation steps for Pig at https://github.com/ucbtwitter/getting-started/wiki/Installing-Pig. I had set JAVA_HOME correctly as mentioned.
Even after running the 'ant -Dhadoopversion=23 jar' command, I am getting the same error: "Cannot locate pig-core-h2.jar. do 'ant -Dhadoopversion=23 jar', and try again".
This error keeps arising:
Cannot locate pig-core-h2.jar. do 'ant -Dhadoopversion=23 jar', and try again.
I studied the shell script by opening the pig-0.14.0/bin/pig file and found that this error is related to the setting of the CLASSPATH, PIG_HOME, and JAVA_HOME variables.
Then I found that I had misspelled PIG_HOME, so I corrected it.
Next I ran the specified command ('ant -Dhadoopversion=23 jar') in the Pig installation directory.
Then I got this error:
Not a valid JAR: /Users/../../../pig-0.14.0/pig-0.14.0-SNAPSHOT-core-h2.jar /Users/../../../pig-0.14.0/pig-0.14.0-core-h2.jar
To resolve it, I removed that jar file from that location, and then I got it working.
Find the path to the file pig-*-core-h2.jar.
I installed Pig using brew install pig and found the jar under /usr/local/Cellar/pig/0.17.0/libexec
Run export PIG_HOME=/usr/local/Cellar/pig/0.17.0/libexec
This will fix your error.
I did this to fix Pig:
mv /data/mapr/pig/pig-0.14/pig-0.14.0-mapr-1603-core-h2.jar /data/mapr/pig/pig-0.14/pig-0.14.0-mapr-1603-core-h2.jar.orig
The following solution works.
Make sure your .bash_profile or .bashrc has the following environment variables:
export PIG_HOME="/Library/apache-pig-0.15.0"
export PATH="/Library/apache-pig-0.15.0/bin:${PATH}"
Restart the machine or restart the Unix terminal.
I replaced /Library/apache-pig-0.15.0/ with "home/cwu/Downloads/pig-0.15.0-src".

bin/hadoop version throws an error in Cygwin [Windows 7]

When I execute the following command:
Anands#Tx-D-AnandS /usr/local/hadoop-1.1.2
$ bin/hadoop version
I get the following error:
cygwin warning:
MS-DOS style path detected: C:\Program_Files\Java\jre6\x0D/bin/java
Preferred POSIX equivalent is: /cygdrive/c/Program_Files/Java/jre6\x0D/bin/java
CYGWIN environment variable option "nodosfilewarning" turns off this warning.
Consult the user's guide for more details about POSIX paths:
http://cygwin.com/cygwin-ug-net/using.html#using-pathnames
/bin/java: No such file or directoryes\Java\jre6
/bin/java: No such file or directoryes\Java\jre6
/bin/java: cannot execute: No such file or directorye6
Can anyone help me with this? Any help appreciated!
Set the system environment variable for Java.
Also set export JAVA_HOME=c:/Program_Files/Java/jre6 in the hadoop-env.sh file.
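The \x0D in the warning is a carriage return glued onto the configured path, which suggests hadoop-env.sh was saved with Windows line endings. A sketch of stripping the CRs, using a temporary stand-in file (the file name and contents here are hypothetical):

```shell
# Create a stand-in hadoop-env.sh with a trailing CR, the way a
# Windows editor would save it:
printf 'export JAVA_HOME=c:/Program_Files/Java/jre6\r\n' > /tmp/hadoop-env.sh
# Strip the carriage returns:
tr -d '\r' < /tmp/hadoop-env.sh > /tmp/hadoop-env.sh.unix
mv /tmp/hadoop-env.sh.unix /tmp/hadoop-env.sh
grep -q $'\r' /tmp/hadoop-env.sh && echo "CRs remain" || echo "clean"
```

Applied to the real conf/hadoop-env.sh, this would keep the CR from being appended to the java path, which is what turns the directory name into "No such file or directory".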
