I want to install Hive and Hadoop on my Ubuntu machine. I followed this article and everything seemed fine, but at the last step, when I run this command, an error about Java appears:
/home/babak/Downloads/hadoop/bin/../bin hadoop: line 258: /usr/lib/j2sdk1.5-sun/bin/java: No such file or directory
What should I do to solve this problem?
You need to find where Java is installed on your machine:
which java
and then from there follow any symlinks or wrapper scripts to the actual location of the java executable.
An easier way to do this is to run the file indexer and then locate the file (here I use the jps executable, which lives in the same folder as java):
#> sudo updatedb
#> locate jps
Whatever you get back, trim off the bin/jps suffix, and that's your JAVA_HOME value. If you can't find the executable, then you'll need to install Java.
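To follow a symlink chain mechanically, readlink -f resolves every link in one step. The sketch below demonstrates the idea on a throwaway fake layout (so it is safe to run anywhere); on a real system you would start from "$(which java)" instead of the stand-in path:

```shell
# Demo of resolving a symlink chain to find a JAVA_HOME-style root.
# Uses a throwaway fake layout; on a real system, start from "$(which java)".
tmp=$(mktemp -d)
mkdir -p "$tmp/jdk/bin"
printf '#!/bin/sh\n' > "$tmp/jdk/bin/java"   # stand-in for the real binary
ln -s "$tmp/jdk/bin/java" "$tmp/java"        # stand-in for /usr/bin/java

real=$(readlink -f "$tmp/java")              # follow every symlink
java_home=${real%/bin/java}                  # trim the bin/java suffix
echo "JAVA_HOME would be: $java_home"

rm -rf "$tmp"
```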
Hadoop requires Java version 1.6 or higher, but yours seems to be looking for Java 1.5. Also, make sure the JAVA_HOME variable is set in conf/hadoop-env.sh.
I have a line like the following in mine:
export JAVA_HOME=/usr/lib/jvm/java-6-sun/
Related
I downloaded the binary tarball of Hadoop (version 2.8.4) from http://hadoop.apache.org/releases.html. I unpacked the tar.gz file and then changed etc/hadoop/hadoop-env.sh from
export JAVA_HOME={$JAVA_HOME}
to my Java JDK location:
export JAVA_HOME=C:\Program Files\Java\jdk1.8.0_131
I also added these two lines:
export HADOOP_HOME=D:/hadoop/hadoop-2.8.4
export PATH=$PATH:$HADOOP_HOME/bin
But when I try to run
$ hadoop version
from cmd, I get an error message that says
Error: HADOOP_HOME is not set correctly
What did I do wrong, and how should I change the HADOOP_HOME path to make it work?
Other than {$JAVA_HOME} having the dollar sign in the wrong spot (it needs to be outside the braces: ${JAVA_HOME}), Windows doesn't run the shell script to resolve your variables.
You need to set environment variables in Windows from the Control Panel. You also need to avoid the space in the "Program Files" path.
It's not clear whether you're using Cygwin or the Windows Subsystem for Linux, but either is different from the native CMD.
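For Cygwin users, a sketch of what the hadoop-env.sh entries might look like once the space in "Program Files" is sidestepped via the Windows 8.3 short name; the exact paths below are examples, not your installation:

```shell
# etc/hadoop/hadoop-env.sh (example paths - adjust to your install).
# PROGRA~1 is the 8.3 short name for "Program Files", which avoids
# the problems Hadoop's scripts have with spaces in paths.
export JAVA_HOME="/cygdrive/c/PROGRA~1/Java/jdk1.8.0_131"
export HADOOP_HOME="/cygdrive/d/hadoop/hadoop-2.8.4"
export PATH="$PATH:$HADOOP_HOME/bin"
```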
Set the HADOOP_HOME environment variable as below (use forward slashes, since the shell treats backslashes as escape characters):
export HADOOP_HOME=D:/hadoop/hadoop-2.8.4
export PATH=$PATH:$HADOOP_HOME/bin
Then
$ hadoop version
will work.
I came across this error when trying hadoop-3.3.1, the latest version. I searched a lot for "HADOOP_HOME not correctly set" and found no useful results.
But after I downgraded to hadoop-3.2.2, the error disappeared.
So you could try a non-latest version.
I am following Joseph Adler's instructions (page 555 of http://it-ebooks.info/book/1014/) on how to install Hadoop on my Lubuntu machine.
I wrote in terminal:
wget http://archive.cloudera.com/cdh/3/hadoop-0.20.2-cdh3u4.tar.gz
tar xvfz hadoop-0.20.2-cdh3u4.tar.gz
and everything went fine: the .tar.gz file was downloaded and then untarred.
But when I wrote
hadoop version
in the terminal, a message appeared saying there is no hadoop command.
Does anybody have an idea what I should do to use the (already) installed but (still) somehow invisible Hadoop?
Thanks for the help!
In Linux, invoking a command without prefixing its path requires that the directory containing the command be present in the PATH environment variable.
Here, to execute the command you have to specify either its absolute or relative path. The following can be used; replace <EXTRACT_LOC_PATH> with the location where you extracted the tarball.
<EXTRACT_LOC_PATH>/hadoop-0.20.2-cdh3u4/bin/hadoop version
If your present working directory is /hadoop-0.20.2-cdh3u4/bin/ then ./hadoop version would be sufficient.
Whenever you get a COMMAND NOT FOUND error, the problem is usually in the .bashrc file: you might not have properly set the JAVA_HOME, HADOOP_HOME, and PATH variables. So check whether you have given the proper path for all three of these variables.
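For reference, a minimal sketch of what those three .bashrc entries could look like; the paths below are illustrative assumptions, not your actual locations:

```shell
# ~/.bashrc - example paths only; point these at your own installs.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HADOOP_HOME=$HOME/hadoop-0.20.2-cdh3u4
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
```

Run source ~/.bashrc afterwards so the current shell picks up the changes, then try hadoop version again.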
I've checked answers on stackoverflow, no solutions work for my case.
Command:
bin/hadoop namenode -format
Error Message:
/bin/java: No such file or directory1.7.0_09/
/bin/java: No such file or directory1.7.0_09/
/bin/java: cannot execute: No such file or directory
Relevant change in hadoop-env.sh:
# The java implementation to use. Required.
export JAVA_HOME=/usr/local/jdk1.7.0_09/
I created a symlink with:
ln -s "c:\Program Files\java\jdk1.7.0_09" /usr/local/jdk1.7.0_09
Java HOME:
C:\Program Files\Java\jdk1.7.0_09
Path :
C:\cygwin64\bin;C:\cygwin64\usr\sbin
If anyone has clues, please feel free to point them out. Thanks.
@xhudik @s.singh Finally! There is a problem when modifying hadoop-env.sh on Windows: DOS-style line endings. I fixed the problem with the dos2unix command, which strips the DOS carriage returns.
If the dos2unix command can't be found in Cygwin, re-run the Cygwin installer and add it.
Please follow the link here:
https://superuser.com/questions/612435/cygwin-dos2unix-command-not-found
The command is
dos2unix hadoop-env.sh
Then everything was all set. I hope my experience helps others.
Thanks to s.singh and xhudik for their help.
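A quick way to check whether a file has DOS line endings before reaching for dos2unix is to grep for a literal carriage return. This sketch builds a demo file on the fly; the same grep works on your real hadoop-env.sh:

```shell
# Build a demo file with DOS (CRLF) line endings, then detect them.
# Run the same grep on your real hadoop-env.sh to check it.
printf 'export JAVA_HOME=/usr/local/jdk1.7.0_09/\r\n' > demo-env.sh
if grep -q "$(printf '\r')" demo-env.sh; then
    echo "CRLF line endings found - run dos2unix on this file"
fi
rm demo-env.sh
```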
There is no java there. Are you sure that your Java binaries (./java, ./javac, ...) are in the specified directories? Maybe the ln symlink is the problem. Java also doesn't like spaces in directory names (c:\program files)...
You need to place the Java distribution correctly and then define the JAVA_HOME variable. You can test it with:
$JAVA_HOME/bin/java -version
Set your Java home in hadoop-env.sh like this:
JAVA_HOME=C:/Program Files/java/jdk1.7.0_09
You also need to set the Java path in the Windows environment variables.
If you are still getting the issue, then please let us know.
For learning and best practice with Hadoop, try using the Cloudera or Hortonworks distribution of Hadoop; you can download their Windows versions from the Hortonworks and Cloudera sites.
Or you can use IBM SmartCloud Enterprise. IBM gives free access for students and learning.
I'm currently trying to configure a standalone Hadoop node on a new computer (for my own learning purposes), but when I try to find the root of my Java installation using which java
as per this question
root of java installation
I get the following error
Me#myhouse /cygdrive/c/Program Files (x86)
$ which java
which: no java in (/usr/local/bin:/usr/bin:/cygdrive/c/Windows/system32:/cygdrive
/c/Windows:/cygdrive/c/Windows/System32/Wbem:/cygdrive/c/Windows/System32
/WindowsPowerShell/v1.0:/cygdrive/c/Program Files (x86)/ATI Technologies/ATI.ACE/Core-Static)
I get the exact same error message at the top level of my C: drive.
I know that Java is there - I just installed it! Can anyone explain what I have done wrong?
Your command just checks whether java is found in the common Cygwin paths.
So in Cygwin you need to add the Java path to PATH yourself (note: quote the path because of the space in "Program Files"; don't also backslash-escape the space inside the quotes):
export JAVA_HOME="/cygdrive/c/Program Files/Java/jre1.8.0_102"
export PATH="$PATH:$JAVA_HOME/bin"
I installed Hadoop (1.0.2) for a single node on Windows 7 with Cygwin, and it is working. However, I cannot get Pig (0.10.0) to see Hadoop.
1) "Error: JAVA_HOME is not set."
I added this line to pig (under bin): export JAVA_HOME=/cygdrive/c/PROGRA~1/Java/jdk1.7.0_05
2) which: no hadoop in (/usr/local/b.....)
cygpath: cannot create short name of C:\pig-0.10.0\logs
Cannot locate pig.jar. do 'ant jar', and try again
I tried adding the lines below to pig and it still does not find hadoop. What should I do?
export PIG_HOME="/cygdrive/c/pig-0.10.0"
export PATH=$PATH:$PIG_HOME/bin
export PIG_CLASSPATH=/cygdrive/hadoop/hadoop-1.0.2/conf
You might need to add your Hadoop install to your path as well. e.g.
export HADOOP_INSTALL=/Users/yourname/dev/hadoop-0.20.203.0
export PATH=$PATH:$HADOOP_INSTALL/bin
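Assuming entries like those above, a quick sanity check after editing your profile might look like this (the HADOOP_INSTALL path is the example from the answer, not a real location):

```shell
# Example profile entries plus a sanity check (paths are illustrative).
export HADOOP_INSTALL=/Users/yourname/dev/hadoop-0.20.203.0
export PATH=$PATH:$HADOOP_INSTALL/bin
which hadoop        # should print .../hadoop-0.20.203.0/bin/hadoop
hadoop version      # should print the Hadoop version banner
```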
I had the same issue with pig-0.11. It seems this is a Cygwin-specific issue.
Copying pig-0.11.1-withouthadoop.jar to pig-withouthadoop.jar under PIG_HOME fixed the issue for me.
I was trying to set up Pig on my gateway machine, which has Windows 7 installed.
This issue is very specific to Cygwin.
After breaking my head for a couple of hours I found the solution, and it is very simple:
Just rename the jar file "pig-0.10.1-withouthadoop.jar" to "pig-withouthadoop.jar".
It's documented here.
Also, you can manually add the path (hadoop directory)\hadoop-v.v.v\bin to the environment variables in Windows 7. This will solve the problem.
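The rename itself is a one-line mv. The sketch below demonstrates it in a scratch directory with a stand-in file; in practice you would run the mv inside your real PIG_HOME on the real jar:

```shell
# Demo of the rename fix in a scratch directory; in practice, run the
# mv inside your real $PIG_HOME on the real jar.
tmp=$(mktemp -d)
cd "$tmp"
touch pig-0.10.1-withouthadoop.jar           # stand-in for the real jar
mv pig-0.10.1-withouthadoop.jar pig-withouthadoop.jar
ls pig-withouthadoop.jar                     # the launcher looks for this name
cd / && rm -rf "$tmp"
```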
which: no hadoop in (/usr/local/b.....)
You should follow this link for installing Pig 0.12 on Hadoop 2.2.0 without any errors, as it recompiles the Pig library for the specified Hadoop version:
http://javatute.com/javatute/faces/post/hadoop/2014/installing-pig-11-for-hadoop-2-on-ubuntu-12-lts.xhtml
After following the steps, you will get Pig running without any errors at the grunt shell:
% pig
I had a similar problem with Pig 0.12.0 (and Hadoop 1.0.3) installed on Fedora 19.
When trying any Pig command, e.g.
pig -help
I was getting the error:
Cannot locate pig-withouthadoop.jar. do 'ant jar-withouthadoop.jar', and try again
The Hadoop and Pig /bin folders were properly included in my PATH.
Simply copying pig-0.12.0-withouthadoop.jar into the PIG_HOME folder fixed the issue for me.