I'm installing the Hanborq optimized Hadoop Distribution (fully distributed mode). I followed all the steps exactly as described in the following links, and no errors occurred. When I reach the step that formats HDFS:
$ hadoop namenode -format
An error occurred saying: "HADOOP_HOME is not set correctly.
Please set your HADOOP_HOME variable to the absolute path of the directory that contains hadoop-core-VERSION.jar"
installation_steps_1
installation_steps_2
It seems you did not set HADOOP_HOME correctly in your .bashrc file. Add the lines below to your .bashrc file and reload it with . .bashrc. Please reply if it works.
#HADOOP_HOME setup
export HADOOP_HOME="/usr/local/hadoop/hadoop-2.6"
PATH=$PATH:$HADOOP_HOME/bin
export PATH
Note: HADOOP_HOME is the location of your Hadoop installation directory.
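To confirm the change took effect, a quick check like this should work (the /usr/local/hadoop/hadoop-2.6 path is only the example used above; substitute your actual install directory):
. ~/.bashrc
echo $HADOOP_HOME     # should print /usr/local/hadoop/hadoop-2.6
hadoop version        # should now run without the HADOOP_HOME error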
I need to change the logs directory (which contains the application_* files) for Hadoop. Currently the logs are at:
/home/hadoop/hadoop-3.2.0/logs/userlogs/
HADOOP_LOG_DIR is the property that controls the location of the logs generated by Hadoop. You can set it in your hadoop-env.sh file.
The hadoop-env.sh file is located in $HADOOP_HOME/etc/hadoop:
echo $HADOOP_HOME
cd /path/to/hadoop_home/etc/hadoop
vi hadoop-env.sh
Set the desired path as the value of the HADOOP_LOG_DIR property and you should be done.
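For example, a line like the one below in hadoop-env.sh would redirect the logs; the /var/log/hadoop path is only an illustration, use whatever location you need:
# in $HADOOP_HOME/etc/hadoop/hadoop-env.sh
export HADOOP_LOG_DIR=/var/log/hadoop
Restart the Hadoop daemons afterwards so the new location is picked up.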
I am trying to run Spark using yarn and I am running into this error:
Exception in thread "main" java.lang.Exception: When running with master 'yarn' either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment.
I am not sure where the "environment" is (what specific file?). I tried using:
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
in my .bash_profile, but this doesn't seem to help.
While running Spark on YARN, you need to add the following line to spark-env.sh:
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
Note: check that $HADOOP_HOME/etc/hadoop is the correct path in your environment, and that spark-env.sh also exports HADOOP_HOME.
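As a rough sketch, the relevant part of spark-env.sh could look like this (the /usr/local/hadoop path is only an example; use the install location from your own environment):
# $SPARK_HOME/conf/spark-env.sh
export HADOOP_HOME=/usr/local/hadoop              # example install location
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop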
For a Windows environment, open the file load-spark-env.cmd in the Spark bin folder and add the following line:
set HADOOP_CONF_DIR=%HADOOP_HOME%\etc\hadoop
Just an update to the answer by Shubhangi:
cd $SPARK_HOME/bin
sudo nano load-spark-env.sh
Add the lines below, then save and exit:
export SPARK_LOCAL_IP="127.0.0.1"
export HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"
export YARN_CONF_DIR="$HADOOP_HOME/etc/hadoop"
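After saving, a quick way to confirm the change (assuming YARN itself is running) is to launch the shell against YARN again:
$SPARK_HOME/bin/spark-shell --master yarn   # should no longer complain about HADOOP_CONF_DIR/YARN_CONF_DIR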
I have installed Hadoop on Ubuntu in VirtualBox (host OS Windows 7). I have also installed Apache Spark, configured SPARK_HOME in .bashrc, and added HADOOP_CONF_DIR to spark-env.sh. Now when I start spark-shell it throws an error and does not initialize the Spark context or SQL context. Am I missing something in the installation? I would also like to run it on a cluster (a 3-node Hadoop cluster is set up).
I had the same issue when trying to install Spark locally on Windows 7. Please make sure the paths below are correct and it should work for you. I answered the same question at this link, so you can follow the steps below and it will work.
Create the JAVA_HOME variable: C:\Program Files\Java\jdk1.8.0_181
Add the following part to your path: ;%JAVA_HOME%\bin
Create the SPARK_HOME variable: C:\spark-2.3.0-bin-hadoop2.7
Add the following part to your path: ;%SPARK_HOME%\bin
The most important part: the Hadoop path should include the bin folder that contains winutils.exe, like this: C:\Hadoop\bin. Make sure winutils.exe is located inside this path.
Create HADOOP_HOME Variable: C:\Hadoop
Add the following part to your path: ;%HADOOP_HOME%\bin
Now you can open cmd, run spark-shell, and it will work.
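If you prefer the command line over the System Properties dialog, setx can set the same variables (these are the example paths from above; open a new cmd window afterwards so they take effect):
setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_181"
setx SPARK_HOME "C:\spark-2.3.0-bin-hadoop2.7"
setx HADOOP_HOME "C:\Hadoop"
Then add the three \bin folders to your PATH as described above.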
I have installed Hadoop 2.6 on Ubuntu 14.04. I just followed this blog.
While trying to format the namenode, I am hitting the error below:
hduser@data1:~$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
/usr/local/hadoop/bin/hdfs: line 276: /home/hduser/usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
/home/hduser/usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
This error occurs because the JAVA_HOME you have provided does not point to a valid Java installation.
Just add this line to hadoop-env.sh and /home/hduser/.bashrc:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
I think you have already set $JAVA_HOME, but you set it incorrectly (just a guess):
/home/hduser/usr/lib/jvm/java-7-openjdk-amd64/bin/java
It should be:
/usr/lib/jvm/java-7-openjdk-amd64/bin/java
You probably added ~ before the path when you exported JAVA_HOME, which prepended the home directory /home/hduser.
To check, run java -version and see if Java is working. Then run echo $JAVA_HOME and verify the path manually.
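Something like this, using the path from the answer above (your JDK directory may differ):
java -version           # confirms java is on the PATH
echo $JAVA_HOME         # should print /usr/lib/jvm/java-7-openjdk-amd64
ls $JAVA_HOME/bin/java  # the java binary must exist at this path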
I figured it out. The entry we made was for amd64, but these are really i386 machines. Please verify the path, and that should fix the issue.
So, a little background. I've been trying to set up Hive on a CentOS 6 machine. I followed the instructions in this YouTube video: http://www.youtube.com/watch?v=L2lSrHsRpOI
In my case, I'm using Hadoop 1.1.2 and Hive 0.9.0. I replaced all the directories labeled "mnt" in the video with "opt", because that's where all of my Hadoop and Hive packages were unpacked.
When I reached the portion of the video where I was actually supposed to run Hive via "./hive", this error popped up:
"Cannot find hadoop installation: $HADOOP_HOME must be set or hadoop must be in the path"
One of the questions I have is: in which directory do I have to edit the ".profile" file? I don't understand why we would have to go to the home directory for this change. Also, if it helps, this is what I put in the ".profile" file in my /home/hadoop directory:
export HADOOP_HOME=/opt/hadoop/hadoop
export HIVE_HOME=/opt/hadoop/hive
export PATH=$HADOOP_HOME/bin:$HIVE_HOME/bin
Thank you so much!
Go to the /etc/profile.d directory and create a hadoop.sh file there containing:
export HADOOP_HOME=/opt/hadoop/hadoop
export HIVE_HOME=/opt/hadoop/hive
export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin
After you save the file, make sure to run:
chmod +x /etc/profile.d/hadoop.sh
source /etc/profile.d/hadoop.sh
This should take care of it.
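As a quick sanity check, assuming the /opt/hadoop layout above:
source /etc/profile.d/hadoop.sh
echo $HADOOP_HOME    # should print /opt/hadoop/hadoop
which hive           # should resolve to /opt/hadoop/hive/bin/hive
hive                 # should start without the $HADOOP_HOME error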