Cannot remove $HADOOP_PREFIX from .bashrc, which gives an error in HBase - hadoop

I originally installed Hadoop from tar files and added export HADOOP_PREFIX=/usr/local/hadoop to the .bashrc file. Everything was working fine. Now I have installed Hadoop using Hortonworks Ambari and removed the old HADOOP_PREFIX entry from .bashrc on every machine.
However, when I run echo $HADOOP_PREFIX it still shows the old path /usr/local/hadoop. Is there any way to remove that variable?

Delete the HADOOP_PREFIX entry from the .bashrc file, then run this command in the current shell:
unset HADOOP_PREFIX
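A minimal sketch of the whole sequence, assuming the variable was exported from ~/.bashrc with the exact path from the question:
# 1. delete the export line from ~/.bashrc (pattern assumes the question's path)
sed -i '/export HADOOP_PREFIX=\/usr\/local\/hadoop/d' ~/.bashrc
# 2. clear the variable from the current shell session
unset HADOOP_PREFIX
# 3. verify: this should now print an empty line
echo "$HADOOP_PREFIX"
Note that terminals opened before the edit still hold the old value; run unset there too, or simply open a new terminal.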

Related

Hadoop namenode format error

I am trying to configure Hadoop on Ubuntu, but executing the command bin/hadoop namenode -format shows the following message:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
/home/sonali/hadoop-2.2.0/bin/hdfs: line 201: /usr/lib/jvm/java-6-openjdk-i386/bin/java: No such file or directory
The problem seems to be with Java. Try to cd to the path above; it will most probably throw an error as well.
So you need to set JAVA_HOME properly in your .bashrc file.
You can try setting JAVA_HOME=/usr/lib/jvm/java-6-openjdk-i386 (no spaces around the = in bash).
Also make sure that the JAVA_HOME in hadoop_base_dir/etc/hadoop/hadoop-env.sh is correct.
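For example, a hedged sketch assuming the JDK really is at /usr/lib/jvm/java-6-openjdk-i386 (check with ls first):
# confirm the java binary exists before pointing JAVA_HOME at it
ls /usr/lib/jvm/java-6-openjdk-i386/bin/java
# add to ~/.bashrc (and mirror the JAVA_HOME line in etc/hadoop/hadoop-env.sh):
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-i386
export PATH=$PATH:$JAVA_HOME/bin
# reload and verify
source ~/.bashrc
$JAVA_HOME/bin/java -version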

Where is the classpath set for hadoop

Where is the classpath for hadoop set?
When I run the below command it gives me the classpath. Where is the classpath set?
bin/hadoop classpath
I'm using hadoop 2.6.0
Open your bash profile (~/.profile or ~/.bash_profile) for editing and add the following (replace the HADOOP_HOME value with your own install path):
export HADOOP_HOME="/usr/local/Cellar/hadoop"
export HADOOP_CLASSPATH=$(find $HADOOP_HOME -name '*.jar' | xargs echo | tr ' ' ':')
Save the changes and reload:
source ~/.profile
As said by almas shaikh, it is set in hadoop-config.sh, but you can add more jars to it in hadoop-env.sh.
Here is the relevant code from hadoop-env.sh, which adds additional jars such as the capacity-scheduler and AWS jars:
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
# Extra Java CLASSPATH elements. Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
  if [ "$HADOOP_CLASSPATH" ]; then
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
  else
    export HADOOP_CLASSPATH=$f
  fi
done
# ... some other lines omitted
# Add Aws jar
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:share/hadoop/tools/lib/*
When you run the hadoop command, it sources a file hadoop-config.sh that resides in $HADOOP_HDFS_HOME/libexec, which sets your classpath (CLASSPATH) by picking up jars residing in various directories, viz.:
$HADOOP_HDFS_HOME/share/hadoop/mapreduce
$HADOOP_HDFS_HOME/share/hadoop/common
$HADOOP_HDFS_HOME/share/hadoop/hdfs, etc.
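If you want to inspect what hadoop-config.sh produced, this one-liner (a convenience, not part of Hadoop itself) prints each entry of the computed classpath on its own line:
hadoop classpath | tr ':' '\n'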
As per this blog post, it is in an environment variable named HADOOP_CLASSPATH. You can set it as you would any other environment variable; the specifics depend on which shell you use. If you use bash, you can run something like export HADOOP_CLASSPATH=/path/to/wherever:/path/to/wherever/else.
I also encountered this problem and solved it, although my Hadoop version is 2.10.1.
I hope this helps people who use a newer Hadoop version. As far as I can tell, the following steps should also work in the latest Hadoop version, 3.3.0.
You just need to edit your .bashrc or .profile; I will give an example using .bashrc.
# edit .bashrc
$ vim ~/.bashrc
Add HADOOP_HOME, the Hadoop bin directory on the PATH, and HADOOP_CLASSPATH to .bashrc:
# export HADOOP_HOME=${your hadoop install directory}, an example as follows:
export HADOOP_HOME=/usr/local/hadoop-2.10.1
export PATH=${HADOOP_HOME}/bin:${PATH}
export HADOOP_CLASSPATH=`hadoop classpath`
Then,
$ source ~/.bashrc

Hadoop commands

I have Hadoop installed in this location
/usr/local/hadoop$
Now I want to list the files in the DFS. The command I used is:
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop dfs -ls
This gave me the files in the dfs
Found 3 items
drwxr-xr-x - hduser supergroup 0 2014-03-20 03:53 /user/hduser/gutenberg
drwxr-xr-x - hduser supergroup 0 2014-03-24 22:34 /user/hduser/mytext-output
-rw-r--r-- 1 hduser supergroup 126 2014-03-24 22:30 /user/hduser/text.txt
Next, I tried the same command in a different manner:
hduser@ubuntu:/usr/local/hadoop$ hadoop dfs -ls
It also gave me the same result.
Could someone please explain why both work despite executing the ls command from different locations? I hope you understand my question. Just explain the difference between these two:
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop dfs -ls
hduser@ubuntu:/usr/local/hadoop$ hadoop dfs -ls
In Unix, an executable file can be run in two ways: either by giving its absolute/relative path, or as a bare command found via the system's executable search path (the directories listed in the PATH variable).
To execute bin/hadoop dfs -ls you must be inside the directory /usr/local/hadoop; /usr/local/hadoop/bin/hadoop dfs -ls will also work.
The PATH environment variable in Unix keeps the list of executable locations; by default it contains something like /usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin. Whenever we execute a command like ls or mkdir, it is found via one of the locations in the PATH variable. When you give the command hadoop, it is taken from /usr/local/hadoop/bin/, since you have added /usr/local/hadoop/bin/ to your PATH variable. Use the following command to check the value of your PATH variable:
echo $PATH
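To see which file the shell actually resolves the bare command to, the bash built-in type is handy (the output below is illustrative for the setup in the question):
type hadoop   # prints something like: hadoop is /usr/local/hadoop/bin/hadoop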
You set a global Hadoop path (HADOOP_HOME) in your ~/.bashrc file so that Hadoop commands work anywhere in the terminal.
In both cases you got the same result because you already added HADOOP_HOME/bin to the PATH in the .bashrc file; you can check the entry with sudo nano ~/.bashrc. We do this because it lets us execute the command from the terminal irrespective of the current directory.
If you remove the HADOOP_HOME/bin entry from the .bashrc file, you will not get the same result.
First of all, you are executing the same command: bin/hadoop in the Hadoop installation directory and hadoop are the same. To confirm this, check your .bashrc file, where you must have specified the executable path for Hadoop.
If you call hadoop, you are effectively calling the /usr/local/hadoop/bin/hadoop command.
If you are instead puzzled by the output of ls:
you are executing ls on the Hadoop file system, not on the local file system. It shows you the content available in the Hadoop file system. For more details go to localhost:50070 and check the content there.
In both cases you got the same result because you already added HADOOP_HOME/bin to the PATH in the .bashrc file. In Unix, an executable file can be run in two ways: either by giving its absolute/relative path, or as a bare command found via the system's executable path.
To execute "bin/hadoop dfs -ls" as a relative path you must be inside the directory /usr/local/hadoop.
To locate the executable file associated with the hadoop command, just run:
which hadoop
This will print out the location of the hadoop command used.
At a certain point during the Hadoop installation you configure the HDFS filesystem, and eventually you format it using hdfs namenode -format. From that point on, dfs does not refer to your own filesystem but to the HDFS filesystem. When you execute hadoop dfs -ls it displays the user's home directory on the HDFS filesystem. It doesn't matter where you are located on the host filesystem when you execute the command, because that location is not used.
However, it is possible not to configure HDFS, in which case the local filesystem is used. Either way, hadoop dfs -ls displays the contents of the user's home directory.
On that note, if you remove the user directory /user/hduser and execute hadoop dfs -ls, it will give you an error because the user directory does not exist.
Source:
https://amodernstory.com/2014/09/23/installing-hadoop-on-mac-osx-yosemite/
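A quick way to see this in action (hduser and the paths come from the question; the commands assume a running HDFS):
cd /tmp
hadoop dfs -ls              # still lists /user/hduser on HDFS
hadoop dfs -ls /user/hduser # same output, with the path given explicitly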
This is related to the OS, not to Hadoop.
Whenever you run a command without an explicit path, the OS searches the locations in the PATH variable. In your case, during the installation of Hadoop you must have set something like the following variables in your user profile (.bashrc or .profile):
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$HADOOP_INSTALL/bin:$PATH
So whenever you type the following, the OS finds it in $HADOOP_INSTALL/bin because you added that directory to the PATH variable:
hadoop dfs -ls
And when you type the following, it uses the current folder path, which in your snippet is /usr/local/hadoop, and just under your current folder there is a bin/hadoop file:
bin/hadoop dfs -ls
Hence in both cases the same file is executed; it is identified via the PATH variable in one case and via the current directory (relative path) in the other.
It is because of the PATH variable you set while configuring Hadoop. PATH contains the path to your Hadoop bin directory, so even if you do not specify the bin/ prefix, the command is found via the PATH variable.
Both the commands are doing the same thing.
Check your /etc/bashrc or /root/.bashrc.
There you will find HADOOP_HOME set, with its bin path added to the PATH variable.
Setting this lets us execute the Hadoop commands from anywhere on the command line; there is no other use for it.
It works because you are not executing the shell's ls command; you are passing "ls" as a parameter to the hadoop command. In both cases the exact same command with the same parameters is executed, namely hadoop dfs -ls. The only thing that changes is that in one case you qualify it with a path and in the other you do not, and the unqualified form works because hadoop must be in your $PATH environment variable.
The command hadoop dfs -ls is equivalent to running the ls command on the HDFS file system; we can treat it as the ls command of Linux/Unix.
When you are logged in as the hadoop user, just type hado in the terminal and press the TAB key; if it completes to hadoop, your Hadoop setup is working properly,
and your ~/.bashrc file is also set up properly.
That means that when you use this command from any directory:
hadoop dfs -ls /
it will give you a list of all files present in HDFS.
The hadoop executable is inside the bin/ folder, so both commands are exactly the same as long as that bin directory is included in $PATH. PATH is typically set in the ~/.bashrc file; try cat ~/.bashrc from your home directory to take a look.
There is one simple concept in the Linux operating system: you can say that the bashrc file is linked to the terminal, so when you open the terminal it loads all the variables from the bashrc file. Any executable whose directory is appended to the PATH variable defined in the bashrc file can be invoked from whatever directory you are currently in.
Now to answer your question: since hadoop fs -ls / was working fine for you, the executable's directory is in your PATH variable. In the latter case, bin/hadoop fs -ls /, you are manually going to the folder where the executable is present. This is similar to running any executable in Linux.
Just set all the Hadoop, YARN, and other paths in the .bashrc file; then it will run from anywhere:
standard hadoop> bin/hadoop fs -ls
See the Hadoop forum below for more details on Hadoop:
http://tekzak.com/forum/viewforum.php?f=2&sid=5d01e2e3c27aebc6e7ee95447ef328a4
The variable HADOOP_HOME might have been set so that the bin path of the hadoop binary is on the PATH. In that case, both of the above commands will work irrespective of where the hadoop command is executed from.
Otherwise, you will have to give the absolute path followed by bin/hadoop dfs -ls:
Absolute_path/bin/hadoop dfs -ls
Because you set your Hadoop path in the .bashrc file of your hadoop user, you do not need to navigate to the bin folder. The command that works from the bin folder also works from the current user's home folder, or any other directory.
The command hadoop fs -ls lists the files and directories in your home folder on the Hadoop file system (HDFS), rather than on your current local file system. Without any modifications to those files in HDFS, the result of this command stays the same.
You already added hadoop/bin to the PATH when you installed Hadoop on your machine, so the command resolves to the same executable wherever you run it on your system.
Therefore, you made no change in HDFS and you used the same command; that is why you get exactly the same result.
Both the commands bin/hadoop dfs -ls and hadoop dfs -ls work for you because the hadoop executable's directory (/usr/local/hadoop/bin) is in the $PATH variable for the user "hduser".
To understand this further, open a terminal and remove that value ($HADOOP_HOME/bin, i.e. /usr/local/hadoop/bin) from $PATH using the export command in Linux. If you do this, the second command (hadoop dfs -ls) won't work in that terminal session.
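A sketch of that experiment, affecting only the current terminal session (assumes /usr/local/hadoop/bin is the exact PATH entry to drop):
# rebuild PATH without the hadoop bin directory
export PATH=$(echo "$PATH" | tr ':' '\n' | grep -v '^/usr/local/hadoop/bin$' | paste -sd: -)
hadoop dfs -ls       # now fails with "command not found"
cd /usr/local/hadoop
bin/hadoop dfs -ls   # still works via the relative path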
In your case, hadoop is present at bin/ (within /usr/local/hadoop), so you may execute it as bin/hadoop from /usr/local/hadoop (the current location in the example above).
You are also able to execute it directly, without specifying a relative/absolute path, because the location of hadoop has been added to the PATH.
You can check this by running which hadoop and printing PATH (echo $PATH).
When you install Hadoop from packages, its binaries are added to the /usr/bin folder.
Any binary in the folders /bin, /sbin, or /usr/bin is available from any path and for any user in Unix. See: https://askubuntu.com/questions/571617/what-is-the-purpose-of-the-bin-directory
Just to add, there are differences between the folders /bin, /sbin, etc., which are explained here: https://askubuntu.com/questions/308045/differences-between-bin-sbin-usr-bin-usr-sbin-usr-local-bin-usr-local
It is because of the bashrc Linux environment setup:
1) export HADOOP_PREFIX=/usr/local/hadoop/
2) export PATH=$PATH:$HADOOP_PREFIX/bin
After doing this we need to run the command:
exec bash
It is quite likely that you have exported $HADOOP_HOME/bin into the PATH variable.
For example, if it is EMR, it would be:
export HADOOP_PREFIX=/usr/lib/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin
You can check the PATH and find out.
It is because you already exported the Hadoop path when you installed Hadoop. Now you can either go to the exact Hadoop directory or just type hadoop from wherever you are; it will work both ways.
I will answer your question from two perspectives:
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop dfs -ls
hduser@ubuntu:/usr/local/hadoop$ hadoop dfs -ls
How do these two commands work the same way?
Because your environment is aware of the hadoop command: your $PATH variable contains the Hadoop installation's bin directory.
Why do both return the same result?
Because you are listing a directory that resides in HDFS. When you execute the hadoop dfs -ls command, it lists the current user's home directory items from HDFS; in your case it lists the hduser user's directory data.
Hope this answers your question.
Both commands give the same result because we already put the Hadoop home path on the PATH while installing Hadoop itself.
So it works whether we give the full path or invoke the command directly.

hadoop single node cluster installation on ubuntu

I am completely new to Hadoop and I am trying to install a Hadoop single node cluster on Ubuntu, but I am unable to figure out what is going wrong. I am following the tutorial at the following link: http://codesfusion.blogspot.in/2013/10/setup-hadoop-2x-220-on-ubuntu.html?m=1
Everything went smoothly, but when I give the command hadoop version I get the following error:
/usr/local/hadoop/bin/hadoop: line 133: /usr/lib/jvm/jdk//bin/java: No such file or directory
I also opened that file and searched it entirely, but could not find such a line at all.
My .bashrc:
export JAVA_HOME=/usr/lib/jvm/jdk/
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
###end of paste
After that I opened hadoop-env.sh and pasted this, the JAVA_HOME:
export JAVA_HOME=/usr/lib/jvm/jdk/
Later I re-logged in and checked the Hadoop version, and I am getting this error:
/usr/local/hadoop/bin/hadoop: line 133: /usr/lib/jvm/jdk//bin/java: No such file or directory
I also cross-verified that particular file, but there is no such line. Can anybody kindly help me with this, since I am new to this?
I found the solution.
First, remove the / from the end of /usr/lib/jvm/jdk/ in both .bashrc and hadoop-env.sh.
Then navigate to /usr/lib/jvm/jdk/bin
and see whether the java binary is there or not. If it is not, check whether you have made the correct soft link.
You must create a soft link to the folder that actually contains Java, so check before running this command:
$ cd /usr/lib/jvm
$ ln -s java-7-openjdk-amd64 jdk
In the step above, which you might have seen in the tutorial, change it as follows:
$ cd /usr/lib/jvm
$ ln -s java-7-openjdk-amd64/ jdk
The 7 here depends on the JDK version you have, so check that and change it accordingly.
I have JDK 6, so I changed it to java-6-*.
Hope it works.
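Putting the checks together, a short sketch (the openjdk directory name depends on the JDK version actually installed):
cd /usr/lib/jvm
ls                                    # find the real JDK directory name
sudo ln -s java-7-openjdk-amd64 jdk   # link "jdk" to whatever ls showed
ls jdk/bin/java                       # should now resolve to the java binary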
This error is due to the $JAVA_HOME variable; change that variable's path and you will be free of the error.
Go to .bashrc using this command:
vim ~/.bashrc
Change the JAVA_HOME variable:
export JAVA_HOME=/usr/lib/jvm/jdk
export PATH=$PATH:$JAVA_HOME/bin
If you have JDK 8, replace jdk with java-8-oracle:
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export PATH=$PATH:$JAVA_HOME/bin
Restart your terminal and check the java command first, then the hadoop command.
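A quick verification sketch (assuming the java-8-oracle path from above):
source ~/.bashrc
echo "$JAVA_HOME"               # should print /usr/lib/jvm/java-8-oracle
"$JAVA_HOME"/bin/java -version
hadoop version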

Hadoop: JAVA_HOME is not set

Like everyone else in the world, I'm following this hadoop tutorial. I get to the point where I format HDFS, and I get this:
user@linux01:~$ sudo $HADOOP_INSTALL/bin/hadoop namenode -format
Error: JAVA_HOME is not set.
Well that's funny, I set JAVA_HOME in my /etc/profile.
user@linux01:~$ tail -n 4 /etc/profile
export JAVA_HOME=/usr/local/jdk1.6.0_32/bin
export JDK_HOME=$JAVA_HOME
export PATH=$PATH:/usr/local/jdk1.6.0_32/bin
export HADOOP_INSTALL=/usr/local/hadoop/hadoop-1.0.3
Did I mess that up somehow?
user@linux01:~$ echo $JAVA_HOME
/usr/local/jdk1.6.0_32/bin
user@linux01:~$ ls $JAVA_HOME
appletviewer extcheck jar javac and so forth...
Seems to work. Maybe it absolutely has to be set in my hadoop-env.sh?
# The java implementation to use. Required.
export JAVA_HOME=$JAVA_HOME
Lazy, yeah, but I still get "JAVA_HOME is not set" with or without this comment. I'm running low on ideas. Anyone see anything I'm missing?
Thank you @Chris Shain and @Chris White for your hints. I was running hadoop as su, and su doesn't automatically know about the environment variables I set. I logged in as my hadoop user (I had chown'd the hadoop install directory to this user) and was able to format the HDFS.
Secondary problem: when I tried to start Hadoop, the NameNode and JobTracker started successfully, but the DataNode, SecondaryNameNode, and TaskTracker failed to start. I dug in a little. The NameNode and JobTracker are started via hadoop-daemon.sh, but the DataNode, SecondaryNameNode, and TaskTracker are started by hadoop-daemons.sh. The resolution was to properly set JAVA_HOME in conf/hadoop-env.sh.
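You can observe the su/sudo behavior directly; this check is illustrative, and the exact result depends on your su/sudo configuration:
echo "$JAVA_HOME"                         # set in your login shell
sudo sh -c 'echo "JAVA_HOME=$JAVA_HOME"'  # typically prints an empty value
Because the daemons are not started from your login shell either, hard-coding JAVA_HOME in conf/hadoop-env.sh is the reliable fix.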
First a general note: if you are working on Cygwin, please be logged in to your system as an Administrator.
I faced this problem of JAVA_HOME not being found while executing namenode -format. Below is what I did to fix it:
Reinstalled the JDK outside of Program Files, in a location with no space in the folder name. For example: D:/Installations/JDK7
Went into the bin folder of the Hadoop (version 1.2.1) installation and edited the "hadoop" configuration file. This is the file with no file extension.
Searched for the JAVA_HOME variable.
Just before the first instance of the variable $JAVA_HOME I added this line:
export JAVA_HOME=/cygdrive/D/Installations/JDK7/
This is how it looks now:
fi
export JAVA_HOME=/cygdrive/D/Installations/JDK7/
# some Java parameters
if [ "$JAVA_HOME" != "" ]; then
  #echo "run java in $JAVA_HOME"
  JAVA_HOME=$JAVA_HOME
fi
Note: /cygdrive/ has to precede your JDK installation path. The colon after the drive letter is not needed, and the path should use forward slashes.
Additionally, I did set JAVA_HOME in the system env properties. Just to be double sure.
Run the program and it will work now.
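On Cygwin, the cygpath utility can generate that form for you, which avoids typos in the /cygdrive prefix:
cygpath -u 'D:\Installations\JDK7'   # prints /cygdrive/d/Installations/JDK7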
Try using the short name to avoid a space in the path.
"C:\Program Files" should have the short name C:\Progra~1 (you can verify it using the DOS dir /x command or by entering it into the address bar in File Explorer).
Set your JAVA_HOME this way:
export JAVA_HOME="/cygdrive/c/Progra~1/Java/jdk1.7.0_10"
Answered by user2540312

Resources