hadoop file system list my own root directory - hadoop

I ran into a very weird situation when trying to install single-node Hadoop YARN 2.2.0 on my Mac. I followed the tutorial at this link: http://raseshmori.wordpress.com/2012/09/23/install-hadoop-2-0-1-yarn-nextgen/.
When I start Hadoop and run jps to check the status, it shows the following (which looks normal, I think):
5552 Jps
7162 ResourceManager
7512 Jps
7243 NodeManager
6962 DataNode
7060 SecondaryNameNode
6881 NameNode
However, after entering
hadoop fs -ls /
the files listed are the files in my own local root directory, not the Hadoop file system root. There must be some error in my Hadoop setup that mixes my local file system with HDFS. Could anyone give me a hint about it?

Use the following command to access HDFS:
hadoop fs -ls hdfs://localhost:9000/
Or
Populate ${HADOOP_CONF_DIR}/core-site.xml as follows. If you do so, you will be able to access HDFS even without specifying the hdfs:// URI.
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
Add the following line at the start of the file $HOME/yarn/hadoop-2.0.1-alpha/libexec/hadoop-config.sh:
export HADOOP_CONF_DIR=$HOME/yarn/hadoop-2.0.1-alpha/etc/hadoop
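To confirm that the shell is now talking to HDFS rather than your local file system, a quick sanity check (a sketch, assuming the hdfs://localhost:9000 setting above and that the daemons are running) is to compare the two listings and print the effective default file system:
hadoop fs -ls /                      # should now show HDFS paths such as /tmp or /user, not your local root
ls /                                 # your local root, for comparison
hdfs getconf -confKey fs.defaultFS   # should print hdfs://localhost:9000 if core-site.xml is being picked up
If getconf still prints file:///, the client is not reading the core-site.xml you edited, which usually means HADOOP_CONF_DIR points somewhere else.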

Related

Namenode service unstable in hadoop 1.2.1

I have set up a Hadoop cluster with 1 namenode and 1 datanode (using Hadoop version 1.2.1), but when I start both nodes, the namenode service dies within seconds (it does not appear in the list of running Java processes), while the datanode service remains up. Can anyone please help me find the reason?
I have tried removing the temporary files and re-formatting the namenode before starting it again, but that did not help.
I have attached the screenshots of my core-site.xml and hdfs-site.xml entries for both my namenode and datanodes.
Please let me know the reason if possible.
Attached screenshots: hadoop version and location, core-site.xml of the namenode, hdfs-site.xml of the namenode, namenode formatting output (no errors), jps listing showing the unstable namenode, hdfs-site.xml of the datanode, and the namenode log.

No datanode running in hadoop 2.9.2

I'm very new to Hadoop, so I've been following the Hadoop 2.9.2 getting-started guide. When I run the command
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar grep input output 'dfs[a-z.]+'
it returns success, but when I look at the output/part-r-00000.txt file, which is meant to show the result, it is empty, even though the input directory contains the .xml files from etc/hadoop as it is supposed to.
I've restarted the whole process over and over, reading all the logs to understand where the error might be. When I run bin/hdfs namenode -format, it shows this error:
ERROR common.Util: Syntax error in URI file://path to temp_directory/dfs/name. Please check hdfs configuration.
java.net.URISyntaxException: Illegal character in authority at index 7: file://path to temp_directory/dfs/name
at java.base/java.net.URI$Parser.fail(URI.java:2915)
at java.base/java.net.URI$Parser.parseAuthority(URI.java:3249)
at java.base/java.net.URI$Parser.parseHierarchical(URI.java:3160)
at java.base/java.net.URI$Parser.parse(URI.java:3116)
at java.base/java.net.URI.<init>(URI.java:600)
at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:49)
at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:99)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1466)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1511)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1480)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1137)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1614)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
This also occurs when I run bin/hdfs dfs -put etc/hadoop input:
WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/federico/input/hadoop/capacity-scheduler.xml._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
It seems pretty clear that there are no datanodes running. Given this situation, how can I initialize a datanode to make things work, and how do I know whether my datanode is running as expected?
EDIT: I've tried to follow some suggestions from different users experiencing a similar problem, and this error came out:
WARN org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker: Exception checking StorageLocation [DISK]file:/dfs/data
java.io.FileNotFoundException: File file:/dfs/data does not exist
and thus the datanode creation fails. How do I deal with it?
Please update your hdfs-site.xml as follows, where the dfs.datanode.data.dir value should be set to a path of your choice. You can find this file in etc/hadoop under the Hadoop installation directory.
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/Users/myname/data/hdfs/data</value>
</property>
</configuration>
On Linux, use a similar path such as /home/myname/data/hdfs/data.
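The URISyntaxException during formatting also points to a space in the configured path ('file://path to temp_directory'); directory values must be plain paths or file: URIs without spaces. After fixing the configuration, one rough way to bring the DataNode up and verify it (a sketch, assuming a default single-node 2.9.2 install run from the Hadoop directory, and that any old test data can be discarded):
sbin/stop-dfs.sh            # stop any half-started daemons
bin/hdfs namenode -format   # re-format now that the configured paths contain no spaces
sbin/start-dfs.sh           # start NameNode, DataNode and SecondaryNameNode
jps                         # DataNode should appear in this list
bin/hdfs dfsadmin -report   # should report one live datanode once it has registered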

What's the standard way to create files in your hdfs filesystem?

I learned that I have to configure the NameNode and DataNode directories in hdfs-site.xml. This is my hdfs-site.xml configuration on the NameNode:
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file://usr/local/hadoop-2.6.0/hadoop_data/hdfs/namenode</value>
</property>
<property>
<name>dfs.block.size</name>
<value>134217728</value>
</property>
</configuration>
I did almost the same on my DataNode and changed dfs.namenode to dfs.datanode.
Then I formatted the filesystem via
hadoop namenode -format
Everything seems to be finished without an error.
Then I wanted to create a directory in my HDFS filesystem by using:
hdfs dfs -mkdir test
And I got an error:
mkdir: `test': No such file or directory
What did I miss or what's the common process from formatting to creating files/directories with HDFS?
Well, it turned out to be easy. A relative path such as test is resolved against your HDFS home directory (/user/<username>), which doesn't exist yet, so the mkdir fails. With an absolute path,
hdfs dfs -mkdir /test
was created successfully, and
hdfs dfs -put myFile /test/myFile
works as well.
Create a directory:
hdfs dfs -mkdir directoryName
Create a new (empty) file in the directory:
hdfs dfs -touchz directoryName/Newfilename
Write into the newly created file (note that files inside HDFS cannot be edited in place, so edit a local copy and copy it back in with hdfs dfs -put):
nano filename
Save it with Ctrl+X, then Y.
Read the newly created file, either from the local copy:
nano fileName
or from HDFS itself:
hdfs dfs -cat directoryName/fileName
HDFS is not a POSIX-compliant file system, so you can't edit files directly inside HDFS; however, you can copy a file from your local system to HDFS using the following command:
hdfs dfs -put /path/in/source/system/filename /path/in/HDFS/system/destination
If you want to create multiple nested sub-directories, you should also use the -p flag:
hdfs dfs -mkdir -p /test/another_test/one_more_test
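Putting the pieces together, a typical first session might look like this (a sketch; the file name myFile.txt and the directory names are just placeholders):
hdfs dfs -mkdir -p /test/data            # create the target directory tree in HDFS
hdfs dfs -put myFile.txt /test/data/     # copy a local file into it
hdfs dfs -ls /test/data                  # list the directory to confirm the upload
hdfs dfs -cat /test/data/myFile.txt      # read the file back from HDFS
hdfs dfs -get /test/data/myFile.txt ./   # copy it back to the local file system if needed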

Couldn't start hadoop datanode normally

I am trying to install Hadoop 2.2.0 and I am getting the following error while starting the datanode service. Please help me resolve this issue. Thanks in advance.
2014-03-11 08:48:16,406 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/prassanna/usr/local/hadoop/yarn_data/hdfs/datanode/in_use.lock acquired by nodename 3627#prassanna-Studio-1558
2014-03-11 08:48:16,426 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-611836968-127.0.1.1-1394507838610 (storage id DS-1960076343-127.0.1.1-50010-1394127604582) service to localhost/127.0.0.1:9000
java.io.IOException: Incompatible clusterIDs in /home/prassanna/usr/local/hadoop/yarn_data/hdfs/datanode: namenode clusterID = CID-fb61aa70-4b15-470e-a1d0-12653e357a10; datanode clusterID = CID-8bf63244-0510-4db6-a949-8f74b50f2be9
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
at java.lang.Thread.run(Thread.java:662)
2014-03-11 08:48:16,427 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-611836968-127.0.1.1-1394507838610 (storage id DS-1960076343-127.0.1.1-50010-1394127604582) service to localhost/127.0.0.1:9000
2014-03-11 08:48:16,532 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-611836968-127.0.1.1-1394507838610 (storage id DS-1960076343-127.0.1.1-50010-1394127604582)
2014-03-11 08:48:18,532 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-03-11 08:48:18,534 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2014-03-11 08:48:18,536 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/**********************************
SHUTDOWN_MSG: Shutting down DataNode at prassanna-Studio-1558/127.0.1.1
Make sure your configuration and paths are correct.
This is a link for Running Hadoop on Ubuntu.
I have used this link to set up Hadoop on my machine and it works fine.
That simply shows that the datanode tried to start up but hit an exception and died.
Please check the datanode log under the logs folder in the Hadoop installation directory (unless you changed that config) for exceptions. It usually points to a configuration issue of some kind, especially network settings (/etc/hosts), but there are quite a few possibilities.
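In this particular log the exception is "Incompatible clusterIDs", which typically means the namenode was re-formatted while the datanode kept its old storage directory. One way to resolve it on a single-node setup (a sketch using the datanode directory from the log above, and assuming the blocks stored there can be discarded):
sbin/stop-dfs.sh                                                    # stop HDFS first
rm -rf /home/prassanna/usr/local/hadoop/yarn_data/hdfs/datanode/*   # wipe the stale datanode storage
sbin/start-dfs.sh                                                   # the datanode re-registers with the new clusterID
Alternatively, instead of deleting the data, you can edit the clusterID in the datanode's current/VERSION file so that it matches the namenode clusterID shown in the log.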
Refer to this:
1. Check JAVA_HOME:
readlink -f $(which java)
/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
2. If Java is not available, install it with:
sudo apt-get install default-jdk
then run step 1 again and check in the terminal:
java -version
javac -version
3. Configure SSH
Hadoop requires SSH access to manage its nodes, i.e. remote machines plus your local machine if you want to use Hadoop on it (which is what we want to do in this short tutorial). For our single-node setup of Hadoop, we therefore need to configure SSH access to localhost for the user.
sudo apt-get install ssh
sudo su hadoop
ssh-keygen -t rsa -P ""
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
ssh localhost
Download and extract hadoop-2.7.3 (choose a directory with read/write permission).
Set Environment Variable
sudo gedit .bashrc
source .bashrc
Setup Configuration Files
The following files will have to be modified to complete the Hadoop setup:
~/.bashrc (Already done)
(PATH)/etc/hadoop/hadoop-env.sh
(PATH)/etc/hadoop/core-site.xml
(PATH)/etc/hadoop/mapred-site.xml.template
(PATH)/etc/hadoop/hdfs-site.xml
gedit (PATH)/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
gedit (PATH)/etc/hadoop/core-site.xml:
The (HOME)/etc/hadoop/core-site.xml file contains configuration properties that Hadoop uses when starting up.
This file can be used to override the default settings that Hadoop starts with.
($ sudo mkdir -p /app/hadoop/tmp)
Open the file and enter the following in between the <configuration></configuration> tag:
gedit /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
(PATH)/etc/hadoop/mapred-site.xml
By default, the (PATH)/etc/hadoop/ folder contains (PATH)/etc/hadoop/mapred-site.xml.template file which has to be renamed/copied with the name mapred-site.xml:
cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
The mapred-site.xml file is used to specify which framework is being used for MapReduce.
We need to enter the following content in between the <configuration></configuration> tag:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
</configuration>
(PATH)/etc/hadoop/hdfs-site.xml
The (PATH)/etc/hadoop/hdfs-site.xml file needs to be configured for each host in the cluster that is being used.
It is used to specify the directories which will be used as the namenode and the datanode on that host.
Before editing this file, we need to create two directories which will contain the namenode and the datanode for this Hadoop installation.
This can be done using the following commands:
sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
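Since these directories are created with sudo, they end up owned by root; the user that actually runs the Hadoop daemons also needs ownership of them, otherwise the namenode and datanode cannot write there (an extra step, assuming the daemons run as the hadoop user created earlier; adjust user:group to your own setup):
sudo chown -R hadoop:hadoop /usr/local/hadoop_store   # give the daemon user ownership
ls -ld /usr/local/hadoop_store/hdfs/*                 # verify the new owner and permissions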
Open the file and enter the following content in between the <configuration></configuration> tag:
gedit (PATH)/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
</configuration>
Format the New Hadoop Filesystem
Now the Hadoop file system needs to be formatted so that we can start using it. The format command should be issued by a user with write permission, since it creates a current directory under the /usr/local/hadoop_store/ folder:
bin/hadoop namenode -format
or
bin/hdfs namenode -format
HADOOP SETUP IS DONE
Now start HDFS and YARN:
start-dfs.sh
start-yarn.sh
CHECK URL: http://localhost:50070/
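Besides the web UI, a quick command-line check that the setup worked (a sketch, assuming the single-node configuration above):
jps                              # expect NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager
hdfs dfsadmin -report            # the datanode should be listed as a live node
hdfs dfs -mkdir -p /user/$USER   # create your HDFS home directory for later jobs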
FOR STOPPING HDFS
stop-dfs.sh
stop-yarn.sh

Namenode not getting started

I was using Hadoop in pseudo-distributed mode and everything was working fine. But then I had to restart my computer for some reason. Now when I try to start the Namenode and Datanode, I find only the Datanode running. Could anyone tell me the possible reason for this problem? Or am I doing something wrong?
I tried both bin/start-all.sh and bin/start-dfs.sh.
I was facing the issue of the namenode not starting. I found a solution using the following steps:
first delete all contents from the temporary folder: rm -Rf <tmp dir> (mine was /usr/local/hadoop/tmp)
format the namenode: bin/hadoop namenode -format
start all processes again: bin/start-all.sh
You may also consider rolling back using a checkpoint (if you had it enabled).
hadoop.tmp.dir in core-site.xml defaults to /tmp/hadoop-${user.name}, which is cleaned after every reboot. Change this to some other directory that doesn't get cleaned on reboot.
The following STEPS worked for me with hadoop 2.2.0:
STEP 1 stop hadoop
hduser@prayagupd$ /usr/local/hadoop-2.2.0/sbin/stop-dfs.sh
STEP 2 remove tmp folder
hduser@prayagupd$ sudo rm -rf /app/hadoop/tmp/
STEP 3 create /app/hadoop/tmp/
hduser@prayagupd$ sudo mkdir -p /app/hadoop/tmp
hduser@prayagupd$ sudo chown hduser:hadoop /app/hadoop/tmp
hduser@prayagupd$ sudo chmod 750 /app/hadoop/tmp
STEP 4 format namenode
hduser@prayagupd$ hdfs namenode -format
STEP 5 start dfs
hduser@prayagupd$ /usr/local/hadoop-2.2.0/sbin/start-dfs.sh
STEP 6 check jps
hduser@prayagupd$ jps
11342 Jps
10804 DataNode
11110 SecondaryNameNode
10558 NameNode
In conf/hdfs-site.xml, you should have a property like
<property>
<name>dfs.name.dir</name>
<value>/home/user/hadoop/name/data</value>
</property>
The property "dfs.name.dir" allows you to control where Hadoop writes NameNode metadata.
Giving it a directory other than /tmp ensures the NameNode data isn't deleted when you reboot.
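A quick way to check that the metadata really is being written there (a sketch, assuming the dfs.name.dir value shown above and a namenode that has been formatted at least once):
ls /home/user/hadoop/name/data/current   # should contain VERSION, fsimage* and edits* files
If this directory is empty after a reboot, the metadata is being written somewhere else, most likely under /tmp.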
Open a new terminal and start the namenode using path-to-your-hadoop-install/bin/hadoop namenode
Then check using jps; the namenode should be running.
Why do most answers here assume that all data needs to be deleted, reformatted, and then restart Hadoop?
How do we know the namenode is actually stuck, rather than just taking a long time?
It will do this when there is a large amount of data in HDFS.
Check progress in logs before assuming anything is hung or stuck.
[kadmin@hadoop-node-0 logs]$ tail hadoop-kadmin-namenode-hadoop-node-0.log
...
2016-05-13 18:16:44,405 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 117/141 transactions completed. (83%)
2016-05-13 18:16:56,968 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 121/141 transactions completed. (86%)
2016-05-13 18:17:06,122 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 122/141 transactions completed. (87%)
2016-05-13 18:17:38,321 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 123/141 transactions completed. (87%)
2016-05-13 18:17:56,562 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 124/141 transactions completed. (88%)
2016-05-13 18:17:57,690 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 127/141 transactions completed. (90%)
This was after nearly an hour of waiting on a particular system.
It is still progressing each time I look at it.
Have patience with Hadoop when bringing up the system and check logs before assuming something is hung or not progressing.
In core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/yourusername/hadoop/tmp/hadoop-${user.name}</value>
</property>
</configuration>
and format the namenode with:
hdfs namenode -format
This worked for hadoop 2.8.1.
If anyone is using the Hadoop 1.2.1 version and is not able to run the namenode, go to core-site.xml and change dfs.default.name to fs.default.name.
Then format the namenode using hadoop namenode -format.
Finally run HDFS using start-dfs.sh and check for the services using jps.
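For Hadoop 1.2.1 those steps look roughly like this (a sketch, run from the Hadoop installation directory):
vi conf/core-site.xml         # make sure the property is named fs.default.name, not dfs.default.name
bin/hadoop namenode -format   # re-format the namenode
bin/start-dfs.sh              # start HDFS
jps                           # NameNode, DataNode and SecondaryNameNode should be listed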
Did you change dfs.name.dir in conf/hdfs-site.xml?
Format the namenode after you change it:
$ bin/hadoop namenode -format
$ bin/start-all.sh
If you are facing this issue after rebooting the system, the steps below will work.
As a workaround:
1) format the namenode: bin/hadoop namenode -format
2) start all processes again: bin/start-all.sh
For a permanent fix:
1) go to conf/core-site.xml and change fs.default.name to your custom one.
2) format the namenode: bin/hadoop namenode -format
3) start all processes again: bin/start-all.sh
Faced the same problem.
(1) Always check for typing mistakes when configuring the .xml files, especially the xml tags.
(2) go to the bin directory and run ./start-all.sh
(3) then run jps to check whether the processes are working
Add hadoop.tmp.dir property in core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/yourname/hadoop/tmp/hadoop-${user.name}</value>
</property>
</configuration>
and format hdfs (hadoop 2.7.1):
$ hdfs namenode -format
The default value in core-default.xml is /tmp/hadoop-${user.name}, which will be deleted after reboot.
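To see which directory is actually in effect, and whether the namenode metadata survives a reboot, you can check (a sketch, assuming the hadoop.tmp.dir value configured above):
hdfs getconf -confKey hadoop.tmp.dir                        # should print the directory from core-site.xml, not /tmp/...
ls /home/yourname/hadoop/tmp/hadoop-$USER/dfs/name/current  # the namenode metadata lives under <hadoop.tmp.dir>/dfs/name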
Try this,
1) Stop all hadoop processes : stop-all.sh
2) Remove the tmp folder manually
3) Format namenode : hadoop namenode -format
4) Start all processes : start-all.sh
If you kept the default configuration when running Hadoop, the namenode's web port would be 50070. You will need to find any processes running on this port and kill them first.
Stop all running hadoop with : bin/stop-all.sh
check all processes running on port 50070:
sudo netstat -tulpn | grep :50070 #check for any processes running on port 50070; if there are any, they will appear at the RHS of the output
sudo kill -9 <process_id> #kill_the_process.
sudo rm -r /app/hadoop/tmp #delete the temp folder
sudo mkdir /app/hadoop/tmp #recreate it
sudo chmod 777 -R /app/hadoop/tmp (777 is given for this example purpose only)
bin/hadoop namenode -format #format hadoop namenode
bin/start-all.sh #start-all hadoop services
Refer this blog
For me the following worked after I changed the directories of the namenode
and datanode in hdfs-site.xml
-- before executing the following steps, stop all services with stop-all.sh; in my case I used stop-dfs.sh to stop the dfs
In the newly configured directory, for every node (namenode and datanode), delete every folder/file inside it (in my case a 'current' directory).
delete the Hadoop temporary directory: $ rm -rf /tmp/hadoop-$USER
format the Namenode: hadoop/bin/hdfs namenode -format
start-dfs.sh
After I followed those steps, my namenode and datanodes were alive using the newly configured directory.
I ran hadoop namenode to start the namenode manually in the foreground.
From the logs I figured out that port 50070 was occupied, which is used by default for dfs.namenode.http-address. After configuring dfs.namenode.http-address to a free port in hdfs-site.xml, everything went well.
I found a solution and am sharing it for anyone who got these errors:
1. First check hdfs-site.xml under the /home/hadoop/etc/hadoop path, and
check the paths of the namenode and datanode
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
</property>
2. Check the permissions, group and user of the namenode and datanode directories at the configured paths (e.g. /home/hadoop/hadoopdata/hdfs/datanode), check whether there is any mismatch in any of them, and correct it. For example, chown -R hadoop:hadoop in_use.lock changes the user and group, and
chmod -R 755 <file_name> changes the permissions.
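Concretely, with the paths from the hdfs-site.xml above, that check-and-fix might look like this (a sketch, assuming the daemons run as the hadoop user):
ls -ld /home/hadoop/hadoopdata/hdfs/namenode /home/hadoop/hadoopdata/hdfs/datanode   # inspect owner, group and permissions
sudo chown -R hadoop:hadoop /home/hadoop/hadoopdata/hdfs                             # fix the owner and group
sudo chmod -R 755 /home/hadoop/hadoopdata/hdfs                                       # fix the permissions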
After deleting the resource manager's data folder, the problem was gone.
Even formatting could not solve this problem.
If your namenode is stuck in safe mode, you can ssh to the namenode, su to the hdfs user, and run the following command to turn off safe mode:
hdfs dfsadmin -fs hdfs://server.com:8020 -safemode leave
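To confirm the state before and after, you can also query it (assuming the same namenode address):
hdfs dfsadmin -fs hdfs://server.com:8020 -safemode get   # prints whether safe mode is ON or OFF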
Instead of formatting the namenode, maybe you can use the commands below to restart the namenode. It worked for me:
sudo service hadoop-master restart
hadoop dfsadmin -safemode leave
I was facing the same issue of the namenode not starting with the Hadoop 3.2.1 version. I did these steps to resolve the issue:
Delete the contents of the temporary folder from the namenode directory. In my case the "current" directory had been made by the root user: rm -rf (dir name)
Format the namenode: hdfs namenode -format
Start the processes again: start-dfs.sh
The directory in point #1 comes from the hdfs-site.xml file:
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///opt/hadoop/node-data/hdfs/namenode</value>
</property>
I ran into the same thing after a restart.
for hadoop-2.7.3, all I had to do was format the namenode:
<HadoopRootDir>/bin/hdfs namenode -format
Then a jps command shows:
6097 DataNode
755 RemoteMavenServer
5925 NameNode
6293 SecondaryNameNode
6361 Jps
