Having issues setting up Hadoop (installation)

The issue I'm having is that when I run bin/hadoop fs -ls it prints out all the files in the local directory that I'm in, not the files in HDFS (of which there should currently be none). Here's how I set everything up:
I've downloaded and unzipped all the 0.20.2 files into /home/micah/hadoop-install/. I've edited my conf/hdfs-site.xml with the following settings and created the appropriate directories:
<configuration>
<property>
<name>fs.default.name</name>
<value>localhost:9000</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/micah/hdfs/data</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/home/micah/hdfs/name</value>
</property>
</configuration>
I then ran bin/hadoop namenode -format followed by bin/start-dfs.sh.

Try this:
# http://www.mail-archive.com/common-user@hadoop.apache.org/msg00407.html
rm -r /tmp/hadoop*
build $HADOOP_HOME
echo export JAVA_HOME=$JAVA_HOME >> $HADOOP_HOME/conf/hadoop-env.sh
echoThenRun "$HADOOP_HOME/bin/stop-all.sh"
echoThenRun "$HADOOP_HOME/bin/hadoop namenode -format"
(build and echoThenRun appear to be helper functions from the script in the linked thread; echoThenRun presumably just prints a command before running it.)

I had a similar issue and found that my HDFS data directory permissions were wrong.
Removing group write privileges with chmod -R g-w from the data directory fixed the problem.
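For reference, a minimal sketch of that permissions fix, assuming the dfs.data.dir from the question above (/home/micah/hdfs/data); adjust the path to your own setup:
chmod -R g-w /home/micah/hdfs/data    # drop group write on the HDFS data directory
bin/stop-dfs.sh && bin/start-dfs.sh   # restart DFS so the DataNode re-checks the directory
ls -ld /home/micah/hdfs/data          # expect drwxr-xr-x rather than drwxrwxr-x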

Thanks! Doing the following resolved my issue:
rm -r /tmp/hadoop*
build $HADOOP_HOME
echo export JAVA_HOME=$JAVA_HOME >> $HADOOP_HOME/conf/hadoop-env.sh
echoThenRun "$HADOOP_HOME/bin/stop-all.sh"
echoThenRun "$HADOOP_HOME/bin/hadoop namenode -format"

Related

Hadoop - java.net.ConnectException: Connection refused

I want to connect to HDFS (on localhost) and I get an error:
Call From despubuntu-ThinkPad-E420/127.0.1.1 to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I followed all the steps in other posts, but I haven't solved my problem. I use Hadoop 2.7 and these are my configurations:
core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/despubuntu/hadoop/name/data</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
I type /usr/local/hadoop/bin/hdfs namenode -format and
/usr/local/hadoop/sbin/start-all.sh
But when I type "jps" the result is:
10650 Jps
4162 Main
5255 NailgunRunner
20831 Launcher
I need help...
Make sure that DFS, which your core-site.xml sets to port 54310, is actually started. You can check with the jps command, and you can start it with sbin/start-dfs.sh
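As a rough sketch of that check (run from your Hadoop installation directory; older releases keep the start scripts in bin/ instead of sbin/):
jps                 # should list NameNode, DataNode and SecondaryNameNode once DFS is up
sbin/start-dfs.sh   # start HDFS if those daemons are missing
jps                 # confirm the daemons appeared before retrying any hdfs command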
I guess that you didn't set up your Hadoop cluster correctly; please follow these steps:
Step 1: begin by setting up .bashrc:
vi $HOME/.bashrc
Put the following lines at the end of the file (change the Hadoop home to match yours):
# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop
# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-6-sun
# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"
# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
hadoop fs -cat $1 | lzop -dc | head -1000 | less
}
# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
Step 2: edit hadoop-env.sh as follows:
# The java implementation to use. Required.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
Step 3: now create a directory and set the required ownership and permissions:
$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
# ...and if you want to tighten up security, chmod from 755 to 750...
$ sudo chmod 750 /app/hadoop/tmp
Step 4: edit core-site.xml:
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>
Step 5: edit mapred-site.xml:
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
</property>
Step 6: edit hdfs-site.xml:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
Finally, format your HDFS (you need to do this the first time you set up a Hadoop cluster):
$ /usr/local/hadoop/bin/hadoop namenode -format
Hope this will help you.
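As a follow-up sketch to the steps above (script names assume the Hadoop 1.x-style layout used in this answer), you can start the daemons and sanity-check HDFS like this:
/usr/local/hadoop/bin/start-dfs.sh      # starts NameNode, DataNode, SecondaryNameNode
/usr/local/hadoop/bin/start-mapred.sh   # starts JobTracker and TaskTracker
jps                                     # verify all five daemons are listed
hadoop fs -ls /                         # should return without a "Connection refused" error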
I got the same issue. When you type jps you should see the NameNode, DataNode, ResourceManager and NodeManager daemons running. So just run start-all.sh; then all the daemons start and you can access HDFS.
First check whether the Java processes are running by typing the jps command on the command line. When you run jps, the following processes must be running:
DataNode
Jps
NameNode
SecondaryNameNode
If these processes are not running, first start the name node using the following command:
start-dfs.sh
This worked for me and removed the error you stated.
I was getting a similar error. Upon checking, I found that my namenode service was in a stopped state.
Check the status of the namenode: sudo status hadoop-hdfs-namenode
If it's not in a started/running state,
start the namenode service: sudo start hadoop-hdfs-namenode
Do keep in mind that it takes time before the name node service becomes fully functional after a restart. It reads all the HDFS edits into memory. You can check the progress of this in /var/log/hadoop-hdfs/ using the command tail -f /var/log/hadoop-hdfs/{Latest log file}

What's the standard way to create files in your hdfs filesystem?

I learned that I have to configure the NameNode and DataNode dir in hdfs-site.xml. So that's my hdfs-site.xml configuration on the NameNode:
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file://usr/local/hadoop-2.6.0/hadoop_data/hdfs/namenode</value>
</property>
<property>
<name>dfs.block.size</name>
<value>134217728</value>
</property>
</configuration>
I did almost the same on my DataNode and changed dfs.namenode to dfs.datanode.
Then I formatted the filesystem via
hadoop namenode -format
Everything seems to be finished without an error.
Then I wanted to create a directory in my HDFS filesystem by using:
hdfs dfs -mkdir test
And I got an error:
mkdir: `test': No such file or directory
What did I miss or what's the common process from formatting to creating files/directories with HDFS?
Well, it's quite simple.
hdfs dfs -mkdir /test
creates the directory successfully. The leading slash makes it an absolute path; a relative path like test would resolve under /user/<your-username>, which doesn't exist yet.
hdfs dfs -put myFile /test/myFile
works as well.
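If you want relative paths like the original hdfs dfs -mkdir test to work, the HDFS home directory has to exist first. A small sketch, assuming the default /user/<username> home convention:
hdfs dfs -mkdir -p /user/$(whoami)   # create your HDFS home directory (requires sufficient privileges)
hdfs dfs -mkdir test                 # now resolves to /user/<your-username>/test
hdfs dfs -ls                         # lists your HDFS home directory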
Create a directory:
hdfs dfs -mkdir directoryName
Create a new file in directory
hdfs dfs -touchz directoryName/Newfilename
Write into the newly created file in HDFS:
nano filename
Save it with Ctrl+X, then Y.
Read the newly created file from HDFS
nano fileName
Or
hdfs dfs -cat directoryName/fileName
HDFS is a non-POSIX-compliant file system, so you can't edit files directly inside HDFS. However, you can copy a file from your local system to HDFS using the following command:
hdfs dfs -put /path/in/source/system/filename /path/in/HDFS/system/destination
If you want to create multiple nested sub-directories, you should also use the -p flag:
hdfs dfs -mkdir -p /test/another_test/one_more_test

hadoop namenode,datanode,secondarynamenode are not starting up

I just downloaded the hadoop-0.20 tar and extracted it. I set JAVA_HOME and HADOOP_HOME and modified core-site.xml, hdfs-site.xml and mapred-site.xml.
I started the services. When I run jps, it shows:
Jps
JobTracker
TaskTracker
I checked the logs. They say:
2015-02-11 18:07:52,278 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = scspn0022420004.lab.eng.btc.netapp.in/10.72.40.68
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.0
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr 9 05:18:40 UTC 2009
************************************************************/
2015-02-11 18:07:52,341 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.NullPointerException
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:175)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:955)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:964)
2015-02-11 18:07:52,346 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at scspn0022420004.lab.eng.btc.netapp.in/10.72.40.68
************************************************************/
What am I doing wrong?
My Conf files as follows:
core-site
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:8020</value>
</property>
</configuration>
hdfs-site
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<!-- Immediately exit safemode as soon as one DataNode checks in.
On a multi-node cluster, these configurations must be removed. -->
<property>
<name>dfs.safemode.extension</name>
<value>0</value>
</property>
<property>
<name>dfs.safemode.min.datanodes</name>
<value>1</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/lib/hadoop-hdfs/cache/${user.name}</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/name</value>
</property>
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/namesecondary</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/data</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:8021</value>
</property>
</configuration>
Any idea?
This is what I see in the console when running start-dfs.sh:
localhost: starting secondarynamenode, logging to /root/hadoop/hadoop-0.20.0/bin/../logs/hadoop-root-secondarynamenode- hostname.out
localhost: Exception in thread "main" java.lang.NullPointerException
localhost: at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init> (SecondaryNameNode.java:115)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)
I guess that you didn't set up your Hadoop cluster correctly; please follow the same steps given in my answer to the "Connection refused" question above (set up .bashrc and hadoop-env.sh, create /app/hadoop/tmp with the right ownership and permissions, edit core-site.xml, mapred-site.xml and hdfs-site.xml, then format HDFS with /usr/local/hadoop/bin/hadoop namenode -format). Hope this will help you.
I don't use the 0.20.0 version, but are you sure the key in core-site.xml should be fs.defaultFS?
In core-default.xml it seems to be named fs.default.name.
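A quick way to double-check which key your config file actually uses; this is only a sketch and assumes $HADOOP_HOME points at the 0.20.0 install from the question:
grep -A1 'fs.default' $HADOOP_HOME/conf/core-site.xml
# For 0.20.x you would expect to see:
#   <name>fs.default.name</name>
#   <value>hdfs://localhost:8020</value>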

Incorrect configuration: namenode address dfs.namenode.rpc-address is not configured

I am getting this error when I try and boot up a DataNode. From what I have read, the RPC parameters are only used for an HA configuration, which I am not setting up (I think).
2014-05-18 18:05:00,589 INFO [main] impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(572)) - DataNode metrics system shutdown complete.
2014-05-18 18:05:00,589 INFO [main] datanode.DataNode (DataNode.java:shutdown(1313)) - Shutdown complete.
2014-05-18 18:05:00,614 FATAL [main] datanode.DataNode (DataNode.java:secureMain(1989)) - Exception in secureMain
java.io.IOException: Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
at org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddresses(DFSUtil.java:840)
at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:151)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:745)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:278)
My files look like:
[root@datanode1 conf.cluster]# cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://namenode:8020</value>
</property>
</configuration>
cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.datanode.data.dir</name>
<value>/hdfs/data</value>
</property>
<property>
<name>dfs.permissions.superusergroup</name>
<value>hadoop</value>
</property>
</configuration>
I am using the latest CDH5 distro.
Installed Packages
Name : hadoop-hdfs-datanode
Arch : x86_64
Version : 2.3.0+cdh5.0.1+567
Release : 1.cdh5.0.1.p0.46.el6
Any helpful advice on how to get past this?
EDIT: Just use Cloudera manager.
I too was facing the same issue and finally found that there was a space in the fs.default.name value. Removing the space fixed the issue. The core-site.xml above doesn't seem to have a space, so the issue may be different from what I had. My 2 cents.
These steps solved the problem for me:
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop   # note: no spaces around the equals sign
echo $HADOOP_CONF_DIR
hdfs namenode -format
hdfs getconf -namenodes
./start-dfs.sh
Check core-site.xml under the $HADOOP_INSTALL/etc/hadoop directory. Verify that the property fs.default.name is configured correctly.
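As a rough sketch, hdfs getconf can show which namenode address the client-side configuration actually resolves (available in the Hadoop 2.x line this CDH5 setup is based on):
hdfs getconf -confKey fs.defaultFS   # should print hdfs://namenode:8020 with no stray spaces
hdfs getconf -namenodes              # should print the namenode hostname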
Obviously, your core-site.xml has a configuration error.
<property>
<name>fs.defaultFS</name>
<value>hdfs://namenode:8020</value>
</property>
Your <name>fs.defaultFS</name> is set to <value>hdfs://namenode:8020</value>, but your machine's hostname is datanode1. So you just need to change namenode to datanode1 and it will be OK.
I had the exact same issue. I found a resolution by checking the environment on the Data Node:
$ sudo update-alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.my_cluster 50
$ sudo update-alternatives --set hadoop-conf /etc/hadoop/conf.my_cluster
Make sure that the alternatives are set correctly on the Data Nodes.
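A quick way to confirm the alternatives really point where you expect on each Data Node (a sketch, assuming the conf.my_cluster layout above):
sudo update-alternatives --display hadoop-conf   # shows the currently selected conf directory
ls /etc/hadoop/conf                              # should list the core-site.xml and hdfs-site.xml you edited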
Configuring the full host name in core-site.xml, masters and slaves solved the issue for me.
Old: node1 (failed)
New: node1.krish.com (succeeded)
Creating the dfs.name.dir and dfs.data.dir directories and configuring the full hostname in core-site.xml, masters and slaves solved my issue.
In my situation, I fixed it by changing the /etc/hosts config to lower case.
In my case, I had wrongly set HADOOP_CONF_DIR to another Hadoop installation.
Add to hadoop-env.sh:
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop/
This type of problem mainly arises if there is a space in the value or name of a property in any one of the following files:
core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml
Just make sure you did not put any spaces or line breaks between the opening and closing name and value tags.
Code:
<property>
<name>dfs.name.dir</name> <value>file:///home/hadoop/hadoop_tmp/hdfs/namenode</value>
<final>true</final>
</property>
I was facing the same issue; formatting HDFS solved it. Don't format HDFS if you have important metadata.
Command for formatting HDFS: hdfs namenode -format
(Screenshot in the original post: when the namenode was not working)
(Screenshot in the original post: after formatting HDFS)
Check your /etc/hosts file:
There must be a line like the one below (if not, add it):
127.0.0.1 namenode
Replace 127.0.0.1 with your namenode's IP.
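A small sketch for verifying the entry before restarting the daemons; nc is just one option for the port check, and 8020 is the RPC port used elsewhere in this thread:
getent hosts namenode   # should print the IP you put in /etc/hosts
nc -zv namenode 8020    # the namenode RPC port should be reachable once the namenode is up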
Add the line below to hadoop-env.cmd:
set HADOOP_HOME_WARN_SUPPRESS=1

Namenode not getting started

I was using Hadoop in pseudo-distributed mode and everything was working fine. But then I had to restart my computer for some reason. Now when I try to start the Namenode and Datanode, I find only the Datanode running. Could anyone tell me a possible reason for this problem? Or am I doing something wrong?
I tried both bin/start-all.sh and bin/start-dfs.sh.
I was facing the issue of the namenode not starting. I found a solution using the following:
First delete all contents from the temporary folder: rm -Rf <tmp dir> (mine was /usr/local/hadoop/tmp)
Format the namenode: bin/hadoop namenode -format
Start all processes again: bin/start-all.sh
You may consider rolling back as well using checkpoint (if you had it enabled).
hadoop.tmp.dir in core-site.xml defaults to /tmp/hadoop-${user.name}, which is cleaned after every reboot. Change this to some other directory that doesn't get cleaned on reboot.
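For example, a rough sketch of moving it somewhere persistent; the path below is only an illustration, and the property itself goes into core-site.xml as hadoop.tmp.dir:
sudo mkdir -p /var/lib/hadoop/tmp        # example persistent location that is not cleaned on reboot
sudo chown $USER: /var/lib/hadoop/tmp    # owned by the user that runs the Hadoop daemons
# then set <name>hadoop.tmp.dir</name> / <value>/var/lib/hadoop/tmp</value> in core-site.xml
# and format the namenode once before restarting the daemons.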
The following steps worked for me with Hadoop 2.2.0.
STEP 1: stop Hadoop
hduser@prayagupd$ /usr/local/hadoop-2.2.0/sbin/stop-dfs.sh
STEP 2: remove the tmp folder
hduser@prayagupd$ sudo rm -rf /app/hadoop/tmp/
STEP 3: create /app/hadoop/tmp/
hduser@prayagupd$ sudo mkdir -p /app/hadoop/tmp
hduser@prayagupd$ sudo chown hduser:hadoop /app/hadoop/tmp
hduser@prayagupd$ sudo chmod 750 /app/hadoop/tmp
STEP 4: format the namenode
hduser@prayagupd$ hdfs namenode -format
STEP 5: start dfs
hduser@prayagupd$ /usr/local/hadoop-2.2.0/sbin/start-dfs.sh
STEP 6: check jps
hduser@prayagupd$ jps
11342 Jps
10804 DataNode
11110 SecondaryNameNode
10558 NameNode
In conf/hdfs-site.xml, you should have a property like
<property>
<name>dfs.name.dir</name>
<value>/home/user/hadoop/name/data</value>
</property>
The property "dfs.name.dir" allows you to control where Hadoop writes NameNode metadata.
Giving it a directory other than /tmp makes sure the NameNode data isn't deleted when you reboot.
Open a new terminal and start the namenode using path-to-your-hadoop-install/bin/hadoop namenode
Then check using jps; the namenode should be running.
Why do most answers here assume that all data needs to be deleted and reformatted, and Hadoop then restarted?
How do we know the namenode is not making progress, rather than just taking a lot of time?
It will take a while when there is a large amount of data in HDFS.
Check progress in the logs before assuming anything is hung or stuck.
[kadmin@hadoop-node-0 logs]$ tail hadoop-kadmin-namenode-hadoop-node-0.log
...
2016-05-13 18:16:44,405 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 117/141 transactions completed. (83%)
2016-05-13 18:16:56,968 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 121/141 transactions completed. (86%)
2016-05-13 18:17:06,122 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 122/141 transactions completed. (87%)
2016-05-13 18:17:38,321 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 123/141 transactions completed. (87%)
2016-05-13 18:17:56,562 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 124/141 transactions completed. (88%)
2016-05-13 18:17:57,690 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 127/141 transactions completed. (90%)
This was after nearly an hour of waiting on a particular system.
It is still progressing each time I look at it.
Have patience with Hadoop when bringing up the system and check logs before assuming something is hung or not progressing.
In core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/yourusername/hadoop/tmp/hadoop-${user.name}</value>
</property>
</configuration>
and format the namenode with:
hdfs namenode -format
This worked for Hadoop 2.8.1.
If anyone is using Hadoop 1.2.1 and is not able to run the namenode, go to core-site.xml and change dfs.default.name to fs.default.name.
Then format the namenode using hadoop namenode -format.
Finally, run HDFS using start-dfs.sh and check for the services using jps.
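A rough sketch of those three steps, assuming $HADOOP_HOME points at the 1.2.1 install:
$HADOOP_HOME/bin/hadoop namenode -format   # format after fixing the property name
$HADOOP_HOME/bin/start-dfs.sh              # bring up HDFS
jps                                        # NameNode should now be listed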
Did you change conf/hdfs-site.xml dfs.name.dir?
Format the namenode after you change it.
$ bin/hadoop namenode -format
$ bin/start-all.sh
If you are facing this issue after rebooting the system, the steps below will work fine.
For a workaround:
1) Format the namenode: bin/hadoop namenode -format
2) Start all processes again: bin/start-all.sh
For a permanent fix:
1) Go to conf/core-site.xml and change fs.default.name to your custom one.
2) Format the namenode: bin/hadoop namenode -format
3) Start all processes again: bin/start-all.sh
Faced the same problem.
(1) Always check for typing mistakes when configuring the .xml files, especially the XML tags (a quick check is sketched below).
(2) Go to the bin directory and type ./start-all.sh
(3) Then type jps to check whether the processes are running.
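For point (1), a quick check that catches malformed XML before the daemons even start; this sketch assumes xmllint is installed and a Hadoop 2.x etc/hadoop layout:
xmllint --noout $HADOOP_HOME/etc/hadoop/core-site.xml \
                $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
                $HADOOP_HOME/etc/hadoop/mapred-site.xml   # prints nothing when the files are well-formed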
Add the hadoop.tmp.dir property in core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/yourname/hadoop/tmp/hadoop-${user.name}</value>
</property>
</configuration>
and format hdfs (hadoop 2.7.1):
$ hdfs namenode -format
The default value in core-default.xml is /tmp/hadoop-${user.name}, which will be deleted after reboot.
Try this:
1) Stop all Hadoop processes: stop-all.sh
2) Remove the tmp folder manually
3) Format the namenode: hadoop namenode -format
4) Start all processes: start-all.sh
If you kept the default configuration when running Hadoop, the web UI port for the namenode would be 50070. You will need to find any processes running on this port and kill them first.
Stop all running Hadoop with: bin/stop-all.sh
Check for any processes running on port 50070:
sudo netstat -tulpn | grep :50070   # if any process is using the port, it will appear in the output
sudo kill -9 <process_id>           # kill the process
sudo rm -r /app/hadoop/tmp          # delete the temp folder
sudo mkdir /app/hadoop/tmp          # recreate it
sudo chmod -R 777 /app/hadoop/tmp   # 777 is used for this example's purpose only
bin/hadoop namenode -format         # format the Hadoop namenode
bin/start-all.sh                    # start all Hadoop services
Refer to this blog.
For me, the following worked after I changed the directories of the namenode
and datanode in hdfs-site.xml.
Before executing the following steps, stop all services with stop-all.sh (in my case I used stop-dfs.sh to stop DFS).
In the newly configured directory, for every node (namenode and datanode), delete every folder/file inside it (in my case a 'current' directory).
Delete the Hadoop temporary directory: rm -rf /tmp/hadoop-$USER
Format the namenode: hadoop/bin/hdfs namenode -format
start-dfs.sh
After I followed those steps my namenode and datanodes were alive, using the newly configured directories.
I ran hadoop namenode to start the namenode manually in the foreground.
From the logs I figured out that port 50070 was occupied; it is used by default for dfs.namenode.http-address. After configuring dfs.namenode.http-address in hdfs-site.xml, everything went well.
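A rough sketch of diagnosing that kind of port clash and moving the HTTP port; the 50071 value is just an example:
sudo netstat -tulpn | grep :50070   # see which process already holds the default namenode HTTP port
# then either stop that process, or add to hdfs-site.xml:
#   <name>dfs.namenode.http-address</name> <value>0.0.0.0:50071</value>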
I got the solution and will share what worked for me when I got these errors:
1. First check hdfs-site.xml under the /home/hadoop/etc/hadoop path, and
check the paths of the namenode and datanode:
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
</property>
2. Check the permissions, group and user of the namenode and datanode at that particular path (/home/hadoop/hadoopdata/hdfs/datanode), and check if there are any problems with them; if there is any mismatch, correct it. For example, chown -R hadoop:hadoop in_use.lock changes the user and group, and
chmod -R 755 <file_name> changes the permissions.
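An example for the paths shown above; the hadoop:hadoop owner is an assumption and should match whichever user actually runs the daemons:
sudo chown -R hadoop:hadoop /home/hadoop/hadoopdata/hdfs/namenode /home/hadoop/hadoopdata/hdfs/datanode
sudo chmod -R 755 /home/hadoop/hadoopdata/hdfs
# restart DFS afterwards and check the NameNode/DataNode logs for remaining permission warnings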
After deleting the resource manager's data folder, the problem was gone.
Even formatting could not solve this problem.
If your namenode is stuck in safe mode, you can ssh to the namenode, su to the hdfs user and run the following command to turn off safe mode:
hdfs dfsadmin -fs hdfs://server.com:8020 -safemode leave
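For reference, a small sketch; checking the current state first avoids leaving safe mode unnecessarily (the server.com:8020 address is the example from the command above):
hdfs dfsadmin -fs hdfs://server.com:8020 -safemode get     # prints whether safe mode is ON or OFF
hdfs dfsadmin -fs hdfs://server.com:8020 -safemode leave   # only needed while it reports ON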
Instead of formatting the namenode, maybe you can use the commands below to restart the namenode. They worked for me:
sudo service hadoop-master restart
hadoop dfsadmin -safemode leave
I was facing the same issue of the namenode not starting with the Hadoop 3.2.1 version. I did these steps to resolve the issue:
Delete the contents of the temporary folder from the name node directory. In my case the "current" directory was made by the root user: rm -rf (dir name)
Format the namenode: hdfs namenode -format
Start the processes again: start-dfs.sh
The directory from point #1 is the one configured in hdfs-site.xml:
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///opt/hadoop/node-data/hdfs/namenode</value>
</property>
I ran into the same thing after a restart.
For hadoop-2.7.3, all I had to do was format the namenode:
<HadoopRootDir>/bin/hdfs namenode -format
Then a jps command shows
6097 DataNode
755 RemoteMavenServer
5925 NameNode
6293 SecondaryNameNode
6361 Jps
