I am new to Hadoop and I'm going to configure a Hadoop cluster. The version of Hadoop is 3.1.3. I want to run the NameNode, DataNode, and NodeManager on host hadoop102; the DataNode, ResourceManager, and NodeManager on host hadoop103; and the SecondaryNameNode, DataNode, and NodeManager on hadoop104.
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop102:8020</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/module/hadoop-3.1.3/data</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.http-address</name>
<value>hadoop102:9870</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop104:9868</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop103</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
workers
hadoop102
hadoop103
hadoop104
I upload the test file from host hadoop102 with the command
hadoop fs -put $HADOOP_HOME/wcinput/word.txt /input
Why is the file only available on hadoop102? I thought the file would be copied to hadoop103 and hadoop104 in their local file systems.
File Information
You need to know that HDFS is not a simple replicated file system: putting a file into HDFS does not mean it will show up on the data nodes as a regular file (under the / filesystem, for example).
HDFS splits the file into blocks, and these blocks are replicated across your cluster according to the configured replication factor.
When you run -copyFromLocal or hdfs dfs -put, what actually happens is that the file is split into blocks and those blocks are sent to the data nodes in a replicated fashion.
So if one node goes down, you can still retrieve your file.
But where is my file? It will not sit in your machines' local filesystem as a plain file; it is stored as blocks inside the data nodes' storage directories.
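If /input already existed as a directory, the upload from the question lands at /input/word.txt, and you can ask the NameNode where its blocks actually live, for example:
hdfs fsck /input/word.txt -files -blocks -locations
This lists each block of the file together with the datanodes holding its replicas.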
How can you configure the number of replicas?
You can set dfs.replication to 3 in hdfs-site.xml.
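For example, in the same hdfs-site.xml shown above:
<property>
<name>dfs.replication</name>
<value>3</value>
</property>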
You can also set the number of replicas for a single file:
hadoop fs -setrep -w 3 /my/file
Or change the replication factor of all the files under a directory:
hadoop fs -setrep -R -w 3 /my/dir
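To verify the current replication factor of a file, you can print it with the stat command (reusing the path from the example above):
hadoop fs -stat %r /my/file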
Related
I have Hadoop 3.2.1 installed on Ubuntu 16.04 LTS, and my cluster has 18 datanodes and 1 master.
After running:
$ start-dfs.sh
$ start-yarn.sh
$ jps
On master I get the following:
ResourceManager
NameNode
SecondaryNameNode
jps
And on datanodes:
DataNode
jps
All the nodes seem to be live:
NameNode Overview Web Page
But when I open the Cluster Overview page, none of my datanodes seem to be active:
Cluster Overview
My configurations files:
core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop-3.2.1/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop-master:9000</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/home/hadoop/hadoop-3.2.1/data/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/hadoop/hadoop-3.2.1/data/datanode</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>
The namenode and datanode directories exist on every host (master and datanodes).
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop-master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services </name>
<value> mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
</configuration>
Also, I have configured hadoop-env.sh with the JAVA_HOME path, and all the other variables are in the .bashrc file (on every host as well).
I have modified the /etc/hosts file to include all the hosts with their IPs and hostnames and finally I have also modified the workers file to include all the IPs of the datanodes.
The first time I formatted the NameNode, the directories in hdfs-site.xml were wrong (I had the datanode dir twice), so HDFS made its own directories under /tmp/hdfs/ (if I remember correctly). But I fixed this by formatting the NameNode again with the correct directories.
I have installed Hadoop 2.7.3 in pseudo-distributed mode on a Mac and did all the configuration specified in the Pluralsight course. I copied a CSV file from the local filesystem to HDFS, but the next day when I searched for the file, it was no longer in HDFS; it had been removed automatically. Is there any other configuration setting I need so that my files are not lost?
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
Thanks,
Add these properties to hdfs-site.xml
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/username/hadoop-dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/username/hadoop-dfs/data</value>
</property>
The metadata and data blocks are stored under /tmp by default, because the default value of hadoop.tmp.dir is a directory under /tmp, and the contents of /tmp are deleted on reboot.
After adding these properties, format the namenode and start the services.
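For example (assuming the Hadoop bin and sbin directories are on your PATH):
hdfs namenode -format
start-dfs.sh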
I've been trying to find out how to increase the capacity of HDFS in Hadoop 2.7.2 with Spark 2.0.0.
I read this link.
But I don't understand it. Here is my core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>hadoop_eco/hadoop/tmp</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://com1:9000</value>
</property>
</configuration>
and hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>hadoop_eco/hadoop/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>hadoop_eco/hadoop/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>
When I run Spark with 1 namenode and 10 datanodes, I get this error message:
org.apache.hadoop.hdfs.StateChange: DIR* completeFile:
/user/spark/_temporary/0/_temporary/attempt_201611141313_0001_m_000052_574/part-00052
is closed by DFSClient_NONMAPREDUCE_1638755846_140
I couldn't identify this error, but it may be related to a lack of disk capacity.
My configured HDFS capacity is 499.76 GB, and each datanode's capacity is 49.98 GB.
So, is there a way to increase the capacity of HDFS?
I solved it.
It is easy to change the capacity of HDFS.
I changed dfs.datanode.data.dir in hdfs-site.xml:
<property>
<name>dfs.datanode.data.dir</name>
<value>file://"your directory path"</value>
</property>
and then ran these commands:
hadoop namenode -format
stop-all.sh
start-all.sh
Finally, check your HDFS capacity using hdfs dfsadmin -report.
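As a side note, dfs.datanode.data.dir also accepts a comma-separated list of directories, so another way to grow capacity is to add an extra disk on each datanode rather than replacing the existing directory; the paths below are only placeholders:
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///data/disk1/hdfs/data,file:///data/disk2/hdfs/data</value>
</property>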
I get the error
Cannot create directory /home/hadoop/hadoopinfra/hdfs/namenode/current
while trying to install Hadoop on my local Mac.
What could be the reason for this? Just for reference, I'm putting my XML files below:
mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/namenode </value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/datanode </value>
</property>
</configuration>
core-site.xml:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/Cellar/hadoop/hdfs/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
I think my problem lies in my hdfs-site.xml file, but I'm not sure how to pinpoint/change it.
I'm using this tutorial, and "hadoop" in the file path is replaced by my username.
Possible error: misconfiguration of the hdfs-site.xml file
This happened to me when I was following a setup tutorial. The contents of my hdfs-site.xml were:
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop/data/nameNode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop/data/dataNode</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
Only then did I realize that the text hadoop in the above file corresponds to the user name; in my case, it had to be replaced with hduser. Once both occurrences of hadoop were replaced with hduser, the hdfs namenode -format command worked fine.
I had this problem too and it was a permission problem. I just did:
sudo chmod 777 /home/hadoop/hadoopinfra/hdfs/namenode/
and it worked!
In the step where you need to verify the Hadoop installation, instead of 'hdfs namenode -format' use '/usr/local/hadoop/bin/hdfs namenode -format'.
Found this answer from:
hadoop java.io.IOException: while running namenode -format
If you are using plain Apache Hadoop rather than a third-party distro, add the current user to the hadoop group and retry formatting the namenode.
sudo usermod -a -G hadoop <current-username>
If you are using a third-party Hadoop distro such as Cloudera, Hortonworks, or MapR, switch to the root user and then to the hdfs user; formatting the namenode will then succeed.
$ sudo -i
$ su - hdfs
$ hdfs namenode -format
Try running the Hadoop command with sudo.
What is the right configuration of the hdfs-site.xml file when configuring Hadoop?
On all the websites I see this:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
I used the same configuration but was unable to start the datanode.
Later I changed the datanode configuration to
<name>dfs.datanode.name.dir</name>
instead of
<name>dfs.datanode.data.dir</name>
and it worked.
Which one is right, name.dir or data.dir?
Because all the websites say data.dir, but that does not work in my case.
Thanks, guys.
The options are dfs.namenode.name.dir and dfs.datanode.data.dir. The first defines the directory where namenode information is held; the second defines where datanode information is held.
If the node is a namenode, you need the namenode folder configuration. If it's a datanode, you need the datanode folder configuration. If it's a single-node cluster, you need both, as the node acts as both namenode and datanode.
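For reference, a minimal single-node hdfs-site.xml using the current property names could look like this (reusing the directory paths from the question):
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
</configuration>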
If that doesn't work, what is the error message you get? Have you formatted the namenode?