Memory for Namenode(s) in Hadoop

Environment: The production cluster has two NameNodes (one active, one standby), and these nodes use SAS drives in a RAID-1 configuration. They run nothing but the master services (NameNode and Standby NameNode). Each has 256GB of RAM, while the data nodes (where most of the processing happens) have only 128GB.
My question: Why do Hadoop's master nodes have such high RAM, and not the DataNodes, when most of the processing is done where the data lives?
P.S. As per the Hadoop rule of thumb, we only require 1GB of memory for every 1 million files.

The NameNode stores the entire filesystem namespace, i.e. the references for every file and block reported by all the DataNodes, in memory.
The DataNode process itself doesn't need much memory; it's the YARN NodeManagers (and the containers they launch) that do.
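Putting numbers on that rule of thumb (a rough sketch, not a sizing guide): at 1GB of heap per million files, a namespace of around 150-200 million files and blocks already implies a 150-200GB NameNode heap, which is what the 256GB on the masters is provisioned for. The heap is usually raised in hadoop-env.sh; the variable below is the Hadoop 2.x name (Hadoop 3 renames it HDFS_NAMENODE_OPTS) and the 150g value is purely illustrative:

    # hadoop-env.sh on the NameNode hosts (illustrative heap size)
    export HADOOP_NAMENODE_OPTS="-Xms150g -Xmx150g ${HADOOP_NAMENODE_OPTS}"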

Related

HDP cluster with RAID?

What is your experience with RAID 1 on an HDP cluster?
I have two options in mind:
Set up RAID 1 for the master and ZooKeeper nodes, and don't use RAID at all on slave nodes such as Kafka brokers, HBase RegionServers, and YARN NodeManagers.
Even if I lose one slave node, I will still have two other replicas.
In my opinion, RAID will only slow down my cluster.
Alternatively, set everything up with RAID 1 regardless.
What do you think about it? What is your experience with HDP and RAID?
What do you think about using RAID 0 for slave nodes?
I'd recommend no RAID at all on Hadoop hosts. There is one caveat: if you are running services like Oozie and the Hive metastore that use a relational DB behind the scenes, RAID may well make sense on the DB host.
On a master node, assuming you have the NameNode, ZooKeeper, etc., the redundancy is generally built into the service. For NameNodes in an HA pair, all the metadata is stored on both NameNodes. For ZooKeeper, if you lose one node, the other two nodes have all the information.
ZooKeeper likes fast disks; ideally dedicate a full disk to it. If you have NameNode HA, give the NameNode edits directory and each JournalNode a dedicated disk too.
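As a hedged illustration of that layout (the mount points below are hypothetical), these are the hdfs-site.xml properties involved:

    <!-- On the NameNodes: fsimage and edits on a dedicated disk -->
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:///mnt/nn-disk/namenode</value>
    </property>

    <!-- On the JournalNodes: shared edits on their own disk -->
    <property>
      <name>dfs.journalnode.edits.dir</name>
      <value>/mnt/jn-disk/journal</value>
    </property>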
For the slave nodes, the DataNode will write across all disks, effectively striping the data anyway. Each write is at most the HDFS block size, so if you were writing a large file, you could get 128MB on disk 1, then the next 128MB on disk 2, and so on.

Hadoop multi node cluster

I am a newbie to Hadoop. Please correct me if I am asking nonsense and help me to solve this problem :).
I installed and configured a two-node Hadoop cluster (YARN).
Master node : 2TB HDD, 4GB RAM
Slave node : 500GB HDD, 4GB RAM
Datanode:
Master node only (Not keeping replicated data in Slave node)
Map/Reduce :
Master node & Slave node.
Out of 10TB of data, I uploaded 2TB to the master node (DataNode). I am using the slave node for Map/Reduce only (to use 100% of the slave node's CPU for running queries).
My questions:
If I add a new 2TB HDD to the master node and want to upload 2TB more to it, how can I use both HDDs (the data on the old HDD and the new HDD on the master)? Is there any way to specify multiple HDD paths in hdfs-site.xml?
Do I need to add a 4TB HDD to the slave node (with all the data on the master) to use 100% of the slave's CPU? Or can the slave read data from the master and run Map/Reduce jobs on it?
If I add 4TB to the slave and upload data to Hadoop, will that create any replication on the master (duplicates)? Can I access all the data on the primary HDD of the master and the primary HDD of the slave? Will the queries use 100% of the CPU of both nodes if I do this?
Overall, if I have 10TB of data, what is the proper way to configure a two-node Hadoop cluster? What specifications (for master and DataNode) should I use to run Hive queries fast?
I got stuck. I really need your suggestions and help.
Thanks a ton in advance.
Please find the answers below:
Provide a comma-separated list of directories in hdfs-site.xml (see the snippet after this list). Source: https://www.safaribooksonline.com/library/view/hadoop-mapreduce-cookbook/9781849517287/ch02s05.html
No, you don't need to add an HDD on the slave to use 100% of its CPU. Under the current configuration, the NodeManager running on the slave will read data from the DataNode running on the master (over the network). This is not efficient in terms of data locality, but it doesn't affect processing throughput; it only adds latency due to the network transfer.
No. The replication factor (the number of copies to be stored) is independent of the number of DataNodes. The default replication factor can be changed in hdfs-site.xml using the property dfs.replication, and you can also configure it on a per-file basis (also shown after this list).
You will need at least 10TB of storage across your cluster (all the DataNodes combined, with replication factor 1). For a production system I would recommend a replication factor of 3 (to handle node failures), that is 10 * 3 = 30TB of storage across at least 3 nodes. Since 10TB is still small in Hadoop terms, have 3 nodes, each with a 2- or 4-core processor and 4 to 8GB of memory. Configure this as: node1: NameNode + DataNode + NodeManager; node2: ResourceManager + DataNode + NodeManager; node3: DataNode + NodeManager.
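For the multiple-HDD question and the replication factor above, a minimal hdfs-site.xml sketch (the /data1 and /data2 mount points are hypothetical placeholders for the old and new drives):

    <!-- On the DataNode: one directory per physical disk, comma-separated -->
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/data1/hdfs/dn,/data2/hdfs/dn</value>
    </property>

    <!-- Default number of copies kept for newly written files -->
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>

Replication can also be changed per file afterwards, e.g. hdfs dfs -setrep -w 2 /path/to/file.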

Data division on Addition of node to distributed System

Suppose I have a distributed network of computers with, say, 1000 storage nodes.
Now, if a new node is added, what should be done?
Should the data now be divided equally across 1001 nodes?
Also, would the answer change if there were 10 nodes instead of 1000?
The client machine first splits the file into blocks, say Block A and Block B. The client then contacts the NameNode to ask where to place these blocks. The NameNode replies with a list of DataNodes to write to, generally choosing DataNodes close to the client on the network.
The client writes each block to the first DataNode in its list, and that DataNode replicates the block to the other DataNodes in the pipeline. The NameNode keeps the information about files and their associated blocks.
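That negotiation is hidden behind the client API; the application just writes to a stream. A minimal Java sketch, assuming a Hadoop 2.x client with core-site.xml/hdfs-site.xml on the classpath (the path /tmp/example.dat is only a placeholder):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteSketch {
        public static void main(String[] args) throws Exception {
            // Picks up core-site.xml / hdfs-site.xml from the classpath
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Splitting into blocks, asking the NameNode for target DataNodes,
            // and the DataNode replication pipeline all happen behind create()
            try (FSDataOutputStream out = fs.create(new Path("/tmp/example.dat"))) {
                out.writeBytes("sample payload\n");
            }
            fs.close();
        }
    }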
HDFS will not move blocks from old DataNodes to new DataNodes to balance the cluster when a DataNode is added. To do this, you need to run the balancer.
The balancer program is a Hadoop daemon that redistributes blocks by moving them
from over utilized datanodes to underutilized datanodes, while adhering to the block replica placement policy that makes data loss unlikely by placing block replicas on different racks. It moves blocks until the cluster is deemed to be balanced, which means that the utilization of every datanode (ratio of used space on the node to total capacity of the node) differs from the utilization of the cluster (ratio of used space on the cluster to total capacity of the cluster) by no more than a given threshold percentage.
Reference: Hadoop Definitive Guide 3rd edition Page No 350
As a Hadoop admin, you should schedule a balancer job once a day to keep blocks balanced across the cluster.
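A hedged example of what that daily job runs (the 10% threshold and the cron schedule are only illustrative):

    # Move blocks until every DataNode's utilization is within 10 percentage
    # points of the overall cluster utilization (older releases: hadoop balancer)
    hdfs balancer -threshold 10

    # Example cron entry to run it nightly at 02:00
    0 2 * * * hdfs balancer -threshold 10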
Useful link related to balancer:
http://www.swiss-scalability.com/2013/08/hadoop-hdfs-balancer-explained.html
http://www.cloudera.com/content/cloudera/en/documentation/cdh4/latest/CDH4-Installation-Guide/cdh4ig_balancer.html

Size of the NAMENODE and DATANODE in hadoop

What is the size of the NameNode and DataNode, and are a block and a DataNode different things in Hadoop?
If an input file is 200MB in size, how many DataNodes will be created and how many blocks will be created?
NameNode, SecondaryNameNode, JobTracker, and DataNode are components of a Hadoop cluster.
Blocks are how data is stored on HDFS.
A DataNode can simply be seen as a machine on the network with a large hard disk (lots of storage, in TBs), some RAM, and a processor.
Just like you can store many files on a hard disk, you can do that here as well.
Now, if you want to store a 200MB file on a Hadoop
v1.x system or lower, it will be split into 4 blocks at the default 64MB block size;
v2.x system or higher, it will be split into 2 blocks at the default 128MB block size (the arithmetic is spelled out below).
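For reference, the block count is just a ceiling division over the default block size (64MB in 1.x, 128MB in 2.x):

    blocks = ceil(file_size / block_size)
    ceil(200 MB / 64 MB)  = 4 blocks  (3 x 64 MB + 1 x 8 MB)
    ceil(200 MB / 128 MB) = 2 blocks  (1 x 128 MB + 1 x 72 MB)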
After this short explanation, I'd refer you to #charles_Babbage's suggestion to go and start with a book or tutorials on YouTube.

Hadoop Cluster Failover

I have some questions about the Hadoop Cluster datanode failover:
1: What happens if the link between the NameNode and a DataNode (or between two DataNodes) goes down while the Hadoop cluster is processing some data? Does the Hadoop cluster have anything out of the box (OOTB) to recover from this problem?
2: What happens if one DataNode goes down while the Hadoop cluster is processing some data?
Also, another question about Hadoop cluster hardware configuration: let's say we will use our Hadoop cluster to process 100GB of log files each day. How many DataNodes do we need to set up, and what hardware configuration (e.g. CPU, RAM, hard disk) should each DataNode have?
1: What happens if the link between the NameNode and a DataNode (or between two DataNodes) goes down while the Hadoop cluster is processing some data? Does the Hadoop cluster have anything out of the box (OOTB) to recover from this problem?
The NameNode will stop receiving heartbeats from that node and will therefore consider it dead. In such a case, the tasks running on that node will be rescheduled on another node that holds a replica of the data.
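For reference, "dead" is governed by two hdfs-site.xml properties. With the usual defaults shown below, a DataNode is declared dead after roughly 2 x recheck-interval + 10 x heartbeat-interval, i.e. about 10.5 minutes:

    <!-- hdfs-site.xml (default values shown) -->
    <property>
      <name>dfs.heartbeat.interval</name>
      <value>3</value>      <!-- DataNode heartbeat period, in seconds -->
    </property>
    <property>
      <name>dfs.namenode.heartbeat.recheck-interval</name>
      <value>300000</value> <!-- NameNode recheck period, in milliseconds -->
    </property>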
2: What happens if one DataNode goes down while the Hadoop cluster is processing some data?
Same as above.
For the second part of your question:
It totally depends on your data, the kind of processing you are going to perform, and a few other things. 100GB is not a suitable candidate for MapReduce processing in the first place, but if you still need it, any decent machine would be sufficient to process 100GB of data.
As a rule of thumb you can consider:
RAM: 1GB of RAM for each 1 million HDFS blocks, plus some additional headroom for other things.
CPU: based entirely on your needs.
Disk: 3 times your data size (if the replication factor is 3), plus some additional space for things like temporary files and other apps (see the worked example after this list). JBOD is preferable.
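Applying the disk rule to the 100GB-per-day figure from the question (the 25% overhead allowance is an assumption for illustration):

    100 GB/day of raw logs
    x 3 (replication factor)             = 300 GB of HDFS capacity per day
    + ~25% for temp files / other apps   ~ 375 GB of raw disk consumed per day

Multiply by the number of days you intend to retain the logs to get the total capacity to provision.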
Frankly speaking, the process is a lot more involved. I would strongly suggest you go through this link to get a proper idea.
I would start with a cluster of 5 machines:
1 * Master (NN + JT):
Disk: 3 * 1TB hard disks in a JBOD configuration (1 for the OS, 2 for the FS image)
CPU: 2 quad-core CPUs, running at least 2-2.5GHz
RAM: 32GB of RAM
3 * Slaves (DN + TT):
Disk: 3 * 2TB hard disks in a JBOD (Just a Bunch Of Disks) configuration
CPU: 2 quad-core CPUs, running at least 2-2.5GHz
RAM: 16GB of RAM
1 * SNN:
I would keep it the same as the master machine.
Depending on whether the NameNode or a DataNode is down, the work will be rerouted to different machines. HDFS was specifically designed for this, so yes, this recovery is definitely out of the box.
If there are other DataNodes available, the job is transferred to them.
100GB is not large enough to justify using Hadoop. Don't use Hadoop unless you absolutely need to.