Hadoop multi node cluster

I am a newbie to Hadoop. Please correct me if I am asking nonsense and help me to solve this problem :).
I installed and configured a two-node Hadoop cluster (YARN).
Master node: 2TB HDD, 4GB RAM
Slave node: 500GB HDD, 4GB RAM
DataNode:
Master node only (not keeping replicated data on the slave node)
Map/Reduce:
Master node & slave node.
Out of 10TB of data, I uploaded 2TB to the master node (DataNode). I am using the slave node for Map/Reduce only (to use 100% of the slave node's CPU for running queries).
My questions:
If I add a new 2TB HDD to the master node and want to upload 2TB more to it, how can I use both HDDs (data on the old HDD and the new HDD on the master)? Is there any way to give multiple HDD paths in hdfs-site.xml?
Do I need to add a 4TB HDD to the slave node (with all the data that is on the master) to use 100% of the slave's CPU? Or can the slave access data from the master and run Map/Reduce jobs?
If I add 4TB to the slave and upload data to Hadoop, will that create any replication on the master (duplicates)? Can I access all the data on the primary HDD of the master and the primary HDD of the slave? Do queries use 100% of the CPU of both nodes if I do this?
As a whole, if I have 10TB of data, what is the proper way to configure a two-node Hadoop cluster? What specifications (for the master and the data node) should I use to run Hive queries fast?
I got stuck. I really need your suggestions and help.
Thanks a ton in advance.

Please find the answers below:
Provide a comma-separated list of directories in hdfs-site.xml. Source: https://www.safaribooksonline.com/library/view/hadoop-mapreduce-cookbook/9781849517287/ch02s05.html
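For example, a minimal sketch of that setting, assuming the existing disk is mounted at /data/1 and the new 2TB disk at /data/2 (the mount points are hypothetical):

    <!-- hdfs-site.xml on the master (DataNode) -->
    <property>
      <name>dfs.datanode.data.dir</name>
      <!-- comma-separated list; the DataNode spreads new blocks across all listed directories -->
      <value>/data/1/hdfs/data,/data/2/hdfs/data</value>
    </property>

Existing blocks stay where they are; after a DataNode restart, new blocks are written across both directories.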
No, you don't need to add an HDD on the slave to use 100% of its CPU. Under the current configuration, the NodeManager running on the slave will read data from the DataNode running on the master (over the network). This is not efficient in terms of data locality, but it doesn't affect the processing throughput; it will just add latency due to the network transfer.
No. The replication factor (the number of copies to be stored) is independent of the number of data nodes. The default replication factor can be changed in hdfs-site.xml using the property dfs.replication. You can also configure this on a per-file basis.
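A minimal sketch of the cluster-wide default in hdfs-site.xml (the value is only an example, matching your current single-DataNode setup):

    <property>
      <name>dfs.replication</name>
      <!-- default number of replicas for newly written files -->
      <value>1</value>
    </property>

On a per-file basis you can change it later with hdfs dfs -setrep -w 2 /path/to/file (the path is just a placeholder).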
You will need at least 10TB of storage across your cluster (all data nodes combined, with a replication factor of 1). For a production system I would recommend a replication factor of 3 (to handle node failures), that is 10*3 = 30TB of storage across at least 3 nodes. Since this is still a small cluster in Hadoop terms, 3 nodes each with a 2- or 4-core processor and 4 to 8 GB of memory will do. Configure this as: node1: NameNode + DataNode + NodeManager; node2: ResourceManager + DataNode + NodeManager; node3: DataNode + NodeManager.
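If it helps, a sketch of how such a layout is usually wired up (hostnames here are hypothetical): node1, node2 and node3 all go into the workers file (etc/hadoop/workers, or etc/hadoop/slaves on Hadoop 2) so the start scripts launch a DataNode and NodeManager on each, while two well-known properties tell every node where the NameNode and ResourceManager live:

    <!-- core-site.xml: where clients and workers find the NameNode -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://node1:8020</value>
    </property>

    <!-- yarn-site.xml: where NodeManagers find the ResourceManager -->
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>node2</value>
    </property>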

Related

Memory for Namenode(s) in Hadoop

Environment: The production cluster has 2 NameNodes (active and standby) and these nodes use SAS drives in a RAID-1 configuration. These nodes run nothing but the master services (NN and Standby NN). They have 256GB of RAM while the data nodes (where most of the processing happens) have only 128GB.
My question: Why do Hadoop's master nodes have such high RAM, and why not the DataNodes, when most of the processing is done where the data is available?
P.S. As per the Hadoop rule of thumb, we only require 1GB for every 1 million files.
The Namenode stores all file references from all the datanodes in memory.
The DataNode process doesn't need much memory; it's the YARN NodeManagers that do.
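As a rough illustration of that rule of thumb: at roughly 1GB of NameNode heap per million file/block objects, a 256GB heap is what allows a single NameNode to track a couple of hundred million files and blocks (e.g. 200 million objects ≈ 200GB of heap), whereas on the workers the memory is sized for YARN containers rather than for the DataNode process itself.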

HDP cluster with RAID?

What is your experience with RAID1 on HDP cluster?
I have in my mind two options:
Set up RAID 1 for master and ZooKeeper nodes, and don't use RAID at all on slave nodes like Kafka brokers, HBase RegionServers and YARN NodeManagers.
Even if I lose one slave node, I will have two other replicas.
In my opinion, RAID will only slow down my cluster.
The alternative: despite all of the above, set up everything using RAID 1.
What do you think about it? What is your experience with HDP and RAID?
What do you think about using RAID 0 for slave nodes?
I'd recommend no RAID at all on Hadoop hosts. There is one caveat: if you are running services like Oozie and the Hive metastore, which use a relational DB behind the scenes, RAID may well make sense on the DB host.
On a master node, assuming you have the NameNode, ZooKeeper, etc., the redundancy is generally built into the service itself. For NameNodes, all the data is stored on both NameNodes. For ZooKeeper, if you lose one node, the other two nodes have all the information.
Zookeeper likes fast disks - ideally dedicate a full disk to zookeeper. If you have namenode HA, give the namenode edits directory and each journal node a dedicated disk too.
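A minimal sketch of what "a dedicated disk each" can look like in hdfs-site.xml, assuming hypothetical mount points /disk2 and /disk3 (ZooKeeper's dataDir in zoo.cfg would point at its own disk in the same spirit):

    <property>
      <name>dfs.namenode.name.dir</name>
      <!-- fsimage and edits on a dedicated disk on the NameNode hosts -->
      <value>/disk2/dfs/nn</value>
    </property>
    <property>
      <name>dfs.journalnode.edits.dir</name>
      <!-- shared edits on a dedicated disk on each JournalNode host -->
      <value>/disk3/dfs/jn</value>
    </property>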
For the slave nodes, the datanode will write across all disks, effectively striping the data anyway. Each 'write' is at most the HDFS block size, so if you were writing a large file, you could get 128MB on disk 1, then the next 128MB on disk 2 etc.

Can a slave node have multiple blocks of the same file in hadoop?

Say I have a Hadoop cluster where one node is the master node and the other is a data node. The slave node is an 8-core machine, just to make sure there are enough cores to process jobs in parallel. Can I still split the file into, say, 3 blocks and have the slave node store all three blocks separately on it? In other words, if we want to utilize all the slave nodes in a Hadoop cluster, is there a 1:1 relation between the number of slave nodes and the maximum number of blocks of a file? If yes, then how would MapReduce work in such a case? Will the master node fire three map jobs at the slave node and have each mapper pick up one block on the slave node?
My question can be seen in a different way. If we have a 1GB file on a cluster with 3 data nodes then how do the 64 MB blocks get divided and how are they distributed between the three nodes?
The second question seems to be more understandable for me so I will take that first.
From HDFS Perspective:
With a 64MB block size, a 1GB file consists of 16 blocks. Blocks are stored somewhat randomly across DataNodes if you have more of them than the replication factor, but you can expect a roughly even distribution between the nodes, as long as you do not load the data from one of the DNs. If you do, that DN will hold a replica of every block, and the other DNs will hold the remaining replicas distributed more or less evenly (still randomly placed). So yes, if you have a file consisting of 16 blocks and only 3 DNs with a replication factor of 3, all 3 DNs will hold all 16 blocks.
From YARN's perspective when you run the MapReduce job:
YARN tries to find a container on a node for a mapper that has the data locally, there is a configurable wait time for a free container on such nodes before YARN starts up the mapper on a node that does not have the data.
YARN does not rely on physical cores directly; you can configure the number of virtual cores and the amount of memory a container uses, and based on these values YARN determines how many containers it can allocate on a NodeManager.
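A minimal sketch of those knobs in yarn-site.xml (the values are only examples for a 4-core, 8GB worker):

    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <!-- memory YARN may hand out to containers on this node -->
      <value>6144</value>
    </property>
    <property>
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <!-- virtual cores YARN may hand out to containers on this node -->
      <value>4</value>
    </property>

The per-container requests for MapReduce itself come from properties such as mapreduce.map.memory.mb; with 1536MB per mapper, for instance, YARN could fit 4 such containers on that node.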
Further reading on YARN tuning on Cloudera Engineering blog
However:
From the first part of the question, as I understand it, you want to achieve parallelism by defining the block size to split your data files.
MapReduce does not care about HDFS blocks directly; it has its own abstraction for splitting the input, called the InputSplit. InputSplits are fed to the mappers by the InputFormat. InputSplits also define where the split is available locally, so that YARN can find a container on a node that has the split on local storage. I suggest checking the API and the available implementations of InputFormat, as they most likely suit your needs; if they don't, you can still write your own implementation and specify it via the job configuration.
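If the goal is simply to get more (or fewer) mappers without touching the HDFS block size, a hedged sketch for FileInputFormat-based jobs is to bound the split size in the job configuration (the values are only examples):

    <property>
      <name>mapreduce.input.fileinputformat.split.minsize</name>
      <!-- 128MB lower bound per split -->
      <value>134217728</value>
    </property>
    <property>
      <name>mapreduce.input.fileinputformat.split.maxsize</name>
      <!-- 256MB upper bound per split -->
      <value>268435456</value>
    </property>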

Hadoop - Difference between a single large node and multiple small nodes

I am new to Hadoop. I am wondering whether there is any difference between a single node and multiple nodes, if both have the same total computing power.
For example, there is one server with a 4-core CPU and 32GB RAM to set up a single-node Hadoop cluster. On the other hand, there are four servers with a one-core CPU each (same clock rate as the "big" server) and 8GB RAM to set up a 4-node Hadoop cluster. Which setup would be better?
It is generally better to have a larger number of data nodes than one powerful node. Basically, we need to configure the NameNode with a high-end configuration, while the data nodes can have lower specifications.
So, if you have a single node, all the daemons will be running on the same machine and competing for the same resources, whereas if you have multiple nodes the daemons can run in parallel on separate machines, each in its own JVM.

Hadoop Cluster Failover

I have some questions about the Hadoop Cluster datanode failover:
1: What happens if the link goes down between the NameNode and a DataNode
(or between 2 DataNodes) while the Hadoop cluster is processing some data?
Does the Hadoop cluster have anything out of the box (OOTB) to recover from this problem?
2: What happens if one DataNode goes down while the Hadoop cluster is processing
some data?
Also, another question is about the Hadoop cluster hardware configuration. Let's say we will use our Hadoop cluster to process 100GB of log files each day; how many DataNodes do we need to set up? And what hardware configuration (e.g. CPU, RAM, hard disk) should each DataNode have?
1: What happens if the link goes down between the NameNode and a DataNode
(or between 2 DataNodes) while the Hadoop cluster is processing some data?
Does the Hadoop cluster have anything out of the box (OOTB) to recover from this problem?
The NN will not receive any heartbeat from that node and will hence consider it dead. In that case, the tasks running on that node will be rescheduled on some other node that has the data.
2: What happens if one DataNode goes down while the Hadoop cluster is processing
some data?
Same as above.
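For reference, the "considered dead" decision above is driven by two hdfs-site.xml properties; with their usual defaults the NameNode marks a DataNode dead after 2 * 5min + 10 * 3s ≈ 10.5 minutes of missed heartbeats:

    <property>
      <name>dfs.heartbeat.interval</name>
      <!-- seconds between DataNode heartbeats -->
      <value>3</value>
    </property>
    <property>
      <name>dfs.namenode.heartbeat.recheck-interval</name>
      <!-- milliseconds; dead-node timeout = 2 * recheck-interval + 10 * heartbeat interval -->
      <value>300000</value>
    </property>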
For the second part of your question :
It totally depends on your data, the kind of processing you are going to perform and a few other things. 100GB is not a suitable candidate for MR processing in the first place. But if you still need it, any decent machine would be sufficient to process 100GB of data.
As a thumb rule you can consider :
RAM: 1GB of RAM for each 1 million HDFS blocks, plus some additional for other things.
CPU: based totally on your needs.
Disk: 3 times your data size (if replication factor = 3), plus some additional space for things like temporary files, other apps, etc. JBOD is preferable.
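As a rough worked example of those rules for the 100GB/day case (assuming, purely hypothetically, that you retain about a year of logs): 365 * 100GB ≈ 36.5TB of raw data, times 3 for replication ≈ 110TB across the cluster, plus headroom for temporary and intermediate files. If you only ever keep a single day's 100GB, then 3 * 100GB = 300GB plus headroom is all HDFS itself needs.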
Frankly speaking, the process is a lot more involved. I would strongly suggest you go through this link in order to get a proper idea.
I would start with a cluster with 5 machines:
1 * Master (NN + JT):
Disk: 3 * 1TB hard disks in a JBOD configuration (1 for the OS, 2 for the FS image)
CPU: 2 quad-core CPUs, running at least 2-2.5GHz
RAM: 32GB of RAM
3 * Slaves (DN + TT):
Disk: 3 * 2TB hard disks in a JBOD (Just a Bunch Of Disks) configuration
CPU: 2 quad-core CPUs, running at least 2-2.5GHz
RAM: 16GB of RAM
1 * SNN:
I would keep it the same as the master machine.
Depending on whether the NameNode or a DataNode is down, the job will be rerouted to different machines. HDFS was specifically designed for this. Yes, it's definitely out of the box.
If there are more DataNodes available, the job is transferred to them.
100GB is not large enough to justify using Hadoop. Don't use Hadoop unless you absolutely need to.
