I am trying to estimate the required resources based on the data size.
Is there a rule of thumb for deciding the number of data nodes needed for a given data size?
No, not really. In a typical Hadoop cluster there is one DataNode per node.
Sorry for the short answer, but that's it! :)
Just keep in mind that Hadoop prefers to deal with a small number of huge files.
Keep in mind that data is (by default) replicated 3 times (the original copy + 2 more). That is, if you have 15TB of data, you will need at least 45TB of disk space to accommodate the replicas.
Replicas cannot be placed on the same node, so you would need at least 3 DataNodes with 15TB of storage each, assuming the default configuration.
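For reference, the default replication factor is controlled by dfs.replication; a minimal hdfs-site.xml snippet (hadoop-site.xml on older releases) with the default value looks like this:

    <property>
      <name>dfs.replication</name>
      <value>3</value>
      <description>Default number of replicas per block (original + 2 copies).</description>
    </property>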
Related
I found a similar question:
Hadoop HDFS is not distributing blocks of data evenly
but my question is about the case when the replication factor = 1.
I still want to understand why HDFS does not evenly distribute a file's blocks across the cluster nodes. This results in data skew from the start when I load such files and run dataframe operations on them. Am I missing something?
Even if the replication factor is one, files are still split into blocks of the HDFS block size and each block is stored separately. Block placement is best effort, AFAIK, not perfectly balanced; with replication 3 the default policy writes the first replica on the local (or a random) node, the second on a node in a different rack, and the third on a different node in that same remote rack.
You'll need to clarify how large your files are and where you are looking to see if data is being split
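If it helps, you can see exactly how a given file was split and which DataNodes hold each block with fsck (the path below is just a placeholder for your own file):

    # list every block of the file and the DataNodes holding it
    hdfs fsck /user/me/data/part-00000 -files -blocks -locations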
Note: not all file formats are splittable
If I have 6 data nodes, is it faster to turn replication up to 6 so all the data is replicated across all my nodes, and the cluster can split up queries (say, in Hive) without having to move data around? I believe that if you have replication of 3 and you put a 300GB file into HDFS, it splits it across only 3 of the data nodes, and then when all 6 nodes are needed for a query, data has to be moved to the other 3 nodes that don't have it, causing slower responses. Is that accurate?
I understand what you mean; you are talking about data locality. Generally speaking, data locality can reduce run time because it saves the time spent transferring blocks over the network. But in fact, if you don't enable "HDFS Short-Circuit Local Reads" (it is off by default; please visit here), the map task will still read the block via the TCP protocol, that is, over the network, even if the block and the map task are on the same node.
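For reference, on Hadoop 2.x short-circuit reads are enabled in hdfs-site.xml roughly like this (the socket path is only an example, and the native libhadoop library must be available on the DataNodes):

    <!-- on both the DataNodes and the clients -->
    <property>
      <name>dfs.client.read.shortcircuit</name>
      <value>true</value>
    </property>
    <property>
      <name>dfs.domain.socket.path</name>
      <value>/var/lib/hadoop-hdfs/dn_socket</value>
    </property>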
Recently I was optimizing Hadoop and HDFS: we replaced the HDD disks with SSDs, but we found that the effect was small and the run time was not shorter, because the disk was not the bottleneck and the network load was not heavy. From those results we concluded that the CPU load was very heavy. If you want to understand your Hadoop cluster's situation clearly, I advise you to use Ganglia to monitor the cluster; it can help you analyze your cluster's bottleneck. Please see here.
Finally, Hadoop is a very large and complicated system; disk performance, CPU performance, network bandwidth, parameter values, and many other factors all have to be considered. If you want to save time, there is a lot of work to do beyond just the replication factor.
I have a 2-node Hadoop setup (one node is both master and slave, the other is a slave) and 4 input files, each 1GB in size.
When I set dfs.replication to 2, the entire data set is copied to both nodes, which is understandable. But my question is: how do I get improved performance (almost twice as fast) over a single-node setup, since in the 2-node case MapReduce will still run over the complete data set on both systems, along with the added overhead of channeling the inputs from 2 mappers to the reducers?
Also, when I set the replication to 1, the entire data set exists only on the master node, which is also understandable, to avoid Ethernet overhead. But even in this case I see a performance improvement compared to the single-node setup, which I find confusing: since MapReduce runs on local data sets, this scenario should essentially be the same as a single-node setup with one MapReduce program running on the master node over the entire data set?
Can someone help me understand what I am missing here?
Thanks
Pawan
Pawan,
In the two-node case the MapReduce job will not run on the entire dataset on each node. MapReduce operates on HDFS blocks, which will be 64 MB or more depending on your configuration. Each 1 GB file is split into blocks and distributed across the cluster nodes; some of these blocks are processed on node 1 and the others on node 2, with no duplication. The replication factor only increases the availability of the data and the tolerance to node failures; it does not duplicate the tasks.
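For example, with the default 64 MB block size each 1 GB file yields 16 blocks, so the 4 GB of input becomes roughly 64 map tasks (assuming one split per block) that the scheduler spreads across both nodes.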
As a result, from the processing perspective the data is split between node 1 and node 2 and processed in parallel. This means that if you are utilizing your processing power fully and correctly, you are theoretically doubling your speed.
Cheers
Rags
I am facing a strange problem due to Hadoop's crazy data distribution and management. One or two of my data nodes are completely filled up due to non-DFS usage, whereas the others are almost empty. Is there a way I can make the non-DFS usage more uniform?
[I have already tried using dfs.datanode.du.reserved but that doesn't help either]
An example of the problem: I have 16 data nodes with 10 GB of space each. Initially each node has approx. 7 GB of free space. When I start a job that processes 5 GB of data (with replication factor = 1), I expect the job to complete successfully. But alas, when I monitor the job execution I suddenly see one node run out of space because its non-DFS usage is approx. 6-7 GB; the task then retries and another node runs out of space. I don't really want to raise the retry count, because that won't give the performance I am looking for.
Any idea how I can fix this issue?
It sounds like your input isn't being split up properly. You may want to choose a different InputFormat or write your own to better fit your data set. Also make sure that all your nodes are listed in your NameNode's slaves file.
Another possible problem is serious data skew - the case when a big part of the data goes to one reducer. You may need to create your own partitioner to solve it.
You cannot restrict non-DFS usage, as far as I know. I would suggest identifying exactly which input file (or which of its splits) causes the problem. Then you will probably be able to find a solution.
Hadoop MR is built under the assumption that processing a single split can be done using a single node's resources, such as RAM and disk space.
If I copy data from the local system to HDFS, can I be sure that it is distributed evenly across the nodes?
PS: HDFS guarantees that each block will be stored on 3 different nodes. But does this mean that all blocks of my file will be stored on the same 3 nodes? Or will HDFS select nodes at random for each new block?
If your replication is set to 3, each block will be put on 3 separate nodes. The number of nodes a block is placed on is controlled by your replication factor. If you want greater distribution, you can increase the replication factor by editing $HADOOP_HOME/conf/hadoop-site.xml and changing the dfs.replication value.
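Note that dfs.replication is only the default applied to newly written files; for data already in HDFS you would raise the replication explicitly, for example (the target value and path here are just placeholders):

    # set replication to 5 for the given path and wait until the extra copies exist
    hadoop fs -setrep -w 5 /path/in/hdfs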
I believe new blocks are placed almost randomly. There is some consideration for distribution across different racks (when Hadoop is made aware of racks). There is an example (can't find the link) that if you have replication set to 3 and 2 racks, two replicas will be placed in one rack and the third replica in the other rack. I would guess that no preference is shown for which node within a rack gets a block.
I haven't seen anything indicating or stating a preference to store blocks of the same file on the same nodes.
If you are looking for ways to force balancing of data across nodes (with replication at whatever value), a simple option is $HADOOP_HOME/bin/start-balancer.sh, which will run a balancing process that moves blocks around the cluster automatically.
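For example (the threshold is the allowed deviation, in percent, of each DataNode's utilization from the cluster average; 10 is the default):

    # move blocks until every DataNode is within 5% of the average utilization
    $HADOOP_HOME/bin/start-balancer.sh -threshold 5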
This and a few other balancing options can be found in the Hadoop FAQs.
Hope that helps.
You can open the HDFS Web UI on port 50070 of your NameNode. It will show you information about the data nodes, including the used space per node.
If you do not have the UI available, you can look at the space used in the HDFS directories of the data nodes.
If you have data skew, you can run the rebalancer, which will resolve it gradually.
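If you prefer the command line to the UI, the same per-node numbers are printed by dfsadmin:

    # prints configured capacity, DFS used, non-DFS used and remaining space for each DataNode
    hadoop dfsadmin -report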
Now, with the Hadoop-385 patch, we can choose the block placement policy, so as to place all blocks of a file on the same node (and similarly for the replica nodes). Read this blog about the topic - look at the comments section.
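In case it is useful, the pluggable policy is selected in hdfs-site.xml roughly like this (the class name below is only a placeholder for your own BlockPlacementPolicy implementation, and the exact key may differ between Hadoop versions):

    <property>
      <name>dfs.block.replicator.classname</name>
      <value>com.example.MyBlockPlacementPolicy</value>
    </property>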
Yes, Hadoop distributes data per block, so each block would be distributed separately.