Does HDFS need 3 times the data space? - hadoop

I was attending a course on Hadoop and MapReduce on Udacity.com and the instructor mentioned that in HDFS, to reduce points of failure, each block is replicated 3 times across the cluster. Is that really true? Does it mean that if I have 1 petabyte of logs I will need 3 petabytes of storage? Because that will cost me more.

Yes, it is true: HDFS requires space for each redundant copy, and it needs those copies to achieve fault tolerance and data locality during processing.
But this is not necessarily true of MapReduce, which can also run on other file systems such as S3 or Azure Blob Storage. It is HDFS that requires the 3 copies.

By default, the HDFS configuration parameter dfs.replication is set to 3. That provides fault tolerance, availability, etc. (all HDFS parameters are listed here).
But at install time you can set the parameter to 1, and then HDFS does not make replicas of your data. With dfs.replication=1, 1 petabyte of data is stored in roughly 1 petabyte of space.
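For illustration, this is what the entry could look like in hdfs-site.xml (a minimal sketch; adapt it to your own configuration files and redeploy the config as appropriate for your cluster):
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>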

Yes, that's true. Say you have 4 machines with DataNodes running on them; by default each block will also be replicated onto two other machines chosen more or less at random. If you don't want that, you can switch it to 1 by setting the dfs.replication property in hdfs-site.xml.

This is because HDFS replicates data when you store it. The default replication factor for HDFS is 3, which you can find in the hdfs-site.xml file under the dfs.replication property. You can set this value to 1, 5, or whatever suits your requirement.
Data replication is very useful: if a particular node goes down, you still have a copy of the data available on other nodes for processing.

Related

Is there any way to set the replication factor for a data block individually? I am using a single-node Hadoop cluster on Windows 10.

Let's say I have 1 large file. HDFS splits that file into 5 blocks. I want to change the replication factor for block 1. How do I do that?
You cannot replicate a file more times than the number of running DataNodes, and blocks cannot be replicated individually; the replication factor applies to the whole file.
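What you can do is change the replication factor of the whole file, for example (the path is just a placeholder; on a single-node cluster anything above 1 will simply stay under-replicated):
hadoop fs -setrep -w 1 /path/to/large.file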

Data Storage in Hadoop cluster

This is a question from a Hadoop book; the answer I thought was 200, but that is not correct. Can anyone explain?
Assume that there are 50 nodes in your Hadoop cluster with a total of 200 TB (4 TB per node) of raw disk space allocated to HDFS storage. Assuming Hadoop's default configuration, how much data will you be able to store?
HDFS has the default replication level set to 3, therefore each piece of your data will have 3 copies in HDFS unless a different factor is specified explicitly at creation time.
Therefore, under the default HDFS configuration, you can only store 200/3 TB of actual data.
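A quick back-of-the-envelope check (ignoring any non-HDFS overhead on the disks):
200 TB raw / 3 replicas ≈ 66.7 TB of usable data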

Hadoop replication factor confusion

We have 3 settings for hadoop replication namely:
dfs.replication.max = 10
dfs.replication.min = 1
dfs.replication = 2
So dfs.replication is the default replication factor for files in the Hadoop cluster unless a client sets it manually using "setrep",
and a Hadoop client can raise a file's replication up to dfs.replication.max.
dfs.replication.min is used in two cases:
During safe mode, it checks whether the replication of blocks is at least dfs.replication.min or not.
The first dfs.replication.min replicas are written synchronously, and the remaining (dfs.replication - dfs.replication.min) replicas are written asynchronously.
So do we have to set these configuration values on each node (NameNode + DataNodes), or only on the client node?
And what if the values of the above three settings vary across different DataNodes?
The replication factor can't be set for any specific node in the cluster; you can set it for the entire cluster, a directory, or a file. dfs.replication can be updated on a running cluster in hdfs-site.xml.
Set the replication factor for a file: hadoop fs -setrep -w <rep-number> file-path
Or set it recursively for a directory or for the entire cluster: hadoop fs -setrep -R -w 1 /
Use of the min and max replication factors:
While writing data to DataNodes it is possible that some DataNodes fail. If at least dfs.namenode.replication.min replicas are written, the write operation succeeds. After the write, the blocks are replicated asynchronously until they reach the dfs.replication level.
The max replication factor dfs.replication.max sets the upper limit on block replication. A user can't set a file's replication higher than this limit when creating it.
You can set a high replication factor for the blocks of a popular file to distribute the read load across the cluster.
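If you want these limits pinned down in configuration rather than left at the defaults, the three values from the question can all go into hdfs-site.xml; a sketch (property names as in Hadoop 2.x, where the minimum is called dfs.namenode.replication.min; older releases used dfs.replication.min):
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.namenode.replication.min</name>
  <value>1</value>
</property>
<property>
  <name>dfs.replication.max</name>
  <value>10</value>
</property>
Note that dfs.replication is effectively a client-side default (it is read when a file is created), while the min and max limits are enforced by the NameNode, so keeping the same hdfs-site.xml on all nodes avoids surprises.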

When I store files in HDFS, will they be replicated?

I am new to Hadoop.
When I store Excel files using the hadoop fs -put command, they are stored in HDFS.
The replication factor is 3.
My question is: does it make 3 copies and store each of them on a different node?
Here is a comic that explains how HDFS works:
https://docs.google.com/file/d/0B-zw6KHOtbT4MmRkZWJjYzEtYjI3Ni00NTFjLWE0OGItYTU5OGMxYjc0N2M1/edit?pli=1
Does it take 3 copies and store them on 3 different nodes?
The answer is: no, not as three separate copies pushed by the client.
Replication is done via pipelining,
that is, the client copies part of the file to datanode1, then datanode1 forwards it to datanode2, and datanode2 forwards it to datanode3.
See here for Replication Pipelining: http://hadoop.apache.org/docs/r1.2.1/hdfs_design.html#Replication+Pipelining
Your HDFS client (hadoop fs in this case) is given, by the NameNode, the block names and DataNode locations where the files should be stored (the first being the closest location if the NameNode can determine this from the rack-awareness script).
The client then copies the blocks to the closest DataNode. That DataNode is then responsible for copying the block to a second DataNode (preferably on another rack), and finally the second copies it to a third (preferably on the same rack as the second).
So your client will only copy data to one of the data nodes, and the framework will take care of the replication between datanodes.
It will store the original file as one block (or more in the case of large files). These blocks will then be replicated to two other nodes.
Edit: My answer applies to Hadoop 2.2.0. I have no experience with prior versions.
Yes, it will be replicated on 3 nodes (at most 3 nodes).
The Hadoop client is going to break the data file into smaller "blocks" and place those blocks on different machines throughout the cluster. The more blocks you have, the more machines will be able to work on this data in parallel. At the same time, these machines may be prone to failure, so it is safer to ensure that every block of data is on multiple machines at once to avoid data loss.
So each block will be replicated in the cluster as it is loaded. The standard setting for Hadoop is to keep three copies of each block in the cluster. This can be configured with the dfs.replication parameter in the file hdfs-site.xml.
Replicating data is not a drawback of Hadoop at all; in fact, it is an integral part of what makes Hadoop effective. Not only does it give you a good degree of fault tolerance, it also helps run your map tasks close to the data to avoid putting extra load on the network (read about data locality).
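If you want to confirm which factor an uploaded file actually ended up with, the listing output shows it in the replication column, and you can also query it directly (the path is only a placeholder):
hadoop fs -ls /path/to/your/file
hadoop fs -stat %r /path/to/your/file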
Yes, it makes n (replication factor) copies in HDFS.
Use this command to find out the location of a file: which racks it is stored on and what the block names are on each rack:
hadoop fsck /path/to/your/directory -files -blocks -locations -racks
Use this command to load data into HDFS with a chosen replication factor:
hadoop fs -Ddfs.replication=1 -put big.file /tmp/test1.file
With -Ddfs.replication=1 you define how many replicas will be created while loading data into HDFS.

How does HDFS replicate when there are less nodes than the replication factor?

For instance, if a Hadoop cluster consists of 2 DataNodes and the HDFS replication factor is set to the default of 3, what is the default behavior for how files are replicated?
From what I've read, it seems that HDFS bases it on rack awareness, but for cases like this, does anyone know how it is determined?
It will consider the blocks under-replicated, it will keep complaining about that, and it will keep trying to bring them up to the expected replication factor.
The HDFS system has a parameter (the replication factor, 3 by default) which tells the NameNode how many times each block should be replicated (in the default case, each block should exist 3 times across the cluster, according to the given replica placement strategy). Until the system manages to replicate each block as many times as specified by the replication factor, it will keep trying to do that.
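If you want to see how far the cluster is from that target, the filesystem checker reports under-replicated blocks in its summary, so on a 2-DataNode cluster with a replication factor of 3 every block should show up there:
hdfs fsck /
(On older releases the equivalent is hadoop fsck /.)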
