Hadoop MapReduce HDFS block split

My question is this: I have a text file with 100 words in it, separated by spaces, and I need to write a word count program.
So, when the name node splits the file into HDFS blocks, how can we be sure that the split happens only at the end of a word?
I.e., if the 50th word in my text file is "Hadoop", what if, while splitting it into 64MB blocks, the current block reaches 64MB in the middle of that word, so that one block contains 'Had' and some other block contains 'oop'?
Sorry if the question sounds silly, but please provide an answer. Thanks.

The answer to this is the InputSplit.
HDFS does not know anything about the content of the file. When data is stored across multiple blocks, the last record of a block may be broken: the first part of the record can sit in one block and the remaining part in some other block.
To solve this problem, MapReduce uses the concept of input splits.
A 'block' is the physical division of data, 128MB in size by default, distributed across multiple DataNodes, whereas an 'input split' is a logical division of the data.
When a MapReduce program runs, the number of mappers depends on the number of input splits, and each input split includes the location of the next block that holds the remainder of a broken record.
For example, if there are three HDFS blocks and the last record of Block 1 spills over into Block 2, the input split for Block 1 records the location of Block 2 so that the broken record can be retrieved in full.
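As a quick way to see that a split is just a logical byte range, you can inspect it from inside a mapper. This is only a minimal sketch (the class name is made up, and it assumes a file-based input format such as TextInputFormat so the split can be cast to FileSplit):

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Sketch: log which byte range of which file this mapper was given.
// The range is logical -- the last record may physically continue in the next block.
public class SplitInspectingMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        FileSplit split = (FileSplit) context.getInputSplit();
        System.out.printf("file=%s start=%d length=%d%n",
                split.getPath(), split.getStart(), split.getLength());
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // key is the byte offset of the line, value is the complete line,
        // even if that line physically straddles two HDFS blocks.
        context.write(value, new LongWritable(1));
    }
}
```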

Related

Hadoop HDFS: Read/Write parallelism?

Couldn't find enough information on the internet, so asking here:
Assume I'm writing a huge file to disk, hundreds of terabytes, which is the result of MapReduce (or Spark or whatever). How would MapReduce write such a file to HDFS efficiently (potentially in parallel) so that it could later be read in parallel as well?
My understanding is that HDFS is simply block based (e.g. 128MB), so in order to write the second block you must have written the first block (or at least determined what content goes into block 1). Let's say it's a CSV file; it is quite possible that a line in the file will span two blocks. How could we read such a CSV into different mappers in MapReduce? Does it have to do some smart logic to read two blocks, concatenate them, and extract the proper line?
Hadoop uses RecordReaders and InputFormats as the two interfaces which read and interpret the bytes within blocks.
By default in Hadoop MapReduce, with TextInputFormat each record ends on a newline, and in the scenario where a line crosses the end of a block, the beginning of the next block must be read as well, even if it's literally just the \r\n characters.
Writing data is done by the reduce tasks (or Spark executors, etc.): each task is responsible for writing only a subset of the entire output. You'll generally never get a single file for non-small jobs, and this isn't an issue, because the input arguments to most Hadoop processing engines are meant to scan directories, not point at single files.
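A minimal driver sketch of that pattern, with hypothetical paths: the input is a directory, and the output is a directory that will contain one part-r-NNNNN file per reduce task, so ten reducers write ten files in parallel rather than one giant file.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

// Sketch: a pass-through job (identity mapper/reducer) whose output is a
// directory of part files, one per reduce task, written concurrently.
public class ParallelOutputDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "parallel-output-sketch");
        job.setJarByClass(ParallelOutputDriver.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        // Default (identity) mapper and reducer: output records are (byte offset, line).
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        // 10 reduce tasks -> part-r-00000 .. part-r-00009, written in parallel.
        job.setNumReduceTasks(10);

        // Input and output are directories, not single files (hypothetical paths).
        FileInputFormat.addInputPath(job, new Path("/data/input"));
        FileOutputFormat.setOutputPath(job, new Path("/data/output"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```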

relation between splits and blocks in hadoop mapreduce

I'm trying to understand how mapreduce's InputFormat class is supposed to be working with respect to splits and blocks.
If I understand correctly:
an HDFS file is split into blocks. Each block is guaranteed to be on a specific machine, but two blocks of the same file can be on two different machines.
a split roughly defines an HDFS file plus a starting byte offset and an ending byte offset (or can it be a starting line number and an ending line number?)
each mapper receives a split and a RecordReader instance created for that specific split. Defining N splits means that N mappers and N RecordReaders are created, one for each split
however, a split size doesn't need to be divisible by the block size, or the other way around
Is that correct?
However, is the RecordReader instance mandated by the API to work strictly on the data inside its split, or is it allowed to read data outside its split bounds? Can it always read any part of the file, even if it has to go beyond the current block (so potentially the rest of the file is on another machine)?
In essence, are the splits only a "hint" for the RecordReader?
Because if this is not the case and the splits are strict, it seems to me impossible to process a simple file where each record has a non-fixed size.

Does hadoop create InputSplits parallely

I have a large text file, around 13 GB in size. I want to process the file using Hadoop. I know that Hadoop uses FileInputFormat to create InputSplits which are assigned to mapper tasks. I want to know whether Hadoop creates these InputSplits sequentially or in parallel. I mean, does it read the large text file sequentially on a single host and create split files which are then distributed to DataNodes, or does it read chunks of, say, 50 MB in parallel?
Does Hadoop replicate the big file on multiple hosts before splitting it up?
Is it recommended that I split up the file into 50 MB chunks to speed up the processing? There are many questions on the appropriate split size for mapper tasks, but not on the exact split process itself.
Thanks
InputSplits are created on the client side, and a split is just a logical representation of the file, in the sense that it only contains the file path and the start and end offset values (LineRecordReader's initialize function later adjusts the start to a record boundary). Calculating this logical representation does not take much time, so there is no need to split your file into chunks yourself; the real execution happens on the mapper side, where the work is done in parallel. The client then places the input split information into HDFS, the JobTracker takes it from there and, based on the splits, allocates TaskTrackers. One mapper's execution is not dependent on another's: each mapper knows exactly where it has to start processing its split, so the mapper executions run in parallel.
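As a rough sketch of what that client-side computation produces, you can call getSplits() yourself and print each logical range; the input path here is hypothetical:

```java
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Sketch: compute the logical splits for a file on the client, the same way the
// job submitter does, and print each split's byte range and preferred hosts.
public class PrintSplits {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration());
        FileInputFormat.addInputPath(job, new Path("/data/big.txt")); // hypothetical path

        List<InputSplit> splits = new TextInputFormat().getSplits(job);
        for (InputSplit split : splits) {
            FileSplit fileSplit = (FileSplit) split;
            System.out.printf("%s  start=%d  length=%d  hosts=%s%n",
                    fileSplit.getPath(), fileSplit.getStart(), fileSplit.getLength(),
                    String.join(",", fileSplit.getLocations()));
        }
    }
}
```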
I suppose you want to process the file using MapReduce, not just Hadoop; Hadoop is a platform which provides tools to process and store large amounts of data.
When you store the file in HDFS (the Hadoop file system), it splits the file into multiple blocks. The block size is defined in the hdfs-site.xml file as dfs.block.size (dfs.blocksize in newer releases). For example, if the block size is set to 128MB, your input file will be split into 128MB blocks. This is how HDFS stores the data internally; to the user it always appears as a single file.
When you provide the input file (stored in HDFS) to MapReduce, it launches a mapper task for each block/split of the file. This is the default behavior.
You do not need to split the file into chunks yourself; just store the file in HDFS and it will do the rest for you.
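To see the physical side of this, you can ask the NameNode where the blocks of a stored file actually live; a small sketch with a made-up path:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: print the physical HDFS blocks of a file and the DataNodes holding them.
public class PrintBlockLocations {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        Path file = new Path("/data/big.txt"); // hypothetical path
        FileStatus status = fs.getFileStatus(file);

        for (BlockLocation block : fs.getFileBlockLocations(status, 0, status.getLen())) {
            System.out.printf("offset=%d length=%d hosts=%s%n",
                    block.getOffset(), block.getLength(),
                    String.join(",", block.getHosts()));
        }
    }
}
```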
First let us understand what is meant by input split.
When your text file is divided into blocks of 128 MB (the default) by HDFS, assume that the 10th line of the file is cut in two: the first half of the line is in the first block and the other half is in the second block. But when you submit a map program, Hadoop understands that the last line of the 1st block (which becomes the input split here) is not complete, so it carries the second half of the 10th line into the first input split. Which implies:
1) 1st input split = 1st block + 2nd part of the 10th line from the 2nd block
2) 2nd input split = 2nd block - 2nd part of the 10th line from the 2nd block
This boundary adjustment is an inbuilt behaviour of Hadoop's input formats; you do not have to do anything to get it. The block size in Hadoop v2 is 128 MB by default and can be changed in the configuration, and the split size itself can be tuned independently via the min/max split-size settings (see the other answers).

Split size vs Block size in Hadoop

What is the relationship between split size and block size in Hadoop? As I have read, the split size must be n times the block size (n is an integer and n > 0). Is this correct? Is there any required relationship between split size and block size?
In the HDFS architecture there is a concept of blocks. A typical block size used by HDFS is 64 MB. When we place a large file into HDFS it is chopped up into 64 MB chunks (based on the default block configuration). Suppose you have a file of 1 GB and you want to place it in HDFS: there will be 1 GB / 64 MB = 16 blocks, and these blocks will be distributed across the DataNodes. Which DataNode a given block/chunk resides on depends on your cluster configuration.
Data splitting happens based on file offsets. The goal of splitting the file and storing it in different blocks is parallel processing and failover of the data.
Difference between block size and split size:
A split is a logical division of the data, basically used during data processing with MapReduce or other processing techniques in the Hadoop ecosystem. The split size is a user-defined value, and you can choose your own split size based on the volume of data you are processing.
The split is basically used to control the number of mappers in a MapReduce program. If you have not defined an input split size in your MapReduce program, the default HDFS block size is taken as the input split size.
Example:
Suppose you have a file of 100MB and the HDFS default block size is 64MB; the file will then be chopped into 2 splits and occupy 2 blocks. Now you have a MapReduce program to process this data but have not specified an input split size: the number of blocks (2) is used as the number of input splits, and 2 mappers are assigned to the job.
But suppose you specify a split size of, say, 100MB in your MapReduce program: then both blocks (2 blocks) are treated as a single split for the MapReduce processing, and 1 mapper is assigned to the job.
If instead you specify a split size of, say, 25MB, there will be 4 input splits for the MapReduce program, and 4 mappers are assigned to the job.
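A hedged sketch of how the scenarios above could be set up with the new MapReduce API (the values are the ones from the example, not recommendations):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

// Sketch: influence the split size (and therefore the number of mappers)
// for a file-based input format.
public class SplitSizeConfig {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "split-size-sketch");
        FileInputFormat.addInputPath(job, new Path("/data/100mb-file.txt")); // hypothetical path

        // "1 mapper" scenario: force splits of at least 100MB, so the two
        // 64MB blocks of the 100MB file end up in a single split.
        FileInputFormat.setMinInputSplitSize(job, 100L * 1024 * 1024);

        // "4 mappers" scenario: cap splits at 25MB instead.
        // FileInputFormat.setMaxInputSplitSize(job, 25L * 1024 * 1024);

        // ... set mapper/reducer/output classes and submit as usual ...
    }
}
```

Under the hood these calls set mapreduce.input.fileinputformat.split.minsize and mapreduce.input.fileinputformat.split.maxsize, which feed into the split-size formula shown further down.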
Conclusion:
A split is a logical division of the input data, while a block is a physical division of the data.
The HDFS default block size is used as the default split size if no input split size is specified.
The split size is user defined, and the user can control it in the MapReduce program.
One split can map to multiple blocks, and one block can be covered by multiple splits.
The number of map tasks (mappers) is equal to the number of splits.
Assume we have a 400MB file that consists of 4 records (e.g. a 400MB CSV file with 4 rows of 100MB each).
If the HDFS block size is configured as 128MB, then the 4 records will not be distributed evenly among the blocks. It will look like this:
Block 1 contains the entire first record and a 28MB chunk of the second record.
If a mapper were run directly on Block 1, it could not process the second record, because it would not have the record in full.
This is the exact problem that input splits solve: input splits respect logical record boundaries.
Let's assume the input split size is 200MB.
Then input split 1 contains both record 1 and record 2, and input split 2 does not start with record 2, since record 2 has already been assigned to input split 1; input split 2 starts with record 3.
This is why an input split is only a logical chunk of data. It points to start and end locations within blocks.
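For the 400MB / 4-record layout above, here is a tiny arithmetic sketch (plain Java, no Hadoop APIs) showing which physical blocks each 100MB record touches:

```java
// Sketch: with 100MB records and 128MB blocks, show which blocks each record spans.
public class RecordToBlockSketch {
    public static void main(String[] args) {
        final long MB = 1024L * 1024;
        final long recordSize = 100 * MB;
        final long blockSize = 128 * MB;

        for (int r = 0; r < 4; r++) {
            long start = r * recordSize;          // first byte of the record
            long end = start + recordSize - 1;    // last byte of the record
            long firstBlock = start / blockSize;  // 0-based index of the block holding the first byte
            long lastBlock = end / blockSize;     // 0-based index of the block holding the last byte
            System.out.printf("record %d: bytes [%d, %d] -> blocks %d..%d%n",
                    r + 1, start, end, firstBlock + 1, lastBlock + 1);
        }
    }
}
```

It prints that record 2 spans blocks 1 and 2, record 3 spans blocks 2 and 3, and record 4 spans blocks 3 and 4, which is exactly why a split has to be able to reach into the neighbouring block.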
If the input split size is n times the block size, an input split can cover multiple blocks, so fewer mappers are needed for the whole job and there is less parallelism. (The number of mappers is the number of input splits.)
Input split size = block size is the ideal configuration.
Hope this helps.
How splits are created depends on the InputFormat being used. For FileInputFormat, the getSplits() method decides the splits for each input file. Note the role played by the split slop factor (1.1): a trailing remainder of up to 10% more than the split size is folded into the last split rather than producing an extra tiny split.
The corresponding Java source that does the split is:
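In current Hadoop releases this lives in org.apache.hadoop.mapreduce.lib.input.FileInputFormat, and the split-size computation is essentially:

```java
// Essentially what FileInputFormat does to decide the split size:
protected long computeSplitSize(long blockSize, long minSize, long maxSize) {
    return Math.max(minSize, Math.min(maxSize, blockSize));
}
```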
The computeSplitSize() method above expands to max(minSize, min(maxSize, blockSize)), where minSize and maxSize can be configured by setting mapreduce.input.fileinputformat.split.minsize and mapreduce.input.fileinputformat.split.maxsize.

fs -put (or copyFromLocal) and data type awareness

If I upload a 117MB text file to HDFS using hadoop fs -put filename, I can see that one DataNode contains a file part of size 64.98MB (the default file split size) and another DataNode contains a file part of size 48.59MB.
My question is whether this split position was calculated in a data-aware way (recognising somehow that the file is text and thus splitting the file at a "\n", for example).
I realise that an InputFormat can be used to tell running jobs how to split the file in an intelligent way, but as I didn't specify the file type in the fs -put command, I was wondering if (and if so, how) an intelligent split would be done in this case.
I think you are mixing up 2 things here; the following 2 types of splitting are completely separate:
Splitting files into HDFS blocks
Splitting files to be distributed to the mappers
And no, the split position wasn't calculated in a data-aware way.
Now, by default, if you are using FileInputFormat, these two types of splitting roughly overlap (and hence are identical).
But you can always have a custom way of splitting for the second point above (or even no splitting at all, i.e. have one complete file go to a single mapper, as sketched below).
Also, you can change the HDFS block size independently of the way your InputFormat splits the input data.
Another important point to note is that, while files are actually broken up physically when stored in HDFS, the split used to distribute work to the mappers involves no physical splitting of files; it is only a logical split.
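A minimal sketch of the 'no splitting at all' option mentioned above: a custom input format (the class name is made up) that refuses to split its files, so each file goes to exactly one mapper:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Sketch: a TextInputFormat variant that never splits a file,
// so every input file is processed by a single mapper.
public class WholeFileTextInputFormat extends TextInputFormat {
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false; // one split per file, regardless of how many HDFS blocks it spans
    }
}
```

You would register it on the job with job.setInputFormatClass(WholeFileTextInputFormat.class); the HDFS block layout of the file is unaffected.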
To take an example:
Suppose we want to load a 110MB text file into HDFS. The HDFS block size and the input split size are both set to 64MB.
The number of mappers is based on the number of input splits, not on the number of HDFS block splits.
When we set the HDFS block size to 64MB, it is exactly 67108864 (64*1024*1024) bytes; it doesn't matter if the file gets split in the middle of a line.
Now we have 2 input splits (so two maps). The last line of the first block and the first line of the second block are not meaningful on their own. TextInputFormat is responsible for reading meaningful lines and giving them to the map jobs.
What TextInputFormat does is:
In the second block it seeks to the second line, which is the first complete line there, reads from that point on, and gives those lines to the second mapper.
The first mapper reads until the end of the first block, and it also processes the last incomplete line of the first block together with the first incomplete line of the second block.
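A self-contained sketch of that convention on a local file, outside Hadoop (it mirrors the behaviour described above but is only an illustration; the path in main is hypothetical): if the split does not start at byte 0, skip up to and including the first newline, then keep reading whole lines until the position has passed the end of the split, so the last line is read to completion even if it physically continues past the boundary.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Sketch: read the lines "belonging" to a byte-range split of a local text file,
// following the same convention TextInputFormat uses for HDFS splits.
public class SplitLineReader {

    static List<String> readSplit(String path, long start, long length) throws IOException {
        List<String> lines = new ArrayList<>();
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            long end = start + length;
            file.seek(start);

            // Not at the beginning of the file: the first (possibly partial) line
            // belongs to the previous split, so skip up to and including its newline.
            if (start != 0) {
                file.readLine();
            }

            // Read whole lines while our position is still inside the split.
            // The last line may extend beyond 'end'; it is still read in full.
            while (file.getFilePointer() <= end) {
                String raw = file.readLine();
                if (raw == null) {
                    break; // end of file
                }
                // readLine() decodes bytes as Latin-1; re-decode as UTF-8 for non-ASCII text.
                lines.add(new String(raw.getBytes(StandardCharsets.ISO_8859_1),
                                     StandardCharsets.UTF_8));
            }
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical usage: treat the file as two 1 KB "splits".
        System.out.println(readSplit("/tmp/sample.txt", 0, 1024));
        System.out.println(readSplit("/tmp/sample.txt", 1024, 1024));
    }
}
```

With the two calls above, no line is lost or duplicated across the two splits, no matter where the 1 KB boundary falls inside a line.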
