Does Hadoop create InputSplits in parallel? - hadoop

I have a large text file, around 13 GB in size, that I want to process with Hadoop. I know that Hadoop uses FileInputFormat to create InputSplits, which are assigned to mapper tasks. What I want to know is whether Hadoop creates these InputSplits sequentially or in parallel. Does it read the large text file sequentially on a single host and create split files that are then distributed to datanodes, or does it read chunks of, say, 50 MB in parallel?
Does Hadoop replicate the big file on multiple hosts before splitting it up?
Is it recommended that I split the file into 50 MB chunks myself to speed up processing? There are many questions about the appropriate split size for mapper tasks, but none about the exact splitting process itself.
Thanks

InputSplits are created on the client side, and each one is just a logical representation of a portion of the file: it contains only the file path plus start and end offsets (the record boundaries are later worked out in LineRecordReader's initialize method). Computing this logical representation takes very little time, so there is no need to pre-split your file into chunks; the real work happens on the mapper side, where execution is parallel. The client writes the InputSplit metadata into HDFS, the JobTracker picks it up, and for each split it allocates a TaskTracker. One mapper's execution does not depend on another's: each mapper knows exactly where it has to start processing its split, so the mapper executions run in parallel.
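To make the "cheap logical representation" point concrete, here is a minimal sketch (not Hadoop's actual code; the function name and paths are illustrative) of the kind of (path, start, length) records that FileInputFormat.getSplits computes on the client:

```python
def compute_splits(path, file_size, split_size):
    """Return logical splits as (path, start_offset, length) tuples.

    This is a simplified model: real Hadoop splits also carry the
    hostnames of the datanodes holding each block.
    """
    splits = []
    offset = 0
    while offset < file_size:
        length = min(split_size, file_size - offset)
        splits.append((path, offset, length))
        offset += length
    return splits

GB, MB = 1024 ** 3, 1024 ** 2
# A 13 GB file with 128 MB splits yields 104 tiny metadata records;
# no file data is read to produce them.
splits = compute_splits("/data/big.txt", 13 * GB, 128 * MB)
print(len(splits))   # 104
print(splits[0])     # ('/data/big.txt', 0, 134217728)
```

This is why computing splits sequentially on one client is not a bottleneck: the work is proportional to the number of splits, not the number of bytes.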

I suppose you want to process the file using MapReduce, not just Hadoop. Hadoop is a platform that provides the tools to process and store large data sets.
When you store the file in HDFS (the Hadoop filesystem), it is split into multiple blocks. The block size is defined in hdfs-site.xml by the dfs.blocksize property (dfs.block.size in older versions). For example, with a 128 MB block size your input file will be split into 128 MB blocks. This is how HDFS stores the data internally; to the user it always appears as a single file.
When you provide the input file (stored in HDFS) to MapReduce, it launches one mapper task per block/split of the file. This is the default behavior.
You do not need to split the file into chunks yourself: just store the file in HDFS and the framework will do the splitting for you.

First, let us understand what is meant by an input split.
When your text file is divided into blocks of 128 MB (the default) by HDFS, suppose the 10th line of the file is cut in two: the first half lands in the first block and the other half in the second block. When you submit a map program, Hadoop understands that the last line of the 1st block (which becomes part of an input split here) is not complete, so it carries the second half of the 10th line over to the first input split. This implies:
1) 1st input split = 1st block + 2nd part of the 10th line from the 2nd block
2) 2nd input split = 2nd block - 2nd part of the 10th line from the 2nd block
This boundary handling is built into Hadoop's record readers. By default the input split size equals the block size, but it can be tuned through job configuration (e.g. mapreduce.input.fileinputformat.split.maxsize). The default block size in Hadoop v2 is 128 MB, and it can itself be configured via dfs.blocksize.

Related

Blocks in Mapreduce

I have a very important question because I must give a presentation about MapReduce.
My question is:
I have read that in MapReduce the file is divided into blocks, and every block is replicated on 3 different nodes. A block can be 128 MB. Is this block the input file? I mean, will this 128 MB block be split into parts, with every part going to a single map? If yes, into what size will this 128 MB be divided?
Or does the file break into blocks, and these blocks are the input for the mappers?
I'm a little bit confused.
Could you look at the two options below and tell me which one is right?
Option 1: the HDFS file is divided into blocks, and every single 128 MB block is the input for 1 map.
Option 2: the HDFS file is one block, and this 128 MB will be split, and every part will be the input for 1 map.
Let's say you have a file of 2 GB and you want to place that file in HDFS. Then there will be 2 GB / 128 MB = 16 blocks, and these blocks will be distributed across the different DataNodes.
Data splitting happens based on file offsets. The goal of splitting the file and storing it in different blocks is parallel processing and failover of the data.
A split is a logical division of the data, used during data processing by a Map/Reduce program or other data-processing techniques in Hadoop. The split size is a user-defined value, and one can choose a split size based on the volume of data being processed.
The split essentially controls the number of mappers in a Map/Reduce program. If you have not defined an input split size, the default HDFS block size is taken as the input split (i.e. input split = input block, so 16 mappers will be triggered for a 2 GB file). If the split size is defined as 100 MB (let's say), then 21 mappers will be triggered (20 mappers for 2000 MB and a 21st mapper for the remaining 48 MB).
Hope this clears your doubt.
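The mapper-count arithmetic above can be sketched as a one-line ceiling division (a simplified model: real Hadoop also applies min-split and rack-locality rules):

```python
import math

def num_mappers(file_size_mb, split_size_mb):
    """One mapper per input split: ceil(file size / split size)."""
    return math.ceil(file_size_mb / split_size_mb)

# 2 GB file, default 128 MB splits -> 16 mappers (one per block)
print(num_mappers(2048, 128))   # 16
# Same file with a 100 MB split size -> 21 mappers
# (20 full 100 MB splits + one 48 MB split)
print(num_mappers(2048, 100))   # 21
```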
HDFS stores the file as blocks, and each block is 128 MB in size (by default).
MapReduce processes this HDFS file; each mapper processes one block (input split).
So, to answer your question: 128 MB is a single block size, which will not be split further.
Note: the input split used in the MapReduce context is a logical split, whereas the split in HDFS is a physical split.

Hadoop HDFS: Read/Write parallelism?

I couldn't find enough information on the internet, so I'm asking here:
Assume I'm writing a huge file to disk, hundreds of terabytes, as the result of a MapReduce (or Spark or whatever) job. How would MapReduce write such a file to HDFS efficiently (potentially in parallel?) so that it can later be read in parallel as well?
My understanding is that HDFS is simply block based (e.g. 128 MB). So in order to write the second block, you must have written the first block (or at least determined what content goes into block 1). Let's say it's a CSV file; it is quite possible that a line in the file spans two blocks. How could we read such a CSV into different mappers in MapReduce? Does it have to do some smart logic to read two blocks, concatenate them, and extract the proper line?
Hadoop uses RecordReaders and InputFormats as the two interfaces that read and interpret the bytes within blocks.
By default in Hadoop MapReduce, with TextInputFormat each record ends at a newline. For the scenario where a line crosses the end of a block, the next block must be read, even if what remains is literally just the \r\n characters.
Writing data is done by the reduce tasks (or Spark executors, etc.): each task is responsible for writing only a subset of the entire output. You'll generally never get a single file for a non-trivial job, and this isn't a problem, because the input arguments of most Hadoop processing engines are meant to scan directories, not point at single files.
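Here is a toy sketch of the output convention just described: each task writes its own part file, and downstream readers scan the whole directory rather than expecting one file. The file-name pattern follows Hadoop's part-r-NNNNN convention; the helper names and directory layout are illustrative, not a real Hadoop API.

```python
import os
import tempfile

def write_output(out_dir, partitions):
    """Each 'task' writes only its own part file -- this is what lets
    hundreds of reducers write one logical dataset in parallel."""
    os.makedirs(out_dir, exist_ok=True)
    for task_id, records in enumerate(partitions):
        part = os.path.join(out_dir, f"part-r-{task_id:05d}")
        with open(part, "w") as f:
            f.writelines(line + "\n" for line in records)

def read_output(out_dir):
    """Readers scan the directory, so the number of part files is invisible
    to downstream jobs."""
    lines = []
    for name in sorted(os.listdir(out_dir)):
        if name.startswith("part-"):
            with open(os.path.join(out_dir, name)) as f:
                lines.extend(f.read().splitlines())
    return lines

out = os.path.join(tempfile.mkdtemp(), "job-output")
write_output(out, [["a\t1", "b\t2"], ["c\t3"]])
print(sorted(os.listdir(out)))  # ['part-r-00000', 'part-r-00001']
print(read_output(out))         # ['a\t1', 'b\t2', 'c\t3']
```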

Hadoop Mapreduce HDFS block split

My question is this: I have a text file with 100 words in it, separated by spaces, and I need to write a word-count program.
So, when the file is split into HDFS blocks, how can we be sure that the split happens only at the end of a word?
I.e., if the 50th word in my text file is "Hadoop", what if, while splitting into 64 MB blocks, the current block reaches 64 MB in the middle of the word "Hadoop", so that one block contains "had" and some other block contains "oop"?
Sorry if the question sounds silly, but please provide an answer. Thanks.
The answer to this is the input split.
HDFS does not know anything about the content of the file, so while storing data across multiple blocks, the last record of each block may be broken: the first part of the record may be in one block and the rest of the same record in some other block.
To solve this problem, MapReduce uses the concept of input splits.
A 'block' is the physical division of the data, 128 MB in size and distributed across multiple DataNodes, whereas an 'input split' is a logical division of the data.
When a MapReduce program runs, the number of mappers depends on the number of input splits, and during processing an input split includes the location of the next block, which contains the rest of the broken record.
For example, with three HDFS blocks where the last record of Block-1 spills over into Block-2, the input split for Block-1 will include the location of Block-2 so the reader can retrieve the broken record.

How to Hadoop Map Reduce entire file

I've played around with various streaming MapReduce word-count examples, where Hadoop/HBase appears to take a large file and break it (at a line break) equally between the nodes. It then submits each line of the partial document to the map portion of my code. My question is: when I have lots of little unstructured and semi-structured documents, how do I get Hadoop to submit the entire document to my map code?
File splits are calculated by InputFormat.getSplits. For each input file it computes a number of splits, and each split is submitted to a mapper. The mapper then processes its input split according to the InputFormat.
There are different types of InputFormats. Consider, for example, TextInputFormat, which takes text files as input and, for each split, supplies the line offset as the key and the entire line as the value to the map method of the Mapper. Other InputFormats work similarly.
Now, if you have many small files, say each smaller than the block size, then each file is handed to a different mapper. If a file's size exceeds the block size, it is split into multiple blocks, and each block is processed separately.
Consider an example where the input files are 1 MB each and you have 64 such files. Also assume that your block size is 64 MB.
You will then have 64 mappers kicked off, one per file.
Now consider you have two 100 MB files.
Each 100 MB file will be split into 64 MB + 36 MB, so 4 mappers will be kicked off in total.
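The key property behind these numbers is that splits never span file boundaries. A small sketch (illustrative function name, not a real Hadoop API) reproduces both cases from the example above:

```python
def splits_per_file(file_size_mb, block_mb=64):
    """Split sizes for ONE file: splits never cross file boundaries,
    so every small file costs at least one mapper."""
    sizes = []
    remaining = file_size_mb
    while remaining > 0:
        sizes.append(min(block_mb, remaining))
        remaining -= block_mb
    return sizes

# 64 files of 1 MB each -> 64 splits, i.e. 64 mappers
total = sum(len(splits_per_file(1)) for _ in range(64))
print(total)                  # 64
# A 100 MB file splits as 64 MB + 36 MB -> 2 mappers per file
print(splits_per_file(100))   # [64, 36]
```

This per-file behavior is also why many tiny files are expensive in MapReduce: mapper count grows with file count, not data volume.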

fs -put (or copyFromLocal) and data type awareness

If I upload a 117 MB text file to HDFS using hadoop fs -put filename, I can see that one datanode contains a file part of size 64.98 MB (close to the default block size) and another datanode contains a file part of size 48.59 MB.
My question is whether this split position was calculated in a data-aware way (recognising somehow that the file is text and thus splitting the file at "\n", for example).
I realise that an InputFormat can be used to tell running jobs how to split the file in an intelligent way, but as I didn't specify the file type in the fs -put command, I was wondering if (and if so, how) an intelligent split would be done in this case.
Ellie
I think you are mixing up two things here; the following two types of splitting are completely separate:
Splitting files into HDFS blocks
Splitting files to be distributed to the mappers
And no, the split position is not calculated in a data-aware way.
By default, if you are using FileInputFormat, these two types of splitting more or less overlap (and hence are identical).
But you can always have a custom way of splitting for the second point above (or even no splitting at all, i.e. have one complete file go to a single mapper).
You can also change the HDFS block size independently of the way your InputFormat splits the input data.
Another important point: while files really are broken up physically when stored in HDFS, the split used to distribute data to mappers involves no physical splitting of files; it is only a logical split.
Taking an example:
Suppose we want to load a 110 MB text file into HDFS, with both the HDFS block size and the input split size set to 64 MB.
The number of mappers is based on the number of input splits, not the number of HDFS blocks.
When we set the HDFS block size to 64 MB, it is exactly 67,108,864 (64*1024*1024) bytes; it doesn't matter that the file may get cut in the middle of a line.
Now we have 2 input splits (so two maps). The last line of the first block and the first line of the second block are not meaningful on their own. TextInputFormat is responsible for reading complete lines and giving them to the map tasks.
What TextInputFormat does is:
The first mapper reads until the end of the first block, and it also processes the last incomplete line of the first block together with the first incomplete line of the second block.
In the second block, the reader seeks forward to the first complete line, reads from there, and gives those lines to the second mapper.
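This boundary rule can be simulated in a few lines. The sketch below is a toy model (not Hadoop source) of the TextInputFormat behavior just described: every reader except the first skips ahead past the first newline, and every reader is allowed to read past the end of its split to finish its last line.

```python
def read_split(data: bytes, start: int, length: int):
    """Return the complete lines a mapper for split [start, start+length)
    would process, mimicking TextInputFormat's boundary handling."""
    end = start + length
    pos = start
    if start != 0:
        # Skip the (possibly partial) line that the previous split finishes.
        nl = data.find(b"\n", start)
        if nl == -1:
            return []
        pos = nl + 1
    lines = []
    while pos < end and pos < len(data):
        nl = data.find(b"\n", pos)
        if nl == -1:
            lines.append(data[pos:])  # last line of file, no trailing newline
            break
        lines.append(data[pos:nl])    # may read past `end` to finish a line
        pos = nl + 1
    return lines

data = b"line1\nline2\nline3\n"
# Cut the "file" at byte 8, i.e. in the middle of "line2".
first = read_split(data, 0, 8)
second = read_split(data, 8, len(data) - 8)
print(first)   # [b'line1', b'line2']  first mapper finishes the broken line
print(second)  # [b'line3']            second mapper skips the partial line
```

Every byte is processed exactly once, even though neither split boundary falls on a newline: that is the whole trick behind logical splits over physically split blocks.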
