Millions of map outputs for one map input. Is it efficient? - hadoop

So, what I do is: I map a text file as a whole record (WholeFileInputFormat), do some processing, and then write the output line by line with context.write. The problem is that this turns out not to be efficient at all: one map task produces several million outputs and I get a heap memory error. Is there another way of doing this?
// pseudocode of the current mapper; process() and some_value are the poster's placeholders
protected void map(Text fileName, Text fileContents, Context context)
        throws IOException, InterruptedException {
    String output = process(fileContents.toString());
    for (String line : output.split("\n")) {
        context.write(new Text(line), some_value);
    }
}

Related

Very long lines of input in Ruby

I have a Ruby script that performs some substitutions on the output of mysqldump.
The input can have very long lines (hundreds of MB), because a single line can represent a multi-row INSERT statement for all the data in a table. The mysqldump utility can be coerced to produce one INSERT statement per row, but I don't have control of every client.
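(For reference, when you do control the invocation, mysqldump's --skip-extended-insert flag is what produces one INSERT statement per row; the database name below is just a placeholder.)
mysqldump --skip-extended-insert example_db > dump.sql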
My script naively expects IO#each_line to control memory usage:
$stdin.each_line do |line|
  # Skip DROP TABLE / INSERT INTO statements for excluded custom entities.
  next if options[:entity_excludes].any? { |entity| line =~ /^(DROP TABLE IF EXISTS|INSERT INTO) `custom_#{entity}(s|_meta)`/ }
  # Make table creation idempotent, then substitute the site URL placeholder.
  line.gsub!(/^CREATE TABLE `/, "CREATE TABLE IF NOT EXISTS `")
  line.gsub!('{{__OPF_SITEURL__}}', siteurl) if siteurl
  $stdout.write(line)
end
I've already seen input with maximum line length over 400MB, and this translates directly into process resident memory.
Are there libraries for Ruby that allow text transforms on an input stream using buffers instead of relying on line-delimited input?
This was marked as a duplicate of a simpler question. But there's quite a bit more to this. You need to keep track of multiple buffers and test for application of transforms even when they apply across a buffer boundary. It's easy to get wrong, which is why I'm hoping a library already exists.

How to set split size equal one line in Hadoop's MapReduce Streaming?

Goal: each node, having a copy of the matrix, reads the matrix, calculates some value via mapper(matrix, key), and emits <key, value>
I'm trying to use a mapper written in Python via streaming. There are no reducers.
Essentially, I'm trying to do the task similar to https://hadoop.apache.org/docs/current/hadoop-streaming/HadoopStreaming.html#How_do_I_process_files_one_per_map
Approach: I generated an input file (tasks) in the following format (header just for reference):
filename          key
/path/matrix.csv  0
/path/matrix.csv  1
...
/path/matrix.csv  99
Then I run the (Hadoop streaming) mapper on this tasks file. The mapper parses each line to get its arguments (filename and key), reads the matrix from that filename, calculates the value associated with the key, and emits <key, value>.
Problem: the current approach works and produces correct results, but it does so in a single mapper, since the input file is a mere 100 lines of text and does not get split across several mappers. How do I force such a split despite the small input size?
I realized that instead of running several mappers and no reducers I could do the exact opposite. Now my architecture is as follows:
a thin mapper simply reads the input parameters and emits key, value
fat reducers read the files and execute the algorithm with the received key, then emit the results
set -D mapreduce.job.reduces=10 to change the parallelization level (a sample invocation is sketched below)
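For reference, a hypothetical streaming invocation along those lines; the jar path, script names, and input/output paths are placeholders:
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -D mapreduce.job.reduces=10 \
    -files thin_mapper.py,fat_reducer.py \
    -input tasks.txt \
    -output results \
    -mapper thin_mapper.py \
    -reducer fat_reducer.py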
It was a silly (mistaken) approach, but the correct one was not obvious either.

Is it possible to know the serial number of the block of input data on which the map function is currently working?

I am a novice in Hadoop and here I have the following questions:
(1) As I understand it, the original input file is split into several blocks and distributed over the network. Does a map function always execute on a block in its entirety? Could there be more than one map function executing on the data in a single block?
(2) Is there any way that it can be learned, from within the map function, which section of the original input text the mapper is currently working on? I would like to get something like a serial number, for instance, for each block starting from the first block of the input text.
(3) Is it possible to make the splits of the input text in such a way that each block has a predefined word count? If possible then how?
Any help would be appreciated.
As I understand it, the original input file is split into several blocks and distributed over the network. Does a map function always execute on a block in its entirety? Could there be more than one map function executing on the data in a single block?
No. A block (a split, to be precise) gets processed by only one mapper.
Is there any way that it can be learned, from within the map function, which section of the original input text the mapper is currently working on? I would like to get something like a serial number, for instance, for each block starting from the first block of the input text.
You can get some valuable info, such as the file containing the split's data and the position of the first byte in the file to process, with the help of the FileSplit class. You might find it helpful.
Is it possible to make the splits of the input text in such a way that each block has a predefined word count? If possible then how?
You can do that by extending the FileInputFormat class. To begin with, you could do this:
In your getSplits() method, maintain a counter. As you read the file line by line, keep tokenizing the lines and increase the counter by 1 for each token. Once the counter reaches the desired value, emit the data read up to this point as one split, reset the counter, and start on the next split (a rough sketch follows).
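A rough sketch of that idea, assuming uncompressed plain-text input and the new MapReduce API; the class name and configuration key are made up, and splits are cut at line boundaries (only approximately every N words) so the stock LineRecordReader can read them unchanged:
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;
import org.apache.hadoop.util.LineReader;

public class WordCountSplitInputFormat extends FileInputFormat<LongWritable, Text> {

    // Hypothetical configuration key for the desired number of words per split.
    public static final String WORDS_PER_SPLIT = "example.words.per.split";

    @Override
    public List<InputSplit> getSplits(JobContext job) throws IOException {
        Configuration conf = job.getConfiguration();
        int wordsPerSplit = conf.getInt(WORDS_PER_SPLIT, 10000);
        List<InputSplit> splits = new ArrayList<InputSplit>();

        for (FileStatus status : listStatus(job)) {
            Path path = status.getPath();
            FileSystem fs = path.getFileSystem(conf);
            long splitStart = 0;   // byte offset where the current split begins
            long pos = 0;          // byte offset after the last line read
            int words = 0;         // words counted so far in the current split

            FSDataInputStream in = fs.open(path);
            LineReader reader = new LineReader(in, conf);
            try {
                Text line = new Text();
                int bytesRead;
                while ((bytesRead = reader.readLine(line)) > 0) {
                    pos += bytesRead;
                    words += new StringTokenizer(line.toString()).countTokens();
                    if (words >= wordsPerSplit) {
                        // Cut the split at the end of this line (no locality hints given).
                        splits.add(new FileSplit(path, splitStart, pos - splitStart, new String[0]));
                        splitStart = pos;
                        words = 0;
                    }
                }
            } finally {
                reader.close();    // also closes the underlying stream
            }
            if (pos > splitStart) {
                splits.add(new FileSplit(path, splitStart, pos - splitStart, new String[0]));
            }
        }
        return splits;
    }

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(InputSplit split, TaskAttemptContext context) {
        // Splits are cut on line boundaries, so the stock line reader works unchanged.
        return new LineRecordReader();
    }
}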
HTH
If you define a small max split size, you can actually have multiple mappers processing a single HDFS block (say a 32 MB max split for a 128 MB block size: you'll get 4 mappers working on the same HDFS block). With the standard input formats, you'll typically never see two or more mappers processing the same part of the block (the same records).
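A minimal example of capping the split size from the job driver with the new API (the 32 MB figure is just an illustration; the equivalent property mapreduce.input.fileinputformat.split.maxsize can also be set on the command line):
// in the driver, before submitting the job (org.apache.hadoop.mapreduce.lib.input.FileInputFormat)
FileInputFormat.setMaxInputSplitSize(job, 32L * 1024 * 1024);   // max ~32 MB per split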
MapContext.getInputSplit() can usually be cast to a FileSplit, and then you have the Path, offset, and length of the file/block being processed.
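A minimal sketch of that lookup inside a mapper, assuming an input format whose splits really are FileSplits (the logging line is just illustrative):
// inside a Mapper subclass (new API); FileSplit is org.apache.hadoop.mapreduce.lib.input.FileSplit
@Override
protected void setup(Context context) {
    FileSplit split = (FileSplit) context.getInputSplit();
    Path file   = split.getPath();     // file this split belongs to
    long offset = split.getStart();    // byte offset of the split within the file
    long length = split.getLength();   // number of bytes in the split
    System.err.println("split: " + file + " @ " + offset + ", len " + length);
}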
If your input files are true text files, then you can use the method suggested by Tariq, but note that this is highly inefficient for larger data sources, because the job client has to read each input file just to discover the split locations (so you end up reading each file twice). If you really only want each mapper to process a set number of words, you could run a job to re-format the text files into sequence files (or another format) and write the records to disk with a fixed number of words per file (using MultipleOutputs to get one file per word-count group, but this again is inefficient). Maybe if you shared the use case for why you want a fixed number of words, we could better understand your needs and come up with alternatives.

Loading file split input of hadoop map function into a data structure

I have designed a system where each map function is supposed to load its input (a file split containing multiple CSV records) into a data structure and process it, rather than processing the records line by line. There will be multiple mappers, since I will be processing millions of records, so a single mapper is totally inefficient.
I see from the WordCount example that the map function reads line by line, almost as if the map function were invoked for each line of the split it receives. I believe the input to this map should be all of the lines at once instead of one line at a time.
The reduce function has other tasks at hand, so I guess the map function could be tweaked to do the task it's assigned. Is there a workaround?
From what you explain, I understand that your map input is not a single line but some structure built from several lines. In this case, you should create your own InputFormat which converts the input stream (from the split) into a sequence of objects of a YourDataStructure class, and your mapper will accept these YourDataStructure objects.
If the whole split is actually the structure you want to process, I would suggest doing all the logic in the mapper with one trick: you need to know when you are at the last line of the split. This can be done by inheriting from TextInputFormat and tweaking it to signal that the last line has been reached. Then you build your structure in the mapper line by line and, when the last line is signaled, do the job.
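A rough alternative sketch that gets a similar effect without touching TextInputFormat: Hadoop calls Mapper#cleanup() once after the last record of the split, so you can collect lines in map() and do the real work there. The class name, output types, and process() helper below are hypothetical placeholders:
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SplitStructureMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    private final List<String> lines = new ArrayList<String>();  // stand-in for YourDataStructure

    @Override
    protected void map(LongWritable key, Text value, Context context) {
        lines.add(value.toString());   // just collect; no per-line processing here
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // Called once after the last line of the split: build and process the structure here.
        context.write(new Text(process(lines)), NullWritable.get());
    }

    private String process(List<String> lines) {
        return "processed " + lines.size() + " lines";   // placeholder for the real logic
    }
}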

Hadoop read multiple lines at a time

I have a file in which every set of four lines represents a record.
e.g., the first four lines represent record 1, the next four represent record 2, and so on.
How can I ensure the mapper receives these four lines at a time?
Also, I want the file splitting in Hadoop to happen at record boundaries (line numbers that are multiples of four), so records don't span multiple splits.
How can this be done?
A few approaches, some dirtier than others:
The right way
You may have to define your own RecordReader, InputSplit, and InputFormat. Depending on exactly what you are trying to do, you will be able to reuse some of the already existing ones of the three above. You will likely have to write your own RecordReader to define the key/value pair and you will likely have to write your own InputSplit to help define the boundary.
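To make that first option more concrete, here is a rough sketch of a RecordReader that glues four consecutive lines into one value by delegating to the stock LineRecordReader. The class name is illustrative, and on its own this does not solve the split-boundary problem, which is why a custom InputSplit/InputFormat is also mentioned above:
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class FourLineRecordReader extends RecordReader<LongWritable, Text> {

    private final LineRecordReader lineReader = new LineRecordReader();
    private final LongWritable key = new LongWritable();
    private final Text value = new Text();

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        lineReader.initialize(split, context);
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        StringBuilder record = new StringBuilder();
        int linesRead = 0;
        while (linesRead < 4 && lineReader.nextKeyValue()) {
            if (linesRead == 0) {
                key.set(lineReader.getCurrentKey().get());   // offset of the record's first line
            } else {
                record.append('\n');
            }
            record.append(lineReader.getCurrentValue().toString());
            linesRead++;
        }
        if (linesRead == 0) {
            return false;                                    // end of split
        }
        value.set(record.toString());
        return true;
    }

    @Override
    public LongWritable getCurrentKey() { return key; }

    @Override
    public Text getCurrentValue() { return value; }

    @Override
    public float getProgress() throws IOException { return lineReader.getProgress(); }

    @Override
    public void close() throws IOException { lineReader.close(); }
}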
Another right way, which may not be possible
The above task is quite daunting. Do you have any control over your data set? Can you preprocess it in some way (either while it is coming in or at rest)? If so, you should strongly consider trying to transform your dataset into something that is easier to read out of the box in Hadoop.
Something like:
ALine1
ALine2
ALine3    ->    ALine1;ALine2;ALine3;ALine4
ALine4

BLine1
BLine2
BLine3    ->    BLine1;BLine2;BLine3;BLine4
BLine4
Down and Dirty
Do you have any control over the file sizes of your data? If you manually split your data on the block boundary, you can force Hadoop to not care about records spanning splits. For example, if your block size is 64MB, write your files out in 60MB chunks.
Without worrying about input splits, you could do something dirty: In your map function, add your new key/value pair into a list object. If the list object has 4 items in it, do processing, emit something, then clean out the list. Otherwise, don't emit anything and move on without doing anything.
The reason why you have to manually split the data is that you are not going to be guaranteed that an entire 4-row record will be given to the same map task.
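A rough sketch of that buffer-four-lines idea; the class, key/value types, and the semicolon-joined output are illustrative, and it assumes each split starts on a record boundary (e.g. via the manual file sizing above, or a non-splittable input format like the one in the next answer):
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class FourLineMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    private final List<String> buffer = new ArrayList<String>(4);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        buffer.add(value.toString());
        if (buffer.size() == 4) {                              // one full 4-line record collected
            String record = String.join(";", buffer);          // stand-in for real processing
            context.write(new Text(record), NullWritable.get());
            buffer.clear();                                    // start collecting the next record
        }
    }
}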
Another way (easy, but possibly inefficient in some cases) is to override FileInputFormat#isSplitable() to return false. Then the input files are not split and are processed one per map.
import org.apache.hadoop.fs.*;
import org.apache.hadoop.mapred.TextInputFormat;

public class NonSplittableTextInputFormat extends TextInputFormat {
    @Override
    protected boolean isSplitable(FileSystem fs, Path file) {
        return false;   // never split: each input file is processed by a single mapper
    }
}
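For completeness, a hypothetical way to plug the format above into a job using the old mapred API it targets (the driver class is a placeholder):
JobConf conf = new JobConf(MyDriver.class);
conf.setInputFormat(NonSplittableTextInputFormat.class);   // one whole file per mapper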
And as orangeoctopus said
In your map function, add your new key/value pair into a list object. If the list object has 4 items in it, do processing, emit something, then clean out the list. Otherwise, don't emit anything and move on without doing anything.
This has some overhead, for the following reasons:
Time to process the largest file drags the job completion time.
A lot of data may be transferred between the data nodes.
The cluster is not properly utilized, since # of maps = # of files.
** The above code is from Hadoop: The Definitive Guide.
