Hadoop 0.2: How to read outputs from TextOutputFormat? - hadoop

My reducer class produces its outputs with TextOutputFormat (the default OutputFormat given by Job). I'd like to consume these outputs after the MapReduce job completes in order to aggregate them, and then write the aggregated information out in the same text format so that it can be consumed by the next iteration of the MapReduce task. Can anyone give me an example of how to write and read in this format? By the way, the reason I am using text rather than SequenceFileOutputFormat is interoperability: the outputs should be consumable by any software.

Don't rule out sequence files just yet; they make it fast and easy to chain MapReduce jobs, and you can use "hadoop fs -text filename" to output them in text format if you need them that way for other things.
But, back to your original question: to use TextInputFormat, set it as the input format on the Job, and then use FileInputFormat.setInputPaths (which TextInputFormat inherits) to specify which files it should use as input. The key to your mapper will be a LongWritable (the byte offset of the line) and the value a Text (the line itself).
To use TextOutputFormat as output, set it as the output format on the Job, and then use FileOutputFormat.setOutputPath (which TextOutputFormat inherits) to specify the output path. If your reducers (or mappers, in a map-only job) use NullWritable as the output key type, you get just the text representation of the values, one per line; otherwise each line will be the text representation of the key and the value separated by a tab (by default; you can change this by setting "mapred.textoutputformat.separator" to a different separator).
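As for consuming the output outside Hadoop afterwards: each part file is plain text, one key/value pair per line, separated by a tab by default. A minimal Python sketch (outside the MapReduce framework; the aggregation of integer counts is just an illustrative assumption) that parses and merges such lines:

```python
from collections import defaultdict

def parse_text_output(lines, separator="\t"):
    """Parse lines written by TextOutputFormat into (key, value) pairs.

    Each line is "key<sep>value"; a line with no separator is treated
    as value-only (what you get when the key type is NullWritable).
    """
    for line in lines:
        line = line.rstrip("\n")
        if separator in line:
            key, value = line.split(separator, 1)
        else:
            key, value = None, line
        yield key, value

def aggregate_counts(lines):
    """Sum integer values per key, e.g. to merge the counts from
    several part-r-* files into one total per key."""
    totals = defaultdict(int)
    for key, value in parse_text_output(lines):
        totals[key] += int(value)
    return dict(totals)
```

Writing the aggregate back out as "key\tvalue" lines keeps it readable by TextInputFormat (or KeyValueTextInputFormat) in the next iteration.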

Related

How to set split size equal one line in Hadoop's MapReduce Streaming?

Goal: each node, having a copy of the matrix, reads the matrix, calculates some value via mapper(matrix, key), and emits <key, value>
I'm trying to use a mapper written in Python via streaming. There are no reducers.
Essentially, I'm trying to do the task similar to https://hadoop.apache.org/docs/current/hadoop-streaming/HadoopStreaming.html#How_do_I_process_files_one_per_map
Approach: I generated an input file (tasks) in the following format (header just for reference):
/path/matrix.csv 0
/path/matrix.csv 1
... 99
Then I run the (Hadoop streaming) mapper on this task file. The mapper parses each line to get its arguments (filename and key), reads the matrix from that file, calculates the value associated with the key, and emits <key, value>.
Problem: the current approach works and produces correct results, but it does so in a single mapper, since the input file is a mere 100 lines of text and does not get split across several mappers. How do I force such a split despite the small input size?
I realized that instead of several mappers and no reducers I could do the exact opposite. My architecture is now as follows:
- a thin mapper simply reads the input parameters and emits key, value
- fat reducers read the files and execute the algorithm for the key they receive, then emit results
- set -D mapreduce.job.reduces=10 to change the parallelization level
The original approach was mistaken, but the correct one was not obvious either.
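The thin-mapper / fat-reducer split can be sketched as a pair of streaming scripts. This is a sketch under assumptions: the task-line format ("/path/matrix.csv 0") comes from the question, while calculate(path, key) is a hypothetical stand-in for the actual algorithm; a real streaming script would read sys.stdin and print each record.

```python
def thin_mapper(lines):
    """Turn task lines ("<path> <key>") into "key<TAB>path" records,
    so the shuffle distributes the keys across the reducers."""
    out = []
    for line in lines:
        parts = line.split()
        if len(parts) == 2:
            path, key = parts
            out.append(f"{key}\t{path}")
    return out

def fat_reducer(lines, calculate):
    """Run the heavy per-key computation and emit "key<TAB>result".
    `calculate(path, key)` stands in for the real algorithm."""
    return [f"{key}\t{calculate(path, key)}"
            for key, path in (l.rstrip("\n").split("\t", 1) for l in lines)]
```

With mapreduce.job.reduces=10, ten reducers each receive a share of the keys and do the expensive work in parallel.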

Understanding TextInputFormat in Hadoop

I am from a mainframes background and am trying to understand the different input formats in Hadoop. Can anyone please explain the three input formats below?
- TextInputFormat
- KeyValueInputFormat
- SequenceFileInputFormat
TextInputFormat: reads lines of text files and provides the byte offset of the line as the key and the actual line as the value to the Mapper.
In other words, TextInputFormat is an InputFormat for plain text files: files are broken into lines, with either linefeed or carriage return signalling end of line. Keys are the position in the file, and values are the line of text.
KeyValueTextInputFormat (the old KeyValueInputFormat): this format also treats each line of input as a separate record. Everything up to the first tab character is sent to the Mapper as the key, and the remainder of the line as the value.
While TextInputFormat treats the entire line as the value, KeyValueTextInputFormat breaks the line into key and value by searching for a tab character.
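The difference can be illustrated outside Hadoop. A small Python sketch of how the two formats would derive the mapper's key and value from the same input, using the byte-offset and tab-splitting rules described above:

```python
def text_input_records(data):
    """Mimic TextInputFormat: key = byte offset of the line,
    value = the whole line (tab and all)."""
    records, offset = [], 0
    for line in data.splitlines(keepends=True):
        records.append((offset, line.rstrip("\r\n")))
        offset += len(line)
    return records

def key_value_records(data):
    """Mimic KeyValueTextInputFormat: split each line at the first
    tab; a line with no tab becomes a key with an empty value."""
    records = []
    for line in data.splitlines():
        key, _sep, value = line.partition("\t")
        records.append((key, value))
    return records
```

Given the data "one\tfirst line\ntwo\tsecond line\n", the first format yields offset/line pairs while the second yields ("one", "first line") and ("two", "second line").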
SequenceFileInputFormat: reads special binary files that are specific to Hadoop. These files include many features designed to allow data to be read rapidly into Hadoop mappers.
Sequence files can be block-compressed and provide direct serialization and deserialization of arbitrary data types (not just text). Sequence files can be generated as the output of other MapReduce tasks and are an efficient intermediate representation for data passing from one MapReduce job to another.

Hadoop Streaming Job with no input file

Is it possible to execute a Hadoop Streaming job that has no input file?
In my use case, I'm able to generate the necessary records for the reducer with a single mapper and the execution parameters. Currently I'm using a stub input file with a single line; I'd like to remove this requirement.
We have two use cases in mind.
1) I want to distribute the loading of files into HDFS from a network location available to all nodes. Basically, I'm going to run ls in the mapper and send the output to a small set of reducers.
2) We are going to be running fits over several different parameter ranges against several models. The model names do not change and will go to the reducer as keys, while the list of tests to run is generated in the mapper.
According to the docs this is not possible. The following are required parameters for execution:
input directoryname or filename
output directoryname
mapper executable or JavaClassName
reducer executable or JavaClassName
It looks like providing a dummy input file is the way to go currently.
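Since a dummy input is required, the stub can at least be generated on the fly so no permanent placeholder file needs to live in the repository. A minimal sketch (the stub content, script names, and streaming invocation shown in the comment are all placeholders, not real job parameters):

```python
import os
import tempfile

def make_stub_input(directory=None):
    """Create a single-line dummy input file so the streaming job has
    something to feed its lone mapper. Returns the file's path."""
    directory = directory or tempfile.mkdtemp()
    path = os.path.join(directory, "stub.txt")
    with open(path, "w") as f:
        f.write("go\n")  # content is ignored by the mapper
    return path

# The file would then be copied into HDFS and the job launched roughly as
# (placeholder names):
#   hadoop jar hadoop-streaming.jar \
#       -input /tmp/stub.txt -output /tmp/out \
#       -mapper generate_records.py -reducer aggregate.py
```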

Hadoop with supercsv

I have to process data in very large text files (around 5 TB in size). The processing logic uses superCSV to parse the data and run some checks on it. Since the size is quite large, we plan to use Hadoop to take advantage of parallel computation. I have installed Hadoop on my machine and started to write the mapper and reducer classes, but I am stuck: because the map requires a key-value pair, I am not sure what the key and the value should be when reading this text file. Can someone help me out with that?
My thought process is something like this (let me know if I am correct):
1) Read the file using superCSV, and have Hadoop generate the superCSV beans for each chunk of the file in HDFS (I am assuming that Hadoop takes care of splitting the file).
2) For each of these superCSV beans, run my check logic.
Is the data newline-separated? That is, if you split the data on each newline character, will each chunk always be a single, complete record? This depends on how superCSV encodes text, and on whether your actual data contains newline characters.
If yes:
Just use TextInputFormat. It provides you with (I think) the byte offset as the map key, and the whole line as the value. You can ignore the key, and parse the line using superCSV.
If no:
You'll have to write your own custom InputFormat; here's a good tutorial: http://developer.yahoo.com/hadoop/tutorial/module5.html#fileformat. The specifics of exactly what the key and the value are don't matter much to the mapper input; just make sure one of the two contains the actual data you want. You can even use NullWritable as the type of one of them.
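In the newline-separated case, the mapper logic amounts to ignoring the byte-offset key and parsing the line. A sketch of that shape, where Python's csv module stands in for superCSV (which would produce a typed bean rather than a list) and run_checks is a hypothetical placeholder for the real validation:

```python
import csv
import io

def map_csv_line(offset, line):
    """Mapper logic for a TextInputFormat record: ignore the
    byte-offset key and parse the line as CSV. Python's csv module
    stands in here for superCSV."""
    fields = next(csv.reader(io.StringIO(line)))
    return run_checks(fields)

def run_checks(fields):
    """Placeholder for the actual check logic: here, just report the
    indices of empty fields."""
    return [i for i, field in enumerate(fields) if not field.strip()]
```

Note that a quoted field like "c,d" survives parsing intact, which is exactly the property that breaks naive splitting on commas and motivates using a CSV parser in the mapper.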

Control the Reducer Result Output File/Bucket

I have an application where I would like my reducers (I have several of them for a map/reduce job) to write their outputs to different files on HDFS depending on the key they receive for processing. So if a reducer sees a key of, say, type A, it should apply the reduce logic but tell Hadoop to put the result into the HDFS file belonging to the type-A results, and so on. Obviously multiple reducers may output different portions of the type-A result, and each reducer can end up working on any type, A or B, but it should tell Hadoop to write the result into the type-A bucket (or whichever bucket matches).
Is this possible?
MultipleOutputs is almost what you are looking for (assuming you are at least at version 0.21). In my own work I have used a clone of this class modified to be more flexible about naming conventions to send output to different folders/files based on anything I want, including aspects of the input records (keys or values). As is, the class has some draconian restrictions on what names you can give to the outputs.
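The routing idea itself can be illustrated independently of the MultipleOutputs API. A Python sketch of grouping reduced records into named buckets by key (the bucket-naming scheme is a hypothetical stand-in for the restricted names MultipleOutputs allows; a real job would write each bucket to its own HDFS file rather than an in-memory dict):

```python
from collections import defaultdict

def route_to_buckets(records, bucket_name=lambda key: f"type-{key}"):
    """Group (key, value) records into named buckets, mimicking a
    MultipleOutputs-style class that sends each key's results to its
    own output file."""
    buckets = defaultdict(list)
    for key, value in records:
        buckets[bucket_name(key)].append((key, value))
    return dict(buckets)
```

Any aspect of the record (key or value) can drive bucket_name, which is the flexibility the modified clone mentioned above provides.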
