Writing a sequence file using MapReduce vs. org.apache.hadoop.fs: what are the differences? - hadoop

I have seen examples of writing a sequence file into HDFS using either the org.apache.hadoop.fs package or MapReduce. My questions are:
What are the differences?
Is the end result the same, i.e. does the sequence file written to HDFS come out identical with both methods?
I have only tried org.apache.hadoop.fs to write a sequence file. When I use hadoop fs -text to view the result, I see the "key" still attached to each record/block. Would it be the same if I used MapReduce to produce the sequence file? I would rather not see the "key".
How does one decide which method to use to write a sequence file into HDFS?

In a sequence file you write your content as objects, i.e. your own custom objects, while a text file is just one string per line.

The Apache Hadoop Wiki states that "SequenceFile is a flat file consisting of binary key/value pairs", and it shows the actual file format, which includes the key. Note that SequenceFiles support multiple formats: "Uncompressed", "Record Compressed", and "Block Compressed". Additionally, there are various compression codecs that can be used. Since the file format and compression information are stored in the file header, applications (such as Mapper and Reducer tasks) can easily determine how to correctly process the files.
The append() method on the org.apache.hadoop.io.SequenceFile.Writer class requires both a key and a value.
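As a rough illustration (the output path and record contents below are made up), here is a minimal sketch of writing a SequenceFile directly through the org.apache.hadoop.fs / org.apache.hadoop.io API; there is no key-less append() overload, which is why hadoop fs -text prints a key, a tab, and then the value for every record:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SequenceFileWriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/tmp/example.seq");   // hypothetical output path

    SequenceFile.Writer writer = null;
    try {
      // Block-compressed; "Uncompressed" and "Record Compressed" are the other options.
      writer = SequenceFile.createWriter(fs, conf, path, Text.class, Text.class,
          SequenceFile.CompressionType.BLOCK);
      // Every record is a key/value pair; append() cannot be called with a value alone.
      writer.append(new Text("record-1"), new Text("first value"));
      writer.append(new Text("record-2"), new Text("second value"));
    } finally {
      IOUtils.closeStream(writer);
    }
  }
}

If you do not care about the key, a common workaround is to use NullWritable as the key class; the key slot still exists in the file, it just carries no data.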
Also keep in mind that both the MapReduce Mapper and Reducer ingest and emit key-value pairs, so having the key stored in the SequenceFile allows Hadoop to operate very efficiently with these types of files.
So in a nutshell:
SequenceFiles will always contain a "key" in addition to the "value".
Two SequenceFiles containing the same data are not necessarily exactly the same in terms of size or actual bytes. It all depends on whether compression is used, the type of compression, and the compression codec.
The method you use to create SequenceFiles and add them to HDFS largely depends on what you are trying to achieve. SequenceFiles are typically a means to efficiently accomplish a particular goal; they are rarely the end result.

Related

Hadoop streaming with multiple input files

I want to build an inverted index from a set of files with Hadoop, using the Streaming API. The documentation always refers to a file whose lines are the entries fed to the mapper. But in this case I have multiple input files, and I need each mapper to process only one file at a time. Is there a way to accomplish that? For preprocessing reasons I need the input to be like this, and I cannot have the input in the classic line = key, value format that the documentation refers to.
By default a mapper only processes one file, unless you use an input class that allows combining inputs, such as CombineFileInputFormat.
Then, if you have 10 files you will end up with 10 mappers, and each of them will process only one file. If you are only using mappers (no reducers), that will result in 10 output files (one for each mapper).
On the other hand, if you have big enough splittable files, it is possible that one file is processed by several mappers at the same time.
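If part of the job can be written in Java, here is a rough sketch (class name made up) of how a mapper can tell which file the current record came from, which is usually what an inverted index needs; in a pure streaming job the same information should be exposed to the script through the map.input.file / mapreduce.map.input.file job property:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class InvertedIndexMapper extends Mapper<LongWritable, Text, Text, Text> {
  private final Text fileName = new Text();

  @Override
  protected void setup(Context context) {
    // Each map task works on a single split; for uncombined, unsplittable
    // inputs that means a single file.
    FileSplit split = (FileSplit) context.getInputSplit();
    fileName.set(split.getPath().getName());
  }

  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws IOException, InterruptedException {
    for (String token : line.toString().split("\\s+")) {
      if (!token.isEmpty()) {
        context.write(new Text(token), fileName);   // word -> file it occurs in
      }
    }
  }
}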

How to decode a binary file which must be decoded using an external binary in one shot?

I have a large number of input files in a proprietary binary format. I need to turn them into rows for further processing. Each file must be decoded in one shot by an external binary (i.e. files must not be concatenated or split).
Options that I'm aware of:
[1] Force a single-file load, extend RecordReader, and use the DistributedCache to run the decoder via the RecordReader.
[2] Force a single-file load, have the RecordReader return the single file, and use Hadoop Streaming to decode each file.
It looks, however, like [2] will not work, since Pig will concatenate records before sending them to the STREAM operator (i.e. it will send multiple records).
[1] seems doable, just a little more work.
Is there a better way?
Option 1 that you mentioned seems like the most viable one. In addition to extending RecordReader, the appropriate InputFormat should be extended, overriding isSplitable() to return false.
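As a rough sketch of that approach (class name made up; this follows the common whole-file pattern, not code from the question): the InputFormat refuses to split files, and its RecordReader hands the entire file to the mapper as a single record, which can then be passed to the external decoder.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class WholeFileInputFormat extends FileInputFormat<NullWritable, BytesWritable> {

  @Override
  protected boolean isSplitable(JobContext context, Path file) {
    return false;   // never split: each file must be decoded in one shot
  }

  @Override
  public RecordReader<NullWritable, BytesWritable> createRecordReader(
      InputSplit split, TaskAttemptContext context) {
    return new RecordReader<NullWritable, BytesWritable>() {
      private FileSplit fileSplit;
      private Configuration conf;
      private final BytesWritable value = new BytesWritable();
      private boolean processed = false;

      @Override public void initialize(InputSplit s, TaskAttemptContext c) {
        fileSplit = (FileSplit) s;
        conf = c.getConfiguration();
      }
      @Override public boolean nextKeyValue() throws IOException {
        if (processed) return false;
        // Read the whole file into one value.
        byte[] contents = new byte[(int) fileSplit.getLength()];
        Path file = fileSplit.getPath();
        FileSystem fs = file.getFileSystem(conf);
        FSDataInputStream in = null;
        try {
          in = fs.open(file);
          IOUtils.readFully(in, contents, 0, contents.length);
          value.set(contents, 0, contents.length);
        } finally {
          IOUtils.closeStream(in);
        }
        processed = true;
        return true;
      }
      @Override public NullWritable getCurrentKey() { return NullWritable.get(); }
      @Override public BytesWritable getCurrentValue() { return value; }
      @Override public float getProgress() { return processed ? 1.0f : 0.0f; }
      @Override public void close() {}
    };
  }
}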

How to use "typedbytes" or "rawbytes" in Hadoop Streaming?

I have a problem that would be solved by Hadoop Streaming in "typedbytes" or "rawbytes" mode, which allow one to analyze binary data in a language other than Java. (Without this, Streaming interprets some characters, usually \t and \n, as delimiters and complains about non-utf-8 characters. Converting all my binary data to Base64 would slow down the workflow, defeating the purpose.)
These binary modes were added by HADOOP-1722. On the command line that invokes a Hadoop Streaming job, "-io rawbytes" lets you define your data as a 32-bit integer size followed by raw data of that size, and "-io typedbytes" lets you define your data as a one-byte type code of zero (which means raw bytes), followed by a 32-bit integer size, followed by raw data of that size. I have created files in these formats (with one or many records) and verified that they are in the right format by checking them against the output of typedbytes.py. I've also tried all conceivable variations (big-endian, little-endian, different byte offsets, etc.). I'm using Hadoop 0.20 from CDH4, which has the classes that implement the typedbytes handling, and it is entering those classes when the "-io" switch is set.
I copied the binary file to HDFS with "hadoop fs -copyFromLocal". When I try to use it as input to a map-reduce job, it fails with an OutOfMemoryError on the line where it tries to make a byte array of the length I specify (e.g. 3 bytes). It must be reading the number incorrectly and trying to allocate a huge block instead. Despite this, it does manage to get a record to the mapper (the previous record? not sure), which writes it to standard error so that I can see it. There are always too many bytes at the beginning of the record: for instance, if the file is "\x00\x00\x00\x00\x03hey", the mapper would see "\x04\x00\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00\x00\x03hey" (reproducible bits, though no pattern that I can see).
From page 5 of this talk, I learned that there are "loadtb" and "dumptb" subcommands of streaming, which copy to/from HDFS and wrap/unwrap the typed bytes in a SequenceFile, in one step. When used with "-inputformat org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat", Hadoop correctly unpacks the SequenceFile, but then misinterprets the typedbytes contained within, in exactly the same way.
Moreover, I can find no documentation of this feature. On Feb 7 (I e-mailed it to myself), it was briefly mentioned in the streaming.html page on Apache, but this r0.21.0 webpage has since been taken down and the equivalent page for r1.1.1 has no mention of rawbytes or typedbytes.
So my question is: what is the correct way to use rawbytes or typedbytes in Hadoop Streaming? Has anyone ever gotten it to work? If so, could someone post a recipe? It seems like this would be a problem for anyone who wants to use binary data in Hadoop Streaming, which ought to be a fairly broad group.
P.S. I noticed that Dumbo, Hadoopy, and rmr all use this feature, but there ought to be a way to use it directly, without being mediated by a Python-based or R-based framework.
Okay, I've found a combination that works, but it's weird.
Prepare a valid typedbytes file in your local filesystem, following the documentation or by imitating typedbytes.py.
Use
hadoop jar path/to/streaming.jar loadtb path/on/HDFS.sequencefile < local/typedbytes.tb
to wrap the typedbytes in a SequenceFile and put it in HDFS, in one step.
Use
hadoop jar path/to/streaming.jar -inputformat org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat ...
to run a map-reduce job in which the mapper gets its input from the SequenceFile. Note that -io typedbytes or -D stream.map.input=typedbytes should not be used: explicitly asking for typedbytes leads to the misinterpretation I described in my question. But fear not: Hadoop Streaming splits the input on its binary record boundaries and not on its '\n' characters. The data arrive in the mapper as raw data separated by '\t' and '\n', like this:
32-bit signed integer, representing length (note: no type character)
block of raw binary with that length: this is the key
'\t' (tab character... why?)
32-bit signed integer, representing length
block of raw binary with that length: this is the value
'\n' (newline character... ?)
If you want to additionally send raw data from mapper to reducer, add
-D stream.map.output=typedbytes -D stream.reduce.input=typedbytes
to your Hadoop command line and format the mapper's output and reducer's expected input as valid typedbytes. They also alternate for key-value pairs, but this time with type characters and without '\t' and '\n'. Hadoop Streaming correctly splits these pairs on their binary record boundaries and groups by keys.
The only documentation on stream.map.output and stream.reduce.input that I could find was in the HADOOP-1722 exchange, starting 6 Feb 09. (Earlier discussion considered a different way to parameterize the formats.)
This recipe does not provide strong typing for the input: the type characters are lost somewhere in the process of creating a SequenceFile and interpreting it with the -inputformat. It does, however, provide splitting at the binary record boundaries rather than at '\n', which is the really important thing, as well as strong typing between the mapper and the reducer.
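For step 1 of the recipe, here is a minimal sketch of writing such a typedbytes file by hand with a plain DataOutputStream (the file name matches the command above, the byte layout follows the description in the question, and the assumption that loadtb expects alternating key/value records is mine):

import java.io.DataOutputStream;
import java.io.FileOutputStream;

public class WriteTypedBytesSketch {
  public static void main(String[] args) throws Exception {
    try (DataOutputStream out =
             new DataOutputStream(new FileOutputStream("local/typedbytes.tb"))) {
      byte[] key = "hey".getBytes("UTF-8");
      byte[] value = new byte[] {0x01, 0x02, 0x03};   // arbitrary binary payload
      writeRawBytes(out, key);     // assumed order: key, value, key, value, ...
      writeRawBytes(out, value);
    }
  }

  private static void writeRawBytes(DataOutputStream out, byte[] data) throws Exception {
    out.writeByte(0);             // type code 0 = raw bytes
    out.writeInt(data.length);    // 32-bit length (DataOutput writes big-endian)
    out.write(data);
  }
}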
We solved the binary data issue by hex-encoding the data at the split level when streaming it down to the Mapper. This utilizes and increases the parallel efficiency of your operation, instead of first transforming your data before processing it on a node.
Apparently there is a patch for a JustBytes IO mode for streaming that feeds a whole input file to the mapper command:
https://issues.apache.org/jira/browse/MAPREDUCE-5018

How can I use data in memory as an input format? - hadoop

I'm writing a MapReduce job, and I have input that I want to pass to the mappers from memory.
The usual way to pass input to the mappers is via HDFS, using SequenceFileInputFormat or TextInputFormat. These input formats need files in HDFS, which are then loaded and split across the mappers.
I can't find a simple way to pass, let's say, a List of elements to the mappers.
I find myself having to write these elements to disk and then use a FileInputFormat.
Any solution?
I'm writing the code in Java, of course.
Thanks.
An InputFormat does not have to load data from the disk or a file system.
There are also input formats that read data from other systems, for example HBase's TableInputFormat (http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapred/TableInputFormat.html), where the data is not assumed to sit on disk; it only has to be available via some API on all nodes of the cluster.
So you need to implement an InputFormat that splits the data using your own logic (since there are no files, that is your own task) and chops the data into records.
Please note that your in-memory data source should be distributed and available on all nodes of the cluster. You will also need some efficient IPC mechanism to pass the data from your process to the Mapper process.
I would also be glad to know what your use case is that leads to this unusual requirement.
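As a rough illustration of such an InputFormat (all class names are made up; here the records travel inside the split itself, whereas on a real cluster you would more likely put a handle, e.g. a key range for some shared in-memory service, into the split and fetch the data in the RecordReader):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

public class InMemoryInputFormat extends InputFormat<NullWritable, Text> {

  // A split that carries its records with it. It must be Writable so the
  // framework can serialize it and ship it to the task.
  public static class InMemorySplit extends InputSplit implements Writable {
    String[] records = new String[0];

    public InMemorySplit() {}                              // needed for deserialization
    public InMemorySplit(String[] records) { this.records = records; }

    @Override public long getLength() { return records.length; }
    @Override public String[] getLocations() { return new String[0]; }  // no locality

    @Override public void write(DataOutput out) throws IOException {
      out.writeInt(records.length);
      for (String r : records) Text.writeString(out, r);
    }
    @Override public void readFields(DataInput in) throws IOException {
      records = new String[in.readInt()];
      for (int i = 0; i < records.length; i++) records[i] = Text.readString(in);
    }
  }

  @Override
  public List<InputSplit> getSplits(JobContext context) {
    // Hard-coded for illustration; a real driver would build the splits from
    // whatever it holds in memory (or from values stored in the job configuration).
    List<InputSplit> splits = new ArrayList<InputSplit>();
    splits.add(new InMemorySplit(new String[] {"alpha", "beta"}));
    splits.add(new InMemorySplit(new String[] {"gamma"}));
    return splits;
  }

  @Override
  public RecordReader<NullWritable, Text> createRecordReader(
      InputSplit split, TaskAttemptContext context) {
    return new RecordReader<NullWritable, Text>() {
      private String[] records;
      private int pos = -1;
      private final Text value = new Text();

      @Override public void initialize(InputSplit s, TaskAttemptContext c) {
        records = ((InMemorySplit) s).records;
      }
      @Override public boolean nextKeyValue() {
        pos++;
        if (pos >= records.length) return false;
        value.set(records[pos]);
        return true;
      }
      @Override public NullWritable getCurrentKey() { return NullWritable.get(); }
      @Override public Text getCurrentValue() { return value; }
      @Override public float getProgress() {
        return records.length == 0 ? 1.0f : (float) (pos + 1) / records.length;
      }
      @Override public void close() {}
    };
  }
}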

Hadoop: Mapping binary files

Typically in Hadoop the input file can be partially read and processed by the Mapper function (as with text files). Is there anything that can be done to handle binaries (say images or serialized objects) which would require all the blocks to be on the same host before the processing can start?
Stick your images into a SequenceFile; then you will be able to process them iteratively, using map-reduce.
To be a bit less cryptic: Hadoop does not natively know anything about text versus non-text. It just has a class that knows how to open an input stream (HDFS handles stitching together blocks on different nodes to make them appear as one large file). On top of that, you have a Reader and an InputFormat that know how to determine where in that stream records start, where they end, and how to find the beginning of the next record if you are dropped somewhere in the middle of the file. TextInputFormat is just one implementation, which treats newlines as the record delimiter. There is also a special format called a SequenceFile that you can write arbitrary binary records into and then get them back out. Use that.
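A rough sketch of that idea (the directory and file names are made up): pack each image into a SequenceFile as one record, keyed by its file name, so an image never gets split across mappers.

import java.io.File;
import java.nio.file.Files;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class PackImagesSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    SequenceFile.Writer writer = null;
    try {
      writer = SequenceFile.createWriter(fs, conf,
          new Path("/tmp/images.seq"), Text.class, BytesWritable.class);
      // "local-images" is a hypothetical local directory full of image files.
      for (File image : new File("local-images").listFiles()) {
        byte[] bytes = Files.readAllBytes(image.toPath());
        // One record per image: the whole object stays together, so a mapper
        // never sees half an image.
        writer.append(new Text(image.getName()), new BytesWritable(bytes));
      }
    } finally {
      IOUtils.closeStream(writer);
    }
  }
}

The map-reduce job then reads these records back with SequenceFileInputFormat, getting the file name as the key and the image bytes as the value.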

Resources