Hadoop - The Definitive Guide says
If you want to log binary types, plain text isn’t a suitable format.
My questions are: 1. Why not? 2. What are binary types?
and further:
Hadoop’s SequenceFile class fits the bill in this situation, providing
a persistent data structure for binary key-value pairs. To use it as a
logfile format, you would choose a key, such as timestamp represented
by a LongWritable, and the value is a Writable that represents the
quantity being logged.
Why can't a text file be used, and why is a sequence file required?
On the same page, the book also states:
For some applications, you need a specialized data structure to hold your data. For doing
MapReduce-based processing, putting each blob of binary data into its own file doesn’t
scale, so Hadoop developed a number of higher-level containers for these situations.
For example, assume you are uploading images to Facebook and have to remove duplicate images. You can't store an image in text format. What you can do is compute the MD5SUM of the image file and, if that MD5SUM already exists in the system, simply discard the duplicate upload. In a plain text file you could only record something like "Date:" and "Number of images uploaded". The images themselves can be stored outside the HDFS system, for example on a CDN or some other web server.
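To make the book's suggestion concrete, here is a minimal sketch (not the book's code) that skips duplicate images via MD5 and appends the new ones to a SequenceFile keyed by a LongWritable timestamp, with the raw bytes as a BytesWritable value. The output path and the in-memory hash set are placeholder assumptions:

    // Minimal sketch: dedupe images by MD5 and log new ones to a SequenceFile
    // keyed by timestamp. Paths and the in-memory Set are placeholders.
    import java.io.File;
    import java.nio.file.Files;
    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.MD5Hash;
    import org.apache.hadoop.io.SequenceFile;

    public class ImageLogWriter {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Set<String> seenHashes = new HashSet<>();     // in practice: a lookup table or DB
        SequenceFile.Writer writer = SequenceFile.createWriter(conf,
            SequenceFile.Writer.file(new Path("/logs/images.seq")),   // hypothetical path
            SequenceFile.Writer.keyClass(LongWritable.class),
            SequenceFile.Writer.valueClass(BytesWritable.class));
        try {
          for (String name : args) {                  // local image files to ingest
            byte[] bytes = Files.readAllBytes(new File(name).toPath());
            String md5 = MD5Hash.digest(bytes).toString();
            if (!seenHashes.add(md5)) {
              continue;                               // duplicate image: skip it
            }
            // Binary value goes in as-is; no escaping or text encoding needed.
            writer.append(new LongWritable(System.currentTimeMillis()),
                          new BytesWritable(bytes));
          }
        } finally {
          IOUtils.closeStream(writer);
        }
      }
    }

In a plain text log you would have to base64-encode or otherwise escape those image bytes (and pick delimiters that never appear in the data), which is exactly the overhead the book is alluding to.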
Related
I have a bunch of Parquet files containing data where each row has the form [key, data1, data2, data3,...]. I need to know in which file a certain key is located, without actually opening each file and searching. Is it possible to get this from the Parquet metadata?
The keys are formatted as strings.
I already tried accessing the metadata using PyArrow, but didn't get the data I wanted.
Short answer is no.
Longer answer: Parquet has two types of metadata that help in eliminating data, min/max statistics and, optionally, BloomFilters. With these two you can definitively determine that a file does not contain your key, but you can't be 100% sure that it does (unless your key happens to be a min/max value). PyArrow currently only really exposes row group statistics and doesn't support BloomFilter reading/writing at all.
Also, if the key is of low enough cardinality, then dictionary encoding might be used to encode the column. If all data in a column is dictionary encoded, then it might be possible through some lower-level APIs (likely not PyArrow) to retrieve the dictionaries and scan them instead of the entire file.
If you are in control of the writing process then sorting data based on key/limiting the number of keys per file would help make these methods even more efficient.
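As a rough illustration of the row-group statistics check, here is a sketch using the parquet-mr Java API rather than PyArrow. The column name "key" and the string (binary) type are assumptions, and as noted above this can only prove absence, never presence:

    // Sketch: rule a file out using the min/max statistics stored in the footer.
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.parquet.column.statistics.Statistics;
    import org.apache.parquet.hadoop.ParquetFileReader;
    import org.apache.parquet.hadoop.metadata.BlockMetaData;
    import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;
    import org.apache.parquet.hadoop.util.HadoopInputFile;
    import org.apache.parquet.io.api.Binary;

    public class KeyRangeCheck {
      /** Returns false only when the statistics prove the key cannot be in the file. */
      static boolean mayContainKey(String file, String key) throws Exception {
        Binary target = Binary.fromString(key);
        try (ParquetFileReader reader = ParquetFileReader.open(
            HadoopInputFile.fromPath(new Path(file), new Configuration()))) {
          List<BlockMetaData> rowGroups = reader.getFooter().getBlocks();
          for (BlockMetaData rowGroup : rowGroups) {
            for (ColumnChunkMetaData column : rowGroup.getColumns()) {
              if (!column.getPath().toDotString().equals("key")) {
                continue;                             // assumed column name
              }
              Statistics<?> stats = column.getStatistics();
              if (stats == null || !stats.hasNonNullValue()) {
                return true;                          // no usable stats: can't rule it out
              }
              Binary min = (Binary) stats.genericGetMin();
              Binary max = (Binary) stats.genericGetMax();
              if (min.compareTo(target) <= 0 && target.compareTo(max) <= 0) {
                return true;                          // key is inside this row group's range
              }
            }
          }
        }
        return false;                                 // every row group excluded the key
      }
    }

You would run this over your file list and only open the files for which it returns true.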
I want to know about the different file formats in Hadoop. By default Hadoop uses the text input format. What are the advantages/disadvantages of using the text input format?
What are the advantages/disadvantages of Avro over the text input format?
Also, please help me understand the use cases for the different file formats (Avro, Sequence, TextInput, RCFile).
I believe there are no advantages of Text as the default other than that its contents are human readable and friendly. You could easily view the contents by issuing hadoop fs -cat.
The disadvantages of the Text format are:
It takes more space on disk, which impacts production job efficiency.
Writing/parsing the text records takes more time.
There is no way to maintain data types in case the text is composed of multiple columns.
The Sequence, Avro, and RCFile formats have very significant advantages over the Text format.
Sequence - The key/value objects are stored directly in binary format through Hadoop's native serialization process, by implementing the Writable interface. The data types of the columns are well maintained, and parsing the records with the relevant data types is also done easily. Obviously it takes less space compared with Text due to the binary format.
Avro - It is a very compact binary storage format for Hadoop key/value pairs, which reads/writes records through Avro serialization/deserialization. It is very similar to the Sequence file format but also provides language interoperability and cell versioning.
You may choose Avro over Sequence only if you need cell versioning or if the stored data will be used by other applications written in languages other than Java. Avro files can be processed by many languages such as C, Ruby, Python, PHP, and Java, whereas Sequence files are specific to Java.
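To illustrate the Avro points above, here is a minimal sketch of writing an Avro container file with the Java generic API; the "LogRecord" schema, field names, and output path are made up for illustration. Because the schema travels inside the file, the same file can be read back from Python, C, PHP, and so on.

    // Small sketch of an Avro container file: the schema is stored with the data,
    // so column types survive and other languages can read the file back.
    import java.io.File;

    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;

    public class AvroWriteExample {
      public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"LogRecord\",\"fields\":["
          + " {\"name\":\"ts\",    \"type\":\"long\"},"
          + " {\"name\":\"user\",  \"type\":\"string\"},"
          + " {\"name\":\"bytes\", \"type\":\"int\"}]}");

        GenericRecord record = new GenericData.Record(schema);
        record.put("ts", System.currentTimeMillis());
        record.put("user", "alice");
        record.put("bytes", 512);

        try (DataFileWriter<GenericRecord> writer =
                 new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
          writer.create(schema, new File("events.avro")); // compact, self-describing binary
          writer.append(record);
        }
      }
    }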
RCFile - The Record Columnar File format is column oriented, and it is a Hive-specific storage format designed to let Hive load data faster and reduce storage space.
Apart from this you may also consider the ORC and the Parquet file formats.
I am trying to figure out the following case in Hadoop.
What is the best file format, Avro or SequenceFile, for storing images in HDFS and processing them afterwards with Python?
SequenceFiles are key-value oriented, so I think Avro files would work better?
I use SequenceFiles to store images in HDFS and it works well. Both Avro and SequenceFile are binary file formats, hence they can store images efficiently. As keys in the SequenceFile I usually use the original image file names.
SequenceFiles are used in many image processing products, such as OpenIMAJ. You can use existing tools for working with images in SequenceFiles, for example the OpenIMAJ SequenceFileTool.
In addition, you can take a look at HipiImageBundle. This is a special format provided by HIPI (Hadoop Image Processing Interface). In my experience, HipiImageBundle has better performance than the SequenceFile, but it can be used only by HIPI.
If you don't have a large number of files (less than 1M), you can try storing them without packaging them into one big file and use CombineFileInputFormat to speed up processing.
I have never used Avro to store images and I don't know of any project that uses it.
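A minimal sketch of the packing step described above (file-name keys, raw image bytes as values); the local input directory and the HDFS output path are placeholders:

    // Sketch: pack image files into one SequenceFile, keyed by the original file name.
    import java.io.File;
    import java.nio.file.Files;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class PackImages {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        SequenceFile.Writer writer = SequenceFile.createWriter(conf,
            SequenceFile.Writer.file(new Path("/data/images.seq")),    // hypothetical output
            SequenceFile.Writer.keyClass(Text.class),
            SequenceFile.Writer.valueClass(BytesWritable.class));
        try {
          for (File image : new File("/local/images").listFiles()) {   // hypothetical input dir
            byte[] bytes = Files.readAllBytes(image.toPath());
            writer.append(new Text(image.getName()), new BytesWritable(bytes));
          }
        } finally {
          IOUtils.closeStream(writer);
        }
      }
    }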
I have a very large number of small files to be stored in HDFS. Based on the file name I want to store them on different data nodes. This way I can make file names starting with certain letters go to specific data nodes. How can I do this in Hadoop?
Not a very good choice. Reasons:
Hadoop is not very good at handling a very large number of small files.
Storing one complete file in a single node is against one of the fundamental principles of HDFS, distributed storage.
I would like to know what benefit will you get with this approach.
In response to your comment:
HDFS doesn't do any kind of sorting like HBase does. When you put a file into HDFS, it gets split into small blocks first and then gets stored (each block on a different node). So there is nothing like sending a whole file to a single node; your file (its blocks) resides on multiple nodes.
What you could do is create a directory hierarchy as per your needs and store files in those directories (in case your intention is to fetch the files directly based on their location). For example (a short FileSystem API sketch follows this listing),
/dirA
/dirA/A.txt
/dirA/B.txt
/dirB
/dirB/P.txt
/dirB/Q.txt
/dirC
/dirC/Y.txt
/dirC/Z.txt
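Here is one way such a hierarchy could be built with the Java FileSystem API, routing each local file by the first letter of its name; the local and HDFS paths (and the exact grouping rule) are assumptions:

    // Sketch: create per-letter directories and copy local files into them.
    import java.io.File;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RouteByFirstLetter {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        for (File local : new File("/local/input").listFiles()) {   // hypothetical local dir
          char first = Character.toUpperCase(local.getName().charAt(0));
          Path dir = new Path("/data/dir" + first);                 // e.g. /data/dirA, /data/dirB
          fs.mkdirs(dir);                                           // no-op if it already exists
          fs.copyFromLocalFile(new Path(local.getAbsolutePath()),
                               new Path(dir, local.getName()));
        }
        fs.close();
      }
    }

Note that this only controls where the files appear in the namespace; HDFS still decides which nodes the blocks land on.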
But if you really want to send the blocks of a particular file to some specific nodes, then you need to implement your own block placement policy, which is not very easy. See this for more details.
I have a use case to upload some terabytes of text files as sequence files to HDFS.
These text files have several layouts ranging from 32 to 62 columns (metadata).
What would be a good way to upload these files along with their metadata:
Creating a key/value class per text file layout and using it to create and upload the sequence files?
Creating a SequenceFile.Metadata header in each file being uploaded as a sequence file individually?
Any inputs are appreciated!
Thanks
I prefer storing metadata with the data and then designing your application to be metadata driven, as opposed to embedding metadata in the design or implementation of your application, which then means updates to the metadata require updates to your app. Of course there are limits to how far you can take a metadata-driven application.
You can embed the metadata with the data, for example by using an encoding scheme like JSON, or you can keep the metadata alongside the data, for example by having records in the SeqFile specifically for describing metadata, perhaps using reserved tags for the keys so as to give metadata its own namespace separate from the namespace used by the keys for the actual data.
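A small sketch of that reserved-key idea; the "__meta__:" prefix, the JSON layout string, and the paths are placeholders rather than any established convention:

    // Sketch: the first record carries the column layout as JSON under a key
    // that real data keys will never use; data records follow.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class WriteWithInlineMetadata {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        SequenceFile.Writer writer = SequenceFile.createWriter(conf,
            SequenceFile.Writer.file(new Path("/data/part-00000.seq")),   // hypothetical path
            SequenceFile.Writer.keyClass(Text.class),
            SequenceFile.Writer.valueClass(Text.class));
        try {
          // Reserved-namespace record describing this file's column layout.
          writer.append(new Text("__meta__:layout"),
              new Text("{\"columns\":[\"col1\",\"col2\",\"col3\"]}"));
          // ...followed by the actual data records.
          writer.append(new Text("row-0001"), new Text("val1\tval2\tval3"));
        } finally {
          IOUtils.closeStream(writer);
        }
      }
    }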
As for the recommendation of whether this should be packaged into separate Hadoop files, bear in mind that Hadoop can be instructed to split a file into splits (input for the map phase) via configuration settings. Thus even a single large SeqFile can be processed in parallel by several map tasks. The advantage of having a single HDFS file is that it more closely resembles the unit of containment of your original data.
As for the recommendation about key types (i.e. whether to use Text or binary), consider that the key will be compared against other keys. The more compact the key, the faster the comparison, so if you can store a dense version of the key, that would be preferable. Likewise, if you can structure the key layout so that the first bytes are typically NOT the same, it will also help performance. So, for instance, serializing a Java class as the key would not be recommended, because the text stream begins with the package name of your class, which is likely to be the same as every other class, and thus key, in the file.
If you want data and its metadata bundled together, then the Avro format is the appropriate one. It also allows schema evolution.
The simplest thing to do is to make the keys and values of the SequenceFiles Text. Pick a meaningful field from your data to be the key, and make the data itself the value as Text. SequenceFiles are designed for storing key/value pairs; if that's not what your data is, then don't use a SequenceFile. You could just upload unprocessed text files and input those to Hadoop.
For best performance, do not make each file terabytes in size. The Map stage of Hadoop runs one task per input split (for small or unsplittable files, that means one task per file). You want to have more splits than you have CPU cores in your Hadoop cluster; otherwise you will have one CPU doing 1 TB of work and a lot of idle CPUs. A good file size is probably 64-128 MB, but for best results you should measure this yourself.