Save and process a huge number of small files with Spark - Hadoop

I'm new to big data! I have some questions about how to process and how to save a large number of small files (PDF and PPT/PPTX) with Spark, on EMR clusters.
My goal is to save the data (PDF and PPTX) into HDFS (or some other datastore available to the cluster), then extract the content from these files with Spark and save it in Elasticsearch or a relational database.
I have read about the small-files problem when saving data in HDFS. What is the best way to save a large number of PDF and PPTX files (maximum size 100-120 MB)? I have read about SequenceFiles and HAR (Hadoop archives), but I don't understand exactly how either of them works and I can't figure out which is best.
What is the best way to process these files? I understand that some solutions could be FileInputFormat or CombineFileInputFormat, but again I don't know exactly how they work. I know that I can't run every small file as a separate task, because that would turn the cluster into a bottleneck.
Thanks!

If you use an object store (like S3) instead of HDFS, there is no need to apply any changes or conversions to your files: you can keep each one as a single object or blob (this also means they are easily readable with standard tools and needn't be unpacked or reformatted with custom classes or code).
You can then read the files using Python tools like boto (for S3) or, if you are working with Spark, using the wholeTextFiles or binaryFiles methods, and then wrap the contents in a BytesIO (Python) / ByteArrayInputStream (Java) to read them with standard libraries (see the sketch below).
When processing the files, keep in mind the distinction between items and partitions. If you have 10,000 files you can create 100 partitions containing 100 files each. Each file will still need to be processed one at a time, since the header information is relevant and likely different for each file.
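As a rough illustration of this approach, here is a minimal PySpark sketch that reads PDFs with binaryFiles, groups them into partitions, and parses each one via BytesIO. The bucket path and the use of PyPDF2 on the executors are assumptions for illustration, not part of the answer above.
from io import BytesIO
from pyspark import SparkContext
from PyPDF2 import PdfReader  # assumed to be installed on every executor

sc = SparkContext(appName="pdf-extract-sketch")

def extract_text(path_and_bytes):
    path, raw = path_and_bytes
    # Wrap the raw bytes so a standard library can read them like a file.
    reader = PdfReader(BytesIO(raw))
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    return path, text

# binaryFiles yields (path, bytes) pairs; repartition controls how many
# files each task handles (roughly 100 files per partition for 10,000 files).
docs = sc.binaryFiles("s3a://my-bucket/docs/*.pdf").repartition(100)
texts = docs.map(extract_text)
The resulting (path, text) pairs can then be written to Elasticsearch or a relational database with whichever connector you already use.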

Meanwhile, I found some solutions to the small-files problem in HDFS. I can use the following approaches:
HDFS Federation helps distribute the load across NameNodes: https://hortonworks.com/blog/an-introduction-to-hdfs-federation/
HBase could also be a good alternative if your file sizes are not too large (see the sketch after this list).
There are practical limits to the size of values (e.g., storing 10-50MB objects in HBase would probably be too much to ask); search the mailing list for conversations on this topic. All rows in HBase conform to the Data Model, and that includes versioning. Take that into consideration when making your design, as well as block size for the ColumnFamily.
https://hbase.apache.org/book.html
Apache Ozone is object storage like S3, but on-premises. At the time of writing, as far as I know, Ozone is not production-ready. https://hadoop.apache.org/ozone/
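As a rough sketch of the HBase option above, using the happybase client; the table name, column family, and host are hypothetical, and the table is assumed to already exist.
import happybase

# Assumes a table created beforehand in the HBase shell, for example:
#   create 'documents', 'f'
connection = happybase.Connection("hbase-master-host")
table = connection.table("documents")

with open("report.pdf", "rb") as fh:
    payload = fh.read()

# The row key encodes source and file name; the raw bytes go into one cell.
# Keep the practical value-size limits quoted above in mind (tens of MB at most).
table.put(b"source-foo/report.pdf", {b"f:content": payload})

row = table.row(b"source-foo/report.pdf")
pdf_bytes = row[b"f:content"]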

Related

Sync files on HDFS that have the same size but differ in content

I am trying to sync files from one Hadoop cluster to another using DistCp and Airbnb's ReAir utility, but neither of them works as expected.
If the file size is the same on source and destination, both fail to update it even if the file contents have changed (the checksums also differ), unless the overwrite option is used.
I need to keep about 30 TB of data in sync, so reloading the complete dataset every time is not feasible.
Could anyone please suggest how I can bring the two datasets in sync when the file sizes are the same (the count in the source has changed) but the checksums differ?
The way DistCp handles syncing between files that are the same size but have different contents is by comparing their so-called FileChecksum. The FileChecksum was first introduced in HADOOP-3981, mostly for the purpose of being used in DistCp. Unfortunately, it has the known shortcoming of being incompatible between different storage implementations, and even between HDFS instances that have different internal block/chunk settings. Specifically, the FileChecksum bakes in the structure of having, for example, 512 bytes per chunk and 128 MB per block.
Since GCS doesn't have the same notions of "chunks" or "blocks", there's no way for it to have any similar definition of a FileChecksum. The same is also true of all other object stores commonly used with Hadoop; the DistCp documentation appendix discusses this fact under "DistCp and Object Stores".
That said, there's a neat trick that can be done to define a nice standardized representation of a composite CRC for HDFS files that is mostly in-place compatible with existing HDFS deployments; I've filed HDFS-13056 with a proof of concept to try to get this added upstream, after which it should be possible to make it work out-of-the-box against GCS, since GCS also supports file-level CRC32C.

Gathering heterogeneous data with hadoop

We have a system, including some Oracle and Microsoft SQL DBMSs, that gets data from various sources and in different formats, and stores and processes it. "Different formats" means files: DBF, XLS and others, including binary formats (images), which are imported into the DBMSs with different tools, plus direct access to the databases. I want to isolate all the incoming data, store it "forever", and be able to retrieve it later by source and creation time. After some study I want to try the Hadoop ecosystem, but I'm not quite sure it's an adequate solution for this goal. And which parts of the ecosystem should I use? HDFS alone, Hive, maybe something else? Could you give me some advice?
I assume you want to store the files that contain the data -- effectively a searchable file archive.
The files themselves can just be stored in HDFS, or you may find a system like Amazon's S3 cheaper and more flexible. As you store the files, you can record the data about the data -- location, source, and creation time -- by appending to another file; a simple tab-separated file, or any of several other formats supported by Hadoop, makes this easy.
You can manage and query the file with Hive or other SQL-on-Hadoop tools. In effect, you're creating a simple file system with special attributes, so the trick would be to make sure that each time you write a file, you also write the metadata. You may have to handle cases like write failures, what happens when you delete, rename, or move files (I know, you say "never").
Depending on your needs, your solution might be simpler still: you may find that storing the data in subdirectories within HDFS (or AWS S3) is enough. For example, if you wanted to store DBF files from source "foo" and XLS files from "bar" created on December 1, 2015, you could simply create a directory structure like
/2015/12/01/foo/dbf/myfile.dbf
/2015/12/01/bar/xls/myexcel.xls
This solution has the advantage of being self-maintaining -- the file path stores the metadata which makes it very portable and simple, requiring nothing more than a shell script to implement.
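A minimal sketch of this path-as-metadata idea against S3 with boto3; the bucket name and key layout are placeholders for illustration.
from datetime import date
import boto3

def archive_key(source: str, fmt: str, filename: str, created: date) -> str:
    # The path itself carries the metadata: creation date, source, and format.
    return f"{created:%Y/%m/%d}/{source}/{fmt}/{filename}"

s3 = boto3.client("s3")
key = archive_key("foo", "dbf", "myfile.dbf", date(2015, 12, 1))
s3.upload_file("myfile.dbf", "my-archive-bucket", key)  # -> 2015/12/01/foo/dbf/myfile.dbf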
I don't think there's any reason to make the solution more complicated than necessary. Hadoop or S3 are both fine for long-term, high-durability storage and for querying. My company has found that storing the information about the file in Hadoop (which we use for many other purposes) and storing the files themselves on AWS S3 is far simpler, more easily secured and much cheaper.
There are various things that you may want to do, each with their own solution. If more than 1 use case is relevant for you, you probably want to implement multiple solutions in parallel.
1. Store files for use
If you want to store files in a way that they can be picked up efficiently (and in a distributed fashion), the solution is simple: put the files on HDFS.
2. Store the information for use
If you want to use the information, rather than just storing the files, you should be interested in storing the information in a way that it can be picked up efficiently. The general solution here would be: parse the files in a lossless way and store their information in a database.
You may find that storing the information in (partitioned) ORC files works nicely for this. You can do this with Hive, Pig, or even UDFs (e.g. Python) in Pig; see the sketch after this list.
3. Keep the files for the future
In this case you would mostly care about preserving the files, and not so much about ease of access. Here the recommended solution is: Store compressed files with proper backups
Note that the replication HDFS performs is there to deal more efficiently with data (and with hardware issues); it is not a backup. Just having your data on HDFS does NOT mean it is backed up.
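As an illustration of option 2, here is a minimal PySpark sketch that writes already-parsed information as partitioned ORC. The answer mentions Hive or Pig; this is just an equivalent Spark variant, and the column names and output path are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orc-archive-sketch").getOrCreate()

# Assume the source files have already been parsed losslessly into rows.
rows = [
    ("foo", "2015-12-01", "parsed content of myfile.dbf"),
    ("bar", "2015-12-01", "parsed content of myexcel.xls"),
]
df = spark.createDataFrame(rows, ["source", "created", "content"])

# Partitioning by source and date keeps later lookups by those keys cheap.
df.write.mode("append").partitionBy("source", "created").orc("/data/archive_orc")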

Copy VSAM dataset to flat file for Hadoop [duplicate]

I have files on a mainframe. I want this data to be pushed to Hadoop (HDFS)/Hive.
I can use Sqoop for the mainframe DB2 database and import it into Hive, but what about files (COBOL, VSAM, etc.)?
Is there any custom Flume source that I can write, or some alternative tool to use here?
COBOL is a programming language, not a file format. If what you need is to export files produced by COBOL programs, you can use the same technique as if those files were produced by C, C++, Java, Perl, PL/I, Rexx, etc.
In general, you will have three different data sources: flat files, VSAM files, and a DBMS such as DB2 or IMS.
DBMSs have export utilities to copy the data into flat files. Keep in mind that data in DB2 will likely be normalized, and thus you will likely need the contents of related tables in order to make sense of the data.
VSAM files can be exported to flat files via the IDCAMS utility.
I would strongly suggest you get the files into a text format before transferring them to another box with a different code page. Trying to deal with mixed text (which must have its code page translated) and binary (which must not have its code page translated but which likely must be converted from big endian to little endian) is harder than doing the conversion up front.
The conversion can likely be done via the SORT utility on the mainframe. Mainframe SORT utilities tend to have extensive data manipulation functions. There are other mechanisms you could use (other utilities, custom code written in the language of your choice, purchased packages) but this is what we tend to do in these circumstances.
Once you have your flat files converted such that all data is text, you can transfer them to your Hadoop boxes via FTP or SFTP or FTPS.
This isn't an exhaustive coverage of the topic, but it will get you started.
Syncsort has been processing mainframe data for 40 years (approximately 50% of mainframes already run the software). They have a specific product called DMX-H that can source mainframe data, handle the data type conversions, import the COBOL copybooks, and load it directly into HDFS.
Syncsort also recently contributed a new feature enhancement to the Apache Hadoop core.
I suggest you contact them at www.syncsort.com
They were showing this in a demo at a recent Cloudera roadshow.
Update for 2018:
There are a number of commercial products that help move data from the mainframe to distributed platforms. Here is a list of the ones I have run into, for those who are interested. All of them take data on Z as described in the question, apply some transformation, and enable movement of the data to other platforms. Not an exact match, but the industry has changed and the goal of moving data to other platforms for analysis is growing. Data Virtualization Manager provides the most robust tooling for transforming the data, from what I've seen.
SyncSort IronStream
IBM Common Data Provider
Correlog
IBM Data Virtualization Manager
Why not: hadoop fs -put <what> <where>?
Transmission of COBOL layout files can be done through the options discussed above. However, actually mapping them to a Hive table is a complex task, as COBOL layouts have complex features such as the DEPENDING ON clause, variable-length records, etc.
I have tried to create a custom SerDe to achieve this, although it is still in its initial stages. But here is the link, which might give you some idea of how to deserialize according to your requirements.
https://github.com/rbheemana/Cobol-to-Hive
Not pull, but push: use the Co:Z Launcher from Dovetailed Technologies.
For example (JCL excerpt):
//FORWARD EXEC PGM=COZLNCH
//STDIN DD *
hadoop fs -put <(fromfile /u/me/data.csv) /data/data.csv
# Create a catalog table
hive -f <(fromfile /u/me/data.hcatalog)
/*
where /u/me/data.csv (the mainframe-based data that you want in Hadoop) and /u/me/data.hcatalog (corresponding HCatalog file) are z/OS UNIX file paths.
For a more detailed example, where the data happens to be log records, see Extracting logs to Hadoop.
Cobrix might be able to solve it for you. It is an open-source COBOL data source for Spark and can parse the files you mentioned.
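To give a feel for it, here is a minimal PySpark sketch of reading a mainframe file with Cobrix; it assumes the spark-cobol package is on the classpath and that you have the matching copybook, and the paths are placeholders.
from pyspark.sql import SparkSession

# Assumes Cobrix was added to the job, e.g.
#   spark-submit --packages za.co.absa.cobrix:spark-cobol_2.12:<version> ...
spark = SparkSession.builder.appName("cobrix-sketch").getOrCreate()

df = (spark.read
      .format("cobol")                            # Cobrix data source
      .option("copybook", "/path/to/record.cpy")  # COBOL copybook describing the record layout
      .load("/path/to/transferred/ebcdic/data"))

df.show()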

Hadoop Spark (MapR) - how does addFile work?

I am trying to understand how Hadoop works. Say I have 10 directories on HDFS, containing hundreds of files which I want to process with Spark.
From the book Fast Data Processing with Spark:
This requires the file to be available on all the nodes in the cluster, which isn't much of a
problem for a local mode. When in a distributed mode, you will want to use Spark's
addFile functionality to copy the file to all the machines in your cluster.
I am not able to understand this: will Spark create a copy of the file on each node?
What I want is for it to read the files that are present in a given directory, if that directory is present on that node.
Sorry, I am a bit confused about how to handle the above scenario in Spark.
regards
The section you're referring to introduces SparkContext::addFile in a confusing context. This is a section titled "Loading data into an RDD", but it immediately diverges from that goal and introduces SparkContext::addFile more generally as a way to get data into Spark. Over the next few pages it introduces some actual ways to get data "into an RDD", such as SparkContext::parallelize and SparkContext::textFile. These resolve your concerns about splitting up the data among nodes rather than copying the whole of the data to all nodes.
A real production use case for SparkContext::addFile is to make a configuration file available to some library that can only be configured from a file on disk. For example, when using MaxMind's GeoIP Legacy API, you might configure the lookup object for use in a distributed map like this (as a field on some class):
@transient lazy val geoIp = new LookupService("GeoIP.dat", LookupService.GEOIP_MEMORY_CACHE | LookupService.GEOIP_CHECK_CACHE)
Outside your map function, you'd need to make GeoIP.dat available like this:
sc.addFile("/path/to/GeoIP.dat")
Spark will then make it available in the current working directory on all of the nodes.
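For completeness, here is a minimal PySpark sketch of the same pattern (the example above is Scala/Java); the file name and the trivial map function are placeholders.
from pyspark import SparkContext, SparkFiles

sc = SparkContext(appName="addfile-sketch")

# Ship a side file (e.g. a lookup database) to every executor.
sc.addFile("/path/to/GeoIP.dat")

def lookup(ip):
    # SparkFiles.get resolves the local path of the shipped file on whichever
    # node this task happens to run; open it with any ordinary library here.
    local_path = SparkFiles.get("GeoIP.dat")
    return ip, local_path

result = sc.parallelize(["1.2.3.4", "5.6.7.8"]).map(lookup).collect()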
So, in contrast with Daniel Darabos' answer, there are some reasons outside of experimentation to use SparkContext::addFile. Also, I can't find any info in the documentation that would lead one to believe that the function is not production-ready. However, I would agree that it's not what you want to use for loading the data you are trying to process unless it's for experimentation in the interactive Spark REPL, since it doesn't create an RDD.
addFile is only for experimentation. It is not meant for production use. In production you just open a file specified by a URI understood by Hadoop. For example:
sc.textFile("s3n://bucket/file")

