I currently have a process which reads files from AWS S3 and concatenates them using EMR.
The input files have the following format: 1 header row and 1 data row.
Fields are comma-separated and wrapped in double-quotes.
Example:
"header-field1","header-field2","header-field3",...
"data-field1","data-field2","data-field3",...
The files vary in size between 90 and 200 bytes.
The output file has the following format:
"header-field1","header-field2","header-field3",...
"file1-data-field1","file1-data-field2","file1-data-field3",...
"file2-data-field1","file2-data-field2","file2-data-field3",...
"file3-data-field1","file3-data-field2","file3-data-field3",...
....
My current approach uses a default mapper and a single reducer to concatenate all the data rows and prepend 1 header row at the top of the final output file.
Because I want a single header row in the final output, I was forced to use only one reducer in my EMR job. This, I feel, drastically increases the run-time.
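Roughly, the current setup looks like the following (a simplified sketch with illustrative class names; the offset-0 header check stands in for however the header is actually detected):

// Simplified sketch of the current approach. The default identity mapper
// forwards every (byte offset, line) pair, and the single reducer writes the
// header once followed by every data row.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ConcatJob {

    public static class ConcatReducer
            extends Reducer<LongWritable, Text, NullWritable, Text> {

        private boolean headerWritten = false;

        @Override
        protected void reduce(LongWritable offset, Iterable<Text> lines, Context context)
                throws IOException, InterruptedException {
            // Offset 0 is the header line of each input file; keys arrive sorted,
            // so this group is seen first and the header is emitted exactly once.
            if (offset.get() == 0L) {
                if (!headerWritten) {
                    context.write(NullWritable.get(), lines.iterator().next());
                    headerWritten = true;
                }
                return;
            }
            for (Text line : lines) {
                context.write(NullWritable.get(), line);    // data rows
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "concat-small-files");
        job.setJarByClass(ConcatJob.class);
        // Default mapper (identity); ALL records funnel into one reducer,
        // which is the bottleneck described above.
        job.setReducerClass(ConcatReducer.class);
        job.setNumReduceTasks(1);
        job.setMapOutputKeyClass(LongWritable.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}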
Early tests ran great with tens of files.
However, I am trying to scale this application to run for thousands of files with the final goal of concatenating 1 million.
My current process for 1000 files is still running after 30+ minutes, which is too long.
Do you have any suggestions on where I can improve my application to dramatically improve overall performance?
Thank you.
Related
We plan to append data to our files in NEW_BLOCK mode. This gives us more flexibility with respect to DataNode status.
Now, after running the process for days, we find that our 2 MB file has too many blocks.
Is there a way to merge the blocks of a file, say, to bring the 100 blocks for a file down to 4?
Writing Parquet data can be done with something like the following. But what if I want to write to more than one file, and in particular to output multiple S3 files so that reading a single column does not read all of the S3 data? How can this be done?
// file (an org.apache.hadoop.fs.Path), schema (an Avro Schema) and i are defined elsewhere.
AvroParquetWriter<GenericRecord> writer =
        new AvroParquetWriter<GenericRecord>(file, schema);
GenericData.Record record = new GenericRecordBuilder(schema)
        .set("name", "myname")
        .set("favorite_number", i)
        .set("favorite_color", "mystring")
        .build();
writer.write(record);
writer.close();
For example, what if I want to partition by a column value, so that all the data with a favorite_color of red goes in one file and the data with blue goes in another, to minimize the cost of certain queries? There should be something similar in a Hadoop context. All I can find are things that mention Spark, using something like
df.write.parquet("hdfs:///my_file", partitionBy=["created_year", "created_month"])
But I can find no equivalent to partitionBy in plain Java with Hadoop.
In a typical Map-Reduce application, the number of output files will be the same as the number of reducers in your job. So if you want multiple output files, set the number of reduce tasks accordingly:
job.setNumReduceTasks(N);
or alternatively via the configuration property:
-Dmapreduce.job.reduces=N
I don't think it is possible to have one column per file with the Parquet format. The internal structure of Parquet files is initially split by row groups, and only these row groups are then split by columns.
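Coming back to the favorite_color example from the question: under this reducer-count approach, one hedged sketch is to have the mapper emit the partitioning column as the key and route each expected value to its own reducer with a custom Partitioner (class names and the color list below are illustrative):

// Rough sketch: one output file per favorite_color by giving every color its
// own reducer. The mapper (not shown) is assumed to emit the favorite_color
// as the key and the full record as the value.
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class ColorPartitioner extends Partitioner<Text, Text> {

    // Assumed, fixed set of expected values; anything else falls back to hashing.
    private static final List<String> COLORS = Arrays.asList("red", "blue", "green");

    @Override
    public int getPartition(Text color, Text record, int numPartitions) {
        int idx = COLORS.indexOf(color.toString());
        if (idx >= 0) {
            return idx % numPartitions;             // known color -> dedicated reducer
        }
        return (color.hashCode() & Integer.MAX_VALUE) % numPartitions;  // fallback
    }
}

// In the driver:
//   job.setPartitionerClass(ColorPartitioner.class);
//   job.setNumReduceTasks(3);   // or -Dmapreduce.job.reduces=3

Each reducer then writes its own part-r-NNNNN file. To keep the output in Parquet rather than plain text, the job would also need a Parquet output format such as parquet-mr's AvroParquetOutputFormat.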
I'm running a Pig script that does a series of joins and writes using AvroStorage().
All is running well, and I am getting the data that I want... but it is being written to 845 avro files (~30kb each). This does not seem right at all... but I cannot seem to find any settings that I may have changed to go from my previous output of 1 large avro to 845 small avros (except adding another data source).
Would this change anything? And how can I get it back to one or two files?
Thanks!
One possibility is to change your block size. If you want to get back to fewer files, you can also try Parquet: transform your .avro files through a Pig script and store them as .parquet files; this will reduce your 845 files to far fewer.
But it isn't necessary to get back to fewer files, except for a performance advantage.
The number of files written by an MR job is determined by the number of reducers that ran. You can use PARALLEL in your Pig script to control the number of reducers.
If you are sure that the final data is small enough (comparable to your block size), you can add PARALLEL 1 to your JOIN statement to make sure that the JOIN is translated into 1 reducer and thus writes its output to only 1 file.
I solved that using SET pig.maxCombinedSplitSize 134217728;
With SET default_parallel 10; it may still output many small files, depending on the Pig job.
Is it possible to have Pig process several small files with one mapper (assuming doing so will improve the speed of the job)? We have an issue where there are thousands of small files in HDFS and Pig creates hundreds of mappers. Is there a simple (full or partial) solution that Pig provides to address this issue?
You can make use of these properties to combine multiple small files into a single input split, so that they are processed by a single map:
pig.maxCombinedSplitSize – Specifies the size, in bytes, of data to be processed by a single map. Smaller files are combined until this size is reached.
pig.splitCombination – Turns combining split files on or off (set to "true" by default).
This feature works with PigStorage without having to write any custom loader. More on this can be found here.
HTH
A common approach in Hadoop with a large number of small files is to aggregate them into large SequenceFiles or Avro files and then use the respective storage functions to read them.
For Pig and Avro take a look at AvroStorage
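For the SequenceFile route, a rough sketch of the packing step in plain Java (the paths, and the filename-as-key / raw-bytes-as-value choice, are assumptions):

// Rough sketch: pack many small HDFS files into one SequenceFile, using the
// filename as the key and the raw bytes as the value.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SmallFilePacker {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path inputDir = new Path(args[0]);   // directory full of small files
        Path packed = new Path(args[1]);     // single output SequenceFile

        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(packed),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class))) {
            for (FileStatus status : fs.listStatus(inputDir)) {
                if (!status.isFile()) {
                    continue;
                }
                byte[] contents = new byte[(int) status.getLen()];
                try (FSDataInputStream in = fs.open(status.getPath())) {
                    in.readFully(contents);
                }
                writer.append(new Text(status.getPath().getName()),
                              new BytesWritable(contents));
            }
        }
    }
}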
Imagine you have a big file stored in HDFS which contains structured data. Now the goal is to process only a portion of the data in the file, such as all the lines in the file where the second column's value is between so-and-so. Is it possible to launch the MR job such that HDFS only streams the relevant portion of the file, rather than streaming everything to the mappers?
The reason is that I want to speed up the job by only working on the portion that I need. One approach would probably be to run an MR job to create a new file, but I am wondering if that can be avoided.
Please note that the goal is to keep the data in HDFS and I do not want to read and write from database.
HDFS stores files as a bunch of bytes in blocks, and there is no indexing, and therefore no way to only read in a portion of your file (at least at the time of this writing). Furthermore, any given mapper may get the first block of the file or the 400th, and you don't get control over that.
That said, the whole point of MapReduce is to distribute the load over many machines. In our cluster, we run up to 28 mappers at a time (7 per node on 4 nodes), so if my input file is 1TB, each map slot may only end up reading 3% of the total file, or about 30GB. You just perform the filter that you want in the mapper, and only process the rows you are interested in.
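A rough sketch of such a mapper-side filter (the comma delimiter, the column index, and the numeric bounds are assumptions about the data):

// Rough sketch: filter rows inside the mapper so downstream stages only see
// the rows whose second column falls in a given range.
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class RangeFilterMapper
        extends Mapper<LongWritable, Text, NullWritable, Text> {

    private static final double LOW = 100.0;   // assumed lower bound
    private static final double HIGH = 200.0;  // assumed upper bound

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String[] fields = line.toString().split(",");   // assumed delimiter
        if (fields.length < 2) {
            return;                                     // malformed row, skip
        }
        double value;
        try {
            value = Double.parseDouble(fields[1]);      // the "second column"
        } catch (NumberFormatException e) {
            return;                                     // non-numeric, skip
        }
        if (value >= LOW && value <= HIGH) {
            context.write(NullWritable.get(), line);    // keep only matching rows
        }
    }
}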
If you really need filtered access, you might want to look at storing your data in HBase. It can act as a native source for MapReduce jobs, provides filtered reads, and stores its data on HDFS, so you are still in the distributed world.
One answer is to look at the way that Hive solves this problem. The data is in "tables", which are really just metadata about files on disk. Hive allows you to set columns on which a table is partitioned. This creates a separate folder for each partition, so if you were partitioning a file by date you would have:
/mytable/2011-12-01
/mytable/2011-12-02
Inside the date directory would be your actual files. So if you then ran a query like:
SELECT * FROM mytable WHERE dt = '2011-12-01'
Only files in /mytable/2011-12-01 would be fed into the job.
The bottom line is that if you want functionality like this, you either want to move to a higher-level language (Hive/Pig) or you need to roll your own solution.
A big part of the processing cost is parsing the data to produce key-value pairs for the Mapper. We (usually) create one Java object per value, plus some container. This is costly both in terms of CPU and garbage-collector pressure.
I would suggest a solution "in the middle". You can write an input format which reads the input stream and skips non-relevant data at an early stage (for example, by looking at the first few bytes of each line); a rough sketch follows below. As a result you will still read all the data, but actually parse and pass to the Mapper only a portion of it.
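Here is that sketch, wrapping Hadoop's LineRecordReader; the single-byte test is only an illustration of "looking at the first few bytes", and the class names are made up:

// Rough sketch: an input format that discards non-relevant lines before they
// are turned into records for the Mapper.
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class SkippingTextInputFormat extends FileInputFormat<LongWritable, Text> {

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        return new SkippingRecordReader();
    }

    // Wraps LineRecordReader and silently drops lines that fail a cheap test.
    public static class SkippingRecordReader extends RecordReader<LongWritable, Text> {
        private final LineRecordReader delegate = new LineRecordReader();

        @Override
        public void initialize(InputSplit split, TaskAttemptContext context)
                throws IOException {
            delegate.initialize(split, context);
        }

        @Override
        public boolean nextKeyValue() throws IOException {
            while (delegate.nextKeyValue()) {
                Text line = delegate.getCurrentValue();
                // Cheap early check on the raw bytes: here, keep only lines
                // starting with '2' (illustrative -- e.g. dates like "2011-...").
                if (line.getLength() > 0 && line.getBytes()[0] == '2') {
                    return true;
                }
            }
            return false;
        }

        @Override public LongWritable getCurrentKey() { return delegate.getCurrentKey(); }
        @Override public Text getCurrentValue() { return delegate.getCurrentValue(); }
        @Override public float getProgress() throws IOException { return delegate.getProgress(); }
        @Override public void close() throws IOException { delegate.close(); }
    }
}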
Another approach I would consider is to use the RCFile format (or another columnar format), and take care that the relevant and non-relevant data sit in different columns.
If the files that you want to process have some unique attribute in their filename (like an extension or a partial filename match), you can also use the setInputPathFilter method of FileInputFormat to ignore all but the ones you want for your MR job. Hadoop by default ignores all ".xxx" and "_xxx" files/dirs, but you can extend this with setInputPathFilter; a rough sketch follows below.
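Here is that sketch (the .csv extension and the class name are illustrative):

// Rough sketch: only feed files matching a filename pattern to the job.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

public class CsvOnlyFilter implements PathFilter {
    @Override
    public boolean accept(Path path) {
        return path.getName().endsWith(".csv");   // keep only the files we care about
    }
}

// Then, in the driver:
//   org.apache.hadoop.mapreduce.lib.input.FileInputFormat.setInputPathFilter(job, CsvOnlyFilter.class);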
As others have noted above, you will likely get sub-optimal performance out of your cluster doing something like this, since it breaks the "one block per mapper" paradigm, but sometimes that is acceptable. It can sometimes take more effort to "do it right", especially if you're dealing with a small amount of data and the time to re-architect and/or re-dump into HBase would eclipse the extra time required to run your job sub-optimally.