I am trying to understand how exactly ALTER TABLE CONCATENATE in Hive works.
I saw this link: How does Hive 'alter table <table name> concatenate' work? but all I got from it is that for ORC files, the merge happens at the stripe level.
I am looking for a detailed explanation of how CONCATENATE works. For example, I initially had 500 small ORC files in HDFS. I ran the Hive ALTER TABLE CONCATENATE and the files merged into 27 bigger files. Subsequent runs of CONCATENATE reduced the number of files to 16, and finally I ended up with two large files (using Hive 0.12). So I wanted to understand:
How exactly does CONCATENATE work? Does it look at the existing number of files as well as their size? How does it determine the number of output ORC files after concatenation?
Are there any known issues with using CONCATENATE? We are planning to run the concatenation once a day in the maintenance window.
Is using CTAS an alternative to CONCATENATE, and which is better? Note that my requirement is to reduce the number of ORC files (ingested through NiFi) without compromising read performance.
Any help is appreciated and thanks in advance
The concatenated file size can be controlled with the following two settings:
set mapreduce.input.fileinputformat.split.minsize=268435456;
set hive.exec.orc.default.block.size=268435456;
These values should be set based on your HDFS/MapR-FS block size.
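For example, a minimal sketch of applying these settings before concatenating; the table and partition names here are made up:

set mapreduce.input.fileinputformat.split.minsize=268435456;
set hive.exec.orc.default.block.size=268435456;
-- concatenate the whole table, or one partition at a time
ALTER TABLE my_orc_table CONCATENATE;
ALTER TABLE my_orc_table PARTITION (dt='2021-06-01') CONCATENATE;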
As commented by @leftjoin, it is indeed the case that you can get different output files for the same underlying data.
This is discussed more in the linked HCC thread but the key point is:
Concatenation depends on which files are chosen first.
Note that having files of different sizes should not be a problem in normal situations.
If you want to streamline your process, then depending on how big your data is, you may also want to batch it a bit before writing to HDFS, for instance by setting the batch size in NiFi.
My question is mostly theoretical, but I have some tables that already follow a partition scheme. Let's say my table is partitioned by day, but after working with the data for some time we want to switch to month partitions instead. I could easily recreate the table with the new partition definition and reinsert the data, but is this the best approach? It sounds slow when the data is huge. I have seen there are multiple ALTER commands in Hive for partitions; is there one that can help me achieve what I need?
Maybe there is another option: concatenating the files and then recreating the table with the new partitioning?
ALTER TABLE table_name [PARTITION (partition_key = 'partition_value' [, ...])] CONCATENATE;
If there are any relevant references they are appreciated as well.
If the files are in daily folders, you cannot mount many daily folders into a single month partition; for each month, the files need to be moved into a month folder. You cannot do it as a metadata-only operation.
If you are good at shell scripting, you can write a loop over hadoop fs -ls <table location> | sort: in the loop, save the path into a variable, check whether the yyyy-MM substring differs from the previous one, and if so create the yyyy-MM folder. For each row in the loop, copy everything into the month location (hadoop fs -cp daily_location/* month_location/); all of this can be done in a single loop.
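A rough sketch of such a loop, assuming daily folders named like dt=yyyy-MM-dd under the table location (the location and the substring extraction are assumptions, adjust them to your layout):

#!/bin/bash
# Sketch only: copy daily folders (e.g. .../dt=2017-01-15) into monthly folders (.../month=2017-01)
TABLE_LOCATION=/warehouse/mydb.db/mytable    # hypothetical table location
prev_month=""
hadoop fs -ls "$TABLE_LOCATION" | awk '{print $NF}' | grep "^$TABLE_LOCATION/" | sort | while read daily_path; do
  month=$(basename "$daily_path" | cut -d= -f2 | cut -c1-7)   # extract yyyy-MM from dt=yyyy-MM-dd
  month_location="$TABLE_LOCATION/month=$month"
  if [ "$month" != "$prev_month" ]; then
    hadoop fs -mkdir -p "$month_location"
    prev_month="$month"
  fi
  hadoop fs -cp "$daily_path/*" "$month_location/"
done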
If you are on S3 and using AWS CLI commands, creating the folders is not necessary, just copy.
If there are too many small files, you may want to concatenate them in the monthly folders. If the format is ORC, you can execute ALTER TABLE PARTITION CONCATENATE. If it is not ORC, it is better to use Hive INSERT OVERWRITE; it will do all of that for you: you can configure the merge task and your files will end up at an optimal size. Additionally, you can improve compression efficiency and make bloom filters and internal indexes (if it is ORC/Parquet) actually useful if you add distribute by partition_col sort by <keys used in filters/joins>; this can greatly reduce table size and improve query performance.
So, it is better to use Hive for this task, because it gives you the opportunity to improve data storage: change the storage format, concatenate files, and sort to reduce the compressed size and make indexes and bloom filters really useful.
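For example, a hedged sketch of the INSERT OVERWRITE approach with dynamic partitions; the table and column names are invented for illustration, and events_monthly is assumed to be created as PARTITIONED BY (month STRING):

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
-- read the daily-partitioned table and rewrite it into monthly partitions
INSERT OVERWRITE TABLE events_monthly PARTITION (month)
SELECT user_id, event_type, event_ts,
       substr(dt, 1, 7) AS month    -- yyyy-MM derived from the daily partition key dt
FROM events_daily
DISTRIBUTE BY substr(dt, 1, 7)
SORT BY user_id;                    -- keys used in filters/joins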
I will be having multiple small text files, around 10KB each, and I'm confused about where to store those files: in HBase or in HDFS. Which will be the more optimal storage?
Because to store them in HBase I need to parse the files first and then save them against some row key.
In HDFS I can directly create a path and save the files at that location.
But everything I have read so far says you should not have many small files; instead, you should create fewer, bigger files.
However, I cannot merge those files, so I can't create big files out of the small ones.
Kindly suggest.
A large number of small files does not fit very well with Hadoop, since each file occupies an HDFS block and each block requires one mapper to process it by default.
There are several options/strategies to minimize the impact of small files; all of them require processing the small files at least once and "packaging" them in a better format. If you are planning to read these files several times, pre-processing the small files could make sense, but if you will use them just once, then it doesn't matter.
To process small files my suggestion is to use CombineTextInputFormat (here is an example): https://github.com/lalosam/HadoopInExamples/blob/master/src/main/java/rojosam/hadoop/CombinedInputWordCount/DriverCIPWC.java
CombineTextInputFormat uses one mapper to process several files, but it may require transferring files to the DataNode where the map is running in order to put them together, and it can perform badly with speculative tasks; you can disable speculative execution if your cluster is stable enough.
Alternatives for repackaging small files are:
Create sequence files where each record contains one of the small files. With this option you will keep the original files.
Use IdentityMapper and IdentityReducer where the number of reducers is less than the number of files. This is the easiest approach, but it requires that every line in the files be equal and independent (no headers or metadata at the beginning of the files that are required to understand the rest of the file).
Create an external table in Hive and then insert all the records from this table into a new table (INSERT INTO . . . SELECT FROM . . .); see the sketch after this list. This approach has the same limitations as option two and requires using Hive, but the advantage is that you don't need to write a MapReduce job.
If you cannot merge the files as in option 2 or 3, my suggestion is to go with option 1.
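As an illustration of option 3, a minimal sketch (the path, table, and column names are made up):

-- expose the existing small text files through an external table
CREATE EXTERNAL TABLE small_files_raw (line STRING)
LOCATION '/data/incoming/small_files';

-- compact them into a new table
CREATE TABLE small_files_compacted (line STRING) STORED AS TEXTFILE;

INSERT INTO TABLE small_files_compacted
SELECT line FROM small_files_raw;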
You could try using HAR archives: https://hadoop.apache.org/docs/r2.7.2/hadoop-archives/HadoopArchives.html
Having many small, different files is not a problem by itself. If, for example, you have a table in Hive backed by many very small files in HDFS, that is not optimal; it is better to merge these files into fewer, bigger ones, because reading that table will create a lot of mappers. But if your files are completely different, like 'apples' and 'employees', and cannot be merged, then just store them as they are.
Question from a relative Hadoop/Hive newbie: How can I pass the contents of a Microsoft Word (binary) document as a parameter to a Hive function?
My goal is to be able to provide the full contents of a binary file (a Microsoft Word document in my particular use case) as a binary parameter to a UDTF. My initial approach has been to slurp the file's contents into a staging table and then provide it to the UDTF in a query later on, and this was how I attempted to build that staging table:
create table worddoc(content BINARY);
load data inpath '/path/to/wordfile' into table worddoc;
Unfortunately, there seem to be newlines in the Word document (or something acting enough like newlines) that result in the staging table having many rows instead of a single comprehensive blob, the latter of which is what I was hoping for. Is there some way of ensuring that the ingested file doesn't get exploded into multiple rows? I've seen similar questions here on SO regarding other binary data like image files, so that is why I'm guessing it's the newlines that are tripping me up.
Failing all that, is there a way to skip storing the file's contents in an intermediary Hive table and just provide the content directly to the UDTF at invocation time? Nothing obvious jumped out during my search through Hive's built-in functions, but maybe I am missing something.
Version-wise, the environment is Hive 0.13.1 and Hadoop 1.2.1 (although upgrades to both are pending).
This is a hacky workaround, but what I ended up doing is this:
1) base64 encode the binary document and put the encoded file into HDFS
2) In Hive:
CREATE TABLE staging_table (content STRING);
LOAD DATA INPATH '/path/to/base64_encoded_file' INTO TABLE staging_table;
CREATE TABLE target_table (content BINARY);
INSERT INTO target_table SELECT unbase64(content) FROM staging_table;
Theoretically this should work for any arbitrary binary file that you'd want to squish into Hive this way. A gotcha to watch out for: make sure your base64 encoding implementation produces a single-line file (my OS X base64 utility produces one-line output, while the base64 utility in a CentOS 6 VM I was using produced hundreds of lines); if it doesn't, you can glue the lines together before putting the file into HDFS.
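For what it's worth, with GNU coreutils you can avoid the multi-line issue up front; a sketch (the file names are made up, and -w 0 is specific to the GNU base64 utility):

base64 -w 0 document.docx > document.b64     # -w 0 disables line wrapping, so the output stays on one line
hadoop fs -put document.b64 /path/to/base64_encoded_file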
I'm running a Pig script that does a series of joins and writes using AvroStorage().
All is running well, and I am getting the data that I want... but it is being written to 845 Avro files (~30KB each). This does not seem right at all, but I cannot find any settings that I may have changed to go from my previous output of one large Avro file to 845 small ones (except adding another data source).
Would this change anything? And how can I get it back to one or two files??
Thanks!
One possibility is to change your block size. If you want to go back to fewer files, you can also try using Parquet: transform your .avro files through a Pig script and store them as .parquet files; this will reduce your 845 files to fewer.
But it isn't necessary to get back to fewer files, except for a performance advantage.
The number of files written by an MR job is defined by the number of reducers that ran. You can use PARALLEL in your Pig script to control the number of reducers.
If you are sure that the final data is small enough (comparable to your block size), you can add PARALLEL 1 to your JOIN statement to make sure the JOIN is translated into 1 reducer and thus writes its output to only one file.
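For instance, a sketch in Pig (the relation and field names are assumptions, and AvroStorage may need the appropriate jar registered depending on your Pig version):

-- force the join to run with a single reducer so the output lands in one file
a = LOAD 'input_a' USING AvroStorage();
b = LOAD 'input_b' USING AvroStorage();
joined = JOIN a BY id, b BY id PARALLEL 1;
STORE joined INTO 'joined_output' USING AvroStorage();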
I solved that using SET pig.maxCombinedSplitSize 134217728;
With SET default_parallel 10; it may still output many small files, depending on the Pig job.
Currently I am bringing around 10 tables into Hadoop from an EDW (Enterprise Data Warehouse); these tables closely follow a star schema model. I'm using Sqoop to bring all these tables across, resulting in 10 directories containing CSV files.
I'm looking at better ways to store these files before kicking off MR jobs. Should I follow some kind of model or build an aggregate before working on MR jobs? I'm basically looking for ways of storing related data together.
Most things I have found by searching are about storing trivial CSV files and reading them with opencsv. I'm looking for something a bit more involved and not just for CSV files. If moving to another format works better than CSV, that is no problem.
Boils down to: How best to store a bunch of related data in HDFS to have a good experience with MR.
I suggest spending some time with Apache Avro.
With Sqoop v1.3 and beyond you can import data from your relational data sources as Avro files using a schema of your own design. What's nice about Avro is that it provides a lot of features in addition to being a serialization format...
It gives you data plus schema in the same file, and it is compact and efficient for fast serialization. It gives you versioning facilities, which are useful when bringing in updated data with a different schema. Hive supports it for both reading and writing, and MapReduce can use it seamlessly.
It can be used as a generic interchange format between applications (not just for Hadoop) making it an interesting option for a standard, cross-platform format for data exchange in your broader architecture.
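For example, a hedged sketch of a Sqoop import that writes Avro data files instead of CSV (the connection string, credentials, table name, and target directory are placeholders):

sqoop import \
  --connect jdbc:mysql://edw-host/salesdb \
  --username etl_user -P \
  --table FACT_SALES \
  --as-avrodatafile \
  --target-dir /data/edw/fact_sales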
Storing these files as CSV is fine, since you will be able to process them with the text format and also read them through Hive using a specific delimiter. You can change the delimiter from comma to pipe ("|") if you prefer; that's what I do most of the time. You generally need to have large files in Hadoop, but if the data is large enough that you can partition these files and each partition is on the order of a few hundred gigs, then it is a good idea to split the files into separate directories based on your partition column.
Also, it is usually a better idea to have most of the columns in a single table than to have many small normalized tables, but that depends on your data size. And make sure that whenever you copy, move, or create data you do all the constraint checks in your applications, as it will be difficult to make small changes to the table later on; you would need to rewrite the complete file even for a small change.
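As an illustration, a pipe-delimited external table over one of those Sqoop output directories could look like this (the schema and location are made up, and the files are assumed to use | as the field delimiter):

CREATE EXTERNAL TABLE fact_sales (
  sale_id    BIGINT,
  product_id BIGINT,
  amount     DOUBLE,
  sale_date  STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
STORED AS TEXTFILE
LOCATION '/data/edw/fact_sales';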
Hive partitioning and bucketing concepts can be used effectively to put similar data together (not on the same nodes, but in the same files and folders) based on a particular column. Here are some nice tutorials for partitioning and bucketing.
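A minimal sketch of a partitioned and bucketed table (the column names and bucket count are assumptions):

CREATE TABLE fact_sales_part (
  sale_id    BIGINT,
  product_id BIGINT,
  amount     DOUBLE
)
PARTITIONED BY (sale_date STRING)
CLUSTERED BY (product_id) INTO 32 BUCKETS
STORED AS ORC;

Rows with the same sale_date land in the same folder, and within a partition rows hash by product_id into the same bucket file, which helps joins and sampling on that column.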