Hive: check stripe size for existing ORC storage

I have two scripts which parse data from raw logs and write it into ORC tables in Hive. One script creates more columns and the other fewer. Both tables are partitioned by a date field.
As a result I have ORC tables with files of different sizes.
The table with the larger number of columns consists of many small files (~4 MB per file inside each partition), while the table with fewer columns consists of a few large files (~250 MB per file inside each partition).
I suppose this happens because of the stripe.size setting in ORC, but I don't know how to check the stripe size of an existing table. Commands like "show create" and "describe" don't reveal any custom settings, which means the stripe size for these tables should be the default of 256 MB.
I'm looking for any advice on how to check stripe.size for an existing ORC table,
or an explanation of how file size inside ORC tables depends on the data in those tables.
P.S. It matters later when I'm reading from those tables with MapReduce, and there is a small number of reducers for the tables with big files.

Try the Hive ORC File Dump Utility.
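For example, pointing the dump utility at one of the files inside a partition directory prints per-stripe statistics; the path below is only an illustration, substitute one of your own ORC files:
hive --orcfiledump /user/hive/warehouse/mydb.db/mytable/dt=2016-01-01/000000_0
The output lists each stripe with its offset, row count, and data/index/footer lengths, along with the compression codec and compression block size, so you can see how large the stripes in the existing files actually are.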

Related

Hive queries on external S3 table very slow

We have our dataset in S3 (Parquet files) in the format below, with the data divided into multiple Parquet files based on row number:
data1_1000000.parquet
data1000001_2000000.parquet
data2000001_3000000.parquet
...
We created a Hive table on top of it using:
CREATE EXTERNAL TABLE parquet_hive (
foo string
) STORED AS PARQUET
LOCATION 's3://myBucket/myParquet/';
In total there are 22,000 Parquet files and the size of the folder is nearly 300 GB. When I run a count query on this table in Hive, it takes 6 hours to return the result, which is nearly 7 billion records. How can we make it faster? Can I create a partition or index on the table, or is this the time it usually takes when pulling data from S3? Can anyone advise what is wrong here.
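Since the question mentions partitioning, here is a minimal sketch of what a partitioned variant of that DDL could look like, assuming the files can be laid out under one S3 prefix per partition value; the dt column and the locations below are purely illustrative:
CREATE EXTERNAL TABLE parquet_hive_partitioned (
foo string
) PARTITIONED BY (dt string)
STORED AS PARQUET
LOCATION 's3://myBucket/myParquetPartitioned/';
ALTER TABLE parquet_hive_partitioned ADD PARTITION (dt='2016-01-01')
LOCATION 's3://myBucket/myParquetPartitioned/dt=2016-01-01/';
With partitions in place, queries that filter on dt only scan the matching prefixes instead of all 22,000 files; a full count over the whole table still has to read everything.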
Thanks.

How to combine multiple ORC files (belonging to each partition) in a Partitioned Hive ORC table into a single big ORC file

I have a partitioned ORC table in Hive. After loading the table with all possible partitions, I get multiple ORC files on HDFS, i.e. each partition directory on HDFS has an ORC file in it. I need to combine all these ORC files under each partition into a single big ORC file for a use case.
Can someone suggest a way to combine these multiple ORC files (belonging to each partition) into a single big ORC file?
I've tried creating a new non-partitioned ORC table from the partitioned table. It does reduce the number of files, but not to a single file.
PS: Creating a table out of another one is a map-only task, hence setting the number of reducers to 1 using the property 'set mapred.reduce.tasks=1;' doesn't help.
Thanks
You can use the CONCATENATE command to combine the small ORC files. This can be done at the table as well as the partition level.
The syntax, as per the ORC documentation:
users can request an efficient merge of small ORC files together by
issuing a CONCATENATE command on their table or partition. The files
will be merged at the stripe level without reserialization.
ALTER TABLE istari [PARTITION partition_spec] CONCATENATE;
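For example, against a hypothetical table named my_orc_table partitioned by dt (names are illustrative):
-- merge the small ORC files of a single partition
ALTER TABLE my_orc_table PARTITION (dt='2016-01-01') CONCATENATE;
-- or merge at the table level for a non-partitioned table
ALTER TABLE my_orc_table CONCATENATE;
Depending on file sizes and merge settings, one pass may still leave more than one file per partition; the command can simply be run again to merge further.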

Increase write speed in hive for ORC files

Currently an insert overwrite table T1 select * from T2; takes around 100 minutes on my cluster. Table T1 is ORC formatted and T2 is text formatted. I am reading 60 GB of text data from T2 and inserting it into the ORC table T1 (10 GB after insertion). If I use text format for both tables, the insert takes around 50 minutes. In both cases, what can we do to improve the write speed (I have large tables coming in), or any other suggestions?
I have recently derived an approach which splits the source file into partitions; this takes around 6 minutes from text table to ORC table in Hive for 100 GB of data.
The approach is below.
Before inserting the file into the text table
1. Split the file into small partitions in the Unix location using the split command (see the sketch after these steps).
2. Then remove the original file from the path and keep only the split files.
Inserting into the text table
3. Now load the data into the text table.
4. It will take some minutes to load, and you will see the same number of partitions as you created at the Unix level.
Inserting into the ORC table
For example, if you have split the actual file into, say, 20 partitions, you will see 20 tasks/containers running on the cluster to load into the ORC table, which is much faster than the other solutions I came across.
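A rough sketch of those steps for a hypothetical 100 GB file raw_data.txt and tables text_table / orc_table (all names and paths are illustrative, not from the answer):
# steps 1-2: split the source into pieces in a staging directory, then drop the original
mkdir staging
split -l 5000000 raw_data.txt staging/raw_part_
rm raw_data.txt
# step 3: load the whole staging directory into the text table
hive -e "LOAD DATA LOCAL INPATH '/path/to/staging' INTO TABLE text_table;"
# step 4 + ORC load: the conversion runs as parallel map tasks over the loaded files
hive -e "INSERT OVERWRITE TABLE orc_table SELECT * FROM text_table;"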
#despicable-me
That is probably normal behaviour: when you write data from text to text, it just writes data line by line from one file into another. Text-to-ORC does some more work besides that. Compared to the text-to-text operation, text-to-ORC importing performs additional bucket/partition operations and compression operations on your data. That is the reason for the time impact. The ORC format gives two main benefits over the text format:
space savings due to compression
improved access time when working with the data
Usually the INSERT operation is a one-time operation, while access operations will be very frequent. So it usually makes sense to spend some more time at the beginning importing the data and then get a huge benefit in space savings due to optimized storage of the data and in optimized access time to this data.

Impact of Repeatedly Creating and Deleting Hive Table

I have a use case which requires around 200 Hive Parquet tables.
I need to load these Parquet tables from flat text files, but we cannot directly load a Parquet table from a flat text file.
So I am using the following approach:
Created a temporary managed text table.
Loaded the temp table with text data.
Created an external Parquet table.
Loaded the Parquet table from the text table using a SELECT query.
Dropped the text files for the temporary text table (but kept the table in the metastore).
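A minimal HiveQL sketch of those five steps for one of the 200 tables; the table names, columns, and paths below are illustrative, not taken from the question:
-- 1. temporary managed text table
CREATE TABLE tmp_events_txt (id BIGINT, payload STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
-- 2. load the flat text file
LOAD DATA INPATH '/landing/events/current' INTO TABLE tmp_events_txt;
-- 3. external Parquet table (created once)
CREATE EXTERNAL TABLE events_parquet (id BIGINT, payload STRING)
STORED AS PARQUET
LOCATION '/warehouse/external/events_parquet';
-- 4. convert via a SELECT query
INSERT OVERWRITE TABLE events_parquet SELECT * FROM tmp_events_txt;
-- 5. first approach: remove only the data, keep the table in the metastore
TRUNCATE TABLE tmp_events_txt;
-- (second approach: DROP TABLE tmp_events_txt; and re-create it on the next run)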
This approach keeps temporary metadata (for 200 tables) in the metastore. So I have a second approach: drop the temporary text tables too, along with the text files from HDFS, and next time re-create the temporary tables and delete them once the Parquet tables are created.
Now, as I need to follow the above steps for all 200 tables every 2 hours, will creating and deleting tables in the metastore impact anything in the cluster during production?
Which approach could impact production: keeping temporary metadata in the metastore, or creating and deleting tables (metadata) in the Hive metastore?
No, there is no impact; the backend of the Hive metastore should be able to handle 200 * n changes per hour easily. If you're unsure, start with 50 tables and monitor the backend DB performance.

Creating an ORC file and not Hive table?

What I found from googling around are ways of creating an ORC table using Hive, but I want an ORC file on which I can run my custom MapReduce job.
Also, please let me know whether the file created by Hive under the warehouse directory for my ORC table is just a table file rather than an actual ORC file I can use, e.g. /user/hive/warehouse/tbl_orc/000000_0
[Wrap-up of the discussion]
- a Hive table is mapped to an HDFS directory (or a list of directories, if the table is partitioned)
- all files in that directory use the same SerDe (ORC, Parquet, AVRO, Text, etc.) and have the same column set; all together, they contain all the data available for that table
- each file in that directory is the result of a previous MapReduce job -- either a Hive INSERT, a Pig dataset saved via HCatalog, a Spark dataset saved via HiveContext... or any custom job that happens to drop a file there, hopefully compliant with the table SerDe and schema (retrieved via the MetastoreClient Java API, or via the HCatalog API, whatever)
- note that a single job with 3 reducers will probably create 3 new files (and maybe 1 empty file + 1 small file + 1 big file!); and a job with 24 mappers and no reducer will create 24 files, unless some kind of "merge small files" post-processing step is enabled
- note also that most file names give absolutely no information about the way the file is encoded internally, they are just sequence numbers (i.e. the 5th job to add 12 files will typically create files 000004_0 to 000004_11)
All in all, processing an ORC fileset with a Java MapReduce program should be very similar to processing a text fileset. You just have to provide the correct SerDe and the correct field mapping -- I think the compression codec is recorded explicitly in the files, so the SerDe handles it auto-magically at read time. Just remember that ORC files are not splittable at the record level, but at the stripe level (a stripe is a bunch of records stored in columnar format with tokenization and optional compression).
Of course, that will not give you access to advanced ORC features such as vectorization or stripe pruning (somewhat similar to "smart scan" in Oracle Exadata).
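As a quick sanity check, Hive's ORC file dump utility can be pointed at a file under the warehouse directory, for example the one from the question:
hive --orcfiledump /user/hive/warehouse/tbl_orc/000000_0
If it prints the schema, the stripe list and the compression settings, the file is a regular ORC file that any ORC reader (including a custom MapReduce job) can consume.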
