Create a Parquet-backed Hive table by using a schema file

Cloudera's documentation shows a simple way to "create an Avro-backed Hive table by using an Avro schema file." This works great. I would like to do the same thing for a Parquet-backed Hive table, but the relevant documentation in this case lists out every column type rather than reading them from a schema file. Is it possible to read the Parquet columns from a schema file, in the same way as for Avro data?

Currently, the answer appears to be no. There is an open issue with Hive.
https://issues.apache.org/jira/browse/PARQUET-76
The issue has been active recently, so hopefully in the near future Hive will offer the same functionality for Parquet as it does for Avro.
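For reference, a hedged HiveQL sketch of the contrast (paths, table names, and columns are illustrative, and STORED AS AVRO assumes Hive 0.14 or later): the Avro table reads its columns from a schema file via avro.schema.url, while the Parquet table currently has to spell them out:
-- Avro: column definitions come from the schema file
CREATE EXTERNAL TABLE events_avro
STORED AS AVRO
LOCATION '/data/events_avro'
TBLPROPERTIES ('avro.schema.url' = 'hdfs:///schemas/events.avsc');

-- Parquet: column definitions must be listed explicitly
CREATE EXTERNAL TABLE events_parquet (
  id      BIGINT,
  name    STRING,
  created TIMESTAMP
)
STORED AS PARQUET
LOCATION '/data/events_parquet';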

Related

Schema on read in hive for tsv format file

I am new to Hadoop. I have data in TSV format with 50 columns and I need to store it in Hive. How can I create the table and load the data on the fly, using schema on read, without manually writing a CREATE TABLE statement?
Hive requires you to run a CREATE TABLE statement because the Hive metastore must be updated with the description of what data location you're going to be querying later on.
Schema-on-read doesn't mean that you can query every possible file without knowing metadata beforehand such as storage location and storage format.
SparkSQL or Apache Drill, on the other hand, will let you infer the schema from a file, but you must again define the column types for a TSV if you don't want everything to be a string column (or coerced to unexpected types). Both of these tools can interact with a Hive metastore for "decoupled" storage of schema information.
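For context, the CREATE TABLE statement for a TSV is short even with many columns; a minimal sketch, assuming illustrative column names and path (the real statement would list all 50 columns):
CREATE EXTERNAL TABLE my_tsv_table (
  col1 STRING,
  col2 INT,
  col3 DOUBLE
  -- ... remaining columns ...
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/data/my_tsv';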
You can use Hue:
http://gethue.com/hadoop-tutorial-create-hive-tables-with-headers-and/
Or, with Spark, you can infer the schema of the file and save it as a Hive table:
// assumes spark is a SparkSession created with .enableHiveSupport()
val df = spark.read
  .option("delimiter", "\t")
  .option("header", "true")
  .option("inferSchema", "true") // <-- HERE: ask Spark to infer the column types
  .csv("/home/cloudera/Book1.csv")

// save the DataFrame and its inferred schema as a Hive table (the table name is illustrative)
df.write.saveAsTable("books")

Solution for Dynamic Schemas - HIVE/AVRO

The requirement is to keep up with schema evolution for a target ORC table. I am receiving JSON events from the source. We plan to convert these to Avro (since it supports schema evolution). Since the schema can change daily or weekly, we need to keep ingesting new JSON data files, convert them to Avro, and store all of the data (old and new) in an ORC Hive table. How do we solve this?
You can follow the approach below, which is one of many ways to solve this.
1. Create HBase Table
Read the Avro data and initially create a table in HBase. (You can use Spark to do this efficiently.)
The HBase table will take care of schema evolution going forward.
2. Create Hive Wrapper Table
Create a Hive wrapper table (using the HBase storage handler) pointing to the HBase table. (You can read more about storage handlers in the Hive documentation.)
3. Create ORC Table
Now create the ORC table from the wrapper table created in step 2.
4. Things you need to handle
Since Hive tables are tightly coupled with a schema, you need an extra step before writing the data into the Hive wrapper table from step 2. You need to identify any new columns and then add them appropriately to the existing wrapper or ORC table (see the sketch after this list). This again can be achieved with NiFi, Spark, or something as simple as a shell script. Choose the right tool according to your use case.
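A minimal HiveQL sketch of steps 2-4, assuming an existing HBase table named events with a column family d; all table, column, and family names are illustrative, and only the storage handler class and SerDe/table property names are the standard Hive/HBase integration pieces:
-- Step 2: Hive wrapper table over the existing HBase table
CREATE EXTERNAL TABLE events_hbase (
  rowkey  STRING,
  user_id STRING,
  payload STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,d:user_id,d:payload')
TBLPROPERTIES ('hbase.table.name' = 'events');

-- Step 3: ORC table created and loaded from the wrapper table
CREATE TABLE events_orc STORED AS ORC AS
SELECT * FROM events_hbase;

-- Step 4: when a new column appears, extend the ORC table
ALTER TABLE events_orc ADD COLUMNS (device_type STRING);
-- (the wrapper table also needs to be recreated with an extended
--  hbase.columns.mapping before the new column can be read from HBase)
INSERT INTO TABLE events_orc
SELECT rowkey, user_id, payload, device_type FROM events_hbase;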

Where does Hive data get stored?

I am a little confused about where Hive stores its data.
Does it store its data in HDFS or in an RDBMS?
Does the Hive metastore use an RDBMS to store the Hive tables' metadata?
Thanks in advance!
Hive data is stored in a Hadoop-compatible filesystem: S3, HDFS, or another compatible filesystem.
Hive metadata is stored in an RDBMS such as MySQL; see the list of supported RDBMSs.
The location of Hive table data in S3 or HDFS can be specified for both managed and external tables.
The difference between managed and external tables is that for a managed table, DROP TABLE drops the table and deletes the table's data, whereas for an external table DROP TABLE drops only the table definition; the data remains as-is and can be used for creating other tables over it.
See details here: Create/Drop/Truncate Table
Here is the answer to your question, but I suggest you read Hive books or the Apache Hive site for a better understanding.
Does it store its data in HDFS or in an RDBMS? - The data for Hive is always stored in HDFS. For managed tables the data is stored by default in the Hive warehouse, which is a directory in HDFS. For a Hive external table the user can specify a location anywhere in HDFS.
Does the Hive metastore use an RDBMS to store the Hive tables' metadata? - Yes, Hive uses an RDBMS to store the metadata.
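A minimal HiveQL sketch of the difference (table names, columns, and paths are illustrative):
-- Managed table: data lives under the Hive warehouse directory in HDFS
-- (e.g. /user/hive/warehouse/sales_managed)
CREATE TABLE sales_managed (id INT, amount DOUBLE);

-- External table: data stays at the location you point to
CREATE EXTERNAL TABLE sales_external (id INT, amount DOUBLE)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/incoming/sales';

DROP TABLE sales_managed;   -- removes metadata AND the files under the warehouse directory
DROP TABLE sales_external;  -- removes only metadata; files under /data/incoming/sales remain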

How to use Parquet files created using Apache Drill inside Hive

Apache Drill has a nice feature of making Parquet files out of many incoming datasets, but it seems like there is not a lot of information on how to use those Parquet files later on - specifically in Hive.
Is there a way for Hive to make use of those "1_0_0.parquet", etc. files? Maybe create a table and load the data from the Parquet files, or create a table and somehow place those Parquet files inside HDFS so that Hive reads them?
I have faced this problem. If you are using a Cloudera distribution, you can create the tables using Impala (Impala and Hive share the metastore); it allows creating tables from a Parquet file. Unfortunately, Hive doesn't allow this:
CREATE EXTERNAL TABLE table_from_file
LIKE PARQUET '/user/etl/destination/datafile1.parquet'
STORED AS PARQUET
LOCATION '/user/test/destination';
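If Impala is not available, a hedged Hive-only alternative is to declare the columns explicitly (they must match the schema Drill wrote; the column names below are illustrative) and point an external table at the directory containing the Drill output files:
CREATE EXTERNAL TABLE drill_output (
  id    BIGINT,
  name  STRING,
  price DOUBLE
)
STORED AS PARQUET
LOCATION '/user/test/destination';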

Why do we need to move an external table to a managed Hive table?

I am new to Hadoop and learning Hive.
I don't understand the paragraph below about external tables in Hive, from Hadoop: The Definitive Guide, 3rd edition, page 428, last paragraph.
"A common pattern is to use an external table to access an initial dataset stored in HDFS (created by another process), then use a Hive transform to move the data into a managed Hive table."
Can anybody briefly explain what the above passage says?
Usually the data in the initial dataset is not laid out in the optimal way for queries.
You may want to modify the data (modify some columns, add columns, aggregate, etc.) and store it in a specific way (partitioned, bucketed, sorted, etc.) so that queries benefit from these optimizations.
The key difference between external and managed tables in Hive is that data in an external table is not managed by Hive.
When you create an external table, you define an HDFS directory for that table; Hive simply "looks" into it and can read data from it, but Hive can't delete or change the data in that folder. When you drop an external table, Hive only deletes the metadata from its metastore and the data in HDFS remains unchanged.
A managed table is basically a directory in HDFS that is created and managed by Hive. Moreover, all operations for removing or changing partitions, raw data, or the table itself MUST be done through Hive, otherwise the metadata in the Hive metastore may become incorrect (e.g. you manually delete a partition from HDFS but the Hive metastore still records that the partition exists).
In Hadoop: The Definitive Guide, I think the author meant that it is a common practice to write an MR job that produces some raw data and keeps it in some folder. Then you create a Hive external table that looks into that folder, and you can safely run queries without the risk of dropping the table's data, etc.
In other words, you can run an MR job that produces some generic data and then use a Hive external table as a source of data for inserts into managed tables. This helps you avoid creating many similar MR jobs and delegates that task to Hive queries: you create a query that takes data from the external table, aggregates/processes it however you want, and puts the result into managed tables (see the sketch below).
Another use of an external table is as a source for data from remote servers, e.g. in CSV format.
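A minimal HiveQL sketch of that external-to-managed pattern (all table names, columns, and paths are illustrative):
-- External table over the raw dataset produced by another process
CREATE EXTERNAL TABLE raw_events (
  event_time STRING,
  user_id    STRING,
  amount     DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/incoming/events';

-- Managed table with a query-friendly layout
CREATE TABLE events_clean (
  user_id STRING,
  amount  DOUBLE
)
PARTITIONED BY (event_date STRING)
STORED AS ORC;

-- The "Hive transform" step: reshape the data and move it into the managed table
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE events_clean PARTITION (event_date)
SELECT user_id,
       amount,
       substr(event_time, 1, 10) AS event_date  -- e.g. '2021-06-01'
FROM raw_events;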
There is no reason to move a table to managed unless you are going to enable ACID or other features supported only for managed tables.
The list of differences in features supported by managed/external tables may change in the future, so it is better to check the current documentation. Currently these features are:
ARCHIVE/UNARCHIVE/TRUNCATE/MERGE/CONCATENATE only work for managed tables
DROP deletes data for managed tables while it only deletes metadata for external ones
ACID/Transactional only works for managed tables
Query Results Caching only works for managed tables
Only the RELY constraint is allowed on external tables
Some Materialized View features only work on managed tables
You can create both EXTERNAL and MANAGED tables on top of the same location, see this answer with more details and tests: https://stackoverflow.com/a/54038932/2700344
The data structure has nothing to do with the external/managed table type. If you want to change the structure, you do not necessarily need to change the table's managed/external type.
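For illustration, a hedged sketch of what using one of those managed-only features looks like (the table names are hypothetical, and switching a table between external and managed via the EXTERNAL table property depends on your Hive version, so check the documentation for your release):
-- ACID/transactional tables must be managed and stored as ORC
CREATE TABLE events_acid (id BIGINT, payload STRING)
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');

-- An existing external table can typically be converted to managed (and back)
ALTER TABLE some_external_table SET TBLPROPERTIES ('EXTERNAL' = 'FALSE');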
It is also mentioned in the book.
When your table is an external table, you can use other technologies like Pig, Cascading, or MapReduce to process it.
You can also use multiple schemas for that dataset.
And you can also create the data lazily if it is an external table.
When you decide that the dataset should be used only by Hive, make it a Hive managed table.
