Unable to understand significance of external keyword in hive - hadoop

I have a few doubts which I need to be clarified:
If I create a table without the "External" keyword, but specify "location", will it be an external or internal table in the hive?
If I use the "external" keyword with a table name but do not specify the 'location', it will be saved to hive/warehouse location which is the default storage. In this case, will it be an external table?
Overall, I want to understand what makes a table external: the keyword "External" or specifying the 'location'.
Any help will be appreciated.

If I create a table without the "External" keyword, but specify
"location", will it be an external or internal table in the hive?
It will be a MANAGED table (EXTERNAL=False). You can check it using DESCRIBE FORMATTED tablename;
If I use the "external" keyword with a table name but do not specify
the 'location', it will be saved to hive/warehouse location which is
the default storage. In this case, will it be an external table?
Yes, it will be an EXTERNAL table.
what makes a table external, the keyword "External" or specifying the
'location'
Only the EXTERNAL property / keyword in CREATE TABLE makes a table EXTERNAL, not the location. The EXTERNAL property is not about location in the first place. EXTERNAL vs. not EXTERNAL (MANAGED) defines how DROP TABLE behaves: for an EXTERNAL table, DROP TABLE will not remove the table location; only the table metadata will be deleted. For a managed table, DROP TABLE will remove its location with all data files as well as the metadata. There are also differences in the features supported for managed and external tables.
In earlier versions of Hive there were no constraints on where managed or external tables should be located, and it was possible to create a MANAGED table outside hive.metastore.warehouse.dir. If LOCATION is not specified, Hive will use the value of hive.metastore.warehouse.dir for both managed and external tables. And you can create both managed and external tables on top of the same location: https://stackoverflow.com/a/54038932/2700344.
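For example, a minimal sketch (the table names and path are made up for illustration) showing that the keyword, not the location, decides the table type:
CREATE TABLE t_managed (id INT) LOCATION '/data/t_managed';    -- managed, despite the custom LOCATION
CREATE EXTERNAL TABLE t_external (id INT);                      -- external, stored under the default warehouse dir
DESCRIBE FORMATTED t_managed;    -- Table Type: MANAGED_TABLE
DESCRIBE FORMATTED t_external;   -- Table Type: EXTERNAL_TABLE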
See also https://stackoverflow.com/a/56957960/2700344 and https://stackoverflow.com/a/67073849/2700344

Related

How to use external table in hive?

Can anyone please explain why and where do we use external tables in hive?
Please explain a scenario to understand easily.
We use an external table when the underlying dataset pointed to by the Hive table is shared by several consumers, e.g. MapReduce jobs, Pig, etc., and we use a managed table when the dataset pointed to by the Hive table is used only by Hive.
Actually, in Hive a managed table has full control over its dataset, i.e. if you drop a managed table the dataset will also be deleted from the Hive warehouse (/user/hive/warehouse) in HDFS, but in the case of an external table, when you drop the table the dataset is not deleted from HDFS.
For example, suppose you have a 50 GB dataset. If you create multiple copies of it for different purposes, it will simply take more space, so the better option is to use an external table: when you drop the table the dataset is not deleted, and it can still be used by any other application, such as Pig, or for any other purpose.
As a rule of thumb: use an external table if you plan to work with that data not only from Hive but from other frameworks as well. Otherwise make it internal.
The only difference between External and Managed tables in Hive is the DROP TABLE / DROP PARTITION behavior. For a Managed table it will drop the data as well; for an External table the data will remain untouched in the table/partition location.
Use External in most cases. An external table allows you to change the table definition easily, and you can also create several tables on top of the same location.
Use a Managed table if the table is temporary/intermediate and the data should be deleted to free space.
A managed table can be converted to external and vice versa using
alter table table_name SET TBLPROPERTIES('EXTERNAL'='TRUE');
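For the reverse direction, the same command with the value flipped should work (a sketch; note that in some Hive versions the property value is case-sensitive and must be uppercase):
alter table table_name SET TBLPROPERTIES('EXTERNAL'='FALSE');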

Hive External Table vs Internal table commands

Assuming I have these two tables:
External:
create external table emp_feedback (
emp_id int,
emp_name string
)
LOCATION '/user/hive/warehouse/mydb.db/contacts';
Internal:
create table emp_feedback (
emp_id int,
emp_name string
);
LOAD DATA INPATH 'file_location_of_csv' INTO TABLE emp_feedback;
When I say LOCATION '/user/hive/warehouse/mydb.db/contacts' for the external table, does that mean that the data for that table is found in the directory '/user/hive/warehouse/mydb.db/contacts'? So that directory has to exist in HDFS beforehand?
Can I use LOAD DATA INPATH... for an external table, or is that only used for internal tables? And vice versa, can I use LOCATION... for an internal table?
(a) Yes, you are right: it means that the data is found in that location/directory.
(b) No, the directory doesn't have to exist to create the schema; Hive will create the directory if it doesn't exist. But there is no point in doing so, as the table will be empty and therefore your queries will return nothing. In the future, though, you can move data to that location and use the table.
(a) LOAD DATA INPATH can be used for both external and internal tables. When you do, it moves the data to the location specified by the schema (for external tables) or to /.../warehouse/... (for internal tables).
(b) LOCATION can be specified for both internal and external tables. But when you drop the internal table, it will also remove the data from that location, whereas only the metadata is removed for external tables.
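For instance, a quick sketch combining both points (the paths are made up; emp_feedback is the external table from the question):
CREATE TABLE emp_internal (emp_id int, emp_name string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/emp_internal';                                 -- LOCATION on an internal table
LOAD DATA INPATH '/staging/emp.csv' INTO TABLE emp_feedback;   -- loading into the external table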
The load data inpath command is used to load data into a Hive table. 'LOCAL' signifies that the input file is on the local file system; if 'LOCAL' is omitted, the file is looked for in HDFS.
load data inpath '/path/file.csv' into table mytable;
load data local inpath '/Local path/file.csv' into table mytable;
The non-LOCAL command removes the content from the source directory and moves it into the internal table's directory.
The LOCATION keyword allows a table to point to any HDFS location for its storage, rather than being stored in the folder specified by the configuration property hive.metastore.warehouse.dir.
In other words, with a specified LOCATION '/your path/', Hive does not use the default location for this table. This comes in handy if you already have data generated.
Remember, LOCATION is usually specified for EXTERNAL tables; for regular tables the default location is used.
Create an external table and copy the data into it; the data won't be moved from the source. You can drop the external table, but the source data is still available.
When you drop an external table, it only drops the meta data of HIVE table. Data still exists at HDFS file location.
To summarize, load data inpath tells Hive where to look for input files, and the LOCATION keyword tells Hive where to save output files on HDFS.

External hive table as parquet file returns NULL when queried

I created a .parquet file by using map reduce job. Now I want to create an external table on top of this file. Here is the command:
CREATE EXTERNAL TABLE testparquet (
NAME STRING,
AGE INT
)
STORED AS PARQUET
LOCATION 'file location';
The table is created successfully, but when I query the table using a simple SELECT *, I get NULL for all fields. The version of Hive is 0.13.
Is there anything that I am missing?
When using external files, you need to explicitly synchronize the metadata store that knows about the schema of your data, with the actual data itself.
Typically, you'll use the INVALIDATE METADATA command to force following queries to re-read the data. You can also use REFRESH <table-name> if you have just one table that has been updated.
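For instance (a sketch; note that INVALIDATE METADATA and REFRESH come from Impala rather than Hive itself, and testparquet is the table from the question):
INVALIDATE METADATA testparquet;   -- force the metadata for this table to be reloaded
REFRESH testparquet;               -- re-read the table's file listing after new files arrive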

Why hive doesn't allow create external table with CTAS?

In Hive, creating an external table with CTAS is a semantic error. Why?
A table created by CTAS is atomic, while an external table means the data will not be deleted when dropping the table; the two do not seem to conflict.
In Hive, when we create a table (NOT external), the data is stored under /user/hive/warehouse.
But when creating an external Hive table, the file can be anywhere else; we are just pointing to that HDFS directory and exposing the data as a Hive table to run Hive queries etc.
This SO answer puts it more precisely: Create hive table using "as select" or "like" and also specify delimiter
Am I missing something here?
Try this...You should be able to create an external table with CTAS.
CREATE TABLE ext_table LOCATION '/user/XXXXX/XXXXXX'
AS SELECT * from managed_table;
I was able to create one. I am using 0.12.
I think it's a semantic error because CTAS misses the most important parameter of an external table definition, viz. the external location of the data file. By definition: 1. external means the data is outside Hive's control, residing outside the Hive data warehouse dir; 2. if the table is dropped, the data remains intact and only the table definition is removed from the Hive metastore. So,
i. if CTAS is used with a managed source table, the new external table will have its file in the warehouse, which will be removed with DROP TABLE, making #2 wrong;
ii. if CTAS is used with another external table, the two tables will point to the same file location.
CTAS creates a managed hive table with the new name using the schema and data of the said table.
You can convert it to an external table using:
ALTER TABLE <TABLE_NAME> SET TBLPROPERTIES('EXTERNAL'='TRUE');
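Putting the two steps together (a sketch; the table names are illustrative):
CREATE TABLE new_ext_table AS SELECT * FROM managed_table;
ALTER TABLE new_ext_table SET TBLPROPERTIES('EXTERNAL'='TRUE');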

Difference between Hive internal tables and external tables?

Can anyone tell me the difference between Hive's external tables and internal tables?
I know the difference comes when dropping the table, but I don't understand what is meant by 'the data and metadata are deleted' for internal tables and 'only the metadata is deleted' for external tables.
Can anyone explain it to me in terms of nodes, please?
Hive has a relational database on the master node that it uses to keep track of state.
For instance, when you run CREATE TABLE FOO(foo string) LOCATION 'hdfs://tmp/', this table schema is stored in the database.
If you have a partitioned table, the partitions are stored in the database (this allows Hive to use lists of partitions without going to the file system and finding them, etc.). These sorts of things are the 'metadata'.
When you drop an internal table, it drops the data, and it also drops the metadata.
When you drop an external table, it only drops the meta data. That means hive is ignorant of that data now. It does not touch the data itself.
Hive tables can be created as EXTERNAL or INTERNAL. This is a choice that affects how data is loaded, controlled, and managed.
Use EXTERNAL tables when:
The data is also used outside of Hive. For example, the data files are read and processed by an existing program that doesn't lock the files.
Data needs to remain in the underlying location even after a DROP TABLE. This can apply if you are pointing multiple schemas (tables or views) at a single data set or if you are iterating through various possible schemas.
You want to use a custom location such as ASV.
Hive should not own the data and control settings, dirs, etc.; you have another program or process that will do those things.
You are not creating table based on existing table (AS SELECT).
Use INTERNAL tables when:
The data is temporary.
You want Hive to completely manage the lifecycle of the table and data.
To answer your question:
For External Tables, Hive stores the data in the LOCATION specified during creation of the table (generally not in the warehouse directory). If the external table is dropped, then the table metadata is deleted but not the data.
For Internal tables, Hive stores data into its warehouse directory. If the table is dropped then both the table metadata and the data will be deleted.
For your reference,
Difference between Internal & External tables:
For External Tables -
External tables store files on the HDFS server, but the tables are not completely linked to the source file.
If you delete an external table the file still remains on the HDFS server.
As an example if you create an external table called “table_test” in HIVE using HIVE-QL and link the table to file “file”, then deleting “table_test” from HIVE will not delete “file” from HDFS.
External table files are accessible to anyone who has access to the HDFS file structure, and therefore security needs to be managed at the HDFS file/folder level.
Metadata is maintained on the master node, and deleting an external table from HIVE only deletes the metadata, not the data/file.
For Internal Tables -
Stored in a directory based on the setting hive.metastore.warehouse.dir; by default internal tables are stored in “/user/hive/warehouse”. You can change this by updating the location in the config file.
Deleting the table deletes the metadata and data from master-node and HDFS respectively.
Internal table file security is controlled solely via HIVE. Security needs to be managed within HIVE, probably at the schema level (depends on the organization).
Hive may have internal or external tables; this is a choice that affects how data is loaded, controlled, and managed.
Use EXTERNAL tables when:
The data is also used outside of Hive. For example, the data files are read and processed by an existing program that doesn’t lock the files.
Data needs to remain in the underlying location even after a DROP TABLE. This can apply if you are pointing multiple schemas (tables or views) at a single data set or if you are iterating through various possible schemas.
Hive should not own the data and control settings, directories, etc.; you may have another program or process that will do those things.
You are not creating table based on existing table (AS SELECT).
Use INTERNAL tables when:
The data is temporary.
You want Hive to completely manage the life-cycle of the table and data.
Source:
HDInsight: Hive Internal and External Tables Intro
Internal & external tables in Hadoop- HIVE
An internal table's data is stored in the warehouse folder, whereas an external table's data is stored at the location you mentioned during table creation.
So when you delete an internal table, it deletes the schema as well as the data under the warehouse folder, but for an external table it's only the schema that you will lose.
So if you want the external table back after deleting it, you can create a table with the same schema again and point it to the original data location. Hope it is clear now.
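For example, a sketch of getting a dropped external table back (reusing the illustrative schema and path from the earlier question):
CREATE EXTERNAL TABLE emp_feedback (emp_id int, emp_name string)
LOCATION '/user/hive/warehouse/mydb.db/contacts';   -- the original data directory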
The only difference in behaviour (not the intended usage) based on my limited research and testing so far (using Hive 1.1.0 -cdh5.12.0) seems to be that when a table is dropped
the data of the Internal (Managed) tables gets deleted from the HDFS file system
while the data of the External tables does NOT get deleted from the HDFS file system.
(NOTE: see the section 'Managed and External Tables' in https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL, which lists some other differences that I did not completely understand.)
I believe Hive chooses the location where it needs to create the table based on the following precedence from top to bottom
Location defined during the Table Creation
Location defined in the Database/Schema Creation in which the table is created.
Default Hive warehouse directory (property hive.metastore.warehouse.dir in hive-site.xml)
When the "Location" option is not used during the "creation of a hive table", the above precedence rule is used. This is applicable for both Internal and External tables. This means an Internal table does not necessarily have to reside in the Warehouse directory and can reside anywhere else.
Note: I might have missed some scenarios, but based on my limited exploration, the behaviour of both Internal and External tables seems to be the same except for the one difference (data deletion) described above. I tried the following scenarios for both Internal and External tables.
Creating table with and without Location option
Creating table with and without Partition Option
Adding new data using the Hive Load and Insert Statements
Adding data files to the table location outside of Hive (using HDFS commands) and refreshing the table using the MSCK REPAIR TABLE command (see the sketch after this list)
Dropping the tables
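For the refresh scenario above, a sketch (the table name is made up; the files are assumed to have been copied into the table's partition directories with hadoop fs -put):
MSCK REPAIR TABLE my_partitioned_table;   -- registers partition directories added outside of Hive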
For external tables, if you drop the table it deletes only the schema; the table data still exists at its physical location. So to delete the data, use hadoop fs -rmr on the table's location.
With a managed table, Hive has full control over the table and its data. With external tables, users keep control over the data.
INTERNAL: the table is created first and the data is loaded later.
EXTERNAL: the data is already present and the table is created on top of it.
Internal tables are useful if you want Hive to manage the complete lifecycle of your data including the deletion, whereas external tables are useful when the files are being used outside of Hive.
External Hive tables have the advantage that dropping the table does not remove the files, and we can set row formats with different settings, like SERDE or DELIMITED.
Also keep in mind that Hive is a big data warehouse. When you want to drop a table you don't want to lose gigabytes or terabytes of data; generating, moving and copying data at that scale can be time consuming.
When you drop a 'Managed' table, Hive will also trash its data.
When you drop an 'External' table, only the schema definition is removed from the Hive metastore. The data on HDFS remains.
Consider this scenario which best suits for External Table:
A MapReduce (MR) job filters a huge log file and spits out n sub log files (e.g. each sub log file contains logs of a specific message type), and the output, i.e. the n sub log files, is stored in HDFS.
These log files are to be loaded into Hive tables for further analytics. In this scenario I would recommend external table(s), because the actual log files are generated and owned by an external process (the MR job); besides, you can avoid the additional step of loading each generated log file into its respective Hive table.
The best use case for an external table in Hive is when you want to create the table from a file, either CSV or text.
Both Internal and External tables are owned by HIVE; the only difference is the ownership of the data. The commands for creating both tables are shown below; only an additional EXTERNAL keyword is needed in the case of external table creation. Both kinds of table can be created/deleted/modified using SQL statements.
In case of Internal Tables, both the table and the data contained in the tables are managed by HIVE. That is, we can add/delete/modify any data using HIVE. When we DROP the table, along with the table, the data will also get deleted.
Eg: CREATE TABLE tweets (text STRING, words INT, length INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
In the case of External Tables, only the table is managed by HIVE. The data present in these tables can be in any storage location, such as HDFS. We can't add/delete/modify the data in these tables; we can only use the data via SELECT statements. When we DROP the table, only the table gets deleted and not the data contained in it. This is why it's said that only the metadata gets deleted. When we create EXTERNAL tables, we need to mention the location of the data.
Eg: CREATE EXTERNAL TABLE tweets (text STRING, words INT, length INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION '/user/hive/warehouse/tweets';
Hive stores only the metadata in the metastore; the original data lives outside of Hive. When we use an external table we can specify a LOCATION, and because of this our original data won't be affected when we drop the table.
When there is data already in HDFS, an external Hive table can be created to describe the data. It is called EXTERNAL because the data in the external table is specified in the LOCATION properties instead of the default warehouse directory.
When keeping data in the internal tables, Hive fully manages the life cycle of the table and data. This means the data is removed once the internal table is dropped. If the external table is dropped, the table metadata is deleted but the data is kept. Most of the time, an external table is preferred to avoid deleting data along with tables by mistake.
For managed tables, Hive controls the lifecycle of their data. Hive stores the data for managed tables in a sub-directory under the directory defined by hive.metastore.warehouse.dir by default.
When we drop a managed table, Hive deletes the data in the table. But managed tables are less convenient for sharing with other tools. For example, let's say we have data that is created and used primarily by Pig, but we want to run some queries against it without giving Hive ownership of the data.
At that time, external table is defined that points to that data, but doesn’t take ownership of it.
In Hive We can also create an external table. It tells Hive to refer to the data that is at an existing location outside the warehouse directory.
Dropping External tables will delete metadata but not the data.
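A sketch of that Pig scenario (the path and columns are made up; assume Pig wrote comma-delimited text files there):
CREATE EXTERNAL TABLE pig_output (id INT, value STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/pig/output';
DROP TABLE pig_output;   -- removes only the Hive metadata; Pig's files remain in place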
I would like to add that internal tables are used when the data needs to be updated or some rows need to be deleted, because ACID properties can be supported on internal tables but not on external tables.
Please ensure that there is a backup of the data in an internal table, because if an internal table is dropped the data will also be lost.
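As a sketch, an ACID-enabled internal table might look like this (transactions must be enabled in the Hive configuration, and the exact requirements, such as bucketing and the ORC format, vary by Hive version):
CREATE TABLE emp_acid (emp_id INT, emp_name STRING)
CLUSTERED BY (emp_id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
UPDATE emp_acid SET emp_name = 'updated' WHERE emp_id = 1;   -- allowed because the table is transactional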
In simple words, there are two things:
Hive manages data inside its warehouse, i.e. it will not delete data that lives outside the warehouse.
When we delete a table:
1) For internal tables the data is managed internally in the warehouse, so it will be deleted.
2) For external tables the data is managed externally, outside the warehouse, so it can't be deleted by Hive, and clients other than Hive can also use it.
