I want to apply an archive and purge mechanism on Hive tables, covering both internal and external tables, partitioned and non-partitioned.
I have a site_visitors table and it's partitioned by visit_date.
I want to archive the site_visitors data for users who have not visited my site in the last year. At the same time, I don't want to keep this archived data in the same table directory; it can live in a separate, dedicated location.
You can handle this at the partition level in HDFS; below is one way to achieve it.
Your internal/main table sits on top of HDFS, and its directory will look something like this:
hdfs://namenode/user/hive/warehouse/schema.db/site_visitors/visit_date=2017-01-01
hdfs://namenode/user/hive/warehouse/schema.db/site_visitors/visit_date=2017-01-02
hdfs://namenode/user/hive/warehouse/schema.db/site_visitors/visit_date=2017-01-03
You can create an archive table on top of HDFS, or if you just want to archive the raw data you can move the partitions to another HDFS location. Either way, your archive HDFS location will look something like this:
hdfs://namenode/hdfs_location/site_visitors/visit_date=2017-01-01
hdfs://namenode/hdfs_location/site_visitors/visit_date=2017-01-02
hdfs://namenode/hdfs_location/site_visitors/visit_date=2017-01-03
You can run a UNIX shell script, or a script in any other language used in your environment, to move the files from the table's HDFS location to the archive HDFS location based on the partition dates.
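For example, here is a minimal sketch of that move-only approach. It assumes the paths shown above, GNU date, a 365-day retention, and a beeline connection URL in CONN_URL; adjust all of these to your environment.

#!/bin/bash
# Move partition directories older than the cutoff date to an archive location,
# then drop the corresponding partitions from the main table's metadata.
SRC=/user/hive/warehouse/schema.db/site_visitors
DST=/hdfs_location/site_visitors
cutoff=$(date --date="365 days ago" +%Y-%m-%d)

hdfs dfs -mkdir -p "${DST}"
for dir in $(hdfs dfs -ls "${SRC}" | awk '{print $NF}' | grep 'visit_date='); do
  part_date=${dir##*visit_date=}
  # Plain string comparison works because the dates are zero-padded yyyy-MM-dd
  if [[ "${part_date}" < "${cutoff}" ]]; then
    hdfs dfs -mv "${dir}" "${DST}/visit_date=${part_date}"
    beeline -u "${CONN_URL}" -e "ALTER TABLE schema.site_visitors DROP IF EXISTS PARTITION (visit_date='${part_date}')"
  fi
done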
Alternatively, you can take the approach below, where you load the data into an archive table and then drop the archived partitions from the original table.
#!/bin/bash
ARCHIVE=$1
now=$(date +%Y-%m-%d)
StartDate=$now
#archive_dt is the cutoff date, ARCHIVE days before today; it will be used for the insert and the partition drop below
archive_dt=$(date --date="${now} - ${ARCHIVE} day" +%Y-%m-%d)
EndDate=$archive_dt
#You can use hive, beeline or impala-shell to insert the data into the archive table; I'm using beeline in this example
beeline -u ${CONN_URL} -e "insert into table ${SCHEMA}.archive_table partition (visit_date) select * from ${SCHEMA}.${TABLE_NAME} where visit_date < ${archive_dt}"
#After the data has been loaded into the archive table, drop the corresponding partitions in the original table
beeline -u ${CONN_URL} -e "ALTER TABLE ${SCHEMA}.main_table DROP PARTITION(visit_date < ${archive_dt})"
#Repair the tables to sync the metadata after alterations
beeline -u ${CONN_URL} -e "MSCK REPAIR TABLE ${SCHEMA}.main_table; MSCK REPAIR TABLE archiveSchema.archive_table"
Maybe this is an easy question, but I am having a difficult time resolving the issue. I have a pseudo-distributed HDFS that contains recordings encoded with protobuf 3.0.0. Using Elephant Bird and Hive, I am able to put that data into Hive tables and query it. The problem I am having is partitioning the data.
This is the table create statement that I am using
CREATE EXTERNAL TABLE IF NOT EXISTS test_messages
PARTITIONED BY (dt string)
ROW FORMAT SERDE
"com.twitter.elephantbird.hive.serde.ProtobufDeserializer"
WITH serdeproperties (
"serialization.class"="path.to.my.java.class.ProtoClass")
STORED AS SEQUENCEFILE;
The table is created and I do not receive any runtime errors when I query the table.
When I attempt to load data as follows:
ALTER TABLE test_messages_20180116_20180116 ADD PARTITION (dt = '20171117') LOCATION '/test/20171117'
I receive an "OK" statement. However, when I query the table:
select * from test_messages limit 1;
I receive the following error:
Failed with exception java.io.IOException:java.lang.IllegalArgumentException: FieldDescriptor does not match message type.
I have been reading up on Hive tables and have seen that the partition columns do not need to be part of the data being loaded. The reason I am trying to partition by date is partly performance but, more so, because the "LOAD DATA ..." statements move the files between directories in HDFS.
P.S. I have verified that I am able to run queries against the Hive table without partitioning.
Any thoughts?
I see that you have created an EXTERNAL TABLE. For an external table, Hive does not manage the underlying data: adding a partition only registers metadata and does not create the directory for you, so you need to create the folder yourself using HDFS commands, MapReduce, or Spark. You can check the HDFS location '/test/dt=20171117' and you will see that the folder has not been created.
My suggestion is to create the folder (partition) using "hadoop fs -mkdir '/test/20171117'" and then try to query the table. It will return 0 rows at first, but you can then add data to that folder and read it from Hive.
You need to specify a LOCATION for an EXTERNAL TABLE
CREATE EXTERNAL TABLE
...
LOCATION '/test';
Then, is the data actually a sequence file? All you've said is that it's protobuf data. I'm not sure how the elephantbird library works, but you'll want to double check that.
Then, your table locations need to look like /test/dt=value in order for Hive to read them.
After you create an external table over an existing HDFS location, you must run MSCK REPAIR TABLE table_name for the partitions to be added to the Hive metastore.
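Putting those pieces together, a minimal sketch (keeping the original SerDe, and assuming the data really is protobuf records inside SequenceFiles under /test) would look like this:

CREATE EXTERNAL TABLE IF NOT EXISTS test_messages
PARTITIONED BY (dt string)
ROW FORMAT SERDE 'com.twitter.elephantbird.hive.serde.ProtobufDeserializer'
WITH SERDEPROPERTIES ('serialization.class'='path.to.my.java.class.ProtoClass')
STORED AS SEQUENCEFILE
LOCATION '/test';

Then lay the files out as /test/dt=20171117/... in HDFS and register the partitions either one at a time or in bulk:

ALTER TABLE test_messages ADD IF NOT EXISTS PARTITION (dt='20171117') LOCATION '/test/dt=20171117';
-- or discover every dt=... directory under /test at once
MSCK REPAIR TABLE test_messages;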
I have to create a Hive table from data present in Oracle tables.
I'm doing a Sqoop import, thereby converting the Oracle data into HDFS files, and then creating a Hive table on top of those files.
The Sqoop job completes successfully and the files get generated in the HDFS target directory.
Then I run the create table script in Hive. The table gets created, but it is empty; no data is seen in the Hive table.
Has anyone faced a similar problem?
Hive's default field delimiter is Ctrl-A; if you don't specify any delimiter, Hive assumes that default. Add the line below to your Hive create table statement:
row format delimited fields terminated by '\t'
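For example, a hypothetical external table over the Sqoop output directory might look like the following; the column names and the path are placeholders, and the delimiter must match whatever delimiter the Sqoop import actually used.

CREATE EXTERNAL TABLE my_oracle_table (
  id INT,
  name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/user/hive/sqoop_target_dir';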
Your Hive script and your expectation are wrong. You are trying to create a partitioned table on top of data that you have already imported, and partitions don't work that way. If your table definition had no partition in it, you would be able to see the data.
Basically, if you want a partitioned table, you can't create it directly on top of the underlying data the way you have tried above. If you want Hive partitions, load the data from an intermediate table, or from that Sqoop directory, into your partitioned table; that is what gives you Hive partitions, as sketched below.
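A minimal sketch of that two-step approach, with hypothetical table names, columns, and the order_date partition column:

-- 1. Non-partitioned staging table directly over the Sqoop output directory
CREATE EXTERNAL TABLE staging_orders (
  id INT,
  amount DOUBLE,
  order_date STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/hive/sqoop_target_dir';

-- 2. Partitioned target table, loaded from the staging table
CREATE TABLE orders_partitioned (
  id INT,
  amount DOUBLE
)
PARTITIONED BY (order_date STRING);

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

INSERT OVERWRITE TABLE orders_partitioned PARTITION (order_date)
SELECT id, amount, order_date FROM staging_orders;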
Let's say I created a Hive external table "myTable" from the file myFile.csv (located in HDFS).
myFile.csv changes every day, so I'm interested in updating "myTable" once a day too.
Is there any HiveQL query that updates the table every day?
Thank you.
P.S.
I would like to know if it works the same way with directories: let's say I create a Hive partition from the HDFS directory "myDir" when "myDir" contains 10 files. The next day "myDir" contains 20 files (10 files were added). Should I update the Hive partition?
There are basically two types of tables in Hive.
One is the managed table, managed by the Hive warehouse: whenever you load data into such a table, the data is copied into the internal warehouse.
So you will not necessarily see the latest data in the query output.
The other is the external table, for which Hive does not copy the data into its internal warehouse.
So whenever you run a query on the table, it retrieves the data from the underlying files,
and you always get the latest data in the query output.
That is one of the goals of external tables.
You can even drop the table and the data is not lost.
If you add a LOCATION clause to your create table statement that points at the directory holding myFile.csv (LOCATION expects a directory rather than a single file), you shouldn't have to update anything in Hive. Queries will always read the latest version of the file(s) in that directory.
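A minimal sketch, assuming myFile.csv sits in a hypothetical HDFS directory /data/myTable and is comma-separated with two string columns:

CREATE EXTERNAL TABLE myTable (
  col1 STRING,
  col2 STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/myTable';

No daily refresh is needed: each query re-reads whatever files are currently in /data/myTable. Only new partition directories would require an ALTER TABLE ... ADD PARTITION or MSCK REPAIR TABLE.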
Recently I have wanted to load log files into Hive tables, and I want a tool that can read data from a certain directory and load it into Hive automatically. This directory may include lots of subdirectories; for example, the directory is '/log' and the subdirectories are '/log/20130115', '/log/20130116', '/log/20130117'. Is there an ETL tool that can achieve this: once new data is stored in the directory, the tool detects it automatically and loads it into a Hive table? Is there such a tool, or do I have to write a script myself?
You can easily do this using Hive external tables and partitioning your table by day. For example, create your table as such:
create external table mytable(...)
partitioned by (day string)
location '/user/hive/warehouse/mytable';
This will essentially create an empty table in the metastore and make it point to /user/hive/warehouse/mytable.
Then you can load your data into this directory using subdirectories named key=value, where key is your partition name (here "day") and value is the value of your partition. For example:
hadoop fs -put /log/20130115 /user/hive/warehouse/mytable/day=20130115
Once your data is loaded there, it is in the HDFS directory, but the Hive metastore doesn't know yet that it belongs to the table, so you can add it this way:
alter table mytable add partition(day='20130115');
And you should be good to go, the metastore will be updated with your new partition, and you can now query your table on this partition.
This should be trivial to script: you can create a cron job that runs once a day, executes these commands in order, and finds the partition to load with the date command, for example by running this command:
hadoop fs -test -e /log/`date +%Y%m%d`
and checking whether $? equals 0 will tell you if the directory is there; if it is, you can transfer it and add the partition as described above.
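Put together, a minimal daily script might look like the sketch below. It assumes the table and paths from the example above and that the daily /log/<yyyymmdd> directories are already in HDFS (as the -test command implies); if they are on the local disk, use hadoop fs -put instead of -cp.

#!/bin/bash
# Hypothetical daily cron script: copy today's logs under the table location
# and register the matching Hive partition.
day=$(date +%Y%m%d)

if hadoop fs -test -e /log/${day}; then
  hadoop fs -cp /log/${day} /user/hive/warehouse/mytable/day=${day}
  hive -e "ALTER TABLE mytable ADD IF NOT EXISTS PARTITION (day='${day}')"
fi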
You can make use of the LOAD DATA command provided by Hive. It matches your use case exactly: specify a directory on your local file system and load Hive tables from it.
Example usage -
LOAD DATA LOCAL INPATH '/home/user/some-directory'
OVERWRITE INTO TABLE tablename
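In the context of the daily directories above, you would typically load each day into its own partition, for example (assuming the partitioned mytable from the earlier answer):

LOAD DATA LOCAL INPATH '/log/20130115'
OVERWRITE INTO TABLE mytable PARTITION (day='20130115');

Note that LOAD DATA pointed at a directory loads every file in that directory, and with LOCAL it copies the files rather than moving them.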
I'd like to know how to find the mapping between Hive tables and the actual HDFS files (or rather, directories) that they represent. I need to access the table files directly.
Where does Hive store its files in HDFS?
Hive tables may not necessarily be stored in a warehouse (since you can create tables located anywhere on the HDFS).
You should use DESCRIBE FORMATTED <table_name> command.
hive -S -e "describe formatted <table_name> ;" | grep 'Location' | awk '{ print $NF }'
Please note that partitions may be stored in different places and to get the location of the alpha=foo/beta=bar partition you'd have to add partition(alpha='foo',beta='bar') after <table_name>.
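For example, to pull that partition's location straight from the shell, the same one-liner works with the partition spec appended:

hive -S -e "describe formatted <table_name> partition(alpha='foo',beta='bar');" | grep 'Location' | awk '{ print $NF }'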
The location they are stored on the HDFS is fairly easy to figure out once you know where to look. :)
If you go to http://NAMENODE_MACHINE_NAME:50070/ in your browser it should take you to a page with a Browse the filesystem link.
In the $HIVE_HOME/conf directory there is the hive-default.xml and/or hive-site.xml which has the hive.metastore.warehouse.dir property. That value is where you will want to navigate to after clicking the Browse the filesystem link.
In mine, it's /usr/hive/warehouse. Once I navigate to that location, I see the names of my tables. Clicking on a table name (which is just a folder) will then expose the partitions of the table. In my case, I currently only have it partitioned on date. When I click on the folder at this level, I will then see files (more partitioning will have more levels). These files are where the data is actually stored on the HDFS.
I have not attempted to access these files directly; I'm assuming it can be done. I would take GREAT care if you are thinking about editing them. :)
For me, I'd figure out a way to do what I need without direct access to the Hive data on disk. If you need access to raw data, you can use a Hive query and output the result to a file. These will have the exact same structure (delimiter between columns, etc.) as the files on the HDFS. I do queries like this all the time and convert them to CSVs.
The section about how to write data from queries to disk is https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Writingdataintothefilesystemfromqueries
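For instance, a query result can be written out like this; the path, delimiter, and table name are placeholders:

INSERT OVERWRITE LOCAL DIRECTORY '/tmp/my_export'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
SELECT * FROM my_table;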
UPDATE
Since Hadoop 3.0.0-alpha1 there has been a change in the default port numbers: NAMENODE_MACHINE_NAME:50070 becomes NAMENODE_MACHINE_NAME:9870. Use the latter if you are running on Hadoop 3.x. The full list of port changes is described in HDFS-9427.
In Hive terminal type:
hive> set hive.metastore.warehouse.dir;
(it will print the path)
Typing show create table <table_name> in the Hive CLI will also give you the exact location of your Hive table.
To summarize a few points posted earlier:
In hive-site.xml, the property hive.metastore.warehouse.dir specifies where the files are located within HDFS:
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
To view files, use this command:
hadoop fs -ls /user/hive/warehouse
or
http://localhost:50070
Utilities > Browse the file system
or
http://localhost:50070/explorer.html#/
tested under hadoop-2.7.3, hive-2.1.1
Run describe formatted <table_name>; inside the Hive shell.
Notice the "Location" value that shows the location of the table.
Another way to check where a specific table is stored is to execute this query in the Hive interactive interface:
show create table table_name;
where table_name is the name of the subject table.
An example for the above query on 'customers' table would be something like this:
CREATE TABLE `customers`(
`id` string,
`name` string)
COMMENT 'Imported by sqoop on 2016/03/01 13:01:49'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'hdfs://quickstart.cloudera:8020/user/hive/warehouse/sqoop_workspace.db/customers'
TBLPROPERTIES (
'COLUMN_STATS_ACCURATE'='true',
'numFiles'='4',
'totalSize'='77',
'transient_lastDdlTime'='1456866115')
LOCATION in the example above is what you should focus on; that is the HDFS location of the table within the Hive warehouse.
A Hive database is nothing but a directory within HDFS with a .db extension.
So, from a Unix or Linux host connected to HDFS, search with one of the following, depending on your Hadoop distribution:
hdfs dfs -ls -R / 2>/dev/null|grep db
or
hadoop fs -ls -R / 2>/dev/null|grep db
You will see the full paths of the .db database directories. All tables reside under their respective .db database directories.
Hive tables are stored in the Hive warehouse directory.
By default, MapR configures the Hive warehouse directory to be /user/hive/warehouse under the root volume. This default is defined in the $HIVE_HOME/conf/hive-default.xml.
In a Sandbox environment you need to look under /apps/hive/warehouse/, and on a normal cluster under /user/hive/warehouse.
If you look at the hive-site.xml file you will see something like this
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/usr/hive/warehouse </value>
<description>location of the warehouse directory</description>
</property>
/usr/hive/warehouse is the default location for all managed tables.
External tables may be stored at a different location.
describe formatted <table_name> is the Hive shell command that can be used more generally to find the location of the data pertaining to a Hive table.
In Hive, tables are actually stored in a few places. Specifically, if you use partitions (which you should, if your tables are very large or growing) then each partition can have its own storage.
To show the default location where table data or partitions will be created when you create them through standard Hive commands (insert overwrite ... partition ... and such):
describe formatted dbname.tablename
To show the actual location of a particular partition within a HIVE table, instead do this:
describe formatted dbname.tablename partition (name=value)
If you look in your filesystem where a table "should" live and you find no files there, it's very likely that the table is built (usually incrementally) by creating new partitions and pointing each partition at some other location. This is a great way of building tables from things like daily imports from third parties, since it avoids having to copy the files around or store them more than once in different places.
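A minimal sketch of that pattern; the table, partition column, and path are placeholders:

ALTER TABLE dbname.tablename ADD IF NOT EXISTS PARTITION (ds='2018-01-16')
LOCATION 'hdfs:///data/third_party/daily_drop/2018-01-16';

The partition then reads the vendor's files in place, nothing is copied into the warehouse, and describe formatted dbname.tablename partition (ds='2018-01-16') will show that location.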