I have a dynamically partitioned managed table in Hive, partitioned by (country, state).
I wish to add one more column to these partition columns, say (country, state, city).
I am thinking I could use ALTER TABLE tab_nm DROP PARTITION to remove the old partitions and then another ALTER TABLE tab_nm ADD PARTITION .. to add the new set of partition columns.
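Roughly what I have in mind (the partition values are just placeholders):
-- drop an existing (country, state) partition ...
ALTER TABLE tab_nm DROP PARTITION (country='US', state='CA');
-- ... then re-add it with the extra city column
ALTER TABLE tab_nm ADD PARTITION (country='US', state='CA', city='LA');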
In one blog I read that a new table should be created with the latest partitions and the data loaded into it from the table with the old partitions. But I do not wish to recreate the table, as it's a huge production table.
I still have not run the ALTER TABLE statements, since I am worried that DROP PARTITION may remove all the data in those partitions.
Please help.
Is there a way to dynamically partition an external Hive table? I referred to posts that suggested the following steps (sketched after the list):
Create an external non-partitioned table
Create an external partitioned table
Use an INSERT INTO TABLE command to copy the data from the non-partitioned table to the partitioned one
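A minimal sketch of those steps, with hypothetical table names, columns, and locations:
-- 1. external, non-partitioned table over the raw Parquet files
CREATE EXTERNAL TABLE base_tbl (country STRING, state STRING, amount DOUBLE)
STORED AS PARQUET
LOCATION '/data/base_tbl';
-- 2. external, partitioned table with the same data columns
CREATE EXTERNAL TABLE part_tbl (amount DOUBLE)
PARTITIONED BY (country STRING, state STRING)
STORED AS PARQUET
LOCATION '/data/part_tbl';
-- 3. dynamic-partition insert from the base table
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT INTO TABLE part_tbl PARTITION (country, state)
SELECT amount, country, state FROM base_tbl;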
The problem with the above solution is that I have to rerun the INSERT command whenever my base (non-partitioned) table is modified (Parquet files added or deleted).
I am looking for a solution where I do not need to run any INSERT command and my partitioned table is updated automatically as my non-partitioned table changes.
I want to dynamically generate ALTER statements like the one below (it is just an example; the statement will differ from table to table) for all the partitioned tables in a 12c DB.
Some tables may be partitioned by RANGE, LIST, etc.
The column name and partition type will also vary per table.
ALTER TABLE EMP
MODIFY PARTITION BY RANGE (START_DATE)
( PARTITION P1 VALUES LESS THAN (date'2021-1-1') ) ONLINE;
I have already created the tables without partitioning in another DB, and now I want to partition those tables that were partitioned in the source DB. So I want a simple script that can generate the code to partition the tables in the target DB. Note: the tables all have different partitioning schemes, and my goal is to keep them in sync with the source. Only the data differs between the two DBs.
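As a rough starting point (the schema name here is hypothetical), the source tables' partitioning DDL can be pulled from the dictionary with DBMS_METADATA, though the output would still have to be rewritten into the MODIFY ... PARTITION BY form:
-- full DDL, including the partition clause, for every partitioned table in a schema
SELECT DBMS_METADATA.GET_DDL('TABLE', t.table_name, t.owner) AS ddl
FROM all_part_tables t
WHERE t.owner = 'SCOTT';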
I have an existing bucketed table that has YEAR, MONTH, DAY partitioning, but I want to add additional partitioning by INGESTION_KEY, a column that doesn't exist in the existing table. This is to accommodate future table inserts so that I don't have to OVERWRITE a YEAR, MONTH, DAY partition every time I ingest data for that date; I can just do a simple INSERT INTO and create a new INGESTION_KEY partition.
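For example, a future daily load could then be a plain insert into a fresh partition (the values, columns, and source table are hypothetical; older Hive versions may also need hive.enforce.bucketing=true):
INSERT INTO TABLE dest_tbl PARTITION (year=2017, month=2, day=7, ingestion_key='124')
SELECT col1, col2 FROM staging_tbl;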
I need a year's worth of data in my new table to start, so I want to copy a year of partitions from my existing table to a new table. Rather than doing a Hive INSERT for each partition, I thought it would be quicker to use distcp to copy files into the new table's partition directories in the Hive warehouse directory in HDFS, then ADD PARTITION to the new table.
So, this is all I'm doing:
hadoop distcp /apps/hive/warehouse/src_db.db/src_tbl/year=2017/month=02/day=06 /apps/hive/warehouse/dest_db.db/dest_tbl/year=2017/month=02/day=06/ingestion_key=123
hive -e "ALTER TABLE dest_tbl ADD PARTITION (year=2017,month=02,day=06,ingestion_key='123')"
Both are managed tables, the new table dest_tbl is clustered by the same column into the same number of buckets as the src_tbl, and the only difference in schema is the addition of INGESTION_KEY.
So far my SELECT * FROM dest_tbl shows everything in the new table looking normal. So my question is: is there anything wrong with this approach? Is it bad to load a managed, bucketed table this way, or is this an acceptable alternative to INSERT if no transformations are being done on the copied data?
Thanks!!
Although I prefer copying via a Hive query, just to keep it all in Hive, it's OK to copy data files using other tools. But:
There is a dedicated command that adds the new partitions' metadata; you can use it in place of ALTER TABLE ... ADD PARTITION, and it can add many partitions at once:
MSCK REPAIR TABLE dest_tbl;
Keep using Hive's default partitioning format: partitionKey=partitionValue
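For example, after copying another day's files the same way (the paths mirror the ones in the question; day=07 is hypothetical), one MSCK run registers every directory that follows that layout, with no per-partition ALTER needed:
hadoop distcp /apps/hive/warehouse/src_db.db/src_tbl/year=2017/month=02/day=07 /apps/hive/warehouse/dest_db.db/dest_tbl/year=2017/month=02/day=07/ingestion_key=123
hive -e "MSCK REPAIR TABLE dest_tbl; SHOW PARTITIONS dest_tbl"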
I have a partitioned table in Greenplum (modeled after PostgreSQL), which has been partitioned with specific ranges of values.
Now I have to insert data into the same table again. The new partition values might overlap with existing ones. I have created an ALTER command with new start and end dates, but if there are overlaps, the command fails. So I need to create a partition for each date in order to avoid the whole command failing.
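For illustration, this is the kind of statement I mean (the table name and dates are placeholders):
ALTER TABLE sales ADD PARTITION
START (date '2021-01-01') INCLUSIVE
END (date '2021-01-02') EXCLUSIVE;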
Just wondering if there is a way in Greenplum to create partitions automatically based on the inserted data, just like Hive does.
Thanks for your help.
Greenplum does not (currently) create additional partitions for data which does not fit into an existing partition.
If you have a default partition on the table it will receive all the records which do not fit into one of the specified partitions. You can then use ALTER TABLE ... SPLIT DEFAULT PARTITION (see the documentation if required) to create the new partitions for any new dates at the end of the load batch.
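A minimal sketch, reusing the hypothetical sales table and dates from the question:
-- rows for unanticipated dates land in the default partition during the load;
-- afterwards, split them out into a proper range partition
ALTER TABLE sales SPLIT DEFAULT PARTITION
START (date '2021-01-01') INCLUSIVE
END (date '2021-02-01') EXCLUSIVE
INTO (PARTITION jan2021, DEFAULT PARTITION);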
I have dropped all the partitions in the Hive table using the ALTER command:
alter table emp drop partition (hiredate>'0');
After dropping the partitions I can still see the partition metadata. How do I delete this partition metadata? Can I use the same table for new partitions?
Partitioning is defined when the table is created. By running ALTER TABLE ... DROP PARTITION ... you are only deleting the data and metadata for the matching partitions, not the partitioning of the table itself.
Your best bet at this point will be to recreate the table without the partitioning. If there is some data you are trying to save, rename the current table, create the new table (without the partitioning), then run an INSERT from the old table to the new table.
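A minimal sketch of that rename-and-reload approach (the column list is hypothetical; match it to the real schema):
ALTER TABLE emp RENAME TO emp_old;
CREATE TABLE emp (
  empno INT,
  ename STRING,
  hiredate STRING  -- now a regular column; no PARTITIONED BY clause
);
INSERT INTO TABLE emp
SELECT empno, ename, hiredate FROM emp_old;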