HPE Vertica: DROP_PARTITION dynamic predicate value

From the Vertica Docs:
DROP_PARTITION ( table_name , partition_value [ , ignore_moveout_errors, reorganize_data ])
Can the partition_value predicate be dynamic by any method?
I want to drop the partition based on a MAX(partition_col_val) condition from another staging table.
Vertica also does not support variable creation as such, where I could store the MAX(partition_col_val).
Is there any workaround possible?

You cannot use sub-queries to generate a dynamic predicate value for your DROP_PARTITION.
Normally I work around the lack of this dynamic feature with a script that generates the DROP_PARTITION SQL expression into a .sql file, which I then execute in the next step.
See an example here:
create table tblone (id int not null) partition by id;
insert into tblone values (1);
commit;
-- flush the output of the query into a file
\o /tmp/file.sql
select
'SELECT DROP_PARTITION(''public.tblone'','||
max(id)||
');' from tblone;
-- execute the content of the file
\i /tmp/file.sql
-- this is the generated content of /tmp/file.sql:
SELECT DROP_PARTITION('public.tblone',1);
This approach is for when your partitions are based on non-date values, or on data that needs to be derived from other data sets.
If your partition key is a date, or a value derived from a date column, you can use an internal function to populate the DROP_PARTITION key value dynamically:
drop table tblone cascade;
create table tblone (id date not null) partition by
(((date_part('year', id) * 100) + date_part('month', id)));
insert into tblone values (getdate());
commit;
dbadmin=> select * from tblone;
id
------------
2017-01-04
(1 row)
dbadmin=> SELECT DROP_PARTITION('tblone',(date_part('year', getdate()) * 100) + date_part('month', getdate()));
DROP_PARTITION
-------------------
Partition dropped
(1 row)
dbadmin=> select * from tblone;
id
----
(0 rows)
You can always play with getdate() to target the current month, the previous month, or any other period you wish.
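For instance, a minimal sketch (assuming the same year*100 + month partition scheme as above) that drops the previous month's partition by shifting getdate() back with ADD_MONTHS:
SELECT DROP_PARTITION('tblone',
       (date_part('year',  add_months(getdate(), -1)) * 100) +
        date_part('month', add_months(getdate(), -1)));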
Another option is to use vsql command-line variables.
Example:
dbadmin=> drop table tblone cascade;
DROP TABLE
dbadmin=> create table tblone (id int not null) partition by id;
CREATE TABLE
dbadmin=> insert into tblone values (1);
1
dbadmin=> commit;
COMMIT
dbadmin=> select * from tblone;
id
----
1
(1 row)
-- show only tuples
dbadmin=> \t
Showing only tuples.
-- write the max value into a file
dbadmin=> \o /tmp/file
dbadmin=> select max(id) from tblone;
dbadmin=> \o
dbadmin=> \t
Tuples only is off.
-- set the variable to the file content (your max value)
dbadmin=> \set maxvalue `cat /tmp/file`
dbadmin=> \echo :maxvalue
1
-- run the drop partition using the variable
dbadmin=> SELECT DROP_PARTITION('tblone',:maxvalue);
DROP_PARTITION
-------------------
Partition dropped
(1 row)
dbadmin=> select * from tblone;
id
----
(0 rows)
I hope this helped :)
An easy way to drop many partitions from a table using a single line of code is to use MOVE_PARTITIONS_TO_TABLE into a dummy table and then drop the dummy table
- this requires no lock on the main table, and dropping the dummy table is a cheap task for the database (effectively a bulk of DROP_PARTITIONs).
create a dummy table from the base table (including projections).
generate a dynamic MOVE_PARTITIONS_TO_TABLE('source','from partition','to partition','target/dummy table') call.
drop the dummy table.
See a small example below (it is not 100% complete - you can adapt it).
It is the same approach (generate & execute):
\o /tmp/file.sql
-- generate the dummy-table DDL (Vertica supports CREATE TABLE ... LIKE ... INCLUDING PROJECTIONS)
select 'CREATE TABLE dummy LIKE source INCLUDING PROJECTIONS;';
-- :minpartition and :maxpartition are placeholders for your own
-- expressions returning the partition-key range to move
select
'SELECT MOVE_PARTITIONS_TO_TABLE(''source'','''||
:minpartition||
''','''||
:maxpartition||
''',''dummy'');'
from tblone;
select 'DROP TABLE dummy CASCADE;';
\o
-- make sure the content of the file is correct, then execute it
\i /tmp/file.sql
BTW - if you are looking for Vertica database articles and scripts, I post from time to time at http://www.aodba.com

Related

SELECT AS OF a version before column drop

1/ Do FLASHBACK and SELECT AS OF / VERSIONS BETWEEN use the same source of history to fall back on? This question is related to the second question.
2/ I am aware that FLASHBACK cannot go back before a DDL change.
My question is: for SELECT AS OF, would it be able to select something from before a DDL change?
Take for example
CREATE TABLE T
(col1 NUMBER, col2 NUMBER);
INSERT INTO T(col1, col2) VALUES('1', '1');
INSERT INTO T(col1, col2) VALUES('2', '2');
COMMIT;
SLEEP(15) -- pseudocode: wait 15 seconds
ALTER TABLE T DROP COLUMN col2;
SELECT * FROM T
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' SECOND);
Would the select return 2 columns or 1?
Pardon me, I do not have a database at hand to test.
Any DDL that alters the structure of a table invalidates any existing undo data for the table, so you will get the error ORA-01466: unable to read data - table definition has changed.
Here is a simple test
CREATE TABLE T
(col1 NUMBER, col2 NUMBER);
INSERT INTO T(col1, col2) VALUES('1', '1');
INSERT INTO T(col1, col2) VALUES('2', '2');
COMMIT;
SLEEP(15) -- pseudocode: wait 15 seconds
ALTER TABLE T DROP COLUMN col2;
SELECT * FROM T
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '60' SECOND);
This raises ORA-01466 upon executing the above SELECT statement.
However, DDL operations that only alter the storage attributes of a table do not invalidate undo data, so you can still use flashback query after them.
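For instance, a minimal sketch of a storage-only change after which the flashback query still works (the PCTFREE value is arbitrary):
ALTER TABLE T PCTFREE 20; -- storage attribute only; existing undo stays usable
SELECT * FROM T
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '60' SECOND); -- still succeeds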
1) FLASHBACK TABLE and SELECT ... AS OF use the same source: UNDO. There is also FLASHBACK DATABASE - although it uses the same mechanism, it relies on a separate source, flashback logs, which are optional and must be configured separately.
2) Flashback table and flashback queries can go back before a DDL change if you enable a flashback archive.
To use that feature, add a few statements to the sample code:
CREATE FLASHBACK ARCHIVE my_flashback_archive TABLESPACE users RETENTION 10 YEAR;
...
ALTER TABLE t FLASHBACK ARCHIVE my_flashback_archive;
Now this statement will return 1 column:
SELECT * FROM T;
And this statement will return 2 columns:
SELECT * FROM T AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' SECOND);

Insert multiple rows into a table using SELECT when the table has a primary key, in Oracle SQL [duplicate]

This question already has answers here:
How to create id with AUTO_INCREMENT on Oracle?
(18 answers)
Closed 8 years ago.
I am facing an issue while inserting multiple rows in one go into a table, because the column id has a primary key and its value is created based on a sequence.
For example:
create table test (
iD number primary key,
name varchar2(10)
);
insert into test values (123, 'xxx');
insert into test values (124, 'yyy');
insert into test values (125, 'xxx');
insert into test values (126, 'xxx');
The following statement creates a constraint violation error:
insert into test
(
select (SELECT MAX (id) + 1 FROM test) as id,
name from test
where name='xxx'
);
This query should insert 3 rows into table test (those having name='xxx').
You're saying that your query inserts rows with a primary key ID based on a sequence. Yet in your insert/select there is select (SELECT MAX (id) + 1 FROM test) as id, which clearly is not based on a sequence. It may be the case that you are not using the term "sequence" in the usual, Oracle way.
Anyway, there are two options for you ...
Create a sequence, e.g. seq_test_id, with a starting value just above select max(id) from test, and use it (i.e. seq_test_id.nextval) in your query instead of the select max(id)+1 from test.
Fix the actual subselect to nvl((select max(id) from test),0)+rownum instead of (select max(id)+1 from test).
Please note, however, that option 2 (as well as your original solution) will cause you huge trouble whenever your code runs in multiple concurrent database sessions. So option 1 is strongly recommended.
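A minimal sketch of option 1 (the sequence name seq_test_id and its start value are assumptions; start just above the current MAX(id), which is 126 in the sample data):
CREATE SEQUENCE seq_test_id START WITH 127;
INSERT INTO test
SELECT seq_test_id.NEXTVAL, name
FROM test
WHERE name = 'xxx';
Oracle allows sequence.NEXTVAL in the top-level SELECT list of an INSERT ... SELECT, so each selected row gets a fresh id.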
Use
insert into test (
select (SELECT MAX (id) FROM test) + rownum as id,
name from test
where name='xxx'
);
as a workaround.
Of course, you should be using sequences for integer primary keys.
If you want to insert an ID/primary key value generated by a sequence, you should use the sequence instead of selecting max(ID)+1.
Usually this is done using a trigger on your table which is executed for each row. See the sample below:
CREATE TABLE "MY_TABLE"
(
"MY_ID" NUMBER(10,0) CONSTRAINT PK_MY_TABLE PRIMARY KEY ,
"MY_COLUMN" VARCHAR2(100)
);
CREATE SEQUENCE "S_MY_TABLE"
MINVALUE 1 MAXVALUE 999999999999999999999999999
INCREMENT BY 1 START WITH 10 NOCACHE ORDER NOCYCLE;
CREATE OR REPLACE TRIGGER "T_MY_TABLE"
BEFORE INSERT
ON
MY_TABLE
REFERENCING OLD AS OLDEST NEW AS NEWEST
FOR EACH ROW
WHEN (NEWEST.MY_ID IS NULL)
DECLARE
IDNOW NUMBER;
BEGIN
SELECT S_MY_TABLE.NEXTVAL INTO IDNOW FROM DUAL;
:NEWEST.MY_ID := IDNOW;
END;
/
ALTER TRIGGER "T_MY_TABLE" ENABLE;
insert into MY_TABLE (MY_COLUMN) values ('DATA1');
insert into MY_TABLE (MY_COLUMN) values ('DATA2');
insert into MY_TABLE (MY_ID, MY_COLUMN) values (S_MY_TABLE.NEXTVAL, 'DATA3');
insert into MY_TABLE (MY_ID, MY_COLUMN) values (S_MY_TABLE.NEXTVAL, 'DATA4');
insert into MY_TABLE (MY_COLUMN) values ('DATA5');
select * from MY_TABLE;

dynamically creating the partition name as sysdate?

How do I give a name to a partition dynamically? I have a requirement in which I'm creating interval partitioning on a monthly basis. I want to give '1-jan-2014', '1-Feb-2014' as partition names. Please suggest an approach.
I see a problem here, because interval partitioning uses system-generated partition names...
So you have to change these generated names after creation; this problem is solved here:
https://dba.stackexchange.com/questions/33225/rename-interval-partitioning-system-generated-partition-names
or here
http://bobjankovsky.org/show.php?seq=90
You could make a job which will periodically change the partition names.
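A minimal sketch of such a renaming job (assumptions: the table is called MY_TABLE, it is interval-partitioned by a date column, and the desired name is the first day of the month each partition covers; HIGH_VALUE is a LONG, so it is fetched into a VARCHAR2 and executed to obtain the upper bound as a DATE):
DECLARE
  v_high  VARCHAR2(4000);
  v_upper DATE;
BEGIN
  FOR p IN (SELECT partition_name, high_value
              FROM user_tab_partitions
             WHERE table_name = 'MY_TABLE'
               AND partition_name LIKE 'SYS_P%')  -- system-generated names only
  LOOP
    v_high := p.high_value;  -- implicit LONG -> VARCHAR2 conversion
    EXECUTE IMMEDIATE 'SELECT ' || v_high || ' FROM dual' INTO v_upper;
    -- the partition covers the month just before its upper bound
    EXECUTE IMMEDIATE 'ALTER TABLE my_table RENAME PARTITION "'
      || p.partition_name || '" TO "'
      || TO_CHAR(ADD_MONTHS(v_upper, -1), 'FMdd-mon-yyyy') || '"';
  END LOOP;
END;
/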
If you're adding a partition to an existing table you'd need to do it dynamically. And since you want the name to have dashes as date-element separators, the partition names would have to be quoted identifiers, which will make working with them explicitly a bit more complicated later.
If you have a table that initially has named partitions:
create table t42 (date_col date, other_col varchar2(10))
partition by range (date_col)
(
partition "Pre-2014" values less than (date '2014-01-01'),
partition "1-jan-2014" values less than (date '2014-02-01'),
partition "1-feb-2014" values less than (date '2014-03-01'),
partition "1-mar-2014" values less than (date '2014-04-01'),
partition "1-apr-2014" values less than (date '2014-05-01')
);
Then you can manually add a new named partition:
alter table t42 add partition "1-may-2014" values less than (date '2014-06-01');
alter table t42 add partition "1-jun-2014" values less than (date '2014-07-01');
If you want to add a new partition based on sysdate (or any date) then you'd need dynamic SQL to generate that command; here it is in an anonymous block, but you could have a procedure if you want to pass a date in:
begin
execute immediate 'alter table t42 add partition "'
|| to_char(trunc(sysdate, 'MM'), 'FMdd-mon-yyyy')
|| '" values less than (date '''
|| to_char(add_months(trunc(sysdate, 'MM'), 1), 'YYYY-MM-DD')
|| ''')';
end;
/
Today that would generate and execute the command:
alter table t42 add partition "1-jul-2014" values less than (date '2014-08-01')
Then you can see the partition is named as you wanted:
select partition_name from user_tab_partitions order by partition_position;
PARTITION_NAME
------------------------------
Pre-2014
1-jan-2014
1-feb-2014
1-mar-2014
1-apr-2014
1-may-2014
1-jun-2014
1-jul-2014
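If you prefer the procedure form mentioned above, a minimal sketch (the name add_month_partition is an assumption):
create or replace procedure add_month_partition(p_date in date) as
begin
  execute immediate 'alter table t42 add partition "'
    || to_char(trunc(p_date, 'MM'), 'FMdd-mon-yyyy')
    || '" values less than (date '''
    || to_char(add_months(trunc(p_date, 'MM'), 1), 'YYYY-MM-DD')
    || ''')';
end;
/
-- e.g. create next month's partition ahead of time:
exec add_month_partition(add_months(sysdate, 1))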

Insert data from one schema to another schema

I have two schemas in a single database:
rxdata (it's a fresh schema)
fbdata
Table name - kostst (cost center)
column name - kst_id (cost center id)
column name - kst_name (cost center name)
I would like to insert the entire contents of the table kostst from fbdata into rxdata.
When I execute the command below, I get the following error; I know that the same IDs exist in both schemas (kst_id = 1 & 2):
SQL> insert into rxdata.kostst select * from fbdata.kostst;
insert into rxdata.kostst select * from fbdata.kostst
*
ERROR at line 1:
ORA-00001: unique constraint (RXDATA.SYS_C0070558) violated
SQL> select table_name,column_name from user_cons_columns where constraint_name=
'SYS_C0070558';
TABLE_NAME COLUMN_NAME
------------ ------------
KOSTST KST_ID
If you want to insert only the rows that don't exist, you could use:
insert into rxdata.kostst
select *
from fbdata.kostst f
where not exists (select null
from rxdata.kostst r
where r.kst_id = f.kst_id)
If you want to completely replace the contents of table R with those of table F, I suggest TRUNCATE+INSERT or DELETE+INSERT instead.
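A minimal sketch of the replace-completely variant:
TRUNCATE TABLE rxdata.kostst;  -- DDL: issues an implicit commit and cannot be rolled back
INSERT INTO rxdata.kostst
SELECT * FROM fbdata.kostst;
COMMIT;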

How to duplicate all data in a table except for a single column that should be changed

I have a question regarding a unified insert query against tables with different data
structures (Oracle). Let me elaborate with an example:
tb_customers (
id NUMBER(3), name VARCHAR2(40), archive_id NUMBER(3)
)
tb_suppliers (
id NUMBER(3), name VARCHAR2(40), contact VARCHAR2(40), xxx, xxx,
archive_id NUMBER(3)
)
The only column that is present in all tables is [archive_id]. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id for those records accordingly. [archive_id] is always part of the primary key.
My problem is with select statements to do the actual duplication of the data. Because the columns are variable, I am struggling to come up with a unified select statement that will copy the data and update the archive_id.
One solution (that works) is to iterate over all the tables in a stored procedure and do:
CREATE TABLE temp as (SELECT * from ORIGINAL_TABLE);
UPDATE temp SET archive_id=something;
INSERT INTO ORIGINAL_TABLE (select * from temp);
DROP TABLE temp;
I do not like this solution very much as the DDL commands muck up all restore points.
Does anyone else have any solution?
How about creating a global temporary table for each base table?
create global temporary table tb_customers$ as select * from tb_customers;
create global temporary table tb_suppliers$ as select * from tb_suppliers;
You don't need to create and drop these each time, just leave them as-is.
Your archive process is then a single transaction...
insert into tb_customers$ select * from tb_customers;
update tb_customers$ set archive_id = :v_new_archive_id;
insert into tb_customers select * from tb_customers$;
insert into tb_suppliers$ select * from tb_suppliers;
update tb_suppliers$ set archive_id = :v_new_archive_id;
insert into tb_suppliers select * from tb_suppliers$;
commit; -- this will clear the global temporary tables
Hope this helps.
I would suggest not having a single SQL statement for all tables and just using plain inserts:
insert into tb_customers_2
select id, name, 'new_archive_id' from tb_customers;
insert into tb_suppliers_2
select id, name, contact, xxx, xxx, 'new_archive_id' from tb_suppliers;
Or, if you really need a single SQL statement for all of them, at least pre-create all the temp tables (as temp tables) and leave them in place for next time. Then just use dynamic SQL to refer to the temp table (a sketch follows the snippet below).
insert into ORIGINAL_TABLE_TEMP (SELECT * from ORIGINAL_TABLE);
UPDATE ORIGINAL_TABLE_TEMP SET archive_id=something;
INSERT INTO NEW_TABLE (select * from ORIGINAL_TABLE_TEMP);
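A minimal sketch of that dynamic approach (the table list, the _TEMP suffix, and the new archive_id value are assumptions):
DECLARE
  v_new_archive_id NUMBER := 2;  -- hypothetical new archive id
BEGIN
  FOR t IN (SELECT table_name
              FROM user_tables
             WHERE table_name IN ('TB_CUSTOMERS', 'TB_SUPPLIERS'))
  LOOP
    EXECUTE IMMEDIATE 'INSERT INTO ' || t.table_name || '_TEMP SELECT * FROM ' || t.table_name;
    EXECUTE IMMEDIATE 'UPDATE ' || t.table_name || '_TEMP SET archive_id = :1'
      USING v_new_archive_id;
    EXECUTE IMMEDIATE 'INSERT INTO ' || t.table_name || ' SELECT * FROM ' || t.table_name || '_TEMP';
  END LOOP;
  COMMIT;  -- also clears the global temporary tables (ON COMMIT DELETE ROWS)
END;
/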
