Insert data from one schema to another schema - Oracle

I have two schemas in a single database:
rxdata (it's a fresh schema)
fbdata
Table name - kostst (cost center)
column name - kst_id (cost center id)
column name- kst_name (cost center name)
I would like to copy the entire contents of the table kostst from fbdata into rxdata.
When I execute the command below, I get the following error; I know that the same ids (kst_id = 1 and 2) exist in both schemas.
SQL> insert into rxdata.kostst select * from fbdata.kostst;
insert into rxdata.kostst select * from fbdata.kostst
*
ERROR at line 1:
ORA-00001: unique constraint (RXDATA.SYS_C0070558) violated
SQL> select table_name,column_name from user_cons_columns where constraint_name=
'SYS_C0070558';
TABLE_NAME COLUMN_NAME
------------ ------------
KOSTST KST_ID

If you want to insert only the rows that don't exist, you could use:
insert into rxdata.kostst
select *
from fbdata.kostst f
where not exists (select null
                  from rxdata.kostst r
                  where r.kst_id = f.kst_id);
If you want to completely replace the contents of rxdata.kostst with those of fbdata.kostst, I suggest TRUNCATE+INSERT or DELETE+INSERT instead.
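A minimal sketch of both variants (assuming rxdata.kostst can simply be emptied first; note that TRUNCATE is DDL and commits implicitly):
-- variant 1: TRUNCATE + INSERT (fast, but DDL and cannot be rolled back)
truncate table rxdata.kostst;
insert into rxdata.kostst select * from fbdata.kostst;
commit;
-- variant 2: DELETE + INSERT (slower, but plain DML)
delete from rxdata.kostst;
insert into rxdata.kostst select * from fbdata.kostst;
commit;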

Related

PL/SQL: I have a problem updating a table with data from another table

First table, ROOMDB:
roomnumber   rentalbalance
----------   -------------
N327         0
Second table, RENTALINVOICE:
invoicedate   roomnumber   totaldue
-----------   ----------   --------
11/26/2021    N327         2,200.00
My update code:
UPDATE ROOMDB
SET RENTALBALANCE = (SELECT TOTALDUE
                     FROM RENTALINVOICE
                     WHERE RENTALINVOICE.ROOMNUMBER = ROOMDB.ROOMNUMBER
                     AND INVOICEDATE = SYSDATE);
I need to update the RENTALBALANCE column in ROOMDB with the TOTALDUE from RENTALINVOICE. It successfully enters 2,200.00 for the specified room, but at the same time it wipes out the rest of the records in this column in ROOMDB.
Every time I run the update, it erases the values for every roomnumber other than the one I specified. Please help.
You are effectively also updating rows in roomdb that have no matching row in rentalinvoice, and for those rows the rentalbalance column is set to NULL.
Have a look at the following example:
drop table a;
create table a (id number, cost number);
insert into a values (1, 1);
insert into a values (2, 2);
drop table b;
create table b (id number, cost number);
insert into b values (1, 100);
-- updates all rows in a and sets cost to null when there is no row in b
update a set cost = (select cost from b where a.id = b.id);
select * from a;
-- reset the demo data
update a set cost = id;
-- only updates rows in a where there is a matching row in b
update a set cost = (select cost from b where a.id = b.id) where exists (select 1 from b where a.id = b.id);
select * from a;
When you execute an update command like this, Oracle updates every row of the table. For each row, the subquery is executed with that row's values. If the subquery finds no matching record, it returns NULL, and in that case the RENTALBALANCE column is set to NULL too.
(SELECT TOTALDUE
 FROM RENTALINVOICE
 WHERE RENTALINVOICE.ROOMNUMBER = ROOMDB.ROOMNUMBER
 AND INVOICEDATE = SYSDATE)
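Applied to the original tables, a minimal sketch of the corrected statement (keeping the asker's INVOICEDATE = SYSDATE comparison as written) would be:
UPDATE ROOMDB
SET RENTALBALANCE = (SELECT TOTALDUE
                     FROM RENTALINVOICE
                     WHERE RENTALINVOICE.ROOMNUMBER = ROOMDB.ROOMNUMBER
                     AND INVOICEDATE = SYSDATE)
-- only touch rooms that actually have a matching invoice
WHERE EXISTS (SELECT 1
              FROM RENTALINVOICE
              WHERE RENTALINVOICE.ROOMNUMBER = ROOMDB.ROOMNUMBER
              AND INVOICEDATE = SYSDATE);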

NVL2 and not exists in Oracle

I am pulling some records into table A from table X.
Now, I want to select records which are not available in table A but available in table B. But at the same time, I don't want to select records available in both tables.
Moreover, if a column in table A is null but the same column in the matching record in table B has a value, I want to take that value too.
Is it possible to do something like this in one statement?
This is a simple set-based operation. Relational databases are very good at that:
CREATE TABLE a (id NUMBER);
CREATE TABLE b (id NUMBER);
INSERT INTO a VALUES (1);
INSERT INTO a VALUES (3);
INSERT INTO b VALUES (2);
INSERT INTO b VALUES (3);
SELECT id FROM b
MINUS
SELECT id FROM a;
ID
2
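The MINUS above covers the rows-only-in-B part; for the second requirement (take B's value when A's column is NULL), a hedged sketch using an outer join and NVL could look like this, assuming a hypothetical val column on both tables:
-- rows that exist only in b, plus rows in both where a.val is NULL (use b's value);
-- the val column is illustrative only and not part of the tables above
SELECT b.id, NVL(a.val, b.val) AS val
FROM b
LEFT JOIN a ON a.id = b.id
WHERE a.id IS NULL
   OR a.val IS NULL;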

Create table in Hue after many with statements

I am having an issue creating a table in Hue after a series of temporary-table (WITH) clauses. A very high-level example is below; I am trying to create a table after the many temporary tables are defined.
Basically, I want to create a table from the last select statement, but I am running into errors both with the CREATE TABLE line and with figuring out what the result of the last SELECT is called.
With TABLEA as (Select * from TEST1.FILEA),
TableB as (Select * from tableA)
Select * from tableB
where TableB.Curr = 'TYPEE'
CREATE TABLE TEST
row format delimited
fields terminated by '|'
STORED AS RCFile
as Select * from TableB
In your query, please follow the syntax shown in the examples below:
create table <table_name> as <your_with_clause_select_query>
Example:
create table test as
with tableA as ( select * from test1.fileA)
select * from tableA;
You can also use nested select statements with CTAS; note that the ROW FORMAT and STORED AS clauses go before AS SELECT:
CREATE TABLE TEST
row format delimited fields terminated by '|'
STORED AS RCFile
AS
select * from (
    select *
    from test1.fileA
) b;
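Putting it together, the query from the question could be written as one CTAS, keeping the WITH clause after AS and the storage clauses before it (a sketch using the table and column names from the question):
CREATE TABLE TEST
row format delimited fields terminated by '|'
STORED AS RCFile
AS
with TABLEA as (Select * from TEST1.FILEA),
     TableB as (Select * from TableA)
Select * from TableB
where TableB.Curr = 'TYPEE';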

HPE Vertica : DROP_PARTITION dynamic predicate value

From the Vertica Docs:
DROP_PARTITION ( table_name , partition_value [ , ignore_moveout_errors, reorganize_data ])
Can the partition_value predicate be dynamic by any method?
I want to drop the partition based on a MAX(partition_col_val) condition taken from another staging table.
Vertica also does not support variables as such, in which I could keep the MAX(partition_col_val).
Is there any workaround possible?
You cannot use subqueries to generate a dynamic predicate value for your DROP_PARTITION.
Normally I work around the lack of this dynamic feature with a script that generates the DROP_PARTITION SQL expression into a .sql file, which I then execute in the next step.
See an example here:
create table tblone (id int not null) partition by id;
insert into tblone values (1);
commit;
-- flush the output of the query into a file
\o /tmp/file.sql
select
'SELECT DROP_PARTITION(''public.tblone'','||
max(id)||
');' from tblone;
-- execute the content of the file
\i /tmp/file.sql
-- this is the content of the generated file:
SELECT DROP_PARTITION('public.tblone',1);
This approach is for when the partition key is not date-based and its value has to be derived from other data sets.
If your partition key is a date, or a value derived from a date column, you can use a built-in function to populate the DROP_PARTITION key value dynamically:
drop table tblone cascade;
create table tblone (id date not null) partition by
(((date_part('year', id) * 100) + date_part('month', id)));
insert into tblone values (getdate());
commit;
dbadmin=> select * from tblone;
id
------------
2017-01-04
(1 row)
dbadmin=> SELECT DROP_PARTITION('tblone',(date_part('year', getdate()) * 100) + date_part('month', getdate()));
DROP_PARTITION
-------------------
Partition dropped
(1 row)
dbadmin=> select * from tblone;
id
----
(0 rows)
-- you can always play with getdate() to target the current month, the last month, or any period you wish
Another option is to use vsql command-line variables.
Example
dbadmin=> drop table tblone cascade;
DROP TABLE
dbadmin=> create table tblone (id int not null) partition by id;
CREATE TABLE
dbadmin=> insert into tblone values (1);
1
dbadmin=> commit;
COMMIT
dbadmin=> select * from tblone;
id
----
1
(1 row)
-- show only tuples
dbadmin=> \t
Showing only tuples.
-- spit the max value into a file
dbadmin=> \o /tmp/file
dbadmin=> select max(id) from tblone;
dbadmin=> \o
dbadmin=> \t
Tuples only is off.
-- set the value of the variable to the file content (your max value)
dbadmin=> \set maxvalue `cat /tmp/file`
dbadmin=> \echo :maxvalue
1
-- run the drop partition using the variable
dbadmin=> SELECT DROP_PARTITION('tblone',:maxvalue);
DROP_PARTITION
-------------------
Partition dropped
(1 row)
dbadmin=> select * from tblone;
id
----
(0 rows)
I hope this helped :)
An easy way to drop many partitions from a table with a single line of code is to use MOVE_PARTITIONS_TO_TABLE into a dummy table and then drop the dummy table. This requires no lock on the main table, and dropping the dummy table is a cheap task for the database (effectively a bulk DROP_PARTITION).
1. Create a dummy table from the base table (including projections).
2. Generate a dynamic MOVE_PARTITIONS_TO_TABLE('source', 'from partition', 'to partition', 'target/dummy table') call.
3. Drop the dummy table.
See the small example below (it is not 100% complete; you can adapt it).
It uses the same approach (generate & execute):
\o /tmp/file.sql
select 'create dummy table as source table including projections;';
select
'SELECT MOVE_PARTITIONS_TO_TABLE(''source'','''||
:minpartition()||
''','''||
:maxpartition()||
''',''target/dummy table'')'
from tblone;
select 'drop table dummy cascade';
-- execute the content of the file
\i /tmp/file.sql
-- make sure the content of the generated file is correct
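A more concrete sketch of the same idea, assuming the tblone table from the earlier example and that your Vertica version supports CREATE TABLE ... LIKE ... INCLUDING PROJECTIONS:
-- create the dummy target with the same structure and projections
create table dummy like tblone including projections;
-- generate the MOVE_PARTITIONS_TO_TABLE call covering the full partition range
\o /tmp/file.sql
select
'SELECT MOVE_PARTITIONS_TO_TABLE(''public.tblone'','''||
min(id)||''','''||
max(id)||''',''public.dummy'');'
from tblone;
\o
-- execute the generated call, then drop the dummy table to discard the moved partitions
\i /tmp/file.sql
drop table dummy cascade;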
BTW, if you are looking for Vertica database articles and scripts, I post at http://wwww.aodba.com from time to time.

How to duplicate all data in a table except for a single column that should be changed

I have a question regarding a unified insert query against tables with different data
structures (Oracle). Let me elaborate with an example:
tb_customers (
id NUMBER(3), name VARCHAR2(40), archive_id NUMBER(3)
)
tb_suppliers (
id NUMBER(3), name VARCHAR2(40), contact VARCHAR2(40), xxx, xxx,
archive_id NUMBER(3)
)
The only column that is present in all tables is [archive_id]. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id for those records accordingly. [archive_id] is always part of the primary key.
My problem is with select statements to do the actual duplication of the data. Because the columns are variable, I am struggling to come up with a unified select statement that will copy the data and update the archive_id.
One solution (that works) is to iterate over all the tables in a stored procedure and do a:
CREATE TABLE temp as (SELECT * from ORIGINAL_TABLE);
UPDATE temp SET archive_id=something;
INSERT INTO ORIGINAL_TABLE (select * from temp);
DROP TABLE temp;
I do not like this solution very much as the DDL commands muck up all restore points.
Does anyone else have any solution?
How about creating a global temporary table for each base table?
-- where 1 = 0 copies just the structure, not the data
create global temporary table tb_customers$ as select * from tb_customers where 1 = 0;
create global temporary table tb_suppliers$ as select * from tb_suppliers where 1 = 0;
You don't need to create and drop these each time, just leave them as-is.
Your archive process is then a single transaction:
insert into tb_customers$ select * from tb_customers;
update tb_customers$ set archive_id = :v_new_archive_id;
insert into tb_customers select * from tb_customers$;
insert into tb_suppliers$ select * from tb_suppliers;
update tb_suppliers$ set archive_id = :v_new_archive_id;
insert into tb_suppliers select * from tb_suppliers$;
commit; -- this will clear the global temporary tables
Hope this helps.
I would suggest not having a single SQL statement for all tables, and just using a plain insert per table:
insert into tb_customers_2
select id, name, 'new_archive_id' from tb_customers;
insert into tb_suppliers_2
select id, name, contact, xxx, xxx, 'new_archive_id' from tb_suppliers;
Or, if you really need a single piece of SQL for all of them, at least pre-create all the temp tables (as temporary tables) and leave them in place for next time. Then just use dynamic SQL to refer to the temp table:
insert into ORIGINAL_TABLE_TEMP (SELECT * from ORIGINAL_TABLE);
UPDATE ORIGINAL_TABLE_TEMP SET archive_id=something;
INSERT INTO NEW_TABLE (select * from ORIGINAL_TABLE_TEMP);
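A minimal PL/SQL sketch of that dynamic-SQL loop, assuming the temp tables have already been created with a _TEMP suffix and using a hypothetical new archive_id of 42:
BEGIN
  FOR t IN (SELECT table_name
            FROM user_tables
            WHERE table_name IN ('TB_CUSTOMERS', 'TB_SUPPLIERS')) LOOP
    -- copy the current rows into the pre-created temp table
    EXECUTE IMMEDIATE 'insert into ' || t.table_name || '_TEMP select * from ' || t.table_name;
    -- stamp the copies with the new archive id (42 is a placeholder)
    EXECUTE IMMEDIATE 'update ' || t.table_name || '_TEMP set archive_id = :new_id' USING 42;
    -- write the stamped copies back into the original table
    EXECUTE IMMEDIATE 'insert into ' || t.table_name || ' select * from ' || t.table_name || '_TEMP';
  END LOOP;
  COMMIT; -- also clears the temp tables if they are global temporary (on commit delete rows)
END;
/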
