Oracle SQL / PL/SQL: I need to copy data from one database to another

I have two instances of the same database, but data is only committed to the "original" one. I need to copy inserted data from certain tables and commit them to the same tables in the second DB automatically. How can I do it?
I've already created synonyms for the tables of the second DB on the original one, and inside a specially prepared trigger I tried to use an INSERT INTO ... statement with :new values, but it causes the data not to be committed anywhere and I receive Oracle errors like:
ORA-02291: integrity constraint (PRDBSHADOW.FK_ED_PHY_ENT) violated.
Here is my trigger code
create or replace TRIGGER INS_COPY_DATA
AFTER INSERT ON ORIGDB.TABLE_A
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
BEGIN
insert into COPY_TABLE_A(val1,val2,val3,val4) values (:new.val1, :new.val2, :new.val3, :new.val4);
END;

I think the entry in the parent table is missing here; at least the FK prefix of the constraint name suggests so.
It means you first need to insert all the data into the "parent" table to be able to insert records into the "child".
For example, suppose the table auto_maker has only 3 rows: Audi, Peugeot, and Honda.
Another table named "model" has 2 columns, "maker" and "model"; "maker" is a foreign key referencing the "auto_maker" table.
That means the model table only allows records whose "maker" column value exists in the "auto_maker" table.
In other words only these are available:
maker model
Audi A4
Peugeot 308
Honda Accord
Of course you can enter any model you wish, but the "maker" value has to exist in the auto_maker table.
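As a minimal sketch of such a parent/child pair (the column types here are my assumption), the foreign key behaves like this:
CREATE TABLE auto_maker (
  maker VARCHAR2(30) PRIMARY KEY
);

CREATE TABLE model (
  maker VARCHAR2(30) NOT NULL REFERENCES auto_maker (maker),
  model VARCHAR2(30)
);

INSERT INTO auto_maker (maker) VALUES ('Audi');
INSERT INTO model (maker, model) VALUES ('Audi', 'A4');       -- OK, parent row exists
INSERT INTO model (maker, model) VALUES ('Tesla', 'Model 3');  -- ORA-02291, no parent row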
This is probably what happens: the trigger tries to insert data into a column that references a "parent" table, and the :new value simply doesn't exist there yet.
The following script will tell you which tables you need to fill first.
select aic.index_owner, aic.table_name, aic.column_name
from all_constraints uc,
all_ind_columns aic
where aic.INDEX_NAME = uc.r_constraint_name
and uc.table_name = 'TABLE_A'
and uc.constraint_type = 'R';
If the query returns something, just create similar triggers on those parent tables with the same logic you already have, so the parent rows are copied before the child rows.
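For example, if the query reports a parent table (here called TABLE_B; that name, its columns and the COPY_TABLE_B synonym are placeholders), the trigger mirrors the one you already have:
create or replace TRIGGER INS_COPY_DATA_PARENT
AFTER INSERT ON ORIGDB.TABLE_B
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
BEGIN
-- COPY_TABLE_B is a synonym for the parent table in the second DB
insert into COPY_TABLE_B(id, val1) values (:new.id, :new.val1);
END;
As long as the application inserts parent rows before child rows, the parent's trigger fires first, so the copied parent row already exists when the child copy arrives.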

Related

How can I merge two tables using ROWID in oracle?

I know that ROWID is distinct for each row, even across different tables. But I saw somewhere that two tables were being merged using rowid, so I tried it myself, and I am getting empty output.
I have a person table which looks like this:
scrowid is the column that holds the rowid; it was added as:
alter table ot.person
add scrowid VARCHAR2(200) PRIMARY KEY;
I populated this person table as:
insert into ot.person(id,name,age,scrowid)
select id,name, age,a.rowid from ot.per a;
After this I also created another table, ot.temp_person, following the same steps. Both tables have the same structure and data types. So I wanted to compare them using an inner join, and I tried:
select * from ot.person p inner join ot.temp_person tp ON p.scrowid=tp.scrowid
I got an empty result.
Is there any possible way I can merge two tables using rowid, or have I forgotten some step? If there is a way to join these two tables using rowid, please suggest it.
Define scrowid with datatype ROWID or UROWID; then it may work.
However, in general the ROWID may change at any time unless you lock the record, so it would be a poor key to join your tables.
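A sketch of that suggestion, reusing the table names from the question (the extra column name scrowid_r is mine; the join will only match if both tables store rowids pointing at the same source rows, e.g. both taken from ot.per):
alter table ot.person add scrowid_r ROWID;
alter table ot.temp_person add scrowid_r ROWID;

update ot.person set scrowid_r = chartorowid(scrowid);
update ot.temp_person set scrowid_r = chartorowid(scrowid);

select *
from ot.person p
inner join ot.temp_person tp on p.scrowid_r = tp.scrowid_r;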
I think perhaps you misunderstood the merging of two tables via rowid, unless what you actually saw was a UNION, CROSS JOIN, or FULL OUTER JOIN. Any attempt to match rowids, regardless of how you define the column, is doomed to fail, because a rowid is not just a data type, it is an internal structure (that description comes from an older documentation version, but Oracle doesn't link documentation versions). Its fields are basically:
- The data object number of the object
- The data block in the datafile in which the row resides
- The position of the row in the data block (first row is 0)
- The datafile in which the row resides (first file is 1); the file number is relative to the tablespace
So while it's possible for rows in different tables to have the same rowid, it is extremely unlikely, which is why an inner join between them returns no rows.
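If you want to see those components, the DBMS_ROWID package can decompose a rowid (shown here against the question's ot.person table); the differing data object numbers are what keep rowids from separate tables from ever matching:
select rowid,
       dbms_rowid.rowid_object(rowid)       as data_object_id,
       dbms_rowid.rowid_relative_fno(rowid) as relative_file_no,
       dbms_rowid.rowid_block_number(rowid) as block_no,
       dbms_rowid.rowid_row_number(rowid)   as row_no
from ot.person;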

ORA-22816 while updating Joined View with Instead of trigger

I have read a lot about it but didn't find any help on that.
My situation:
I have two database tables which belong together, and I want to query them with Entity Framework. Because Table B contains the discriminator for Entity Framework (for choosing the correct class for Table A), I've created a view which joins Table A and Table B.
The join is quite simple, but I also want to store data through that view. The issue is that Entity Framework also wants to store the discriminator, which isn't possible because it would have to update/insert into two tables.
So I've tried to create an INSTEAD OF trigger that updates/inserts only Table A (Table B doesn't matter and will never be updated).
Creating the trigger worked fine, and inserting with a plain SQL statement works fine. But if I insert directly into the view (using Oracle SQL Developer), it throws the exception below:
ORA-22816 (unsupported feature with RETURNING clause).
If I do the same with EntityFramework I get the same error. Can someone help me?
Below my Code:
Table A and Table B:
CREATE Table "TableA"
(
"ID" Number NOT NULL,
"OTHER_VALUESA" varchar2(255),
"TableB_ID" number not null,
CONSTRAINT PK_TableA PRIMARY KEY (ID)
);
CREATE Table "TableB"
(
"ID" Number NOT NULL,
"NAME" varchar2(255),
"DISCRIMINATOR" varchar2(255),
CONSTRAINT PK_TableB PRIMARY KEY (ID)
);
The Joined View:
Create or Replace View "JoinTableAandB"
(
"ID",
"OTHER_VALUESA",
"TableB_ID",
"DISCRIMINATOR"
) AS
select tableA.ID, tableA.OTHER_VALUESA, tableA.TableB_ID, tableB.DISCRIMINATOR
from TABLEA tableA
inner join TABLEB tableB on tableA.TableB_ID = tableB.ID;
And finally the Trigger:
create or replace TRIGGER "JoinTableAandB_TRG"
INSTEAD OF INSERT ON "JoinTableAandB"
FOR EACH ROW
BEGIN
insert into TABLEA(OTHER_VALUESA, TABLEB_ID)
values (:NEW.OTHER_VALUESA, :NEW.TABLEB_ID);
END;
I've also tried (to verify whether the insert itself is the problem) putting just NULL; into the trigger body instead of the insert, but I got the same error message.
Does anybody know how to solve this? Or does anybody have a good alternative (better Idea)?
Thanks!
Note: I've also defined a sequence for TableA's ID so that it will be generated automatically.
// Edit:
I found a possible Solution for MS SQL:
https://stackoverflow.com/a/26897952/3598980
But I don't know how to translate this to Oracle... How can I return something from a trigger?
Note: I've also defined a sequence for TableA's ID so that it will be generated automatically.
In EF, store-generated keys in Oracle are incompatible with INSTEAD OF triggers: EF uses a RETURNING clause to read back the store-generated key, and RETURNING is not supported when the insert goes through an INSTEAD OF trigger, which is exactly what ORA-22816 reports.
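For illustration (this is my reconstruction of the statement EF effectively issues, not code from the question), the failing insert looks roughly like this; it is the RETURNING clause that ORA-22816 rejects when the target is a view with an INSTEAD OF trigger:
INSERT INTO "JoinTableAandB" ("OTHER_VALUESA", "TableB_ID")
VALUES ('some value', 1)
RETURNING "ID" INTO :generated_id;
A common way around it is to avoid the RETURNING clause altogether, for example by not marking the key as store-generated in the EF model, or by mapping inserts to the base table TABLEA instead of the view.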

Change column name in a table in Clickhouse

Is there any way to ALTER a table and change the column name in clickhouse?
I only found how to change the table name, but not an individual column, in a straightforward way.
Thanks.
The feature was introduced in v20.4:
ALTER TABLE table1 RENAME COLUMN old_name TO new_name
You can also rename multiple columns at once:
ALTER TABLE table1
RENAME COLUMN old_name1 TO new_name1,
RENAME COLUMN old_name2 TO new_name2
Old answer:
ClickHouse doesn't have that feature yet.
Implementation is not trivial, because ALTERs that change columns are processed outside of the usual replication queue, and adding rename without reworking ALTERs would introduce race conditions in replicated tables.
https://github.com/yandex/ClickHouse/issues/146#issuecomment-255631384
As @Slash said, the solution for now is to create a new table and
INSERT INTO `new_table` SELECT * FROM `old_table`
Do not forget that column aliasing (AS) won't work there.
INSERT INTO `new_table` SELECT a, b AS c, c AS b FROM `old_table`
That will still insert a into the first column, b into the second, and c into the third; AS has no effect there.
You can try to create the new table with the new field name (CREATE TABLE new_table ...)
and run INSERT INTO new_table SELECT old_field AS new_field FROM old_table
If you created the table using ENGINE = Log, it won't let you alter or rename the column:
# requires a ClickHouse SQLAlchemy dialect such as clickhouse-sqlalchemy
from sqlalchemy import create_engine

# username, password, host, port and database are defined elsewhere
connection_string = f'clickhouse://{username}:{password}@{host}:{port}/{database}'
engine = create_engine(connection_string)
conn = engine.connect()

table = "table1"
schema = 'Parameter String, Key UInt8'
engine.execute("CREATE TABLE IF NOT EXISTS {}({}) ENGINE = Log".format(table, schema))
If you created the table using the MergeTree engine, renaming the column is allowed:
engine.execute("CREATE TABLE IF NOT EXISTS {}({}) ENGINE = MergeTree ORDER BY Key".format(table, schema))

How to modify data type in Oracle with existing rows in table

How can I change DATA TYPE of a column from number to varchar2 without deleting the table data?
You can't.
You can, however, create a new column with the new data type, migrate the data, drop the old column, and rename the new column. Something like
ALTER TABLE table_name
ADD( new_column_name varchar2(10) );
UPDATE table_name
SET new_column_name = to_char(old_column_name, <<some format>>);
ALTER TABLE table_name
DROP COLUMN old_column_name;
ALTER TABLE table_name
RENAME COLUMN new_column_name TO old_column_name;
If you have code that depends on the position of the column in the table (which you really shouldn't have), you could rename the table and create a view on the table with the original name of the table that exposes the columns in the order your code expects until you can fix that buggy code.
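A hedged sketch of that view workaround (all names here are illustrative):
ALTER TABLE table_name RENAME TO table_name_base;

-- re-expose the columns in their original order under the original table name
CREATE OR REPLACE VIEW table_name AS
SELECT first_column,
       old_column_name,   -- the re-created column, back in its original position
       last_column
FROM table_name_base;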
You have to first deal with the existing rows before you modify the column DATA TYPE.
You could do the following steps:
Add the new column with a new name.
Update the new column from old column.
Drop the old column.
Rename the new column with the old column name.
For example,
alter table t add (col_new varchar2(50));
update t set col_new = to_char(col_old);
alter table t drop column col_old cascade constraints;
alter table t rename column col_new to col_old;
Make sure you re-create any required indexes which you had.
You could also try the CTAS approach, i.e. CREATE TABLE ... AS SELECT, but the steps above are safe and preferable.
The most efficient way is probably to do a CREATE TABLE ... AS SELECT (CTAS).
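A sketch of the CTAS route (names illustrative; note that CTAS does not carry over indexes, constraints, triggers or grants, so those must be recreated):
CREATE TABLE table_name_new AS
SELECT to_char(old_column_name) AS old_column_name,
       other_column
FROM table_name;

DROP TABLE table_name;
RENAME table_name_new TO table_name;
-- recreate indexes, constraints and grants here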
alter table table_name modify (column_name VARCHAR2(255));
Since we can't change the data type of a column that already has values, the approach I followed is below.
Say the column whose type you want to change is 'A'; this can be done with SQL Developer.
First sort the table data by another column (e.g. datetime).
Next copy the values of column 'A' and paste them into an Excel file.
Delete the values of column 'A' and commit.
Change the data type and commit.
Again sort the table data by the previously used column (e.g. datetime).
Then paste the copied data back from Excel and commit.

Is it possible to compare other tables within a trigger?

I have a database with tables that are chained together with foreign keys, and the last one in the chain also has a foreign key to itself. I want to delete them with cascade on, except for the last one in the chain; that one should be set to null, unless its parent record has a certain value. I figured I would do that with a trigger: whenever the last table is updated, if its foreign key to itself has been set to null, check the field in the parent record, and if it has the value "default", delete the record in the last table.
However, I haven't found anything online indicating that comparing against a parent record in another table from a trigger is possible.
Is this possible?
In general, a row-level trigger on table A cannot query table A. Doing so would generally raise a mutating table exception (ORA-04091). So a trigger is generally not the right solution.
Presumably, you have some sort of API (i.e. a stored procedure) to delete records from the parent table. That API should query this last table before issuing the DELETE against the parent table. It should take care of updating the last table in the chain as well as deleting the data from the parent table.
If you really wanted a trigger-based solution, life would get substantially more complicated. You could work around the mutating table exception by
- Creating a package with a collection of primary keys from the parent table
- Creating a before statement trigger that initializes this collection
- Creating a row-level trigger that populates the collection with the primary keys that were modified by the SQL statement
- Creating an after statement trigger that iterates over the collection and issues whatever DML is necessary (unlike row-level triggers, statement-level triggers on table A can query or modify table A)
If you're using 11g, you can simplify this a bit with a compound trigger with before statement, after row, and after statement sections. But you've still got a number of moving pieces to try to coordinate.
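A minimal 11g compound-trigger sketch of that idea; every table and column name below (last_table, parent_table, self_ref_fk, parent_fk, pk, parent_field) is a placeholder for the real schema:
create or replace trigger last_table_compound_trg
for update on last_table
compound trigger

  type t_pk_list is table of last_table.pk%type;
  g_pks t_pk_list := t_pk_list();

  after each row is
  begin
    -- only remember rows whose self-reference was just cleared
    if :new.self_ref_fk is null and :old.self_ref_fk is not null then
      g_pks.extend;
      g_pks(g_pks.count) := :new.pk;
    end if;
  end after each row;

  after statement is
  begin
    -- the statement-level section may query and modify last_table freely
    for i in 1 .. g_pks.count loop
      delete from last_table lt
       where lt.pk = g_pks(i)
         and exists (select 1
                       from parent_table p
                      where p.pk = lt.parent_fk
                        and p.parent_field = 'default');
    end loop;
  end after statement;

end last_table_compound_trg;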
AFAIK you won't be able to really delete the record in the last table (mutating table problem), but you could update a status field indicating the record has been logically deleted (untested):
create or replace trigger last_table_trig
before update on last_table
for each row
declare
  l_parentField varchar2(100);
begin
  if :new.self_ref_fk is null then
    select p.parent_field
      into l_parentField
      from parent_table p
     where p.pk = :new.parent_fk;
    if l_parentField = 'default' then
      :new.status := 'DELETED';
    end if;
  end if;
end;
