Oracle GoldenGate: when a table column is of GENERATED ALWAYS AS IDENTITY type, it does not let you insert into the target DB table - oracle

Trying to replicate data into the replica DB (target) from the source DB using Oracle GoldenGate (OGG). Let's say I have Table A and Table B in the source DB. Table A has an identity column managed by a trigger, which assigns a unique number from a sequence object (the old Oracle way, prior to 12c). Table B has an identity column defined as "GENERATED ALWAYS AS IDENTITY ...", the new way introduced in 12c. Below are my observations, followed by my question:
(A) SourceDB TableA, insert 1 record, id=1. Then in TargetDB TableA, OGG replicates 1 insert, id=1. Good.
Source A------------------------------------Target A
id=1----------------------------------------id=1
(B) In TargetDB, manually insert 1 record; it succeeds with id=3. Good. Here id should have been 2, but OGG skips 2 and sets the id of this newly added record in the target table to 3.
Source A------------------------------------Target A
id=1----------------------------------------id=1
.-------------------------------------------id=3
(C) SourceDB TableA, insert 1 record, id=2. Then in TargetDB TableA, OGG replicates 1 insert, id=2. Good.
Source A------------------------------------Target A
id=1----------------------------------------id=1
.-------------------------------------------id=3
id=2----------------------------------------id=2
So, thanks to this nice behavior of OGG, it all looks good!
But when I try to do the same thing on Table B, it gives me a unique constraint error in step B!! It looks like this is because Table B's identity column is defined as GENERATED ALWAYS AS IDENTITY. So, is it really because of this? If so, this new way causes more problems than the old way of using sequence.nextval to generate a new unique id column. Or is there any way in OGG to overcome this and make Table B behave the same as Table A in step B?

Let's split your question into the two scenarios:
SEQUENCES
For a sequence, you can ONLY replicate in one-way replication. This means:
You can't replicate a sequence in a two-way or multi-way replication.
You can ONLY replicate in an active-passive HA (high availability) setup, not an active-active one. In an active-active setup you need to turn off sequence replication by:
Excluding sequence capture from the capture (Extract) using TABLEEXCLUDE.
Disabling triggers that process sequences with DBOPTIONS SUPPRESSTRIGGERS in the delivery (Replicat).
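For illustration, a minimal sketch of those two settings in the parameter files (process names, schema, and sequence name are made up):

-- Extract parameter file (ext1.prm): exclude one sequence from a wildcard capture
EXTRACT ext1
USERIDALIAS ogg_src
EXTTRAIL ./dirdat/et
TABLEEXCLUDE hr.emp_seq;
TABLE hr.*;
SEQUENCE hr.*;

-- Replicat parameter file (rep1.prm): stop target-side triggers from firing
REPLICAT rep1
USERIDALIAS ogg_tgt
DBOPTIONS SUPPRESSTRIGGERS
MAP hr.*, TARGET hr.*;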
During replication, Oracle GoldenGate captures the sequence updates and makes sure the target sequence value is equal to or higher than the source sequence value:
If the NOCACHE option is specified for the sequence, a data entry will show in the GoldenGate trail every time the sequence is updated.
If the CACHE option is specified for the sequence, a data entry will show in the GoldenGate trail every time the high-water mark is updated.
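For reference, the two options look like this (sequence names are made up):

-- NOCACHE: every NEXTVAL updates the dictionary, so every increment is captured
create sequence billing_seq nocache;

-- CACHE: only the high-water mark is written (here, once per 20 values)
create sequence order_seq cache 20;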
IDENTITY COLUMNS
Capture and replication of identity columns are supported by integrated processes from OGG v18 onwards:
Capture and replication are supported by integrated Extracts and Replicats only. All other nonintegrated modes, including classic, parallel, and coordinated Replicats, do not support identity column replication.
Only RDBMS 18.x and above with OGG v18 and above support replication of identity columns.
Bidirectional replication of identity columns is allowed from OGG v18 and above.
If the target column is an identity column, the OGG Replicat will overwrite the target value with the value from the source.
There are no restrictions on how the IDENTITY property is set on the source or the target.
This feature cannot be backported to older versions.
Adding an identity column to an empty table with ALTER TABLE ... ADD is still not supported.
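For reference, the two styles from your question look roughly like this (all names are made up). With GENERATED ALWAYS, the database itself rejects an explicit value for id (ORA-32795), which is what integrated Replicat has to work around:

-- Table A style: pre-12c trigger + sequence
create sequence a_seq;
create table table_a (id number primary key, val varchar2(50));
create or replace trigger table_a_bir
  before insert on table_a
  for each row
begin
  :new.id := a_seq.nextval;
end;
/

-- Table B style: 12c identity column; explicit inserts into id are rejected
create table table_b (
  id  number generated always as identity primary key,
  val varchar2(50)
);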

Move Range Interval partition data from one table to history table in other database

We have a primary table that is range partitioned by date with a 1-month interval. It is also list sub-partitioned with 4 distinct values, so essentially each monthly partition has 4 sub-partitions.
Database: Oracle 19c
I need advice on how to effectively move the partition/sub-partition data from active schema to historical schema in another database.
Also, there are about 30 tables that are reference partitioned on the primary table, for which the data needs to be moved as well. Overall I'm looking to move about 2500 subpartitions.
I'm not sure if an exchange partition would be the right approach in this scenario?
TIA
You could use exchange to get the data rapidly out of your active table, but you would then still need to send that table over the wire to the remote history database to load it.
In which case, using "exchange" is probably just adding more steps to the process for little gain. (There are still potential uses here, depending on how you want to handle indexing etc.)
But the simplest is perhaps just transferring the data over, assuming a common structure between the two tables, i.e.
insert /*+ APPEND */ into history_table@remote_db
select * from active_table partition ( myparname );
I can't remember if partition naming syntax is supported over a db link, but if not, then the appropriate date predicates will do the same trick, and then just follow up with:
alter table active_table truncate partition myparname;
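If you did want the exchange route mentioned above, a rough sketch would be as follows (the staging and subpartition names are made up; with composite partitioning you exchange at the subpartition level, or exchange a partition with a table partitioned to match):

-- empty staging table with a matching structure
create table staging_table as select * from active_table where 1 = 0;

-- swap the subpartition's segment into the staging table (fast, no data movement)
alter table active_table exchange subpartition myparname_sp1 with table staging_table;

-- then ship the staged rows to the remote history database and clean up
insert /*+ APPEND */ into history_table@remote_db select * from staging_table;
drop table staging_table;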

ORACLE APEX / SQL DEVELOPER: Cannot get PK to autoincrement

I am trying to implement my SQL Developer DB in Oracle APEX. I cannot figure out how to get the PKs in my table to auto-increment starting from a certain value (i.e. 400001). I have tried making triggers and sequences, but when I try to add a row using a form in APEX, my PK increments from 40 for some reason.
Here is my APEX form outcome (screenshot omitted).
Here is how it inserts into SQL Developer (screenshot omitted).
Basically, can someone describe how I can edit the existing trigger, or create a sequence, so that the application_id of a new entry auto-increments by 1?
Thanks!
Find max application_id:
select max(application_id) from your_table;
Suppose it is 400010 (as the screenshot suggests). Now recreate the sequence (presuming its name is seq_app):
drop sequence seq_app;
create sequence seq_app start with 400011 increment by 1 nocache;
Trigger is most probably OK, as you see values being inserted into the table.
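If you ever do need to recreate that trigger, a typical sequence-based one looks like this (using seq_app and application_id from above; the table name is a placeholder):

create or replace trigger trg_app_bir
  before insert on your_table
  for each row
begin
  if :new.application_id is null then
    :new.application_id := seq_app.nextval;
  end if;
end;
/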
Side note: sequence values will be unique, but not necessarily gapless. CACHE (or NOCACHE) might affect that but, for performance's sake, you'd rather let Oracle cache sequence numbers (the default is 20), which means that if you don't use some of those cached numbers, they are lost. I wouldn't worry, if I were you.

Oracle 12c - refreshing the data in my tables based on the data from warehouse tables

I need to update some tables in my application from warehouse tables which are updated weekly or biweekly, and I should update my tables based on those. My tables are referenced by foreign keys in other tables, so I cannot just truncate them and reinsert the whole data set every time. I have to take the delta and update accordingly, based on a few primary key columns which don't change. I need some input on how to implement this approach.
My approach:
Check the last updated time of those tables and views.
If it is more recent, compare each row of my table and the warehouse table based on the primary key.
Update each column if it is different.
Do nothing if there is no change in the columns.
Insert if there is a new record.
My Question:
How do I implement this? Is writing PL/SQL code a good and efficient way, given that the expected number of records is around 800K?
Please provide any sample code or links.
I would go for PL/SQL with the BULK COLLECT / FORALL method. You can use MINUS in your cursor in order to reduce the data size by calculating the difference.
You can check this site for more information about bulk collect, forall and engines: http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52plsql-1709862.html
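A rough sketch of that approach, assuming made-up table names and a two-column structure (adapt the column lists and the PK match to your tables):

declare
  -- MINUS returns only the warehouse rows that are new or changed
  cursor c_delta is
    select w.pk_col, w.col1 from warehouse_table w
    minus
    select m.pk_col, m.col1 from my_table m;
  type t_pk   is table of my_table.pk_col%type;
  type t_col1 is table of my_table.col1%type;
  l_pk   t_pk;
  l_col1 t_col1;
begin
  open c_delta;
  loop
    fetch c_delta bulk collect into l_pk, l_col1 limit 10000;
    exit when l_pk.count = 0;
    -- upsert each 10K batch in a single round trip
    forall i in 1 .. l_pk.count
      merge into my_table t
      using (select l_pk(i) as pk_col, l_col1(i) as col1 from dual) d
      on (t.pk_col = d.pk_col)
      when matched then update set t.col1 = d.col1
      when not matched then insert (pk_col, col1) values (d.pk_col, d.col1);
    commit;
  end loop;
  close c_delta;
end;
/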
There are many parts to your question above and I will answer as best I can:
While it is possible to disable referencing foreign keys, truncate the table, repopulate the table with the updated data, and then re-enable the foreign keys, given your requirements described above I don't believe truncating the table each time to be optimal.
Yes, in principle PL/SQL is a good way to achieve what you want, as this is too complex to deal with in native SQL and PL/SQL is an efficient alternative.
Conceptually, the approach I would take is something like the following:
Initial set up:
Create a sequence called activity_seq.
Add an "activity_id" column of type NUMBER, with a unique constraint, to your source tables.
Add a trigger to the source table(s) setting activity_id = activity_seq.nextval for each insert/update of a table row.
Create some kind of master table to hold the "last processed activity id" value.
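That initial set-up could look something like this (all names are placeholders):

create sequence activity_seq;

alter table source_table add (activity_id number unique);

create or replace trigger source_table_biu
  before insert or update on source_table
  for each row
begin
  :new.activity_id := activity_seq.nextval;
end;
/

-- single-row bookmark table
create table etl_control (last_processed_activity_id number);
insert into etl_control values (0);
commit;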
Then bi/weekly:
Retrieve the value of "last processed activity id" from the master table.
Select all rows in the source table(s) having an activity_id value > the "last processed activity id" value.
Iterate through the selected source rows and update the target if a match is found based on whatever your match criterion is, or insert a new row into the target if no match is found (I assume there is no delete as you do not mention it).
On completion, update the master table's "last processed activity id" to the greatest activity_id value of the source rows processed in step 3 above.
(please note that, depending on your environment and the number of rows processed, the above process may need to be split and repeated over a number of transactions)
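Steps 2 and 3 can also be collapsed into a single MERGE driven by the bookmark, for example (names and match columns are again placeholders):

declare
  l_last number;
  l_max  number;
begin
  select last_processed_activity_id into l_last from etl_control;
  -- capture the high-water mark first, so rows arriving mid-run are not skipped
  select nvl(max(activity_id), l_last) into l_max from source_table;

  merge into target_table t
  using (select * from source_table
         where activity_id > l_last and activity_id <= l_max) s
  on (t.pk_col = s.pk_col)   -- whatever your match criterion is
  when matched then
    update set t.col1 = s.col1
  when not matched then
    insert (pk_col, col1) values (s.pk_col, s.col1);

  update etl_control set last_processed_activity_id = l_max;
  commit;
end;
/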
I hope this proves helpful

Golden Gate replication from primary to secondary database, WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, rgg1ab.prm: SQL error 1403

I am using GoldenGate to replicate data from primary to secondary. I have inserted records in the primary database, but replication abends with the error message
WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, rgg1ab.prm: SQL error 1403 mapping primaryDB_GG1.TB_myTableName to secondaryDB.TB_myTableName OCI Error ORA-01403: no data found, SQL < UPDATE ......
The update statement has all the columns from the table in the where clause, whereas there is no such update statement in the application with so many columns in the where clause.
Can you help with this issue? Why is GoldenGate replication converting the insert into an update during replication?
I know this is very old but, if you haven't figured out a solution, please provide your prm file if you can. You may have a parameter in there that is converting inserts to updates based upon a PK already existing in the target database. It is likely that HANDLECOLLISIONS or CDR is set.
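For example, if your rgg1ab.prm contains something like the following (a made-up sketch; your file will differ), inserts that collide with an existing key on the target are silently turned into updates:

REPLICAT rgg1ab
USERIDALIAS ogg_tgt
-- HANDLECOLLISIONS turns a colliding INSERT into an UPDATE; it is intended
-- for the initial load only, so consider NOHANDLECOLLISIONS afterwards
HANDLECOLLISIONS
MAP primaryDB_GG1.TB_myTableName, TARGET secondaryDB.TB_myTableName;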
For replication, you might have already enabled the transaction log in the source db. Now, you need to run from GGSCI:
"ADD TRANDATA schema_name.table_name, COLS(...)"
In the COLS part, you need to mention the column/columns (comma-separated) that can be used to identify a unique record (you can mention the unique indexed columns if present). If there is no unique index on the table and you are not sure which columns could be used to uniquely identify a row, then just run from GGSCI:
"ADD TRANDATA schema_name.table_name"
Then GoldenGate will start logging all the necessary columns for uniquely identifying a row.
Note: This should be done before you start the replication process.

Update Index Organized Tables using multiple UPDATE queries (temporary duplicates)

I need to update the primary key of a large Index Organized Table (20 million rows) on Oracle 11g.
Is it possible to do this using multiple UPDATE queries, i.e. many smaller UPDATEs of, say, 100,000 rows at a time? The problem is that one of these UPDATE batches could temporarily produce a duplicate primary key value (there would be no duplicates after all the UPDATEs have completed).
So I guess I'm asking: is it somehow possible to temporarily disable the primary key constraint (which is required for an IOT!) or temporarily alter the table in some other way? I can have exclusive and offline access to this table.
The only solution I can see is to create a new table and when complete, drop the original table and rename the new table to the original table name.
Am I missing another possibility?
You can't disable / drop the primary key constraint from an IOT, since it is a unique index by definition.
When I need to change an IOT like this, I do a CTAS (create table as select) into a new plain heap table, do my maintenance, and then CTAS back into a new IOT.
Something like:
create table t_temp as select * from t_iot;
-- do maintenance (e.g. update the key values) on t_temp
-- note: without the PK and ORGANIZATION INDEX clause you would get a plain heap table
create table t_new_iot ( /* column list */ constraint t_new_iot_pk primary key ( /* key columns */ ) )
organization index
as select * from t_temp;
If, however, you need to simply add or join a new field to the existing key, you can do this in one step by creating the new IOT structure, then populating directly from the old IOT with a query.
Unfortunately, this is one of the downsides to IOTs.
I would recommend the following method:
Create a new IOT table, partitioned by system with a single partition, with exactly the same structure as the current one.
Lock the current IOT table to prevent any DML.
Insert into the new table as select from the current table, changing the PK values in the select. This step could be repeated several times if needed; in that case it's better to do it in another session to keep the lock on the original table.
Exchange the partition of the new table with the original table.
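A rough, untested sketch of those steps (all names and columns are made up; I've used a single MAXVALUE range partition here, which gives the same one-partition exchange trick if system partitioning is not available for IOTs in your release):

-- 1. new single-partition IOT with the same structure as t_iot
create table t_iot_new (
  id  number,
  val varchar2(50),
  constraint t_iot_new_pk primary key (id)
) organization index
partition by range (id) (partition p1 values less than (maxvalue));

-- 2. block concurrent DML on the original
lock table t_iot in exclusive mode;

-- 3. copy the rows, remapping the PK on the way in
insert into t_iot_new select id + 1000000, val from t_iot;
commit;

-- 4. swap segments: t_iot now holds the remapped rows
alter table t_iot_new exchange partition p1 with table t_iot;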
