Golden Gate replication from primary to secondary database, WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, rgg1ab.prm: SQL error 1403 - oracle

I am using GoldenGate to replicate data from a primary to a secondary database. I have inserted records in the primary database, but replication abends with the error message
WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, rgg1ab.prm: SQL error 1403 mapping primaryDB_GG1.TB_myTableName to secondaryDB.TB_myTableName OCI Error ORA-01403: no data found, SQL < UPDATE ......
The update statement has all of the table's columns in its WHERE clause, yet no such update statement with so many columns in the WHERE clause exists anywhere in the application.
Can you help with this issue? Why is GoldenGate replication converting the insert into an update during replication?

I know this is very old, but if you haven't figured out a solution, please provide your prm file if you can. You may have a parameter in there that is converting inserts to updates based upon a PK already existing in the target database. It is likely that HANDLECOLLISIONS or CDR is set.
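For reference, a hedged sketch of the kind of parameter file that produces this behavior (the USERIDALIAS alias is a placeholder; the MAP objects are taken from your error message). With HANDLECOLLISIONS, an insert that hits a duplicate key on the target is retried as an update of the existing row, which matches the UPDATE you are seeing:
REPLICAT RGG1AB
USERIDALIAS ogg_tgt
HANDLECOLLISIONS
MAP primaryDB_GG1.TB_myTableName, TARGET secondaryDB.TB_myTableName;
If HANDLECOLLISIONS (or a CDR RESOLVECONFLICT clause) is present and you are past the initial load, removing it should make the replicat apply inserts as inserts again.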

For replication, you might have already enabled the transaction log in the source DB. Now, you need to run from GGSCI:
"ADD TRANDATA schema_name.table_name, COLS(...)"
In the COLS part, you need to mention the column or columns (comma separated) that can be used to identify a unique record (you can mention the unique indexed columns if present). If there is no unique index on the table and you are not sure which columns could be used to uniquely identify a row, then just run from GGSCI:
"ADD TRANDATA schema_name.table_name"
Then GoldenGate will start logging all the necessary columns for uniquely identifying a row. A short example is sketched below.
Note: this should be done before you start the replication process.
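For instance, a minimal GGSCI sketch, assuming a hypothetical table HR.EMPLOYEES with a unique index on EMPLOYEE_ID (all names here are placeholders):
GGSCI> DBLOGIN USERIDALIAS ogg_src
GGSCI> ADD TRANDATA HR.EMPLOYEES, COLS (EMPLOYEE_ID)
GGSCI> INFO TRANDATA HR.EMPLOYEES
INFO TRANDATA simply confirms which columns are now being supplementally logged for the table.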

Related

replicat failing on insert with no data found

I am failing to understand in which situation an insert would fail with a "no data found" error. Any insights, please.
Oracle GoldenGate Delivery for Oracle process started, group REPA discard file opened: 2020-08-21 18:32:07.326069
Current time: 2020-08-21 18:32:08
Discarded record from action ABEND on error 1403
No data found
Aborting transaction on /zfssa/gg_02/ogg/dirdat/REPA/EX beginning at seqno 473 rba 425209949
error at seqno 473 rba 425214669
Problem replicating SRC.TABLE to TGT.TABLE.
Record not found
Mapping problem with insert record (target format) SCN:3329198919.29.23.78560...
An error of NO DATA FOUND usually points to an inconsistency problem. The REPLICAT is basically an application doing data manipulation using SQL statements. If it attempts to perform a DML operation and the database rejects it, it is normally because of an inconsistency between the attempted DML and the records related to it.
For example, attempting to delete a row which does not exist will fail with a database error. Aside from an Oracle GoldenGate bug, this usually points to target database inconsistencies. In other words, the target database is in a state that the customer did not expect it to be in.
Determine why your target database is not in the state that you expect it to be in. Numerous reasons can cause this and below are some of them. This list is by no means exhaustive.
Possible Causes
The use of parameters to ignore previous errors such as HANDLECOLLISIONS, REPERROR with IGNORE or DISCARD options.
Primary keys or key columns that are different between source and target database. They might be the same columns but the type and/or size are different.
The target database is manipulated by an application program.
The target replicat is MAPped by filters and selection, or the extract DML operations have filters. This will be based on your business needs and may or may not be intentional.
Non-primary key table updates. If all the columns are used as the key for replication, there are cases where more than one row can be updated, making subsequent DML operations fail.
Non-primary key table updates where KEYCOLS are used. These keys may not be unique. To test the uniqueness of the selected keys, run a query on the source database based on these KEYCOLS and sort them (see the sketch after this list).
The language and character set in use (NLS, double-byte, or multibyte charset) are different and may cause unexpected conversion issues done automatically by the database. Use the SETENV parameter to change the language, and set it before the USERID parameter.
Your source database and target database are of different types, for example Oracle to MSSQL, and the conversion done on the primary keys or key columns is not what you expected.
There are other specific configurations, patches, features, default database behaviors and so on. Search the My Oracle Support (MOS) Knowledge base for the database error number, example: "ORA-01403" under the Oracle GoldenGate core product. Review these knowledge solutions to see if they are related to your issue.
In rare situations, this may be an Oracle GoldenGate bug in that a particular DML was not captured or the values were incorrectly interpreted. Please submit an SR if you think this is the case. You will need to provide in addition to the target replicat reports and materials stated above, all extract reports on the source machine and GoldenGate trails.
Duplicate Mapping in replicat parameter with ALLOWDUPTARGETMAP parameter
Incorrect use of Extract parameter THREADOPTIONS PROCESSTHREADS. This can cause it to miss data.
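To check whether a chosen set of KEYCOLS is actually unique (as noted in the list above), a hedged sketch of a query you could run on the source table (schema, table, and column names are placeholders for your own KEYCOLS):
select keycol1, keycol2, count(*)
from src_schema.src_table
group by keycol1, keycol2
having count(*) > 1;
Any rows returned mean the chosen key columns do not uniquely identify a row, and the replicat may update or delete the wrong row on the target.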
Possible Solutions
What do you need to investigate a replicat database issue?
For a start you will need the replicat parameter file, report file and discard file.
Report file: Contains all warnings, errors, tables that are already mapped, columns mapped or unmapped and all run time environment settings.
Discard file: Displays in detail the issue with mapping the table that generated the database error, the columns, their values, and the position of the record in the GoldenGate trail.
Parameter file: Usually the parameters are within the report file but this will be useful if the report file has been rolled over (REPORTROLLOVER parameter).
What are the next steps?
Query your target database based on the above data. Depending on the database error, the report file and/or discard file may contain the exact SQL statement used. Nevertheless, one should be able to construct the appropriate query. This is to confirm that the replicat has indeed reported the correct database errors.
For example, if the Oracle DB error was ORA-01403, which means no data found, your query should select the row by the primary key or key columns as specified. Your query should return the same result as the replicat.
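A hedged illustration, using the key values printed in the discard file (TGT.MY_TABLE and the ID column are placeholders for your own table and key columns):
select * from TGT.MY_TABLE where ID = 12345;
If this returns no rows, the ORA-01403 the replicat reported for an update or delete of that key is confirmed.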
Fixing the replicat.
The first thing to consider is whether you can ignore this error for now and resolve the situation later. If your business allows you to do so, then you may either exclude this table altogether (TABLEEXCLUDE) or simply skip this error (REPERROR with DISCARD). If you skip the error, start the replicat with REPERROR, run it for a short while, stop the replicat, remove the REPERROR entry, and then restart the replicat. A hedged parameter sketch follows.
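For example, a minimal sketch of a replicat parameter file that discards the failing records instead of abending (group, alias, and table names are placeholders):
REPLICAT REPA
USERIDALIAS ogg_tgt
-- temporarily discard records that hit ORA-01403 instead of abending
REPERROR (1403, DISCARD)
DISCARDFILE ./dirrpt/repa.dsc, APPEND, MEGABYTES 100
MAP SRC.MY_TABLE, TARGET TGT.MY_TABLE;
Remove the REPERROR line (and restart the replicat) once the underlying inconsistency has been fixed.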
Fixing the database.
No matter what the reason is, you will need to resync the table that caused this issue.
Configuration Issue
If you have duplicate mappings in the replicat parameter file and the ALLOWDUPTARGETMAP parameter is used, DML will be applied twice. This leads to an ORA-01403 error for delete operations and an ORA-00001 error for insert operations. Remove the duplicate mapping to fix the issue, as illustrated below.
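A hedged illustration of the problematic configuration (group, alias, and table names are placeholders); both MAP lines match the same source table, so every operation is applied twice:
REPLICAT REPA
USERIDALIAS ogg_tgt
ALLOWDUPTARGETMAP
MAP SRC.MY_TABLE, TARGET TGT.MY_TABLE;
MAP SRC.MY_TABLE, TARGET TGT.MY_TABLE;
Deleting the redundant MAP statement (and ALLOWDUPTARGETMAP, if it is not needed for a legitimate reason) resolves the double apply.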
In summary, there are a lot of possibilities for a "no data found" error in a replicat process in GoldenGate.
This could also depend on the type of replicat you are using. Many of them perform queries inside the database prior to actually performing a DML statement. The error you have can often result from a lack of permissions in the target database, or from using something like Oracle Database Vault or another technology that changes how DML is performed.

Oracle GoldenGate: when a table column is GENERATED ALWAYS AS IDENTITY, it does not allow inserts into the target DB table

I am trying to replicate data into a replica DB (target) from a source DB, using Oracle GoldenGate (OGG). Let's say I have TableA and TableB in the source DB. TableA has an identity column managed by a trigger which adds a unique number using a sequence object (the old Oracle way, prior to 12c). TableB has an identity column defined as "GENERATED ALWAYS AS IDENTITY ...", the way newly introduced in 12c. Below is my observation, followed by my question:
(A) SourceDB TableA, insert 1 record, id=1. Then in TargetDB TableA, OGG replicates 1 insert, id=1. Good.
Source A------------------------------------Target A
id=1----------------------------------------id=1
(B) In TargetDB, manually insert 1 record; it gets done, id=3. Good. Here id should have been 2, but OGG skips 2 and sets the id of this newly added record in the target table to 3.
Source A------------------------------------Target A
id=1----------------------------------------id=1
.-------------------------------------------id=3
(C) SourceDB TableA, insert 1 record, id=2. Then in TargetDB TableA, OGG replicates 1 insert, id=2. Good.
Source A------------------------------------Target A
id=1----------------------------------------id=1
.-------------------------------------------id=3
id=2----------------------------------------id=2
So far, with this nice behavior of OGG, it looks good!
But when I try to do the same thing on TableB, it gives me a unique constraint error in Step B!! It looks like it is because TableB's identity column is defined as GENERATED ALWAYS AS IDENTITY. So, is it really because of this? Does this new way cause more problems than the old way of using sequence.nextval to generate a new unique id column? Or is there any way in OGG to overcome this and make TableB behave the same as TableA for step B?
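For context, a minimal sketch of the two setups described above (all object names are placeholders):
-- TableA: pre-12c style, surrogate key filled in by a trigger from a sequence
create sequence seq_a;
create table tableA (id number primary key, val varchar2(50));
create or replace trigger trg_a
  before insert on tableA
  for each row
begin
  :new.id := seq_a.nextval;
end;
/
-- TableB: 12c identity column; a supplied value cannot override it
create table tableB (
  id  number generated always as identity primary key,
  val varchar2(50)
);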
Let's split your question into two scenarios:
SEQUENCES
For a sequence, you can ONLY replicate in one-way replications. This means:
You can't replicate a sequence in a two-way or multi-way replication.
You can ONLY replicate in an active-passive HA (high availability) setup, not an active-active HA setup. You need to turn off the sequence replication by:
Excluding sequence capture from the capture (extract) using TABLEEXCLUDE.
Disabling triggers that process sequences with DBOPTIONS SUPPRESSTRIGGERS in the delivery (REPLICAT), as sketched below.
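A hedged sketch of what those two exclusions might look like in the parameter files (group names, aliases, and object names are placeholders):
-- extract side: capture the schema but exclude the sequence object
EXTRACT EXTA
USERIDALIAS ogg_src
TABLEEXCLUDE HR.EMP_SEQ
TABLE HR.*;
-- replicat side: stop the target trigger from assigning sequence values again
REPLICAT REPA
USERIDALIAS ogg_tgt
DBOPTIONS SUPPRESSTRIGGERS
MAP HR.*, TARGET HR.*;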
During replication, Oracle GoldenGate captures the sequence updates and makes sure the target sequence value is equal to or higher than the source sequence value:
If the NOCACHE option is specified for the sequence, a data entry will show up in the GoldenGate trail every time the sequence is updated.
If the CACHE option is specified for the sequence, a data entry will show up in the GoldenGate trail every time the high-water mark is updated.
IDENTITY COLUMNS
Capture and replication of identity columns are supported by integrated processes from OGG v18 onwards.
Capture and replication are supported by integrated extracts and integrated replicats only. All other non-integrated modes, including classic, parallel, and coordinated replicats, do not support identity column replication.
Only RDBMS 18.x and above with OGG v18 and above supports replication of identity columns.
Bidirectional replication of identity columns is allowed from OGG v18 and above.
If the target column is an identity column, the OGG replicat will overwrite the target using the value from the source.
There are no restrictions on how the IDENTITY property is set on the source or the target.
This feature cannot be backported to older versions.
Adding an identity column to an existing empty table with ALTER TABLE ... ADD is still not supported.

Azure SQL database schema missing constraints after schema compare

I created a blank SQL database in Azure.
From Visual Studio 2017, I performed a Schema Compare and updated the blank Azure database to my schema. There were no errors, so I didn't check that everything was exactly the same.
I set up replication and replicated all data fine.
Upon performing another schema compare, I discovered that all foreign key constraints are missing, along with default values and indexing.
It appears that the initial snapshot taken for replication does not replicate constraints and default values, due to entity replication being done in an arbitrary order; these constraints would cause errors.
After removing the NOT FOR REPLICATION option from the identity (seed) column using
ALTER TABLE [dbo].[TableName] ALTER COLUMN Id DROP NOT FOR REPLICATION;
I could do another schema compare to re-apply all constraints and default values.

Verify an Oracle database rollback action is successful

How can I verify that an Oracle database rollback action was successful? Can I use the number of rows in the activity log and the number of rows in the event log?
V$TRANSACTION does not contain historical information but it does contain information about all active transactions. In practice this is often enough to quickly and easily monitor rollbacks and estimate when they will complete.
Specifically the columns USED_UBLK and USED_UREC contain the number of UNDO blocks and records remaining. USED_UREC is not always the same as the number of rows; sometimes the number is higher because it includes index entries and sometimes the number is lower because it groups inserts together.
During a long rollback those numbers will decrease until they hit 0. No rows in V$TRANSACTION implies that the transactions have successfully committed or rolled back. Below is a simple example.
create table table1(a number);
create index table1_idx on table1(a);
insert into table1 values(1);
insert into table1 values(1);
insert into table1 values(1);
select used_ublk, used_urec, ses_addr from v$transaction;
USED_UBLK  USED_UREC SES_ADDR
---------- ---------- ----------------
         1          6 000007FF1C5A8EA0
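As a hedged continuation of the example above: while a large rollback is in progress, USED_UREC counts down toward 0, and once the row disappears from V$TRANSACTION the rollback has completed.
rollback;
select used_ublk, used_urec, ses_addr from v$transaction;
-- no rows selected: the transaction has fully rolled back (or committed)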
Oracle LogMiner, which is part of Oracle Database, enables you to query online and archived redo log files through a SQL interface. Redo log files contain information about the history of activity on a database.
LogMiner Benefits
All changes made to user data or to the database dictionary are recorded in the Oracle redo log files so that database recovery operations can be performed.
Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis. The following list describes some key capabilities of LogMiner:
Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. These might include errors such as those where the wrong rows were deleted because of incorrect values in a WHERE clause, rows were updated with incorrect values, the wrong index was dropped, and so forth. For example, a user application could mistakenly update a database to give all employees 100 percent salary increases rather than 10 percent increases, or a database administrator (DBA) could accidentally delete a critical system table. It is important to know exactly when an error was made so that you know when to initiate time-based or change-based recovery. This enables you to restore the database to the state it was in just before corruption. See Querying V$LOGMNR_CONTENTS Based on Column Values for details about how you can use LogMiner to accomplish this.
Determining what actions you would have to take to perform fine-grained recovery at the transaction level. If you fully understand and take into account existing dependencies, it may be possible to perform a table-specific undo operation to return the table to its original state. This is achieved by applying table-specific reconstructed SQL statements that LogMiner provides in the reverse order from which they were originally issued. See Scenario 1: Using LogMiner to Track Changes Made by a Specific User for an example. Normally you would have to restore the table to its previous state, and then apply an archived redo log file to roll it forward.
Performance tuning and capacity planning through trend analysis. You can determine which tables get the most updates and inserts. That information provides a historical perspective on disk access statistics, which can be used for tuning purposes. See Scenario 2: Using LogMiner to Calculate Table Access Statistics for an example.
Performing postauditing. LogMiner can be used to track any data manipulation language (DML) and data definition language (DDL) statements executed on the database, the order in which they were executed, and who executed them. (However, to use LogMiner for such a purpose, you need to have an idea when the event occurred so that you can specify the appropriate logs for analysis; otherwise you might have to mine a large number of redo log files, which can take a long time. Consider using LogMiner as a complementary activity to auditing database use. See the Oracle Database Administrator's Guide for information about database auditing.)
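For example, a minimal, hedged sketch of mining a single archived log with the online catalog dictionary (the file path, schema, and table name are placeholders; the LOGMINING privilege or equivalent is required):
begin
  dbms_logmnr.add_logfile(logfilename => '/u01/arch/arch_0001.log', options => dbms_logmnr.new);
  dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
end;
/
select scn, timestamp, operation, seg_owner, table_name, sql_redo, sql_undo
from   v$logmnr_contents
where  seg_owner = 'HR' and table_name = 'EMPLOYEES';
exec dbms_logmnr.end_logmnr
The SQL_UNDO column is what lets you reconstruct the reverse-order statements mentioned above.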
Enjoy.

Sybase ASE remote row insert locking

I'm working on an application which accesses a Sybase ASE 15.0.2 database, where the current code accesses a remote database (CIS) to insert a row using a proxy table definition (the destination table is a DOL - DRL table; the PK column is defined as identity and is always growing). The current code performs a select to check whether the row already exists, to avoid inserting duplicate data.
Since the remote table also has a PK defined on it, I understand that the PK verification will be done again prior to committing the row.
I'm planning to remove the select check since it is effectively done again by the PK verification, but I'm concerned that, when receiving a file with many duplicates, the table may suffer some unnecessary contention when the data is committed.
It's not clear to me whether Sybase ASE tries to hold the last row and writes the data prior to checking for the duplicate. Also, if the table is very big, I'm concerned about the time it will spend scanning the whole index to find duplicates.
I've found some documentation for SQL Anywhere, but not ASE, at the following link:
http://dcx.sybase.com/1200/en/dbusage/insert-how-transact.html
The best I could find is the following explanation:
https://groups.google.com/forum/?fromgroups#!topic/comp.databases.sybase/tHnOqptD7X8
But it doesn't explain in detail how the row is locked (and whether there is any kind of optimization to write it ahead of, or at the same time as, the PK checking), and also whether it will waste a full PK lookup if I'm positively inserting a row whose PK is positively greater than the last committed row.
Thanks
Alex
Unlike SQL Anywhere, there is no option in ASE to set wait_for_commit. The primary key constraint is checked during the insert, not at commit time. The problem, as I understand it from your post, is a mass insert from a file that may contain duplicates; the approach there is to load into a temp table, check for duplicates, remove the duplicates, and then insert the unique ones. Mass inserts are a lot faster, though they still check for primary key violations; however, there is no cost associated because there is no rolling back. The insert statement is always all or nothing: even if one row is a duplicate, the entire insert statement will fail. Checking before the insert is more of an error-free approach, as opposed to relying on the constraint for the verification, because the constraint violation is going to fail the statement and the rollback is again going to be costly.
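A hedged sketch of that staging approach (table and column names are placeholders):
-- create an empty staging copy, bulk load the incoming file into it,
-- then insert only the keys that do not already exist in the target
select nat_key_col, payload_col into #staging from target_table where 1 = 0
-- ... bcp / bulk load the file into #staging here ...
insert into target_table (nat_key_col, payload_col)
select s.nat_key_col, s.payload_col
from #staging s
where not exists (select 1 from target_table t where t.nat_key_col = s.nat_key_col)
drop table #staging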
Thanks Mike.
The link does have a very quick explanation of the insert from the CIS perspective. It's a variable to keep an eye on, given that CIS may become a significant time consumer if it performs data and syntax checking that will be done again when CIS forwards the insert statement to the target server. I was afraid that CIS could have some influence, beyond the network traffic/time, over the locking/PK checking.
Raju,
I do agree with avoiding the PK duplication by checking whether the row already exists with a select and doing it in a batch, but I'm currently looking for a stop-gap solution, and that may be to perform the insert command in batches of about 50 rows and leave the duplicate key check to the PK.
Hopefully the PK check will be done over a join of the 50 newly inserted rows, and thus avoid traversing the index for each single row...
I'll try to test this and comment back.
Alex
