I defined a materialized view for fast refresh by rowid, and from time to time I get duplicate entries for the primary key. The master table definitely has no duplicates. The view accesses a remote DB.
I cannot refresh via primary key because I need an outer join for the case where the referenced id is null.
Most of the time it works fine, but every 1000 entries or so I get an entry twice.
When I update such a duplicated record in the master, the refresh of the view "repairs" it and I get a single record again.
We have a RAC cluster with 2 instances.
create materialized view teststatussetat
refresh force on demand with rowid
as
select
    ctssa.uuid         id,
    ctssa.uuid,
    ctssa.rowid        ctssa_rowid,
    ctps.rowid         ctps_rowid,
    ssf.rowid          ssf_rowid,
    ctssa.coretestid   coretest_uuid,
    ctssa.lastupdate,
    ctssa.pstatussetat statussetat,
    ctps.code          status_code,
    ssf.account        statussetfrom
from
    coreteststatussetat@coredb      ctssa,
    coretestprocessstatusrev@coredb ctps,
    coreuser@coredb                 ssf
where
    ssf.uuid(+) = ctssa.statussetfromid and
    ctps.uuid   = ctssa.statusid
;
The materialized view logs are created like this:
create materialized view log on coreteststatussetat with sequence, rowid, primary key including new values;
We have an Oracle Database 19c Enterprise Edition Release 19.0.0.0.0.
To watch what happens I created a job which checks every 5 seconds whether the view contains duplicates, and let it run for one day. The job found thousands of duplicate entries during the day, but most of them (though not all) vanished again, so they were only temporary. The job logged the primary key and also the rowid, so I hoped to find some changing rowids, but all of the duplicated primary keys have distinct rowids.
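The duplicate check is essentially this query (a simplified sketch, not the exact job code):

select   id, count(*) cnt, min(rowid) r1, max(rowid) r2
from     teststatussetat
group by id
having   count(*) > 1;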
The data is created via Hibernate. But this should not make a difference. Oracle should not create duplicate entries.
Related
I want to ask if there is a way to auto-synchronize a table, e.g. every minute, based on a view created in Oracle.
The view uses data from another table. I created a trigger, but I noticed a big slowdown in the database whenever a user updates a column or inserts a row.
Furthermore, I tried to create a scheduled job on the specified table (the one I want to keep synchronized with the view), however we don't have the privilege to do this.
Is there any other way to keep the data synchronized between the table and the view?
PS: I'm using Toad for Oracle V 12.9.0.71
A materialized view in Oracle is a database object that contains the results of a query. They are local copies of data located remotely, or are used to create summary tables based on aggregations of a table's data. Materialized views, which store data based on remote tables, are also known as snapshots.
Example:
SQL> CREATE MATERIALIZED VIEW mv_emp_pk
REFRESH FAST START WITH SYSDATE
NEXT SYSDATE + 1/48
WITH PRIMARY KEY
AS SELECT * FROM emp@remote_db;
You can use a cron job or DBMS_JOB/DBMS_SCHEDULER to schedule the refresh of the snapshot.
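For example, a DBMS_SCHEDULER job along these lines (job name is a placeholder) would refresh the materialized view every minute; note that fast refresh also requires a materialized view log on the master table at the remote site:

begin
  dbms_scheduler.create_job(
    job_name        => 'REFRESH_MV_EMP_PK',   -- placeholder name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin dbms_mview.refresh(''MV_EMP_PK'', method => ''F''); end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
    enabled         => true);
end;
/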
This problem happens ONLY with the database user that was imported by Data Pump. The problem is NOT present with the original database user.
I am getting a deadlock in Oracle 11.2.0.3 where the current SQL statements of the two transactions participating in the deadlock are as follows:
SELECT /*AApaAA*/ objectid FROM T_DS_0 WHERE objectid = :1 FOR UPDATE
insert /*AApaAA*/ into T_DS_0(OBJECTID) values (:1 )
Both bind variables are 'AApaAA', which is also the primary key. It looks like a deadlock on a single resource.
There are foreign keys (on delete cascade) pointing to that primary key and they are indexed.
The deadlock graph is as follows:
                      ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name         process session holds waits  process session holds waits
TX-000c000f-00000322       49     102     X             46     587           X
TX-00070011-00000da4       46     587     X             49     102           S
It is not clear to me how a deadlock could happen on a single resource. It is true that the insert does not lock the row but the constraint, which is probably a different resource, so a deadlock would theoretically be possible if the first transaction locked the constraint and then the row while the other one did it in the reverse order, but I do not see how that could happen. Theoretically child-table locking is possible (the insert takes SX locks on child tables), but the select for update should not touch the child tables at all.
The full tracefile from oracle is at : https://gist.github.com/milanro/06f9a76a2607a26ac9ba8c91b88639b3
Has anyone experienced similar behavior?
Additional info: this problem happens only when Data Pump is used to duplicate the DB user. The original schema contains SYS-generated indexes created during creation of the primary keys. There are further indexes which start with the primary key column. Data Pump then does not create the SYS index on the primary key column but uses the indexes starting with the primary key column.
It looks like when I create the following database objects:
create table tbl(objectid varchar2(30), a integer, primary key (objectid));
create index idx1 on tbl(objectid,a);
one table and two indexes are created: the SYS index on (OBJECTID) and idx1(OBJECTID,A). The PRIMARY KEY uses the SYS index.
After the Data Pump import is performed, on the imported side only one table and one index are created; the index is idx1(OBJECTID, A) and that index is used for the PRIMARY KEY.
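Which index actually enforces the primary key can be checked on both sides with a query like this (a sketch for the tbl example above):

select constraint_name, index_name
from   user_constraints
where  table_name = 'TBL'
and    constraint_type = 'P';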
This happens in my database schema with the table SDMBase. The deadlocks happen when different transactions use the combination of INSERT INTO SDMBase ... and SELECT ... FROM SDMBase ... FOR UPDATE with the same OBJECTID. The same code is executed in those transactions, and in one transaction it can look like this:
1.INSERT (objectid1)
2.SELECT FOR UPDATE (objectid1)
3.INSERT (objectid1)
4.SELECT FOR UPDATE (objectid1)
...
The deadlocking situation happens on lines 2 and 3. In my use case, when these transactions run, the row with objectid1 is already in the database but does not have to be committed yet.
So I suppose that
step 1 should wait until objectid1 is committed, then fail and lock nothing
step 2 should lock objectid1, or wait if another transaction has already locked it
step 3 should fail immediately and lock nothing
...
Apparently step 1, even if it fails, holds a lock on the PK for some time, but only in the case where the database was duplicated by Data Pump.
These deadlocks are rare and not reproducible manually; I suppose the lock is not held for the whole transaction but only for a very short time.
So it could be as follows (see the SQL sketch after the sequence):
TX1: 1.INSERT (objectid1) (fails and does not lock)
TX1: 2.SELECT (objectid1) (locks SDMBase)
TX2: 1.INSERT (objectid1) (fails but locks PK)
TX1: 3.INSERT (objectid1) (waits on PK)
TX2: 2.SELECT (objectid1) (waits on SDMBase)
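In SQL terms, using the tbl example from above, the suspected interleaving would look roughly like this (a sketch, not a reliable reproducer):

-- session 1 (TX1):
insert into tbl (objectid, a) values ('objectid1', 1);             -- fails with ORA-00001, does not lock
select objectid from tbl where objectid = 'objectid1' for update;  -- locks the row
-- session 2 (TX2):
insert into tbl (objectid, a) values ('objectid1', 1);             -- fails, but briefly holds the PK enqueue
-- session 1 (TX1):
insert into tbl (objectid, a) values ('objectid1', 1);             -- waits on the PK
-- session 2 (TX2):
select objectid from tbl where objectid = 'objectid1' for update;  -- waits on the row: deadlock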
Even if I create the index in the imported schema as SDMBase(OBJECTID) and let the PRIMARY KEY use it, and even if I recreate the other index (OBJECTID,...), it still deadlocks. So I suppose there is some problem with the PK constraint check.
The fix for this problem is to create the index SDMBase(OBJECTID), let the PRIMARY KEY use it, and then perform the Data Pump again. The import must be performed in two steps: the first one excludes indexes and the second one imports only the indexes:
exclude=INDEX/INDEX,statistics
include=INDEX/INDEX CONTENT=METADATA_ONLY
This problem occurs in both 11.2.0.3 and 12.2.0.1
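For reference, recreating the primary key so that it uses its own single-column index might look like this (constraint and index names are placeholders):

alter table SDMBase drop constraint sdmbase_pk keep index;       -- keep the existing composite index
create unique index sdmbase_objectid_ix on SDMBase (objectid);
alter table SDMBase add constraint sdmbase_pk
  primary key (objectid) using index sdmbase_objectid_ix;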
According to the Oracle documentation:
You should not use ROWID as the primary key of a table. If you delete
and reinsert a row with the Import and Export utilities, for example,
then its rowid may change. If you delete a row, then Oracle may
reassign its rowid to a new row inserted later.
I don't understand the actual reason. Does it mean we shouldn't use ROWID as a primary key only when we use the Import/Export utilities, or that we should never use ROWID as a primary key?
As explained above, when we delete a row and re-insert it, the same ROWID may get assigned; but on the other hand the original row was already deleted, so there shouldn't be any problem if we get the same ROWID, should there? Can anyone explain this with an example?
If you rebuild your table, the ROWIDs of its rows may change, and you don't want your primary key to change.
Also, if you delete one record, a new record inserted later could be given that ROWID. You should also understand that ROWIDs do not persist across a database export and import.
From here
If rows are moved, the ROWID will change. Rows can move due to
maintenance operations like shrinks and table moves. As a result,
storing ROWIDs for long periods of time is a bad idea. They should
only be used in a single transaction, preferably as part of a SELECT
... FOR UPDATE, where the row is locked, preventing row movement.
We should never use ROWIDs as primary keys for permanent and business-important data.
ROWID is the technical address of a row. There are several scenarios in which
a) the rowid of an existing record changes, or
b) different records end up with the same rowid.
For example, if you have a partitioned table, updating a record's partitioning key changes the record's rowid. Such scenarios rule out ROWID keys unless the data can be discarded without serious consequences.
ROWID keys can be used for noncritical temporary data, such as exceptions tables, or for short-term navigation, such as in the WHERE CURRENT OF clause.
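A minimal sketch of the partitioning scenario (table, column and partition names are made up):

create table t_part (id number, region varchar2(10))
  partition by list (region)
  (partition p_east values ('EAST'),
   partition p_west values ('WEST'))
  enable row movement;

insert into t_part values (1, 'EAST');
select rowid from t_part where id = 1;            -- original rowid

update t_part set region = 'WEST' where id = 1;   -- row moves to p_west
select rowid from t_part where id = 1;            -- a different rowid now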
I need to update the primary key of a large Index Organized Table (20 million rows) on Oracle 11g.
Is it possible to do this using multiple UPDATE queries, i.e. many smaller UPDATEs of, say, 100,000 rows at a time? The problem is that one of these UPDATE batches could temporarily produce a duplicate primary key value (there would be no duplicates after all the UPDATEs have completed).
So I guess I'm asking: is it somehow possible to temporarily disable the primary key constraint (which is required for an IOT!) or alter the table temporarily in some other way? I can have exclusive and offline access to this table.
The only solution I can see is to create a new table and when complete, drop the original table and rename the new table to the original table name.
Am I missing another possibility?
You can't disable / drop the primary key constraint from an IOT, since it is a unique index by definition.
When I need to change an IOT like this, I do a CTAS (create table as select) into a new plain heap table, do my maintenance, and then CTAS a new IOT.
Something like:
create table t_temp as select * from t_iot;   -- plain heap copy of the IOT
-- do maintenance on t_temp (update the key values, etc.)
-- recreate the IOT: declare the PK and ORGANIZATION INDEX explicitly (id/val are placeholder columns)
create table t_new_iot (id primary key, val) organization index as select * from t_temp;
If, however, you need to simply add or join a new field to the existing key, you can do this in one step by creating the new IOT structure, then populating directly from the old IOT with a query.
Unfortunately, this is one of the downsides to IOTs.
I would recommend the following method (a sketch follows the steps):
1. Create a new IOT table partitioned by system with a single partition, with exactly the same structure as the current one.
2. Lock the current IOT table to prevent any DML.
3. Insert into the new table as select from the current table, changing the PK values in the select. This step could be repeated several times if needed; in that case it's better to do it in another session to keep the lock on the original table.
4. Exchange the partition of the new table with the original table.
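A rough sketch of the approach (table and column names are placeholders; a single MAXVALUE range partition stands in here for the system partition, and the locking/batching from steps 2 and 3 is omitted):

create table t_iot_stage (
  id  number,
  val varchar2(100),
  constraint t_iot_stage_pk primary key (id)
) organization index
  partition by range (id)
  (partition p_all values less than (maxvalue));

insert into t_iot_stage                      -- step 3: repopulate with the changed PK values
  select id + 1000000, val from t_iot;

alter table t_iot_stage exchange partition p_all with table t_iot;   -- step 4: swap with the original IOT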
I have 62 columns in a table under SQL Server 2005, and LINQ to SQL doesn't handle the updates, though reading works just fine. I tried re-adding the table to the model and created a new data model, but nothing worked. I'm guessing I've hit a maximum number of columns limit on an object; can anyone explain that?
I suspect there is some issue with an identity or timestamp column (something autogenerated on the SQL server). Make sure that any column that is autogenerated is marked that way in the model. You might also want to look at how it is handling concurrency. If you have triggers that update any values on the row after it is updated (changing values) and it is checking all columns on updates, this would cause the update to fail. Typically I create my tables with a timestamp column -- LINQ2SQL picks this up when I generate the model and uses it alone for concurrency.
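For example, adding such a column is a one-liner (the table and column names are placeholders); regenerate the LINQ to SQL model afterwards so it picks the column up for concurrency checking:

ALTER TABLE dbo.MyTable ADD RowVer timestamp;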
Solved; it was one of the following two:
- I was using a UniqueIdentifier column that was not set as primary key.
- Set the UniqueIdentifier column as primary key, checked the properties of the same column in Server Explorer and it was still not showing as primary key, refreshed the connection, dropped the same table onto the model again, and voila.
So I assume I had made a change some time before, deleted the table from the model and re-added it from Server Explorer without refreshing the connection, which is why it never worked.
The question is: does VS Server Explorer maintain its own copy of the table schema and require a connection refresh every time a change is made in the database?
There is no limit to the number of columns LINQ to SQL will handle.
Have you got other tables updating successfully?
What else is different about how you are accessing the table content?