We have a daily partitioned table with a retention of about 180 days, and we have created a view on it with a GROUP BY ..., TO_CHAR(DATE_COL,'YYYYMM').
Users started extracting data from the view for each month, and at one point a query against the view for 201510 failed with "Object no longer exists", even though the view and the underlying table both exist.
I suspect that an automated Unix job issued the statement that creates the partition for the next day (the automated process is verified in the data dictionary).
My reasoning is: if the query on the view was running, there would have been a read lock on the table, so the partition could not have been created, since adding a partition needs an exclusive lock.
If the view was not being queried, then the ALTER TABLE statement to create the partition would have completed, and any query on the view issued afterwards would not have failed.
Did the query on the view fire at almost the same time the ALTER TABLE ... ADD PARTITION statement was executing? If so, since the ALTER TABLE statement holds an exclusive lock on the table, the query on the view should simply have waited for its read lock. Why did I see this error? Can you please elucidate?
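For concreteness, the setup can be sketched like this (all object names here are assumptions; the actual DDL was not shown):

CREATE OR REPLACE VIEW monthly_summary AS
SELECT some_key,
       TO_CHAR(date_col, 'YYYYMM') AS yyyymm,
       COUNT(*) AS cnt
FROM   daily_tab
GROUP  BY some_key, TO_CHAR(date_col, 'YYYYMM');

-- The automated nightly job presumably runs DDL of this form:
ALTER TABLE daily_tab
  ADD PARTITION p_20151101
  VALUES LESS THAN (TO_DATE('2015-11-02','YYYY-MM-DD'));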
I defined a materialized view for fast refresh by rowid, and from time to time I get duplicate entries for the primary key. The master table definitely has no duplicate entries. The view accesses a remote DB.
I cannot refresh via primary key, because I need an outer join for the case where the referenced id is null.
Most of the time it works fine, but every 1000 entries or so I get an entry twice.
When I update such a duplicated record in the master, the refresh of the view "repairs" the record and I am left with a single record.
We have a RAC cluster with 2 instances.
create materialized view teststatussetat
refresh force on demand with rowid
as
select
    ctssa.uuid         id,
    ctssa.uuid,
    ctssa.rowid        ctssa_rowid,
    ctps.rowid         ctps_rowid,
    ssf.rowid          ssf_rowid,
    ctssa.coretestid   coretest_uuid,
    ctssa.lastupdate,
    ctssa.pstatussetat statussetat,
    ctps.code          status_code,
    ssf.account        statussetfrom
from
    coreteststatussetat@coredb      ctssa,
    coretestprocessstatusrev@coredb ctps,
    coreuser@coredb                 ssf
where
    ssf.uuid(+) = ctssa.statussetfromid and
    ctps.uuid   = ctssa.statusid
;
The log files are created like this:
create materialized view log on coreteststatussetat with sequence, rowid, primary key including new values;
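Since the view is defined with REFRESH FORCE ON DEMAND, the refresh is presumably invoked with something like this (a sketch; the actual call was not shown):

begin
    dbms_mview.refresh('TESTSTATUSSETAT');  -- uses the view's default method (force)
end;
/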
We have an Oracle Database 19c Enterprise Edition Release 19.0.0.0.0.
To watch what happens, I created a job which checks every 5 seconds whether the view contains duplicates for one day. The job found thousands of duplicate entries during the day, but most of them (not all) vanished again, so they were only temporary. The job logged the primary key and also the rowid, so I hoped to find some changing rowids, but all of the duplicated primary keys have distinct rowids.
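The check itself is roughly of this form (a sketch built from the view's columns above; the actual job is not shown):

select id, count(*) as cnt, min(rowid) as rid1, max(rowid) as rid2
from   teststatussetat
where  lastupdate >= trunc(sysdate)
group  by id
having count(*) > 1;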
The data is created via Hibernate. But this should not make a difference. Oracle should not create duplicate entries.
I have a web application using a MariaDB 10.4.10 InnoDB table which is updated every 5 minutes by a script.
The script works like this:
It creates a temp table from a table XY and writes the data from a received CSV file into the temp table. When the data is written, the script starts a transaction, drops the XY table, renames the temp table to XY, and commits the transaction.
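In SQL terms, the flow is roughly this (a sketch; the file name and CSV options are assumptions):

CREATE TABLE tempxy LIKE xy;
LOAD DATA LOCAL INFILE 'received.csv' INTO TABLE tempxy
    FIELDS TERMINATED BY ',' IGNORE 1 LINES;
START TRANSACTION;
DROP TABLE xy;               -- DDL: implicitly commits in MariaDB
RENAME TABLE tempxy TO xy;   -- also DDL; xy does not exist in between
COMMIT;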
Nevertheless, sometimes a user gets an "XY table does not exist" error while working with the application.
I already tried to LOCK the XY table inside the transaction, but it doesn't change a thing.
How can I solve this? Is there any kind of locking that helps here? (I thought locking was no longer possible with InnoDB?)
Do this another way.
1. Create the temporary table (not as a TEMPORARY table, but as a real table). Fill it as needed. Nobody else knows it's there; you have all the time you need.
2. SET autocommit = 0; -- or: LOCK TABLES xy WRITE, tempxy READ;
3. DELETE FROM xy;
4. INSERT INTO xy SELECT * FROM tempxy;
5. COMMIT WORK; -- or: UNLOCK TABLES;
6. DROP TABLE tempxy;
This way, other customers will see the old table until point 5, then they'll start seeing the new table.
If you use LOCK, customers will stall from point 2 to point 5, which, depending on time needed, might be bad.
At point 3, in some scenarios you might be able to optimize things by deleting only the rows that are not in tempxy, and running an INSERT ... ON DUPLICATE KEY UPDATE at point 4.
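That optimization could look like this (a sketch; the id primary key and the col1/col2 columns are assumptions):

DELETE FROM xy WHERE id NOT IN (SELECT id FROM tempxy);
INSERT INTO xy (id, col1, col2)
    SELECT id, col1, col2 FROM tempxy
    ON DUPLICATE KEY UPDATE col1 = VALUES(col1), col2 = VALUES(col2);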
Funnily enough, I recently answered another question that was somewhat like yours.
autoincrement
To prevent the autoincrement column from overflowing, you can replace COMMIT WORK with ALTER TABLE xy AUTO_INCREMENT=. This is a dirty hack and relies on the fact that this DDL command in MySQL/MariaDB executes an implicit COMMIT, immediately followed by the DDL command itself. If nobody else inserts into that table, it is completely safe. If somebody else inserts into that table at the exact same time your script is running, it should be safe in MySQL 5.7 and derived releases; it might not be in other releases and flavours, e.g. MySQL 8.0 or Percona.
In practice, you fill up tempxy using a new autoincrement from 1 (since tempxy has just been created), then perform the DELETE/INSERT, and update the autoincrement counter to the count of rows you've just inserted.
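Concretely, the tail of the script becomes (a sketch; 12345 stands in for the number of rows just inserted):

SET autocommit = 0;
DELETE FROM xy;
INSERT INTO xy SELECT * FROM tempxy;
-- instead of COMMIT WORK; the DDL below runs an implicit COMMIT first:
ALTER TABLE xy AUTO_INCREMENT = 12345;
DROP TABLE tempxy;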
To be completely sure, you can use a cooperative lock around the DDL command, on the one hand, and anyone else wanting to perform an INSERT, on the other:
script thread                              other thread

SELECT GET_LOCK('xy-ddl', 30);
                                           SELECT GET_LOCK('xy-ddl', 30);
ALTER TABLE `xy` AUTO_INCREMENT=12345;     # thread waits while
                                           # script thread commits
                                           # and runs DDL
SELECT RELEASE_LOCK('xy-ddl');             # thread can acquire lock
                                           INSERT INTO ...    # gets id = 12346
DROP TABLE tempxy;
                                           SELECT RELEASE_LOCK('xy-ddl');
What is the difference between a row lock and a table lock in an Oracle database?
Will a FOR loop with an UPDATE statement trigger a table lock?
Any DML statement on a table is going to acquire a table lock. But it is terribly unlikely that this table lock is going to affect another session in a way that limits concurrency. When your session updates rows, there will be a row exclusive table lock which will stop another session from doing DDL on the table (say, adding or removing a column) while there are active, uncommitted transactions involving the table. But presumably, you're not generally trying to modify the structure of the table at the same time that you're updating rows in the table (or understand that when you deploy these DDL changes that you'll block other sessions for a short period of time and you're picking your deployment times accordingly).
The specific rows that you are updating will be locked in order to prevent another session from modifying those rows until your transaction either commits or rolls back. Those row level locks are generally the locks that cause performance and scalability issues. Ideally, your code would be structured to hold the locks for as little time as possible (updating data in sets is much faster than doing row-by-row updates) and to minimize the probability that two sessions will try to update the same row simultaneously.
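As a sketch of the set-based advice (table and column names are assumptions):

-- row-by-row: one UPDATE (and one PL/SQL-to-SQL context switch) per row
begin
    for r in (select id from orders where status = 'OLD') loop
        update orders set status = 'ARCHIVED' where id = r.id;
    end loop;
    commit;
end;
/

-- set-based: the same row locks, taken in one statement, far less overhead
update orders set status = 'ARCHIVED' where status = 'OLD';
commit;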
I want to ask if there is a solution to auto-synchronize a table, e.g. every minute, based on a view created in Oracle.
This view uses data from another table. I've created a trigger, but I noticed a big slowdown in the database whenever a user updates a column or inserts a row.
Furthermore, I've tried to create a scheduled job on the specified table (the one I wanted to keep synchronized with the view), but we don't have the privilege to do this.
Is there any other way to keep the data synchronized between the table and the view?
PS: I'm using Toad for Oracle v12.9.0.71.
A materialized view in Oracle is a database object that contains the results of a query. They are local copies of data located remotely, or are used to create summary tables based on aggregations of a table's data. Materialized views, which store data based on remote tables, are also known as snapshots.
Example:
SQL> CREATE MATERIALIZED VIEW mv_emp_pk
     REFRESH FAST
     START WITH SYSDATE
     NEXT SYSDATE + 1/48
     WITH PRIMARY KEY
     AS SELECT * FROM emp@remote_db;
You can use a cron job or DBMS_JOB to schedule the snapshot refresh.
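For example, with DBMS_JOB (a sketch; the one-minute interval matches what the question asked for):

VARIABLE jobno NUMBER
BEGIN
    DBMS_JOB.SUBMIT(
        job       => :jobno,
        what      => 'DBMS_MVIEW.REFRESH(''mv_emp_pk'');',
        next_date => SYSDATE,
        interval  => 'SYSDATE + 1/1440');   -- every minute
    COMMIT;
END;
/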
I'm using the USER_TAB_MODIFICATIONS view to monitor all my tables' changes in the DB, but sometimes the records disappear.
For example, I updated the data in table A, and ran the following SQL to flush the monitoring info so that I could see the latest information in USER_TAB_MODIFICATIONS.
exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
Then
SELECT * FROM USER_TAB_MODIFICATIONS;
I could then see the record for table A in there.
But then the record for table A disappeared after about 1 minute, even though I didn't do anything further in Oracle.
(The other records in USER_TAB_MODIFICATIONS did not change; no problems there.)
Why is that, and can I change some setting so that the records there will not disappear? Thank you.
From the documentation:
USER_TAB_MODIFICATIONS describes modifications to all tables owned by the current user that have been modified since the last time statistics were gathered on the tables.
You might want to check whether some statistics-gathering process ran in the background on the concerned table between the time the changes were made and the time you saw the record disappear.
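One way to check (a sketch; "A" is the table from the question, and the second query needs DBA privileges):

-- when were statistics last gathered on the table?
SELECT table_name, last_analyzed
FROM   user_tables
WHERE  table_name = 'A';

-- history of statistics operations, newest first:
SELECT operation, target, start_time, end_time
FROM   dba_optstat_operations
ORDER  BY start_time DESC;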