I am trying to understand how a materialized view (MV) works when I insert a batch of 10,000 records.
How many times will the MV run?
Once for all records, or 10,000 times?
And if another client inserts at the same time, what will happen?
Can anyone explain the mechanism?
Thanks.
In general, once per insert.
A materialized view never reads the base table.
Every insert propagates the inserted block into the MV.
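For illustration, here is a minimal ClickHouse sketch (the table, view, and column names are assumptions, not from the question). A single INSERT of 10,000 rows arrives as one block, so the MV's SELECT runs once over that block only; a concurrent insert from another client is simply another block, processed independently:

-- hypothetical source table and aggregation target
CREATE TABLE events (ts DateTime, user_id UInt64, amount Float64)
ENGINE = MergeTree ORDER BY ts;

CREATE TABLE events_daily (day Date, total Float64)
ENGINE = SummingMergeTree ORDER BY day;

CREATE MATERIALIZED VIEW events_daily_mv TO events_daily AS
SELECT toDate(ts) AS day, sum(amount) AS total
FROM events
GROUP BY day;

-- one 10000-row insert = one block = one pass of the MV's SELECT
INSERT INTO events SELECT now(), number, 1.0 FROM numbers(10000);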
https://den-crane.github.io/Everything_you_should_know_about_materialized_views_commented.pdf
https://youtu.be/ckChUkC3Pns?t=9326
I am facing a problem, please help.
I have written a trigger to insert data from a testing table into a main table.
My doubt is: when I insert data into the testing table, does the trigger lock the testing table while it is in the process of inserting into the main table?
The main thing to clarify is whether, when multiple inserts into the testing table happen at the same time, new records can still be inserted.
For example, each insert into the main table takes 2-3 seconds.
If more than 2 inserts arrive per second, how does it work?
Thanks in advance.
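For illustration, a minimal sketch of the kind of trigger I mean (Oracle syntax; table and column names are placeholders):

CREATE OR REPLACE TRIGGER trg_copy_to_main
AFTER INSERT ON testing_table
FOR EACH ROW
BEGIN
  -- copy the newly inserted row into the main table
  INSERT INTO main_table (id, payload)
  VALUES (:NEW.id, :NEW.payload);
END;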
Locking means that you have found a use case where the parallel work of several users leads to data inconsistency or other bad things, and you introduce locking to prevent it. Do you see such a use case in your task? If so, think about locking.
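If you do find such a use case, here is a hedged sketch of explicit serialization in Oracle (the table name is assumed). Note that by default, concurrent plain INSERTs into the same table do not block each other, because Oracle locks rows, not tables:

-- only if concurrent runs really must be serialized
LOCK TABLE testing_table IN EXCLUSIVE MODE;
-- ... the work that must not run concurrently ...
COMMIT;  -- the commit (or rollback) releases the lock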
With respect to Oracle RDBMS, which rollback is faster?
Rollback1 : Insert 1000000 records and then rollback
or
Rollback2 : Delete 1000000 records and then rollback
You can find out the percentage of completion of a query using this:
SELECT session_id, percent_complete, estimated_completion_time
FROM sys.dm_exec_requests;
To answer this question we should know how Oracle handles insert and delete operations internally. I know that when you insert something, the rows are changed in memory first, and when you commit, Oracle makes the change durable on disk.
For the delete operation I found this: http://www.dba-oracle.com/t_oracle_soft_logical_deletes.htm So it usually deletes logically, and when possible it deletes in real time.
Now we should talk about rollback. When you roll back an insert, the newly inserted rows simply have to be removed again; the undo of an insert is essentially a delete. It sounds simple. When you roll back a delete (if Oracle deleted the rows in real time), it has to read the old row images from the undo data and insert the deleted rows back into the database.
So if my reasoning is right, rolling back a delete operation should take more time than rolling back an insert.
Also, if you are deleting with a condition, the delete process itself should take more time than the insert.
P.S. Thanks for the question by the way, it's interesting and made me do some research on Oracle internals.
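A minimal sketch of the two scenarios being compared (big_t and big_t_source are placeholder names):

-- Rollback 1: roll back an insert; undoing it just means removing the new rows again
INSERT INTO big_t SELECT * FROM big_t_source;  -- ~1,000,000 rows
ROLLBACK;

-- Rollback 2: roll back a delete; undoing it means re-inserting the full row images kept in undo
DELETE FROM big_t;
ROLLBACK;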
We have a batch process which reads the base tables, performs some aggregation, and then updates the tables with a modified flag.
We have an update statement which updates around 3 million rows. As part of the business requirement we need a table-level lock on the table which we are updating.
UPDATE TABLE1 t1 SET PARAMETER1 = (SELECT p1 FROM TABLE2 t2 WHERE t1.ROW_ID = ROWIDTOCHAR(t2.ROW_ID))
The observation we made today is that the update statement takes 35 minutes with the table-level lock and 20 minutes without it.
I am not able to explain this observation. Please help!
Cheers,
Dwarak
Nobody but your database can tell you the reason for your observation. You'll have to look at an AWR report.
However, it is rather unlikely that the UPDATE would run longer just because the table had been locked beforehand.
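For reference, one common way to generate an AWR report from SQL*Plus (it requires the Diagnostics Pack license):

-- prompts for the snapshot range and report format
@?/rdbms/admin/awrrpt.sql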
Did you account for caching (both in the database and the filesystem) in your testing? Depending on what you did when, one statement might have run faster due to data already being in memory.
I have a table in an Oracle database with 15 fields.
This table had 3,500,000 rows inserted into it. I deleted them all.
delete
from table
After that, whenever I execute a select statement
I get a very slow response (7 seconds), even though the table is empty.
I get a normal response only when I search
on an indexed field.
Why?
As Gritem says, you need to understand high water marks, etc.
If you do not want to truncate the table now (because fresh data has been inserted), use ALTER TABLE xyz SHRINK SPACE, documented here for 10g.
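A minimal sketch of reclaiming the space below the high water mark (xyz is a placeholder table name; the segment must be in an ASSM tablespace):

-- row movement must be enabled before SHRINK SPACE
ALTER TABLE xyz ENABLE ROW MOVEMENT;
ALTER TABLE xyz SHRINK SPACE;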
Tom Kyte has a good explanation of this issue:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:492636200346818072
It should help you understand deletes, truncates, and high water marks.
In SQL, when you want to completely clear out a table, you should use TRUNCATE instead of DELETE. Let's say you have your table with 3.5 million rows in it and there is an identity-style BIGINT column that increments for each row. Truncating the table will completely clear out the table and reset that counter to 0. DELETE will not reset it, so the next record inserted will continue at 3,500,001. TRUNCATE is also much faster than DELETE. Read the articles below to understand the differences.
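A quick sketch of the practical difference (my_table is a placeholder name):

DELETE FROM my_table;      -- row by row, generates undo for every row, keeps the high water mark
TRUNCATE TABLE my_table;   -- DDL, resets the high water mark, cannot be rolled back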
Read this article, which explains the difference between TRUNCATE and DELETE; there are times to use each one. Here is another article from an Oracle point of view.
I have a cursor that selects all rows in a table, a little over 500,000 rows. Read a row from the cursor, INSERT into the other table, which has two indexes, neither unique, one numeric, one DATE type. COMMIT. Read the next row from the cursor, INSERT... until the cursor is empty.
All my DATE column's values are the same, from a timestamp initialized at the start of the script.
This thing's been running for 24 hours, only posted 464K rows, a little less than 10K rows / hr.
Oracle 11g, 10 processors(!?)
Something has to be wrong. I think it's that DATE index trying to process all these entries with exactly the same value for that column.
Why don't you just do:
insert into target (columns....)
select columns and computed values
from source
commit
?
This slow-by-slow processing is doing far more damage to performance than an index that may not even make sense.
Indexes slow down inserts but speed up queries. This is normal.
If it is a problem you can remove the index, insert the rows, then add the index again. This can be faster if you are doing many inserts at once.
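A hedged sketch of that approach (the index and table names are placeholders):

DROP INDEX target_date_idx;
-- ... perform the bulk insert into target_table ...
CREATE INDEX target_date_idx ON target_table (date_col);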
The way you are copying the data using cursors seems to be inefficient. You could try a set-based approach instead:
INSERT INTO table1 (x, y, z)
SELECT x, y, z FROM table2 WHERE ...
Committing after every inserted row doesn't make much sense. If you're worried about exceeding undo capacity, for example, you can keep a count of the inserts and issue a commit after every thousand rows.
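If you do keep the row-by-row loop, here is a sketch of counted commits (it assumes the source and destination tables have the same column structure):

DECLARE
  l_count PLS_INTEGER := 0;
BEGIN
  FOR r IN (SELECT * FROM source_table) LOOP
    INSERT INTO dest_table VALUES r;   -- record insert; works when r matches the table's row structure
    l_count := l_count + 1;
    IF MOD(l_count, 1000) = 0 THEN
      COMMIT;                          -- commit every thousand rows instead of every row
    END IF;
  END LOOP;
  COMMIT;
END;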
Updating the indexes will have some impact, but that's unavoidable if you can't drop (or disable) them while the inserts are performed. I'd expect the commits to have a bigger impact, though I suspect that's a topic with varied opinions.
This assumes you have a good reason for inserting from a cursor rather than using a direct insert into ... select from approach.
In general, it's often a good idea to drop the indexes before doing a massive insert and then add them back afterwards, so that the db doesn't have to try to update the indexes with each insert. It's been a long while since I've used Oracle, but have you tried putting more than one insert statement in a transaction? That should also speed it up.
For operations like this you should look at Oracle bulk operations, using FORALL and BULK COLLECT. They considerably reduce the number of PL/SQL-to-SQL context switches for the DML on the underlying tables.
create or replace procedure fast_proc is
  -- collection type matching the source table's rows
  type t_rows is table of source_table%ROWTYPE;
  l_rows t_rows;
begin
  -- fetch the whole source table into memory in one pass
  select * bulk collect into l_rows from source_table;

  -- one bulk insert instead of row-by-row inserts
  forall i in l_rows.first .. l_rows.last
    insert into dest_table values l_rows(i);
end;
Agreed with the comment that what is killing your time is the 'slow by slow' processing. Copying 500,000 rows should be a matter of minutes.
The single INSERT ... SELECT FROM .... approach would be the best one, provided you have big enough Rollback segments. The database may even automatically apply parallel techniques to a plain SQL statement that it will not do with PL/SQL.
In addition you could look at using the /*+ APPEND */ hint - read up on it and see if it may apply to the situation with your target table.
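For example (the table names are placeholders; note that after a direct-path insert the session must commit before it can query the table again):

INSERT /*+ APPEND */ INTO dest_table
SELECT * FROM source_table;
COMMIT;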
To use all 10 cores you will need to either use plain parallel SQL, or run 10 copies of your PL/SQL block, splitting the source table across the 10 copies.
In Oracle 10 this is a manual task (roll your own parallelism) but Oracle 11.2 introduces DBMS_PARALLEL_EXECUTE.
Failing that, bulking up your fetch/insert using BULK COLLECT and a bulk insert would be the next best option - process in chunks of 1000 or so rows (or larger). Again, take a look at whether DBMS_PARALLEL_EXECUTE may help you, or whether you could submit the job in chunks via DBMS_JOB.
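A sketch of that chunked fetch/insert variant (table names are placeholders and both tables are assumed to have the same columns):

DECLARE
  CURSOR c_src IS SELECT * FROM source_table;
  TYPE t_rows IS TABLE OF source_table%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_src;
  LOOP
    -- fetch the next chunk of up to 1000 rows
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;
    -- one bulk insert per chunk
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO dest_table VALUES l_rows(i);
  END LOOP;
  CLOSE c_src;
  COMMIT;
END;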
(Caveat : I don't have access to anything later than Oracle 10)