This is a general question about the Oracle MERGE INTO statement with a particular scenario, on Oracle RDBMS 12c.
Daily data will be loaded to StagingTableA - about 10m rows.
This will be MERGEd INTO TableA.
TableA will vary between 0 and 10m rows (matching StagingTableA).
There may be times when TableA will be pruned/emptied and left with 0 rows.
Clearly, when TableA is empty, a straight INSERT will do the job, but the procedure has been written to use a MERGE INTO method to handle all scenarios.
The MERGE .. MATCH is on an indexed column.
My question concerns how the MERGE handles the MATCH in circumstances where TableA starts empty and then grows hugely during the MERGE execution. The MATCH on the indexed column will use a full table scan (FTS), as the stats will show the table has 0 rows.
At some point during the MERGE transaction, this will become inefficient.
Is the MERGE statement clever enough to detect this and change the execution plan, and start using the index instead of the FTS?
If this were done the old way with a CURSOR, UPDATE and INSERT, then we could potentially introduce an ANALYZE on TableA at an appropriate point (say after 50,000 rows processed) to switch to an optimal plan.
I haven't been able to find any documentation dealing with this specific question.
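For reference, a minimal sketch of the kind of MERGE in question (the table names come from the scenario above; match_col and col1 are placeholder column names, with match_col being the indexed MATCH column):

MERGE INTO TableA t
USING StagingTableA s
   ON (t.match_col = s.match_col)
WHEN MATCHED THEN
  UPDATE SET t.col1 = s.col1
WHEN NOT MATCHED THEN
  INSERT (t.match_col, t.col1)
  VALUES (s.match_col, s.col1);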
Hopefully you've got a UNIQUE index on that table, based on the incoming data. If I were you, rather than using a simple MERGE I'd:
Mark all indexes on the table as UNUSABLE, except for the unique index.
INSERT all records
Catch the DUPLICATE VALUE ON INDEX exception at the time of INSERT and issue the appropriate UPDATE.
DELETE processed rows from the input table.
Commit every N records (1000? 10000? 100000? Your choice...), calling DBMS_STATS.GATHER_TABLE_STATS for the table you've inserted into after each COMMIT.
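A rough PL/SQL sketch of that loop, assuming TableA and StagingTableA share the same column list and using placeholder names match_col/col1 and a 10,000-row batch size (marking the non-unique indexes UNUSABLE and deleting processed rows from the staging table are left out for brevity):

declare
  v_rows pls_integer := 0;
begin
  for r in (select * from StagingTableA) loop
    begin
      insert into TableA values r;  -- record insert; assumes identical column lists
    exception
      when dup_val_on_index then
        update TableA set col1 = r.col1 where match_col = r.match_col;
    end;
    v_rows := v_rows + 1;
    if mod(v_rows, 10000) = 0 then
      commit;
      dbms_stats.gather_table_stats(ownname => user, tabname => 'TABLEA');
    end if;
  end loop;
  commit;
  dbms_stats.gather_table_stats(ownname => user, tabname => 'TABLEA');
end;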
Best of luck.
Related
We have a ClickHouse table with the CollapsingMergeTree engine. We want to update the records as and when data is imported from the source (which may be frequent). Initially all the records are inserted with a +1 sign column. To update a record, we insert the record to be updated with the same values and a -1 sign, and then insert an updated record with a +1 sign, expecting that the same records with opposite signs will be collapsed by ClickHouse when data parts are merged in the background.
The problem is 'It never happens'
I am aware of the fact that ClickHouse will merge data asynchronously, but it's been months and no merge has been performed by ClickHouse.
I queried SELECT * FROM system.merges to know if any merges are in progress. Query result was 0. Also updated ClickHouse to its latest version. But no luck!
Would appreciate your help if anyone can point out what the issue is. Am I missing any server-level settings? When does ClickHouse merge such records?
Or any other approach I should be taking to update ClickHouse data?
eventually -- could be never.
You should not rely on the merge process. It has its own complicated algorithm to balance the number of parts.
A merge has no goal of producing a final merge -- ending up with one part -- because that's not efficient.
optimize forces an unplanned merge, and you'll get, for example, 4 parts from 22.
optimize final forces unplanned merges of all parts until only one part remains in the partition.
The only problem with final is that it rewrites even a single part into a new single part, because sometimes rows need to be collapsed within one part.
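For illustration, hypothetical invocations (the table name and the partition literal are placeholders and depend on your partition key):

OPTIMIZE TABLE my_table PARTITION 202001;         -- forces an unplanned merge; may still leave several parts
OPTIMIZE TABLE my_table PARTITION 202001 FINAL;   -- keeps merging until the partition is a single part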
So for some tables we run optimize table x partition final (by crond) for partitions which have more than one part.
select concat('optimize table ', database, '.', '`', table, '` partition ', ((groupUniqArray(partition)) as partition_count)[1], ' final')
from system.parts
where (database = 'xxx' or database like 'zzzz\_%') and (database,table) in (select database, table from system.replicas where engine='ReplicatedReplacingMergeTree' and is_leader)
and table not like '.inner%'
group by database,table having length(partition_count)>1 and sum(rows) < 5000000
https://stackoverflow.com/a/60154073/11644308
Considering this Oracle doc about indexes, this one about the speed of inserts, and this question on StackOverflow led me to the conclusion that:
Indexes help us locate information faster
Primary and Unique Keys are indexed automatically
Inserting with indexes can cause worse performance
However, every time indexes are discussed, only SELECT operations are shown as examples.
My question is: are indexes used in INSERT and UPDATE operations? When and how?
My suggestions are:
UPDATE can use an index in the WHERE clause (if the column in the clause is indexed)
INSERT can use an index when it uses a SELECT (but in that case the index belongs to another table)
or probably when checking integrity constraints
but I don't have such deep knowledge of using indexes.
For UPDATE statements, an index can be used by the optimiser if it deems that the index will speed things up. The index would be used to locate the rows to be updated. The index is also a table, in a manner of speaking, so if the indexed column is being updated, the index obviously needs to be updated as well. On the other hand, if you're running an update without a WHERE clause, the optimiser may choose not to use an index, since it has to access the whole table anyway and a full table scan may be more efficient (but it may still have to update the index). The optimiser makes those decisions at runtime based on several parameters, such as whether there are valid stats on the tables and indexes in question, how much data is affected, what type of hardware, etc.
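For instance, a small sketch with hypothetical table, column and index names:

-- assuming an index exists on orders(customer_id)
UPDATE orders
   SET status = 'CLOSED'
 WHERE customer_id = 42;  -- the optimiser can use the index to locate the rows to update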
For INSERT statements, although the INSERT itself does not need the index to locate anything, the index will also need to be 'inserted into', so it will still be accessed by Oracle. Another case where an INSERT can cause an index to be used is an INSERT like this:
INSERT INTO mytable (mycolumn)
SELECT mycolumn + 10 FROM mytable;
An INSERT statement gains no direct benefit from an index, but more indexes on a table make insert operations slower. Think about a table that has no index on it: if you want to add a row, the database finds a table block with enough free space and stores the row there. But if that table has indexes, the database must also make sure the new rows can be found via those indexes, so adding new rows also requires an entry in each index. That multiplies the work of the insert operation: the more indexes you have, the more time you need to insert new rows.
For UPDATE, it depends on whether you update an indexed column or not. If you are not updating an indexed column, then performance should not be affected. Indexes can also speed up UPDATE statements if the WHERE conditions can make use of them.
I'm working with an application that has a large amount of outdated data clogging up a table in my database. Ideally, I'd want to delete all entries in the table whose reference date is too old:
delete outdatedTable where referenceDate < :deletionCutoffDate
If this statement were to be run, it would take ages to complete, so I'd rather break it up into chunks with the following:
delete outdatedTable where referenceDate < :deletionCutoffDate and rownum <= 10000
In testing, this works surprisingly slowly. The following query, however, runs dramatically faster:
delete outdatedTable where rownum <= 10000
I've been reading through multiple blogs and similar questions on StackOverflow, but I haven't yet found a straightforward description of how/whether using rownum affects the Oracle optimizer when there are other Where clauses in the query. In my case, it seems to me as if Oracle checks
referenceDate < :deletionCutoffDate
on every single row, executes a massive Select on all matching rows, and only then filters out the top 10000 rows to return. Is this in fact the case? If so, is there any clever way to make Oracle stop checking the Where clause as soon as it's found enough matching rows?
How about a different approach without so much DML on the table? As a permanent solution for the future, you could go for table partitioning.
Create a new table with required partition(s).
Move ONLY the required rows from your existing table to the new partitioned table.
Once the new table is populated, add the required constraints and indexes.
Drop the old table.
In future, you would just need to DROP the old partitions.
CTAS (create table as select) is another way; however, if you want the new table to be partitioned, you would have to use the exchange partition concept.
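A rough sketch of the partitioned-table approach described above, with hypothetical names and partition boundaries:

-- new table range-partitioned by referenceDate, populated via CTAS with only the rows worth keeping
CREATE TABLE outdatedTable_part
PARTITION BY RANGE (referenceDate)
( PARTITION p_2013 VALUES LESS THAN (DATE '2014-01-01'),
  PARTITION p_2014 VALUES LESS THAN (DATE '2015-01-01'),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
)
AS
SELECT * FROM outdatedTable WHERE referenceDate >= DATE '2013-01-01';

-- recreate constraints and indexes, then drop/rename the old table

-- in future, removing an old year's data is a quick metadata operation:
ALTER TABLE outdatedTable_part DROP PARTITION p_2013;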
First of all, you should read about SQL statement execution plans and learn how to use EXPLAIN PLAN. It will help you find answers to questions like this.
Generally, one single delete is more efficient than several chunked ones. Its main disadvantage is heavy use of the undo tablespace.
If you wish to delete most rows of a table, a much faster way is usually this trick:
create table new_table as select * from old_table where date >= :date_limit;
drop table old_table;
rename new_table to old_table;
... recreate indexes and other stuff ...
If you wish to do it more than once, partitioning is a much better way. If the table is partitioned by date, you can select the current data quickly, and you can drop the partition with outdated data in milliseconds.
Finally, partitioning is a way to avoid 'deleting outdated records' altogether. Sometimes we need old data, and it's a pity to delete it with our own hands. With partitioning you can archive outdated partitions outside of the database and reconnect them when you need to access the old data.
This is an old request, but I'd like to show another approach (also using partitions).
Depending on what you consider old, you could create corresponding partitions (optimally exactly two; one current, one old; but you could just as well make more), e.g.:
PARTITION BY LIST ( mod(extract(year from referenceDate), 2) )
(
PARTITION year_odd VALUES (1),
PARTITION year_even VALUES (0)
);
This could as well be months (Jan, Feb, ... Dec), decades (XX0X, XX1X, ... XX9X), half years (first_half, second_half), etc. Anything circular.
Then whenever you want to get rid of old data, truncate:
ALTER TABLE mytable TRUNCATE PARTITION year_even;
delete from your_table
where PK not in
(select PK from your_table where rownum<=...) -- these records you want to leave
Shouldn't the following query work fine for deleting duplicate rows in Oracle?
SQL> delete from sessions o
where 1<=(select count(*)
from sessions i
where i.id!=o.id
and o.data=i.data);
It seems to delete all the duplicated rows!! (I wish to keep one, though)
Your statement doesn't work because your table has at least one row where two different ID's share the same values for DATA.
Although your intent may be to look for differing values of DATA ID by ID, what your SQL is saying is in fact set-based: "Look at my table as a whole. If there are any rows in the table such that the DATA is the same but the ID's are different (i.e., that inner COUNT(*) is anything greater than 0), then DELETE every row in the table."
You may be attempting specific, row-based logic, but your statement is big-picture (set-based). There's nothing in it to single out duplicate rows, as there is in the solution Ollie has linked to, for example.
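For reference, a common row-based pattern for keeping one row per DATA value is the rowid trick (a sketch, not necessarily the linked solution):

delete from sessions o
 where o.rowid not in (select min(i.rowid)
                         from sessions i
                        group by i.data);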
I have a cursor that selects all rows in a table, a little over 500,000 rows. Read a row from the cursor, INSERT into the other table, which has two indexes, neither unique, one numeric, one 'DATE' type. COMMIT. Read the next row from the cursor, INSERT... until the cursor is empty.
All my DATE column's values are the same, from a timestamp initialized at the start of the script.
This thing's been running for 24 hours, only posted 464K rows, a little less than 10K rows / hr.
Oracle 11g, 10 processors(!?)
Something has to be wrong. I think it's that DATE index trying to process all these entries with exactly the same value for that column.
Why don't you just do:
insert into target (columns....)
select columns and computed values
from source
commit
?
This slow-by-slow processing is doing far more damage to performance than an index that may not make any sense anyway.
Indexes slow down inserts but speed up queries. This is normal.
If it is a problem you can remove the index, insert the rows, then add the index again. This can be faster if you are doing many inserts at once.
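A minimal sketch of that pattern with hypothetical names (for a non-unique index you could also mark it UNUSABLE and REBUILD it instead of dropping it):

DROP INDEX dest_table_date_idx;
-- ... perform the bulk insert here ...
CREATE INDEX dest_table_date_idx ON dest_table (created_dt);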
The way you are copying the data using cursors seems to be inefficient. You could try a set-based approach instead:
INSERT INTO table1 (x, y, z)
SELECT x, y, z FROM table2 WHERE ...
Committing after every inserted row doesn't make much sense. If you're worried about exceeding undo capacity, for example, you can keep a count of the inserts and issue a commit after every thousand rows.
Updating the indexes will have some impact, but that's unavoidable if you can't drop (or disable) them while the inserts are performed; that's just how it goes. I'd expect the commits to have a bigger impact, though I suspect that's a topic with varied opinions.
This assumes you have a good reason for inserting from a cursor rather than using a direct insert into ... select from statement.
In general, it's often a good idea to delete the indexes before doing a massive insert and then add them back afterwards, so that the db doesn't have to try to update the indexes with each insert. It's been a long while since I've used Oracle, but have you tried putting more than one insert statement in a transaction? That should also speed it up.
For operations like this you should look at Oracle bulk operations, using FORALL and BULK COLLECT. They will reduce the number of individual DML operations (and the context switches between PL/SQL and SQL) on the underlying tables considerably.
create or replace procedure fast_proc is
  type t_rows is table of source_table%ROWTYPE;
  l_rows t_rows;
begin
  -- fetch the whole source table into a collection in one pass
  select * bulk collect into l_rows from source_table;
  -- insert all rows with a single bulk bind
  forall x in l_rows.first .. l_rows.last
    insert into dest_table values l_rows(x);
end;
Agreed with the comment that what is killing your time is the 'slow by slow' processing. Copying 500,000 rows should be a matter of minutes.
The single INSERT ... SELECT FROM .... approach would be the best one, provided you have big enough Rollback segments. The database may even automatically apply parallel techniques to a plain SQL statement that it will not do with PL/SQL.
In addition you could look at using the /*+ APPEND */ hint - read up on it and see if it may apply to the situation with your target table.
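For example, a hedged sketch of a direct-path insert with the APPEND hint (table and column names are placeholders):

INSERT /*+ APPEND */ INTO dest_table (col1, col2, created_dt)
SELECT s.col1, s.col2, SYSDATE
  FROM source_table s;
COMMIT;  -- a direct-path insert must be committed before the session can query the table again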
To use all 10 cores you will need to either use plain parallel SQL, or run 10 copies of your PL/SQL block, splitting the source table across the 10 copies.
In Oracle 10 this is a manual task (roll your own parallelism) but Oracle 11.2 introduces DBMS_PARALLEL_EXECUTE.
Failing that, bulking up your fetch / insert using BULK COLLECT and a bulk insert would be the next best option -- process in chunks of 1000 or so rows (or larger). Again, take a look at whether DBMS_PARALLEL_EXECUTE may help you, or whether you could submit the job in chunks via DBMS_JOB.
(Caveat : I don't have access to anything later than Oracle 10)
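As a rough sketch only of the DBMS_PARALLEL_EXECUTE approach mentioned above (task name, table names, chunk size and parallel level are placeholders, and this assumes Oracle 11.2+), chunking the copy by rowid could look something like:

begin
  dbms_parallel_execute.create_task(task_name => 'copy_rows');
  dbms_parallel_execute.create_chunks_by_rowid(
      task_name   => 'copy_rows',
      table_owner => user,
      table_name  => 'SOURCE_TABLE',
      by_row      => true,
      chunk_size  => 10000);
  dbms_parallel_execute.run_task(
      task_name      => 'copy_rows',
      sql_stmt       => 'insert into dest_table select * from source_table
                          where rowid between :start_id and :end_id',
      language_flag  => dbms_sql.native,
      parallel_level => 10);
  dbms_parallel_execute.drop_task(task_name => 'copy_rows');
end;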