Oracle: replace view during query

What happens if a view is replaced while a query that uses that view is running?
I guess that since a view is just a stored query, nothing will happen. When running a query that includes a view, the view is replaced by the actual query. Is that assumption correct?
Thanks

Since the SQL cursor is already open, Oracle fetches read-consistent data from the undo (rollback) segments. So even if you drop the table in another session, your first session will continue to fetch rows from the dropped table. The same applies to a view.

I think currently-running queries will continue to run even if their underlying views are modified.
Do not confuse Data Definition Language (DDL) changes with the Oracle mantra "readers and writers do not block each other". The technology that prevents that blocking does not prevent DDL from interfering with queries. There most certainly are cases where a DDL statement can break a currently-running SQL statement.
The examples below demonstrate a TRUNCATE breaking a SELECT, and then demonstrate a similar CREATE OR REPLACE VIEW not breaking a SELECT.
Truncate interfering with currently-running query
--Session #1 - Create table and data.
drop table table1;
create table table1(a number);
insert into table1 select level from dual connect by level <= 100000;
commit;
--Session #2 - Query table. Use an IDE and only retrieve the first N rows.
select * from table1;
--Session #1 - Truncate table.
truncate table table1;
--Session #2: Retrieve the rest of the rows.
--This may be difficult to reproduce, and may depend on memory
--or other environment settings.
ORA-08103: object no longer exists
View change does not interfere with currently-running query
--Session #1 - Create table and data.
drop table table1;
create table table1(a number);
insert into table1 select level from dual connect by level <= 100000;
commit;
create or replace view view1 as select a from table1;
--Session #2 - Query table. Use an IDE and only retrieve the first N rows.
select * from view1;
--Session #1 - Create a new version of the view.
create or replace view view1 as select 1 a from table1;
--Session #2: Retrieve the rest of the rows.
All rows are returned with the original values.
Warnings
This is only a simple example; do not take it as proof that a SELECT is always immune to view changes. I'm not sure you could ever prove that view changes never interfere with a SELECT. Your assumption that the view is initially replaced by its query makes sense, but I'm not 100% sure it is accurate (see the plan sketch after the list below). Oracle has mechanisms to re-optimize queries in the middle of an execution, and it's possible that one of those re-optimizations can break something.
If you need to be certain this will work, here are some options:
Contact Oracle Support.
Offer a bounty for a counter-example. That still won't prove it's impossible but it certainly will increase your confidence.
Thoroughly test your application.
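As a partial sanity check of the view-merging assumption, you can look at the execution plan. This is only a sketch; exact plan output varies by version and environment:
--The plan for a query on view1 typically shows a scan of TABLE1 rather than
--a separate view step, because the optimizer merged the view's query into
--the statement at parse time.
explain plan for select * from view1;
select * from table(dbms_xplan.display);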

Related

Oracle. Select data from one session but commit it to another. Is it possible?

Probably I'm asking for the impossible, but I'll ask anyway.
Is there an easy way to select from one Oracle session and then insert/commit into another?
(I guess it could technically be done with PL/SQL procedure calls and PRAGMA AUTONOMOUS_TRANSACTION, but it would be a hassle.)
I have the following scenario:
I run some heavy calculations and update / insert into some tables.
After the process is completed I would like to 'backup' the results
(create table as select, or insert into another temp table) and then roll back my current session without losing the backups.
Here is the desired/expected behavior:
Oracle 11g
insert into TableA (A,B,C) values (1,2,3);
select * from TableA;
--Result: 1,2,3
create table [in another session] TempA
as select * from TableA [in this session];
rollback;
select * from TableA;
--Result: no rows
select * from TempA;
--Result: 1,2,3
Is this possible?
Is there an easy way to select from one Oracle session and then insert/commit into another?
Create a program in a third-party language (C++, Java, PHP, etc.) that opens two connections to the database; they will have different sessions regardless of whether you connect as different users or both the same user. Read from one connection and write to the other connection.
You can insert the results of your "heavy calculation" into an Oracle global temporary table:
CREATE GLOBAL TEMPORARY TABLE HeavyCalc (
id NUMBER,
description VARCHAR2(20)
)
ON COMMIT DELETE ROWS;
The trick is that when you commit the transaction, all rows are deleted from the temporary table. So you first insert the data into the temp table, copy the result to your backup table, and then commit the transaction.
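A sketch of that flow, using the HeavyCalc table above; backup_results and the calculation query are hypothetical:
--Stage the heavy calculation results in the temp table.
insert into HeavyCalc (id, description)
select id, description from my_calculation_view; --hypothetical source
--Copy them to a permanent backup table, then commit: the commit
--persists the backup and clears HeavyCalc.
insert into backup_results
select id, description from HeavyCalc;
commit;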

Adding a sequence to a large Oracle table

I have queries that take existing large tables and build tables off of them for reporting. The problem is that the source tables are 60-80MM+ records and it takes a long time to recreate them. I'd like to be able to identify which records are new so I can just add the new records to the reporting tables.
To me, the best way to identify this is to have an identity column. Is there any significant cost to creating this and adding it to the table?
Separately, is it possible to create a materialized view that takes data from one of these tables but adds a sequence as part of the materialized view? That is, something like:
create materialized view some_materialized_view as
select somesequence.nextval, source_table.*
from source_table?
You can add a sequence-based column to your table, but as Gary suggests, I wouldn't do that.
The task you are about to solve is so common that other solutions have been already implemented.
The first built-in option that comes to mind is the system change number (SCN), a kind of internal Oracle clock. By default, tables record the SCN per block (usually 8K, typically holding many rows), but you can set up a table to keep a record of the SCN for every changed row. Then you can identify the rows that are new or changed and have not yet been copied to your reporting tables.
CREATE TABLE t (c1 NUMBER) ROWDEPENDENCIES;
INSERT INTO t VALUES (1);
COMMIT;
SELECT c1, ora_rowscn FROM t;
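A sketch of how you might use this for incremental loads, assuming you stored the highest SCN from the previous copy in :last_scn:
--Fetch only rows created or changed since the last copy.
SELECT c1, ora_rowscn FROM t WHERE ora_rowscn > :last_scn;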
Secondly, I would think of adding a date column. With 60-80 million rows I wouldn't do this with ALTER TABLE xxx ADD (d DATE DEFAULT SYSDATE), but with rename, create-as-select, and drop:
CREATE TABLE t AS SELECT * FROM all_objects;
RENAME t TO told;
CREATE TABLE t AS SELECT sysdate AS d, told.* FROM told;
ALTER TABLE t MODIFY d DATE DEFAULT SYSDATE;
DROP TABLE told;
Thirdly, I would read up on materialized views. I never had the chance to use this at work, but in theory you should be able to set up a materialized view log on your 80M-row table that records changes and updates dependent materialized views.
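A minimal sketch of that setup; the reporting query itself is assumed:
--Record row-level changes on the source table.
CREATE MATERIALIZED VIEW LOG ON source_table WITH PRIMARY KEY;
--A fast-refreshable materialized view over the source table.
CREATE MATERIALIZED VIEW reporting_mv
REFRESH FAST ON DEMAND
AS SELECT * FROM source_table;
--Apply only the changes recorded since the last refresh ('F' = fast).
EXEC DBMS_MVIEW.REFRESH('REPORTING_MV', 'F');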
And fourthly, I'd look into partitioning your large table on the (newly introduced) date column, so that identifying the new rows becomes faster. That sadly depends on your version and Oracle license, though.

Conditional Insert or Update in Oracle

I have one table in Oracle where data gets inserted by some third party. I want to populate master tables from that table. What would be the best way to do this, performance-wise, using collections?
E.g. suppose the table that the third party populates is 'EMP_TMP'.
Now I want to populate the 'EMPLOYEE' master table from the EMP_TMP table through a procedure.
There is one condition: if the same EMPID (this is not the primary key) already exists, we have to update all rows having that EMPID; otherwise we insert a new record.
[Note: here EMPID is VARCHAR2, and EMPNO will be the primary key, populated from a sequence]
I think MERGE will not perform much better here, performance-wise, since we can't use a collection in a MERGE statement.
Well, if performance is your primary consideration, and you don't like MERGE, then how about this (run as script, single transaction):
delete from EMPLOYEE where emp_id IN (
select emp_id from EMP_TMP);
insert into EMPLOYEE
select * from EMP_TMP;
commit;
Obviously not the "safest" approach (as written it assumes identical table definitions, and that you can restore the data if something goes wrong), but it should be fast (you could also experiment with IN vs EXISTS, etc.). I couldn't quite tell from your post whether emp_id or emp_no is the common key between the two tables; use whichever makes sense in your situation.
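For comparison, if MERGE were acceptable after all, a set-based equivalent might look like this (the non-key column names and the emp_seq sequence are hypothetical):
merge into EMPLOYEE e
using EMP_TMP t
on (e.empid = t.empid)
when matched then
update set e.name = t.name, e.salary = t.salary --hypothetical columns
when not matched then
insert (empno, empid, name, salary)
values (emp_seq.nextval, t.empid, t.name, t.salary);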
Create a procedure; you need to be using PL/SQL.
Do an update first, then test SQL%ROWCOUNT.
If it is 0, no rows were updated and you have to do an insert instead.
I think that this is fairly efficient.
pseudo code (column names are placeholders):
update EMPLOYEE
set ... --columns from EMP_TMP
where empid = :empid;
if sql%rowcount = 0 then
--get new sequence number
insert into EMPLOYEE (empno, empid, ...)
values (emp_seq.nextval, :empid, ...);
end if;
COMMIT;
HTH
Harv

How do I UPDATE a large table in Oracle PL/SQL in batches to avoid running out of undo space?

I have a very large table (5MM records). I'm trying to obfuscate the table's VARCHAR2 columns with random alphanumerics for every record on the table. My procedure executes successfully on smaller datasets, but it will eventually be used on a remote db whose settings I can't control, so I'd like to EXECUTE the UPDATE statement in batches to avoid running out of undo space.
Is there some kind of option I can enable, or a standard way to do the update in chunks?
I'll add that there won't be any distinguishing features of the records that haven't been obfuscated so my one thought of using rownum in a loop won't work (I think).
If you are going to update every row in a table, you are better off doing a Create Table As Select, then drop/truncate the original table and re-append with the new data. If you've got the partitioning option, you can create your new table as a table with a single partition and simply swap it with EXCHANGE PARTITION.
Inserts require a LOT less undo, and a direct path insert with NOLOGGING (the /*+ APPEND */ hint) won't generate much redo either.
With either mechanism, there would probably still be 'forensic' evidence of the old values (e.g. preserved in undo or in "available" space allocated to the table due to row movement).
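A rough sketch of the CTAS variant; random_function and the column names are assumptions, matching the block further down:
--Rebuild the table with obfuscated values instead of updating in place.
create table my_table_new nologging parallel as
select random_function(my_column) as my_column
from my_table;
--Then drop my_table, rename my_table_new to my_table,
--and re-create indexes, grants and constraints.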
The following is untested, but should work:
declare
  l_fetchsize number := 10000;
  cursor cur_getrows is
    select rowid, random_function(my_column)
      from my_table;
  type rowid_tbl_type     is table of urowid;
  type my_column_tbl_type is table of my_table.my_column%type;
  rowid_tbl     rowid_tbl_type;
  my_column_tbl my_column_tbl_type;
begin
  open cur_getrows;
  loop
    --Fetch the next batch of rowids and obfuscated values.
    fetch cur_getrows bulk collect
      into rowid_tbl, my_column_tbl
      limit l_fetchsize;
    exit when rowid_tbl.count = 0;
    --Apply the whole batch in one bulk statement, then commit
    --so each batch releases its undo.
    forall i in rowid_tbl.first..rowid_tbl.last
      update my_table
         set my_column = my_column_tbl(i)
       where rowid = rowid_tbl(i);
    commit;
  end loop;
  close cur_getrows;
end;
/
This isn't optimally efficient -- a single update would be -- but it'll do smaller, user-tunable batches, using ROWID.
I do this by mapping the primary key to an integer (mod n), and then performing the update for each x, where 0 <= x < n.
For example, maybe you are unlucky and the primary key is a string. You can hash it with your favorite hash function and break the work into three partitions:
UPDATE myTable SET a=doMyUpdate(a) WHERE MOD(ORA_HASH(ID), 3)=0
UPDATE myTable SET a=doMyUpdate(a) WHERE MOD(ORA_HASH(ID), 3)=1
UPDATE myTable SET a=doMyUpdate(a) WHERE MOD(ORA_HASH(ID), 3)=2
You may have more partitions, and may want to put this into a loop (with some commits).
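A sketch of that loop (n = 3 here, with the doMyUpdate function from above):
begin
  for x in 0 .. 2 loop
    update myTable set a = doMyUpdate(a) where mod(ora_hash(ID), 3) = x;
    commit; --one commit per bucket keeps undo usage bounded
  end loop;
end;
/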
If I had to update millions of records, I would probably opt NOT to update.
I would more likely create a temp table and then insert the data from the old table, since inserts don't take up a lot of redo space and take less undo.
CREATE TABLE new_table as select <do the update "here"> from old_table;
-- index new_table
-- grant on new_table
-- add constraints on new_table
-- etc. on new_table
drop table old_table;
rename new_table to old_table;
You can do that using parallel query, with NOLOGGING on most operations generating very little redo and no undo at all, in a fraction of the time it would take to update the data.

Data loading in Oracle

I am facing a problem loading data. I have to copy 800,000 rows from one table to another in an Oracle database.
I tried with 10,000 rows first, but the time it took was not satisfactory. I tried using "BULK COLLECT" and "INSERT INTO ... SELECT", but in both cases the response time was around 35 minutes. This is not the response time I'm looking for.
Does anyone have any suggestions?
Anirban,
Using an "INSERT INTO SELECT" is the fastest way to populate your table. You may want to extend it with one or two of these hints:
APPEND: to use direct path loading, circumventing the buffer cache
PARALLEL: to use parallel processing if your system has multiple CPUs and this is a one-time operation, or an operation that takes place at a time when it doesn't matter that one "selfish" process consumes more resources.
Just using the APPEND hint, my laptop copies 800,000 very small rows in under 5 seconds:
SQL> create table one_table (id,name)
2 as
3 select level, 'name' || to_char(level)
4 from dual
5 connect by level <= 800000
6 /
Table created.
SQL> create table another_table as select * from one_table where 1=0
2 /
Table created.
SQL> select count(*) from another_table
2 /
COUNT(*)
----------
0
1 row selected.
SQL> set timing on
SQL> insert /*+ append */ into another_table select * from one_table
2 /
800000 rows created.
Elapsed: 00:00:04.76
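With the PARALLEL hint added as well, the statement might look like this (a sketch, not part of the timed run above; degree 4 is arbitrary):
alter session enable parallel dml;
insert /*+ append parallel(another_table, 4) */ into another_table
select /*+ parallel(one_table, 4) */ * from one_table;
commit;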
You mention that this operation takes 35 minutes in your case. Can you post some more details, so we can see what exactly is taking 35 minutes?
Regards,
Rob.
I would agree with Rob. INSERT INTO ... SELECT is the fastest way to do this.
What exactly do you need to do? If you're trying to do a table rename by copying to a new table and then deleting the old, you might be better off doing an actual table rename:
alter table your_table rename to someothertable;
INSERT INTO SELECT is the fastest way to do it.
If possible/necessary, disable all indexes on the target table first (a sketch follows below).
If you have no existing data in the target table, you can also try CREATE AS SELECT.
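For the index tip above, a minimal sketch of taking an index offline around the load; the index name is assumed:
--Mark the index unusable so the load doesn't have to maintain it.
ALTER INDEX target_table_ix UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;
--...run the load...
ALTER INDEX target_table_ix REBUILD NOLOGGING;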
As with the above, I would recommend INSERT INTO ... SELECT ... or CREATE TABLE ... AS SELECT ... as the fastest way to copy a large volume of data between two tables.
You want to look up direct-load insert in your Oracle documentation. This adds two items to your statements: PARALLEL and NOLOGGING. Repeat the tests, but do the following:
CREATE TABLE Table2 AS SELECT * FROM Table1 WHERE 1=2;
ALTER TABLE Table2 NOLOGGING;
ALTER TABLE Table2 PARALLEL (10);
ALTER TABLE Table1 PARALLEL (10);
ALTER SESSION ENABLE PARALLEL DML;
INSERT INTO Table2 SELECT * FROM Table1;
COMMIT;
ALTER TABLE Table2 LOGGING;
This turns off redo logging for the inserts into the table. If the system crashes, that data is not recoverable, so take a backup right afterwards. PARALLEL uses N worker threads to copy the data in blocks. You'll have to experiment with the number of parallel worker threads to get the best results on your system.
Is the table you are copying to the same structure as the other table? Does it have data, or are you creating a new one? Can you use exp/imp? exp can be given a query to limit what it exports, and the result can then be imported into the db. What is the total size of the table you are copying from? If you are copying most of the data from one table to a second, could you instead copy the full table using exp/imp and then remove the unwanted rows? That would be less work than copying row by row.
Try dropping all indexes/constraints on your destination table and re-creating them after the data load.
Use a direct-path load (the /*+ APPEND */ hint; NOLOGGING itself is a table attribute, not a hint) if you use NOARCHIVELOG mode, or consider doing a backup right after the operation.
