My query is the following:
ALTER TABLE EMPLOYEE MODIFY LOB(DOCUMENT) (SHRINK SPACE);
The parallel degree for the table is 1:
SELECT DEGREE FROM USER_TABLES WHERE TABLE_NAME = 'EMPLOYEE';
Result: 1
Table EMPLOYEE is not partitioned.
If I launch the same query with a parallel degree, the system does not complain, but is it silently ignored?
ALTER TABLE EMPLOYEE MODIFY LOB(DOCUMENT) (SHRINK SPACE) PARALLEL 8;
Any chance that the query will be faster?
DDL operations will run in parallel when:
The parallel degree of the table is not 1
OR
Parallel DDL has been enabled in the session with ALTER SESSION ENABLE PARALLEL DDL
In any case, you should always run ALTER SESSION ENABLE PARALLEL DDL before running any DDL operation that can run in parallel. Although the documentation does not say that SHRINK can run in parallel, the syntax is allowed, so I suggest you test whether it runs faster or not. You can enable and verify the session setting as sketched below.
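A minimal sketch of enabling parallel DDL and checking that it took effect (PDDL_STATUS, PDML_STATUS, and PQ_STATUS are standard V$SESSION columns):

ALTER SESSION ENABLE PARALLEL DDL;

-- PDDL_STATUS should now show ENABLED (or FORCED after a FORCE)
SELECT pddl_status, pdml_status, pq_status
FROM v$session
WHERE sid = SYS_CONTEXT('USERENV', 'SID');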
The parallel DDL statements for nonpartitioned tables and indexes are:
CREATE INDEX
CREATE TABLE AS SELECT
ALTER INDEX REBUILD
The parallel DDL statements for partitioned tables and indexes are:
CREATE INDEX
CREATE TABLE AS SELECT
ALTER TABLE {MOVE|SPLIT|COALESCE} PARTITION
ALTER INDEX {REBUILD|SPLIT} PARTITION
Example
SQL> create table t ( c1 clob ) ;
Table created.
SQL> alter table t MODIFY LOB(c1) (SHRINK SPACE) PARALLEL 8 ;
Table altered.
Instead of SHRINK you can always move the LOB segment, which I can assure you 100% will run in parallel and faster. The problem is that if you have indexes, they will become invalid (see the rebuild sketch after the scripts below).
UPDATE
To move the LOB segment, you must do the following.
Moving LOB:
SQL> spool movelob.sql
SET HEADING OFF
SET pagesize 200
SET linesize 200
select 'ALTER TABLE <owner>.'||TABLE_NAME||' MOVE LOB('||COLUMN_NAME||') STORE AS (TABLESPACE <Tablespace_name>) parallel 5 nologging;' from dba_lobs where TABLESPACE_NAME='<Tablespace_name>';
Note: the above query covers all the LOB columns; moving a LOB relocates its LOBSEGMENT and LOBINDEX segments along with it.
Moving Table:
SQL> spool /home/oracle/moveTables.sql
SET HEADING OFF
SET PAGESIZE 200
SET LINESIZE 200
select 'ALTER TABLE <owner>.'||TABLE_NAME||' MOVE TABLESPACE <Tablespace_name> parallel 5 nologging;' from dba_tables where owner='<owner name>';
Moving Indexes:
SQL> spool /home/oracle/moveIndex.sql
SET HEADING OFF
SET long 9999
SET linesize 200
select 'alter index <owner>.'||index_name||' rebuild tablespace <Tablespace_name> online parallel X nologging;' from dba_indexes where owner='<owner>';
Remember to replace X with the specific degree.
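As noted above, a MOVE leaves dependent indexes UNUSABLE; here is a sketch that generates only the rebuilds you still need afterwards (same placeholder convention as the scripts above):

SET HEADING OFF
select 'alter index '||owner||'.'||index_name||' rebuild online parallel X nologging;' from dba_indexes where status='UNUSABLE' and owner='<owner>';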
Did you enable parallel ddl?
alter session force parallel ddl parallel 8
Show us your session parameters:
select *
from v$ses_optimizer_env e
where e.sid=userenv('sid')
and (
name like '%parallel%'
or name like '%cpu%'
or name like '%optim%'
)
order by name
According to the Oracle documentation, the ALTER TABLE MODIFY operation cannot be run in parallel.
If I launch the same query with a parallel degree, the system does not complain, but is it silently ignored?
Depending on how your database in general and your specific session are configured, it may be ignored.
Any chance that the query will be faster?
This will very much depend on your specific dataset, your available system resources, and how you set up your parallelism. Pay attention to the parallel_degree_policy setting; it controls default behavior.
See the Oracle whitepaper, "Parallel Execution with Oracle Database" for a more complete understanding. In particular, read the section on "Controlling Parallel Execution" beginning on page 21.
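A quick way to inspect and test that setting in your own session (SHOW PARAMETER is standard SQL*Plus; valid values are MANUAL, LIMITED, AUTO and, in newer releases, ADAPTIVE):

SQL> show parameter parallel_degree_policy
SQL> alter session set parallel_degree_policy = AUTO;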
Related
I have a problem that I'm finding hard to solve. I hope you guys in this community can help.
On a daily basis I copy a table from one database (T_TAGS_REMOTE) to a table on another database (T_TAGS_LOCAL) through a DB link. For this I truncate the T_TAGS_LOCAL table first and then perform the insert.
The above task is done through a Linux job.
The problem comes when:
Sometimes T_TAGS_REMOTE on the remote database is not accessible, giving an ORA- error
Sometimes T_TAGS_REMOTE does not have the complete set of rows (i.e. the SYSDATE count < the SYSDATE-1 count)
Requirements:
STOP truncating and STOP inserting when either of the above problems (1) or (2) is encountered
MyCode:
DECLARE
  old_records_count NUMBER;
BEGIN
  SELECT COUNT(*) INTO old_records_count FROM T_TAGS_LOCAL;
  EXECUTE IMMEDIATE 'TRUNCATE TABLE T_TAGS_LOCAL';
  INSERT /*+ APPEND */ INTO T_TAGS_LOCAL SELECT * FROM AK.T_TAGS_REMOTE@NETCOOL;
  COMMIT;
END;
/
Please suggest a BETTER option for the table copy, or code to handle this problem.
I would not use the technique you are using; it will always generate issues. Instead, I think your use case fits replication using materialized views: a materialized view log in the source, and a materialized view using the dblink in the target.
You only need to decide the refresh method, which could be FAST ON DEMAND (ON COMMIT refresh is not available across a database link), as I guess your table is not very big since you are copying the whole table each and every day.
Example
In Source
SQL> create table t ( c1 number primary key, c2 number ) ;
Table created.
SQL> declare
  2  begin
  3  for i in 1 .. 100000
  4  loop
  5  insert into t values ( i , dbms_random.value ) ;
  6  end loop;
  7  commit ;
  8  end;
  9  /
PL/SQL procedure successfully completed.
SQL> create materialized view log on t with primary key ;
Materialized view log created.
SQL> select count(*) from t ;
COUNT(*)
----------
100000
In Target
SQL> create materialized view my_copy_of_t build immediate refresh fast on demand as
  2  select * from your_source@your_db_link ;
-- Verify the initial copy in target
SQL> select count(*) from my_copy_of_t ;
COUNT(*)
----------
100000
Now, we change source
SQL> insert into t values ( 100001 , dbms_random.value );
1 row inserted
SQL> commit ;
Commit completed.
In target, for refreshing
SQL> exec dbms_mview.refresh('MY_COPY_OF_T');
The only requirement for FAST REFRESH ON DEMAND is that you must have a materialized view log for each of the tables that are part of the Materialized View. In your case, as you are replicating a table, you only need a materialized view log on the source table.
A better option might be using a materialized view. Instead of what you do now, you'd refresh it on demand using a database job scheduled via DBMS_JOB or DBMS_SCHEDULER.
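A minimal sketch of such a scheduled refresh, reusing the MY_COPY_OF_T materialized view from the example above (the job name and schedule are hypothetical):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'REFRESH_MY_COPY_OF_T',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''MY_COPY_OF_T''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/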
My question is related to the meaning of USER in Oracle.
We have a database with many users, but R1S contains almost all the tables, sequences, etc. We want to load new table data, but we also need to update the sequence values to be in phase with the table data.
ORA-31684: Object type USER:"R1S" already exists
ORA-31684: Object type SEQUENCE:"R1S"."RS2QNUNI" already exists
. . imported "R1S"."RSCIN" 13.16 MB 150346 rows
In the impdp log I've noticed that the sequences hadn't been updated because they already exist. We want to force the load of this kind of data.
I've thought of doing a DROP USER R1S CASCADE;
The USER referenced in the drop command is a SCHEMA: with the DROP USER command we are deleting the schema called R1S.
I say that because in the impdp documentation I see I can force a schema import:
SCHEMAS=R1S
Or will the basic command do the same job?
impdp xxxxxx/******** FULL=Y CONTENT=ALL directory=EXPLOIT_DUMP_DIR dumpfile=expdp_X.exp LOGFILE=impdp_X.log
Simply put, schema = user + its objects (tables, views, procedures, sequences, ...), so when you drop a user, all of its objects are dropped as well.
If you are happy with the rest of the import results (i.e. tables are correctly imported), and if there are not that many sequences, perhaps it would be simpler to either recreate the sequences (drop + create), or alter those sequences. The first option is easy (a sketch follows the example below), while the second requires a few commands: increment the sequence so that it reaches the desired value, fetch from it, then reset the increment to its previous value (1, by default). Here's an example:
SQL> select s.nextval from dual;
NEXTVAL
----------
15028
SQL> alter sequence s increment by 100000;
Sequence altered.
SQL> select s.nextval from dual;
NEXTVAL
----------
115028
SQL> alter sequence s increment by 1;
Sequence altered.
SQL> select s.nextval from dual;
NEXTVAL
----------
115029
SQL> select s.nextval from dual;
NEXTVAL
----------
115030
SQL>
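For completeness, a sketch of the first option, drop + create (the START WITH value of 115031 simply continues the example above):

SQL> drop sequence s;
SQL> create sequence s start with 115031 increment by 1;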
I am using the query below to delete 250 million plus rows from my table, and it is taking a long time.
I have tried the t_delete limit with values up to 20000.
The deletion is still slow.
Please suggest a few optimisations to the same code to get the job done faster.
DECLARE
  TYPE tt_delete IS TABLE OF ROWID;
  t_delete tt_delete;
  CURSOR cIMAV IS SELECT ROWID FROM moc_attribute_value WHERE id IN (SELECT id FROM ORPHANS_MAV);
  total  NUMBER := 0;
  rcount NUMBER := 0;
  Stmt1  VARCHAR2(2000);
  Stmt2  VARCHAR2(2000);
BEGIN
  --- CREATE TABLE orphansInconsistenDelProgress (currentTable VARCHAR(100), deletedCount INT, totalToDelete INT);
  --- INSERT INTO orphansInconsistenDelProgress (currentTable, deletedCount, totalToDelete) VALUES ('', 0, 0);
  Stmt1 := 'ALTER SESSION SET parallel_degree_policy = AUTO';
  Stmt2 := 'ALTER SESSION FORCE PARALLEL DML';
  EXECUTE IMMEDIATE Stmt1;
  EXECUTE IMMEDIATE Stmt2;
  --- ALTER SESSION SET parallel_degree_policy = AUTO;
  --- ALTER SESSION FORCE PARALLEL DML;
  COMMIT;
  --- MOC_ATTRIBUTE_VALUE
  SELECT COUNT(*) INTO total FROM ORPHANS_MAV;
  UPDATE orphansInconsistenDelProgress SET currentTable = 'ORPHANS_MAV', totalToDelete = total;
  rcount := 0;
  OPEN cIMAV;
  LOOP
    FETCH cIMAV BULK COLLECT INTO t_delete LIMIT 2000;
    EXIT WHEN t_delete.COUNT = 0;
    FORALL i IN 1 .. t_delete.COUNT
      DELETE moc_attribute_value WHERE ROWID = t_delete(i);
    COMMIT;
    rcount := rcount + t_delete.COUNT;  -- count the rows actually fetched, not a fixed 2000
    UPDATE orphansInconsistenDelProgress SET deletedCount = rcount;
  END LOOP;
  CLOSE cIMAV;
  COMMIT;
END;
/
A single Oracle parallel query can simplify the code and improve performance.
declare
  deleted_count number;
begin
  execute immediate 'alter session enable parallel dml';

  delete /*+ parallel */
  from moc_attribute_value
  where id in (select id from ORPHANS_MAV);
  -- capture the delete's rowcount before the next statement overwrites it
  deleted_count := sql%rowcount;

  update OrphansInconsistenDelProgress
  set currentTable = 'ORPHANS_MAV',
      totalToDelete = deleted_count;
  commit;
end;
/
In general, we want to either let Oracle break the task into pieces or use our own custom chunking. The original code seems to be doing both - it reads the data in chunks, and then submits each chunk to be further divided into a parallel delete. That approach generates lots of tiny pieces, and Oracle likely wastes a lot of time on things like thread coordination.
Deleting a large number of rows is expensive because there's no way to avoid REDO and UNDO. You might want to look into using DDL options, such as truncating a partition, or dropping and recreating the table. (But be careful recreating objects, it's difficult to perfectly recreate complex objects. We tend to forget things like privileges and table options.)
Tuning parallelism and large jobs is complicated. It's important to use the best monitoring tools, to ensure that Oracle is requesting, allocating, and using the right number of parallel processes, and that the execution plan is correct. One strong advantage of using a single SQL statement is that you can use real-time SQL monitoring reports to monitor progress. If you have the Diagnostics and Tuning Pack licenses, find the SQL_ID in GV$SQL and generate the report with select dbms_sqltune.report_sql_monitor('your SQL_ID here') from dual;.
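A sketch of those two monitoring steps (the LIKE filter is just an assumption about how the statement text begins):

select sql_id, sql_text from gv$sql where sql_text like 'delete /*+ parallel */%';
select dbms_sqltune.report_sql_monitor('your SQL_ID here') from dual;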
Maybe use SQL TRUNCATE TABLE.
TRUNCATE TABLE is faster and uses fewer resources than the DELETE command.
If you are keeping only a fraction of the rows, it is likely to be much faster to copy over the rows to keep, then swap tables and drop the old one, as sketched below.
(No, I don't know the threshold at which this becomes faster than DELETEing.)
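A sketch of that keep-and-swap approach, reusing the names from the question (the _KEEP suffix is hypothetical, and indexes, constraints, grants, and triggers must be recreated by hand):

CREATE TABLE moc_attribute_value_keep NOLOGGING PARALLEL 8 AS
  SELECT v.* FROM moc_attribute_value v
  WHERE NOT EXISTS (SELECT 1 FROM ORPHANS_MAV o WHERE o.id = v.id);
DROP TABLE moc_attribute_value PURGE;
ALTER TABLE moc_attribute_value_keep RENAME TO moc_attribute_value;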
Is there an option to see if an existing table/record in an Oracle database has been updated?
From a monitoring perspective (not intended to find previous changes), you have several options including but not limited to triggers, streams, and a column with a default value of sysdate. A trigger will allow you to execute a bit of programming logic (stored directly in the trigger or in an external database object) whenever the record changes (insert, update, delete). Streams can be used to track changes by monitoring the redo logs. One of the easiest may be to add a date column with a default value of sysdate.
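A minimal sketch of that last idea, with a trigger keeping the column current on updates (the table and column names are hypothetical):

ALTER TABLE emp ADD (last_modified DATE DEFAULT SYSDATE);

CREATE OR REPLACE TRIGGER emp_last_modified_trg
BEFORE UPDATE ON emp
FOR EACH ROW
BEGIN
  :NEW.last_modified := SYSDATE;
END;
/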
Are you talking about within a transaction or outside of it?
Within our program we can use things like SQL%ROWCOUNT to see whether our DML succeeded...
SQL> set serveroutput on size unlimited
SQL> begin
2 update emp
3 set job = 'SALESMAN', COMM=10
4 where empno = 8083;
5 dbms_output.put_line('Number of records updated = '||sql%rowcount);
6 end;
7 /
Number of records updated = 1
PL/SQL procedure successfully completed.
SQL>
Alternatively we might test for SQL%FOUND (or SQL%NOTFOUND).
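For example (a sketch assuming no employee 9999 exists):

SQL> begin
  2    update emp set comm = 25 where empno = 9999;
  3    if sql%notfound then
  4      dbms_output.put_line('No record updated');
  5    end if;
  6  end;
  7  /
No record updated

PL/SQL procedure successfully completed.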
From outside the transaction we can monitor ORA_ROWSCN to see whether a record has changed.
SQL> select ora_rowscn from emp
2 where empno = 8083
3 /
ORA_ROWSCN
----------
83828715
SQL> update emp
2 set comm = 25
3 where empno = 8083
4 /
1 row updated.
SQL> commit
2 /
Commit complete.
SQL> select ora_rowscn from emp
2 where empno = 8083
3 /
ORA_ROWSCN
----------
83828780
SQL>
By default ORA_ROWSCN is tracked at the block level. If you want to track it at the row level, you need to create the table with the ROWDEPENDENCIES keyword.
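A sketch of that (ROWDEPENDENCIES can only be set at creation time, so here the table is rebuilt as a copy):

SQL> create table emp_rd rowdependencies as select * from emp;
SQL> select empno, ora_rowscn from emp_rd where empno = 8083;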
These are ad hoc solutions. If you want proactive monitoring then you need to implement some form of logging. Using triggers to write log records is a common solution. If you have Enterprise Edition you should consider using Fine Grained Auditing: Dan Morgan's library has a useful demo of how to use FGA to track changes.
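A minimal FGA sketch (requires Enterprise Edition; the schema, table, and policy names are placeholders):

SQL> begin
  2    dbms_fga.add_policy(
  3      object_schema   => 'SCOTT',
  4      object_name     => 'EMP',
  5      policy_name     => 'TRACK_EMP_CHANGES',
  6      statement_types => 'INSERT,UPDATE,DELETE');
  7  end;
  8  /

Audited changes then show up in DBA_FGA_AUDIT_TRAIL.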
You can see if a table definition has changed by querying the last_ddl_time column of the user_objects view.
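For example (EMP as the object name is just an assumption):

SQL> select object_name, last_ddl_time from user_objects where object_name = 'EMP';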
Without using triggers or materialized view logs (which would be a total hack), there is no way I know of to see when any particular row in a table was last updated.
How do I import a script with 3954275 lines of INSERT statements into an Oracle 10g database? I can do it with sqlplus user/pass @script.sql, but this is damn slow (even worse, the commit is at the end of this 900MB file; I don't know if my Oracle configuration can handle this). Is there a better (faster) way to import the data?
Btw, the DB is empty before the import.
Use SQL*Loader.
It can parse even your INSERT commands if you don't have your data in another format.
SQL*Loader is a good alternative if your 900MB file contains insert statements to the same table. It will be cumbersome if it contains numerous tables. It is the fastest option, however.
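For reference, a minimal SQL*Loader sketch, assuming the data can be re-extracted as CSV (the file, table, and column names are hypothetical):

-- employees.ctl
LOAD DATA
INFILE 'employees.csv'
INTO TABLE employees
FIELDS TERMINATED BY ','
(id, name)

Run it with direct path for maximum speed:

sqlldr user/pass control=employees.ctl direct=true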
If for some reason a little improvement is good enough, then make sure your session's CURSOR_SHARING parameter is set to FORCE or SIMILAR. Each insert statement in your file will likely be the same except for the values. If CURSOR_SHARING is set to EXACT, then each of the insert statements needs to be hard parsed, because it is unique. FORCE and SIMILAR automatically turn the literals in your VALUES clause into bind variables, removing the need for the hard parse over and over again.
You can use the script below to test this:
set echo on
alter system flush shared_pool
/
create table t
( id int
, name varchar2(30)
)
/
set echo off
set feedback off
set heading off
set termout off
spool sof11.txt
prompt begin
select 'insert into t (id,name) values (' || to_char(level) || ', ''name' || to_char(level) || ''');'
from dual
connect by level <= 10000
/
prompt end;;
prompt /
spool off
set termout on
set heading on
set feedback on
set timing on
@sof11.txt
set timing off
alter session set cursor_sharing = force
/
set timing on
@sof11.txt
set timing off
alter session set cursor_sharing = exact
/
set echo off
drop table t purge
/
The example executes 10,000 statements like "insert into t (id,name) values (1, 'name1');". The output on my laptop:
SQL> alter system flush shared_pool
  2  /
System altered.
SQL> create table t
  2  ( id int
  3  , name varchar2(30)
  4  )
  5  /
Table created.
SQL> set echo off
PL/SQL procedure successfully completed.
Elapsed: 00:00:17.10
Session altered.
PL/SQL procedure successfully completed.
Elapsed: 00:00:05.50
More than 3 times as fast with CURSOR_SHARING set to FORCE.
Hope this helps.
Regards,
Rob.
Agreed with the above: use SQL*Loader.
However, if that is not an option, you can adjust the size of the blocks that SQL*Plus brings in by putting the statement
SET arraysize 1000;
at the beginning of your script. This is just an example from my own scripts, and you may have to fine-tune it to your needs considering latency, etc. I think it defaults to something like 15, so you're getting a lot of overhead in your script.