Programmatically drop and recreate Oracle indexes for bulk update - oracle

Since Oracle does not support disabling normal indexes, I would like to programmatically drop all indexes on a table before a bulk update and recreate them once the update is complete. I imagine this would require some custom PL/SQL. Can anyone offer a solution to this? Maybe someone here has already written such a script.
For reference, here is a solution for SQL Server: Automatically Drop and Recreate current indexes.

set head off
set echo off
set pages 1000
set lines 300
set feedback off
spool index_unusable.sql
select 'alter index ' || index_name || ' unusable;' from user_indexes where table_name='MY_TABLE';
spool off
@index_unusable.sql
Do your bulk import, setting this in your session beforehand:
alter session set skip_unusable_indexes=true;
After the import:
set head off
set echo off
set pages 1000
set lines 300
set feedback off
spool index_rebuild.sql
select 'alter index ' || index_name || ' rebuild;' from user_indexes where table_name='MY_TABLE';
spool off
@index_rebuild.sql
If you have constraints you will need to disable them as well using:
alter table mytable modify constraint constraint_name DISABLE keep index;
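If you would rather keep everything inside PL/SQL instead of spooling scripts, a minimal sketch of the same approach looks like this. It assumes the table is MY_TABLE in your own schema and ordinary B-tree indexes; unique indexes backing enabled constraints still need the constraint handling mentioned above:
-- mark all indexes on the table unusable before the bulk update
BEGIN
  FOR idx IN ( SELECT index_name
               FROM   user_indexes
               WHERE  table_name = 'MY_TABLE' )
  LOOP
    EXECUTE IMMEDIATE 'ALTER INDEX ' || idx.index_name || ' UNUSABLE';
  END LOOP;
END;
/

-- ... run the bulk update here, with skip_unusable_indexes set to true ...

-- rebuild the indexes once the update is complete
BEGIN
  FOR idx IN ( SELECT index_name
               FROM   user_indexes
               WHERE  table_name = 'MY_TABLE' )
  LOOP
    EXECUTE IMMEDIATE 'ALTER INDEX ' || idx.index_name || ' REBUILD';
  END LOOP;
END;
/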


Is the parallel degree silently ignored?

My query is the following:
ALTER TABLE EMPLOYEE MODIFY LOB(DOCUMENT) (SHRINK SPACE);
The parallel degree for the table is 1:
SELECT DEGREE FROM USER_TABLES WHERE TABLE_NAME = 'EMPLOYEE';
Result: 1
Table EMPLOYEE is not partitioned.
If I launch the same query with a parallel degree, the system does not complain, but is it silently ignored?
ALTER TABLE EMPLOYEE MODIFY LOB(DOCUMENT) (SHRINK SPACE) PARALLEL 8;
Any chance that the query will be faster?
DDL operations will run in parallel when:
The degree of the table is not 1
OR
Parallel DDL has been enabled in the session with alter session enable parallel ddl
Anyway, you should always run alter session enable parallel ddl before running any DDL operation that can run in parallel. Although the documentation does not say that SHRINK can run in parallel, the syntax is allowed, so I guess you can test whether it runs faster or not.
The parallel DDL statements for nonpartitioned tables and indexes are:
CREATE INDEX
CREATE TABLE AS SELECT
ALTER INDEX REBUILD
The parallel DDL statements for partitioned tables and indexes are:
CREATE INDEX
CREATE TABLE AS SELECT
ALTER TABLE {MOVE|SPLIT|COALESCE} PARTITION
ALTER INDEX {REBUILD|SPLIT} PARTITION
Example
SQL> create table t ( c1 clob ) ;
Table created.
SQL> alter table t MODIFY LOB(c1) (SHRINK SPACE) PARALLEL 8 ;
Table altered.
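To check whether the last statement actually ran in parallel, rather than being silently ignored, you can look at the parallel execution statistics for your own session right after the DDL. This is only a quick sanity check, not proof of a speedup; a LAST_QUERY value of 1 for "DDL Parallelized" means the most recent statement used parallel DDL:
SQL> select statistic, last_query, session_total
       from v$pq_sesstat
      where statistic in ('DDL Parallelized', 'DML Parallelized', 'Queries Parallelized');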
Instead of SHRINK you can always move the LOB segment, which I can assure you 100% will run in parallel and faster. The problem is that if you have indexes, they will become invalid.
UPDATE
To move the lob segment, you must do the following
Moving LOB:
SQL> spool movelob.sql
SET HEADING OFF
SET pagesize 200
SET linesize 200
select 'ALTER TABLE <owner>.'||TABLE_NAME||' MOVE LOB('||COLUMN_NAME||') STORE AS (TABLESPACE <Tablespace_name>) parallel 5 nologging;' from dba_lobs where TABLESPACE_NAME='<Tablespace_name>';
Note: The above query will include all the LOBs, LOB segments and LOB indexes.
Moving Table:
SQL> spool /home/oracle/moveTables.sql
SET HEADING OFF
SET PAGESIZE 200
SET LINESIZE 200
select 'ALTER TABLE <owner>.'||TABLE_NAME||' MOVE TABLESPACE <Tablespace_name> parallel 5 nologging;' from dba_tables where owner='<owner name>';
Moving Indexes:
SQL> spool /home/oracle/moveIndex.sql
SET HEADING OFF
SET long 9999
SET linesize 200
select 'alter index <owner>.'||index_name||' rebuild tablespace <Tablespace_name> online parallel X nologging;' from dba_indexes where owner='<owner>';
Remember to replace X with the specific degree.
Did you enable parallel ddl?
alter session force parallel ddl parallel 8
Show us your session parameters:
select *
from v$ses_optimizer_env e
where e.sid=userenv('sid')
and (
name like '%parallel%'
or name like '%cpu%'
or name like '%optim%'
)
order by name
According to the Oracle documentation, the ALTER TABLE MODIFY operation cannot be run in parallel.
If I launch the same query with a parallel degree, the system does not complain, but is it silently ignored?
Depending on how your database in general and your specific session are configured, it may be ignored.
Any chance that the query will be faster?
This will very much depend on your specific dataset, your available system resources, and how you set up your parallelism. Pay attention to the parallel_degree_policy setting; it controls default behavior.
See the Oracle whitepaper, "Parallel Execution with Oracle Database" for a more complete understanding. In particular, read the section on "Controlling Parallel Execution" beginning on page 21.
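If you want to check how that parameter is set for your instance (MANUAL, LIMITED or AUTO on 11g Release 2 and later), either of these works from SQL*Plus:
SQL> show parameter parallel_degree_policy
SQL> select name, value from v$parameter where name = 'parallel_degree_policy';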

Automated process to Update data definition comments in oracle

I am trying to create an automated process to update data definition comments for a database in Oracle. Does anyone have prior experience with a specific stored procedure that could be used to automatically update column information after a load? I am trying to use a variable because we cannot hard-code this comment information, as that defeats the purpose of the automation. Thank you!
Whilst I don't understand the need to do this, you could try generating this and running it as dynamic SQL. Assuming the list of tables and columns and the comments follow some rules:
SET SERVEROUTPUT ON
SPOOL colcomm.sql
DECLARE
CURSOR c1 IS
SELECT table_name, column_name
FROM user_tab_columns
WHERE table_name like 'fred%';
vCmd VARCHAR2(500);
BEGIN
FOR r1 IN c1 LOOP
vCmd := 'COMMENT ON COLUMN ' || r1.table_name || '.' || r1.column_name || ' IS ''' || '????' || '''';
DBMS_OUTPUT.PUT_LINE ( vCmd );
END LOOP;
END;
/
SPOOL OFF
Run this, spool to a file, then run the spool file.
You can use other mechanisms like UTL_FILE if the output is too big, but I hope the above gives you an idea.
I haven't tried running this through EXECUTE IMMEDIATE vCmd to see if that would work; if it does, you would not need the spool.
Issue I see is, how do you know what you want to set the column comments to?
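If the answer to that question is some table your load process fills, you can skip the spool step entirely and issue the COMMENT statements with EXECUTE IMMEDIATE. The sketch below assumes a hypothetical mapping table COLUMN_COMMENT_MAP (TABLE_NAME, COLUMN_NAME, COMMENT_TEXT); the table and its columns are illustrative, not anything Oracle provides:
BEGIN
  FOR r IN ( SELECT m.table_name, m.column_name, m.comment_text
             FROM   column_comment_map m              -- hypothetical mapping table
             JOIN   user_tab_columns c
               ON   c.table_name  = m.table_name
              AND   c.column_name = m.column_name )   -- only comment columns that exist
  LOOP
    -- COMMENT ON is DDL, so it has to go through dynamic SQL;
    -- double up any quotes inside the comment text
    EXECUTE IMMEDIATE 'COMMENT ON COLUMN ' || r.table_name || '.' || r.column_name
                   || ' IS ''' || REPLACE(r.comment_text, '''', '''''') || '''';
  END LOOP;
END;
/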

Undo the DISABLE PK CASCADE

I temporarily had to use the disable-with-cascade option to fix some problems in my DB.
I used this ALTER statement:
ALTER TABLE table
DISABLE PRIMARY KEY CASCADE;
And now I want to undo it and enable all the constraints again. Do I have to enable them one by one?
Yes, you have to re-enable the dependent constraints one by one. There is no magic cascade option for re-enabling the constraints.
Here is a quote from Darl Kuhn's "Pro Oracle Database 12c Administration" book. The information applies to Oracle 11g as well:
Keep in mind that there is no ENABLE ... CASCADE statement. To reenable the constraints, you have to query the data dictionary to determine which constraints have been disabled and then reenable them individually.
In line with that quote, przemo_pl has provided you a good answer to minimize the pain of handling your use case.
I don't know of any way of automatically enabling constraints in a cascading way; I even tried to google it, but found no results of any value.
So this is what I would do:
Go and ask dba_constraints for all constraints referencing this primary key:
select *
from dba_constraints
connect by prior constraint_name = r_constraint_name
start with constraint_name = '<your_primary_key_constraint>';
Just double check that those are the ones you should enable and create a script to enable them all:
select 'alter table ' || table_name || ' enable constraint ' || constraint_name || ';'
from dba_constraints
where status = 'DISABLED'
connect by prior constraint_name = r_constraint_name
start with constraint_name = '<your_primary_key_constraint>';
and just run it...
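If you would rather not go through a spool file at all, the same hierarchical query can drive EXECUTE IMMEDIATE directly. This is only a sketch of the same idea; double-check the constraint list first. The hierarchical order returns the primary key before the foreign keys that depend on it, which is the order you need for re-enabling:
begin
  for c in ( select owner, table_name, constraint_name
             from dba_constraints
             where status = 'DISABLED'
             connect by prior constraint_name = r_constraint_name
             start with constraint_name = '<your_primary_key_constraint>' )
  loop
    execute immediate 'alter table ' || c.owner || '.' || c.table_name
                   || ' enable constraint ' || c.constraint_name;
  end loop;
end;
/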

Oracle sample data problems

So, I have this Java-based data transformation / masking tool, which I wanted to test out on Oracle 10g. The good part with Oracle 10g is that you get a load of sample schemas, with half a million records in some. The schemas are SH, OE, HR, IX, etc. So, I installed 10g and found out that the installation scripts are under ORACLE_HOME/demo/scripts.
I customized these scripts a bit to run in batch mode. That solves one half of my requirement - to create source data for testing my data transformation software.
The second half of the requirement is that I create the same schemas under different names (TR_HR, TR_OE and so on...) without any data. These schemas would represent my target schemas. So, in short, my software would pick up data from a table in a schema and load it up in to the same table in a different schema.
Now, I have two issues in creating my target schema and emptying it.
I would like this in a batch job. But in the Oracle scripts you get, the sample schema names are not configurable. So, I tried creating a script, replacing OE with TR_OE, HR with TR_HR and so on. However, this approach is irritating because the sample schemas are quite complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types and a lot of other stuff.
I would like the target schemas (TR_HR, TR_OE, ...) to be empty. But some of the schemas have circular references, which would not allow me to delete data. The only workaround seems to be removing certain foreign keys, deleting the data and then adding the constraints back.
Is there any easy way to do all this, without all this fuss? I need a complicated data set for my testing (complicated as in tables with triggers, multiple hierarchies... for instance, a child table that has children up to 5 levels, a parent table that refers to an IOT table, and an IOT table that refers to a non-IOT table, etc.). The sample schemas are just about perfect from a data-set perspective. The only challenge I see is in automating this whole process of loading up the source schemas, and then creating the target schemas and emptying them. Appreciate your help and suggestions.
UPDATE
The main script that you are required to run for manually installing the Oracle sample schemas is mkplug.sql. Here is the line that loads the schemas up from a dmp file:
host imp "'sys/&&password_sys AS SYSDBA'" transport_tablespace=y file=&imp_file log=&imp_logfile datafiles='&datafile' tablespaces=EXAMPLE tts_owners=hr,oe,pm,ix,sh
Well, I tried modifying this line (after patching up path-related issues in mkplug.sql and all the other sql files) to this:
host imp "'sys/&&password_sys AS SYSDBA'" rows=n transport_tablespace=y file=&imp_file log=&imp_logfile datafiles='&datafile' tablespaces=EXAMPLE tts_owners=hr,oe,pm,ix,sh
And... it did NOT help. The schema got created with row data, despite the rows=n parameter :(
Since you're already familiar with exp/imp (or expdp/impdp) from the Oracle scripts that use the .dmp file, why not just:
Create the empty TR_xxx schemas
Populate the TR_xxx schemas from the xxx.dmp file with the FROMUSER/TOUSER options and ROWS=N (similar options exist for expdp/impdp)
[Edit after reading your comment about the transportable tablespaces]
I didn't know that the Oracle scripts were using transportable tablespaces and that multiple schemas were being imported from a single file. This is probably the most straightforward way to create your new empty TR schemas:
Start with the standard, populated database built with the Oracle scripts
Create no-data export files on a schema-by-schema basis (OE shown) by:
exp sys/&&password_sys AS SYSDBA file=oe_nodata.dmp log=oe_nodata_exp.log owner=OE rows=N grants=N
(You should only have to do this once and this dmp file can be reused)
Now, your script should:
Drop any TR_ users with the CASCADE option
Re-create the TR_ users
Populate the schema objects (OE shown) by:
host imp "'sys/&&password_sys AS SYSDBA'" file=oe_nodata.dmp log=tr_oe_imp.log fromuser=OE touser=TR_OE
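For the first two steps, the drop and re-create of the TR_ users, a minimal sketch looks like the following. The password, tablespace names and grants are placeholders; the original sample-schema scripts grant more privileges (for types, materialized views and so on), so you may need to extend this:
-- run as a DBA user; names and password are placeholders
drop user tr_oe cascade;

create user tr_oe identified by tr_oe
  default tablespace example
  temporary tablespace temp
  quota unlimited on example;

grant connect, resource to tr_oe;

-- then load the empty objects, as above:
-- imp "'sys/&&password_sys AS SYSDBA'" file=oe_nodata.dmp log=tr_oe_imp.log fromuser=OE touser=TR_OE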
Here is an anonymous block which - for a given schema - disables triggers and foreign keys, truncates all the tables, and then re-enables the triggers and foreign keys. It uses TRUNCATE for speed, but obviously this means no rollback: so be careful which schema name you supply! It's easy enough to convert that call into a DELETE FROM statement if you prefer.
The script is a fine example of cut'n'paste programming, and would no doubt benefit from some refactoring to remove the repetition.
begin
<< dis_triggers >>
for trgs in ( select owner, trigger_name
from all_triggers
where table_owner = '&&schema_name' )
loop
execute immediate 'alter trigger '||trgs.owner||'.'||trgs.trigger_name
||' disable';
end loop dis_triggers;
<< dis_fkeys >>
for fkeys in ( select owner, table_name, constraint_name
from all_constraints
where owner = '&&schema_name'
and constraint_type = 'R')
loop
execute immediate 'alter table '||fkeys.owner||'.'||fkeys.table_name
||' disable constraint '||fkeys.constraint_name;
end loop dis_fkeys;
<< zap_tables >>
for tabs in ( select owner, table_name
from all_tables
where owner = '&&schema_name' )
loop
execute immediate 'truncate table '||tabs.owner||'.'||tabs.table_name
||' reuse storage';
end loop zap_tables;
<< en_fkeys >>
for fkeys in ( select owner, table_name, constraint_name
from all_constraints
where owner = '&&schema_name'
and constraint_type = 'R')
loop
execute immediate 'alter table '||fkeys.owner||'.'||fkeys.table_name
||' enable constraint '||fkeys.constraint_name;
end loop en_fkeys;
<< en_triggers >>
for trgs in ( select owner, trigger_name
from all_triggers
where table_owner = '&&schema_name' )
loop
execute immediate 'alter trigger '||trgs.owner||'.'||trgs.trigger_name
||' enable';
end loop en_triggers;
end;
/
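Assuming you save the block as, say, truncate_schema.sql (the file name is just an example), running it for one of the TR_ schemas from SQL*Plus is simply:
SQL> define schema_name = TR_OE
SQL> @truncate_schema.sql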

Importing 3954275 Insert statements into Oracle 10g

How do I import a script with 3,954,275 lines of INSERT statements into Oracle 10g? I can do it with sqlplus user/pass @script.sql, but this is damn slow (even worse, the commit is at the end of this 900 MB file; I don't know if my Oracle configuration can handle this). Is there a better (faster) way to import the data?
Btw. the DB is empty before the Import.
Use SQL*Loader.
It can parse even your INSERT commands if you don't have your data in another format.
SQL*Loader is a good alternative if your 900MB file contains insert statements to the same table. It will be cumbersome if it contains numerous tables. It is the fastest option however.
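For completeness, here is roughly what a SQL*Loader setup could look like, assuming you can get the data out as a delimited file instead of INSERT statements. The file names, table and columns (employees.csv, employees, emp_id, emp_name, hire_date) are purely illustrative:
-- employees.ctl (hypothetical control file)
LOAD DATA
INFILE 'employees.csv'
INTO TABLE employees
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
( emp_id
, emp_name
, hire_date DATE "YYYY-MM-DD"
)
You would then run it with direct path load for best speed:
sqlldr scott/tiger control=employees.ctl log=employees.log direct=true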
If for some reason a little improvement is good enough, then make sure your session's CURSOR_SHARING parameter is set to FORCE or SIMILAR. Each insert statement in your file will likely be the same except for the values. If CURSOR_SHARING is set to EXACT, then each of the insert statements needs to be hard parsed, because each one is unique. FORCE and SIMILAR automatically turn the literals in the VALUES clause into bind variables, removing the need to hard parse over and over again.
You can use the script below to test this:
set echo on
alter system flush shared_pool
/
create table t
( id int
, name varchar2(30)
)
/
set echo off
set feedback off
set heading off
set termout off
spool sof11.txt
prompt begin
select 'insert into t (id,name) values (' || to_char(level) || ', ''name' || to_char(level) || ''');'
from dual
connect by level <= 10000
/
prompt end;;
prompt /
spool off
set termout on
set heading on
set feedback on
set timing on
@sof11.txt
set timing off
alter session set cursor_sharing = force
/
set timing on
@sof11.txt
set timing off
alter session set cursor_sharing = exact
/
set echo off
drop table t purge
/
The example executes 10,000 statements like "insert into t (id,name) values (1, 'name1');". The output on my laptop:
SQL> alter system flush shared_pool
2 /
System altered.
SQL> create table t
2 ( id int
3 , name varchar2(30)
4 )
5 /
Table created.
SQL> set echo off
PL/SQL procedure successfully completed.
Elapsed: 00:00:17.10
Session altered.
PL/SQL procedure successfully completed.
Elapsed: 00:00:05.50
More than 3 times as fast with CURSOR_SHARING set to FORCE.
Hope this helps.
Regards,
Rob.
Agreed with the above: use SQL*Loader.
However, if that is not an option, you can adjust the size of the blocks that SQL*Plus brings in by putting the statement
SET arraysize 1000;
at the beginning of your script. This is just an example from my own scripts, and you may have to fine-tune it to your needs considering latency, etc. I think it defaults to 15, so you're getting a lot of overhead in your script.
