So, I have this Java-based data transformation / masking tool, which I wanted to test out on Oracle 10g. The good part with Oracle 10g is that you get a load of sample schemas, with half a million records in some. The schemas are SH, OE, HR, IX and so on. So, I installed 10g and found out that the installation scripts are under ORACLE_HOME/demo/scripts.
I customized these scripts a bit to run in batch mode. That solves one half of my requirement: creating source data for testing my data transformation software.
The second half of the requirement is that I create the same schemas under different names (TR_HR, TR_OE and so on...) without any data. These schemas would represent my target schemas. So, in short, my software would pick up data from a table in one schema and load it into the same table in a different schema.
Now, I have two issues in creating my target schemas and emptying them.
I would like to do this in a batch job, but the sample schema names in the Oracle scripts are not configurable. So, I tried creating a script, replacing OE with TR_OE, HR with TR_HR and so on. However, this approach is irritating because the sample schemas are complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types and a lot of other odd objects.
I would like the target schemas (TR_HR, TR_OE, ...) to be empty. But some of the schemas have circular references, which prevent me from deleting data. The only workaround seems to be removing certain foreign keys, deleting the data and then adding the constraints back.
Is there any easy way to do all this, without the fuss? I need a complicated data set for my testing (complicated as in tables with triggers and multiple hierarchies; for instance, a child table that has children up to 5 levels deep, a parent table that refers to an IOT table, an IOT table that refers to a non-IOT table, etc.). The sample schemas are just about perfect from a data set perspective. The only challenge I see is in automating the whole process of loading up the source schemas, and then creating the target schemas and emptying them. I appreciate your help and suggestions.
UPDATE
The main script that you have to run to manually install the Oracle sample schemas is mkplug.sql. Here is the line that loads the schemas up from a dmp file:
host imp "'sys/&&password_sys AS SYSDBA'" transport_tablespace=y file=&imp_file log=&imp_logfile datafiles='&datafile' tablespaces=EXAMPLE tts_owners=hr,oe,pm,ix,sh
Well, I tried modifying this line (after patching up path-related issues in mkplug.sql and all the other sql files) to this:
host imp "'sys/&&password_sys AS SYSDBA'" rows=n transport_tablespace=y file=&imp_file log=&imp_logfile datafiles='&datafile' tablespaces=EXAMPLE tts_owners=hr,oe,pm,ix,sh
And... it did NOT help. The schema got created with row data, despite the rows=n attribute :(
Since you're already familiar with exp/imp (or expdp/impdp) from the Oracle scripts that use the .dmp file, why not just:
1. Create the empty TR_xxx schemas
2. Populate the TR_xxx schemas from the xxx .dmp file with the FROMUSER/TOUSER options and ROWS=N (similar options exist for expdp/impdp; see the Data Pump sketch below)
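For the Data Pump route, the equivalent is a metadata-only export plus a schema remap on import; something like this should work (a sketch: the directory object, file names and credentials are assumptions):

expdp system/password schemas=OE content=metadata_only exclude=grant directory=DATA_PUMP_DIR dumpfile=oe_nodata.dmp logfile=oe_nodata_exp.log

impdp system/password remap_schema=OE:TR_OE directory=DATA_PUMP_DIR dumpfile=oe_nodata.dmp logfile=tr_oe_imp.log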
[Edit after reading your comment about the transportable tablespaces]
I didn't know that the Oracle scripts were using transportable tablespaces or that multiple schemas were being imported from a single file. That also explains why ROWS=N had no effect: a transportable-tablespace import plugs in the datafiles themselves rather than inserting rows. This is probably the most straightforward way to create your new empty TR schemas:
1. Start with the standard, populated database built with the Oracle scripts.
2. Create no-data export files on a schema-by-schema basis (OE shown) by:

exp "'sys/&&password_sys AS SYSDBA'" file=oe_nodata.dmp log=oe_nodata_exp.log owner=OE rows=N grants=N

(You should only have to do this once, and the dmp file can be reused.)
Now, your script should:

1. Drop any TR_ users with the CASCADE option.
2. Re-create the TR_ users.
3. Populate the schema objects (OE shown) by:

host imp "'sys/&&password_sys AS SYSDBA'" file=oe_nodata.dmp log=tr_oe_imp.log fromuser=OE touser=TR_OE
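Putting those steps together, a SQL*Plus driver script for one schema might look like this (a sketch; the password, tablespace and grants are assumptions):

-- rebuild_tr_oe.sql (hypothetical name); run as a DBA account
-- the DROP will raise ORA-01918 on the very first run, which is harmless
drop user tr_oe cascade;
create user tr_oe identified by tr_oe
  default tablespace example quota unlimited on example;
grant connect, resource to tr_oe;
host imp "'sys/&&password_sys AS SYSDBA'" file=oe_nodata.dmp log=tr_oe_imp.log fromuser=OE touser=TR_OE

Repeat the same block for TR_HR, TR_PM, TR_IX and TR_SH.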
Here is an anonymous block which, for a given schema, disables triggers and foreign keys, truncates all the tables and then re-enables the triggers and foreign keys. It uses TRUNCATE for speed, but obviously this means no rollback: so be careful which schema name you supply! It's easy enough to convert the truncate into a DELETE FROM statement if you prefer.
The script is a fine example of cut'n'paste programming, and would no doubt benefit from some refactoring to remove the repetition.
begin
<< dis_triggers >>
for trgs in ( select owner, trigger_name
from all_triggers
where table_owner = '&&schema_name' )
loop
execute immediate 'alter trigger '||trgs.owner||'.'||trgs.trigger_name
||' disable';
end loop dis_triggers;
<< dis_fkeys >>
for fkeys in ( select owner, table_name, constraint_name
from all_constraints
where owner = '&&schema_name'
and constraint_type = 'R')
loop
execute immediate 'alter table '||fkeys.owner||'.'||fkeys.table_name
||' disable constraint '||fkeys.constraint_name;
end loop dis_fkeys;
<< zap_tables >>
for tabs in ( select owner, table_name
from all_tables
where owner = '&&schema_name' )
loop
execute immediate 'truncate table '||tabs.owner||'.'||tabs.table_name
||' reuse storage';
end loop zap_tables;
<< en_fkeys >>
for fkeys in ( select owner, table_name, constraint_name
from all_constraints
where owner = '&&schema_name'
and constraint_type = 'R')
loop
execute immediate 'alter table '||fkeys.owner||'.'||fkeys.table_name
||' enable constraint '||fkeys.constraint_name;
end loop en_fkeys;
<< en_triggers >>
for trgs in ( select owner, trigger_name
from all_triggers
where table_owner = '&&schema_name' )
loop
execute immediate 'alter trigger '||trgs.owner||'.'||trgs.trigger_name
||' enable';
end loop en_triggers;
end;
/
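To run the block from SQL*Plus without being prompted, define the substitution variable first (the script file name is an assumption):

define schema_name = TR_OE
@truncate_schema.sql
undefine schema_name

The undefine matters because && would otherwise silently reuse the old value on the next run.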
I see that the concept of a temporary table in Oracle is quite different from that in other databases such as SQL Server. In Oracle we have global temporary tables: we create the table only once, and each session fills it with its own data, which is not how other databases work.
In 18c, Oracle has introduced the concept of private temporary tables, which can be dropped after use like temporary tables in other databases. But how do we use one in a PL/SQL block?
I tried using dynamic SQL (EXECUTE IMMEDIATE), but it gives me a 'table must be declared' error. What do I do here?
"But how do we use it in a PL/SQL block?"
If what you mean is, how can we use private temporary tables in a PL/SQL program (procedure or function) the answer is simple: we can't. PL/SQL programs need to be compiled before we can call them. This means any table referenced in the program must exist at compilation time. Private temporary tables don't change that.
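For example, a stored procedure like this will fail to compile, even if the private temporary table exists in your session (the procedure name is made up; the table is the one from the example below):

create or replace procedure p_monthly_report is
  l_count pls_integer;
begin
  -- static SQL against a private temporary table: compilation fails with ORA-00942
  select count(*) into l_count from ora$ptt_sales_summary;
end;
/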
The private temporary table is intended for use in ad hoc SQL work. It allows us to create a data structure we can use in SQL statements for the duration of a session, to make life easier for ourselves.
For instance, suppose I have a massive table of sales data - low level transactions - and my task is to investigate monthly trends. So I only need the total sales by month. Unfortunately, there is no materialized view providing this summary. I don't want to include the aggregating query in my select statements. In previous versions I would have had to create a permanent table (and had to remember to drop it afterwards) but in 18c I can use a private temporary table to stage my summary just for the session.
create private temporary table ora$ptt_sales_summary (
sales_month date
, total_value number )
/
insert into ora$ptt_sales_summary
select trunc(sales_date, 'MM')
, sum (qty*price)
from massive_sales_table
group by trunc(sales_date, 'MM')
/
select *
from ora$ptt_sales_summary
order by sales_month
/
Obviously we can write anonymous PL/SQL blocks in our session but let's continue assuming that's not what you need. So what is the equivalent of a private temporary table in a permanent PL/SQL program? Same as it's been for several versions now: a PL/SQL collection or a SQL nested table type.
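For instance, the session summary from the example above could be staged in a collection inside a permanent program; a minimal sketch, reusing the hypothetical massive_sales_table:

declare
  type sales_rec_t is record (sales_month date, total_value number);
  type sales_tab_t is table of sales_rec_t;
  l_summary sales_tab_t;
begin
  -- stage the monthly totals in memory instead of a private temporary table
  select trunc(sales_date, 'MM'), sum(qty * price)
    bulk collect into l_summary
    from massive_sales_table
   group by trunc(sales_date, 'MM');
  dbms_output.put_line(l_summary.count || ' months summarised');
end;
/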
Private temporary tables (available from Oracle 18c) are dropped at the end of the session or transaction, depending on how the PTT is defined:
- The ON COMMIT DROP DEFINITION option creates a private temporary table that is transaction-specific: at the end of the transaction, Oracle drops both the table definition and its data.
- The ON COMMIT PRESERVE DEFINITION option creates a private temporary table that is session-specific: Oracle removes all data and drops the table at the end of the session.
You do not need to drop it manually. Oracle will do it for you.
CREATE PRIVATE TEMPORARY TABLE ora$ptt_temp_table (
......
)
ON COMMIT DROP DEFINITION;
-- or
-- ON COMMIT PRESERVE DEFINITION;
With ON COMMIT DROP DEFINITION, the table is dropped as soon as a COMMIT is executed. With ON COMMIT PRESERVE DEFINITION, the table is retained after COMMIT but is dropped at the end of the session.
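Here is a short demonstration of both options you can paste into an 18c session (table names are made up; the ora$ptt_ prefix is mandatory):

-- transaction-specific: definition and data vanish at COMMIT
create private temporary table ora$ptt_txn_demo (id number)
  on commit drop definition;
insert into ora$ptt_txn_demo values (1);
commit;
select * from ora$ptt_txn_demo;   -- now fails with ORA-00942

-- session-specific: survives COMMIT, dropped when the session ends
create private temporary table ora$ptt_sess_demo (id number)
  on commit preserve definition;
insert into ora$ptt_sess_demo values (1);
commit;
select * from ora$ptt_sess_demo;  -- still returns the row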
Cheers!!
It works with dynamic SQL:
declare
cnt int;
begin
execute immediate 'create private temporary table ora$ptt_tmp (id int)';
execute immediate 'insert into ora$ptt_tmp values (55)';
execute immediate 'insert into ora$ptt_tmp values (66)';
execute immediate 'insert into ora$ptt_tmp values (77)';
execute immediate 'select count(*) from ora$ptt_tmp' into cnt;
dbms_output.put_line(cnt);
execute immediate 'delete from ora$ptt_tmp where id = 66';
cnt := 0;
execute immediate 'select count(*) from ora$ptt_tmp' into cnt;
dbms_output.put_line(cnt);
end;
Example here:
https://livesql.oracle.com/apex/livesql/s/l7lrzxpulhtj3hfea0wml09yg
I am facing an issue here. I have a table prod_trkg_tran with columns prod_trkg_tran_id and cntr_nbr, among others. I also have two tables, instance_1 and instance_2, which also contain prod_trkg_tran_id and cntr_nbr; each holds one row of data with the same cntr_nbr. When I ran the delete shown at the bottom of this question in Oracle SQL Developer, it worked fine and deleted a row from prod_trkg_tran.
But when I tried the same thing in a stored procedure, by assigning:
p_where_clause:= 'WHERE t.prod_trkg_tran_id in (
select distinct tp82.PROD_TRKG_TAN_ID
from instance_1 tp21
join instance_2 tp82 on tp21.cntr_nbr=tp82.cntr_nbr
)'
and called a procedure delete_table, which executes a statement built as
'DELETE FROM ' || p_table_name ||' t ' || p_where_clause;
where p_table_name is prod_trkg_tran and p_where_clause is the one defined above.
When I run the SP, the records do not get deleted from prod_trkg_tran.
Ideally it should delete rows just as it did when I ran the query below in SQL Developer. Can you explain this?
delete from prod_trkg_tran t
WHERE t.prod_trkg_tran_id in (
select distinct tp82.PROD_TRKG_TRAN_ID
from instance_1 tp21
join instance_2 tp82 on tp21.cntr_nbr=tp82.cntr_nbr
);
It is most probably a privileges issue: the executing schema/user does not have DELETE or SELECT granted on the aforementioned tables.
If your user has DBA privileges you will not encounter the problem in SQL Developer; a stored procedure, however, runs with definer's rights, and privileges obtained through roles (including DBA) are disabled inside it, so it does.
In your case, your tables may also be synonyms for another schema's tables on which you have no direct grants.
Please make sure that you have sufficient grants.
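If that is the cause, direct grants from the table owner should fix it; a sketch, where sp_owner stands in for whichever schema owns the stored procedure:

grant select, delete on prod_trkg_tran to sp_owner;
grant select on instance_1 to sp_owner;
grant select on instance_2 to sp_owner;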
By the way, what is the error that you are facing?
I want to recreate the complete structure of multiple very large schemas (GB/TB in size) in another schema, but when filling the tables I only want the first n rows.
Right now I am using the following statement to copy the tables, but it works only if there are no foreign key constraints:
create table DEV_OWN.mytable as select * from TEST_OWN.mytable where rownum < 10
I want to make a script that loops through all tables and copies the first n rows (or more or fewer where a foreign key requires it), and also the indexes, views, packages, stored procedures and preferably everything else, so that the resulting schema is a replica of the original but with only a limited number of records.
Since I have to run this script often, I would like it to be as efficient as possible.
As @Aleksej has suggested, you can export the schema and then import it again.
Alternatively, you can use EXECUTE IMMEDIATE to do this: query the system views, such as ALL_TABLES, ALL_INDEXES and ALL_TRIGGERS, and use them to build dynamic SQL statements that you then run with EXECUTE IMMEDIATE. But this way is more complicated than exporting and importing the whole schema.
Here is a simple example of creating and filling the tables:
declare
  v_old_schema varchar2(100) := 'TEST_OWN';  -- source schema (assumption based on the question)
  v_new_schema varchar2(100) := 'DEV_OWN';
begin
  -- restrict to the source schema's tables; selecting from ALL_TABLES
  -- unfiltered would try to copy every table visible to the current user
  for rec in ( select owner, table_name
               from all_tables
               where owner = v_old_schema )
  loop
    execute immediate 'create table '||v_new_schema||'.'||rec.table_name
                    ||' as select * from '||rec.owner||'.'||rec.table_name
                    ||' where rownum < 10';
  end loop;
end;
/
In this example, only the tables are created, without constraints, triggers, or anything else belonging to them.
If you need all of that, it's actually easier to dump the whole schema.
I think the best solution for your case is to run expdp with a QUERY clause (a WHERE filter applied at export time) and then load the result with impdp.
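A sketch of that approach (the parfile name, directory object and row limit are assumptions):

# sample_export.par
SCHEMAS=TEST_OWN
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=test_own_sample.dmp
LOGFILE=test_own_sample_exp.log
QUERY="WHERE ROWNUM < 10"

expdp system parfile=sample_export.par
impdp system directory=DATA_PUMP_DIR dumpfile=test_own_sample.dmp remap_schema=TEST_OWN:DEV_OWN

An unqualified QUERY applies to every table in the job, and the sampled child rows are not guaranteed to have matching parent rows, so expect some constraint errors to resolve after the import.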
Is it possible to have the privileges needed to script all of the permissions in an Oracle 12c database without simultaneously having the rights to modify either objects or data in the schema?
I need to be able to script the existing permissions on a table before the table is dropped and recreated, so that I can re-apply those permissions afterwards. I have to submit the scripts to a DBA to run, and I need to include these permissions when dropping and re-creating a table. If I cannot see the existing permissions, I cannot include them. The DBA will not give me the rights to do this myself, but he will only run scripts that I write 100% myself.
When I try to view the DDL for a table while logged in using an ID that does not match the schema name, I get the following error:
To extract audit options, you must either have SELECT privilege on
DBA_OBJ_AUDIT_OPTS or log into the schema that you are extracting.
Will granting SELECT rights on DBA_OBJ_AUDIT_OPTS give me the ability to see all grants made on a table without also providing me additional rights to modify the schema or data?
Don't do a DROP TABLE/CREATE TABLE. Use DBMS_REDEFINITION instead. Here's a modified version of the sample code I keep around for this feature:
CREATE TABLE my_preexisting_table
( a number,
constraint my_preexisting_table_pk primary key (a) );
GRANT SELECT, UPDATE ON my_preexisting_table TO ont;
-- Start the online redefinition process...
-- First, check whether your table is a candidate for the process
BEGIN
DBMS_REDEFINITION.CAN_REDEF_TABLE('apps','my_preexisting_table',
DBMS_REDEFINITION.CONS_USE_ROWID);
END;
/
-- Create your new table with a new name. This will eventually replace the pre-existing one
--DROP TABLE apps.my_preexisting_table_redef;
CREATE TABLE apps.my_preexisting_table_redef
(
new_column1 NUMBER,
a NUMBER,
new_column2 DATE,
-- Let's change the primary key while we're at it
-- Unfortunately, we have to rename our constraints because they share a global namespace
constraint my_preexisting_table_pk_r primary key (new_column1, a)
)
-- Let's partition the table while we're at it...
PARTITION BY RANGE (new_column2)
INTERVAL (NUMTODSINTERVAL (1,'DAY')) ( partition my_preexisting_table_old values less than (to_date('01-JAN-2000','DD-MON-YYYY') ));
-- Takes long if your table is big.
BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE('apps', 'my_preexisting_table','my_preexisting_table_redef',
-- Map columns from the existing table to the new table here
'a new_column1, a a, sysdate new_column2',
dbms_redefinition.cons_use_rowid);
END;
/
DECLARE
num_errors PLS_INTEGER;
BEGIN
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('apps', 'my_preexisting_table','my_preexisting_table_redef',
DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, TRUE, num_errors);
DBMS_OUTPUT.PUT_LINE('Copy dependent objects: num_errors = ' || num_errors);
END;
/
-- Make sure there were no problems... or, if there were problems then they were expected. For example,
-- there will be an error listed because it cannot copy the PK constraint from the original table (because we made a new one already)
-- and that's OK.
select * from DBA_REDEFINITION_ERRORS where base_table_name = 'MY_PREEXISTING_TABLE';
BEGIN
DBMS_REDEFINITION.FINISH_REDEF_TABLE('apps', 'my_preexisting_table', 'my_preexisting_table_redef');
END;
/
-- Check out the results.
select * from my_preexisting_table;
-- Verify the grants are still in place...
select * from DBA_TAB_PRIVS where table_Name = 'MY_PREEXISTING_TABLE';
-- Drop our redef table when ready...
DROP TABLE apps.my_preexisting_table_redef;
Create a function on the application schema that returns object privileges on tables owned by that schema, then grant yourself the privilege to execute that function.
That's the simplest solution to the problem. Looking at the big picture, there are better methods but these might require significant changes to the process.
1. Use ALTER instead of DROP and CREATE. There are a lot of dependent object types; it's impossible to think of them all. For example, do the tables have any Virtual Private Database predicates, or histograms built from column usage? In an environment where the code "lives" on the database, DROPs are the enemy.
2. Store the "one true version" of the schema in version-controlled text files. This is the only way you can safely DROP tables and know exactly how to rebuild them. Only after the schemas have been dropped and recreated a few hundred times on local databases will your organization truly understand how things work.
Here's the easiest way to get this working:
Sample Schema
drop table test1;
create table test1(a number);
grant select on test1 to system;
grant references on test1 to system with grant option;
Create Function to Generate Script
Create this function on the application schema.
create or replace function get_table_grants(p_table_name in varchar2) return clob is
--Purpose: Return the object grants for a table.
v_ddl clob;
begin
--Enable the SQL terminator, ";" or "/".
dbms_metadata.set_transform_param(
dbms_metadata.session_transform,
'SQLTERMINATOR',
true);
--Get the DDL.
select dbms_metadata.get_dependent_ddl(
object_type => 'OBJECT_GRANT',
base_object_name => upper(trim(p_table_name)),
base_object_schema => user)
into v_ddl
from dual;
--Return the DDL.
return v_ddl;
end get_table_grants;
/
--Grant access to yourself.
grant execute on get_table_grants to YOUR_USERNAME;
Sample Output
select get_table_grants('TEST1') from dual;
GRANT REFERENCES ON "JHELLER"."TEST1" TO "SYSTEM" WITH GRANT OPTION;
GRANT SELECT ON "JHELLER"."TEST1" TO "SYSTEM";
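To hand the output to the DBA as a script, you can spool it from SQL*Plus (the usual CLOB display settings; the spool file name is an assumption):

set long 100000 longchunksize 100000 pagesize 0 linesize 200 heading off feedback off
spool table_grants.sql
select get_table_grants('TEST1') from dual;
spool off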
Say you generate DDL to create all your db tables etc. via Hibernate SchemaExport. What you get is a script which starts with drop statements at the beginning. Not a problem, as I want this. But running this script produces a crapload of ORA-00942 errors on an Oracle db.
Since they're not really errors if the tables just didn't exist yet, I'd like my create script to be error-free when it executes, so it's easy to determine what (if anything) failed.
What are my options? I DO want the drop statements generated, since tables may or may not exist yet, but I don't want a million ORA-00942s coming back at me that I have to check (to determine whether they're actual errors) just because a brand-new table couldn't be dropped.
"Say you generate ddl to create all
your db tables etc via Hibernate
SchemaExport etc. What you get is a
script which starts with drop
statements at the beginning. Not a
problem, as I want this. But running
this script produces a crapload of
ORA-00942 errors running on an Oracle
db."
Ideally we should maintain our schema properly, using source control and configuration management best practices. In this scenario we know beforehand whether the schema we run our scripts against contains those tables. We don't get errors because we don't attempt to drop tables which don't exist.
However it is not always possible to do this. One alternate approach is to have two scripts. The first script just has the DROP TABLE statements, prefaced with a friendly
PROMPT It is safe to ignore any ORA-00942 errors in the following statements
The second script has all the CREATE TABLE statements and leads off with
PROMPT All the statements in this script should succeed. So investigate any errors
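A sketch of how the pair of scripts might start (script and table names are made up):

-- drop_all.sql: errors here are expected on a fresh schema
WHENEVER SQLERROR CONTINUE
PROMPT It is safe to ignore any ORA-00942 errors in the following statements
DROP TABLE customers CASCADE CONSTRAINTS;

-- create_all.sql: any error here is a real problem, so stop at once
WHENEVER SQLERROR EXIT SQL.SQLCODE
PROMPT All the statements in this script should succeed. So investigate any errors
CREATE TABLE customers (id NUMBER PRIMARY KEY);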
Another option is to use the data dictionary:
begin
for r in ( select table_name from user_tables )
loop
execute immediate 'drop table '||r.table_name
||' cascade constraints';
end loop;
end;
Be careful with this one. It is the nuclear option and will drop every table in your schema.
If you get a script of drop statements, and Hibernate won't suppress the errors for you, wrap each DROP TABLE in a PL/SQL block that swallows the "table or view does not exist" error, since Oracle has no DROP TABLE IF EXISTS syntax:

begin
  execute immediate 'drop table table_xyz';
exception
  when others then
    if sqlcode != -942 then
      raise;
    end if;
end;
/