I have created a number of table, function, view, and procedure scripts to support reporting. Because of the complicated environment, migrating scripts (development --> testing) can be a chore.
The DBA does not allow developers to use the primary tablespace ('VENDOR') or either of the main schemas ('UTIL', 'REPORTING'). The UTIL schema is intended for functions and procedures; REPORTING is for tables and views.
Because the development server is often recommissioned for other purposes, development is done on the testing server, using a development tablespace ('DEVL') and a schema for each developer ('CRAIG', for example).
As a result, a table's script must be converted from:
DROP TABLE CRAIG.X_TABLE;
CREATE TABLE CRAIG.X_TABLE
...
TABLESPACE "DEVL";
to:
DROP TABLE REPORTING.X_TABLE;
CREATE TABLE REPORTING.X_TABLE
...
TABLESPACE "VENDOR";
A view's script must be changed from:
CREATE OR REPLACE VIEW CRAIG.X_VIEW
...
;
to:
CREATE OR REPLACE VIEW REPORTING.X_VIEW
...
;
A procedure's script must be changed from:
CREATE OR REPLACE PROCEDURE CRAIG.X_PROCEDURE
...
INSERT INTO CRAIG.X_TABLE
SELECT ...
-- reference a table in REPORTING schema
FROM REPORTING.ANOTHER_TABLE
;
to:
CREATE OR REPLACE PROCEDURE UTIL.X_PROCEDURE
...
INSERT INTO REPORTING.X_TABLE
SELECT ...
FROM REPORTING.ANOTHER_TABLE
;
The table and procedure scripts require the most intervention, as you can see.
If it makes a difference, I use SQL Developer, TextMate, and Sublime Text 2 for coding and Cornerstone to interact with our organization's Subversion (SVN) repository.
Is there a way to simplify (i.e. automate) the changes that I need to make to each type of script as I migrate the logic from the development environment to the testing one?
I would connect as the schema owner; I'm not sure if you're implying that you connect as one user and build objects in a different schema? In other words, don't qualify the table names at all, and give that user a suitable default tablespace. Then the scripts don't need to specify either. Maybe I'm missing something?
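For example, a minimal sketch of what such an unqualified script might look like (the column is made up purely so the statement is complete; the real definition comes from your existing scripts):
DROP TABLE x_table;
CREATE TABLE x_table
( id NUMBER );   -- hypothetical column for illustration
-- No schema prefix and no TABLESPACE clause: the table lands in the connected
-- user's schema and default tablespace (DEVL for CRAIG; VENDOR for REPORTING,
-- assuming that is REPORTING's default tablespace), so the same script works in both.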
If you really want to specify them, you can prompt for and accept the values at the start of the script and use substitution variables:
accept schma char prompt 'Enter schema: '
accept tbspc char prompt 'Enter tablespace: '
create table &&schma..my_table (...) tablespace &&tbspc;
etc.
If there are a limited number of scenarios you could maybe set the values automatically based on the database name, assuming different environments are in different instances:
column q_schma new_value schma
column q_tbspc new_value tbspc
select case name when 'TEST_NAME' then 'DEV_USER' else 'PROD_USER' end as q_schma,
case name when 'TEST_NAME' then 'TBSP_DEV' else 'TBSP_PROD' end as q_tbspc
from v$database;
create table &&schma..my_table (...) tablespace &&tbspc;
You could also change your default schema to avoid the prefixes:
alter session set current_schema = &&schma;
create table my_table (...) tablespace &&tbspc;
Another approach might be to use placeholders in the checked-in code, and run the code through sed or similar to put the real values in.
I've been trying to set up an Oracle database for certification/study purposes. I have been using the Docker image-building script in my project to help create an easily recyclable database, and I'm having some trouble creating my initial schemas.
The README documentation I found states that I should be able to create a number of setup scripts that are kicked off automatically when the database is ready, all I need to do is place them in the /opt/oracle/scripts/setup/ directory.
I tried doing that in my Dockerfile, and while they do execute they don't succeed with even the most trivial example I came up with.
For example, I tried creating a user named student and immediately came across an error specific to Oracle 12's multi-tenancy. Not really wanting to care about it, since it's not something covered by the 1Z0-071 certification, I took the black-magic answer and moved on.
But I was immediately blocked again by an even stranger error in my code.
CREATE SEQUENCE simpledata.simpledata_pk_sequence;
INSERT INTO simpledata.simpledata (id, text)
VALUES (simpledata.simpledata_pk_sequence.nextval, 'Hi, I''m Paul')
*
ERROR at line 2:
ORA-01950: no privileges on tablespace 'USERS'
Which seems to point to the fact that I'm doing something outright wrong. I should be able to initialize the database with whatever arbitrary users and data. This leads me to believe I'm either missing configuration steps, using the wrong user, or something else entirely that I'm not aware of.
What is the correct way to run an arbitrary setup script in Oracle 12?
I don't know if this will cover everything you ask, because it is the kind of question that has several alternative answers.
Anyway, your scripts look OK, except that the second one seems to be missing what the first one has: a quota for the user on the USERS tablespace, which is exactly what ORA-01950 is complaining about.
Script
{{ with .SimpleData }}
CREATE USER simpledata IDENTIFIED BY {{ RandomPassword }} DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP PROFILE DEFAULT QUOTA UNLIMITED ON USERS ;
CREATE TABLE simpledata.simpledata (
id NUMBER(9) PRIMARY KEY,
text VARCHAR2(20)
);
CREATE SEQUENCE simpledata.simpledata_pk_sequence;
{{ range .Items }}
INSERT INTO simpledata.simpledata (id, text)
VALUES (simpledata.simpledata_pk_sequence.nextval, '{{ . }}');
{{ end }}
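If the user has already been created without a quota, the same ORA-01950 error can also be fixed after the fact; a one-liner like this should do it (the tablespace name USERS is taken from the error message):
ALTER USER simpledata QUOTA UNLIMITED ON USERS;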
If you ask me what the right way to create schemas and data is, I would create things separately and in this order:
Schemas
Sequences
Tables
DML scripts (INSERT, UPDATE, etc.)
Functions, procedures and packages.
Example user creation in your case
SQL> create user simpledata identified by "Passw_1" default tablespace users temporary tablespace temp account unlock profile default quota unlimited on users;
User created.
Elapsed: 00:00:00.09
SQL>
Is it possible to have permission to script all of the permissions in an Oracle 12c database without also simultaneously having the rights to modify either objects or data in the schema?
I need to be able to script the existing permissions on a table before the table is dropped and recreated, so that I can re-apply the permissions after the table is recreated. I will have to submit the scripts to a DBA to run and need to include these permissions when dropping and re-creating a table. If I cannot see the existing permissions, I cannot include them. The DBA will not give me the rights to do this myself,
and he will only run scripts that I write 100% myself.
When I try to view the DDL for a table while logged in using an ID that does not match the schema name, I get the following error:
To extract audit options, you must either have SELECT privilege on
DBA_OBJ_AUDIT_OPTS or log into the schema that you are extracting.
Will granting SELECT rights on DBA_OBJ_AUDIT_OPTS give me the ability to see all grants made on a table without also providing me additional rights to modify the schema or data?
Don't do a DROP TABLE/CREATE TABLE. Use DBMS_REDEFINITION instead. Here's a modified version of the sample code I keep around for this feature:
CREATE TABLE my_preexisting_table
( a number,
constraint my_preexisting_table_pk primary key (a) );
GRANT SELECT, UPDATE ON my_preexisting_table TO ont;
-- Start the online redefinition process...
-- First, check whether your table is a candidate for the process
BEGIN
DBMS_REDEFINITION.CAN_REDEF_TABLE('apps','my_preexisting_table',
DBMS_REDEFINITION.CONS_USE_ROWID);
END;
/
-- Create your new table with a new name. This will eventually replace the pre-existing one
--DROP TABLE apps.my_preexisting_table_redef;
CREATE TABLE apps.my_preexisting_table_redef
(
new_column1 NUMBER,
a NUMBER,
new_column2 DATE,
-- Let's change the primary key while we're at it
-- Unfortunately, we have to rename our constraints because they share a global namespace
constraint my_preexisting_table_pk_r primary key (new_column1, a)
)
-- Let's partition the table while we're at it...
PARTITION BY RANGE (new_column2)
INTERVAL (NUMTODSINTERVAL (1,'DAY')) ( partition my_preexisting_table_old values less than (to_date('01-JAN-2000','DD-MON-YYYY') ));
-- Takes long if your table is big.
BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE('apps', 'my_preexisting_table','my_preexisting_table_redef',
-- Map columns from the existing table to the new table here
'a new_column1, a a, sysdate new_column2',
dbms_redefinition.cons_use_rowid);
END;
/
DECLARE
num_errors PLS_INTEGER;
BEGIN
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('apps', 'my_preexisting_table','my_preexisting_table_redef',
DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, TRUE, num_errors);
DBMS_OUTPUT.PUT_LINE('Copy dependent objects: num_errors = ' || num_errors);
END;
/
-- Make sure there were no problems... or, if there were problems then they were expected. For example,
-- there will be an error listed because it cannot copy the PK constraint from the original table (because we made a new one already)
-- and that's OK.
select * from DBA_REDEFINITION_ERRORS where base_table_name = 'MY_PREEXISTING_TABLE';
BEGIN
DBMS_REDEFINITION.FINISH_REDEF_TABLE('apps', 'my_preexisting_table', 'my_preexisting_table_redef');
END;
/
-- Check out the results.
select * from my_preexisting_table;
-- Verify the grants are still in place...
select * from DBA_TAB_PRIVS where table_Name = 'MY_PREEXISTING_TABLE';
-- Drop our redef table when ready...
DROP TABLE apps.my_preexisting_table_redef;
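One thing not shown in the sample but worth keeping alongside it: if any of the steps fails partway through, the redefinition can be abandoned and the interim state cleaned up before retrying (same three arguments as START_REDEF_TABLE):
BEGIN
DBMS_REDEFINITION.ABORT_REDEF_TABLE('apps', 'my_preexisting_table', 'my_preexisting_table_redef');
END;
/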
Create a function on the application schema that returns object privileges on tables owned by that schema, then grant yourself the privilege to execute that function.
That's the simplest solution to the problem. Looking at the big picture, there are better methods but these might require significant changes to the process.
Use ALTER instead of DROP and CREATE. There are a lot of dependent object types and it's impossible to think of them all. For example, do the tables have any
Virtual Private Database predicates, histograms built based on column usage, etc.? In an environment where the code "lives" on the database, DROPs are the enemy.
Store the "one true version" of the database in version controlled text files. This is the only way you can safely DROP tables and know exactly how to rebuild them.
Only after the schemas have been dropped and recreated a few hundred times on local databases will your organization truly understand how things work.
Here's the easiest way to get this working:
Sample Schema
drop table test1;
create table test1(a number);
grant select on test1 to system;
grant references on test1 to system with grant option;
Create Function to Generate Script
Create this function on the application schema.
create or replace function get_table_grants(p_table_name in varchar2) return clob is
--Purpose: Return the object grants for a table.
v_ddl clob;
begin
--Enable the SQL terminator, ";" or "/".
dbms_metadata.set_transform_param(
dbms_metadata.session_transform,
'SQLTERMINATOR',
true);
--Get the DDL.
select dbms_metadata.get_dependent_ddl(
object_type => 'OBJECT_GRANT',
base_object_name => upper(trim(p_table_name)),
base_object_schema => user)
into v_ddl
from dual;
--Return the DDL.
return v_ddl;
end get_table_grants;
/
--Grant access to yourself.
grant execute on get_table_grants to YOUR_USERNAME;
Sample Output
select get_table_grants('TEST1') from dual;
GRANT REFERENCES ON "JHELLER"."TEST1" TO "SYSTEM" WITH GRANT OPTION;
GRANT SELECT ON "JHELLER"."TEST1" TO "SYSTEM";
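If you need to hand the output to the DBA as a file, one way to capture it (the SQL*Plus settings are illustrative; LONG just needs to be large enough for the CLOB) is:
set long 100000 pagesize 0 linesize 200 trimspool on
spool table_grants.sql
select get_table_grants('TEST1') from dual;
spool off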
I have created a synonym for a dblink.
create synonym dblink2 for dblink1
But when I query anything using the synonym instead of the dblink, I'm getting connection description for remote database not found error.
SELECT * FROM DUAL@DBLINK2
How do I query using the synonym? Edit: I know that it will work if I create a view of the table using the dblink, but my requirement is as stated above.
Unfortunately creation of synonyms for dblinks is not supported. If you read the documentation on synonyms, you will find that the permitted objects for synonyms are only:
Use the CREATE SYNONYM statement to create a synonym, which is an
alternative name for a table, view, sequence, procedure, stored
function, package, materialized view, Java class schema object,
user-defined object type, or another synonym.
The reason why your second query fails is that the synonym you have created is not functioning correctly. Synonyms are not validated at creation time, so you can create all sorts of incorrect synonyms like that. To verify, just test the following statement:
create synonym dblink3 for no_object_with_this_name;
You will still get a response like this:
Synonym DBLINK3 created.
But of course nothing will work via this synonym.
I don't see the point in creating a synonym for the dblink itself. Ideally you create the synonym for the remote table using the dblink.
CREATE DATABASE LINK my_db_link CONNECT TO user IDENTIFIED BY passwd USING 'alias';
CREATE SYNONYM my_table FOR remote_table@my_db_link;
Now, you could query the remote table using the synonym:
SELECT * FROM my_table;
I'm trying to think of the business issue that gets solved by putting a synonym on a db_link, and the only thing I can think of is that you need to deploy constant code that selects from some_table@some_dblink, where the table names are constant but different users may be looking across different db_links. Or you just want to be able to swap which db_link you are operating across with a simple synonym repoint.
Here's the problem: it can't be done that way. db_link synonyms are not allowed.
Your only solution is to have the code instead reference the tables by synonyms, and set private synonyms to point across the correct db_link. That way your code continues to "Select from REMOTE_TABLE1" and you can just flip which DB_LINK you are getting that remote table from.
Is it a pain to have to set/reset 100+ private synonyms? Yep. But if it is something you need to do often then bundle up a procedure to do it for you where you pass in the db_link name and it cycles through and resets the synonyms for you.
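A rough sketch of that idea, using made-up names (remote_table1, dblink_a, dblink_b); USER_SYNONYMS records which db_link each private synonym points across, so the repointing can be driven from it:
-- Point the code's synonym at one link...
create or replace synonym remote_table1 for remote_table1@dblink_a;
-- ...and later repoint every synonym that uses that link in one go.
begin
  for s in ( select synonym_name, table_name
             from user_synonyms
             where db_link = 'DBLINK_A' )
  loop
    execute immediate 'create or replace synonym ' || s.synonym_name
                   || ' for ' || s.table_name || '@dblink_b';
  end loop;
end;
/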
While I understand that this question is 3+ years old, someone might be able to benefit from a different answer in the future.
Let's imagine that I have 4 databases, 2 for production and 2 for dev / testing.
Prod DBs: PRDAPP1DB1 and PRDAPP2DB1
Dev DBs: DEVAPP1DB1 and DEVAPP2DB1
The "APP2" databases are running procedures to extract and import data from the APP1 databases. In these procedures, there are various select statements, such as:
declare
iCount INTEGER;
begin
insert into tbl_impdata1
select sysdate, col1, col2, substr(col3,1,10), substr(col3,15,3)
from tbl1@dblink2; -- Where dblink2 points to DEVAPP1DB1
...
<more statements here>
...
EXCEPTION
<exception handling code here>
end;
Now that is okay for development but the dblink2 constantly needs to be changed to dblink1 when deploying the updated procedure to production.
As was pointed out, synonyms cannot be used for this purpose.
Instead, create the db links with the same name in each environment, just with a different connection string.
E.g. on production:
CREATE DATABASE LINK "MyDBLINK" USING 'PRDAPP1DB1';
And on dev:
CREATE DATABASE LINK "MyDBLINK" USING 'DEVAPP1DB1';
And then in the procedures, change all "@dblink1" and "@dblink2" to "@mydblink" and it all should be transparent from there.
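A quick sanity check after deployment, to confirm which database the link in each environment actually points at (your own links are visible in the standard USER_DB_LINKS view):
SELECT db_link, host FROM user_db_links;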
If you are trying to have the DB link accessible to multiple schemas (users), the answer is to create a public db link.
example:
CREATE PUBLIC DATABASE LINK dblink1 CONNECT TO user IDENTIFIED BY password USING 'tnsalias';
After that any schema can issue a:
SELECT * FROM TABLE@dblink1
When building a deploy script for a Visual Studio 2010 SQL database project, is there any way to instruct the process to use DROP and CREATE for stored procedures and other programmability objects instead of always ALTERing them?
As an example, if I change a stored procedure, the deploy script will generate something like this...
ALTER PROCEDURE MyProc
AS
SELECT yadda...
I want the deploy script to create something like this instead
IF OBJECT_ID('MyProc', 'P') IS NOT NULL
    DROP PROCEDURE MyProc
GO
CREATE PROCEDURE MyProc
AS
SELECT yadda....
It would make version controlled upgrade scripts a bit easier to manage, and deployed changes would perform better. Also if this is not possible, a way to at least issue a RECOMPILE with the ALTER would help some.
This question asks something that seems similar, but I do not want this behavior for tables, just the programmability objects.
I'm not familiar enough with database projects to give an answer about whether it's possible to do a DROP and CREATE. However, in general I find that CREATE and ALTER is better than DROP and CREATE.
By CREATE and ALTER I mean something like:
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_NAME = 'MyProc'
AND ROUTINE_TYPE = 'PROCEDURE')
BEGIN;
-- CREATE PROC has to be the first statement in a batch so
-- cannot appear within a conditional block. To get around
-- this, make the statement a string and use sp_ExecuteSql.
DECLARE @DummyCreateText NVARCHAR(100);
SET @DummyCreateText = 'CREATE PROC dbo.MyProc AS SELECT 0;';
EXEC sp_ExecuteSql @DummyCreateText;
END;
GO
ALTER PROCEDURE dbo.MyProc
AS
SELECT yadda...
The advantage of CREATE and ALTER over DROP and CREATE is that the stored proc is only created once. Once created it is never dropped so there is no chance of permissions getting dropped and not recreated.
In a perfect world the permissions on the stored proc would be applied via a database role so it would be easy to reapply them after dropping and recreating the stored proc. In reality, however, I often find that after a few years other applications may start using the same stored proc or well-meaning DBAs may apply new permissions for some reason. So
I've found that DROP and CREATE tend to cause applications to break after a few years (and it's always worse when it's someone else's application that you know nothing about). CREATE and ALTER avoids these problems.
By the way, the dummy create statement, "CREATE PROC dbo.MyProc AS SELECT 0", works with any stored procedure. If the real stored procedure is going to have parameters or return a recordset with multiple columns, those can all be specified in the ALTER PROC statement. The CREATE PROC statement just has to create the simplest stored procedure possible. (Of course, the name of the stored proc in the CREATE PROC statement will need to change to match the name of your stored proc.)
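As for the RECOMPILE side note in the question: that can be scripted independently of whether you DROP/CREATE or ALTER, for example by marking the procedure for recompilation on its next execution (procedure name carried over from the question):
EXEC sp_recompile N'dbo.MyProc';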
So, I have this Java-based data transformation / masking tool, which I wanted to test out on Oracle 10g. The good part with Oracle 10g is that you get a load of sample schemas, with half a million records in some. The schemas are SH, OE, HR, IX, etc. So I installed 10g and found out that the installation scripts are under ORACLE_HOME/demo/scripts.
I customized these scripts a bit to run in batch mode. That solves one half of my requirement: creating source data for testing my data transformation software.
The second half of the requirement is that I create the same schemas under different names (TR_HR, TR_OE and so on...) without any data. These schemas would represent my target schemas. So, in short, my software would pick up data from a table in a schema and load it into the same table in a different schema.
Now, I have two issues in creating my target schema and emptying it.
I would like this to be a batch job. But in the Oracle scripts you get, the sample schema names are not configurable. So I tried creating a script, replacing OE with TR_OE, HR with TR_HR and so on. However, this approach is irritating because the sample schemas are complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types and a lot of other weird stuff.
I would like the target schemas (TR_HR, TR_OE, ...) to be empty. But some of the schemas have circular references, which would not allow me to delete data. The only workaround seems to be removing certain foreign keys, deleting the data and then adding the constraints back.
Is there any easy way to do all this, without all the fuss? I would need a complicated data set for my testing (complicated as in tables with triggers, multiple hierarchies... for instance, a child table that has children up to 5 levels, a parent table that refers to an IOT table, and an IOT table that refers to a non-IOT table, etc.). The sample schemas are just about perfect from a data set perspective. The only challenge I see is in automating this whole process of loading up the source schemas, and then creating the target schemas and emptying them. I appreciate your help and suggestions.
UPDATE
The main script that you are required to run for manually installing the Oracle sample schemas is mkplug.sql. Here is the line that loads the schemas up from a dmp file:
host imp "'sys/&&password_sys AS SYSDBA'" transport_tablespace=y file=&imp_file log=&imp_logfile datafiles='&datafile' tablespaces=EXAMPLE tts_owners=hr,oe,pm,ix,sh
Well, I tried modifying this line (after patching up path related issues on mkplug.sql and all other sql files) to this:
host imp "'sys/&&password_sys AS SYSDBA'" rows=n transport_tablespace=y file=&imp_file log=&imp_logfile datafiles='&datafile' tablespaces=EXAMPLE tts_owners=hr,oe,pm,ix,sh
And... it did NOT help. The schema got created with row data, despite the rows=n parameter :(
Since you're already familiar with exp/imp (or expdp/impdp) from the Oracle scripts that use the .dmp file, why not just:
Create the empty TR_xxx schemas
Populate each TR_xxx schema from the corresponding .dmp file with the FROMUSER/TOUSER options and ROWS=N (similar options exist for expdp/impdp; see the sketch after this list)
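For the Data Pump flavour, a rough equivalent might be the following (the directory object, dump file and schema names are illustrative, and it assumes the dump was produced with expdp rather than the original exp; CONTENT=METADATA_ONLY is the counterpart of ROWS=N):
impdp system/your_password directory=DATA_PUMP_DIR dumpfile=oe.dmp logfile=tr_oe_imp.log remap_schema=OE:TR_OE content=metadata_only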
[Edit after reading your comment about the transportable tablespaces]
I didn't know that the Oracle scripts were using transportable tablespaces and that multiple schemas were being imported from a single file. This is probably the most straightforward way to create your new empty TR schemas:
Start with the standard, populated database built with the Oracle scripts
Create no-data export files on a schema-by-schema basis (OE shown) by:
exp "'sys/&&password_sys AS SYSDBA'" file=oe_nodata.dmp log=oe_nodata_exp.log owner=OE rows=N grants=N
(You should only have to do this once and this dmp file can be reused)
Now, your script should:
Drop any TR_ users with the CASCADE option
Re-create the TR_ users
Populate the schema objects (OE shown) by:
host imp "'sys/&&password_sys AS SYSDBA'" file=oe_nodata.dmp log=tr_oe_imp.log fromuser=OE touser=TR_OE
Here is an anonymous block which, for a given schema, disables triggers and foreign keys, truncates all the tables and then re-enables the triggers and foreign keys. It uses TRUNCATE for speed, but obviously this means no rollback: so be careful which schema name you supply! It's easy enough to convert the truncate into a delete from statement if you prefer.
The script is a fine example of cut'n'paste programming, and would no doubt benefit from some refactoring to remove the repetition.
begin
<< dis_triggers >>
for trgs in ( select owner, trigger_name
from all_triggers
where table_owner = '&&schema_name' )
loop
execute immediate 'alter trigger '||trgs.owner||'.'||trgs.trigger_name
||' disable';
end loop dis_triggers;
<< dis_fkeys >>
for fkeys in ( select owner, table_name, constraint_name
from all_constraints
where owner = '&&schema_name'
and constraint_type = 'R')
loop
execute immediate 'alter table '||fkeys.owner||'.'||fkeys.table_name
||' disable constraint '||fkeys.constraint_name;
end loop dis_fkeys;
<< zap_tables >>
for tabs in ( select owner, table_name
from all_tables
where owner = '&&schema_name' )
loop
execute immediate 'truncate table '||tabs.owner||'.'||tabs.table_name
||' reuse storage';
end loop zap_tables;
<< en_fkeys >>
for fkeys in ( select owner, table_name, constraint_name
from all_constraints
where owner = '&&schema_name'
and constraint_type = 'R')
loop
execute immediate 'alter table '||fkeys.owner||'.'||fkeys.table_name
||' enable constraint '||fkeys.constraint_name;
end loop en_fkeys;
<< en_triggers >>
for trgs in ( select owner, trigger_name
from all_triggers
where table_owner = '&&schema_name' )
loop
execute immediate 'alter trigger '||trgs.owner||'.'||trgs.trigger_name
||' enable';
end loop en_triggers;
end;
/
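One usage note: if the block is saved as a script and run from SQL*Plus, the &&schema_name substitution variable prompts once and the same value is reused in every cursor (the file name here is made up):
SQL> @reset_schema.sql
Enter value for schema_name: TR_OE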