Creating re-runnable Oracle DDL SQL Scripts

Our development team does all of their development on their local machines, databases included. When we make changes to schemas we save the SQL to a file that is then sent to the version control system (if there is a better practice for this I'd be open to hearing about that as well).
When working on SQL Server we'd wrap our updates in "if exists" statements to make them re-runnable. I am now working on an Oracle 10g project and I can't find any Oracle functions that do the same thing. I was able to find this thread on dbaforums.org, but the answer there seems a bit kludgy.

I am assuming this is for some sort of automated build process, with the build being redone from scratch if something fails.
As Shannon pointed out, PL/SQL objects such as procedures, functions and packages have the "create or replace" option, so a second recompile/re-run would be ok. Grants should be fine too.
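As a minimal illustration of why "create or replace" makes those objects naturally re-runnable (the procedure name and body here are made up):
-- Running this script twice is harmless: the second run simply replaces the
-- first definition instead of failing with "name is already used".
CREATE OR REPLACE PROCEDURE log_build_step (p_step IN VARCHAR2) IS
BEGIN
   DBMS_OUTPUT.PUT_LINE('Completed build step: ' || p_step);
END log_build_step;
/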
As for Table creations and DDLs, you could take one of the following approaches.
1) Do not add any drop commands to the scripts and ask your development team to come up with the revert script for the individual modules.
So for each CREATE TABLE that they add to the build, they will add an equivalent "DROP TABLE ..." to a script, say "build_rollback.sql". If your build fails, you can run this script before running the build from scratch.
2) The second (and most frequently used approach I have seen) is to include the DROP TABLE just before the CREATE TABLE statement and then ignore the "Table or view does not exist" errors in the build log. Something like:
DROP TABLE EMP;
CREATE TABLE EMP (
.......
.......
);
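If you would rather not grep the build log for spurious errors, a common variant (sketched here for a table called EMP) wraps the drop in an anonymous block that swallows only the "table or view does not exist" error:
-- Re-runnable drop: ignore ORA-00942 ("table or view does not exist")
-- and re-raise anything else.
BEGIN
   EXECUTE IMMEDIATE 'DROP TABLE EMP';
EXCEPTION
   WHEN OTHERS THEN
      IF SQLCODE != -942 THEN
         RAISE;
      END IF;
END;
/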
The thread you posted has a major flaw: it assumes you rebuild everything, when in practice tables are added incrementally. E.g., your database already has 100 tables and you are adding 5 more as part of this release. The script spools the DROP/CREATE for all 100 tables and then executes them, which does not make a lot of sense (unless you are building your database for the first time).

An SQL*Plus script will continue past errors unless configured otherwise (e.g. with WHENEVER SQLERROR EXIT).
So you could have all of your scripts use:
DROP TABLE TABLE_1;
CREATE TABLE TABLE_1 (...
This is an option in PowerDesigner, I know.
Another choice would be to write a PL/SQL script which scrubs a schema, iterating over all existing tables, views, packages, procedures, functions, sequences, and synonyms in the schema, issuing the proper DDL statement to drop them.
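A rough sketch of such a scrub block, assuming it runs as the schema owner and that the object types listed above are the only ones involved (recycle-bin objects are skipped; adjust the type list and the CASCADE CONSTRAINTS option to taste):
BEGIN
   FOR o IN (SELECT object_type, object_name
               FROM user_objects
              WHERE object_type IN ('TABLE', 'VIEW', 'PACKAGE', 'PROCEDURE',
                                    'FUNCTION', 'SEQUENCE', 'SYNONYM')
                AND object_name NOT LIKE 'BIN$%')   -- ignore recycle-bin leftovers
   LOOP
      IF o.object_type = 'TABLE' THEN
         EXECUTE IMMEDIATE 'DROP TABLE "' || o.object_name || '" CASCADE CONSTRAINTS';
      ELSE
         EXECUTE IMMEDIATE 'DROP ' || o.object_type || ' "' || o.object_name || '"';
      END IF;
   END LOOP;
END;
/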
I'd consider decomposing the SQL to create the database; one giant script containing everything for the schema sounds murderous to maintain in a shared environment. Dividing at a Schema / Object Type / Name level might be prudent, keeping fully dependent object types (like Tables and Indexes) together.

Related

Is using DDL statements in an ETL script the right approach?

I'm working on redesigning a DWH solution previously based on Teradata, with lots of BTEQ scripts performing transformations on mirror tables loaded from source DBs. The new solution will be based on Snowflake, and a set of SQL (Snowflake) scripts is being prepared as the transformation tooling.
Is it the right approach for ETL scripts to use DDL statements that, for example, create a temporary table which is then dropped at the end of the script?
In my opinion such a table should be created before running the script instead of being created in the script on the fly. One argument is that DDL statements in Snowflake commit the current transaction, and that's why I want to avoid DDL statements in transformation scripts. Please help me find the pros and cons of using DDL statements in an ETL process, and back me up that I'm right or convince me that I'm wrong.
If you want transactions to cover all the SELECT/INSERT/MERGE steps of the transformation phase of your ELT, you need to not create or drop any tables, as those statements will commit your open transaction.
We get around this by having pre-existing worker tables per task/deployment that are created/truncated prior to the transaction section of our ELT process, and our tooling does not allow the same task to run concurrently with itself.
Thus we load into a landing table, we transform into temporary tables, then we multi-table merge into the final tables, with only the last steps needing to be in transactions.
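A hypothetical sketch of that pattern in Snowflake SQL (the table and column names are invented): the worker table already exists, it is truncated outside the transaction, and only DDL-free statements run inside it.
TRUNCATE TABLE work_orders;                               -- outside the transaction

BEGIN;                                                    -- explicit transaction
INSERT INTO work_orders (order_id, status)
    SELECT order_id, UPPER(status) FROM landing_orders;   -- transformation step
MERGE INTO orders_final f
    USING work_orders w ON f.order_id = w.order_id
    WHEN MATCHED THEN UPDATE SET f.status = w.status
    WHEN NOT MATCHED THEN INSERT (order_id, status) VALUES (w.order_id, w.status);
COMMIT;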

What's a way I can save a trigger "template" in Oracle?

Let's say I created a table test_table in development just to test a trigger, this trigger would then be reused in many other tables (future and existing).
So I code the trigger, test it, all good! But at the moment, if I want to replicate it, I will have to copy it from test_table's triggers and edit it.
So if someone deletes the table accidentally, the trigger is gone, and I don't have it saved anywhere else. Nor can I just delete random test tables in our database without losing it.
What's a recommended way to save a trigger as a "template" in Oracle, so I can reuse it on other tables and have it not be dependent on a random test table, or any table?
There are a lot of ways you can keep a copy of your TRIGGER SQLText.
Here's a few examples.
In Version Control:
You can use any of the many version control tools to maintain a versioned history for any code you like, including SQL, PL/SQL, etc. You can rewind time, view differences over time, track changes to the template, even allow concurrent development.
As a Function:
If you want the template to live in the database, you can create a FUNCTION (or PACKAGE) that takes the target USER and TABLE as parameters, and replaces the USER and TABLE values in its template to generate the SQL text required to create or replace the template TRIGGER on the target TABLE. You can make it EDITIONABLE as needed.
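A hypothetical sketch of that idea (the function name, the #OWNER#/#TABLE# placeholder tokens and the trigger body are all invented; the point is only that the template lives in compiled PL/SQL rather than hanging off a test table):
CREATE OR REPLACE FUNCTION trigger_ddl_for (
    p_owner IN VARCHAR2,
    p_table IN VARCHAR2
) RETURN CLOB IS
    c_template CONSTANT CLOB :=
        'CREATE OR REPLACE TRIGGER #OWNER#.trg_#TABLE#_audit' || CHR(10) ||
        'BEFORE INSERT OR UPDATE ON #OWNER#.#TABLE#'          || CHR(10) ||
        'FOR EACH ROW'                                         || CHR(10) ||
        'BEGIN'                                                || CHR(10) ||
        '   :NEW.last_modified := SYSDATE;'                    || CHR(10) ||
        'END;';
BEGIN
    -- Substitute the target owner and table into the template text.
    RETURN REPLACE(REPLACE(c_template, '#OWNER#', p_owner), '#TABLE#', p_table);
END trigger_ddl_for;
/
You could then SELECT trigger_ddl_for('SCOTT', 'EMP') FROM dual to review the generated text, or EXECUTE IMMEDIATE it from a small installer procedure.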
In a Table:
You can always just create a TABLE that holds the template TRIGGER SQL text as a CLOB or VARCHAR2. It would need to live somewhere it isn't likely to be "randomly" deleted, though. You can AUDIT changes to the TABLE's data to see the template change over time; Oracle has tons of auditing options.
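For instance (table and column names assumed):
-- One row per template; AUDIT, or a trigger on this table, can track changes.
CREATE TABLE trigger_templates (
   template_name  VARCHAR2(30)  PRIMARY KEY,
   template_text  CLOB          NOT NULL,
   updated_by     VARCHAR2(30)  DEFAULT USER,
   updated_at     DATE          DEFAULT SYSDATE
);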
In the logs:
You can just log (all) DDL. If you set ENABLE_DDL_LOGGING, the DDL log (an XML log) will have a copy of every DDL statement, categorized, along with when and where it came from.
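For example (assuming you have the ALTER SYSTEM privilege; the statements are then written to the DDL log under the diagnostic destination):
-- Turn on DDL logging instance-wide; every subsequent DDL statement,
-- including CREATE OR REPLACE TRIGGER, is recorded in the DDL log.
ALTER SYSTEM SET enable_ddl_logging = TRUE;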

Oracle Package erroring when a table doesn't exist

I'm using Oracle 9g at the moment and writing a package, which I'm fairly new to. I have some procedures in the package that load data into tables from external tables. Each procedure drops its table first, recreates it with some transformations from the external table, and then creates the indexes for it. I can't just reference the external tables directly as I need indexes and a few changes to the data.
I then have some other procedures later in the package that reference these temporary tables to do their work. It all works fine, except when, running the procedures in order, the procedure that creates a table gets interrupted after it has dropped the table but before it has recreated it.
Now if I make a change to the package body, it compiles with errors because the later procedures that reference that table report that the table does not exist. Nor can I run any of the procedures due to this validation error.
Can anyone advise of any best practices, or how best to do this without getting these validation errors? Or is there a way to turn off this validation somehow?
Many thanks,
Dan
Make two packages: one that drops and creates the tables, and another with the data manipulation code. The second package will become invalid once you drop the tables, but the first one will still be usable.
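A minimal sketch of that split (all names invented). The DDL package uses dynamic SQL, so it never references the staging table at compile time and stays valid even while the table is missing:
CREATE OR REPLACE PACKAGE stage_ddl_pkg AS
   PROCEDURE rebuild_stage;
END stage_ddl_pkg;
/
CREATE OR REPLACE PACKAGE BODY stage_ddl_pkg AS
   PROCEDURE rebuild_stage IS
   BEGIN
      BEGIN
         EXECUTE IMMEDIATE 'DROP TABLE stage_emp';
      EXCEPTION
         WHEN OTHERS THEN
            IF SQLCODE != -942 THEN RAISE; END IF;   -- ignore "does not exist"
      END;
      EXECUTE IMMEDIATE
         'CREATE TABLE stage_emp AS SELECT * FROM ext_emp';   -- from the external table
      EXECUTE IMMEDIATE
         'CREATE INDEX stage_emp_i1 ON stage_emp (empno)';
   END rebuild_stage;
END stage_ddl_pkg;
/
The second package, holding the static SQL that reads stage_emp, is the only one that goes invalid while the table is missing, and Oracle will recompile it automatically the next time it is called after the table is back.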

Script Oracle tables (DDL) with data insert statements into single/multiple sql files

I need to export the tables for a given schema into DDL scripts and INSERT statements, and have it scripted such that the order of dependencies/constraints is maintained.
I came across this article suggesting how to archive the database with data - http://www.dba-oracle.com/t_archiving_data_in_file_structures.htm - not sure if the article is applicable to Oracle 10g/11g.
I have seen "export table with data" features in SQL Developer, Toad for Oracle, DreamCoder for Oracle, etc., but I would need to do this one table at a time, and would still need to figure out the right order of script execution manually.
Are there any tools/scripts that can utilize oracle metadata and generate DDL script with data?
Note that some of the tables have CLOB datatype columns - so the tool/script would need to be able to handle these columns.
P.S. I need something similar to the "Generate Scripts" feature in SQL Server 2008, where one can specify the "script data" option and get back a self-sufficient script with DDL and data, generated in the order of table constraints. Please see: http://www.kodyaz.com/articles/sql-server-script-data-with-generate-script-wizard.aspx
Thanks for your help!
Firstly, recognise that this isn't necessarily possible. A view can use a function in a package that also selects from the view. Another issue is that you might need to load data into tables and then apply constraints, even though this might be slower than the other way round.
In short, you will need to do some work here.
Work out the dependencies in your system. ALL_DEPENDENCIES is the primary mechanism.
Then use DBMS_METADATA.GET_DDL to extract the DDL statements. For small data volumes, I'd extract the constraints separately for applying after the data load.
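For example (the SCOTT/EMP names are placeholders), the two dictionary tools mentioned above look like this:
-- What depends on what, for objects owned by SCOTT:
SELECT name, type, referenced_name, referenced_type
  FROM all_dependencies
 WHERE owner = 'SCOTT';

-- DDL for one table; DBMS_METADATA.GET_DEPENDENT_DDL can pull its indexes
-- and constraints separately, for running after the data load.
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM dual;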
In current versions you can create external tables to unload data from regular tables into OS files (and obviously go the other way round). But if you've got exotic datatypes (BLOB, RAW, XMLTYPEs, User Defined Types....) it will be more challenging.
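For instance, an ORACLE_DATAPUMP external table created as a select will unload a regular table to a file (the directory object and the table names here are assumptions):
-- DATA_DIR must be an existing Oracle DIRECTORY object the user can write to.
CREATE TABLE emp_unload
   ORGANIZATION EXTERNAL (
      TYPE ORACLE_DATAPUMP
      DEFAULT DIRECTORY data_dir
      LOCATION ('emp_unload.dmp')
   )
   AS SELECT * FROM emp;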
I suggest that you use standard Oracle export and import (exp/imp) here; is there a reason why you won't consider it? Note in addition that you can use the "indexfile" option on the import to write the SQL statements to a file instead of actually executing them (unfortunately this doesn't include the inserts).

How are FORCED views significant if their base tables don't exist?

In the context of an Oracle 9i database, an underlying base table is usually mandatory for a view to be created; however, this constraint can be bypassed if the FORCE clause is used while creating the view.
What is the significance of these views if their base tables do not exist?
In which conditions will these views be of help?
You would use the FORCE keyword if you are creating several views that reference each other and you don't want to spend time determining the order in which they should be created.
See also:
streamlining creating of packages and views
On a large project this can be very useful: I create a table, someone else creates a packaged function, and you create a view that accesses the table and the packaged function. We then all put our DDL into the source control / release system and the DBA installs all the code on the test / production system. The FORCE keyword ensures that the view gets created, though possibly in an invalid state, even if its DDL is run before the table and/or package DDL.
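A minimal illustration (the view, table and column names are made up):
-- Succeeds even if emp_not_yet_created does not exist; the view is created
-- in an INVALID state and becomes valid once the table appears.
CREATE OR REPLACE FORCE VIEW emp_v AS
   SELECT empno, ename FROM emp_not_yet_created;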
