Suppress ORA-00942 errors in DDL create scripts - Oracle

Say you generate DDL to create all your database tables etc. via Hibernate SchemaExport. What you get is a script that starts with DROP statements. Not a problem, as I want this. But running that script against an Oracle database produces a flood of ORA-00942 (table or view does not exist) errors.
Since they're not really errors if the tables simply didn't exist yet, I'd like my create script to execute error-free so it's easy to determine what (if anything) failed.
What are my options? I DO want the DROP statements generated, since tables may or may not exist yet, but I don't want a million ORA-00942s coming back at me that I have to check (to determine whether they're actual errors) just because a brand-new table couldn't be dropped.

"Say you generate ddl to create all
your db tables etc via Hibernate
SchemaExport etc. What you get is a
script which starts with drop
statements at the beginning. Not a
problem, as I want this. But running
this script produces a crapload of
ORA-00942 errors running on an Oracle
db."
Ideally we should maintain our schema properly, using source control and configuration management best practices. In this scenario we know beforehand whether the schema we run our scripts against contains those tables. We don't get errors because we don't attempt to drop tables which don't exist.
However, it is not always possible to do this. One alternative approach is to have two scripts. The first script contains just the DROP TABLE statements, prefaced with a friendly
PROMPT It is safe to ignore any ORA-00942 errors in the following statements
The second script has all the CREATE TABLE statements and leads off with
PROMPT All the statements in this script should succeed. So investigate any errors
Another option is to use the data dictionary:
begin
    for r in ( select table_name from user_tables )
    loop
        execute immediate 'drop table '||r.table_name
                        ||' cascade constraints';
    end loop;
end;
/
Be careful with this one. It is the nuclear option and will drop every table in your schema.
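If dropping every table is too blunt, the same loop can be filtered so it only drops the tables your create script is about to rebuild. A minimal sketch, assuming a hypothetical APP_ naming prefix (adjust the filter to your own naming convention):
begin
    for r in ( select table_name
               from   user_tables
               where  table_name like 'APP\_%' escape '\' )  -- hypothetical prefix
    loop
        execute immediate 'drop table '||r.table_name||' cascade constraints';
    end loop;
end;
/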

If you get a script of DROP statements and Hibernate won't generate the guards for you, then wrap each DROP TABLE in a check that the table exists before dropping it. Oracle has no IF EXISTS clause in plain SQL, so the check goes in a small PL/SQL block:
declare
    l_count integer;
begin
    select count(*) into l_count
    from   user_tables
    where  table_name = 'TABLE_XYZ';
    if l_count > 0 then
        execute immediate 'drop table TABLE_XYZ';
    end if;
end;
/

CREATE TABLE statement in Oracle with an existence check

This question is inspired by this one.
As stated, I don't want a PL/SQL solution. I want one or two SQL statements that will check for the table's existence and, if it does not exist, create it.
Such statement(s) will be plugged into a C++ application (not a script), so I want a plain SQL solution. If no such solution exists (please say so), I'd like a simple string I can plug into C++ code and execute with either SQLExecute() or a native Oracle client API.
Googling for a solution turns up results that can be used either in a shell script or a stored procedure. As I explained here and in the previous question, my situation is completely different - I work in C++ and want an appropriate solution.
There is no single SQL statement that will create a table only if it does not exist in Oracle 11g.
It is not obvious to me why you're objecting to a PL/SQL based solution. If you're using raw ODBC calls in C++, you can pass a PL/SQL block to SQLPrepare just as you would pass a plain SQL statement. Given that PL/SQL blocks work almost exactly like a pure SQL statement, it would be unusual to categorically reject a PL/SQL based solution.
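For illustration, here is a minimal sketch of such a block, using a hypothetical table name MY_TABLE; the whole string can be handed to SQLPrepare/SQLExecute exactly like a single SQL statement:
declare
    l_count pls_integer;
begin
    select count(*)
    into   l_count
    from   user_tables
    where  table_name = 'MY_TABLE';  -- hypothetical table name
    if l_count = 0 then
        execute immediate 'create table MY_TABLE (id number primary key)';
    end if;
end;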
If you are going to categorically reject PL/SQL, you can certainly take the logic from any of the PL/SQL based solutions and implement it in a couple of SQL statements executed from your application. For example, you can query dba_tables, all_tables, or user_tables (depending on your privileges, whether you are creating tables in other schemas, etc.) to determine whether the table exists, and then conditionally execute your DDL:
select owner, table_name
from   dba_tables
where  owner = <<schema that will own the table>>
and    table_name = <<name of the table>>
If that returns no rows you can then execute your DDL.
Of course, you can also just execute your DDL statement and catch the ORA-00955 name is already used by an existing object error in C++.
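If you would rather keep that catch on the database side than in the C++ layer, the same idea fits in one PL/SQL block; a sketch, again with a hypothetical table:
begin
    execute immediate 'create table MY_TABLE (id number primary key)';
exception
    when others then
        if sqlcode != -955 then  -- re-raise anything other than ORA-00955
            raise;
        end if;
end;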

Minimal Oracle DDL to bump DBA_OBJECTS.LAST_DDL_TIME

I'm using an external tool that scans tables in my database. It uses dba_objects.last_ddl_time to determine which tables have been scanned. Obviously, this strategy does not work if the table data is modified in between scans so sometimes I have to help it...
I need a way to "bump" the Last DDL time without actually changing anything.
I'm looking for the simplest possible instant DDL statement that can be executed on any table, knowing just the table name.
I have sysdba privileges.
Edit:
For example, I can use comment on table xxx is 'Boom'; but then I lose the original comment. I know how to fix this, but then it is no longer a small, easy statement I can quickly type in SQL*Plus.
Changing LOGGING/NOLOGGING is pretty fast (though not instant).
If you set the LOGGING attribute back to itself, it will bump LAST_DDL_TIME without making any real change to the table. The example below tries to touch every table except those in SYS and other system schemas (presumably you'd want more restrictions here):
BEGIN
    FOR TABLE_POINTER IN (SELECT OWNER,
                                 TABLE_NAME,
                                 DECODE(LOGGING, 'YES', 'LOGGING', 'NOLOGGING') DO_LOGGING
                          FROM   DBA_TABLES
                          WHERE  OWNER NOT IN ('SYSTEM', 'SYS', 'SYSBACKUP', 'MDSYS')
                          -- etc. other restrictions here
                         )
    LOOP
        EXECUTE IMMEDIATE UTL_LMS.FORMAT_MESSAGE('ALTER TABLE %s.%s %s',
                                                 TABLE_POINTER.OWNER,
                                                 TABLE_POINTER.TABLE_NAME,
                                                 TABLE_POINTER.DO_LOGGING);
    END LOOP;
END;
/
EDIT: The above wouldn't work with temporary tables. An alternative, such as setting PCTFREE to itself or another suitable attribute, may be preferable. You may also need to handle IOTs, partitioned tables, etc. differently from the rest of the tables.
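For a single table whose name you know, a one-off version of the same trick is a single statement. A sketch with hypothetical names; check DBA_TABLES.LOGGING first so you re-apply the value the table already has:
select logging from dba_tables
where  owner = 'SOME_SCHEMA' and table_name = 'SOME_TABLE';

-- if LOGGING = 'YES', re-apply it; this bumps LAST_DDL_TIME and changes nothing
alter table some_schema.some_table logging;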

Is there a way to make Visual Studio 2010 database projects use DROP and CREATE instead of ALTER for DML

When building a deploy script for a Visual Studio 2010 SQL database project, is there any way to instruct the process to use DROP and CREATE for stored procedures and other DML instead of always ALTERing them?
As an example, if I change a stored procedure, the deploy script will generate something like this...
ALTER PROCEDURE MyProc
AS
SELECT yadda...
I want the deploy script to create something like this instead
IF OBJECT_ID('dbo.MyProc', 'P') IS NOT NULL
    DROP PROCEDURE dbo.MyProc;
GO
CREATE PROCEDURE dbo.MyProc
AS
SELECT yadda....
It would make version controlled upgrade scripts a bit easier to manage, and deployed changes would perform better. Also if this is not possible, a way to at least issue a RECOMPILE with the ALTER would help some.
This question asks something that seems similar, but I do not want this behavior for tables, just DML.
I'm not familiar enough with database projects to give an answer about whether it's possible to do a DROP and CREATE. However, in general I find that CREATE and ALTER is better than DROP and CREATE.
By CREATE and ALTER I mean something like:
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.ROUTINES
               WHERE ROUTINE_NAME = 'MyProc'
               AND ROUTINE_TYPE = 'PROCEDURE')
BEGIN
    -- CREATE PROC has to be the first statement in a batch, so it
    -- cannot appear within a conditional block. To get around
    -- this, make the statement a string and use sp_ExecuteSql.
    DECLARE @DummyCreateText NVARCHAR(100);
    SET @DummyCreateText = 'CREATE PROC dbo.MyProc AS SELECT 0;';
    EXEC sp_ExecuteSql @DummyCreateText;
END;
GO

ALTER PROCEDURE dbo.MyProc
AS
SELECT yadda...
The advantage of CREATE and ALTER over DROP and CREATE is that the stored proc is only created once. Once created it is never dropped so there is no chance of permissions getting dropped and not recreated.
In a perfect world the permissions on the stored proc would be applied via a database role, so it would be easy to reapply them after dropping and recreating the stored proc. In reality, however, I often find that after a few years other applications may start using the same stored proc, or well-meaning DBAs may apply new permissions for some reason. So I've found that DROP and CREATE tends to cause applications to break after a few years (and it's always worse when it's someone else's application that you know nothing about). CREATE and ALTER avoids these problems.
By the way, the dummy create statement, 'CREATE PROC dbo.MyProc AS SELECT 0', works with any stored procedure. If the real stored procedure is going to have parameters, or return a recordset with multiple columns, those can all be specified in the ALTER PROC statement. The CREATE PROC statement just has to create the simplest stored procedure possible. (Of course, the name of the stored proc in the CREATE PROC statement will need to match the name of your stored proc.)

Oracle sample data problems

So, I have this Java-based data transformation / masking tool, which I wanted to test out on Oracle 10g. The good part about Oracle 10g is that you get a load of sample schemas, with half a million records in some. The schemas are SH, OE, HR, IX and so on. So, I installed 10g and found that the installation scripts are under ORACLE_HOME/demo/scripts.
I customized these scripts a bit to run in batch mode. That solves one half of my requirement - creating the source data for testing my data transformation software.
The second half of the requirement is that I create the same schemas under different names (TR_HR, TR_OE and so on...) without any data. These schemas would represent my target schemas. So, in short, my software would pick up data from a table in one schema and load it into the same table in a different schema.
Now, I have two issues in creating my target schema and emptying it.
1. I would like this to run as a batch job. But in the Oracle scripts you get, the sample schema names are not configurable. So I tried creating a script, replacing OE with TR_OE, HR with TR_HR and so on. However, this approach is irritating because the sample schemas are complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types and a lot of other odd objects.
2. I would like the target schemas (TR_HR, TR_OE, ...) to be empty. But some of the schemas have circular references, which prevent me from deleting the data. The only workaround seems to be removing certain foreign keys, deleting the data and then adding the constraints back.
Is there any easy way to do all this, without all this fuss? I do need a complicated data set for my testing (complicated as in tables with triggers and multiple hierarchies... for instance, a child table that has children up to 5 levels deep, a parent table that refers to an IOT table, and an IOT table that refers to a non-IOT table). The sample schemas are just about perfect from a data-set perspective. The only challenge I see is in automating the whole process of loading up the source schemas, then creating the target schemas and emptying them. I'd appreciate your help and suggestions.
UPDATE
The main script that you are required to run when manually installing the Oracle sample schemas is mkplug.sql. Here is the line that loads the schemas from a dmp file:
host imp "'sys/&&password_sys AS SYSDBA'" transport_tablespace=y file=&imp_file log=&imp_logfile datafiles='&datafile' tablespaces=EXAMPLE tts_owners=hr,oe,pm,ix,sh
Well, I tried modifying this line (after patching up path-related issues in mkplug.sql and all the other SQL files) to this:
host imp "'sys/&&password_sys AS SYSDBA'" rows=n transport_tablespace=y file=&imp_file log=&imp_logfile datafiles='&datafile' tablespaces=EXAMPLE tts_owners=hr,oe,pm,ix,sh
And... it did NOT help. The schemas got created with row data, despite the rows=n attribute :(
Since you're already familiar with exp/imp (or expdp/impdp) from the Oracle scripts that use the .dmp file, why not just:
- Create the empty TR_xxx schemas
- Populate each TR_xxx schema from the corresponding xxx .dmp file with the FROMUSER/TOUSER options and ROWS=N (similar options exist for expdp/impdp)
[Edit after reading your comment about the transportable tablespaces]
I didn't know that the Oracle scripts were using transportable tablespaces and that multiple schemas were being imported from a single file. This is probably the most straightforward way to create your new empty TR schemas:
- Start with the standard, populated database built with the Oracle scripts
- Create no-data export files on a schema-by-schema basis (OE shown) by:
  exp sys/&&password_sys AS SYSDBA file=oe_nodata.dmp log=oe_nodata_exp.log owner=OE rows=N grants=N
  (You should only have to do this once, and the dmp file can be reused)
Now, your script should:
- Drop any TR_ users with the CASCADE option
- Re-create the TR_ users
- Populate the schema objects (OE shown) by:
  host imp "'sys/&&password_sys AS SYSDBA'" file=oe_nodata.dmp log=tr_oe_imp.log fromuser=OE touser=TR_OE
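If you use Data Pump instead of classic exp/imp (note the two dump-file formats are not interchangeable), the analogous flags would be roughly the following; the directory object and file names here are assumptions:
expdp system/&&password_sys schemas=OE content=metadata_only directory=DATA_PUMP_DIR dumpfile=oe_nodata.dmp logfile=oe_nodata_exp.log
impdp system/&&password_sys remap_schema=OE:TR_OE directory=DATA_PUMP_DIR dumpfile=oe_nodata.dmp logfile=tr_oe_imp.log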
Here is an anonymous block which - for a given schema - disables triggers and foreign keys, truncates all the tables and then re-enables the triggers and foreign keys. It uses truncate for speed, but obviously this means no rollback: so be careful which schema name you supply! It's easy enough to convert the truncate call into a delete from statement if you prefer.
The script is a fine example of cut'n'paste programming, and would no doubt benefit from some refactoring to remove the repetition (one possible shape is sketched after the block).
begin
    << dis_triggers >>
    for trgs in ( select owner, trigger_name
                  from   all_triggers
                  where  table_owner = '&&schema_name' )
    loop
        execute immediate 'alter trigger '||trgs.owner||'.'||trgs.trigger_name
                        ||' disable';
    end loop dis_triggers;

    << dis_fkeys >>
    for fkeys in ( select owner, table_name, constraint_name
                   from   all_constraints
                   where  owner = '&&schema_name'
                   and    constraint_type = 'R' )
    loop
        execute immediate 'alter table '||fkeys.owner||'.'||fkeys.table_name
                        ||' disable constraint '||fkeys.constraint_name;
    end loop dis_fkeys;

    << zap_tables >>
    for tabs in ( select owner, table_name
                  from   all_tables
                  where  owner = '&&schema_name' )
    loop
        execute immediate 'truncate table '||tabs.owner||'.'||tabs.table_name
                        ||' reuse storage';
    end loop zap_tables;

    << en_fkeys >>
    for fkeys in ( select owner, table_name, constraint_name
                   from   all_constraints
                   where  owner = '&&schema_name'
                   and    constraint_type = 'R' )
    loop
        execute immediate 'alter table '||fkeys.owner||'.'||fkeys.table_name
                        ||' enable constraint '||fkeys.constraint_name;
    end loop en_fkeys;

    << en_triggers >>
    for trgs in ( select owner, trigger_name
                  from   all_triggers
                  where  table_owner = '&&schema_name' )
    loop
        execute immediate 'alter trigger '||trgs.owner||'.'||trgs.trigger_name
                        ||' enable';
    end loop en_triggers;
end;
/
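As noted above, the repetition invites refactoring. One possible shape, behaviourally the same as the block above (nothing new is assumed beyond the same &&schema_name substitution variable):
declare
    -- disable/enable all triggers on the schema's tables
    procedure set_triggers (p_action in varchar2) is
    begin
        for t in ( select owner, trigger_name
                   from   all_triggers
                   where  table_owner = '&&schema_name' )
        loop
            execute immediate 'alter trigger '||t.owner||'.'||t.trigger_name
                            ||' '||p_action;
        end loop;
    end set_triggers;

    -- disable/enable all foreign key (referential) constraints
    procedure set_fkeys (p_action in varchar2) is
    begin
        for c in ( select owner, table_name, constraint_name
                   from   all_constraints
                   where  owner = '&&schema_name'
                   and    constraint_type = 'R' )
        loop
            execute immediate 'alter table '||c.owner||'.'||c.table_name
                            ||' '||p_action||' constraint '||c.constraint_name;
        end loop;
    end set_fkeys;
begin
    set_triggers('disable');
    set_fkeys('disable');
    for t in ( select owner, table_name
               from   all_tables
               where  owner = '&&schema_name' )
    loop
        execute immediate 'truncate table '||t.owner||'.'||t.table_name
                        ||' reuse storage';
    end loop;
    set_fkeys('enable');
    set_triggers('enable');
end;
/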

Explain Plan for Query in a Stored Procedure

I have a stored procedure that consists of a single select query used to insert into another table based on some minor math that is done to the arguments in the procedure. Can I generate the plan used for this query by referencing the procedure somehow, or do I have to copy and paste the query and create bind variables for the input parameters?
Use SQL Trace and TKPROF. For example, open SQL*Plus and then issue the following:
alter session set tracefile_identifier = 'something-unique';
alter session set sql_trace = true;
alter session set events '10046 trace name context forever, level 8';

select 'right-before-my-sp' from dual;

exec your_stored_procedure

alter session set sql_trace = false;
Once this has been done, go look in your database's UDUMP directory for a .trc file with "something-unique" in the filename. Format this trace file with TKPROF, then open the formatted file and search for the string "right-before-my-sp". The SQL issued by your stored procedure should appear shortly after this section, and immediately under each SQL statement will be its execution plan.
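The formatting step is a one-liner from the operating system shell; a sketch with hypothetical file names (sort=exeela orders statements by elapsed execution time):
tkprof mydb_ora_12345_something-unique.trc something-unique.txt sys=no sort=exeela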
Edit: For the purposes of full disclosure, I should thank all those who gave me answers on this thread last week that helped me learn how to do this.
From what I understand, this was done on purpose. The idea is that individual queries within the procedure are considered separately by the optimizer, so EXPLAIN PLAN doesn't make sense against a stored procedure, which could contain multiple queries/statements.
The current answer is no: you can't run it against a procedure; you must run it against the individual statements themselves. Tricky when you have variables and calculations, but that's the way it is.
Many tools, such as Toad or SQL Developer, will prompt you for the bind variable values when you execute an explain plan. You would have to do so manually in SQL*Plus or other tools.
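In SQL*Plus that might look like the sketch below, with hypothetical table and parameter names. EXPLAIN PLAN only parses the statement, so the bind variable doesn't even need a value:
explain plan for
    insert into target_tab
    select * from source_tab where id = :p_id;

select * from table(dbms_xplan.display);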
You could also turn on SQL tracing and execute the stored procedure, then retrieve the explain plan from the trace file.
Be careful that you do not just retrieve the explain plan for the SELECT statement. The presence of the INSERT clause can change the optimizer goal from FIRST_ROWS to ALL_ROWS.
