Select from a table which might not exist - Oracle

We have an issue that needs a bad hack to get around. Let me give you some context:
We have an app that overwrites a customer's configuration settings when it is uninstalled and reinstalled. It gets reinstalled with default values, overriding any settings the customer had put in.
Management's solution is to create two scripts, one for each step:
Create a temporary table and copy the configuration settings into it before uninstalling the app.
Once the app is re-installed, copy over the values from the temporary table back into the original table to retain their settings.
I'm not very fond of their solution, but I have to go with it.
I have step 1 down, but I'm having trouble handling the case where the second script (step 2) runs without the first script (step 1) having been run beforehand.
In essence, the temporary table would not be there when the second script compiles if someone else in a different department forgets to run the first one.
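For context, the first script is essentially a CREATE TABLE ... AS SELECT of the configuration table into the temporary table; a rough sketch (the configuration table name here is only a placeholder, the real one isn't shown):
CREATE TABLE temp_table AS
  SELECT *
  FROM   some_config_table;  -- placeholder for the real configuration table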
This is the code I'm currently using for the second script.
DECLARE
  lvnTableExists     NUMBER(1);
  lvbTempTableCopied BOOLEAN;
  lvsTempTable       VARCHAR2(21) := 'TEMP_TABLE';
BEGIN
  -- CalcTypVarValue Table Copy
  SELECT COUNT(*)
    INTO lvnTableExists
    FROM ALL_TABLES x
   WHERE x.Table_Name = lvsTempTable;

  IF lvnTableExists = 1 THEN
    FOR CalcRow IN (SELECT * FROM Temp_Table) LOOP -- Temp_Table will not exist if the first script didn't run, causing a compile error
      UPDATE SomeOtherTable c
         SET c.foo           = CalcRow.foo,
             c.bar           = CalcRow.bar,
             c.DateLastMaint = SYSDATE
       WHERE c.bob  = CalcRow.bob
         AND c.bill = CalcRow.bill;
    END LOOP;
    lvbTempTableCopied := TRUE;
  ELSE
    lvbTempTableCopied := FALSE;
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    ...
    ...
My problem is that if Temp_Table doesn't exist at all, then I'll get a compile-time error, so the script won't run at all. I need it to run so that, based on lvbTempTableCopied, I can decide whether to do something else when the table doesn't exist.
I've heard of bypassing it with something like FOR CalcRow IN (EXECUTE IMMEDIATE 'SELECT * FROM ' || lvsTempTable), but I can't use it within a FOR IN LOOP like that.
How would I use EXECUTE IMMEDIATE to bypass the compile time error?

You can do it dynamically using a REF CURSOR; see the sample code below:
DECLARE
  TYPE cur_typ IS REF CURSOR;
  c              cur_typ;
  v_table_exists VARCHAR2(1);
  TYPE temp1_rec IS RECORD (col1 VARCHAR2(100), col2 VARCHAR2(100));
  v_temp temp1_rec;
BEGIN
  SELECT 'Y'
    INTO v_table_exists
    FROM all_tables
   WHERE table_name = 'TEMP1';
  -- dynamic query with bind parameters
  OPEN c FOR 'SELECT col1, col2 FROM temp1 WHERE :param1 = :param2' USING 'PARAM1', 'PARAM1';
  LOOP
    FETCH c INTO v_temp;
    EXIT WHEN c%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(v_temp.col1);
  END LOOP;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    NULL;
END;
/
CREATE TABLE temp1
  (col1 VARCHAR2(100),
   col2 VARCHAR2(100));

INSERT INTO temp1
VALUES ('123123123asdfasdfsfa', 'JHASDKLFJLASDFLAS');
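Applied to the question's second script, the same idea might look roughly like this (table and column names are taken from the snippet above and would need adjusting to the real schema):
DECLARE
  TYPE cur_typ IS REF CURSOR;
  c              cur_typ;
  lvnTableExists NUMBER(1);
  lvsTempTable   VARCHAR2(21) := 'TEMP_TABLE';
  -- Record mirroring the columns needed from the temp table (names taken from the question)
  TYPE temp_rec IS RECORD (foo  SomeOtherTable.foo%TYPE,
                           bar  SomeOtherTable.bar%TYPE,
                           bob  SomeOtherTable.bob%TYPE,
                           bill SomeOtherTable.bill%TYPE);
  r temp_rec;
BEGIN
  SELECT COUNT(*)
    INTO lvnTableExists
    FROM all_tables
   WHERE table_name = lvsTempTable;

  IF lvnTableExists = 1 THEN
    -- The table name appears only inside a string, so the block compiles
    -- even when TEMP_TABLE does not exist.
    OPEN c FOR 'SELECT foo, bar, bob, bill FROM ' || lvsTempTable;
    LOOP
      FETCH c INTO r;
      EXIT WHEN c%NOTFOUND;
      UPDATE SomeOtherTable t
         SET t.foo           = r.foo,
             t.bar           = r.bar,
             t.DateLastMaint = SYSDATE
       WHERE t.bob  = r.bob
         AND t.bill = r.bill;
    END LOOP;
    CLOSE c;
  END IF;
END;
/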

Related

Create insert record dynamically by changing pk of existing record for passed in table

I want to pass a table name and schema into a procedure, and have it generate insert, update and delete statements for the particular table. This is part of an automated testing solution (in a development environment) in which I need to test some change data capture. I want to make this dynamic as it is going to need to be done for lots of different tables over a long period of time, and I need to call it via a REST request through ORDS, so I don't want to have to make an endpoint for every table.
Update and delete are fairly easy, however I am struggling with the insert statement. Some of the tables being passed in have hundreds of columns with various constraints, fks etc. so I think it makes sense to just manipulate an existing record by changing only the primary key. I need to be able to modify the primary key to a new value known to me beforehand (e.g. '-1').
Ideally I would create a dynamic rowtype, and select into where rownum = 1, then loop round the primary keys found from all_constraints, and update the rowtype.pk with my new value, before inserting this into the table. Essentially the same as this but without knowing the table in advance.
e.g. rough idea
PROCEDURE manipulate_records(p_owner in varchar2, p_table in varchar2)
IS
  cursor c_pk is
    select column_name
    from all_cons_columns
    where owner = p_owner
    and constraint_name in (select constraint_name
                            from all_constraints
                            where table_name = p_table
                            and constraint_type = 'P');

  l_row tbl_passed_in%ROWTYPE; -- (I know this isn't possible, but ideally)
BEGIN
  -- dynamic sql or ref cursor to collect a record
  select * into l_row from tbl_passed_in where rownum = 1;

  -- now loop through the pks and reassign their values to my known value
  for i in c_pk loop
    ...if matches then reassign;
    ...
  end loop;

  -- now insert the record into the table passed in
END manipulate_records;
I have searched around but haven't found any examples which fit this exact use case, where an unknown column needs to be modified and the row inserted into a table.
Depending on how complex your procedure is, you might be able to store it as a template in a CLOB. Then pull it in, replace the table and owner, and compile it.
DECLARE
  prc_Template VARCHAR2(4000);
  vc_Owner     VARCHAR2(8);
  vc_Table     VARCHAR2(8);
BEGIN
  vc_Table := 'DUAL';
  vc_Owner := 'SYS';
  -- Pull the code into prc_Template from the CLOB; this literal just demonstrates the concept
  prc_Template := 'CREATE OR REPLACE PROCEDURE xyz AS r_Dual <Owner>.<Table>%ROWTYPE; BEGIN NULL; END;';
  prc_Template := REPLACE(prc_Template, '<Owner>', vc_Owner);
  prc_Template := REPLACE(prc_Template, '<Table>', vc_Table);
  -- Create the procedure
  EXECUTE IMMEDIATE prc_Template;
END;
Then you have the appropriate ROWTYPE available:
CREATE OR REPLACE PROCEDURE xyz AS r_Dual SYS.DUAL%ROWTYPE; BEGIN NULL; END;
But you can't create the procedure and run it in the same code block.
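The generated procedure can instead be run from a second, separate block - a minimal sketch, assuming the xyz example above has already been compiled by the previous block:
BEGIN
  -- xyz exists by now, because it was created in a separate, earlier step
  xyz;
END;
/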

What is the solution for the errors in this PL/SQL code? [duplicate]

The database schemas (source and target) are very large (each has over 350 tables). I have been given the task of somehow merging these two schemas into one. The data itself (what's in the tables) has to be migrated. I have to be careful that there are no duplicate primary key entries before or while merging the schemas. Has anybody done this already and could provide their solution, or could anyone help me find an approach to the task? My approaches have all failed and my advisor just tells me to get help online :/
As for my approach:
I have tried using the all_constraints view to get all PKs from my DB.
SELECT cols.table_name, cols.column_name, cols.position, cons.status, cons.owner
FROM all_constraints cons, all_cons_columns cols
WHERE cols.owner = 'DB'
AND cons.constraint_type = 'P'
AND cons.constraint_name = cols.constraint_name
AND cons.owner = cols.owner
ORDER BY cols.table_name, cols.position;
I also "know" that there has to be a sequence for the primary keys to add values to it:
CREATE SEQUENCE seq_pk_addition
MINVALUE 1
MAXVALUE 99999999999999999999
START WITH 1
INCREMENT BY 1
CACHE 20;
Because I am a noob when it comes to PL/SQL (or SQL in general),
how/what should I do next? :/
Here is a link for an ERD of the database: https://ufile.io/9tdoj
virus scan: https://www.virustotal.com/#/file/dbe5f418115e50313a2268fb33a924cc8cb57a43bc85b3bbf5f6a571b184627e/detection
As promised in my comment, I have prepared dynamic code which you can try to get the data merged between the source and target tables. The logic is as below:
Step 1: Get all the table names from the SOURCE schema. In the query below you need to replace the schema (owner) name accordingly. For testing purposes I took only one table, so when you run it, remove the table name filtering clause.
Step 2: Get the constrained column names for the table. These are used to prepare the ON clause which is later used in the MERGE statement.
Step 3: Get the non-constrained column names for the table. These are used in the UPDATE clause of the MERGE.
Step 4: Prepare the insert list for when the data does not match the ON condition of the MERGE statement.
Read my inline comments to understand each step.
CREATE OR REPLACE PROCEDURE COPY_TABLE
AS
  TYPE OBJ_NME IS TABLE OF VARCHAR2(100) INDEX BY PLS_INTEGER;
  -- To hold the table names
  v_obj_nm      OBJ_NME;
  -- To hold the columns of a table
  v_col_nm      OBJ_NME;
  v_othr_col_nm OBJ_NME;
  on_clause     VARCHAR2(2000);
  upd_clause    VARCHAR2(4000);
  v_sql         VARCHAR2(4000);
  col_list2     VARCHAR2(4000);
  col_list4     VARCHAR2(4000);
  col_list6     VARCHAR2(4000);
  col_list8     VARCHAR2(4000);
BEGIN
  -- Step 1: get the source table names
  SELECT object_name
  BULK COLLECT INTO v_obj_nm
  FROM all_objects
  WHERE owner LIKE 'RU%'       -- replace RU% with your source schema name here
  AND object_type = 'TABLE'
  AND object_name = 'TEST';    -- remove this condition if you want this to run for all tables

  FOR i IN 1 .. v_obj_nm.COUNT
  LOOP
    -- Reset the clause and column-list variables for each table
    on_clause  := NULL;
    upd_clause := NULL;
    col_list2  := NULL;
    col_list4  := NULL;
    col_list6  := NULL;
    col_list8  := NULL;

    -- Step 2: columns with constraints
    SELECT column_name
    BULK COLLECT INTO v_col_nm
    FROM user_cons_columns
    WHERE table_name = v_obj_nm(i);

    -- Step 3: columns without constraints (the remaining columns of the table)
    SELECT column_name
    BULK COLLECT INTO v_othr_col_nm
    FROM (SELECT column_name
          FROM user_tab_cols
          WHERE table_name = v_obj_nm(i)
          MINUS
          SELECT column_name
          FROM user_cons_columns
          WHERE table_name = v_obj_nm(i));

    -- Prepare the UPDATE clause and the non-key column lists
    FOR l IN 1 .. v_othr_col_nm.COUNT
    LOOP
      IF l > 1 THEN
        upd_clause := upd_clause || ' and ';
        col_list2  := col_list2  || ',';
        col_list6  := col_list6  || ',';
      END IF;
      upd_clause := upd_clause || 't1.' || v_othr_col_nm(l) || ' = t2.' || v_othr_col_nm(l);
      col_list2  := col_list2  || 't1.' || v_othr_col_nm(l);
      col_list6  := col_list6  || 't2.' || v_othr_col_nm(l);
    END LOOP;
    -- UPDATE clause ends

    -- Prepare the ON clause and the key column lists
    FOR k IN 1 .. v_col_nm.COUNT
    LOOP
      IF k > 1 THEN
        on_clause := on_clause || ' and ';
        col_list4 := col_list4 || ',';
        col_list8 := col_list8 || ',';
      END IF;
      on_clause := on_clause || 't1.' || v_col_nm(k) || ' = t2.' || v_col_nm(k);
      col_list4 := col_list4 || 't1.' || v_col_nm(k);
      col_list8 := col_list8 || 't2.' || v_col_nm(k);
    END LOOP;
    -- ON clause ends

    -- Step 4: prepare and run the MERGE statement
    v_sql := 'MERGE INTO ' || v_obj_nm(i) || ' t1'               -- put the target schema name before v_obj_nm
          || ' USING (SELECT * FROM ' || v_obj_nm(i) || ') t2'   -- put the source schema name before v_obj_nm here
          || ' ON (' || on_clause || ')'
          || ' WHEN MATCHED THEN UPDATE SET ' || upd_clause
          || ' WHEN NOT MATCHED THEN INSERT (' || col_list2 || ',' || col_list4 || ')'
          || ' VALUES (' || col_list6 || ',' || col_list8 || ')';

    dbms_output.put_line(v_sql);
    EXECUTE IMMEDIATE v_sql;
  END LOOP;
END;
/
Execution:
exec COPY_TABLE
Output:
anonymous block completed
PS: I have tested this with a table with two columns, one of which has a unique key constraint. In the end, I hope you can understand my code (you being a noob, as you say) and implement something similar if the above fails to meet your requirement. The DDL of the test table is as below:
CREATE TABLE TEST
( COL2 NUMBER,
COLUMN1 VARCHAR2(20 BYTE),
CONSTRAINT TEST_UK1 UNIQUE (COLUMN1)
) ;
Oh dear! Normally, such a question would be quickly closed as "too broad", but we need to support victims of evil advisors!
As for the effort, I would estimate a week full-time for an experienced expert, plus two days of quality checking by an experienced QA engineer.
First of all, there is no way that such a complex data merge will work on the first try. That means that you'll need test copies of both schemas that can be easily rebuilt. And you'll need a place to try it out. Normally this is done with an export of both schemas and an empty dev database.
Next, you need both schemas close enough to be able to compare the data. I'd do it with an import of the export files mentioned above. If the schema names are identical, then rename one during import.
Next, I'd double-check whether the structure is really identical, with queries like
SELECT a.owner, a.table_name, b.owner, b.table_name
FROM (SELECT owner, table_name FROM all_tables WHERE owner = 'SCHEMAA') a
FULL JOIN (SELECT owner, table_name FROM all_tables WHERE owner = 'SCHEMAB') b
ON a.table_name = b.table_name
WHERE a.owner IS NULL OR b.owner IS NULL;
Next, I'd check if the primary and unique keys have overlaps:
SELECT id FROM schemaa.table1
INTERSECT
SELECT id FROM schemab.table1;
As there are 300+ tables, I'd generate those queries:
DECLARE
  stmt     VARCHAR2(30000);
  n        NUMBER;
  schema_a CONSTANT VARCHAR2(128 BYTE) := 'SCHEMAA';
  schema_b CONSTANT VARCHAR2(128 BYTE) := 'SCHEMAB';
BEGIN
  FOR c IN (SELECT owner, constraint_name, table_name,
                   (SELECT LISTAGG(column_name, ',') WITHIN GROUP (ORDER BY position)
                    FROM all_cons_columns c
                    WHERE s.owner = c.owner
                    AND s.constraint_name = c.constraint_name) AS cols
            FROM all_constraints s
            WHERE s.constraint_type IN ('P')
            AND s.owner = schema_a)
  LOOP
    dbms_output.put_line('Checking pk '||c.constraint_name||' on table '||c.table_name);
    stmt := 'SELECT count(*) FROM '||schema_a||'.'||c.table_name
         || ' JOIN '||schema_b||'.'||c.table_name
         || ' USING ('||c.cols||')';
    --dbms_output.put_line('Query '||stmt);
    EXECUTE IMMEDIATE stmt INTO n;
    dbms_output.put_line('Found '||n||' overlapping primary keys in table '||c.table_name);
  END LOOP;
END;
/
First of all, for 350 tables you would most probably need dynamic SQL.
Declare a CURSOR or a COLLECTION - a table of VARCHAR2 with all the table names.
Declare a string variable to build the dynamic SQL.
Loop through the entire list of table names and, for each table, generate a string which will be executed as SQL with the EXECUTE IMMEDIATE command.
The dynamic SQL which is built should insert the values from the source table into the target table. If the PK already exists in the target table, check the field which represents the last-updated date; if it is more recent in the source table than in the target table, perform an update in the target table, otherwise do nothing. A rough sketch of this approach follows.
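A minimal sketch of that idea, assuming a source schema SCHEMAA, a target schema SCHEMAB, a single-column primary key named ID and a LAST_UPDATED date column (all of these names are placeholders that would really have to come from the data dictionary):
DECLARE
  v_sql VARCHAR2(4000);
BEGIN
  FOR t IN (SELECT table_name
            FROM all_tables
            WHERE owner = 'SCHEMAA')  -- placeholder source schema
  LOOP
    -- MERGE keyed on the primary key; update only when the source row is newer
    v_sql := 'MERGE INTO SCHEMAB.' || t.table_name || ' tgt'
          || ' USING SCHEMAA.' || t.table_name || ' src'
          || ' ON (tgt.id = src.id)'                    -- placeholder PK column
          || ' WHEN MATCHED THEN UPDATE SET'
          || ' tgt.last_updated = src.last_updated'     -- placeholder non-key column(s)
          || ' WHERE src.last_updated > tgt.last_updated'
          || ' WHEN NOT MATCHED THEN INSERT (id, last_updated)'
          || ' VALUES (src.id, src.last_updated)';
    EXECUTE IMMEDIATE v_sql;
  END LOOP;
END;
/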

Efficient way to get updated column names on an after update trigger

I've come up with the following trigger to extract all the column names which are updated when a table row update statement is executed...
but the problem is that if there are many columns (at least 100), performance/efficiency becomes a concern.
sample trigger code:
set define off;
create or replace TRIGGER TEST_TRIGG
AFTER UPDATE ON A_AAA
FOR EACH ROW
DECLARE
  mytable     varchar2(32) := 'A_AAA';
  mycolumn    varchar2(32);
  updatedcols varchar2(3000);
  cursor s1 (mytable varchar2) is
    select column_name from user_tab_columns where table_name = mytable;
begin
  open s1(mytable);
  loop
    fetch s1 into mycolumn;
    exit when s1%NOTFOUND;
    IF UPDATING(mycolumn) THEN
      updatedcols := updatedcols || ',' || mycolumn;
    END IF;
  end loop;
  close s1;
  -- do a few things with the list of updated columns
  dbms_output.put_line('updated cols ' || updatedcols);
end;
/
Is there any alternative way to get the list?
Maybe with v$ tables (v$transaction or anything similar)?
No, UPDATING() is the best way to get the updated columns.
You can change your code to use an implicit cursor like this; it will be a little bit faster:
set define off;
create or replace TRIGGER TEST_TRIGG
AFTER UPDATE ON A_AAA
FOR EACH ROW
DECLARE
  updatedcols varchar2(3000);
begin
  for r in (select column_name from user_tab_columns where table_name = 'A_AAA')
  loop
    IF UPDATING(r.column_name) THEN
      updatedcols := updatedcols || ',' || r.column_name;
    END IF;
  end loop;
  dbms_output.put_line('updated cols ' || updatedcols);
end;
/
Faced with a similar task, we ended up writing a PL/SQL procedure which lists the columns of the table and generates the full trigger body for us, with static code referencing :new.col and :old.col. The execution of such a trigger should probably be faster (though we didn't compare).
However, the downside is that when you later add a new column to the table, it's easy to forget to update the trigger body. It can probably be managed somehow with a monitoring job or some other mechanism, but for now it works for us.
P.S. I became curious what the updating('COL') feature does, and checked it. I found out that it returns true if the column is present in the update statement, even if the value of the column didn't actually change (:old.col is equal to :new.col). This might generate unneeded history records if the table is being updated by something like the Java Hibernate library, which (by default) always specifies all columns in the update statements it generates. In such a case you might want to actually compare the values from inside the trigger body and insert the history record only when the new value differs from the old value.
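For illustration, a check like the following could go inside the trigger body; the column name and the history table are hypothetical placeholders:
-- Only record the change when the value actually differs
-- (the extra OR branches handle a NULL on either side).
IF :new.some_col <> :old.some_col
   OR (:new.some_col IS NULL AND :old.some_col IS NOT NULL)
   OR (:new.some_col IS NOT NULL AND :old.some_col IS NULL)
THEN
  INSERT INTO a_aaa_hist (col_name, old_value, new_value, changed_at)  -- hypothetical history table
  VALUES ('SOME_COL', :old.some_col, :new.some_col, SYSDATE);
END IF;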

FORALL+ EXECUTE IMMEDIATE + INSERT Into tbl SELECT

I have got stuck on the code below and am getting a syntax error - please help.
Basically I am using a collection to store few department ids and then would like to use these department ids as a filter condition while inserting data into emp table in FORALL statement.
Below is sample code:
While compiling this code I am getting an error. My requirement is to use INSERT INTO table SELECT * FROM table and I cannot avoid it, so please suggest.
create or replace Procedure abc(dblink VARCHAR2)
CURSOR dept_id is select dept_ids from dept;
TYPE nt_dept_detail IS TABLE OF VARCHAR2(25);
l_dept_array nt_dept_detail;
Begin
OPEN dept_id;
FETCH dept_id BULK COLLECT INTO l_dept_array;
IF l_dept_array.COUNT() > 0 THEN
FORALL i IN 1..l_dept_array.COUNT SAVE EXCEPTIONS
EXECUTE IMMEDIATE 'INSERT INTO stg_emp SELECT
Dept,''DEPT_10'' FROM dept_emp'||dblink||' WHERE
dept_id = '||l_dept_array(i)||'';
COMMIT;
END IF;
CLOSE dept_id;
end abc;
Why are you bothering to use cursors, arrays etc in the first place? Why can't you just do a simple insert as select?
Problems with your procedure as listed above:
You don't declare procedures like Procedure abc () - for a standalone procedure, you would do create or replace procedure abc as, or in a package: procedure abc is
You reference a variable called "dblink" that isn't declared anywhere.
You didn't put end abc; at the end of your procedure (I hope that was just a mis-c&p?)
You're effectively doing a simple insert as select, but you're way over-complicating it, plus you're making your code less performant.
You've not listed the column names that you're trying to insert into; if stg_emp has more than two columns or ends up having columns added, your code is going to fail.
Assuming your dblink name isn't known until runtime, then here's something that would do what you're after:
create Procedure abc (dblink in varchar2)
is
begin
execute immediate 'insert into stg_emp select dept, ''DEPT_10'' from dept_emp#'||dblink||
' where dept_id in (select dept_ids from dept)';
commit;
end abc;
/
If, however, you do know the dblink name, then you'd just get rid of the execute immediate and do:
create Procedure abc (dblink in varchar2)
is
begin
insert into stg_emp -- best to list the column names you're inserting into here
select dept, 'DEPT_10'
from dept_emp#dblink
where dept_id in (select dept_ids from dept);
commit;
end abc;
/
There appears to be a lot wrong with this code.
1) Why the execute immediate? Is there any explicit requirement for that? If not, then don't use it.
2) where is the dblink variable declared?
3) as Boneist already stated, why not a simple subselect in the insert statement?
INSERT INTO stg_emp SELECT
Dept,'DEPT_10' FROM dept_emp#dblink WHERE
dept_id in (select dept_ids from dept );
For one, it would make the code actually readable ;)

PL/SQL compile conditionally on existence of database object

Is it possible to have conditional compilation in Oracle, where the condition is the existence of a database object (specifically, a table or view or synonym)? I'd like to be able to do something like this:
sp_some_procedure is
$IF /*check if A exists.*/ then
/* read from and write to A as well as other A-related non-DML stuff...*/
$ELSE /*A doesn't exist yet, so avoid compiler errors*/
dbms_output.put_line('Reminder: ask DBA to create A!')
$ENDIF
end;
Yes it is. Here is a sample where the first stored procedure wants to select from XALL_TABLES, but if this view doesn't exist, it selects from DUAL instead. Because I don't have an XALL_TABLES object, the first stored procedure selects from DUAL. The second one does the same thing with the ALL_TABLES object; because ALL_TABLES exists, the second stored procedure selects from ALL_TABLES and not from DUAL.
This kind of construction is useful where the package has to be deployed on all your databases but uses tables that are not deployed everywhere ... (OK, perhaps there is a conceptual problem, but it happens).
-- conditional compilation instructions accept only static conditions (just constants)
-- passing a SQL bind variable doesn't work
-- to pass a value to a conditional compilation instruction, I repurpose the script's input (substitution) parameters
-- the next 4 lines assign a value to the first and second input parameters of the script
-- if your original script already uses input parameters, use the next free parameter ...
column param_1 new_value 1 noprint
select nvl(max(1), 0) param_1 from all_views where owner = 'SYS' and view_name = 'XALL_TABLES';
column param_2 new_value 2 noprint
select nvl(max(1), 0) param_2 from all_views where owner = 'SYS' and view_name = 'ALL_TABLES';
CREATE or replace PACKAGE my_pkg AS
  function test_xall_tables return varchar2;
  function test_all_tables return varchar2;
END my_pkg;
/
CREATE or replace PACKAGE BODY my_pkg AS

  function test_xall_tables return varchar2 is
    vch varchar2(50);
  begin
    $IF (&1 = 0) $THEN
      select 'VIEW XALL_TABLES D''ONT EXISTS' into vch from dual;
    $ELSE
      select max('VIEW XALL_TABLES EXISTS') into vch from XALL_TABLES;
    $END
    return vch;
  end test_xall_tables;

  function test_all_tables return varchar2 is
    vch varchar2(50);
  begin
    $IF (&2 = 0) $THEN
      select 'VIEW ALL_TABLES D''ONT EXISTS' into vch from dual;
    $ELSE
      select max('VIEW ALL_TABLES EXISTS') into vch from ALL_TABLES;
    $END
    return vch;
  end test_all_tables;

END my_pkg;
/
The test:
select my_pkg.test_xall_tables from dual;
gives
VIEW XALL_TABLES D'ONT EXISTS
select my_pkg.test_all_tables from dual;
gives
VIEW ALL_TABLES EXISTS
I would use EXECUTE IMMEDIATE and an EXCEPTION clause.
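A minimal sketch of that approach, using the table A and the reminder message from the question:
DECLARE
  v_dummy NUMBER;
BEGIN
  -- The reference to A lives only inside a string, so this block
  -- compiles even when A does not exist.
  EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM A WHERE ROWNUM = 1' INTO v_dummy;
  -- A exists: do the A-related work here (also via EXECUTE IMMEDIATE)
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE = -942 THEN  -- ORA-00942: table or view does not exist
      dbms_output.put_line('Reminder: ask DBA to create A!');
    ELSE
      RAISE;
    END IF;
END;
/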
Use dynamic SQL to create package constants to track which objects exist, and then use those constants in conditional compilation.
--E.g., say there are two possible tables, but only one of them exists.
--create table table1(a number);
create table table2(a number);
--Create a package with boolean constants to track the objects.
--(Another way to do this is to use ALTER SESSION SET PLSQL_CCFLAGS)
declare
  table1_exists_string varchar2(10) := 'true';
  table2_exists_string varchar2(10) := 'true';
  temp number;
begin
  begin
    execute immediate 'select max(1) from table1 where rownum <= 1' into temp;
  exception when others then
    table1_exists_string := 'false';
  end;

  begin
    execute immediate 'select max(1) from table2 where rownum <= 1' into temp;
  exception when others then
    table2_exists_string := 'false';
  end;

  execute immediate '
    create or replace package objects is
      table1_exists constant boolean := '||table1_exists_string||';
      table2_exists constant boolean := '||table2_exists_string||';
    end;
  ';
end;
/
--Look at the results in the source:
select * from user_source where name = 'OBJECTS';
--Create the object that refers to the tables.
create or replace function compile_test return varchar2 is
  v_test number;
begin
  $if objects.table1_exists $then
    select max(1) into v_test from table1;
    return 'table1 exists';
  $elsif objects.table2_exists $then
    select max(1) into v_test from table2;
    return 'table 2 exists';
  $else
    return 'neither table exists';
  $end
end;
/
--Check the dependencies - only TABLE2 is dependent.
select * from user_dependencies where name = 'COMPILE_TEST';
--Returns 'table 2 exists'.
select compile_test from dual;
Mixing dynamic SQL, dynamic PL/SQL, and conditional compilation is usually a very evil idea. But it will allow you to put all of your ugly dynamic SQL in one installation package, and maintain real dependency tracking.
This may work well in a semi-dynamic environment; for example a program that is installed with different sets of objects but does not frequently change between them.
(Also, if the whole point of this is just to replace scary error messages with friendly warnings, in my opinion that is a very bad idea. If your system is going to fail, the failure should be obvious so it can be immediately fixed. Most people ignore anything that starts with "Reminder...".)
No - that is not possible... but if you create a stored procedure referencing a non-existent DB object and try to compile it, the compilation will show errors... the stored procedure will be there but "invalid"... and the compilation errors are accessible to the DBA whenever he looks at it... so I would just go ahead and create all the needed stored procedures; if any compilation errors arise, ask the DBA (sometimes the object exists but the stored procedure needs permissions to access it...)... after the reason for the error(s) is fixed you can just recompile the stored procedure (via ALTER PROCEDURE MySchema.MyProcName COMPILE;) and all is fine...
If you don't want the code to be there, you can just DROP the stored procedure and/or replace it via CREATE OR REPLACE... with dbms_output.put_line('Reminder: ask DBA to create A!') in the body.
The only other alternative is, as Kevin points out, EXECUTE IMMEDIATE with proper EXCEPTION handling...
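For example, the compilation errors of an invalid procedure can be inspected and the procedure recompiled later (schema and procedure names are placeholders):
-- See why the procedure is invalid
SELECT line, position, text
FROM   all_errors
WHERE  owner = 'MYSCHEMA'
AND    name  = 'MYPROC'
ORDER  BY sequence;

-- Once the missing object (or grant) is in place, recompile it
ALTER PROCEDURE myschema.myproc COMPILE;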
What I would do is check the existence via all_objects, something like:
declare
  l_check_sql varchar2(4000);
  l_cnt       number;
begin
  l_check_sql := q'{
    select count(1)
    from all_objects
    where object_name = 'MY_OBJ'
    and owner = 'MY_OWNER'
  }';
  execute immediate l_check_sql into l_cnt;

  if (l_cnt > 0) then
    null;  -- do something referring to MY_OBJ
  else
    null;  -- don't refer to MY_OBJ
  end if;
end;
