Oracle Triggers not showing in DBA_SOURCE

In our application, only about 25% of the database triggers show up in DBA_SOURCE. I know I can force the others to show up if I make an actual modification (like adding and removing a space) and then recompile the trigger, but I've got about 400 triggers to modify (it's rather a big application). Just recompiling the triggers with alter trigger <triggername> compile; didn't accomplish anything.
Without the triggers being in DBA_SOURCE, we can't do text searches on the trigger code.
Is there some simpler way to accomplish this? And is there some way to prevent the problem in the future?
We're on Oracle 10.2.0.5.0.

I believe you can find the source in all_triggers. Unfortunately, the data is in a LONG column (an Oracle example of "do as I say, not as I do"). So, the easiest thing would be to create a scratch table to use, populate it with the data converted to CLOB, and then search:
CREATE TABLE tr (trigger_name VARCHAR2(32), trigger_body CLOB);
INSERT INTO tr
(SELECT trigger_name, TO_LOB(trigger_body)
FROM all_triggers
WHERE owner = 'xxx');
SELECT trigger_name
FROM tr
WHERE trigger_body LIKE '%something%';
I'm not sure why the dba_source view is only sparsely populated for triggers. It's that way on my 10.2.0.4 database as well.
EDIT:
Here is a short script you can use to recreate all your triggers, at which point they should all be in dba_source:
CREATE TABLE temp_sql (sql1 CLOB, sql2 CLOB);
INSERT INTO temp_sql (sql1, sql2) (
  SELECT 'CREATE OR REPLACE TRIGGER '||description||' '||
         CASE WHEN when_clause IS NULL THEN NULL
              ELSE 'WHEN('||when_clause||')'
         END sql1,
         TO_LOB(trigger_body) sql2
  FROM all_triggers
  WHERE table_owner = 'theowner');
DECLARE
  v_sql VARCHAR2(32760);
BEGIN
  -- note: this will fail for any trigger whose source exceeds 32760 characters
  FOR r IN (SELECT sql1||' '||sql2 s FROM temp_sql) LOOP
    v_sql := r.s;
    EXECUTE IMMEDIATE v_sql;
  END LOOP;
END;
/
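Once the script has run, a quick sanity check (same owner placeholder as above; the dictionary stores owner names in upper case) should show the trigger source in dba_source:
SELECT COUNT(DISTINCT name)
FROM dba_source
WHERE type = 'TRIGGER'
AND owner = 'THEOWNER';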

We had the same issue. It's a migration issue from older versions of Oracle.
Triggers were not included in DBA_SOURCE in an earlier version (8?, 9i?) and did not get added to DBA_SOURCE when migrating to newer versions. A recompile did not put them into DBA_SOURCE. But if you drop and recreate the triggers, they will be included in DBA_SOURCE.
So my guess is you have some old triggers and have migrated the database in place to newer versions.

Who owns the triggers?
And, of course, have you tried:
select owner, object_name
from all_objects
where object_type = 'TRIGGER'
and owner in ('schema1','schema2');

Related

How to create a table and insert in the same statement

I'm new to Oracle and I'm struggling with this:
DECLARE
  cnt NUMBER;
BEGIN
  SELECT COUNT(*) INTO cnt FROM all_tables WHERE table_name like 'Newtable';
  IF (cnt=0) THEN
    EXECUTE IMMEDIATE 'CREATE TABLE Newtable ....etc';
  END IF;
  COMMIT;
  SELECT COUNT(*) INTO cnt FROM Newtable where id='something';
  IF (cnt=0) THEN
    EXECUTE IMMEDIATE 'INSERT INTO Newtable ....etc';
  END IF;
END;
This keeps crashing and gives me "PL/SQL: ORA-00942: table or view does not exist" on the insert line. How can I avoid this? Or what am I doing wrong? I want these two statements (in reality it's a lot more, of course) in a single transaction.
It isn't the insert that is the problem, it's the select two lines before. You have three statements within the block, not two. You're selecting from the same new table that doesn't exist yet. You've avoided that in the insert by making that dynamic, but you need to do the same for the select:
EXECUTE IMMEDIATE q'[SELECT COUNT(*) FROM Newtable where id='something']'
INTO cnt;
Creating a table at runtime seems wrong though. You said 'for safety issues the table can only exist if it's filled with the correct dataset', which doesn't entirely make sense to me - even if this block is creating and populating it in one go, anything that relies on it will fail or be invalidated until this runs. If this is part of the schema creation then making it dynamic doesn't seem to add much. You also said you wanted both to happen in one transaction, but the DDL will do an implicit commit, you can't roll back DDL, and your manual commit will start a new transaction for the insert(s) anyway. Perhaps you mean the inserts shouldn't happen if the table creation fails - but they would fail anyway, whether they're in the same block or not. It seems a bit odd, anyway.
Also, using all_tables for the check could still cause this to behave oddly. If that table exists in another schema, your create will be skipped, but your select and insert might still fail as they might not be able to see, or won't look for, the other schema's version. Using user_tables or adding an owner check might be a bit safer.
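Putting those fixes together, a sketch of the corrected block might look like this (the single id column is a stand-in for the real table definition, which the question elides):
DECLARE
  cnt NUMBER;
BEGIN
  -- user_tables avoids matching a table in another schema; note that the
  -- dictionary stores unquoted identifiers in upper case
  SELECT COUNT(*) INTO cnt
  FROM user_tables
  WHERE table_name = 'NEWTABLE';

  IF cnt = 0 THEN
    EXECUTE IMMEDIATE 'CREATE TABLE Newtable (id VARCHAR2(30))'; -- implicit commit here
  END IF;

  -- the SELECT must be dynamic for the same reason as the INSERT
  EXECUTE IMMEDIATE q'[SELECT COUNT(*) FROM Newtable WHERE id = 'something']'
    INTO cnt;

  IF cnt = 0 THEN
    EXECUTE IMMEDIATE q'[INSERT INTO Newtable (id) VALUES ('something')]';
  END IF;
END;
/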
Try the following approach, i.e. put the create and the insert in two different blocks:
DECLARE
  cnt NUMBER;
BEGIN
  SELECT COUNT(*)
  INTO cnt
  FROM all_tables
  WHERE table_name = 'NEWTABLE'; -- dictionary stores unquoted names in upper case

  IF (cnt = 0) THEN
    EXECUTE IMMEDIATE 'CREATE TABLE Newtable(c1 varchar2(256))';
  END IF;
END;
/

DECLARE
  cnt2 NUMBER;
BEGIN
  SELECT COUNT(*)
  INTO cnt2
  FROM newtable
  WHERE c1 = 'jack';

  IF (cnt2 = 0) THEN
    EXECUTE IMMEDIATE 'INSERT INTO Newtable values(''jill'')';
  END IF;
END;
/
Oracle handles the execution of a block in two steps:
First it parses the block and compiles it into an internal representation (so-called "P-code").
It then runs the P-code (it may be interpreted or compiled to machine code, depending on your architecture and Oracle version).
For compiling the code, Oracle must know the names (and the schema!) of the referenced tables. Your table doesn't exist yet, hence there is no schema and the code does not compile.
Regarding your intention to create the tables in one big transaction: this will not work. Oracle always implicitly commits the current transaction before and after a DDL statement (create table, alter table, truncate table(!), etc.). So after each create table, Oracle will commit the current transaction and start a new one.
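A minimal illustration of that implicit commit (t1 and t2 are hypothetical tables):
INSERT INTO t1 VALUES (1);    -- part of the current transaction, not yet committed
CREATE TABLE t2 (x NUMBER);   -- DDL: implicitly commits the INSERT above
ROLLBACK;                     -- too late, the row in t1 is already committed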

Create trigger for copying values between tables in Oracle

I am new to SQL. I want to create a trigger for copying values between tables.
Basically, the task I want to accomplish is forwarding values from the students' message table to a specific staff_mailbox.
Here is the code:
drop trigger forward_msg_to_staff;
create or replace trigger forward_msg_to_staff
update on message
for each row
declare
message_id VARCHAR2(10);
client_id NUMBER(10);
staff_id NUMBER(5);
message_date DATE;
message_title VARCHAR2(20);
staff_mailbox VARCHAR2(255);
begin
insert into staff_mailbox(message_id, client_id, staff_id, message_date, message_title, staff_mailbox)
values(:new.message_id, :new.client_id, :new.staff_id, :sysdate, :new.message_title, :old.staff_mailbox)
end;
/
Is this code correct?
Please advise. Thanks in advance.
You're getting an error because you're missing either the BEFORE or AFTER keyword from the CREATE TRIGGER statement.
These are required, as indicated in the documentation.
Additionally:
There's no need to declare all those variables; you're not using them.
:sysdate is incorrect; you're not binding anything. You can just use sysdate instead, as you would in standard SQL or PL/SQL.
You're missing a semi-colon after the VALUES clause of the INSERT statement.
Putting this together, your trigger may look like this:
create or replace trigger forward_msg_to_staff
after update on message
for each row
begin
insert into staff_mailbox( message_id, client_id, staff_id, message_date
, message_title, staff_mailbox )
values ( :new.message_id, :new.client_id, :new.staff_id, sysdate
, :new.message_title, :old.staff_mailbox );
end forward_msg_to_staff;
/
Note that I've used the trigger name in the END as well. This is for convenience only; it makes it obvious where the trigger ends.
If you want to see what errors you're getting when creating a trigger, use show errors, as a_horse_with_no_name suggests. This displays any compilation errors, which is invaluable for tracking them down.
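For example, in SQL*Plus (my_trigger.sql here is just a stand-in for whatever script holds the CREATE TRIGGER statement):
SQL> @my_trigger.sql

Warning: Trigger created with compilation errors.

SQL> show errors trigger forward_msg_to_staff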

Why is Oracle losing data during commit?

I have a fairly standard SQL Query as follows:
TRUNCATE TABLE TABLE_NAME;
INSERT INTO TABLE_NAME
(
UPRN,
SAO_START_NUMBER,
SAO_START_SUFFIX,
SAO_END_NUMBER,
SAO_END_SUFFIX,
SAO_TEXT,
PAO_START_NUMBER,
PAO_START_SUFFIX,
PAO_END_NUMBER,
PAO_END_SUFFIX,
PAO_TEXT,
STREET_DESCRIPTOR,
TOWN_NAME,
POSTCODE,
XY_COORD,
EASTING,
NORTHING,
ADDRESS
)
SELECT
BASIC_LAND_AND_PROPERTY_UNIT.UPRN,
LAND_AND_PROPERTY_IDENTIFIER.SAO_START_NUMBER AS SAO_START_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.SAO_START_SUFFIX AS SAO_START_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.SAO_END_NUMBER AS SAO_END_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.SAO_END_SUFFIX AS SAO_END_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.SAO_TEXT AS SAO_TEXT,
LAND_AND_PROPERTY_IDENTIFIER.PAO_START_NUMBER AS PAO_START_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.PAO_START_SUFFIX AS PAO_START_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.PAO_END_NUMBER AS PAO_END_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.PAO_END_SUFFIX AS PAO_END_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.PAO_TEXT AS PAO_TEXT,
STREET_DESCRIPTOR.STREET_DESCRIPTOR AS STREET_DESCRIPTOR,
STREET_DESCRIPTOR.TOWN_NAME AS TOWN_NAME,
LAND_AND_PROPERTY_IDENTIFIER.POSTCODE AS POSTCODE,
BASIC_LAND_AND_PROPERTY_UNIT.GEOMETRY AS XY_COORD,
BASIC_LAND_AND_PROPERTY_UNIT.X_COORDINATE AS EASTING,
BASIC_LAND_AND_PROPERTY_UNIT.Y_COORDINATE AS NORTHING,
decode(SAO_START_NUMBER,null,null,SAO_START_NUMBER||SAO_START_SUFFIX||' ')
||decode(SAO_END_NUMBER,null,null,SAO_END_NUMBER||SAO_END_SUFFIX||' ')
||decode(SAO_TEXT,null,null,SAO_TEXT||' ')
||decode(PAO_START_NUMBER,null,null,PAO_START_NUMBER||PAO_START_SUFFIX||' ')
||decode(PAO_END_NUMBER,null,null,PAO_END_NUMBER||PAO_END_SUFFIX||' ')
||decode(PAO_TEXT,null,null,'STREET RECORD',null,PAO_TEXT||' ')
||decode(STREET_DESCRIPTOR,null,null,STREET_DESCRIPTOR||' ')
||decode(POST_TOWN,null,null,POST_TOWN||' ')
||Decode(Postcode,Null,Null,Postcode) As Address
From (Land_And_Property_Identifier
Inner Join Basic_Land_And_Property_Unit
On Land_And_Property_Identifier.Uprn = Basic_Land_And_Property_Unit.Uprn)
Inner Join Street_Descriptor
On Land_And_Property_Identifier.Usrn = Street_Descriptor.Usrn
Where Land_And_Property_Identifier.Postally_Addressable='Y';
If I run this query in SQL Developer, it runs fine with 1.8 million features inserted (select count(*) from TABLE_NAME within the session confirms this).
But when I run the commit, the data disappears! select count(*) from TABLE_NAME now returns 0 results.
We've done a number of things to try and see what's going on:
During the truncate, tablespace is freed up, and during the insert it's filled again. There is no change during the commit. This implies the data is in the database.
If I do the exact same query but with "and rownum < 100" appended to the end, the commit works. Same with 1000.
I found this question ("oracle commit kills") and had our DBA try the "SQL Trace". This produced a >4GB file which, when parsed with TKPROF, produced a 120-page report, but we don't know how to read it and there's nothing obviously wrong in there.
Our error logs have nothing in them. And obviously no error during the commit itself.
There's a trigger/sequence which does increment by 1.8 million during the process.
I've repeated this about 4 times now, but the result is always the same.
So my question is simple - what's happening to the data during the commit? How can we find out? Thanks.
Note: This has run fine in the past so I don't believe there's anything wrong with the SQL per se.
Edit: Issue resolved by recreating the table from scratch. Now the insert only takes 500 seconds, compared to the previous 2000. And committing is instantaneous; when it was broken, the commit took 4000 seconds!
I still have no idea why it happened though.
For those asking, the Create Table syntax:
CREATE TABLE TABLE_NAME
(
ADDRESS VARCHAR2(4000),
UPRN NUMBER(12),
SAO_START_NUMBER NUMBER(4),
SAO_START_SUFFIX VARCHAR2(1),
SAO_END_NUMBER NUMBER(4),
SAO_END_SUFFIX VARCHAR2(1),
SAO_TEXT VARCHAR2(90),
PAO_START_NUMBER NUMBER(4),
PAO_START_SUFFIX VARCHAR2(1),
PAO_END_NUMBER NUMBER(4),
PAO_END_SUFFIX VARCHAR2(1),
PAO_TEXT VARCHAR2(90),
STREET_DESCRIPTOR VARCHAR2(100),
TOWN_NAME VARCHAR2(30),
POSTCODE VARCHAR2(8),
XY_COORD MDSYS.SDO_GEOMETRY,
EASTING NUMBER(7),
NORTHING NUMBER(7)
);

CREATE INDEX TABLE_NAME_ADD_IDX ON TABLE_NAME (ADDRESS);
Do you still lose the data if you wrap the transaction in an anonymous block?
My guess is that you are opening two SQL windows in SQL Developer, which means two separate sessions: running SQL code in window 1 and doing commit in window 2 will not commit the changes made in window 1.
Truncate table does an implicit commit. So the table will be empty until insert + commit finishes.
begin
  execute immediate 'truncate table table_name reuse storage'; -- use "reuse" if you know the data will be of similar size
  -- implicit commit has occurred and the table is empty for all sessions
  insert into table_name (lots)
  select lots from table2;
  commit;
end;
/
You should use truncate with reuse storage, so that the database doesn't go and free all the blocks just to acquire the same number of blocks again during the insert.
If you want/need to have the data available at all times a better (but longer) method is
begin
  savepoint letsgo;
  delete from table_name;
  insert into table_name (lots)
  select lots from table2;
  commit;
exception
  when others then
    rollback to letsgo;
end;
/
Perhaps you had a trigger which you didn't notice. Can you check Oracle's recyclebin, which might be storing the history of your dropped table and trigger?
Select * from recyclebin;
Reference: http://www.oraclebin.com/2012/12/recyclebinflashback.html

DDL statements in PL/SQL?

I am trying the code below to create a table in PL/SQL:
DECLARE
  V_NAME VARCHAR2(20);
BEGIN
  EXECUTE IMMEDIATE 'CREATE TABLE TEMP(NAME VARCHAR(20))';
  EXECUTE IMMEDIATE 'INSERT INTO TEMP VALUES(''XYZ'')';
  SELECT NAME INTO V_NAME FROM TEMP;
END;
/
The SELECT statement fails with this error:
PL/SQL: ORA-00942: table or view does not exist
Is it possible to CREATE, INSERT and SELECT all in a single PL/SQL Block one after other?
I assume you're doing something like the following:
declare
  v_temp varchar2(20);
begin
  execute immediate 'create table temp(name varchar(20))';
  execute immediate 'insert into temp values(''XYZ'')';
  select name into v_temp from temp; -- fails at compile time: TEMP does not exist yet
end;
At compile time the table TEMP does not exist; it hasn't been created yet. As it doesn't exist, you can't select from it, so you also have to do the SELECT dynamically. There isn't actually any need for a SELECT in this particular situation, though: you can use the RETURNING INTO syntax.
declare
  v_temp varchar2(20);
begin
  execute immediate 'create table temp(name varchar2(20))';
  execute immediate 'insert into temp
                     values(''XYZ'')
                     returning name into :1'
    returning into v_temp;
end;
/
However, needing to dynamically create tables is normally an indication of a badly designed schema. It shouldn't really be necessary.
I can recommend René Nyffenegger's post "Why is dynamic SQL bad?" for reasons why you should avoid dynamic SQL, if at all possible, from a performance standpoint. Please also be aware that you are much more open to SQL injection and should use bind variables and DBMS_ASSERT to help guard against it.
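For instance, when an object name arrives from outside the program, DBMS_ASSERT can reject anything that is not a plain identifier (a sketch; the variable and table name are hypothetical):
DECLARE
  v_table VARCHAR2(30) := 'TEMP'; -- imagine this arrives as user input
BEGIN
  -- SIMPLE_SQL_NAME raises ORA-44003 if v_table is not a valid simple SQL name
  EXECUTE IMMEDIATE 'CREATE TABLE '
                    || DBMS_ASSERT.SIMPLE_SQL_NAME(v_table)
                    || ' (name VARCHAR2(20))';
END;
/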
If you run the program multiple times you will get an error, even after modifying it to run the SELECT statement as dynamic SQL or to use a RETURNING INTO clause.
That's because the first run creates the table without any issue, but on the next run the table already exists, and since you don't have a DROP statement it fails with a "table already exists" error.
So my suggestion is: before creating a table in a PL/SQL program, always check whether a table with the same name already exists. You can do this check using the data dictionary views / system tables that store the metadata, depending on your database type.
For example, in Oracle you can use the following views to decide whether a table needs to be created: DBA_TABLES, ALL_TABLES, USER_TABLES.
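A minimal sketch of that check against USER_TABLES (remember the dictionary stores unquoted identifiers in upper case):
DECLARE
  cnt NUMBER;
BEGIN
  SELECT COUNT(*) INTO cnt
  FROM user_tables
  WHERE table_name = 'TEMP';

  IF cnt = 0 THEN
    EXECUTE IMMEDIATE 'CREATE TABLE temp (name VARCHAR2(20))';
  END IF;
END;
/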

CLOB vs. VARCHAR2 and are there other alternatives?

I am using DevArt's dotConnect and Entity Developer for my application. I've created the tables using the Entity-First feature.
I notice that many of the column types are set to CLOB. I only have experience with MySQL and Microsoft SQL Server, so I am not sure about using CLOB for the application. I did some reading and found out that CLOBs are for large chunks of data.
The questions are:
Is using CLOB for most fields, such as the user's gender (which should be a VARCHAR2(1)) or the full name, feasible? Converting a CLOB column to VARCHAR2 requires dropping the column and then re-creating it, and is buggy in DevArt's Entity Explorer, so I would like to avoid it if possible. Edit: I just found out that if you set a maximum length for a string field, it will automatically be a VARCHAR2.
Are there any equivalents for TINYTEXT in Oracle?
It is a very bad idea to use a CLOB datatype for a column which ought to be VARCHAR2(1). Apart from the overheads (which are actually minimal, as Oracle will treat inline CLOBs of < 4000 characters as VARCHAR2) we should always strive to use the most accurate representation of our data in the schema: it's just good practice.
This really seems like a problem with the DevArt tool, or perhaps your understanding of how to use it (no offence). There ought to be some way for you to specify the datatype of an entity's attribute and/or a way of mapping those specifications to Oracle's physical datatypes. I apologise if this seems a little vague; I'm not familiar with the product.
So, this is the basic problem:
SQL> desc t69
 Name                                      Null?    Type
 ----------------------------------------- -------- --------
 COL1                                               CLOB
SQL>
SQL> alter table t69 modify col1 varchar2(1)
2 /
alter table t69 modify col1 varchar2(1)
*
ERROR at line 1:
ORA-22859: invalid modification of columns
SQL>
We can fix it by using DDL to alter the table structure. Because the schema has many such columns it is worthwhile automating the process. This function drops the existing column and recreates it as a VARCHAR2. It offers the option to migrate data in the CLOB column to the VARCHAR2 column; you probably don't need this, but it's there for completeness. (This is not production quality code - it needs error handling, managing NOT NULL constraints, etc)
create or replace procedure clob2vc
( ptab in user_tables.table_name%type
, pcol in user_tab_columns.column_name%type
, pcol_size in number
, migrate_data in boolean := true )
is
begin
if migrate_data
then
execute immediate 'alter table '||ptab
||' add tmp_col varchar2('|| pcol_size|| ')';
execute immediate
'update '||ptab
||' set tmp_col = substr('||pcol||',1,'||pcol_size||')';
end if;
execute immediate 'alter table '||ptab
||' drop column '|| pcol;
if migrate_data
then
execute immediate 'alter table '||ptab
||' rename column tmp_col to '|| pcol;
else
execute immediate 'alter table '||ptab
||' add '||pcol||' varchar2('|| pcol_size|| ')';
end if;
end;
/
So, let's change that column...
SQL> exec clob2vc ('T69', 'COL1', 1)
PL/SQL procedure successfully completed.
SQL> desc t69
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------
 COL1                                               VARCHAR2(1)
SQL>
Calling this procedure can be automated or scripted in the usual ways.
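For example, a driver loop over the data dictionary might look like this (a sketch only; the size of 20 is an arbitrary assumption, in practice you would pick a length per column):
BEGIN
  FOR r IN (SELECT table_name, column_name
            FROM user_tab_columns
            WHERE data_type = 'CLOB') LOOP
    clob2vc(r.table_name, r.column_name, 20);
  END LOOP;
END;
/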
Using a CLOB for something like a gender column would, at a minimum, be extremely unusual. If the DDL this tool generates specifies that the LOB data should be stored inline rather than out of line, I wouldn't expect there to be any horrible performance issues. But you probably will create problems for other tools accessing the database that don't handle LOBs particularly well.
There is no equivalent in Oracle to Tinytext in MySQL. A CLOB is a CLOB.
A simpler solution is to go to Model Explorer -> Model.Store -> Tables/Views, find the necessary column and change the type of this field to VARCHAR2.
Then run the Update Database from Model wizard to persist the changes to the database.
Don't forget to set the MaxLength facet (however, the problem with it has already been fixed in the upcoming Beta build).
