Why is Oracle losing data during commit? - oracle

I have a fairly standard SQL Query as follows:
TRUNCATE TABLE TABLE_NAME;
INSERT INTO TABLE_NAME
(
UPRN,
SAO_START_NUMBER,
SAO_START_SUFFIX,
SAO_END_NUMBER,
SAO_END_SUFFIX,
SAO_TEXT,
PAO_START_NUMBER,
PAO_START_SUFFIX,
PAO_END_NUMBER,
PAO_END_SUFFIX,
PAO_TEXT,
STREET_DESCRIPTOR,
TOWN_NAME,
POSTCODE,
XY_COORD,
EASTING,
NORTHING,
ADDRESS
)
SELECT
BASIC_LAND_AND_PROPERTY_UNIT.UPRN,
LAND_AND_PROPERTY_IDENTIFIER.SAO_START_NUMBER AS SAO_START_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.SAO_START_SUFFIX AS SAO_START_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.SAO_END_NUMBER AS SAO_END_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.SAO_END_SUFFIX AS SAO_END_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.SAO_TEXT AS SAO_TEXT,
LAND_AND_PROPERTY_IDENTIFIER.PAO_START_NUMBER AS PAO_START_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.PAO_START_SUFFIX AS PAO_START_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.PAO_END_NUMBER AS PAO_END_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.PAO_END_SUFFIX AS PAO_END_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.PAO_TEXT AS PAO_TEXT,
STREET_DESCRIPTOR.STREET_DESCRIPTOR AS STREET_DESCRIPTOR,
STREET_DESCRIPTOR.TOWN_NAME AS TOWN_NAME,
LAND_AND_PROPERTY_IDENTIFIER.POSTCODE AS POSTCODE,
BASIC_LAND_AND_PROPERTY_UNIT.GEOMETRY AS XY_COORD,
BASIC_LAND_AND_PROPERTY_UNIT.X_COORDINATE AS EASTING,
BASIC_LAND_AND_PROPERTY_UNIT.Y_COORDINATE AS NORTHING,
decode(SAO_START_NUMBER,null,null,SAO_START_NUMBER||SAO_START_SUFFIX||' ')
||decode(SAO_END_NUMBER,null,null,SAO_END_NUMBER||SAO_END_SUFFIX||' ')
||decode(SAO_TEXT,null,null,SAO_TEXT||' ')
||decode(PAO_START_NUMBER,null,null,PAO_START_NUMBER||PAO_START_SUFFIX||' ')
||decode(PAO_END_NUMBER,null,null,PAO_END_NUMBER||PAO_END_SUFFIX||' ')
||decode(PAO_TEXT,null,null,'STREET RECORD',null,PAO_TEXT||' ')
||decode(STREET_DESCRIPTOR,null,null,STREET_DESCRIPTOR||' ')
||decode(POST_TOWN,null,null,POST_TOWN||' ')
||Decode(Postcode,Null,Null,Postcode) As Address
From (Land_And_Property_Identifier
Inner Join Basic_Land_And_Property_Unit
On Land_And_Property_Identifier.Uprn = Basic_Land_And_Property_Unit.Uprn)
Inner Join Street_Descriptor
On Land_And_Property_Identifier.Usrn = Street_Descriptor.Usrn
Where Land_And_Property_Identifier.Postally_Addressable='Y';
If I run this query in SQL Developer, it runs fine with 1.8 million rows inserted (a select count(*) from TABLE_NAME within the same session confirms this).
But when I run the commit, the data disappears! A select count(*) from TABLE_NAME now returns 0 rows.
We've done a number of things to try and see what's going on:
During the truncate, tablespace is freed up, and during the insert it's filled again. There is no change during the commit. This implies the data is in the database.
If I run the exact same query but with "and rownum < 100" appended to the end, the commit works. Same with 1000.
I found this question: oracle commit kills, and had our DBA try a SQL trace. This produced a >4 GB trace file which, when parsed with TKPROF, produced a 120-page report, but we don't know how to read it and there's nothing obviously wrong in there. (A rough sketch of the trace/TKPROF workflow is included after these notes.)
Our error logs have nothing in them. And obviously no error during the commit itself.
There's a trigger/sequence which increments by 1.8 million during the process.
I've repeated this about 4 times now, but the result is always the same.
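For reference, the tracing was along these lines; the file name and TKPROF options here are illustrative, not the exact commands our DBA used:
-- in the session to be traced (or have the DBA target it by SID/SERIAL#):
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
-- run the TRUNCATE / INSERT / COMMIT here, then:
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;
-- then format the raw trace file on the database server, e.g.:
-- tkprof DB_ora_12345.trc commit_report.txt sort=exeela,fchela sys=no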
So my question is simple - what's happening to the data during the commit? How can we find out? Thanks.
Note: This has run fine in the past so I don't believe there's anything wrong with the SQL per se.
Edit: Issue resolved by recreating the table from scratch. Now the insert only takes 500 seconds compared to the previous 2000, and committing is instantaneous; when it was broken the commit took 4000 seconds!
I still have no idea why it happened though.
For those asking, the Create Table syntax:
CREATE TABLE TABLE_NAME
(
ADDRESS VARCHAR2(4000),
UPRN NUMBER(12),
SAO_START_NUMBER NUMBER(4),
SAO_START_SUFFIX VARCHAR2(1),
SAO_END_NUMBER NUMBER(4),
SAO_END_SUFFIX VARCHAR2(1),
SAO_TEXT VARCHAR2(90),
PAO_START_NUMBER NUMBER(4),
PAO_START_SUFFIX VARCHAR2(1),
PAO_END_NUMBER NUMBER(4),
PAO_END_SUFFIX VARCHAR2(1),
PAO_TEXT VARCHAR2(90),
STREET_DESCRIPTOR VARCHAR2(100),
TOWN_NAME VARCHAR2(30),
POSTCODE VARCHAR2(8),
XY_COORD MDSYS.SDO_GEOMETRY,
EASTING NUMBER(7),
NORTHING NUMBER(7)
);
CREATE INDEX TABLE_NAME_ADD_IDX ON TABLE_NAME (ADDRESS);

Do you still lose the data if you wrap the transaction in an anonymous block?
My guess is that you are opening two SQL windows in SQL Developer, which means two separate sessions; i.e. running the SQL in window 1 and issuing commit; in window 2 will not commit the changes made in window 1.
Truncate table does an implicit commit. So the table will be empty until insert + commit finishes.
begin
execute immediate 'truncate table table_name reuse storage'; --use "reuse" if you know the data will be of similar size
-- implicit commit has occured and the table is empty for all sessions
insert into table_name (lots)
select lots from table2;
commit;
end;
You should use truncate with reuse storage so that the database doesn't go and free all the blocks just to acquire the same number of blocks again during the insert.
If you want/need to have the data available at all times, a better (but slower) method is:
begin
savepoint letsgo;
delete from table_name;
insert into table_name (lots)
select lots from table2;
commit;
exception
when others then
rollback to letsgo;
end;

Perhaps you had a trigger which you didn't notice. Can you check Oracle's recycle bin, which might be storing the history of your dropped table and trigger?
Select * from recyclebin;
Reference: http://www.oraclebin.com/2012/12/recyclebinflashback.html

Related

How to find the column used in the dynamic query without executing whole query

Problem Statement
I have a dynamic SQL statement which I need to store in a table, but before storing the SQL I need to validate it against the list of columns stored in another table.
Without executing the query, is it possible to find the names of the columns in the select list?
Approach 1
The only option I can think of is to use the explain plan of the query and read the metadata in the data dictionary tables, but unfortunately I am not able to find any table with such data. Please let me know if you know of such views.
Approach 2
Use the DBMS_SQL.DESCRIBE_COLUMNS package to find the column names, but I believe this will execute the whole query.
You don't need to execute the query to get the column names, you just need to parse it; e.g. as a simple example:
set serveroutput on
declare
l_statement varchar2(4000) := 'select * from employees';
l_c pls_integer;
l_col_cnt pls_integer;
l_desc_t dbms_sql.desc_tab;
begin
l_c := dbms_sql.open_cursor;
dbms_sql.parse(c=>l_c, statement=>l_statement, language_flag=>dbms_sql.native);
dbms_sql.describe_columns(c=>l_c, col_cnt=>l_col_cnt, desc_t=>l_desc_t);
for i in 1..l_col_cnt loop
dbms_output.put_line(l_desc_t(i).col_name);
end loop;
dbms_sql.close_cursor(l_c);
exception
when others then
if (dbms_sql.is_open(l_c)) then
dbms_sql.close_cursor(l_c);
end if;
raise;
end;
/
which outputs:
EMPLOYEE_ID
FIRST_NAME
LAST_NAME
EMAIL
PHONE_NUMBER
HIRE_DATE
JOB_ID
SALARY
COMMISSION_PCT
MANAGER_ID
DEPARTMENT_ID
PL/SQL procedure successfully completed.
You can do whatever validation you need on the column names inside the loop.
Bear in mind that you'll only see (and validate) the column names or aliases for column expressions, which won't necessarily reflect the data that is actually being retrieved. Someone could craft a query that pulls any data from anywhere it has permission to access, but then gives the columns/expression aliases that are considered valid.
If you're trying to restrict access to specific data then look into other mechanisms like views, virtual private database, etc.
DBMS_SQL.PARSE will not execute a SELECT statement but it will execute a DDL statement. If the string 'select * from employees' is replaced by 'drop table employees' the code will fail but the table will still get dropped.
If you're only worried about the performance of retrieving the metadata then Alex Poole's answer will work fine.
If you're worried about running the wrong statement types then you'll want to make some adjustments to Alex Poole's answer.
It is surprisingly difficult to tell whether a statement is a SELECT rather than something else. A simple condition checking that the string begins with select will work 99% of the time, but getting from 99% to 100% is a huge amount of work. Simple regular expressions cannot keep up with all the different keywords, comments, alternative quoting formats, spaces, etc.:
/*comment in front -- */ select * from dual
select * from dual
with asdf as (select * from dual) select * from asdf;
((((((select * from dual))))));
If you need 100% accuracy I recommend you use my open source PLSQL_LEXER. Once installed you can reliably test the command types like this:
select
statement_classifier.get_command_name(' /*comment*/ ((select * from dual))') test1,
statement_classifier.get_command_name('alter table asdf move compress') test2
from dual;
TEST1   TEST2
------  -----------
SELECT  ALTER TABLE
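Putting the two answers together, here is a rough sketch, assuming PLSQL_LEXER's STATEMENT_CLASSIFIER package is installed; the statement text and the error number are illustrative:
declare
l_statement varchar2(4000) := 'select * from employees';  -- illustrative statement
l_c         pls_integer;
l_col_cnt   pls_integer;
l_desc_t    dbms_sql.desc_tab;
begin
-- refuse anything that is not classified as a query before parsing,
-- because dbms_sql.parse would execute a DDL statement
if statement_classifier.get_command_name(l_statement) <> 'SELECT' then
raise_application_error(-20001, 'only SELECT statements may be stored');
end if;
l_c := dbms_sql.open_cursor;
dbms_sql.parse(c => l_c, statement => l_statement, language_flag => dbms_sql.native);
dbms_sql.describe_columns(c => l_c, col_cnt => l_col_cnt, desc_t => l_desc_t);
for i in 1 .. l_col_cnt loop
dbms_output.put_line(l_desc_t(i).col_name);  -- validate each name against your column table here
end loop;
dbms_sql.close_cursor(l_c);
exception
when others then
if l_c is not null and dbms_sql.is_open(l_c) then
dbms_sql.close_cursor(l_c);
end if;
raise;
end;
/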

Combining Multiple Statements in a Stored Procedure

I'm new to Oracle. What I'm trying to achieve I could do in SQL, but I'm having a tough time doing it in Oracle.
So within a Stored Procedure, I'm trying to truncate a table, then insert values, lastly run a SELECT statement against the table.
Here is what I have, but it doesn't work: when I run this script it completes with no error, but it seems to only execute the first (TRUNCATE) statement and that's it.
I would like it to create the stored procedure (which it does) and then show me the contents of the table from the SELECT statement.
CREATE OR REPLACE procedure MYSTOREDPROCEDURE is
BEGIN
EXECUTE IMMEDIATE 'TRUNCATE TABLE MYTABLE';
INSERT INTO MYTABLE
(COL1,
COL2,
COL3)
SELECT COL1, COL2, COL3 FROM MYOTHERTABLE;
end ;
/
SELECT * FROM MYTABLE
END MYSTOREDPROCEDURE;
For clarification, SQL is a language implemented by many RDBMS, including SQL Server, PostgreSQL, etc. If you're doing this in Oracle you are using SQL. However, most RDBMS have also added a procedural extension to SQL, such as T-SQL (SQL Server), PL/pgSQL (PostgreSQL) and PL/SQL (Oracle). In this case you're attempting to use PL/SQL.
From what you're attempting to do I assume you're used to SQL Server and temporary tables. It is less common and less necessary to use temporary tables in Oracle.
Firstly, EXECUTE IMMEDIATE 'TRUNCATE TABLE MYTABLE';. It should rarely be necessary to perform DDL in a stored procedure in Oracle; it would normally be indicative of a flaw in the data-model. In this case it appears as though you're using an actual table as a temporary table. I wouldn't do this. If you need a temporary table use a global temporary table.
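As a sketch (the column names come from the question, the data types are assumptions since MYTABLE's definition isn't shown), a global temporary table could be created once and then simply filled by the procedure:
-- rows are private to each session; PRESERVE ROWS keeps them until the session ends,
-- ON COMMIT DELETE ROWS would clear them at every commit
CREATE GLOBAL TEMPORARY TABLE MYTABLE_GTT
(
COL1 VARCHAR2(100),
COL2 VARCHAR2(100),
COL3 VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;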
Secondly, SELECT * FROM MYTABLE. You can't do this in PL/SQL. If you need to select some data you have to use SELECT <columns> INTO <variables> FROM ...; and even then it won't show you the contents of the table, it only populates the variables.
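For instance, a minimal sketch of SELECT ... INTO, assuming MYTABLE has the COL1 column used in the question:
DECLARE
l_col1 MYTABLE.COL1%TYPE;
BEGIN
-- SELECT ... INTO must return exactly one row
SELECT COL1
INTO l_col1
FROM MYTABLE
WHERE ROWNUM = 1;
DBMS_OUTPUT.PUT_LINE('COL1 = ' || l_col1);
END;
/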
From your description of what you're attempting you only need to do the following:
SELECT COL1, COL2, COL3 FROM MYOTHERTABLE;
There is no need for PL/SQL (stored procedures) at all.
I suspect you intended to execute the SELECT * FROM MYTABLE after the procedure was compiled and executed. The above could be restructured as:
CREATE OR REPLACE procedure MYSTOREDPROCEDURE is
BEGIN
EXECUTE IMMEDIATE 'TRUNCATE TABLE MYTABLE';
INSERT INTO MYTABLE (COL1, COL2, COL3)
SELECT COL1, COL2, COL3
FROM MYOTHERTABLE;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Error in MYSTOREDPROCEDURE : ' || SQLCODE || ' - ' || SQLERRM);
RAISE;
END MYSTOREDPROCEDURE;
/
EXECUTE MYSTOREDPROCEDURE
/
SELECT * FROM MYTABLE
/
Share and enjoy.
My DBA won't let me have TRUNCATE privileges, even though truncate resets the high-water mark, unlike a delete from MYTABLE; your code will fail without those privileges. Both @Ben and @Bob Jarvis provide sound advice about the missing INTO and the nicer form. One addition to Bob Jarvis' answer: you need to start with SET SERVEROUTPUT ON SIZE 100000;, otherwise his nicely crafted error message will not be displayed. The SERVEROUTPUT setting is what DBMS_OUTPUT uses, and it only lasts as long as your database session.

How to create a table and insert in the same statement

I'm new to Oracle and I'm struggling with this:
DECLARE
cnt NUMBER;
BEGIN
SELECT COUNT(*) INTO cnt FROM all_tables WHERE table_name like 'Newtable';
IF(cnt=0) THEN
EXECUTE IMMEDIATE 'CREATE TABLE Newtable ....etc';
END IF;
COMMIT;
SELECT COUNT(*) INTO cnt FROM Newtable where id='something';
IF (cnt=0) THEN
EXECUTE IMMEDIATE 'INSERT INTO Newtable ....etc';
END IF;
END;
This keeps crashing and gives me "PL/SQL: ORA-00942: table or view does not exist" on the insert line. How can I avoid this? Or what am I doing wrong? I want these two statements (in reality it's a lot more, of course) in a single transaction.
It isn't the insert that is the problem, it's the select two lines before. You have three statements within the block, not two. You're selecting from the same new table that doesn't exist yet. You've avoided that in the insert by making that dynamic, but you need to do the same for the select:
EXECUTE IMMEDIATE q'[SELECT COUNT(*) FROM Newtable where id='something']'
INTO cnt;
SQL Fiddle.
Creating a table at runtime seems wrong though. You said 'for safety issues the table can only exist if it's filled with the correct dataset', which doesn't entirely make sense to me - even if this block is creating and populating it in one go, anything that relies on it will fail or be invalidated until this runs. If this is part of the schema creation then making it dynamic doesn't seem to add much. You also said you wanted both to happen in one transaction, but the DDL will do an implicit commit, you can't roll back DDL, and your manual commit will start a new transaction for the insert(s) anyway. Perhaps you mean the inserts shouldn't happen if the table creation fails - but they would fail anyway, whether they're in the same block or not. It seems a bit odd, anyway.
Also, using all_tables for the check could still cause this to behave oddly. If that table exists in another schema, your create will be skipped, but your select and insert might still fail as they might not be able to see, or won't look for, the other schema's version. Using user_tables or adding an owner check might be a bit safer.
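For instance, a sketch of the existence check against user_tables (remember that unquoted identifiers are stored in upper case in the dictionary, so compare against 'NEWTABLE'):
DECLARE
cnt NUMBER;
BEGIN
-- user_tables only lists tables in your own schema, so there is no owner ambiguity
SELECT COUNT(*)
INTO cnt
FROM user_tables
WHERE table_name = 'NEWTABLE';
IF cnt = 0 THEN
EXECUTE IMMEDIATE 'CREATE TABLE Newtable (c1 VARCHAR2(256))';
END IF;
END;
/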
Try the following approach, i.e. put the create and the insert in two different blocks:
DECLARE
cnt NUMBER;
BEGIN
SELECT COUNT (*)
INTO cnt
FROM all_tables
WHERE table_name LIKE 'Newtable';
IF (cnt = 0)
THEN
EXECUTE IMMEDIATE 'CREATE TABLE Newtable(c1 varchar2(256))';
END IF;
END;
DECLARE
cnt2 NUMBER;
BEGIN
SELECT COUNT (*)
INTO cnt2
FROM newtable
WHERE c1 = 'jack';
IF (cnt2 = 0)
THEN
EXECUTE IMMEDIATE 'INSERT INTO Newtable values(''jill'')';
END IF;
END;
Oracle handles the execution of a block in two steps:
First it parses the block and compiles it into an internal representation (so-called "P code").
It then runs the P code (it may be interpreted or compiled to machine code, depending on your architecture and Oracle version)
For compiling the code, Oracle must know the names (and the schema!) of the referenced tables. Your table doesn't exist yet, hence there is no schema and the code does not compile.
As to your intention to create the tables in one big transaction: this will not work. Oracle always implicitly commits the current transaction before and after a DDL statement (create table, alter table, truncate table(!), etc.). So after each create table, Oracle will commit the current transaction and start a new one.

ORA-02070: database does not support in this context

I have a query like
INSERT INTO sid_rem@dev_db
(sid)
select sid from v$session
Now, when I execute this query I get:
ORA-02070: database does not support in this context
This error happens only when I insert data from v$session into some remote database; it works fine for any other table.
Does anyone know why this happens, and is there any workaround?
Works using gv$session instead of v$session:
INSERT INTO sid_rem@dev_db(sid)
select sid from gv$session;
gv$ views are global views; that is, they are not restricted to one node (instance) but see the entire database (RAC). The v$ views are subviews of gv$.
Searching on the internet I found this has something to do with distributed transactions.
Thread on ora-code.com
Late answer, but it might still be useful. I've found this error occurs when trying to select from system views across a database link where the system view contains columns of type LONG. If the query can be reworded to avoid the LONG columns, these joins will work fine.
Example:
SELECT dc_prod.*
FROM dba_constraints@prod_link dc_prod
INNER JOIN dba_constraints dc_dev
ON (dc_dev.CONSTRAINT_NAME = dc_prod.CONSTRAINT_NAME)
will fail with an ORA-02070 due to accessing the LONG column SEARCH_CONDITION, but
SELECT dc_prod.*
FROM (SELECT OWNER,
CONSTRAINT_NAME,
CONSTRAINT_TYPE,
TABLE_NAME,
-- SEARCH_CONDITION,
R_OWNER,
R_CONSTRAINT_NAME,
DELETE_RULE,
STATUS,
DEFERRABLE,
DEFERRED,
VALIDATED,
GENERATED,
BAD,
RELY,
LAST_CHANGE,
INDEX_OWNER,
INDEX_NAME,
INVALID,
VIEW_RELATED
FROM dba_constraints@prod_link) dc_prod
INNER JOIN (SELECT OWNER,
CONSTRAINT_NAME,
CONSTRAINT_TYPE,
TABLE_NAME,
-- SEARCH_CONDITION,
R_OWNER,
R_CONSTRAINT_NAME,
DELETE_RULE,
STATUS,
DEFERRABLE,
DEFERRED,
VALIDATED,
GENERATED,
BAD,
RELY,
LAST_CHANGE,
INDEX_OWNER,
INDEX_NAME,
INVALID,
VIEW_RELATED
FROM dba_constraints) dc_dev
ON (dc_dev.CONSTRAINT_NAME = dc_prod.CONSTRAINT_NAME)
works fine because the LONG column SEARCH_CONDITION from DBA_CONSTRAINTS is not accessed.
Share and enjoy.
I don't know why this is happening, it's probably in the documentation somewhere but my Oracle-Docs-Fu seems to have deserted me today.
One possible workaround is to use a global temporary table:
SQL> create table tmp_ben ( sid number );
Table created.
SQL> connect schema/pw@db2
Connected.
SQL> create table tmp_ben ( sid number );
Table created.
SQL> insert into tmp_ben@db1 select sid from v$session;
insert into tmp_ben@db1 select sid from v$session
*
ERROR at line 1:
ORA-02070: database does not support in this context
SQL> create global temporary table tmp_ben_test ( sid number );
Table created.
SQL> insert into tmp_ben_test select sid from v$session;
73 rows created.
SQL> insert into tmp_ben@db1 select * from tmp_ben_test;
73 rows created.

Oracle Triggers not showing in DBA_SOURCE

In our application, only about 25% of the database triggers show up in DBA_SOURCE. I know I can force the others to show up if I make an actual modification (like adding and removing a space) and then recompile the trigger, but I've got about 400 triggers to modify (it's rather a big application). Just recompiling the triggers with alter trigger <triggername> compile; didn't accomplish anything.
Without the triggers being in DBA_SOURCE, we can't do text searches on the trigger code.
Is there some simpler way to accomplish this? And is there some way to prevent the problem in the future?
We're on Oracle 10.2.0.5.0.
I believe you can find the source in all_triggers. Unfortunately, the data is in a LONG column (an Oracle example of do as I say, not as I do). So the easiest thing would be to create a scratch table, populate it with the data converted to CLOB, and then search:
CREATE TABLE tr (trigger_name VARCHAR2(32), trigger_body CLOB);
INSERT INTO tr
(SELECT trigger_name, TO_LOB(trigger_body)
FROM all_triggers
WHERE owner = 'xxx');
SELECT trigger_name
FROM tr
WHERE trigger_body LIKE '%something%';
I'm not sure why the dba_source view is only sparsely populated for triggers. It's that way on my 10.2.0.4 database as well.
EDIT:
Here is a short script you can use to recreate all your triggers, at which point they should all be in dba_source:
CREATE TABLE temp_sql (sql1 CLOB, sql2 CLOB);
INSERT INTO temp_sql (sql1, sql2) (
SELECT 'CREATE OR REPLACE TRIGGER '||
DESCRIPTION||' '||CASE WHEN when_clause IS NULL THEN NULL ELSE 'WHEN('||when_clause||')' END sql1,
to_lob(trigger_body) sql2
FROM all_triggers
WHERE table_owner = 'theowner');
DECLARE
v_sql VARCHAR2(32760);
BEGIN
FOR R IN (SELECT sql1||' '||sql2 S FROM temp_sql) LOOP
v_sql := R.s;
EXECUTE IMMEDIATE v_sql;
END LOOP;
END;
/
We had the same issue. It's a migration issue from older versions of Oracle.
Triggers were not included in DBA_SOURCE in an earlier version (8?, 9i?) and did not get added to DBA_SOURCE when migrating to newer versions. A recompile did not put them into DBA_SOURCE. But if you drop and recreate the triggers, they will be included in DBA_SOURCE.
So my guess is you have some old triggers and have migrated the database in place to newer versions.
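If you do go the drop-and-recreate route, one way to pull the full DDL for each trigger is DBMS_METADATA; this is a sketch, not the exact migration fix, and the owner name below is illustrative:
-- in SQL*Plus / SQL Developer, allow long CLOB output first:
-- SET LONG 1000000 PAGESIZE 0
SELECT DBMS_METADATA.GET_DDL('TRIGGER', trigger_name, owner) AS trigger_ddl
FROM all_triggers
WHERE owner = 'THEOWNER';   -- spool this, review it, then drop and re-run each statement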
Who owns the triggers?
And, of course, you tried:
select owner, object_name
from all_objects
where object_type = 'TRIGGER'
and owner in ('schema1','schema2')
