I'm doing some cleanup in a bunch of old solutions. As part of the cleanup, I'm looking at removing some old triggers from an Oracle database. The triggers were originally designed by colleagues of mine and put in place by a third-party consultant. I don't have any direct access to the Oracle database except through a linked server on a SQL Server instance where I do have access.
So I'm listing the triggers like this:
select * from openquery(SERVERLINKNAME, '
select *
from ALL_TRIGGERS
where owner like ''%OURUSERNAME%''
order by trigger_name
')
This works OK, but the problem is that the TRIGGER_BODY column in ALL_TRIGGERS is of type LONG, and the data gets cut off somewhere between the Oracle server and my SSMS result set, so I can only see the first 100 characters of the column.
How can I select the whole TRIGGER_BODY field?
Searching Google for oracle convert long to varchar gives quite a few results, many of which suggest using functions, (temporary) tables, etc. All of these are out of the question in my specific case, since I'm not allowed to create any objects in the Oracle database/server.
I did finally find a sample that I was able to modify for my use case. The sample is from this page, by someone calling himself Sayan Malakshinov. After modifying his sample, I ended up with this:
select * from openquery(SERVERLINKNAME, '
select *
from
xmltable( ''/ROWSET/ROW'' passing dbms_xmlgen.getXMLType(''
select
trigger_name,
TRIGGER_BODY
from ALL_TRIGGERS
where TRIGGER_BODY is not null
and owner = ''''OURUSERNAME''''
'')
columns
trigger_name varchar2(80),
TRIGGER_BODY varchar2(4000)
)
')
This omits some columns from ALL_TRIGGERS, but I get the whole trigger body (since none of the triggers are longer than 4000 characters).
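If any trigger body were longer than 4000 characters, a possible variation would be to project the body as a CLOB instead of varchar2(4000). This is only a sketch: it assumes an Oracle version whose XMLTABLE supports CLOB columns, and whether the CLOB then survives the linked-server hop depends on the OLE DB provider. The inner Oracle query alone would look something like:
select trigger_name, trigger_body
from xmltable(
       '/ROWSET/ROW'
       passing dbms_xmlgen.getXMLType('
         select trigger_name, TRIGGER_BODY
         from ALL_TRIGGERS
         where TRIGGER_BODY is not null
           and owner = ''OURUSERNAME''
       ')
       columns
         trigger_name varchar2(80) path 'TRIGGER_NAME',
         trigger_body clob         path 'TRIGGER_BODY'  -- CLOB instead of varchar2(4000)
     )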
In case you'd like it a little easier to read and to execute parts of it, here is user1429080's version transformed using the alternative quoting (q-quote) syntax, with § and [ ] as delimiters for the nested strings:
select * from openquery(SERVERLINKNAME, q'§
select *
from
xmltable( '/ROWSET/ROW' passing dbms_xmlgen.getXMLType(q'[
select
trigger_name,
TRIGGER_BODY
from ALL_TRIGGERS
where TRIGGER_BODY is not null
and owner = 'OURUSERNAME'
]')
columns
trigger_name varchar2(80),
TRIGGER_BODY varchar2(4000)
)
§')
And in case you want to inspect the code in SQL result views (which often only render a single CHR(13) as a visual line break), this is very useful:
select translate( trigger_body, CHR(10)||CHR(13), CHR(13)||CHR(13) ) as body
from xmltable(
'/ROWSET/ROW' passing dbms_xmlgen.getXMLType(
q'[
select TRIGGER_BODY from ALL_TRIGGERS
where TRIGGER_NAME='<MY_TRIG_NAME>' -- or any other condition
]')
columns TRIGGER_BODY varchar2(4000))
I'm trying to merge an unknown number of identically structured tables into one. The reason we split the data this way is that we want to have, for example, 50 columns per table in the database. The tables are created externally by SCADA software called Ignition.
Every time a table is created in the database, we want to view the data as one set, regardless of how many tables it came from, provided that all the tables share the same prefix, for example Table_1, Table_2, Table_3 and so on.
The query/procedure we want would work like this:
Step 1: since the tables are not known in advance, we can't do it with a simple UNION, MERGE, INSERT, etc., so we must first find all table names with the 'Table' prefix.
SELECT table_name FROM all_tables where table_name like 'Table%'
Step 2: this is where the magic begins. It should query each table listed in step 1, one by one, then collect all the data and merge it into one table or view.
I've tried many approaches in PL/SQL but don't know how to proceed with step 2. Is there any way to achieve this? Any possible solutions would be great! :)
Thanks!
Assuming that you are selecting only the common columns from all the tables, you could create a dynamic view that does a UNION ALL of all the tables with the "Table" prefix.
DECLARE
  v_select CLOB;
BEGIN
  SELECT LISTAGG('SELECT col1,col2,col3 FROM ' || table_name,
                 ' UNION ALL ' || CHR(10))
         WITHIN GROUP (ORDER BY table_name)
  INTO   v_select
  FROM   user_tables
  WHERE  table_name LIKE 'TABLE_%';

  IF v_select IS NOT NULL THEN
    EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW v_all_tabs AS ' || v_select;
  END IF;
END;
/
Then select from the view, re-running the above block (or putting it into a procedure) each time a new table is added.
select * from v_all_tabs;
If there's a chance of your SQL string exceeding 4000 characters, then instead of a single LISTAGG you could append each SELECT through a simple assignment in a PL/SQL cursor loop, as sketched below.
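For example, a minimal sketch of that cursor-loop variant (it keeps the v_all_tabs view name and the col1, col2, col3 column list assumed above, and relies on Oracle 11g or later so that EXECUTE IMMEDIATE accepts a CLOB):
DECLARE
  v_select CLOB;
BEGIN
  FOR r IN (SELECT table_name
            FROM   user_tables
            WHERE  table_name LIKE 'TABLE_%'
            ORDER  BY table_name)
  LOOP
    IF v_select IS NOT NULL THEN
      v_select := v_select || ' UNION ALL ' || CHR(10);
    END IF;
    -- plain CLOB concatenation, so the statement can grow past 4000 characters
    v_select := v_select || 'SELECT col1, col2, col3 FROM ' || r.table_name;
  END LOOP;

  IF v_select IS NOT NULL THEN
    EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW v_all_tabs AS ' || v_select;
  END IF;
END;
/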
Problem Statement
I have a dynamic SQL statement which I need to store in a table, but before storing the SQL I need to validate it against the list of columns stored in another table.
Without executing the query, is it possible to find the names of the columns in the SELECT?
Approach 1
The only option I can think of is to use the explain plan of the query and read the metadata from the data dictionary tables, but unfortunately I am not able to find any table with such data. Please let me know if you know of such views.
Approach 2
Use the DBMS_SQL.DESCRIBE_COLUMNS procedure to find the column names, but I believe this will execute the whole query.
You don't need to execute the query to get the column names, you just need to parse it; e.g. as a simple example:
set serveroutput on
declare
  l_statement varchar2(4000) := 'select * from employees';
  l_c         pls_integer;
  l_col_cnt   pls_integer;
  l_desc_t    dbms_sql.desc_tab;
begin
  l_c := dbms_sql.open_cursor;
  dbms_sql.parse(c => l_c, statement => l_statement, language_flag => dbms_sql.native);
  dbms_sql.describe_columns(c => l_c, col_cnt => l_col_cnt, desc_t => l_desc_t);
  for i in 1..l_col_cnt loop
    dbms_output.put_line(l_desc_t(i).col_name);
  end loop;
  dbms_sql.close_cursor(l_c);
exception
  when others then
    if (dbms_sql.is_open(l_c)) then
      dbms_sql.close_cursor(l_c);
    end if;
    raise;
end;
/
which outputs:
EMPLOYEE_ID
FIRST_NAME
LAST_NAME
EMAIL
PHONE_NUMBER
HIRE_DATE
JOB_ID
SALARY
COMMISSION_PCT
MANAGER_ID
DEPARTMENT_ID
PL/SQL procedure successfully completed.
You can do whatever validation you need on the column names inside the loop.
Bear in mind that you'll only see (and validate) the column names, or the aliases given to column expressions, which won't necessarily reflect the data that is actually being retrieved. Someone could craft a query that pulls any data from anywhere it has permission to access, but then give the columns or expressions aliases that are considered valid.
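To make that concrete, here is a hedged illustration (the second table and its columns are invented for the example): both statements describe with exactly the same column names, so a name-only check cannot tell them apart.
-- describes as EMPLOYEE_ID, LAST_NAME
select employee_id, last_name
from   employees;

-- also describes as EMPLOYEE_ID, LAST_NAME, but reads completely different data
select account_no  as employee_id,
       card_number as last_name
from   payment_details;  -- hypothetical table the caller happens to have access to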
If you're trying to restrict access to specific data then look into other mechanisms like views, virtual private database, etc.
DBMS_SQL.PARSE will not execute a SELECT statement but it will execute a DDL statement. If the string 'select * from employees' is replaced by 'drop table employees' the code will fail but the table will still get dropped.
If you're only worried about the performance of retrieving the metadata then Alex Poole's answer will work fine.
If you're worried about running the wrong statement types then you'll want to make some adjustments to Alex Poole's answer.
It is surprisingly difficult to tell whether a statement is a SELECT or something else. A simple condition checking that the string begins with select will work 99% of the time, but getting from 99% to 100% is a huge amount of work. Simple regular expressions cannot keep up with all the different keywords, comments, alternative quoting formats, spaces, etc.
/*comment in front -- */ select * from dual
select * from dual
with asdf as (select * from dual) select * from asdf;
((((((select * from dual))))));
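To illustrate, a naive classifier along those lines might look like this sketch (:stmt stands for the statement text being checked); it handles the plain select * from dual, but misclassifies the commented, parenthesised and WITH variants above, which is exactly the 99%-versus-100% gap described:
select case
         when regexp_like(ltrim(:stmt), '^select[[:space:]]', 'i') then 'SELECT'
         else 'OTHER'
       end as naive_command_type
from dual;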
If you need 100% accuracy, I recommend you use my open-source PLSQL_LEXER. Once it is installed, you can reliably test the command types like this:
select
statement_classifier.get_command_name(' /*comment*/ ((select * from dual))') test1,
statement_classifier.get_command_name('alter table asdf move compress') test2
from dual;
TEST1   TEST2
------  -----------
SELECT  ALTER TABLE
I created a table in Oracle like this:
CREATE TABLE suppliers AS (SELECT * FROM companies WHERE id > 1000);
I would like to know the complete SELECT statement which was used to create this table.
I have already tried get_ddl, but it does not give the SELECT statement. Can you please let me know how to get it?
If you're lucky one of these statements will show the DDL used to generate the table:
select *
from gv$sql
where lower(sql_fulltext) like '%create table suppliers%';
select *
from dba_hist_sqltext
where lower(sql_text) like '%create table%';
I used the word lucky because GV$SQL will usually only have results for a few hours or days, until the data is purged from the shared pool. DBA_HIST_SQLTEXT will only help if AWR is enabled, the statement was run within the number of days AWR is configured to retain data (the default is 8), the statement was run after the last snapshot collection (by default one is taken every hour), and the statement ran long enough for AWR to think it was worth saving.
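If you are not sure how AWR is configured, the snapshot interval and retention can be checked directly (assuming you can read the DBA_HIST views):
select snap_interval, retention
from   dba_hist_wr_control;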
Also, Oracle does not always store the full SQL text. For security reasons, DDL statements are often truncated in the data dictionary, so don't be surprised if the text suddenly cuts off after the first N characters.
And depending on how the SQL was called, the case and spacing may differ. Use lower() and lots of wildcards to increase the chance of finding the statement.
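For example, a looser version of the first search above, for when the exact casing and spacing of the original statement are unknown:
select sql_id, sql_fulltext
from   gv$sql
where  lower(sql_fulltext) like '%create%table%suppliers%';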
Try this:
select distinct table_name
from   all_tab_columns
where  column_name in (select column_name
                       from   all_tab_columns
                       where  table_name = 'SUPPLIERS')
This way you can find the tables that share column names with SUPPLIERS, i.e. candidates for the table your new table was created from.
I have a fairly standard SQL Query as follows:
TRUNCATE TABLE TABLE_NAME;
INSERT INTO TABLE_NAME
(
UPRN,
SAO_START_NUMBER,
SAO_START_SUFFIX,
SAO_END_NUMBER,
SAO_END_SUFFIX,
SAO_TEXT,
PAO_START_NUMBER,
PAO_START_SUFFIX,
PAO_END_NUMBER,
PAO_END_SUFFIX,
PAO_TEXT,
STREET_DESCRIPTOR,
TOWN_NAME,
POSTCODE,
XY_COORD,
EASTING,
NORTHING,
ADDRESS
)
SELECT
BASIC_LAND_AND_PROPERTY_UNIT.UPRN,
LAND_AND_PROPERTY_IDENTIFIER.SAO_START_NUMBER AS SAO_START_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.SAO_START_SUFFIX AS SAO_START_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.SAO_END_NUMBER AS SAO_END_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.SAO_END_SUFFIX AS SAO_END_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.SAO_TEXT AS SAO_TEXT,
LAND_AND_PROPERTY_IDENTIFIER.PAO_START_NUMBER AS PAO_START_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.PAO_START_SUFFIX AS PAO_START_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.PAO_END_NUMBER AS PAO_END_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.PAO_END_SUFFIX AS PAO_END_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.PAO_TEXT AS PAO_TEXT,
STREET_DESCRIPTOR.STREET_DESCRIPTOR AS STREET_DESCRIPTOR,
STREET_DESCRIPTOR.TOWN_NAME AS TOWN_NAME,
LAND_AND_PROPERTY_IDENTIFIER.POSTCODE AS POSTCODE,
BASIC_LAND_AND_PROPERTY_UNIT.GEOMETRY AS XY_COORD,
BASIC_LAND_AND_PROPERTY_UNIT.X_COORDINATE AS EASTING,
BASIC_LAND_AND_PROPERTY_UNIT.Y_COORDINATE AS NORTHING,
decode(SAO_START_NUMBER,null,null,SAO_START_NUMBER||SAO_START_SUFFIX||' ')
||decode(SAO_END_NUMBER,null,null,SAO_END_NUMBER||SAO_END_SUFFIX||' ')
||decode(SAO_TEXT,null,null,SAO_TEXT||' ')
||decode(PAO_START_NUMBER,null,null,PAO_START_NUMBER||PAO_START_SUFFIX||' ')
||decode(PAO_END_NUMBER,null,null,PAO_END_NUMBER||PAO_END_SUFFIX||' ')
||decode(PAO_TEXT,null,null,'STREET RECORD',null,PAO_TEXT||' ')
||decode(STREET_DESCRIPTOR,null,null,STREET_DESCRIPTOR||' ')
||decode(POST_TOWN,null,null,POST_TOWN||' ')
||Decode(Postcode,Null,Null,Postcode) As Address
From (Land_And_Property_Identifier
Inner Join Basic_Land_And_Property_Unit
On Land_And_Property_Identifier.Uprn = Basic_Land_And_Property_Unit.Uprn)
Inner Join Street_Descriptor
On Land_And_Property_Identifier.Usrn = Street_Descriptor.Usrn
Where Land_And_Property_Identifier.Postally_Addressable='Y';
If I run this query in SQL Developer, it runs fine, with 1.8 million features inserted (select count(*) from TABLE_NAME within the session confirms this).
But when I run the commit, the data disappears! select count(*) from TABLE_NAME now returns 0 results.
We've done a number of things to try and see what's going on:
During the truncate, tablespace is freed up, and during the insert it's filled again. There is no change during the commit. This implies the data is in the database.
If I do the exact same query but with "and rownum < 100" appended to the end, the commit works. Same with 1000.
I found this question: oracle commit kills, and had our DBA try the "SQL Trace". This produced a >4 GB file which, when parsed with TKPROF, produced a 120-page report, but we don't know how to read it and there's nothing obviously wrong in there.
Our error logs have nothing in them. And obviously no error during the commit itself.
There's a trigger/sequence which does increment by 1.8 million during the process.
I've repeated this about 4 times now, but the result is always the same.
So my question is simple - what's happening to the data during the commit? How can we find out? Thanks.
Note: this has run fine in the past, so I don't believe there's anything wrong with the SQL per se.
Edit: Issue resolved by recreating the table from scratch. Now the insert only takes 500 seconds compared to the previous 2000, and committing is instantaneous; when it was broken, the commit took 4000 seconds!
I still have no idea why it happened though.
For those asking, the Create Table syntax:
CREATE TABLE TABLE_NAME
(
ADDRESS VARCHAR2(4000),
UPRN NUMBER(12),
SAO_START_NUMBER NUMBER(4),
SAO_START_SUFFIX VARCHAR2(1),
SAO_END_NUMBER NUMBER(4),
SAO_END_SUFFIX VARCHAR2(1),
SAO_TEXT VARCHAR2(90),
PAO_START_NUMBER NUMBER(4),
PAO_START_SUFFIX VARCHAR2(1),
PAO_END_NUMBER NUMBER(4),
PAO_END_SUFFIX VARCHAR2(1),
PAO_TEXT VARCHAR2(90),
STREET_DESCRIPTOR VARCHAR2(100),
TOWN_NAME VARCHAR2(30),
POSTCODE VARCHAR2(8),
XY_COORD MDSYS.SDO_GEOMETRY,
EASTING NUMBER(7),
NORTHING NUMBER(7)
)
CREATE INDEX TABLE_NAME_ADD_IDX ON TABLE_NAME (ADDRESS);
Do you still lose the data if you wrap the transaction in an anonymous block?
My guess is that you are opening two SQL windows in SQL Developer, which means two separate sessions; i.e. running SQL code in window 1 and doing a commit in window 2 will not commit the changes done in window 1.
Truncate table does an implicit commit. So the table will be empty until insert + commit finishes.
begin
  -- use "reuse storage" if you know the new data will be of similar size
  execute immediate 'truncate table table_name reuse storage';
  -- an implicit commit has occurred and the table is empty for all sessions
  insert into table_name (lots)
  select lots from table2;
  commit;
end;
You should use truncate with reuse storage, so that the database doesn't go and free all the blocks just to acquire the same number of blocks again during the insert.
If you want/need to have the data available at all times a better (but longer) method is
begin
  savepoint letsgo;
  delete from table_name;
  insert into table_name (lots)
  select lots from table2;
  commit;
exception
  when others then
    rollback to letsgo;
end;
Probably you had a trigger which you didn't notice. Can you check Oracle's recycle bin, which might be storing the history of your dropped table and trigger?
Select * from recyclebin;
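And if a dropped table does show up there, it can be brought back while it is still in the recycle bin (assuming the recycle bin is enabled and the object has not been purged); the table name below is a placeholder:
flashback table table_name to before drop;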
Reference: http://www.oraclebin.com/2012/12/recyclebinflashback.html
In our application, only about 25% of the database triggers show up in DBA_SOURCE. I know I can force the others to show up if I make an actual modification (like adding and removing a space) and then recompile the trigger, but I've got about 400 triggers to modify (it's rather a big application). Just recompiling the triggers with alter trigger <triggername> compile; didn't accomplish anything.
Without the triggers being in DBA_SOURCE, we can't do text searches on the trigger code.
Is there some simpler way to accomplish this? And is there some way to prevent the problem in the future?
We're on Oracle 10.2.0.5.0.
I believe you can find the source in all_triggers. Unfortunately, the data is in a LONG column (an Oracle example of "do as I say, not as I do"). So the easiest thing would be to create a scratch table to use, populate it with the data converted to CLOB, and then search it:
CREATE TABLE tr (trigger_name VARCHAR2(32), trigger_body CLOB);
INSERT INTO tr
(SELECT trigger_name, TO_LOB(trigger_body)
FROM all_triggers
WHERE owner = 'xxx');
SELECT trigger_name
FROM tr
WHERE trigger_body LIKE '%something%';
I'm not sure why the dba_source view is only sparsely populated for triggers. It's that way on my 10.2.0.4 database as well.
EDIT:
Here is a short script you can use to recreate all your triggers, at which point they should all be in dba_source:
CREATE TABLE temp_sql (sql1 CLOB, sql2 CLOB);

INSERT INTO temp_sql (sql1, sql2)
(SELECT 'CREATE OR REPLACE TRIGGER ' || description || ' ' ||
        CASE WHEN when_clause IS NULL THEN NULL
             ELSE 'WHEN(' || when_clause || ')'
        END sql1,
        to_lob(trigger_body) sql2
 FROM all_triggers
 WHERE table_owner = 'theowner');

DECLARE
  v_sql VARCHAR2(32760);
BEGIN
  FOR r IN (SELECT sql1 || ' ' || sql2 s FROM temp_sql) LOOP
    v_sql := r.s;
    EXECUTE IMMEDIATE v_sql;
  END LOOP;
END;
/
We had the same issue. It's a migration issue from older versions of Oracle.
Triggers were not included in DBA_SOURCE in an earlier version (8?, 9i?) and did not get added to DBA_SOURCE when migrating to newer versions. A recompile did not put them into DBA_SOURCE. But if you drop and recreate the triggers, they will be included in DBA_SOURCE.
So my guess is you have some old triggers and have migrated the database in place to newer versions.
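If it helps, something like the following should list the triggers that are missing from DBA_SOURCE, so you know which ones to drop and recreate (a sketch; use the ALL_* views instead if you cannot read the DBA_* ones):
select t.owner, t.trigger_name
from   dba_triggers t
where  not exists (select 1
                   from   dba_source s
                   where  s.owner = t.owner
                   and    s.name  = t.trigger_name
                   and    s.type  = 'TRIGGER')
order  by t.owner, t.trigger_name;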
Who owns the triggers?
And, of course, you tried:
select owner, object_name
from all_objects
where object_type = 'TRIGGER'
and owner in ('schema1','schema2')