In T-SQL I used to debug scripts or stored procedures like below (a very simple example) to check the data in the tables:
declare @my_variable nvarchar(4) = 'test';
....
update t
set t.my_column = case when t.my_column_3 = 1
then 1 + 1
when t.my_column_4 = 2
then 2 + 2
...
end
from my_table t
where t.column_2 = 'test';
--check the updated data
select t.my_column, t.my_column_2, t.my_column_3, t.my_column_4
from my_table t
where t.column_2 = 'test';
update t
set t.my_column_5 = case when t.my_column_6 + t.my_column_7 = t.my_column_8
then 'OK'
else 'NOT OK'
end,
....
from my_table t
where t.column_2 = 'test_2';
--check the updated data
select t.my_column_5, t.my_column_6, t.my_column_7, t.my_column_8
from my_table t
where t.column_2 = 'test_2';
etc..
I know that Oracle is quite different, but is there a similar way to check the data of some tables in the middle of a PL/SQL script?
I did some research, and apparently we can use dbms_output, but it's not as simple as a select * from my_table like in T-SQL.
Also, sometimes after running a PL/SQL script (manipulating the data etc.), I need to display the data of two or more tables in my script (for control purposes). Is that possible?
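For example, the closest I have found so far is looping over a cursor and printing columns with dbms_output; a sketch using my_table from above (it needs set serveroutput on in SQL*Plus):
begin
  --check the updated data: print the columns row by row
  for r in (select t.my_column, t.my_column_2, t.my_column_3, t.my_column_4
            from my_table t
            where t.column_2 = 'test')
  loop
    dbms_output.put_line(r.my_column || ' | ' || r.my_column_2 || ' | ' ||
                         r.my_column_3 || ' | ' || r.my_column_4);
  end loop;
end;
/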
All suggestions are welcome.
Thank you.
Related
Is there a hint to generate an execution plan ignoring the existing one from the shared pool?
There is not a hint to create an execution plan that ignores plans in the shared pool. A more common way of phrasing this question is: how do I get Oracle to always perform a hard parse?
There are a few weird situations where this behavior is required. It would be helpful to fully explain your reason for needing this, as the solution varies depending on why you need it.
Strange performance problem. Oracle performs some dynamic re-optimization of SQL statements after the first run, like adaptive cursor sharing and cardinality feedback. In the rare cases when those features backfire, you might want to disable them.
Dynamic query. You have a dynamic query that uses Oracle Data Cartridge to fetch data in the parse step, but Oracle won't execute the parse step because the query looks static to Oracle.
Misunderstanding. Something has gone wrong and this is an XY problem.
Solutions
The simplest way to solve this problem is to use Thorsten Kettner's solution of changing the query each time.
If that's not an option, the second simplest solution is to flush the query from the shared pool, like this:
--This only works one node at a time.
begin
for statements in
(
select distinct address, hash_value
from gv$sql
where sql_id = '33t9pk44udr4x'
order by 1,2
) loop
sys.dbms_shared_pool.purge(statements.address||','||statements.hash_value, 'C');
end loop;
end;
/
If you have no control over the SQL and need to fix the problem with a side-effect-style solution, Jonathan Lewis and Randolf Geist have a solution using Virtual Private Database that adds a unique predicate to each SQL statement on a specific table. You asked for something weird; here's a weird solution. Buckle up.
-- Create a random predicate for each query on a specific table.
create table hard_parse_test_rand as
select * from all_objects
where rownum <= 1000;
begin
dbms_stats.gather_table_stats(null, 'hard_parse_test_rand');
end;
/
create or replace package pkg_rls_force_hard_parse_rand is
function force_hard_parse (in_schema varchar2, in_object varchar2) return varchar2;
end pkg_rls_force_hard_parse_rand;
/
create or replace package body pkg_rls_force_hard_parse_rand is
function force_hard_parse (in_schema varchar2, in_object varchar2) return varchar2
is
s_predicate varchar2(100);
n_random pls_integer;
begin
n_random := round(dbms_random.value(1, 1000000));
-- s_predicate := '1 = 1';
s_predicate := to_char(n_random, 'TM') || ' = ' || to_char(n_random, 'TM');
-- s_predicate := 'object_type = ''TABLE''';
return s_predicate;
end force_hard_parse;
end pkg_rls_force_hard_parse_rand;
/
begin
DBMS_RLS.ADD_POLICY (USER, 'hard_parse_test_rand', 'hard_parse_policy', USER, 'pkg_rls_force_hard_parse_rand.force_hard_parse', 'select');
end;
/
alter system flush shared_pool;
You can see the hard-parsing in action by running the same query multiple times:
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
Now there are three entries in GV$SQL for each execution. There's some odd behavior in Virtual Private Database that parses the query multiple times, even though the final text looks the same.
select *
from gv$sql
where sql_text like '%hard_parse_test_rand%'
and sql_text not like '%quine%'
order by 1;
I think there is no hint indicating that Oracle should find a new execution plan every time it runs the query.
This is something we'd want for select * from mytable where is_active = :active, with is_active being 1 for very few rows and 0 for maybe billions of other rows. We'd then want an index access for :active = 1 and a full table scan for :active = 0. Two different plans.
As far as I know, Oracle uses bind variable peeking in later versions, so with a look at the statistics it really comes up with different execution plans for different bind variable contents. But in older versions it did not, and thus we'd want some hint saying "make a new plan" there.
Oracle only re-used an execution plan for exactly the same query; it sufficed to add a mere blank to get a new plan. Hence a solution might be to generate the query every time you want to run it, with a random number included in a comment:
select /* 1234567 */ * from mytable where is_active = :active;
Or just don't use bind variables, if this is the problem you want to address:
select * from mytable where is_active = 0;
select * from mytable where is_active = 1;
I have a couple of questions around ref cursors. Below is a ref cursor procedure that returns a single row to a Unix calling script based on what is passed in, and although the select looks a little untidy, it works as expected.
My first question: in the select I join to a lookup table to retrieve a single lookup value, 'trigram', and on testing I found that this join will occasionally fail because no value exists. I have tried to capture this with no_data_found and when others exceptions, but this does not appear to be working.
Ideally, if the join fails I would still like to return the values to the ref cursor but put something like 'No Trigram' into the trigram field; primarily I want to capture the exception.
My second question is more general. While I have initially created this in its own procedure, it is likely to be called by the main processing procedure a number of times. One of the conditions requires a separate select, but the procedure would only ever return one ref cursor when called; can the procedure's ref cursor OUT parameter be associated with 2 queries?
CREATE OR REPLACE PROCEDURE OPC_OP.SiteZone_status
(in_site_id IN AW_ACTIVE_ALARMS.site_id%TYPE
,in_zone_id IN AW_ACTIVE_ALARMS.zone_id%TYPE
,in_mod IN AW_ACTIVE_ALARMS.module%TYPE
,p_ResultSet OUT TYPES.cursorType
)
AS
BEGIN
OPEN p_ResultSet FOR
SELECT a.site_id,'~',a.zone_id,'~',b.trigram,'~',a.module,'~',a.message_txt,'~',a.time_stamp
FROM AW_ACTIVE_ALARMS a, AW_TRIGRAM_LOCATION b
WHERE a.site_id = b.site_id
AND a.zone_id = b.zone_id
AND a.site_id = in_site_id
AND a.zone_id = in_zone_id
AND a.module LIKE substr(in_mod,1,3)||'%'
AND weight = (select max(weight) from AW_ACTIVE_ALARMS c
WHERE c.site_id = in_site_id
AND c.zone_id = in_zone_id
AND c.module LIKE substr(in_mod,1,3)||'%');
EXCEPTION
WHEN OTHERS
THEN
DBMS_OUTPUT.PUT_LINE('No Data Found');
END SiteZone_status;
I have modified my code to adopt the answers provided, and it now works as expected as a standalone procedure within my package when called via a UNIX script using:
v_process_alarm=$(sqlplus -s user/pass <<EOF
set colsep ','
set linesize 500
set pages 0 feedback off;
set serveroutput on;
VARIABLE resultSet REFCURSOR
EXEC alarm_pkg.rtn_active_alarm($site,$zone,$module, :resultSet);
PRINT :resultSet
EOF
)
However, the procedure returning the ref cursor is to be called from the main processing procedure, as I only want to return values if certain criteria are met. I have added an OUT ref cursor to my main procedure and set a variable to match; I then call my ref cursor procedure from there, but this fails to compile
with the message 'Wrong number or types of arguments in call'.
My question is: what is the correct way to call a procedure that has an OUT ref cursor from within another procedure, and then return those values from there back to the calling script?
Oracle doesn't know whether a query will return rows until you fetch from the cursor, and it is not an error for a query to return 0 rows, so you will never get a no_data_found exception from opening a cursor. You'll only get that if you do something like a SELECT INTO a local variable, in which case a query that returns either 0 rows or more than 1 row raises an error.
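For illustration, a minimal sketch of the SELECT INTO case that does raise those exceptions (the lookup values here are hypothetical):
declare
  l_trigram aw_trigram_location.trigram%type;
begin
  -- raises NO_DATA_FOUND on 0 rows and TOO_MANY_ROWS on more than 1
  select b.trigram
    into l_trigram
    from aw_trigram_location b
   where b.site_id = 'SITE1'   -- hypothetical values
     and b.zone_id = 'ZONE1';
exception
  when no_data_found then
    l_trigram := 'No Trigram';
end;
/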
It sounds like you want an outer join to the AW_TRIGRAM_LOCATION table rather than the current inner join. This will return data from the other table even if there is no matching row in aw_trigram_location. That would look something like this (I have no idea why every other column is a hard-coded tilde character; that seems exceptionally odd):
SELECT a.site_id,'~',
a.zone_id,'~',
nvl(b.trigram, 'No Trigram Found'),'~',
a.module,'~',
a.message_txt,'~',
a.time_stamp
FROM AW_ACTIVE_ALARMS a
LEFT OUTER JOIN AW_TRIGRAM_LOCATION b
ON( a.site_id = b.site_id AND
a.zone_id = b.zone_id )
WHERE a.site_id = in_site_id
AND a.zone_id = in_zone_id
AND a.module LIKE substr(in_mod,1,3)||'%'
AND weight = (select max(weight)
from AW_ACTIVE_ALARMS c
WHERE c.site_id = in_site_id
AND c.zone_id = in_zone_id
AND c.module LIKE substr(in_mod,1,3)||'%');
I'm not quite sure that I understand your last question. You can certainly put logic in your procedure to run a different query depending on an input parameter. Something like
IF( <<some condition>> )
THEN
OPEN p_ResultSet FOR <<query 1>>
ELSE
OPEN p_ResultSet FOR <<query 2>>
END IF;
Whether it makes sense to do this rather than adding additional predicates or creating separate procedures is a question you'd have to answer.
You can use a left outer join to your look-up table, which is clearer if you use ANSI join syntax rather than Oracle's old syntax. If there is no record in AW_TRIGRAM_LOCATION then b.trigram will be null, and you can then use NVL to assign a dummy value:
OPEN p_ResultSet FOR
SELECT a.site_id,'~',a.zone_id,'~',NVL(b.trigram, 'No Trigram'),'~',
a.module,'~',a.message_txt,'~',a.time_stamp
FROM AW_ACTIVE_ALARMS a
LEFT JOIN AW_TRIGRAM_LOCATION b
ON b.site_id = a.site_id
AND b.zone_id = a.zone_id
WHERE a.site_id = in_site_id
AND a.zone_id = in_zone_id
AND a.module LIKE substr(in_mod,1,3)||'%'
AND weight = (select max(weight) from AW_ACTIVE_ALARMS c
WHERE c.site_id = in_site_id
AND c.zone_id = in_zone_id
AND c.module LIKE substr(in_mod,1,3)||'%');
You won't get NO_DATA_FOUND opening a cursor, only when you fetch from it (depending on what is actually consuming this). It's a bad idea to catch WHEN OTHERS anyway - you would want to catch WHEN NO_DATA_FOUND, though it wouldn't help here. And using dbms_output to report an error relies on the client enabling its display, which you can't generally assume.
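As for the follow-up about calling this from the main processing procedure: give the caller an OUT parameter of the same ref cursor type and pass it straight through; the 'wrong number or types of arguments' error often means the local variable's type doesn't match. A sketch (main_process is a hypothetical name):
CREATE OR REPLACE PROCEDURE main_process
  (in_site_id  IN  AW_ACTIVE_ALARMS.site_id%TYPE
  ,in_zone_id  IN  AW_ACTIVE_ALARMS.zone_id%TYPE
  ,in_mod      IN  AW_ACTIVE_ALARMS.module%TYPE
  ,p_ResultSet OUT TYPES.cursorType
  )
AS
BEGIN
  -- ... main processing and criteria checks ...
  -- hand the caller's OUT parameter straight to SiteZone_status;
  -- whatever cursor it opens is returned to the calling script
  SiteZone_status(in_site_id, in_zone_id, in_mod, p_ResultSet);
END main_process;
/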
I usually do this query every day:
select * from table1
where name = 'PAUL';
select * from table2
where id_user = 012345;
select * from table25
where name = 'PAUL';
select * from table99
where name = 'PAUL';
select * from table28
where id_user = 012345;
.
.
.
I do this query every day. Today it is 15 tables, tomorrow maybe 20; the only change is the name or id of the user to search for. I have to place the username or id in each of the queries, so I need a query with a variable to which I assign the name and id once.
.
.
Is there a way to simplify this query and make it better? Something like:
DECLARE
l_name table1.name %type;
l_id_user table28.id_user %type;
BEGIN
l_id_user := 012345;
l_name := 'PAUL';
select * from table1
where name in ('l_name');
select * from table2
where id_user in (l_id_user);
.
.
.
END;
I tried it that way but it fails. I need this because in most cases I need to look at 20 tables or more.
If what you want is an easy way to repeatedly run a sequence of select statements in SQL*Plus, but with varying criteria, you can do it like this:
ACCEPT l_name PROMPT "Input user name: "
ACCEPT l_id_user PROMPT "Input user id: "
select * from table1
where name = '&l_name';
select * from table2
where id_user = &l_id_user;
select * from table25
where name = '&l_name';
select * from table99
where name = '&l_name';
select * from table28
where id_user = &l_id_user;
In a PL/SQL block (declare ... begin ... end;), select statements behave differently from when you run them standalone in SQL*Plus:
They won't automatically print output.
You must use e.g. a FOR LOOP or a variation of the SELECT INTO syntax to capture the result for subsequent processing (see the sketch after this list). You should seek out some documentation on PL/SQL programming.
You should understand that pl/sql is definitely not designed for screen output.
It is possible to print out string values using dbms_output.put_line, but you are completely on your own with regards to formatting a row of values into a suitable string.
If your result contains more than a few columns, formatting for screen output becomes very cumbersome, as you won't have any formatting utilities like Java's String.format() to assist you.
And if your result contains more than one row you have to provide some sort of loop construct allowing you to print out the individual rows.
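For example, a minimal sketch of the FOR LOOP approach, using table1 from the question (needs set serveroutput on):
begin
  for r in (select * from table1 where name = 'PAUL') loop
    -- formatting each row into a string is entirely up to you
    dbms_output.put_line(r.name);
  end loop;
end;
/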
Conclusion:
These are the drawbacks of pl/sql - I really don't see any benefits - unless your aim is to apply some logic to the data being read rather than just printing it to the screen or a file.
I have a simple INSERT query where I need to use UPDATE instead when the primary key is a duplicate. In MySQL this seems easier, in Oracle it seems I need to use MERGE.
All examples I could find of MERGE had some sort of "source" and "target" tables, in my case, the source and target is the same table. I was not able to make sense of the examples to create my own query.
Is MERGE the only way or maybe there's a better solution?
INSERT INTO movie_ratings
VALUES (1, 3, 5)
It's basically this, and the primary key is the first 2 values, so an update would be like this:
UPDATE movie_ratings
SET rating = 8
WHERE mid = 1 AND aid = 3
I thought of using a trigger that would automatically execute the UPDATE statement when the INSERT is called, but only if the primary key is a duplicate. Is there any problem doing it this way? I need some help with triggers, though, as I'm having some difficulty understanding them and writing my own.
MERGE is the 'do INSERT or UPDATE as appropriate' statement in Standard SQL, and probably therefore in Oracle SQL too.
Yes, you need a 'table' to merge from, but you can almost certainly create that table on the fly:
MERGE INTO Movie_Ratings M
USING (SELECT 1 AS mid, 3 AS aid, 8 AS rating FROM dual) N
ON (M.mid = N.mid AND M.aid = N.aid)
WHEN MATCHED THEN UPDATE SET M.rating = N.rating
WHEN NOT MATCHED THEN INSERT( mid, aid, rating)
VALUES(N.mid, N.aid, N.rating);
(Syntax not verified.)
A typical way of doing this is
performing the INSERT and catching DUP_VAL_ON_INDEX, then performing an UPDATE instead
performing the UPDATE first and, if SQL%ROWCOUNT = 0, performing an INSERT
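A minimal sketch of both options, assuming the column order mid, aid, rating from the question:
-- Option 1: insert first, fall back to update on a duplicate key
begin
  insert into movie_ratings (mid, aid, rating) values (1, 3, 8);
exception
  when dup_val_on_index then
    update movie_ratings set rating = 8 where mid = 1 and aid = 3;
end;
/
-- Option 2: update first, insert if nothing was updated
begin
  update movie_ratings set rating = 8 where mid = 1 and aid = 3;
  if sql%rowcount = 0 then
    insert into movie_ratings (mid, aid, rating) values (1, 3, 8);
  end if;
end;
/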
You can't write a row-level trigger on a table that performs another operation on the same table; that raises Oracle's "mutating table" error.
I'm a T-SQL guy but a trigger in this case is not a good solution. Most triggers are not good solutions. In T-SQL, I would simply perform an IF EXISTS (SELECT * FROM dbo.Table WHERE ...) but in Oracle, you have to select the count...
DECLARE
cnt NUMBER;
BEGIN
SELECT COUNT(*)
INTO cnt
FROM mytable
WHERE id = 12345;
IF( cnt = 0 )
THEN
...
ELSE
...
END IF;
END;
It would appear that MERGE is what you need in this case:
MERGE INTO movie_ratings mr
USING (SELECT 1 AS mid, 3 AS aid, 8 AS rating FROM dual) mri
ON (mr.mid = mri.mid AND mr.aid = mri.aid)
WHEN MATCHED THEN
    UPDATE SET mr.rating = mri.rating
WHEN NOT MATCHED THEN
    INSERT (mr.mid, mr.aid, mr.rating)
    VALUES (mri.mid, mri.aid, mri.rating)
Like I said, I'm a T-SQL guy, but the basic idea here is to join movie_ratings against a one-row source holding the new values. If there's no performance hit from using the "if exists" example, I'd use that for readability.
We just converted our SQL Server stored procedures to Oracle procedures. The SQL Server SPs were highly dependent on session tables (INSERT INTO #table1 ...); these got converted to global temporary tables in Oracle. We ended up with around 500 GTTs for our 400 SPs.
Now we are finding out that working with GTTs in Oracle is considered a last resort because of performance and other issues.
What other alternatives are there? Collections? Cursors?
Our typical use of GTT's is like so:
Insert into GTT
INSERT INTO some_gtt_1
(column_a,
column_b,
column_c)
(SELECT someA,
someB,
someC
FROM TABLE_A
WHERE condition_1 = 'YN756'
AND type_cd = 'P'
AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
AND (lname LIKE (v_LnameUpper || '%') OR
lname LIKE (v_searchLnameLower || '%'))
AND (e_flag = 'Y' OR
it_flag = 'Y' OR
fit_flag = 'Y'));
Update the GTT
UPDATE some_gtt_1 a
SET column_a = (SELECT b.data_a FROM some_table_b b
WHERE a.column_b = b.data_b AND a.column_c = 'C')
WHERE column_a IS NULL OR column_a = ' ';
and later on get the data out of the GTT. These are just sample queries; in actuality the queries are really complex, with lots of joins and subqueries.
I have a three-part question:
1. Can someone show how to transform the above sample queries to collections and/or cursors?
2. Since with GTTs you can work natively with SQL, why go away from GTTs? Are they really that bad?
3. What should be the guidelines on when to use and when to avoid GTTs?
Let's answer the second question first:
"Why go away from GTTs? Are they really that bad?"
A couple of days ago I was knocking up a proof of concept which loaded a largish XML file (~18MB) into an XMLType. Because I didn't want to store the XMLType permanently I tried loading it into a PL/SQL variable (session memory) and a temporary table. Loading it into a temporary table took five times as long as loading it into an XMLType variable (5 seconds compared to 1 second). The difference is because temporary tables are not memory structures: they are written to disk (specifically your nominated temporary tablespace).
If you want to cache a lot of data then storing it in memory will stress the PGA, which is not good if you have lots of sessions. So it's a trade-off between RAM and time.
To the first question:
"Can someone show how to transform the above sample queries to collections and/or cursors?"
The queries you post can be merged into a single statement:
SELECT case when a.column_a IS NULL OR a.column_a = ' '
then b.data_a
else column_a end AS someA,
a.someB,
a.someC
FROM TABLE_A a
left outer join TABLE_B b
on ( a.column_b = b.data_b AND a.column_c = 'C' )
WHERE condition_1 = 'YN756'
AND type_cd = 'P'
AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
AND (lname LIKE (v_LnameUpper || '%') OR
lname LIKE (v_searchLnameLower || '%'))
AND (e_flag = 'Y' OR
it_flag = 'Y' OR
fit_flag = 'Y');
(I have simply transposed your logic, but that case() expression could be replaced with the neater nvl2(trim(a.column_a), a.column_a, b.data_a).)
I know you say your queries are more complicated but your first port of call should be to consider rewriting them. I know how seductive it is to break a gnarly query into lots of baby SQLs stitched together with PL/SQL but pure SQL is way more efficient.
To use a collection it is best to define the types in SQL, because it gives us the flexibility to use them in SQL statements as well as PL/SQL.
create or replace type tab_a_row as object
(col_a number
, col_b varchar2(23)
, col_c date);
/
create or replace type tab_a_nt as table of tab_a_row;
/
Here's a sample function, which returns a result set:
create or replace function get_table_a
(p_arg in number)
return sys_refcursor
is
tab_a_recs tab_a_nt;
rv sys_refcursor;
begin
select tab_a_row(col_a, col_b, col_c)
bulk collect into tab_a_recs
from table_a
where col_a = p_arg;
for i in tab_a_recs.first()..tab_a_recs.last()
loop
if tab_a_recs(i).col_b is null
then
tab_a_recs(i).col_b := 'something';
end if;
end loop;
open rv for select * from table(tab_a_recs);
return rv;
end;
/
And here it is in action:
SQL> select * from table_a
2 /
COL_A COL_B COL_C
---------- ----------------------- ---------
1 whatever 13-JUN-10
1 12-JUN-10
SQL> var rc refcursor
SQL> exec :rc := get_table_a(1)
PL/SQL procedure successfully completed.
SQL> print rc
COL_A COL_B COL_C
---------- ----------------------- ---------
1 whatever 13-JUN-10
1 something 12-JUN-10
SQL>
In the function it is necessary to instantiate the type with the columns, in order to avoid the ORA-00947 exception. This is not necessary when populating a PL/SQL table type:
SQL> create or replace procedure pop_table_a
2 (p_arg in number)
3 is
4 type table_a_nt is table of table_a%rowtype;
5 tab_a_recs table_a_nt;
6 begin
7 select *
8 bulk collect into tab_a_recs
9 from table_a
10 where col_a = p_arg;
11 end;
12 /
Procedure created.
SQL>
Finally, the guidelines:
"What should be the guidelines on when to use and when to avoid GTTs?"
Global temporary tables are very good when we need to share cached data between different program units in the same session; for instance, a generic report structure generated by a single function feeding off a GTT which is populated by one of several procedures. (Although even that could also be implemented with dynamic ref cursors ...)
Global temporary tables are also good if we have a lot of intermediate processing which is just too complicated to be solved with a single SQL query. Especially if that processing must be applied to subsets of the retrieved rows.
But in general the presumption should be that we don't need to use a temporary table. So
Do it in SQL unless it is too hard, in which case ...
... do it in PL/SQL variables (usually collections) unless it takes too much memory, in which case ...
... do it with a Global Temporary Table.
Generally I'd use a PL/SQL collection for storing small volumes of data (maybe a thousand rows). If the data volumes were much larger, I'd use a GTT so that they don't overload the process memory.
So I might select a few hundred rows from the database into a PL/SQL collection, then loop through them to do some calculation/delete a few or whatever, then insert that collection into another table.
If I was dealing with hundreds of thousands of rows, I would try to push as much of the 'heavy lifting' processing into large SQL statements. That may or may not require GTT.
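A sketch of that collection-then-insert pattern with BULK COLLECT and FORALL (source_table and target_table are hypothetical):
declare
  type id_tab is table of source_table.id%type;
  l_ids id_tab;
begin
  -- pull a modest number of rows into a collection
  select id bulk collect into l_ids
    from source_table
   where status = 'PENDING';
  -- work through them in PL/SQL (calculations, filtering, ...)
  for i in 1 .. l_ids.count loop
    null;  -- per-row logic goes here
  end loop;
  -- then write the collection to another table in one statement
  forall i in 1 .. l_ids.count
    insert into target_table (id) values (l_ids(i));
end;
/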
You can use SQL-level collection objects as something that translates quite easily between SQL and PL/SQL:
create type typ_car is object (make varchar2(10), model varchar2(20), year number(4));
/
create type typ_coll_car is table of typ_car;
/
select * from table (typ_coll_car(typ_car('a','b',1999), typ_car('A','Z',2000)));
MAKE MODEL YEAR
---------- -------------------- ---------------
a b 1,999.00
A Z 2,000.00
declare
v_car1 typ_car := typ_car('a','b',1999);
v_car2 typ_car := typ_car('A','Z',2000);
t_car typ_coll_car := typ_coll_car();
begin
t_car := typ_coll_car(v_car1, v_car2);
FOR i in (SELECT * from table(t_car)) LOOP
dbms_output.put_line(i.year);
END LOOP;
end;
/