literal string works but variables take forever - oracle

I have a query that works when I use fixed values, i.e.:
select
count(*)
from
address a
where
a.primary_name like upper('cambourne court') and
a.secondary_name like upper('flat 9');
However, if I replace upper('flat 9') with a variable (second_name := upper('flat 9')), the search now returns all 111 addresses in 'cambourne court'.
Why would this be?
EDIT: This is the complete address.sql file (with comments removed)
declare
    address_details address%rowtype;
    current_loc     varchar2(32);
    prime_name      varchar2(255);
    prime_number    varchar2(255);
    second_name     varchar2(255);
    street_name     varchar2(255);
    town_name       varchar2(255);
    success         boolean;
    the_count       number;
begin
    prime_name   := upper('&&primary_name');
    prime_number := upper('&&primary_number');
    second_name  := upper('&&secondary_name');
    street_name  := upper('&&street_name');
    town_name    := upper('&&town_name');
    success      := true;
    -- error checking here (removed for brevity)
    if success then
        current_loc := 'finding address';
        select count(*)
        into   the_count
        from   dependency d,
               address a,
               street s
        where  d.dep_obj_id1 = 2
        and    d.dep_obj_id2 = 1
        and    a.loc_id = d.dep_id1
        and    s.loc_id = d.dep_id2
        and    a.primary_name like prime_name
        and    a.secondary_name like second_name
        and    s.name like street_name
        and    s.town like town_name;
    end if;
    dbms_output.put_line('success: address found '||the_count);
exception
    when too_many_rows then
        dbms_output.put_line('failure: too many rows while '||current_loc);
    when no_data_found then
        dbms_output.put_line('failure: no rows found while '||current_loc);
    when others then
        dbms_output.put_line('failure: general error while '||current_loc);
end;
/
Update: I restarted SQL*Plus which seemed to have fixed the break.
Replacing prime_name and second_name with the actual strings makes the code run in less than a second; with the variables it takes more than 2 minutes.

Your symptoms correspond to having a PL/SQL variable with the same name as a column in the table.
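As an illustration of that trap (a minimal sketch against the same address table; the variable name is deliberately chosen to collide with the column): when an unqualified name matches both a column and a PL/SQL variable, the column wins inside the SQL statement, so the predicate silently becomes column LIKE column and matches every row with a non-null value.
declare
    secondary_name varchar2(255) := upper('flat 9');  -- same name as the column
    the_count      number;
begin
    select count(*)
    into   the_count
    from   address a
    where  a.secondary_name like secondary_name;  -- right-hand side resolves to the column, not the variable
    dbms_output.put_line(the_count);  -- counts every row whose secondary_name is not null
end;
/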
[Edit]
Feeling somewhat guilty about an upvote for what wasn't the correct answer, I tried to reproduce the problem and don't get your results:
SQL> select * from address
2 ;
PRIMARY_NAME                   SECONDARY_NAME
------------------------------ ------------------------------
CAMBOURNE COURT                FLAT 9
CAMBOURNE COURT                FLAT 10
SQL> declare
2 second_name varchar2(30) := upper('flat 9');
3 x pls_integer;
4 cursor c is
5 select
6 count(*)
7 from address a
8 where
9 a.primary_name like upper('cambourne court') and
10 a.secondary_name like upper('flat 9')
11 ;
12 begin
13 select count(*) into x
14 from address a
15 where
16 a.primary_name like upper('cambourne court') and
17 a.secondary_name like upper('flat 9');
18 dbms_output.put_line('literal: '||x);
19 select count(*) into x
20 from address a
21 where
22 a.primary_name like upper('cambourne court') and
23 a.secondary_name like second_name;
24 dbms_output.put_line('variable: '||x);
25 end;
26 /
literal: 1
variable: 1
PL/SQL procedure successfully completed.

The 111 records suggest second_name doesn't contain the value you expect; how are you capturing &&secondary_name, and can you check the value it actually has before and after your omitted validation section? From the results it seems to contain '%' rather than 'flat 9', but I assume you've already checked that.
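A quick way to check (a sketch relying on standard SQL*Plus behaviour): turn verification on so SQL*Plus echoes each substitution, and print the PL/SQL variable just after your validation section.
set verify on          -- SQL*Plus prints old/new lines for every && substitution
-- and/or, inside the block, just before the query:
dbms_output.put_line('second_name = ['||second_name||']');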
The speed issue suggests the optimiser is changing behaviour in a way that changes the join order and/or the indexes being used. By default that could mean joining every street row with every address record that has a Cambourne Court, and only then doing the dependency checks, but it will vary quite a bit based on what indexes it thinks it can use and any stats that are available. The difference is that with the literals, even though you're using like, there are no wildcards, so it may know it can use an index on primary_name and/or secondary_name; in the variable version it can't know that when the query is parsed, so it has to assume the worst case, which would be '%'. Which it may actually be getting, if it's returning 111 addresses.
Without doing an explain plan it's hard to guess exactly what's going on, but you could try adding some optimiser hints to at least get the join order right, and even to force an index - though that probably shouldn't stay in place if you can ever have values starting with '%'. That might tell you what's being done differently.
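For example (a sketch only: the hint names are standard, but the join order and the index name address_secondary_name_idx are assumptions about your schema), you could pin the driving table and an index and see whether the runtime changes:
select /*+ leading(a) use_nl(d s)
           index(a address_secondary_name_idx) */
       count(*)
into   the_count
from   dependency d, address a, street s
where  d.dep_obj_id1 = 2
and    d.dep_obj_id2 = 1
and    a.loc_id = d.dep_id1
and    s.loc_id = d.dep_id2
and    a.primary_name like prime_name
and    a.secondary_name like second_name
and    s.name like street_name
and    s.town like town_name;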

The explain plan may be suggestive.
After running it, find the sql_id from v$sql for that statement:
select sql_text, sql_id from v$sql where lower(sql_text) like '%address%street%';
Then plug that into
select * from table(dbms_xplan.display_cursor('1mmy8g93um377'));
What you should see at the bottom is something like this, which would show whether there are any oddities in the plan (e.g. using a column in one of the tables, using a function...).
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("A"."LOC_ID"="D"."DEP_ID1" AND "S"."LOC_ID"="D"."DEP_ID2")
   4 - filter(("A"."PRIMARY_NAME" LIKE :B4 AND "A"."SECONDARY_NAME" LIKE :B3))
   6 - filter(("S"."NAME" LIKE :B2 AND "S"."TOWN" LIKE :B1))
   7 - filter(("D"."DEP_OBJ_ID1"=2 AND "D"."DEP_OBJ_ID2"=1))

Alex has pointed out the probable cause. The tables are indexed, and using "like" with a variable is a case of index deactivation. Optimizers treat "like" expressions with constants that have no wildcards or placeholders as "=", so indexes, if present, are considered.
Drop your index on those columns and you'll get the same bad performance with constants or variables. Actually, don't do it - just autotrace and compare the plans.
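In SQL*Plus the comparison can be as simple as this (a sketch; it assumes autotrace is available to you, i.e. you have the PLUSTRACE role):
set autotrace traceonly explain
-- literal version
select count(*) from address a
where a.primary_name like upper('cambourne court')
and   a.secondary_name like upper('flat 9');
-- bind-variable version
variable second_name varchar2(255)
exec :second_name := upper('flat 9')
select count(*) from address a
where a.primary_name like upper('cambourne court')
and   a.secondary_name like :second_name;
set autotrace off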

Related

Using EXECUTE IMMEDIATE based on entries of table

I (using Oracle 12c, PL/SQL) need to update an existing table TABLE1 based on information stored in a table MAP. In a simplified version, MAP looks like this:
COLUMN_NAME   MODIFY
COLUMN1       N
COLUMN2       Y
COLUMN3       N
...           ...
COLUMNn       Y
COLUMN1 to COLUMNn are column names in TABLE1 (but there are more columns, not just these). Now I need to update a column in TABLE1 if MODIFY in table MAP contains a 'Y' for that column's name. There are other row conditions, so what I would need are UPDATE statements of the form
UPDATE TABLE1
SET COLUMNi = value_i
WHERE OTHER_COLUMN = 'xyz_i';
where COLUMNi runs through all the columns of TABLE1 which are marked with MODIFY = 'Y' in MAP. value_i and xyz_i also depend on information stored in MAP (not displayed in the example).
The table MAP is not static but changes, so I do not know in advance which columns to update. What I have done so far is to generate the UPDATE statements I need in a query from MAP, i.e.
SELECT <Text of UPDATE-STATEMENT using row information from MAP> AS SQL_STMT
FROM MAP
WHERE MODIFY = 'Y';
Now I would like to execute these statements (possibly hundreds of rows). Of course I could just copy the contents of the query into code and execute, but is there a way to do this automatically, e.g. using EXECUTE IMMEDIATE? It could be something like
BEGIN
EXECUTE IMMEDIATE SQL_STMT USING 'xyz_i';
END;
only that SQL_STMT should run through all the rows of the previous query (and 'xyz_i' varies with the row as well). Any hints on how to achieve this, or on how one should approach the task in general?
EDIT: In response to the comments, a bit more background on how this problem arises. I receive an empty n x m matrix (empty except for row and column names; think of them as the first row and first column) quarterly and need to populate the empty fields from another process.
The structure of the initial matrix changes, i.e. there may be new/deleted columns/rows and existing columns/rows may change their position in the matrix. What I need to do is to take the old version of the matrix, where I already have filled the empty spaces, and translate this into the new version. Then, the populating process merely looks if entries have changed and if so, alters them.
The situation from the question arises after I have translated the old version into the new one, before doing the delta. The new matrix, populated with the old information, is TABLE1. The delta process, over which I have no control, gives me column names and information to be entered into the cells of the matrix (this is table MAP). So I need to find the column in the matrix labeled by the delta process and then change values in rows (which ones is specified via other information provided by the delta process).
Dynamic SQL it is; here's an example, see if it helps.
This is a table whose contents should be modified:
SQL> select * from test order by id;
        ID NAME           SALARY
---------- ---------- ----------
         1 Little            100
         2                    200
         3 Foot                0
         4                      0
This is the map table:
SQL> select * from map;
COLUMN CB_MODIFY  VALUE WHERE_CLAUSE
------ ---------- ----- -------------
NAME   Y          Scott where id <= 3
SALARY N          1000  where 1 = 1
Procedure loops through all columns that are set to be modified, composes the dynamic update statement and executes it:
SQL> declare
2 l_str varchar2(1000);
3 begin
4 for cur_r in (select m.column_name, m.value, m.where_clause
5 from map m
6 where m.cb_modify = 'Y'
7 )
8 loop
9 l_str := 'update test set ' ||
10 cur_r.column_name || ' = ' || chr(39) || cur_r.value || chr(39) || ' ' ||
11 cur_r.where_clause;
12 execute immediate l_str;
13 end loop;
14 end;
15 /
PL/SQL procedure successfully completed.
Result:
SQL> select * from test order by id;
        ID NAME           SALARY
---------- ---------- ----------
         1 Scott             100
         2 Scott             200
         3 Scott               0
         4                     0
SQL>
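A follow-up note on the EXECUTE IMMEDIATE ... USING idea from the question: column names can't be bound, but the values can be, which avoids quoting problems. A sketch (where_value and other_column are hypothetical names, assuming MAP stores plain values rather than ready-made clause text):
declare
  l_str varchar2(1000);
begin
  for cur_r in (select m.column_name, m.value, m.where_value
                from map m
                where m.cb_modify = 'Y')
  loop
    -- only the values are bound; the column name still has to be concatenated
    l_str := 'update test set ' || cur_r.column_name || ' = :new_val' ||
             ' where other_column = :cond_val';
    execute immediate l_str using cur_r.value, cur_r.where_value;
  end loop;
end;
/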

Execute immediate use in select * from in oracle

I am trying to get the maximum length of column data in Oracle by executing a dynamic query as part of a SELECT statement, except it seems we can't use EXECUTE IMMEDIATE in a SELECT clause. Could I get some help with the syntax, or with understanding a better way to do this?
SELECT
owner OWNER,
table_name,
column_name,
'select max(length('||column_name||')) from '||table_name||';' max_data_length
FROM
dba_tab_columns
WHERE
( data_type = 'NUMBER'
OR data_type = 'INTEGER' )
The 4th column in the above query spits out a SQL string rather than computing the value and returning it.
Here is some food for thought. Note that I am only looking for numeric columns that don't already have precision specified in the catalog. (If you prefer, you can audit all numeric columns and compare the declared precision against the actual precision used by your data.)
I am also looking only in specific schemas. Instead, you may give a list of schemas to be ignored; I hope you are not seriously considering making any changes to SYS, for example, even if it does (and it does!) have numeric columns without specified precision.
The catalog doesn't store INTEGER as the data type; instead, it stores that as NUMBER(38), so I am not searching for data type INTEGER in DBA_TAB_COLUMNS. But this raises an interesting question - perhaps you should search for all columns where DATA_PRECISION is null (as in my code below), but also for DATA_PRECISION = 38.
In the code below I use DBMS_OUTPUT to display the findings directly to the screen. You will probably want to do something smarter with this; either create a table function, or create a table and insert the findings in it, or perhaps even issue DDL already (note that those also require dynamic SQL).
This still leaves you to deal with scale. Perhaps you can get around that with a specification like NUMBER(prec, *) - not sure if that will meet your needs. But the idea is similar; you will just need to write code carefully, like I did for precision (accounting for the decimal point and the minus sign, for example).
Long story short, here is what I ran on my system, and the output it produced.
declare
    prec number;
begin
    for rec in (
        select owner, table_name, column_name
        from all_tab_columns
        where owner in ('SCOTT', 'HR')
        and data_type = 'NUMBER'
        and data_precision is null
    )
    loop
        execute immediate
            'select max(length(translate(to_char(' || rec.column_name ||
            '), ''0-.'', ''0'')))
             from ' || rec.owner || '.' || rec.table_name
            into prec;
        dbms_output.put_line('owner: ' || lpad(rec.owner, 12, ' ') ||
            ' table name: ' || lpad(rec.table_name, 12, ' ') ||
            ' column_name: ' || lpad(rec.column_name, 12, ' ') ||
            ' precision: ' || prec);
    end loop;
end;
/
owner: HR table name: REGIONS column_name: REGION_ID precision: 1
owner: HR table name: COUNTRIES column_name: REGION_ID precision: 1
owner: SCOTT table name: SALGRADE column_name: GRADE precision: 1
owner: SCOTT table name: SALGRADE column_name: LOSAL precision: 4
owner: SCOTT table name: SALGRADE column_name: HISAL precision: 4
PL/SQL procedure successfully completed.
EDIT
Here are several additional points (mostly, corrections) based on extended conversations with Sayan Malakshinov in comments to my answer and to his.
Most importantly, even if we can figure out max precision of numeric columns, that doesn't seem directly related to the ultimate goal of this whole thing, which is to determine the correct Postgre data types for the existing Oracle columns. For example in Postgre, unlike Oracle, it is important to distinguish between integer and non-integer. Unless scale is explicitly 0 in Oracle, we don't know that the column is "integers only"; we could find that out, through a similar dynamic SQL approach, but we would be checking for non-integer values, not precision.
Various corrections: My query is careless with regard to quoted identifiers (schema name, table name, column name). See the proper use of double-quotes in the dynamic query in Sayan's answer; my dynamic query should be modified to use double-quotes in the same way his does.
In my approach I pass numbers through TO_CHAR and then remove minus sign and decimal period. Of course, one's system may use comma, or other symbols, for decimal separator; the safer approach is to remove everything that is not a digit. That can be done with
translate(col_name, '0123456789' || col_name, '0123456789')
The query also doesn't handle very large or very small numbers, which can be stored in the Oracle database, but can only be represented in scientific notation when passed through TO_CHAR().
In any case, since "max precision" doesn't seem directly related to the ultimate goal of mapping to correct data types in Postgre, I am not changing the code - leaving it in the original form.
Thanks to Sayan for pointing out all these issues.
One more thing - *_TAB_COLUMNS contains information about view columns too; very likely those should be ignored for the task at hand. Very easy to do, as long as we realize it needs to be done.
Reading that AWS article carefully, and since both previous answers (including mine) use a rough estimate (length + to_char without a format model, and vsize, operate on decimal length, not bytes), I decided to write another full answer.
Look at this simple example:
with
function f_bin(x number) return varchar2 as
bi binary_integer;
e_overflow exception;
pragma exception_init(e_overflow, -1426);
begin
bi:=x;
return case when bi=x then 'ok' else 'error' end;
exception when e_overflow then return 'error';
end;
function f_check(x number, f varchar2) return varchar2 as
begin
return case when to_number(to_char(abs(x),f),f) = abs(x) then 'ok' else 'error' end;
exception when VALUE_ERROR then return 'error';
end;
a(a1) as (
select * from table(sys.odcinumberlist(
1,
0.1,
-0.1,
-7,
power(2,15)-1,
power(2,16)-1,
power(2,31)-1,
power(2,32)-1
))
)
select
a1,
f_check(a1,'fm0XXX') byte2,
f_check(a1,'fm0XXXXXXX') byte4,
f_bin(a1) ff_signed_binary_int,
to_char(abs(a1),'fm0XXXXXXXXXXXXXXX') f_byte8,
f_check(a1,'fm0XXXXXXXXXXXXXXX') byte8,
vsize(a1) vs,
dump(a1) dmp
from a;
Result:
        A1 BYTE2      BYTE4      FF_SIGNED_ F_BYTE8          BYTE8              VS DMP
---------- ---------- ---------- ---------- ---------------- ---------- ---------- ----------------------------------------
         1 ok         ok         ok         0000000000000001 ok                  2 Typ=2 Len=2: 193,2
        .1 error      error      error      0000000000000000 error               2 Typ=2 Len=2: 192,11
       -.1 error      error      error      0000000000000000 error               3 Typ=2 Len=3: 63,91,102
        -7 ok         ok         ok         0000000000000007 ok                  3 Typ=2 Len=3: 62,94,102
     32767 ok         ok         ok         0000000000007FFF ok                  4 Typ=2 Len=4: 195,4,28,68
     65535 ok         ok         ok         000000000000FFFF ok                  4 Typ=2 Len=4: 195,7,56,36
2147483647 error      ok         ok         000000007FFFFFFF ok                  6 Typ=2 Len=6: 197,22,48,49,37,48
4294967295 error      ok         error      00000000FFFFFFFF ok                  6 Typ=2 Len=6: 197,43,95,97,73,96
Here I used PL/SQL functions for readability and to make it clearer.
Function f_bin casts the input number parameter to a PL/SQL binary_integer (signed int4) and compares the result with the input parameter, i.e. it checks whether accuracy is lost. The declared exception shows that this can raise exception 1426, "numeric overflow".
Function f_check does a double conversion to_number(to_char(...)) of the input value and checks if it's still equal to the input value. Here I use a hexadecimal format mask (XX = 1 byte), so it checks whether an input number can fit in this format mask. The hexadecimal format model works with non-negative numbers only, so we need to use abs() here.
F_BYTE8 shows the value formatted with the same mask used by the BYTE8 column, so you can easily see the loss of accuracy there.
All the above was just for readability, but we can do the same using pure SQL:
with
a(a1) as (
select * from table(sys.odcinumberlist(
1,
0.1,
-0.1,
-7,
power(2,15)-1,
power(2,16)-1,
power(2,31)-1,
power(2,32)-1
))
)
select
a1,
case when abs(a1) = to_number(to_char(abs(a1),'fmXXXXXXXXXXXXXXX') default null on conversion error,'fmXXXXXXXXXXXXXXX')
then ceil(length(to_char(abs(a1),'fmXXXXXXXXXXXXXXX'))/2)
else -1
end xx,
vsize(a1) vs,
dump(a1) dmp
from a;
Result:
        A1         XX         VS DMP
---------- ---------- ---------- ----------------------------------------
         1          1          2 Typ=2 Len=2: 193,2
        .1         -1          2 Typ=2 Len=2: 192,11
       -.1         -1          3 Typ=2 Len=3: 63,91,102
        -7          1          3 Typ=2 Len=3: 62,94,102
     32767          2          4 Typ=2 Len=4: 195,4,28,68
     65535          2          4 Typ=2 Len=4: 195,7,56,36
2147483647          4          6 Typ=2 Len=6: 197,22,48,49,37,48
4294967295          4          6 Typ=2 Len=6: 197,43,95,97,73,96
As you can see, here I return -1 in case of conversion errors to byte8, and the number of non-zero bytes otherwise.
Obviously it can be simplified even more: you can just check the range limits and that x = trunc(x) or mod(x,1) = 0.
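For instance, a sketch of that simpler check against the same kind of test list (the byte boundaries are the usual signed integer ranges):
with a(a1) as (
  select * from table(sys.odcinumberlist(1, 0.1, -7, power(2,15)-1, power(2,31)-1, power(2,32)-1))
)
select a1,
       case
         when a1 <> trunc(a1)                           then -1  -- not an integer at all
         when a1 between -power(2,15) and power(2,15)-1 then 2
         when a1 between -power(2,31) and power(2,31)-1 then 4
         when a1 between -power(2,63) and power(2,63)-1 then 8
         else -1
       end as int_bytes
from a;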
Looks like that is what you need:
VSIZE returns the number of bytes in the internal representation of expr. If expr is null, then this function returns null.
https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/VSIZE.html
In Oracle INTEGER is just a number(*,0): http://orasql.org/2012/11/10/differences-between-integerint-in-sql-and-plsql/
select
owner,table_name,column_name,
data_type,data_length,data_precision,data_scale,avg_col_len
,x.vs
from (select/*+ no_merge */ c.*
from dba_tab_columns c
where data_type='NUMBER'
and owner not in (select username from dba_users where ORACLE_MAINTAINED='Y')
) c
,xmltable(
'/ROWSET/ROW/VSIZE'
passing dbms_xmlgen.getxmltype('select nvl(max(vsize("'||c.column_name||'")),0) as VSIZE from "'||c.owner||'"."'||c.table_name||'"')
columns vs int path '.'
) x
;
Update: if you look at the Oracle internal number format (exponent + BCD mantissa) and at the result of the dump(x) function, you can see that Oracle stores numbers as decimal values (4 bits per decimal digit, 2 digits per byte), so for such small ranges you can just take their maximum BCD mantissa plus 1 exponent byte as a rough estimate.
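For example (the dump value below is the same one shown in the result table above):
select dump(65535) from dual;
-- Typ=2 Len=4: 195,7,56,36
-- 195 is the exponent byte; 7,56,36 are the BCD mantissa bytes, each storing a pair of
-- decimal digits as value+1 (06, 55, 35 -> 65535), i.e. 1 exponent byte + 3 mantissa bytes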

Is there a hint to generate execution plan ignoring the existing one from shared pool?

Is there a hint to generate execution plan ignoring the existing one from the shared pool?
There is not a hint to create an execution plan that ignores plans in the shared pool. A more common way of phrasing this question is: how do I get Oracle to always perform a hard parse?
There are a few weird situations where this behavior is required. It would be helpful to fully explain your reason for needing this, as the solution varies depending on why you need it.
1. Strange performance problem. Oracle performs some dynamic re-optimization of SQL statements after the first run, like adaptive cursor sharing and cardinality feedback. In the rare case when those features backfire you might want to disable them.
2. Dynamic query. You have a dynamic query that used Oracle data cartridge to fetch data in the parse step, but Oracle won't execute the parse step because the query looks static to Oracle.
3. Misunderstanding. Something has gone wrong and this is an XY problem.
Solutions
The simplest way to solve this problem is to use Thorsten Kettner's solution of changing the query each time.
If that's not an option, the second simplest solution is to flush the query from the shared pool, like this:
--This only works one node at a time.
begin
for statements in
(
select distinct address, hash_value
from gv$sql
where sql_id = '33t9pk44udr4x'
order by 1,2
) loop
sys.dbms_shared_pool.purge(statements.address||','||statements.hash_value, 'C');
end loop;
end;
/
If you have no control over the SQL, and need to fix the problem using a side-effect style solution, Jonathan Lewis and Randolf Geist have a solution using Virtual Private Database, that adds a unique predicate to each SQL statement on a specific table. You asked for something weird, here's a weird solution. Buckle up.
-- Create a random predicate for each query on a specific table.
create table hard_parse_test_rand as
select * from all_objects
where rownum <= 1000;
begin
dbms_stats.gather_table_stats(null, 'hard_parse_test_rand');
end;
/
create or replace package pkg_rls_force_hard_parse_rand is
function force_hard_parse (in_schema varchar2, in_object varchar2) return varchar2;
end pkg_rls_force_hard_parse_rand;
/
create or replace package body pkg_rls_force_hard_parse_rand is
function force_hard_parse (in_schema varchar2, in_object varchar2) return varchar2
is
s_predicate varchar2(100);
n_random pls_integer;
begin
n_random := round(dbms_random.value(1, 1000000));
-- s_predicate := '1 = 1';
s_predicate := to_char(n_random, 'TM') || ' = ' || to_char(n_random, 'TM');
-- s_predicate := 'object_type = ''TABLE''';
return s_predicate;
end force_hard_parse;
end pkg_rls_force_hard_parse_rand;
/
begin
DBMS_RLS.ADD_POLICY (USER, 'hard_parse_test_rand', 'hard_parse_policy', USER, 'pkg_rls_force_hard_parse_rand.force_hard_parse', 'select');
end;
/
alter system flush shared_pool;
You can see the hard-parsing in action by running the same query multiple times:
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
Now there are three entries in GV$SQL for each execution. There's some odd behavior in Virtual Private Database that parses the query multiple times, even though the final text looks the same.
select *
from gv$sql
where sql_text like '%hard_parse_test_rand%'
and sql_text not like '%quine%'
order by 1;
I think there is no hint indicating that Oracle should find a new execution plan every time it runs the query.
This is something we'd want for select * from mytable where is_active = :active, with is_active being 1 for very few rows and 0 for maybe billions of other rows. We'd want an index access for :active = 1 and a full table scan for :active = 0 then. Two different plans.
As far as I know, Oracle uses bind variable peeking in later versions, so with a look at the statistics it really comes up with different execution plans for different bind variable content. But in older versions it did not, and thus we'd want some hint saying "make a new plan" there.
Oracle only re-used an execution plan for exactly the same query; it sufficed to add a mere blank to get a new plan. Hence a solution might be to generate the query every time you want to run it, with a random number included in a comment:
select /* 1234567 */ * from mytable where is_active = :active;
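Generating that from PL/SQL is straightforward (a sketch using the mytable example above):
declare
  rc    sys_refcursor;
  l_sql varchar2(200);
begin
  -- a different random comment each run means a different cursor, hence a fresh hard parse
  l_sql := 'select /* ' || to_char(round(dbms_random.value(1, 1000000))) ||
           ' */ * from mytable where is_active = :active';
  open rc for l_sql using 1;
  -- fetch from rc as usual, then:
  close rc;
end;
/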
Or just don't use bind variables, if this is the problem you want to address:
select * from mytable where is_active = 0;
select * from mytable where is_active = 1;

upgrading 10g to 11g checklist

There are plenty of stored procedures on the 10g platform (almost 500 SPs).
Each SP might have loops, fetches, etc.
I'd like to ask if there is a good method to check all the SPs which currently run on 10g and guarantee that they work on 11g.
I have one development server which is 10g and another development server which is 11g.
I can use both of them to test the above.
For instance, I know that on 10g, if you use a loop, update statements issued during the loop do not affect the loop's data, but on 11g they do.
There might be more cases that I have to consider. Please tell me if you have any brilliant ideas; otherwise I will check them one by one manually, which takes a lot of time, and manual checking can be unreliable.
Important note: it is said that if you select some data from a table (or tables) and use it in a loop, then updating and committing during the loop affects the data already selected in the cursor (on 11g), but this did not happen on 10g. Please correct me if you have heard something like that.
The example case:
CREATE TABLE vty_musteri(
musterino NUMBER NOT NULL,
subeadi VARCHAR2(61),
kayitzamani VARCHAR2(20)
);
INSERT INTO vty_musteri (musterino, subeadi, kayitzamani )
VALUES (12345, 'AMSTERDAM', '05/30/2012 15:11:13');
COMMIT;
CREATE UNIQUE INDEX vty_musteri_idx ON vty_musteri (musterino);
SELECT * FROM vty_musteri;
CREATE OR REPLACE PROCEDURE krd_upd_silseomusteri_sp(RC1 in out SYS_REFCURSOR) AS
v_musterino NUMBER := 12345;
BEGIN
OPEN RC1 FOR
SELECT m.musterino, m.subeadi, m.kayitzamani
FROM vty_musteri m
WHERE m.musterino = v_musterino;
update vty_musteri
set subeadi = 'PORTO',
kayitzamani = (SELECT TO_CHAR(SYSDATE, 'MM/DD/YYYY HH24:MI:SS')
FROM dual)
where musterino = v_musterino;
COMMIT;
END;
/
After all that, run this test in PL/SQL:
DECLARE
--test
vRecTip SYS_REFCURSOR;
TYPE vRecTipK IS RECORD(
musterino NUMBER,
subeadi VARCHAR2(61),
kayitzamani VARCHAR2(20)
);
v_SeoTip vRecTipK;
BEGIN
krd_upd_silseomusteri_sp(rc1 => vRecTip);
IF vRecTip%ISOPEN THEN
LOOP
FETCH vRecTip
INTO v_SeoTip;
EXIT WHEN vRecTip%NOTFOUND;
dbms_output.put_line('The Value : ' || v_SeoTip.musterino || ' - ' || v_SeoTip.subeadi || ' - ' || v_SeoTip.kayitzamani);
END LOOP;
END IF;
COMMIT;
END;
If you run this on 10g you will see AMSTERDAM, but on 11g it is PORTO.
To fix it, I put a hint in the SP like the following:
SELECT /*+ full(m)*/ m.musterino, m.subeadi, m.kayitzamani
Isn't it weird? Any alternative suggestions to get AMSTERDAM?
One thing we stumbled upon during a migration was that queries that weren't supposed to work on 10.x (but did anyway) no longer worked on 11.x.
This happens if you have ambiguous column references in your query.
Something like this:
SELECT name,
       f.some_col,
       b.other_col
FROM foo f
JOIN bar b ON f.id = b.fid
If the column name exists in both tables, 10.x would run the statement - which was a bug.
This bug (BugID: 6760937) was fixed and makes the statement (rightfully) fail in 11.x
Basic PLSQL structures should work exactly the same. Some pitfalls are listed here:
http://www.help2ora.com/index.php/2011/08/04/be-careful-when-migrating-difference-between-oracle-10g-and-11g/
To fix it, I put a hint in the SP like the following:
SELECT /*+ full(m) */ m.musterino, m.subeadi, m.kayitzamani
Recently I did a migration to Oracle 11g and faced a few unexpected issues. I have written a blog post on this; have a look: http://learncodewrite.blogspot.in/2017/04/migrating-to-oracle-11g-from-oracle-10g.html?m=1

ways to avoid global temp tables in oracle

We just converted our SQL Server stored procedures to Oracle procedures. The SQL Server SPs were highly dependent on session tables (INSERT INTO #table1...); these tables got converted to global temporary tables in Oracle. We ended up with around 500 GTTs for our 400 SPs.
Now we are finding out that working with GTTs in Oracle is considered a last resort because of performance and other issues.
What other alternatives are there? Collections? Cursors?
Our typical use of GTTs is like so:
Insert into GTT
INSERT INTO some_gtt_1
(column_a,
column_b,
column_c)
(SELECT someA,
someB,
someC
FROM TABLE_A
WHERE condition_1 = 'YN756'
AND type_cd = 'P'
AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
AND (lname LIKE (v_LnameUpper || '%') OR
lname LIKE (v_searchLnameLower || '%'))
AND (e_flag = 'Y' OR
it_flag = 'Y' OR
fit_flag = 'Y'));
Update the GTT
UPDATE some_gtt_1 a
SET column_a = (SELECT b.data_a FROM some_table_b b
WHERE a.column_b = b.data_b AND a.column_c = 'C')
WHERE column_a IS NULL OR column_a = ' ';
and later on get the data out of the GTT. These are just sample queries; in actuality the queries are really complex with lots of joins and subqueries.
I have a three-part question:
1. Can someone show how to transform the above sample queries to collections and/or cursors?
2. Since with GTTs you can work natively with SQL, why go away from the GTTs? Are they really that bad?
3. What should be the guidelines on when to use and when to avoid GTTs?
Let's answer the second question first:
"why go away from the GTTs? are they
really that bad."
A couple of days ago I was knocking up a proof of concept which loaded a largish XML file (~18MB) into an XMLType. Because I didn't want to store the XMLType permanently I tried loading it into a PL/SQL variable (session memory) and a temporary table. Loading it into a temporary table took five times as long as loading it into an XMLType variable (5 seconds compared to 1 second). The difference is because temporary tables are not memory structures: they are written to disk (specifically your nominated temporary tablespace).
If you want to cache a lot of data then storing it in memory will stress the PGA, which is not good if you have lots of sessions. So it's a trade-off between RAM and time.
To the first question:
"Can someone show how to transform the
above sample queries to collections
and/or cursors?"
The queries you post can be merged into a single statement:
SELECT case when a.column_a IS NULL OR a.column_a = ' '
then b.data_a
else column_a end AS someA,
a.someB,
a.someC
FROM TABLE_A a
left outer join TABLE_B b
on ( a.column_b = b.data_b AND a.column_c = 'C' )
WHERE condition_1 = 'YN756'
AND type_cd = 'P'
AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
AND (lname LIKE (v_LnameUpper || '%') OR
lname LIKE (v_searchLnameLower || '%'))
AND (e_flag = 'Y' OR
it_flag = 'Y' OR
fit_flag = 'Y');
(I have simply transposed your logic but that case() statement could be replaced with a neater nvl2(trim(a.column_a), a.column_a, b.data_a) ).
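For clarity, that variant's select list would start like this (the rest of the statement is unchanged from the query above):
SELECT nvl2(trim(a.column_a), a.column_a, b.data_a) AS someA,
       a.someB,
       a.someC
FROM   TABLE_A a
       left outer join TABLE_B b
         on ( a.column_b = b.data_b AND a.column_c = 'C' )
-- WHERE clause exactly as in the query above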
I know you say your queries are more complicated but your first port of call should be to consider rewriting them. I know how seductive it is to break a gnarly query into lots of baby SQLs stitched together with PL/SQL but pure SQL is way more efficient.
To use a collection it is best to define the types in SQL, because it gives us the flexibility to use them in SQL statements as well as PL/SQL.
create or replace type tab_a_row as object
(col_a number
, col_b varchar2(23)
, col_c date);
/
create or replace type tab_a_nt as table of tab_a_row;
/
Here's a sample function, which returns a result set:
create or replace function get_table_a
(p_arg in number)
return sys_refcursor
is
tab_a_recs tab_a_nt;
rv sys_refcursor;
begin
select tab_a_row(col_a, col_b, col_c)
bulk collect into tab_a_recs
from table_a
where col_a = p_arg;
for i in tab_a_recs.first()..tab_a_recs.last()
loop
if tab_a_recs(i).col_b is null
then
tab_a_recs(i).col_b := 'something';
end if;
end loop;
open rv for select * from table(tab_a_recs);
return rv;
end;
/
And here it is in action:
SQL> select * from table_a
2 /
COL_A COL_B COL_C
---------- ----------------------- ---------
1 whatever 13-JUN-10
1 12-JUN-10
SQL> var rc refcursor
SQL> exec :rc := get_table_a(1)
PL/SQL procedure successfully completed.
SQL> print rc
COL_A COL_B COL_C
---------- ----------------------- ---------
1 whatever 13-JUN-10
1 something 12-JUN-10
SQL>
In the function it is necessary to instantiate the type with the columns, in order to avoid the ORA-00947 exception. This is not necessary when populating a PL/SQL table type:
SQL> create or replace procedure pop_table_a
2 (p_arg in number)
3 is
4 type table_a_nt is table of table_a%rowtype;
5 tab_a_recs table_a_nt;
6 begin
7 select *
8 bulk collect into tab_a_recs
9 from table_a
10 where col_a = p_arg;
11 end;
12 /
Procedure created.
SQL>
Finally, guidelines
"What should be the guidelines on When
to use and When to avoid GTT's"
Global temp tables are very good when we need to share cached data between different program units in the same session. For instance if we have a generic report structure generated by a single function feeding off a GTT which is populated by one of several procedures. (Although even that could also be implemented with dynamic ref cursors ...)
Global temporary tables are also good if we have a lot of intermediate processing which is just too complicated to be solved with a single SQL query. Especially if that processing must be applied to subsets of the retrieved rows.
But in general the presumption should be that we don't need to use a temporary table. So
Do it in SQL unless it is too hard, in which case ...
... do it in PL/SQL variables (usually collections) unless it takes too much memory, in which case ...
... do it with a Global Temporary Table.
Generally I'd use a PL/SQL collection for storing small volumes of data (maybe a thousand rows). If the data volumes were much larger, I'd use a GTT so that they don't overload the process memory.
So I might select a few hundred rows from the database into a PL/SQL collection, then loop through them to do some calculation/delete a few or whatever, then insert that collection into another table.
If I was dealing with hundreds of thousands of rows, I would try to push as much of the 'heavy lifting' processing into large SQL statements. That may or may not require GTT.
You can use SQL-level collection objects as something that translates quite easily between SQL and PL/SQL:
create type typ_car is object (make varchar2(10), model varchar2(20), year number(4));
/
create type typ_coll_car is table of typ_car;
/
select * from table (typ_coll_car(typ_car('a','b',1999), typ_car('A','Z',2000)));
MAKE MODEL YEAR
---------- -------------------- ---------------
a b 1,999.00
A Z 2,000.00
declare
v_car1 typ_car := typ_car('a','b',1999);
v_car2 typ_car := typ_car('A','Z',2000);
t_car typ_coll_car := typ_coll_car();
begin
t_car := typ_coll_car(v_car1, v_car2);
FOR i in (SELECT * from table(t_car)) LOOP
dbms_output.put_line(i.year);
END LOOP;
end;
/
