How can I get several Oracle rows into one?

In Oracle, how can I get the following results in a single query?
Table 1
Customer | Order_Number
1        | 1
1        | 2
2        | 1
Table 2
Customer | Order_Number | Employee | Tag
1        | 1            | Bob      | on hold
1        | 1            | Larry    | shipped
1        | 2            | Larry    | shipped
Results
Customer | Order_Number | Tags
1        | 1            | Bob - on hold; Larry - shipped
1        | 2            | Larry - shipped
2        | 1            | (empty or null)
I'm getting tripped up on returning the tags as a single string.

You have not mentioned your DB version, so the answer depends entirely on which version you are on.
If you are on 11g or above, use LISTAGG.
If you are on a pre-11g release, you have the following options:
ROW_NUMBER() and SYS_CONNECT_BY_PATH in Oracle 9i
the COLLECT function in Oracle 10g
the STRAGG function suggested by Tom Kyte here http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:2196162600402
Note: never use WM_CONCAT in a production system; it is undocumented. Just raise an SR with Oracle Support, say you used it, and see the response. It also no longer exists in 12c.
More examples here: http://www.oracle-base.com/articles/misc/string-aggregation-techniques.php
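On 11g and up, a LISTAGG query for the tables in the question might look like this. It is a sketch only: the table names t1 and t2 stand in for Table 1 and Table 2, which the question does not name.

```sql
-- Sketch, assuming Table 1 is named t1 and Table 2 is named t2.
-- The LEFT JOIN keeps customer 2 / order 1 in the result; the CASE
-- guards against '' || ' - ' || '' producing a stray ' - ' for rows
-- with no matching tag, so that group aggregates to NULL instead.
SELECT t1.customer,
       t1.order_number,
       LISTAGG(CASE WHEN t2.employee IS NOT NULL
                    THEN t2.employee || ' - ' || t2.tag
               END, '; ')
         WITHIN GROUP (ORDER BY t2.employee) AS tags
FROM t1
LEFT JOIN t2
  ON  t2.customer     = t1.customer
  AND t2.order_number = t1.order_number
GROUP BY t1.customer, t1.order_number
ORDER BY t1.customer, t1.order_number;
```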

You're in need of LISTAGG.
If your Oracle version is too old for it, it can be replaced with a user-defined aggregate function, WM_CONCAT, or SYS_CONNECT_BY_PATH.

Related

Oracle Applications - How to get the value of zd_edition_name

In Oracle Applications 12c release 1 there is a new column, zd_edition_name, that forms part of many primary keys. It relates to editions, which let you keep the database up during changes: you have two editions, make changes to the non-live one, and simply swap over when you are done (my limited understanding - I am not a DBA).
My question is how to get the value of zd_edition_name, since it is now part of the primary key, and also because tables like fnd_descr_flex_col_usage_tl bring back two rows instead of one if you don't pass the value of zd_edition_name.
Also, what does the zd stand for?
EBS and Edition-Based Redefinition and Online Patching
The column zd_edition_name is just a component of the edition-based redefinition feature of an Oracle 11gR2 (or greater) database, as you have indicated.
Oracle Applications does not leverage this edition-based redefinition database feature until EBS 12.2.
The APPS-owned synonym will display the run-time value, SET1 or SET2; it will always be a single value. For EBS 12.1, I would expect the run-time value to be SET1.
APPS#db> select zd_edition_name
         from fnd_descr_flex_col_usage_tl
         group by zd_edition_name;

ZD_EDITION_NAME
---------------
SET1
With the editionable view and the table, we do not have that restriction:
APPS#db> SELECT zd_edition_name
         FROM applsys.fnd_descr_flex_col_usage_tl
         GROUP BY zd_edition_name;

ZD_EDITION_NAME
---------------
SET2
SET1
In EBS 12.2, you can identify the active file system (which should correspond to SET1/SET2) by logging in to the Oracle Apps server(s) and echoing the environment variables:
$FILE_EDITION = patch
$RUN_BASE = /u01/R122_EBS/fs1
$PATCH_BASE = /u01/R122_EBS/fs2
When querying the APPS-owned synonym, however, it is unnecessary to know the value of ZD_EDITION_NAME: the synonym always resolves to the run edition's value.
You can view the editionable objects associated with a table with a query like this:
APPS#db> VAR b_object_name varchar2(30);
APPS#db> EXEC :b_object_name := 'FND_DESCR_FLEX_COL_USAGE_TL';

PL/SQL procedure successfully completed.

APPS#db> SELECT ao.owner,
                ao.object_name,
                ao.object_type
         FROM all_objects ao
         WHERE 1 = 1
           AND owner IN ('APPS', 'APPLSYS')
           AND ao.object_name IN (:b_object_name,
                                  substr(:b_object_name, 1, 29) || '#');

OWNER    OBJECT_NAME                   OBJECT_TYPE
APPLSYS  FND_DESCR_FLEX_COL_USAGE_TL   TABLE
APPLSYS  FND_DESCR_FLEX_COL_USAGE_TL#  VIEW
APPS     FND_DESCR_FLEX_COL_USAGE_TL   SYNONYM
Here is the list of editions existing in an EBS instance:
APPS#db> SELECT level,
                de.edition_name,
                de.parent_edition_name
         FROM dba_editions de
         START WITH de.edition_name = 'ORA$BASE'
         CONNECT BY PRIOR de.edition_name = de.parent_edition_name
         ORDER BY de.edition_name;

LEVEL  EDITION_NAME     PARENT_EDITION_NAME
    1  ORA$BASE
    2  V_20160703_2120  ORA$BASE
    3  V_20160708_1723  V_20160703_2120
...
   29  V_20180117_1118  V_20171206_1115
   30  V_20180130_0107  V_20180117_1118
For a 12.1 EBS environment, I would expect the starting edition, ORA$BASE, to be the only edition.

Most efficient way to parse a Comma Separated list in ApEx

I am still working on my first solo Oracle APEX (Application Express) application, so I am sure this will be old hat for some of you. I tried to look up what I want to do, but I am not sure what to even search for. If there is already a thread that answers this, then I apologize for the duplication, but I have searched here for about two hours trying to figure it out.
I am open minded to a solution since I have not already built anything for this part of the application yet, so I am not locked into one set way. If there is a better way, please let me know.
I want to obtain a comma separated (or semi-colon, or colon separated) list from the user. I then want to take that data and write it to a table with each value in its own row.
Example of input:
X12345678, X22345678, X32345678 (and so on)
The numbers that are input will then be looked up on a different table because we use non-identifying PIDM numbers (anyone who has used Ellucian's Banner will understand). The select statement to retrieve this number is crazy simple:
Select spriden_pidm
from spriden
where spriden_change_ind is null
and spriden_id = :P5_STU_ID
Then, it will be stored in a table thusly:
Example of data storage:
ID | Semester | Creating User | Created Date | Data Origin
012345678 | 201640 | JDOE1 | sysdate | ApEx : 130
022345678 | 201640 | JDOE1 | sysdate | ApEx : 130
And so forth.
Question 1: I am presuming that a loop will be the best way to accomplish this using regular expressions. Would that be a correct presumption?
Question 2: Does ApEx already have something built in that would process this better and/or faster?
ApEx version 5.0, Oracle 12c
Use APEX_UTIL.string_to_table, passing the comma as the second (delimiter) parameter.
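A minimal sketch of that approach, splitting the page item on commas and walking the resulting array (the dbms_output call marks where your INSERT would go):

```sql
-- Sketch: split :P5_NEW_IDS on commas, one array element per ID.
declare
  v_ids apex_application_global.vc_arr2;
begin
  v_ids := apex_util.string_to_table(:P5_NEW_IDS, ',');
  for i in 1 .. v_ids.count loop
    dbms_output.put_line(trim(v_ids(i)));  -- replace with your INSERT ... SELECT
  end loop;
end;
```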
As mentioned, since we were working against the clock on a deployment, we ended up writing a loop similar to what APEX_UTIL.string_to_table (thanks, Rob van Wijk) accomplishes:
declare
  v_id varchar2(4000) := :P5_NEW_IDS;
begin
  for i in 1 .. regexp_count(v_id, ',') + 1 loop
    insert into zresadddrop.zsrintl (zsrintl_pidm,
                                     zsrintl_term_code_eff,
                                     zsrintl_created_by,
                                     zsrintl_created_date,
                                     zsrintl_data_origin)
      select distinct spriden_pidm,
                      :P5_Term_Code,
                      :app_user,
                      sysdate,
                      'ApEx: ' || :app_id
      from spriden
      where spriden_change_ind is null
        and spriden_id = trim(zgeneral.get_token(v_id, i));
  end loop;
  commit;
end;

Difference in oracle 11.2.0.1.0 and oracle 11.2.0.2.0 while inserting a value into the table using sequence

In Oracle 11.2.0.1.0:
1) I created a table.
create table m1(id number(5,2), version number(5,2), primary key (id));
2) I created a sequence.
CREATE SEQUENCE m1_id_sq;
3) I inserted values into the table.
insert into m1(id, version) values (m1_id_sq.nextval, 1);
4) output.
id version
-------------
2 1
* I understand that the reason for id=2 is the deferred_segment_creation feature introduced in 11.2.0.1.0.
* I created a user in Oracle and ran the above commands as that user, not as a master/administrative user.
Now I follow the same steps
in Oracle 11.2.0.2.0,
but the output I got is,
id version
-------------
1 1
Please explain why id=1 in Oracle 11.2.0.2.0 whereas id=2 in Oracle 11.2.0.1.0. Many thanks!
The problem may have to do with the fact that NOORDER is the default for Oracle sequences, especially if you're running a RAC environment.
http://docs.oracle.com/cd/B12037_01/server.101/b10759/statements_6014.htm
I've learned that with sequences, if I want to guarantee they are sequential, I usually have to add the following keywords when creating the sequence:
CREATE SEQUENCE m1_id_sq ORDER NOCACHE;
Edit to refer to above comments:
As noted by Alex Poole in the comments above:
"This shouldn't really matter anyway - you'll get gaps in sequences
for other reasons so you shouldn't rely on it starting with 1"
The NOORDER being the default for sequences explains this issue.
Alex Poole also noted a known issue, Oracle Note 1050193.1 (requires an Oracle Support account), related to deferred_segment_creation=TRUE.
ThinkJet also refers to the following articles:
http://orawin.info/blog/2010/04/25/new-features-new-defaults-new-side-effects/
http://orawin.info/blog/2011/11/17/new-defaults-old-side-effects/

Oracle EXECUTE IMMEDIATE changes explain plan of query

I have a stored procedure that I am calling using EXECUTE IMMEDIATE. The issue that I am facing is that the explain plan is different when I call the procedure directly vs when I use EXECUTE IMMEDIATE to call the procedure. This is causing the execution time to increase 5x. The main difference between the plans is that when I use execute immediate the optimizer isn't unnesting the subquery (I'm using a NOT EXISTS condition). We are using Rule Based Optimizer here at work for most queries but this one has a hint to use an index so the CBO is being used (however, we don't collect stats on tables). We are running Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production.
Example:
Fast:
begin
package.procedure;
end;
/
Slow:
begin
execute immediate 'begin package.' || proc_name || '; end;';
end;
/
Query:
SELECT /*+ INDEX(A IDX_A_1) */
a.store_cd,
b.itm_cd itm_cd,
CEIL ( (new_date - a.dt) / 7) week_num,
SUM (a.qty * b.demand_weighting * b.CONVERT) qty
FROM a
INNER JOIN
b
ON (a.itm_cd = b.old_itm_cd)
INNER JOIN
(SELECT g.store_grp_cd, g.store_cd
FROM g, h
WHERE g.store_grp_cd = h.fdo_cd AND h.fdo_type = '1') d
ON (a.store_cd = d.store_cd AND b.store_grp_cd = d.store_grp_cd)
CROSS JOIN
dow
WHERE a.dt BETWEEN dow.new_date - 91 AND dow.new_date - 1
AND a.sls_wr_cd = 'W'
AND b.demand_type = 'S'
AND b.old_itm_cd IS NOT NULL
AND NOT EXISTS
(SELECT
NULL
FROM f
WHERE f.store_grp_cd = a.store_cd
AND b.old_itm_cd = f.old_itm_cd)
GROUP BY a.store_cd, b.itm_cd, CEIL ( (dow.new_date - a.dt) / 7)
Good Explain Plan:
OPERATION OPTIONS OBJECT_NAME OBJECT_TYPE ID PARENT_ID
SELECT STATEMENT 0
SORT GROUP BY 1 0
NESTED LOOPS 2 1
HASH JOIN ANTI 3 2
TABLE ACCESS BY INDEX ROWID H 4 3
NESTED LOOPS 5 4
NESTED LOOPS 6 5
NESTED LOOPS 7 6
TABLE ACCESS FULL B 8 7
TABLE ACCESS BY INDEX ROWID A 9 7
INDEX RANGE SCAN IDX_A_1 UNIQUE 10 9
INDEX UNIQUE SCAN G UNIQUE 11 6
INDEX RANGE SCAN H_UK UNIQUE 12 5
TABLE ACCESS FULL F 13 3
TABLE ACCESS FULL DOW 14 2
Bad Explain Plan:
OPERATION OPTIONS OBJECT_NAME OBJECT_TYPE ID PARENT_ID
SELECT STATEMENT 0
SORT GROUP BY 1 0
NESTED LOOPS 2 1
NESTED LOOPS 3 2
NESTED LOOPS 4 3
NESTED LOOPS 5 4
TABLE ACCESS FULL B 6 5
TABLE ACCESS BY INDEX ROWID A 7 5
INDEX RANGE SCAN IDX_A_1 UNIQUE 8 7
TABLE ACCESS FULL F 9 8
INDEX UNIQUE SCAN G UNIQUE 10 4
TABLE ACCESS BY INDEX ROWID H 11 3
INDEX RANGE SCAN H_UK UNIQUE 12 11
TABLE ACCESS FULL DOW 13 2
In the bad explain plan the subquery is not being unnested. I was able to reproduce the bad plan by adding a no_unnest hint to the subquery; however, I couldn't reproduce the good plan using the unnest hint (when running the procedure using execute immediate). Other hints are being considered by the optimizer when using the execute immediate just not the unnest hint.
This issue only occurs when I use execute immediate to call the procedure. If I use execute immediate on the query itself it uses the good plan.
You've used ANSI join syntax which will force the use of the CBO
(see http://jonathanlewis.wordpress.com/2008/03/20/ansi-sql/)
"Once you’re running cost-based with no statistics, there are all sorts of little things that might be enough to cause unexpected behaviour in execution plan."
There are a few steps you can take. The first is a 10046 trace.
Ideally I would start a trace on a single session that executes both the 'good' and 'bad' queries. The trace file should contain both queries with a hard parse. I'd be interested in WHY the second has a hard parse, as, if it has the same SQL structure and the same parsing user, there's not much reason for a second hard parse. Using the same session also means there are no oddities from different memory settings, etc.
The SQL doesn't show any use of variables, so there should be no datatype issues. All columns are 'tied' to a table alias, so there seems no scope for confusing variables with columns.
The more extreme step is a 10053 trace. There's a viewer posted on Jonathan Lewis' site. That can allow you to get into the guts of the optimization to try to work out the reason for the differing plans.
In the wider view, 9i is pretty much dead and the RBO is pretty much dead. I'd be seriously evaluating a project to move the app to CBO. There are features that will force the CBO to be used and without stats this manner of problem will keep cropping up.
It turns out that this is a known bug in Oracle 9i. Below is the text from a bug report.
Execute Immediate Gives Bad Query Plan [ID 398605.1]
Modified 09-NOV-2006 Type PROBLEM Status MODERATED
This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process, and therefore has not been subject to an independent technical review.
Applies to:
Oracle Server - Enterprise Edition - Version: 9.2.0.6
This problem can occur on any platform.
Symptoms
When a procedure is run through execute immediate, the plan produced is different than when the procedure is run directly.
Cause
The cause of this problem has been identified and verified in unpublished Bug 2906307.
It is caused by the fact that SQL statements issued from PL/SQL at a recursive depth greater than 1 may get different execution plans from those issued directly from SQL.
There are multiple optimizer features affected by this bug (for example _unnest_subquery, _pred_move_around=true).
HINTS related to the features may also be ignored.
This bug covers the same basic issue as Bug 2871645 (complex view merging does not occur for recursive SQL > depth 1), but for features other than complex view merging.
Bug 2906307 is closed as a duplicate of Bug 3182582, SQL STATEMENT RUN SLOWER IN DBMS_JOB THAN IN SQL*PLUS.
It is fixed in 10.2.
Solution
For insert statements use hint BYPASS_RECURSIVE_CHECK:
INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO table
References
BUG:2871645 - COMPLEX VIEW MERGING DOES NOT OCCUR FOR RECURSIVE SQL > DEPTH 1
BUG:3182582 - SQL STATEMENT RUN SLOWER IN DBMS_JOB THAN IN SQL*PLUS

Hidden features in Oracle

I enjoyed the answers and questions about hidden features in sql server
What can you tell us about Oracle?
Hidden tables, inner workings of ..., secret stored procs, package that has good utils...
Since Apex is now part of every Oracle database, these Apex utility functions are useful even if you aren't using Apex:
declare
  v_array  apex_application_global.vc_arr2;
  v_string varchar2(2000);
begin
  -- Convert delimited string to array
  v_array := apex_util.string_to_table('alpha,beta,gamma,delta', ',');
  for i in 1 .. v_array.count loop
    dbms_output.put_line(v_array(i));
  end loop;

  -- Convert array to delimited string
  v_string := apex_util.table_to_string(v_array, '|');
  dbms_output.put_line(v_string);
end;
/
alpha
beta
gamma
delta
alpha|beta|gamma|delta

PL/SQL procedure successfully completed.
"Full table scans are not always bad. Indexes are not always good."
An index-based access method is less efficient at reading rows than a full scan when you measure it in terms of rows accessed per unit of work (typically per logical read). However, many tools will interpret a full table scan as a sign of inefficiency.
Take an example where you are reading a few hundred invoices from an invoice table and looking up a payment method in a small lookup table. Using an index to probe the lookup table for every invoice probably means three or four logical I/Os per invoice. However, a full scan of the lookup table in preparation for a hash join from the invoice data would probably require only a couple of logical reads, and the hash join itself would complete in memory at almost no cost at all.
However, many tools would look at this, see "full table scan", and tell you to try to use an index. If you do so, you may have just de-tuned your code.
Incidentally, over-reliance on indexes, as in the above example, causes the "Buffer Cache Hit Ratio" to rise. This is why the BCHR is mostly nonsense as a predictor of system efficiency.
The cardinality hint is mostly undocumented.
explain plan for
select /*+ cardinality(#inner 5000) */ *
from (select /*+ qb_name(inner) */ * from dual)
/
select * from table(dbms_xplan.display)
/
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 5000 | 10000 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------
The Buffer Cache Hit Ratio is virtually meaningless as a predictor of system efficiency
You can view table data as of a previous time using Flashback Query, with certain limitations.
Select *
from my_table as of timestamp(timestamp '2008-12-01 15:21:13')
11g has a whole new feature set around preserving historical changes more robustly.
Frequent rebuilding of indexes is almost always a waste of time.
wm_concat works like MySQL's group_concat, but it is undocumented.
With this data:
car      | maker
---------+------
Corvette | Chevy
Taurus   | Ford
Impala   | Chevy
Aveo     | Chevy
this query:
select wm_concat(car) Cars, maker
from cars
group by maker
gives you:
Cars                   | maker
-----------------------+------
Corvette, Impala, Aveo | Chevy
Taurus                 | Ford
The OVERLAPS predicate is undocumented.
http://oraclesponge.wordpress.com/2008/06/12/the-overlaps-predicate/
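For reference, the predicate compares two periods and is true when they intersect. A sketch of the syntax (it is undocumented, so verify the behaviour on your own version before relying on it):

```sql
-- Undocumented OVERLAPS predicate: true when the two date ranges intersect.
SELECT 'ranges overlap' AS result
FROM   dual
WHERE  (DATE '2008-01-01', DATE '2008-06-30')
       OVERLAPS
       (DATE '2008-06-01', DATE '2008-12-31');
```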
I just found out about the pseudo-column ORA_ROWSCN. If you don't set your table up for this, the pseudo-column gives you the block SCN. This can be really useful for the emergency "Oh crap, I have no auditing on this table and wonder if someone has changed the data since yesterday."
But even better: if you create the table with ROWDEPENDENCIES on, it puts the SCN of the last change on every row. This will help you avoid a "lost edit" problem without having to include every column in your query.
In other words, when your app grabs a row for user modification, also select the ORA_ROWSCN. Then, when you post the user's edits, include ora_rowscn = :v_rscn in addition to the unique key in the WHERE clause. If someone has touched the row since you grabbed it (the lost edit), the update will match zero rows, since the ora_rowscn will have changed.
So cool.
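A sketch of that optimistic-locking pattern; the emp table and column names here are hypothetical, and the table is assumed to have been created with ROWDEPENDENCIES:

```sql
-- 1) Read the row and remember its row SCN:
SELECT ora_rowscn, emp_name
FROM   emp                  -- hypothetical table, created with ROWDEPENDENCIES
WHERE  emp_id = :id;        -- stash ora_rowscn in :v_rscn

-- 2) Post the edit only if nobody touched the row in between:
UPDATE emp
SET    emp_name   = :new_name
WHERE  emp_id     = :id
AND    ora_rowscn = :v_rscn;  -- 0 rows updated => lost edit detected
```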
If you get the value of the PASSWORD column in DBA_USERS, you can back up and restore passwords without knowing them:
ALTER USER xxx IDENTIFIED BY VALUES 'xxxx';
Bypass the buffer cache and read straight from disk using direct path reads.
alter session set "_serial_direct_read"=true;
Causes a tablespace (9i) or fast object (10g+) checkpoint, so careful on busy OLTP systems.
More undocumented stuff at http://awads.net/wp/tag/undocumented/
Warning: Use at your own risk.
I don't know if this counts as hidden, but I was pretty happy when I saw this way of quickly seeing what happened with a SQL statement you are tuning.
SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM DUAL;
SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'RUNSTATS_LAST'));
PLAN_TABLE_OUTPUT
-----------------------------------------------------
SQL_ID 5z36y0tq909a8, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM DUAL
Plan hash value: 272002086
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
---------------------------------------------------------------------------------------------
| 1 | TABLE ACCESS FULL| DUAL | 1 | 1 | 1 |00:00:00.02 | 3 | 2 |
---------------------------------------------------------------------------------------------
12 rows selected.
Where:
E-Rows is estimated rows.
A-Rows is actual rows.
A-Time is actual time.
Buffers is actual buffers.
Where the estimated plan varies from the actual execution by orders of magnitude, you know you have problems.
Not a hidden feature, but fine-grained access control (FGAC), also known as row-level security, is something I have used in the past, and I was impressed with the efficiency of its implementation. If you are looking for something that guarantees you can control the granularity of how rows are exposed to users with differing permissions - regardless of the application used to view the data (SQL*Plus as well as your web app) - then this is a gem.
The built-in full-text indexing is more widely documented, but it still stands out because of its stability (just try running a full re-indexing of full-text-indexed columns on similar data samples on MS SQL and Oracle and you'll see the speed difference).
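A minimal Oracle Text lookup, for orientation; the docs table and doc_idx index are hypothetical, and the CONTEXT index must exist before the query works:

```sql
-- Assumes: CREATE INDEX doc_idx ON docs(body) INDEXTYPE IS CTXSYS.CONTEXT;
SELECT id
FROM   docs                                      -- hypothetical table
WHERE  CONTAINS(body, 'oracle AND performance') > 0;
```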
WITH Clause
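Subquery factoring in a nutshell: name a subquery once, then reference it like a table. A minimal example against the classic SCOTT demo schema:

```sql
-- Factor out the per-department head count, then join it to dept.
WITH dept_counts AS (
  SELECT deptno, COUNT(*) AS emp_count
  FROM   emp
  GROUP BY deptno
)
SELECT d.dname, c.emp_count
FROM   dept d
JOIN   dept_counts c ON c.deptno = d.deptno;
```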
Snapshot tables. Also found in Oracle Lite, and extremely useful for rolling your own replication mechanism.
@Peter
You can actually bind a variable of type "Cursor" in TOAD, then use it in your statement and it will display the results in the result grid.
exec open :cur for select * from dual;
Q: How to call a stored with a cursor from TOAD?
A: Example, change to your cursor, packagename and stored proc name
declare
  v_cursor PCK_UTILS.typ_cursor;
begin
  PCK_UTILS.spc_get_encodedstring(
    'U',
    10000002,
    null,
    'none',
    v_cursor);
end;
The Model Clause (available for Oracle 10g and up)
WM_CONCAT for string aggregation
Scalar subquery caching is one of the most surprising features in Oracle
-- my_function is NOT deterministic but it is cached!
select t.x, t.y, (select my_function(t.x) from dual)
from t
-- logically equivalent to this, uncached
select t.x, t.y, my_function(t.x) from t
The "caching" subquery above evaluates my_function(t.x) only once per unique value of t.x. If you have large partitions of the same t.x value, this will immensely speed up your queries, even if my_function is not declared DETERMINISTIC. Even if it were DETERMINISTIC, you can save yourself a possibly expensive SQL -> PL/SQL context switch.
Of course, if my_function is not a deterministic function, this can lead to wrong results, so be careful!
