Pipelined function that uses a table accessed via DB Link - oracle

I have created this pipelined function for fetching configuration from a table which is stored in a DB that I need to access via a DB link:
CREATE OR REPLACE FUNCTION fetch_config (
    process_i        IN VARCHAR2,
    procedure_i      IN VARCHAR2,
    sub_procedure_i  IN VARCHAR2
) RETURN t_config_type
    PIPELINED
AS
BEGIN
    FOR r_row IN (
        SELECT zprocess,
               zprocedure,
               zsub_procedure,
               zcriteria,
               zfield,
               zfield2,
               zvalue_enabled,
               zenabled
          FROM cdc.uap_zufi_dunn_conf@rbip
         WHERE zprocess = process_i
           AND zprocedure = procedure_i
           AND zsub_procedure = sub_procedure_i
    ) LOOP
        PIPE ROW ( config_type(r_row.zprocess, r_row.zprocedure, r_row.zsub_procedure,
                               r_row.zcriteria, r_row.zfield, r_row.zfield2,
                               r_row.zvalue_enabled, r_row.zenabled) );
    END LOOP;
    RETURN;
END fetch_config;
/
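(For reference, config_type and t_config_type would be SQL types along these lines; the attribute datatypes here are assumptions, since the question doesn't show them:)
CREATE OR REPLACE TYPE config_type AS OBJECT (
    zprocess        VARCHAR2(100),
    zprocedure      VARCHAR2(100),
    zsub_procedure  VARCHAR2(100),
    zcriteria       VARCHAR2(100),
    zfield          VARCHAR2(100),
    zfield2         VARCHAR2(100),
    zvalue_enabled  VARCHAR2(100),
    zenabled        VARCHAR2(100)
);
/
CREATE OR REPLACE TYPE t_config_type AS TABLE OF config_type;
/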
However, when trying to use the function dynamically, the below error is thrown:
BEGIN
    EXECUTE IMMEDIATE q'[
        CREATE TABLE my_table AS
        SELECT *
          FROM another_table
         WHERE cacont_acc IN (
                   SELECT zvalue_enabled
                     FROM TABLE ( fetch_config('GLOBAL', 'EXCLUSIONS', 'ZBUT000_ATTRIBUTES') )
               )
    ]';
END;
/
Error:
ORA-12840: cannot access a remote table after parallel/insert direct load txn
ORA-06512: at line 79
12840. 00000 - "cannot access a remote table after parallel/insert direct load txn"
*Cause:    Within a transaction, an attempt was made to perform distributed
           access after a PDML or insert direct statement had been issued.
*Action:   Commit/rollback the PDML transaction first, and then perform
           the distributed access, or perform the distributed access before the
           first PDML statement in the transaction.
I've tried to create a view in my local DB pointing to that table, but it fails as well. What would be the workaround for this issue?

It's the combination of a Create Table As Select (CTAS) with a pipelined function that references a remote object that causes the error "ORA-12840: cannot access a remote table after parallel/insert direct load txn". CTAS statements always use an optimized type of write called a direct-path write, but those direct-path writes do not play well with remote objects. There are several workarounds, such as separating your statements into a separate DDL and DML step, or using a common table expression to force Oracle to run the operations in an order that works.
Direct Path Writes
The below code demonstrates that CTAS statements appear to always use direct-path writes. A regular insert would include an operation like "LOAD TABLE CONVENTIONAL", but a direct path write shows up as the operation "LOAD AS SELECT".
drop table my_table;
explain plan for create table my_table as select 1 a from dual;
select * from table(dbms_xplan.display(format => 'basic'));
Plan hash value: 2781518217
----------------------------------------------------------
| Id  | Operation                        | Name     |
----------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |          |
|   1 |  LOAD AS SELECT                  | MY_TABLE |
|   2 |   OPTIMIZER STATISTICS GATHERING |          |
|   3 |    FAST DUAL                     |          |
----------------------------------------------------------
(However - I don't think CTAS uses a "real" direct path write. Using a real direct path write every time would cause data issues. There would have to be a mechanism to allow conventional writes, but nothing I tried, such as NOLOGGING, NOAPPEND, or creating relational constraints, was able to force a CTAS to use a "LOAD TABLE CONVENTIONAL" operation. I think a CTAS is really using some type of optimization halfway between conventional and direct path.)
Direct path writes are optimized for performance but come at the expense of consistency. Transactions, even the same transaction, cannot write or read from the same object before committing a direct path write. This isn't normally a problem with CTAS, because it all happens in one step. But when there's a remote database, Oracle doesn't know what kind of transactions are happening with that remote database. And accessing a remote object always creates a transaction, so as soon as Oracle calls the remote object in the pipelined function it can't tell what's going on remotely and raises "ORA-12840: cannot access a remote table after parallel/insert direct load txn".
Workarounds
Avoiding the CTAS may be the most straightforward way to prevent this error. Isolate the CTAS direct path write in a separate statement, and then use a regular insert that will use a "LOAD TABLE CONVENTIONAL" operation that works fine with database links.
--Note: keep these as two separate steps. The CTAS is DDL, so it commits
--implicitly and ends the direct-path transaction before the conventional
--insert touches the remote object. Folding the query back into a single
--CTAS reintroduces ORA-12840.
EXECUTE IMMEDIATE q'[
    CREATE TABLE my_table AS
    SELECT *
      FROM another_table
     WHERE 1 = 0
]';

EXECUTE IMMEDIATE q'[
    INSERT INTO my_table
    SELECT *
      FROM another_table
     WHERE cacont_acc IN (
               SELECT zvalue_enabled
                 FROM TABLE ( fetch_config('GLOBAL', 'EXCLUSIONS', 'ZBUT000_ATTRIBUTES') )
           )
]';
But if you want to avoid repeating any code, and do everything in a single step, you can use a Common Table Expression (CTE).
EXECUTE IMMEDIATE q'[
    CREATE TABLE my_table AS
    WITH configs AS
    (
        --Use a CTE and the MATERIALIZE hint to avoid ORA-12840.
        SELECT /*+ MATERIALIZE */
               zvalue_enabled
          FROM TABLE ( fetch_config('GLOBAL', 'EXCLUSIONS', 'ZBUT000_ATTRIBUTES') )
    )
    SELECT *
      FROM another_table
     WHERE cacont_acc IN (SELECT zvalue_enabled FROM configs)
]';
The CTE and the MATERIALIZE hint force Oracle to retrieve the results of the remote object first and store them in a temporary table. When the CTAS gets executed, it reads from the temporary table and doesn't notice the database link anymore. The execution plan will look something like below:
-------------------------------------------------------------------------------------
| Id  | Operation                                 | Name                        |
-------------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT                    |                             |
|   1 |  TEMP TABLE TRANSFORMATION                |                             |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY) | SYS_TEMP_0FD9D6707_8AAECC0C |
|   3 |    COLLECTION ITERATOR PICKLER FETCH      | FETCH_CONFIG                |
|   4 |  LOAD AS SELECT                           | MY_TABLE                    |
...

Related

Truncate local table only when remote table is accessible or has complete data in Oracle

I have a problem that I'm finding hard to solve. I hope the community can help.
On a daily basis I copy a table from one database (T_TAGS_REMOTE) to a table on another database (T_TAGS_LOCAL) through a DB link. For this I truncate the T_TAGS_LOCAL table first and then perform the insert.
The above task is done through a Linux job.
The problem comes when:
Sometimes T_TAGS_REMOTE on the remote database is not accessible, giving an ORA error
Sometimes T_TAGS_REMOTE does not have complete data rows (i.e. SYSDATE count < SYSDATE-1 count)
Requirements:
STOP truncating and STOP inserting when either of the above problems (1) or (2) is encountered
My code:
DECLARE
    old_records_count NUMBER;
BEGIN
    SELECT COUNT(1) INTO old_records_count FROM T_TAGS_LOCAL;
    EXECUTE IMMEDIATE 'TRUNCATE TABLE T_TAGS_LOCAL';
    INSERT /*+ APPEND */ INTO T_TAGS_LOCAL SELECT * FROM AK.T_TAGS_REMOTE@NETCOOL;
END;
/
Please suggest a BETTER option for the table copy, or code to handle this problem.
I would not use the technique you are using; it will always generate issues. Instead, I think your use case fits replication using materialized views: a materialized view log on the source, and a materialized view using the db link on the target.
You only need to decide the refresh method. ON COMMIT refresh is not supported across a database link, but FAST ON DEMAND works well here, and I guess your table is not very big as you are copying the whole table each and every day.
Example
In Source
SQL> create table t ( c1 number primary key, c2 number ) ;
Table created.
SQL> declare
  2  begin
  3    for i in 1 .. 100000
  4    loop
  5      insert into t values ( i , dbms_random.value ) ;
  6    end loop;
  7    commit ;
  8  end;
  9  /
PL/SQL procedure successfully completed.
SQL> create materialized view log on t with primary key ;
Materialized view log created.
SQL> select count(*) from t ;
COUNT(*)
----------
100000
In Target
SQL> create materialized view my_copy_of_t build immediate refresh fast on demand as
  2  select * from t@your_db_link ;
SQL> select count(*) from my_copy_of_t ;
COUNT(*)
----------
100000
Now, we change source
SQL> insert into t values ( 100001 , dbms_random.value );
1 row inserted
SQL> commit ;
Commit completed.
In target, for refreshing
SQL> exec dbms_mview.refresh('MY_COPY_OF_T');
The only requirement for FAST REFRESH ON DEMAND is that you must have a materialized view log for each of the tables that are part of the Materialized View. In your case, as you are replicating a table, you only need a materialized view log on the source table.
A better option might be using a materialized view. Done the way you do it now, you'd refresh it on demand using a database job scheduled via DBMS_JOB or DBMS_SCHEDULER, as in the sketch below.
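For example (a minimal sketch; the job name and schedule here are placeholders):
BEGIN
    DBMS_SCHEDULER.create_job(
        job_name        => 'REFRESH_MY_COPY_OF_T',  --hypothetical name
        job_type        => 'PLSQL_BLOCK',
        job_action      => q'[begin dbms_mview.refresh('MY_COPY_OF_T'); end;]',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=2',  --run every day at 02:00
        enabled         => TRUE);
END;
/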

How can I see the query that the query transformer produced in Oracle

As I understand it, the query transformer transforms our queries into better ones where possible, so the query I execute and the query the database actually executes can be different.
How can I see the final query that the database executed? I mean the result of the query transformer.
To see the transformed query used by the optimizer you should use a 10053 trace. But an execution plan is more convenient and usually good enough.
Sample Schema
For a quick example, this schema contains two simple tables, and each row in the second table must exist in the first table.
--drop table test2;
--drop table test1;
create table test1(a number primary key);
create table test2(a number primary key references test1(a));
We want to generate a simple query where the transformed query will be different than the original. To do that, the below query has an unnecessary join. Since there's an inner join, and each row in TEST2 must exist once and only once in TEST1, Oracle doesn't need to do the join. Instead, Oracle only needs to read from a single table or index from TEST2.
select count(*) new_name_for_hard_parse_01
from test1
join test2 on test1.a = test2.a;
10053 Trace
To find the precise query used by the optimizer, you need to generate a 10053 trace. For example:
alter session set events '10053 trace name context forever, level 1';
select count(*) new_name_for_hard_parse_02
from test1
join test2 on test1.a = test2.a;
alter session set events '10053 trace name context off';
(Notice how I used a different name for the column. You need to change the query and force a hard parse. Otherwise, Oracle may simply re-use the existing execution plan and won't generate a trace.)
Wait a minute, and the file will show up in a trace directory somewhere. Depending on the version and configuration, the file might be in USER_DUMP_DEST or a sub directory under DIAGNOSTIC_DEST. For example, on my PC it was the file D:\app\jon\virtual\diag\rdbms\orcl12\orcl12\trace\orcl12_ora_22576.trc
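On 11g and later you can also ask the database for the location directly; a quick check, assuming you can query V$DIAG_INFO:
--Shows the full path of the current session's trace file.
select value from v$diag_info where name = 'Default Trace File';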
Open the file and look for a section like this:
...
Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "NEW_NAME_FOR_HARD_PARSE_02" FROM "JHELLER"."TEST2" "TEST2"
....
The trace file explains the different transformations and shows the final query.
But you almost never want to use Oracle trace files. The trace files are inconvenient, the commands are undocumented and don't work well, and you won't always have access to the server file system. For 99.9% of Oracle performance tuning, tracing is a waste of time.
Execution Plan
An execution plan is a faster way to determine how the query runs, which is probably what you're interested in.
explain plan for
select count(*) new_name_for_hard_parse_01
from test1
join test2 on test1.a = test2.a;
select * from table(dbms_xplan.display);
Results:
Plan hash value: 4187894267
------------------------------------------------------------------------
| Id  | Operation        | Name        | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------
|   0 | SELECT STATEMENT |             |     1 |     0   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE  |             |     1 |            |          |
|   2 |   INDEX FULL SCAN| SYS_C009744 |     1 |     0   (0)| 00:00:01 |
------------------------------------------------------------------------
The execution plan shows that only one object was used. It doesn't explain that join elimination was used; you have to infer it.
Oracle provides a tool that allows you to review the execution plan of a query. This gives insight into the optimizations carried out and provides an opportunity to manually refine the query based on cost metrics. Oracle does not, as such, generate a revised query that you can examine.
The EXPLAIN PLAN documentation is here: https://docs.oracle.com/cd/B19306_01/server.102/b14211/ex_plan.htm#g42231
You could try the TKPROF utility, if it is available for your version of the Oracle software.
Steps to follow:
alter session set tracefile_identifier = test;
alter session set sql_trace = true;
--run your query here
alter session set sql_trace = false;
select value from v$diag_info where name = 'Diag Trace';
cd "path from the above query"
tkprof "required filename.trc" try_ex.txt
The output file created by tkprof gives detailed information.
For more information on TKPROF, check the Oracle documentation.

How can we use oracle private temporary tables in a pl/sql block?

I see that the concept of a temporary table in Oracle is quite different from other databases like SQL Server. In Oracle, we have the concept of a global temporary table: we create it only once, and each session fills it with its own data, which is not how other databases work.
In 18c, Oracle has introduced the concept of private temporary tables, which are dropped automatically after use, as in other databases. But how do we use them in a PL/SQL block?
I tried using one with dynamic SQL (EXECUTE IMMEDIATE), but it gives me a "table must be declared" error. What do I do here?
But how do we use it in a PL/SQL block?
If what you mean is, how can we use private temporary tables in a PL/SQL program (procedure or function) the answer is simple: we can't. PL/SQL programs need to be compiled before we can call them. This means any table referenced in the program must exist at compilation time. Private temporary tables don't change that.
The private temporary table is intended for use in ad hoc SQL work. It allows us to create a data structure we can use in SQL statements for the duration of a session, to make life easier for ourselves.
For instance, suppose I have a massive table of sales data - low level transactions - and my task is to investigate monthly trends. So I only need the total sales by month. Unfortunately, there is no materialized view providing this summary. I don't want to include the aggregating query in my select statements. In previous versions I would have had to create a permanent table (and had to remember to drop it afterwards) but in 18c I can use a private temporary table to stage my summary just for the session.
create private temporary table ora$ptt_sales_summary (
sales_month date
, total_value number )
/
insert into ora$ptt_sales_summary
select trunc(sales_date, 'MM')
, sum (qty*price)
from massive_sales_table
group by trunc(sales_date, 'MM')
/
select *
from ora$ptt_sales_summary
order by sales_month
/
Obviously we can write anonymous PL/SQL blocks in our session but let's continue assuming that's not what you need. So what is the equivalent of a private temporary table in a permanent PL/SQL program? Same as it's been for several versions now: a PL/SQL collection or a SQL nested table type.
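For example, a minimal sketch of the collection approach for the sales summary above (the type and variable names are illustrative):
--SQL-level types so the collection can be queried with TABLE().
create or replace type sales_summary_t as object (
    sales_month date,
    total_value number );
/
create or replace type sales_summary_tab as table of sales_summary_t;
/
declare
    l_summary sales_summary_tab;
begin
    --Stage the monthly totals in the collection.
    select sales_summary_t(trunc(sales_date, 'MM'), sum(qty * price))
    bulk collect into l_summary
    from massive_sales_table
    group by trunc(sales_date, 'MM');

    --The collection now plays the role of the temporary table.
    for r in ( select * from table(l_summary) order by sales_month ) loop
        dbms_output.put_line(to_char(r.sales_month, 'YYYY-MM') || ': ' || r.total_value);
    end loop;
end;
/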
Private temporary tables (available from Oracle 18c) are dropped at the end of the session or transaction, depending on the definition of the PTT.
The ON COMMIT DROP DEFINITION option creates a private temporary table that is transaction-specific. At the end of the transaction, Oracle drops both the table definition and its data.
The ON COMMIT PRESERVE DEFINITION option creates a private temporary table that is session-specific. Oracle removes all data and drops the table at the end of the session.
You do not need to drop it manually. Oracle will do it for you.
CREATE PRIVATE TEMPORARY TABLE ora$ptt_temp_table (
......
)
ON COMMIT DROP DEFINITION;
-- or
-- ON COMMIT PRESERVE DEFINITION;
With ON COMMIT DROP DEFINITION the table is dropped as soon as the COMMIT is executed; with ON COMMIT PRESERVE DEFINITION it survives the COMMIT but is dropped at the end of the session. A quick sketch of the first case (illustrative; run it in a single session):
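--Transaction-specific PTT: the definition and data vanish at COMMIT.
create private temporary table ora$ptt_demo ( id number )
    on commit drop definition;

insert into ora$ptt_demo values ( 1 );
commit;

--This now fails with ORA-00942: table or view does not exist.
--select count(*) from ora$ptt_demo;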
Cheers!!
It works with dynamic SQL:
declare
    cnt int;
begin
    execute immediate 'create private temporary table ora$ptt_tmp (id int)';
    execute immediate 'insert into ora$ptt_tmp values (55)';
    execute immediate 'insert into ora$ptt_tmp values (66)';
    execute immediate 'insert into ora$ptt_tmp values (77)';
    execute immediate 'select count(*) from ora$ptt_tmp' into cnt;
    dbms_output.put_line(cnt);
    execute immediate 'delete from ora$ptt_tmp where id = 66';
    cnt := 0;
    execute immediate 'select count(*) from ora$ptt_tmp' into cnt;
    dbms_output.put_line(cnt);
end;
/
Example here:
https://livesql.oracle.com/apex/livesql/s/l7lrzxpulhtj3hfea0wml09yg

Batch insert: is there a way to just skip on next record when a constraint is violated?

I am using MyBatis to perform a massive batch insert on an Oracle DB.
My process is very simple: I am taking records from a list of files and inserting them into a specific table after performing some checks on the data.
-Each file contains an average of 180,000 records, and I can have more than one file.
-Some records can be present in more than one file.
-A record is identical to another one if EVERY column matches; in other words, I cannot simply perform a check on a specific field. And I have defined a constraint in my DB which makes sure this condition is satisfied.
To put it simply, I want to just ignore the constraint exception Oracle will give me in case that constraint is violated.
Record is not present? --> insert
Record is already present? --> go ahead
Is this possible with MyBatis? Or can I accomplish something at the DB level?
I have control over both the application server and the DB, so please tell me what's the most efficient way to accomplish this task (even though I'd like to avoid being too DB-dependent...).
Of course, I'd like to avoid performing a SELECT before each insertion... given the number of records I am dealing with, it would ruin my application's performance.
Use the IGNORE_ROW_ON_DUPKEY_INDEX hint:
insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(table_name index_name) */
into table_name
select * ...
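For instance, against a hypothetical table with a unique index (both names made up for illustration):
--records_uq is an assumed unique index enforcing the constraint.
insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(records records_uq) */
into records
select * from staging_records;
Rows that would violate the unique index are silently skipped instead of raising ORA-00001.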
I'm not sure about JDBC, but at least in OCI it is possible. With batch operations you pass vectors as bind variables, and you get back vectors of returned IDs as well as a vector of error codes.
You can also use MERGE on the database server side together with custom collection types. Something like:
merge into t
using ( select * from TABLE(:var) v )
on ( v.id = t.id )
when not matched then insert ...
where :var is a bind variable of the SQL type TABLE OF <recordname>.
The TABLE operator casts the collection bind variable into a queryable table.
Another option is to use the SQL error logging clause:
DBMS_ERRLOG.create_error_log (dml_table_name => 't');
insert into t(...) values(...) log errors reject limit unlimited;
Then after the load you will have to truncate the error logging table err$_t.
Another option would be to use external tables.
It looks like any solution is quite a lot of work compared to using sqlldr.
Ignore errors with an error table:
insert
into table_name
select *
from selected_table
LOG ERRORS INTO SANJI.ERROR_LOG('some comment' )
REJECT LIMIT UNLIMITED;
and the error table schema is:
CREATE GLOBAL TEMPORARY TABLE SANJI.ERROR_LOG (
ora_err_number$ number,
ora_err_mesg$ varchar2(2000),
ora_err_rowid$ rowid,
ora_err_optyp$ varchar2(2),
ora_err_tag$ varchar2(2000),
n1 varchar2(128)
)
ON COMMIT PRESERVE ROWS;

Hidden features in Oracle

I enjoyed the answers and questions about hidden features in SQL Server.
What can you tell us about Oracle?
Hidden tables, inner workings of ..., secret stored procs, packages that have good utils...
Since Apex is now part of every Oracle database, these Apex utility functions are useful even if you aren't using Apex:
SQL> declare
2 v_array apex_application_global.vc_arr2;
3 v_string varchar2(2000);
4 begin
5
6 -- Convert delimited string to array
7 v_array := apex_util.string_to_table('alpha,beta,gamma,delta', ',');
8 for i in 1..v_array.count
9 loop
10 dbms_output.put_line(v_array(i));
11 end loop;
12
13 -- Convert array to delimited string
14 v_string := apex_util.table_to_string(v_array,'|');
15 dbms_output.put_line(v_string);
16 end;
17 /
alpha
beta
gamma
delta
alpha|beta|gamma|delta
PL/SQL procedure successfully completed.
"Full table scans are not always bad. Indexes are not always good."
An index-based access method is less efficient at reading rows than a full scan when you measure it in terms of rows accessed per unit of work (typically per logical read). However, many tools will interpret a full table scan as a sign of inefficiency.
Take an example where you are reading a few hundred invoices from an invoice table and looking up a payment method in a small lookup table. Using an index to probe the lookup table for every invoice probably means three or four logical I/Os per invoice. However, a full scan of the lookup table in preparation for a hash join from the invoice data would probably require only a couple of logical reads, and the hash join itself would complete in memory at almost no cost at all.
However, many tools would look at this and see "full table scan", and tell you to try to use an index. If you do so then you may have just de-tuned your code.
Incidentally, over-reliance on indexes, as in the above example, causes the "Buffer Cache Hit Ratio" to rise. This is why the BCHR is mostly nonsense as a predictor of system efficiency.
The cardinality hint is mostly undocumented.
explain plan for
select /*+ cardinality(@inner 5000) */ *
from (select /*+ qb_name(inner) */ * from dual)
/
select * from table(dbms_xplan.display)
/
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  5000 | 10000 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------
The Buffer Cache Hit Ratio is virtually meaningless as a predictor of system efficiency
You can view table data as of a previous time using Flashback Query, with certain limitations.
Select *
from my_table as of timestamp(timestamp '2008-12-01 15:21:13')
11g has a whole new feature set around preserving historical changes more robustly.
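That feature set is Flashback Data Archive (Total Recall). A minimal sketch, assuming a tablespace named ts_archive already exists:
--Keep one year of row history for my_table.
create flashback archive fla_one_year
    tablespace ts_archive
    quota 1g
    retention 1 year;

alter table my_table flashback archive fla_one_year;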
Frequent rebuilding of indexes is almost always a waste of time.
wm_concat works like the MySQL group_concat, but it is undocumented.
With data:
-car-      -maker-
Corvette   Chevy
Taurus     Ford
Impala     Chevy
Aveo       Chevy
select wm_concat(car) Cars, maker from cars
group by maker
gives you:
-Cars-                   -maker-
Corvette, Impala, Aveo   Chevy
Taurus                   Ford
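Worth noting: being undocumented, wm_concat was removed in Oracle 12c. The documented equivalent since 11gR2 is LISTAGG:
select listagg(car, ', ') within group (order by car) cars, maker
from cars
group by maker;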
The OVERLAPS predicate is undocumented.
http://oraclesponge.wordpress.com/2008/06/12/the-overlaps-predicate/
I just found out about the pseudo-column ORA_ROWSCN. If you don't set your table up for this, the pseudo-column gives you the block SCN. This can be really useful in an emergency: "Oh crap, I have no auditing on this table and wonder if someone has changed the data since yesterday."
But even better: if you create the table with ROWDEPENDENCIES on, Oracle puts the SCN of the last change on every row. This will help you avoid a "lost edit" problem without having to include every column in your query.
In other words, when your app grabs a row for user modification, also select ORA_ROWSCN. Then when you post the user's edits, include ora_rowscn = v_rscn in the WHERE clause in addition to the unique key. If someone has touched the row since you grabbed it (a lost edit), the update will match zero rows, since ORA_ROWSCN will have changed.
So cool.
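A minimal sketch of that optimistic-locking pattern (table and column names are made up):
--ROWDEPENDENCIES is required for reliable row-level SCNs.
create table accounts ( id number primary key, balance number ) rowdependencies;

--1. Read the row and remember its SCN.
select balance, ora_rowscn
from accounts
where id = :id;

--2. Post the edit; zero rows updated means someone else changed the row.
update accounts
set balance = :new_balance
where id = :id
and ora_rowscn = :remembered_scn;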
If you get the value of the PASSWORD column from DBA_USERS, you can back up and restore passwords without knowing them:
ALTER USER xxx IDENTIFIED BY VALUES 'xxxx';
Bypass the buffer cache and read straight from disk using direct path reads.
alter session set "_serial_direct_read"=true;
Causes a tablespace (9i) or fast object (10g+) checkpoint, so be careful on busy OLTP systems.
More undocumented stuff at http://awads.net/wp/tag/undocumented/
Warning: Use at your own risk.
I don't know if this counts as hidden, but I was pretty happy when I saw this way of quickly seeing what happened with a SQL statement you are tuning.
SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM DUAL;
SELECT * FROM TABLE(dbms_xplan.display_cursor( NULL, NULL, 'RUNSTATS_LAST'))
;
PLAN_TABLE_OUTPUT
-----------------------------------------------------
SQL_ID 5z36y0tq909a8, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM DUAL
Plan hash value: 272002086
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
---------------------------------------------------------------------------------------------
| 1 | TABLE ACCESS FULL| DUAL | 1 | 1 | 1 |00:00:00.02 | 3 | 2 |
---------------------------------------------------------------------------------------------
12 rows selected.
Where:
E-Rows is estimated rows.
A-Rows is actual rows.
A-Time is actual time.
Buffers is actual buffers.
Where the estimated plan varies from the actual execution by orders of magnitude, you know you have problems.
Not a hidden feature, but fine-grained access control (FGAC), also known as row-level security, is something I have used in the past and was impressed with the efficiency of its implementation. If you are looking for something that guarantees you can control the granularity of how rows are exposed to users with differing permissions, regardless of the application used to view the data (SQL*Plus as well as your web app), then this is a gem.
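A minimal sketch of an RLS policy (all names are illustrative): the policy function returns a predicate that Oracle silently appends to every query against the table.
--Policy function: users only see their own orders.
create or replace function orders_policy (
    p_schema in varchar2,
    p_object in varchar2 ) return varchar2
as
begin
    return 'owner_id = sys_context(''userenv'', ''session_user'')';
end;
/
begin
    dbms_rls.add_policy(
        object_schema   => 'APP',
        object_name     => 'ORDERS',
        policy_name     => 'ORDERS_OWN_ROWS',
        function_schema => 'APP',
        policy_function => 'ORDERS_POLICY',
        statement_types => 'select, insert, update, delete');
end;
/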
The built-in full-text indexing is more widely documented, but still stands out because of its stability (just try running a full re-index of full-text-indexed columns on similar data samples on MS SQL and Oracle and you'll see the speed difference).
WITH Clause
Snapshot tables. Also found in Oracle Lite, and extremely useful for rolling your own replication mechanism.
@Peter
You can actually bind a variable of type "Cursor" in TOAD, then use it in your statement and it will display the results in the result grid.
exec open :cur for select * from dual;
Q: How do you call a stored proc with a cursor from TOAD?
A: For example (change to your cursor, package name and stored proc name):
declare
    l_cursor PCK_UTILS.typ_cursor;
begin
    PCK_UTILS.spc_get_encodedstring(
        'U',
        10000002,
        null,
        'none',
        l_cursor);
end;
The Model Clause (available for Oracle 10g and up)
WM_CONCAT for string aggregation
Scalar subquery caching is one of the most surprising features in Oracle
-- my_function is NOT deterministic but it is cached!
select t.x, t.y, (select my_function(t.x) from dual)
from t
-- logically equivalent to this, uncached
select t.x, t.y, my_function(t.x) from t
The "caching" subquery above evaluates my_function(t.x) only once per unique value of t.x. If you have large partitions of the same t.x value, this will immensely speed up your queries, even if my_function is not declared DETERMINISTIC. Even if it was DETERMINISTIC, you can safe yourself a possibly expensive SQL -> PL/SQL context switch.
Of course, if my_function is not a deterministic function, then this can lead to wrong results, so be careful!
