I have some trouble writing SQL queries. Inside a package function, I am trying to reuse the result of a query in two other queries. Here's how it goes:
My schema stores Requests. Each Request concerns multiple destinations. Also, each Request is detailed in another table (Request_Detail). In addition, Requests are identified by their Ids.
So, I am using mainly 3 tables. One for Requests, another for the destinations and the last one for the details. Each one of these tables is indexed by the Request_Id column.
The query I want to optimize is when a user wants to find all requests, plus their destinations and commands that have been sent between two dates.
I want to query the Request_Table first in order to get all Request_Ids, then use this list of Request_Ids to query the Command table and the Destination one.
I couldn't find how to do that... I can't use ref cursors as they can't be fetched twice... I just need some array-like or column-like variable to store the Request_Ids, then use this variable twice or more...
Here are the original queries I would like to optimize:
FUNCTION EXTRACT_REQUEST_WITH_DATE (ze_from_date DATE, ze_to_date DATE, x_request_list OUT cursor_type, x_destination_list OUT cursor_type,
x_command_list OUT cursor_type) RETURN VARCHAR2 AS
my_function_id VARCHAR2(80) := PACKAGE_ID || '.EXTRACT_REQUEST_WITH_DATE';
my_return_code VARCHAR2(2);
BEGIN
OPEN x_request_list FOR
SELECT NAME, DESTINATION_TYPE,
SUCCESS_CNT, STATUS, STATUS_DESCRIPTION,
REQUEST_ID, PARENT_REQUEST_ID, DEDUPLICATION_ID, SUBMIT_DATE, LAST_UPDATE_DATE
FROM APP_DB.REQUEST_TABLE
WHERE SUBMIT_DATE >= ze_from_date
AND SUBMIT_DATE < ze_to_date
ORDER BY REQUEST_ID;
OPEN x_destination_list FOR
SELECT REQUEST_ID, DESTINATION_ID
FROM APP_DB.DESTINATION_TABLE
WHERE SUBMIT_DATE >= ze_from_date
AND SUBMIT_DATE < ze_to_date
ORDER BY REQUEST_ID;
OPEN x_command_list FOR
SELECT SEQUENCE_NUMBER, NAME, PARAMS, DESTINATION_ID,
SEND_DATE, LAST_UPDATE_DATE, PROCESS_CNT, STATUS, STATUS_DESCRIPTION,
VALIDITY_PERIOD, TO_ABORT_FLAG
FROM APP_DB.REQUEST_DETAILS_TABLE
WHERE SUBMIT_DATE >= ze_from_date
AND SUBMIT_DATE < ze_to_date
ORDER BY REQUEST_ID, DESTINATION_ID, SEQUENCE_NUMBER;
return RETURN_OK;
END EXTRACT_REQUEST_WITH_DATE;
As you can see, we use the same predicate (that is, the SUBMIT_DATE conditions) in all 3 queries. I think there may be some way to optimize this by getting the REQUEST_IDs once, then using them in the remaining queries.
Thanks for hearing me out!
Based on the queries you posted I'd just add a SUBMIT_DATE index to REQUEST_TABLE, DESTINATION_TABLE and REQUEST_DETAILS_TABLE and leave your SQL as is. All three queries will be optimized and will run just as fast as matching against a table of REQUEST_ID values.
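For illustration, those indexes would look something like this (the index names are made up; the tables and columns are from the question):

CREATE INDEX request_submit_date_ix ON APP_DB.REQUEST_TABLE (SUBMIT_DATE);
CREATE INDEX destination_submit_date_ix ON APP_DB.DESTINATION_TABLE (SUBMIT_DATE);
CREATE INDEX req_details_submit_date_ix ON APP_DB.REQUEST_DETAILS_TABLE (SUBMIT_DATE);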
So...
I found this method that seems to be efficient enough:
First, define global types to use as arrays. Here's the code:
Object (record) type:
create or replace
TYPE "GENERIC_ID" IS OBJECT(ID VARCHAR2(64));
Variable-size array of GENERIC_ID:
create or replace
TYPE "GENERIC_ID_ARRAY" IS TABLE OF "GENERIC_ID";
Then, populating is done via extend() in a FOR LOOP. The resulting array can be used as a table in SQL queries, using:
TABLE(CAST(my_array_of_ids AS GENERIC_ID_ARRAY))
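For completeness, a minimal sketch of how this slots into the original function; BULK COLLECT stands in for the explicit extend() loop, and it assumes the REQUEST_ID values fit the VARCHAR2(64) attribute of GENERIC_ID:

-- inside EXTRACT_REQUEST_WITH_DATE, with a local variable: my_ids GENERIC_ID_ARRAY;
SELECT GENERIC_ID(REQUEST_ID)
BULK COLLECT INTO my_ids
FROM APP_DB.REQUEST_TABLE
WHERE SUBMIT_DATE >= ze_from_date
AND SUBMIT_DATE < ze_to_date;

-- the same collection can now feed any number of cursors
OPEN x_destination_list FOR
SELECT REQUEST_ID, DESTINATION_ID
FROM APP_DB.DESTINATION_TABLE
WHERE REQUEST_ID IN (SELECT t.ID FROM TABLE(CAST(my_ids AS GENERIC_ID_ARRAY)) t)
ORDER BY REQUEST_ID;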
Thanks,
TL;DR: Is there anything I can set in an Oracle Form that would let me bind a placeholder to a Data Block's ORDER BY Clause?
I'm developing a form using Oracle Form Builder 10.1.2.3.0 (because it's interfacing with a system that makes other form types undesirable).
It has a Data Block with Query Data Source Type = Table.
Its WHERE Clause allows the user to be flexible in the search, producing rows of varying interest. I want rows with a perfect match to appear before those without one.
To implement this specification, I wrote the form's WHERE Clause and ORDER BY Clause to reflect this SQL*Plus example:
var sf varchar2(30)
exec :sf := 'X'
with mdual as (
select case when level=1 then dummy else dummy || level end dummy
from dual
connect by level <= 2
)
select *
from mdual
where :sf is null or dummy like '%' || upper(:sf) || '%'
order by case when :sf = dummy then 0 else 1 end asc, dummy;
The form variable reference is not as simple as :sf, and the WHERE Clause is a bit more complicated, as is the ORDER BY Clause, but this type of query is valid. When executed in SQL*Plus, it produces exactly the type of result I desire. You can reverse the first sort expression to prove it.
When I execute the form, I get an ORA-01008 (not all variables bound) error until I comment out the first ORDER BY expression.
My conclusion is that Oracle Forms binds placeholder references in a WHERE Clause but not an ORDER BY Clause.
I could experiment with setting the Query Data Source Type to Procedure and pass the procedure the filter field but a view has more utility than a procedure and so I'd prefer to keep using the view that I've defined for the Query Data Source Type.
Is there a way I can coerce Oracle Forms to do what I consider the right thing?
You can use the SET_BLOCK_PROPERTY built-in in order to make the ORDER BY clause dynamic and dependent on a local or bind variable, such as:
DECLARE
v_orderby VARCHAR2(200) := 'CASE WHEN '''||:sf||''' = dummy THEN 0 ELSE 1 END, dummy'; -- the value of :sf is inlined as a quoted literal
BEGIN
SET_BLOCK_PROPERTY('block1',ORDER_BY, v_orderby);
EXECUTE_QUERY;
END;
which might be invoked from a trigger such as WHEN-NEW-BLOCK-INSTANCE, after the cursor is sent to this block by some other action such as clicking a button or pressing a key such as Enter.
Is there a hint to generate execution plan ignoring the existing one from the shared pool?
There is not a hint to create an execution plan that ignores plans in the shared pool. A more common way of phrasing this question is: how do I get Oracle to always perform a hard parse?
There are a few weird situations where this behavior is required. It would be helpful to fully explain your reason for needing this, as the solution varies depending on why you need it.
1. Strange performance problem. Oracle performs some dynamic re-optimization of SQL statements after the first run, like adaptive cursor sharing and cardinality feedback. In the rare case when those features backfire, you might want to disable them.
2. Dynamic query. You have a dynamic query that uses Oracle data cartridge to fetch data in the parse step, but Oracle won't execute the parse step because the query looks static to Oracle.
3. Misunderstanding. Something has gone wrong and this is an XY problem.
Solutions
The simplest way to solve this problem is to use Thorsten Kettner's solution of changing the query text each time.
If that's not an option, the second simplest solution is to flush the query from the shared pool, like this:
--This only works one node at a time.
begin
for statements in
(
select distinct address, hash_value
from gv$sql
where sql_id = '33t9pk44udr4x'
order by 1,2
) loop
sys.dbms_shared_pool.purge(statements.address||','||statements.hash_value, 'C');
end loop;
end;
/
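If you don't already know the SQL_ID, it can be looked up from GV$SQL first (the LIKE pattern below is just an illustrative placeholder):

select sql_id, child_number, sql_text
from gv$sql
where sql_text like '%distinctive fragment of your query%';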
If you have no control over the SQL, and need to fix the problem using a side-effect style solution, Jonathan Lewis and Randolf Geist have a solution using Virtual Private Database that adds a unique predicate to each SQL statement on a specific table. You asked for something weird; here's a weird solution. Buckle up.
-- Create a random predicate for each query on a specific table.
create table hard_parse_test_rand as
select * from all_objects
where rownum <= 1000;
begin
dbms_stats.gather_table_stats(null, 'hard_parse_test_rand');
end;
/
create or replace package pkg_rls_force_hard_parse_rand is
function force_hard_parse (in_schema varchar2, in_object varchar2) return varchar2;
end pkg_rls_force_hard_parse_rand;
/
create or replace package body pkg_rls_force_hard_parse_rand is
function force_hard_parse (in_schema varchar2, in_object varchar2) return varchar2
is
s_predicate varchar2(100);
n_random pls_integer;
begin
n_random := round(dbms_random.value(1, 1000000));
-- s_predicate := '1 = 1';
s_predicate := to_char(n_random, 'TM') || ' = ' || to_char(n_random, 'TM');
-- s_predicate := 'object_type = ''TABLE''';
return s_predicate;
end force_hard_parse;
end pkg_rls_force_hard_parse_rand;
/
begin
DBMS_RLS.ADD_POLICY (USER, 'hard_parse_test_rand', 'hard_parse_policy', USER, 'pkg_rls_force_hard_parse_rand.force_hard_parse', 'select');
end;
/
alter system flush shared_pool;
You can see the hard-parsing in action by running the same query multiple times:
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
Now there are three entries in GV$SQL for each execution. There's some odd behavior in Virtual Private Database that parses the query multiple times, even though the final text looks the same.
select *
from gv$sql
where sql_text like '%hard_parse_test_rand%'
and sql_text not like '%quine%'
order by 1;
I think there is no hint indicating that Oracle should find a new execution plan every time it runs the query.
This is something we'd want for select * from mytable where is_active = :active, with is_active being 1 for very few rows and 0 for maybe billions of other rows. We'd want an index access for :active = 1 and a full table scan for :active = 0 then. Two different plans.
As far as I know, Oracle uses bind variable peeking in later versions, so with a look at the statistics it really comes up with different execution plans for different bind variable content. But in older versions it did not, and thus we'd want some hint saying "make a new plan" there.
Oracle only re-used an execution plan for exactly the same query text; it sufficed to add a mere blank to get a new plan. Hence a solution might be to generate the query every time you want to run it, with a random number included in a comment:
select /* 1234567 */ * from mytable where is_active = :active;
Or just don't use bind variables, if this is the problem you want to address:
select * from mytable where is_active = 0;
select * from mytable where is_active = 1;
I have the following function, which returns the next available client ID from the Client table:
CREATE OR REPLACE FUNCTION getNextClientID RETURN INT AS
ctr INT;
BEGIN
SELECT MAX(NUM) INTO ctr FROM Client;
IF SQL%NOTFOUND THEN
RETURN 1;
ELSIF SQL%FOUND THEN
-- RETURN SQL%ROWCOUNT;
RAISE_APPLICATION_ERROR(-20010, 'ROWS FOUND!');
-- RETURN ctr + 1;
END IF;
END;
But when calling this function,
BEGIN
DBMS_OUTPUT.PUT_LINE(getNextClientID());
END;
I get the ORA-20010 'ROWS FOUND!' error, which I found a bit odd, since the Client table contains no data.
Also, if I comment out RAISE_APPLICATION_ERROR(-20010, 'ROWS FOUND!'); and log the value of SQL%ROWCOUNT to the console, I get 1 as a result.
On the other hand, when changing
SELECT MAX(NUM) INTO ctr FROM Client;
to
SELECT NUM INTO ctr FROM Client;
The execution went as expected. What is the reason behind this behavior?
Aggregate functions will always return a result:
All aggregate functions except COUNT(*), GROUPING, and GROUPING_ID ignore nulls. You can use the NVL function in the argument to an aggregate function to substitute a value for a null. COUNT and REGR_COUNT never return null, but return either a number or zero. For all the remaining aggregate functions, if the data set contains no rows, or contains only rows with nulls as arguments to the aggregate function, then the function returns null.
You can change your query to:
SELECT COALESCE(MAX(num), 1) INTO ctr FROM Client;
and remove the conditionals altogether. Be careful about concurrency issues though if you do not use SELECT FOR UPDATE.
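Note that FOR UPDATE cannot be combined with an aggregate such as MAX, so one blunt way to serialize concurrent callers is an explicit table lock. A minimal sketch, assuming the Client table and NUM column from the question:

DECLARE
ctr INT;
BEGIN
-- the exclusive lock blocks other sessions until this transaction ends
LOCK TABLE Client IN EXCLUSIVE MODE;
SELECT COALESCE(MAX(num), 0) + 1 INTO ctr FROM Client;
-- insert the row that consumes ctr here; the COMMIT releases the lock
END;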
A query with an aggregate function and no GROUP BY clause always returns exactly one row. If you want a NO_DATA_FOUND exception on an empty table, add a GROUP BY clause or remove the MAX:
SQL> create table t (id number, client_id number);
Table created.
SQL> select nvl(max(id), 0) from t;
NVL(MAX(ID),0)
--------------
0
SQL> select nvl(max(id), 0) from t group by client_id;
no rows selected
Usually queries like yours (with max and without group by) are used to avoid no_data_found.
Aggregate functions like MAX will always return a row; it will be one row with a null value if no rows are found.
By the way, SELECT NUM INTO ctr FROM Client; will raise an exception when there's more than one row in the table.
You should instead check whether or not ctr is null.
Others have already explained the reason why your code isn't "working", so I'm not going to be doing that.
You seem to be instituting an identity column of some description yourself, probably in order to support a surrogate key. Doing this yourself is dangerous and could cause large issues in your application.
You don't need to implement identity columns yourself. From Oracle 12c onwards, Oracle has native support for identity columns; these are implemented using sequences, which are also available in 12c and earlier versions.
A sequence is a database object that is guaranteed to provide a new, unique, number when called, no matter the number of concurrent sessions requesting values. Your current approach is extremely vulnerable to collision when used by multiple sessions. Imagine 2 sessions simultaneously finding the largest value in the table; they then both add one to this value and try to write this new value back. Only one can be correct.
See How to create id with AUTO_INCREMENT on Oracle?
Basically, if you use a sequence then you don't need any of this code.
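A minimal sketch of both options (the sequence name and the second table are illustrative):

-- pre-12c: a sequence hands out collision-free ids across sessions
CREATE SEQUENCE client_seq;
INSERT INTO Client (num) VALUES (client_seq.NEXTVAL);

-- 12c onwards: a native identity column does this implicitly
CREATE TABLE client_v2 (
num NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
name VARCHAR2(100)
);
INSERT INTO client_v2 (name) VALUES ('Smith'); -- num is assigned automatically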
As a secondary note, your statement at the top is incorrect:
I have the following function, which returns the next available client ID from the Client table
Your function returns the maximum ID + 1. If there's a gap in the IDs, e.g. 1, 2, 3, 5, then the "missing" number (4 in this case) will not be returned. A gap can occur for any number of reasons (deletion of a row, for example) and does not have a negative impact on your database in any way at all; don't worry about gaps.
I'm new to PL/SQL. I have a matrix stored in the DB as a nested table. Something like:
the matrix is stored as a TABLE of objects (and objects are t1 number, t2 number, ... t100 number)
To get the matrix, the query would be select x.* from test t, table(t.matrix) x where ..., returning:
|T1|T2|T3|...|T100|
I want to create a function that returns the sum over the rows, to be called using SQL only; something equivalent to:
select sum(x.T1),sum(x.T2)...sum(x.T100) from test t, table(t.matrix) x where ...
Something like select bigsum(x.*) from test t, table(t.matrix)
It will be called several times, and I don't want to write the 100 columns every time.
If you want to sum the values from 100 different columns, you're going to have to explicitly list those 100 columns at some point. You can encapsulate that logic for that expression in a view or a function or a pipelined table function or some other construct so that you don't have to repeat the expression many times, you just have to reference the abstraction you've created (i.e. call the function that sums the 100 values).
Although it would likely complicate the problem rather than simplifying it, you could potentially create a solution that uses dynamic SQL to generate the 100 columns names and the expression to add them together if you really, really want to avoid writing out 100 column names. It is highly unlikely, however, that the extra complexity of resorting to dynamic SQL would be beneficial unless there are substantial requirements that you haven't mentioned here that make writing out the column names more than a bit repetitive.
"it'll be called several times, and don't want to write the 100 columns every time"
Why not create a view? Write it once, call it as many times as you like:
create or replace view bigsum as
select t.whatever
, sum(x.T1) as sum_t1
, sum(x.T2) as sum_t2
...
, sum(x.T100) as sum_t100
from test t
, table(t.matrix) x
group by t.whatever
You would need to include identifying columns from TEST to allow you to join the view to other tables. This approach would give you something close to what you want:
select *
from bigsum
where whatever = 23
You can reduce the amount of typing further by processing a result set from the data dictionary view USER_TYPE_ATTRS (or a SQL*Plus description) in a decent text editor with a regex search'n'replace.
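For example, the SUM expressions could be generated straight from the dictionary; a sketch, assuming the object type inside the nested table is called MATRIX_ROW:

select listagg(', sum(x.' || attr_name || ') as sum_' || lower(attr_name), chr(10))
within group (order by attr_no) as select_list
from user_type_attrs
where type_name = 'MATRIX_ROW';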
You can create a function of the form given below, adapted to your condition; if you require parameters, you can add them while creating the function and use them in the condition:
create or replace function bigsum
return number
as
sumall number;
begin
select sum(x.T1) + sum(x.T2) + ... + sum(x.T100) into sumall
from test t, table(t.matrix) x where .(your condition).. ;
return sumall;
end;
/
and call it in this manner:
select bigsum from dual;
I have a query that in the select statement uses a custom built function to return one of the values.
The problem I have is that every now and then this function will error out because its query returns more than one row: SQL Error: ORA-01422: exact fetch returns more than requested number of rows
To further compound the issue, I have checked the table data within the range this query should be running over and can't find any rows that would duplicate based on the WHERE clause of this function.
So I would like a quick way to identify which row of the original query this crashes on, so that I can take the values passed into the function for that row, rebuild the function's query with those values, get its result, and see which two or more rows are returned.
Any ideas? I was hoping there could be a way to force Oracle to process one row at a time until it errors, so you can see the results up to the first error.
Added the code:
FUNCTION EFFPEG
--Returns Effective Pegged Freight given a Effdate, ShipTo, Item
( DATE1 IN NUMBER -- Effective Date (JULIANDATE)
, SHAN IN NUMBER -- ShipTo Number (Numeric)
, ITM IN NUMBER -- Short Item Number (Numeric)
, AST IN VARCHAR -- Advance Pricing type (varchar)
, MCU IN VARCHAR Default Null --ShipFrom Plant (varchar)
) RETURN Number
IS
vReturn Number;
BEGIN
Select ADFVTR/10000
into vReturn
from PRODDTA.F4072
where ADEFTJ <= DATE1
and ADEXDJ >= DATE1
and ADAN8 = SHAN and ADITM = ITM
and TRIM(ADAST) = TRIM(AST)
and ADEXDJ = (
Select min(ADEXDJ) ADEXDJ
from PRODDTA.F4072
where ADEFTJ <= DATE1
and ADEXDJ >= DATE1
and ADAN8 = SHAN
and ADITM = ITM
and TRIM(ADAST) = TRIM(AST));
RETURN vReturn;
END EFFPEG;
Query that calls this code and passes in the values is:
select GLEXR, ORDTYPE,
EFFPEG(SDADDJ, SDSHAN, SDITM, 'PEGFRTT', SDMCU)
from proddta.F42119
I think the best way to do it is through exceptions.
What you need to do is add code handling the TOO_MANY_ROWS exception in your function:
EXCEPTION
WHEN TOO_MANY_ROWS THEN
-- capture the offending rows in a separate table for inspection
INSERT INTO ERR_TABLE
SELECT your_columns
FROM query_that_sometimes_returns_multiple_rows;
In this example the doubled result will go to a separate table, or you can decide to simply print it out with dbms_output.
An easy page to start with can be this; then just google PL/SQL exceptions and you should be able to find all you need.
Hope this can help.
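Applied to the EFFPEG function from the question, a minimal sketch of such a handler; rather than storing the duplicated rows, it just reports the inputs so the duplicate F4072 rows can be reproduced by hand (returning NULL on error is an assumption, adjust as needed):

-- added just before the final END of EFFPEG
EXCEPTION
WHEN TOO_MANY_ROWS THEN
-- report the parameter combination that matched more than one row
DBMS_OUTPUT.PUT_LINE('EFFPEG: duplicates for DATE1=' || DATE1
|| ' SHAN=' || SHAN || ' ITM=' || ITM
|| ' AST=' || AST || ' MCU=' || NVL(MCU, 'NULL'));
RETURN NULL; -- placeholder; RAISE again once the offending rows are found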