I am trying to create a hash with employee_id (NUMBER(6,0)) as the key and salary (NUMBER(8,2)) as the value.
For that I have created an INDEX BY table (associative array) in PL/SQL (Oracle 11g) using the following definition:
TYPE emp_title_hash IS TABLE OF employees.salary%type
INDEX BY employees.employee_id%type;
I am getting the following compilation error:
Error(22,28): PLS-00315: Implementation restriction: unsupported table index type
I am aware that in this case the only supported index types are strings (VARCHAR2) and PLS_INTEGER. This seems really restrictive. Why exactly has this restriction been imposed in Oracle? Is there a workaround to get the above done?
Appreciate your comments / suggestions.
As someone already pointed out, you can use INDEX BY PLS_INTEGER, since PLS_INTEGER can hold any NUMBER(6,0) value.
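In the question's terms, that would be a declaration along these lines (a minimal sketch, assuming the HR-style employees table from the question):

declare
  -- salary values keyed by employee_id; pls_integer easily holds a number(6,0)
  type emp_salary_hash is table of employees.salary%type
    index by pls_integer;
  salaries emp_salary_hash;
begin
  for r in (select employee_id, salary from employees) loop
    salaries(r.employee_id) := r.salary;
  end loop;
end;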
Certainly it would be nice to be able to use any type to index a PL/SQL associative array, but I have always managed by writing a function that builds a string identifying the object instance I want to use as the index value.
So, instead of writing:
type TMyAssociativeArray is table of MyDataType index by MyIndexType;
I write:
Type TMyAssociativeArray is table of MyDataType index by varchar2(30);
then I write a function that calculates a unique string from MyIndexType:
function GetHash(obj MyIndexType) return varchar2;
and, having written this function I can use it to simulate an associative array indexed by MyIndexType object:
so I can write
declare
arr TMyAssociativeArray;
obj MyDataType;
idx MyIndexType;
begin
....
arr(GetHash(idx)) := obj;
end;
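For instance, if MyIndexType were a record holding a department_id and an employee_id (purely as an illustration, not part of the question), the hash function could simply concatenate the parts with a separator:

function GetHash(obj MyIndexType) return varchar2 is
begin
  -- the '|' separator keeps keys unique: (1,23) and (12,3) must not collide
  return to_char(obj.department_id) || '|' || to_char(obj.employee_id);
end;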
Here I am going beyond the strict question you asked and giving you advice about another possible way of obtaining a quick employee->salary lookup cache. That is the ultimate purpose of your associative-array question, as I read from your comment, so maybe this could be useful:
If you are using associative arrays to build a fast look-up mechanism, and if you can use the Oracle 11gR2 new features, an easier way of obtaining this caching is to rely on the native RESULT_CACHE feature, which has been introduced both for queries (via the RESULT_CACHE hint) and for PL/SQL functions.
For PL/SQL functions you can create a function whose result value is cached, like this one:
create or replace function cached_employee_salary(employee number)
return number RESULT_CACHE is
result number;
begin
select salary
into result
from employees e
where e.code = employee;
return result;
end;
The RESULT_CACHE keyword instructs Oracle to keep an in-memory cache of the result values and to reuse them on subsequent calls (when the function is called with the same parameters, of course).
This cache has these advantages compared to the use of associative arrays:
It is shared among all sessions: cached data is not kept in the private memory allocated for each session, so it wastes less memory.
Oracle is smart enough to detect that the function calculates its results by accessing the employees table, and it automatically invalidates the cached results if the table data is modified.
Of course I suggest you run some tests to see whether, in your case, this optimization gives tangible results. It mostly depends on how complex the calculation in your function is.
You can also rely on an analogous feature introduced for SQL queries, triggered by the /*+ RESULT_CACHE */ hint:
select /*+ RESULT_CACHE */ salary
into result
from employees e
where e.code = employee;
return result;
This hint instructs Oracle to store and reuse (on subsequent executions) the result set of the query. It too is kept in memory.
Actually this hint has the added advantage that, since a hint is, syntactically speaking, just a special comment, the query keeps working without any modification even on servers older than 11gR2, whereas for the function cache version you would need some "conditional compilation" magic to make it compile on earlier server versions (where it would be a normal function without any result caching).
I hope this helps.
Related
What I am trying to do is read through a table called transit and, based on that table, fill ord and ord2. If the function executes with no problems, it should return true.
CREATE OR REPLACE FUNCTION new_func
RETURN boolean
IS
BEGIN
INSERT INTO ORD(ordernum,cnum,snum,rec,ship,typ)
SELECT ordernum,cnum,snum,rec,ship,typ
FROM TRANSIT;
INSERT INTO ORD2(ordernum,cnum,snum,rec,ship,typ)
SELECT ordernum,cnum,snum,rec,ship,typ
FROM TRANSIT;
RETURN true;
END;
Oracle provides a convenient statement for this type of action: INSERT ALL. It allows inserting rows into several tables with just one statement (see example fiddle here):
insert all
into ord(ordernum,cnum,snum,rec,ship,typ)
values(ordernum,cnum,snum,rec,ship,typ)
into ord2(ordernum,cnum,snum,rec,ship,typ)
values(ordernum,cnum,snum,rec,ship,typ)
select ordernum,cnum,snum,rec,ship,typ
from transit;
In the above every row would be copied into both tables. This however is not required. The fiddle referenced above splits the rows between the tables by even and odd ordernum, but the split could be any expression that evaluates to true or false, as in the sketch below.
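A minimal sketch of such a conditional split, reusing the columns above (the even/odd rule mirrors the fiddle):

insert all
  when mod(ordernum, 2) = 0 then
    into ord (ordernum, cnum, snum, rec, ship, typ)
    values (ordernum, cnum, snum, rec, ship, typ)
  when mod(ordernum, 2) = 1 then
    into ord2 (ordernum, cnum, snum, rec, ship, typ)
    values (ordernum, cnum, snum, rec, ship, typ)
select ordernum, cnum, snum, rec, ship, typ
from transit;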
Initially my reply was just the above. But reading the comments it became apparent that OP may benefit from seeing examples of why this should be a procedure and not a function. (As could whoever required a function.)
As a developer I automatically assume a function can be called directly from SQL. In this case that is not true. This function can only be called from a PL/SQL block. There are (at least) 2 reasons for this:
There is no boolean data type in SQL; boolean is strictly a PL/SQL data type.
A function called by SQL cannot perform DML.
The referenced fiddle also contains an example of each of these and of what happens when the function is called via SQL and via PL/SQL.
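For illustration, here is a sketch of the same logic written as a procedure rather than a function (the procedure name is illustrative; no return value is needed, and error handling is left to the caller):

CREATE OR REPLACE PROCEDURE copy_transit
IS
BEGIN
  INSERT ALL
    INTO ord (ordernum, cnum, snum, rec, ship, typ)
    VALUES (ordernum, cnum, snum, rec, ship, typ)
    INTO ord2 (ordernum, cnum, snum, rec, ship, typ)
    VALUES (ordernum, cnum, snum, rec, ship, typ)
  SELECT ordernum, cnum, snum, rec, ship, typ
  FROM transit;
END copy_transit;
/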
I want to write pl/sql code which utilizes a Cursor and Bulk Collect to retrieve my data. My database has rows in the order of millions, and sometimes I have to query it to fetch nearly all records on client's request. I do the querying and subsequent processing in batches, so as to not congest the server and show incremental progress to the client. I have seen that digging down for later batches takes considerably more time, which is why I am trying to do it by way of cursor.
Here is what should be simple pl/sql around my main sql query:
declare
cursor device_row_cur
is
select /*my_query_details*/;
type l_device_rows is table of device_row_cur%rowtype;
out_entries l_device_rows := l_device_rows();
begin
open device_row_cur;
fetch device_row_cur
bulk collect into out_entries
limit 100;
close device_row_cur;
end;
I am doing batches of 100, and fetching them into out_entries. The problem is that this block compiles and executes just fine, but doesn't return the data rows it fetched. I would like it to return those rows just the way a select would. How can this be achieved? Any ideas?
An anonymous block can't return anything. You can assign values to a bind variable, including a collection type or ref cursor, inside the block. But the collection would have to be defined, as well as declared, outside the block. That is, it would have to be a type you can use in plain SQL, not something defined in PL/SQL. At the moment you're using a PL/SQL type that is defined within the block, and a variable that is declared within the block too - so it's out of scope to the client, and wouldn't be a valid type outside it either. (It also doesn't need to be initialised, but that's a minor issue).
Depending on how it will really be consumed, one option is to use a ref cursor, and you can declare and display that through SQL*Plus or SQL Developer with the variable and print commands. For example:
variable rc sys_refcursor
begin
open :rc for ( select ... /* your cursor statement */ );
end;
/
print rc
You can do something similar from a client application, e.g. have a function returning a ref cursor or a procedure with an out parameter that is a ref cursor, and bind that from the application. Then iterate over the ref cursor as a result set. But the details depend on the language your application is using.
Another option is to have a pipelined function that returns a table type - again defined at SQL level (with create type) not in PL/SQL - which might consume fewer resources than a collection that's returned in one go.
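A rough sketch of that approach (the type, table, and column names here are illustrative, not from the question):

-- the types must be created at SQL level, not inside PL/SQL
create type device_row as object (device_id number, device_name varchar2(100));
/
create type device_tab as table of device_row;
/
create or replace function get_devices return device_tab pipelined is
begin
  for r in (select device_id, device_name from devices) loop
    pipe row (device_row(r.device_id, r.device_name));
  end loop;
  return;
end;
/
-- then consumable from plain SQL:
select * from table(get_devices);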
But I'd have to question why you're doing this. You said "digging down for later batches takes considerably more time", which sounds like you're using a paging mechanism in your query, generating a row number and then picking out a range of 100 within that. If your client/application wants to get all the rows then it would be simpler to have a single query execution but fetch the result set in batches.
Unfortunately without any information about the application this is just speculation...
I studied this excellent paper on optimizing pagination:
http://www.inf.unideb.hu/~gabora/pagination/article/Gabor_Andras_pagination_article.pdf
I used technique 6 mainly. It describes how to limit the query to fetch page x and onward. For added improvement, you can limit it further to fetch page x alone. If used right, it can bring a performance improvement by a factor of 1000.
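For reference, the classic ROWNUM-based pattern for fetching a single page (not necessarily the paper's exact technique 6, just the general shape; table and column names are illustrative) looks like:

select *
from (
  select d.*, rownum as rn
  from (
    select device_id, device_name
    from devices
    order by device_id
  ) d
  where rownum <= :page_end   -- upper bound first, so Oracle can stop fetching early
)
where rn > :page_start;       -- then trim off the rows before the requested page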
Instead of returning custom table rows (which is very hard, if not impossible, to interface with Java), I ended up opening a sys_refcursor in my PL/SQL, which can be consumed like this:
OracleCallableStatement stmt = (OracleCallableStatement) connection.prepareCall(sql);
stmt.registerOutParameter(someIndex, OracleTypes.CURSOR);
stmt.execute();
resultSet = stmt.getCursor(someIndex);
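On the PL/SQL side, the matching piece is simply a procedure (or function) exposing a sys_refcursor OUT parameter, along these lines (names and the query are illustrative):

create or replace procedure get_devices_page(p_results out sys_refcursor) is
begin
  open p_results for
    select device_id, device_name
    from devices;   -- the actual paged query goes here
end;
/
-- from Java the call string would be something like "begin get_devices_page(?); end;"
-- with the single parameter registered as OracleTypes.CURSOR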
Consider a deterministic function like:
CREATE OR REPLACE FUNCTION SCHEMA.GET_NAME(ss_id nvarchar2
) RETURN nvarchar2 DETERMINISTIC IS
tmpVar nvarchar2(500);
BEGIN
select name into tmpvar from logistics.organization_items
where id = ss_id ;
return tmpvar ;
END get_name;
Using Toad I called SCHEMA.GET_NAME(1) and it returned A. I then changed the value in the table from A to B, and calling SCHEMA.GET_NAME(1) again returned B.
That is the result I want, but I'm afraid the value might not always be updated, because of this passage in the documentation:
When Oracle Database encounters a deterministic function in one of these contexts, it attempts to use previously calculated results when possible rather than reexecuting the function. If you subsequently change the semantics of the function, you must manually rebuild all dependent function-based indexes and materialized views.
In what situations would the value of GET_NAME(1) return an old cached value (A instead of B)?
If you select from a table then the results of your function are not deterministic. A deterministic system is one that will always produce the same output, given the same initial conditions.
It is possible to alter the information in a table, therefore a function that selects from a table is not deterministic. To quote from the PL/SQL Language Reference:
Do not specify this clause to define a function that uses package variables or that accesses the database in any way that might affect the return result of the function. The results of doing so are not captured if the database chooses not to reexecute the function.
In other words, Oracle does not guarantee that the results of the function will be accurate (they just might be). If your table is static, and unlikely to ever change, then it should be okay but this is not something I'd ever like to rely on. To answer your question, do not assume that Oracle will return anything other than the cached value within the same transaction/session.
If you need to speed this up there are two ways. Firstly, check that you have an index on ID!
Simply JOIN to this table. If your function is only this then there is no need for the function to exist.
Use scalar sub-query caching (not necessarily possible but worth a try):
select ( select get_name(:id) from dual )
from your_table
Oracle will create an in-memory hash of the function's results, much like a result cache. If you're executing the same function multiple times with the same argument, Oracle will hit the cache rather than re-executing the function.
Ben's answer sums it up nicely, and I would just like to add that the way you used the DETERMINISTIC keyword in your function is not right, given that you are reading the value from a table and then returning it to the user.
A deterministic function should be used in cases where you are evaluating an expression over a fixed input, for example when you need to return a substring, or the upper/lower case of the input string. Programmatically, you know that for the same input the lowercase function will always return the same value, so you can safely cache the result (using the DETERMINISTIC keyword).
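For example, a function that genuinely qualifies as deterministic computes its result purely from its arguments (a trivial sketch):

create or replace function normalized_name(p_name in varchar2)
  return varchar2 deterministic
is
begin
  -- same input always yields the same output, so caching the result is safe
  return upper(trim(p_name));
end;
/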
When you read a value from a table, Oracle has no way to know that the value in the column has not changed, so it prefers to reexecute the function rather than depend on the cached result (which makes sense).
Can you add a timestamp parameter to your function? Then pass in sysdate to the function from wherever you're calling it.
This way, you're effectively caching the result and you avoid running the function over and over when it generally returns the same value within a given transaction.
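A hedged sketch of that idea, based on the function from the question: the extra parameter is never used in the lookup, it only makes the cache key time-dependent, so a cached value cannot outlive the granularity of the timestamp you pass in.

create or replace function get_name_asof(ss_id in nvarchar2, as_of in date)
  return nvarchar2 deterministic
is
  tmpvar nvarchar2(500);
begin
  -- as_of is deliberately ignored here; it only partitions the deterministic cache by time
  select name into tmpvar
  from logistics.organization_items
  where id = ss_id;
  return tmpvar;
end;
/
-- call it e.g. as get_name_asof(1, trunc(sysdate, 'MI')) for minute-level staleness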
Erez's remark is the answer I was looking for.
With this solution, before executing the query or PL/SQL unit you can force the function to be executed again by resetting the function's cached return values (e.g. by changing a package variable).
I use this for:
select ...
from big_table_vw;
where the view is defined as:
create view big_table_vw
as
select ... (analytical functions)
from big_table
where last_mutated >= get_date();
In my case big_table_vw contains window functions that prevent Oracle from pushing the predicate into the view.
This is a late follow-up to a long-answered question, but I just wanted to add that Oracle does provide a caching mechanism for functions with mutable dependencies. RESULT_CACHE is an alternative to DETERMINISTIC that allows Oracle to abandon cached function results any time a referenced object is modified.
This way one can cache costly calculations against rarely-updated objects, with confidence that the cached results will not be incorrect.
Here's an example using mythological monsters:
CREATE TABLE MONSTER (
MONSTER_NAME VARCHAR2(100) NOT NULL PRIMARY KEY
);
INSERT INTO MONSTER VALUES ('Chthulu');
INSERT INTO MONSTER VALUES ('Grendel');
INSERT INTO MONSTER VALUES ('Scylla');
INSERT INTO MONSTER VALUES ('Nue');
COMMIT;
CREATE OR REPLACE PACKAGE MONSTER_PKG
IS
FUNCTION IS_THIS_A_MONSTER(P_MONSTER_NAME IN VARCHAR2)
RETURN BOOLEAN RESULT_CACHE;
END MONSTER_PKG;
/
CREATE OR REPLACE PACKAGE BODY MONSTER_PKG
IS
FUNCTION IS_THIS_A_MONSTER(P_MONSTER_NAME IN VARCHAR2)
RETURN BOOLEAN
RESULT_CACHE RELIES_ON (MONSTER)
IS
V_MONSTER_COUNT NUMBER(1, 0) := 0;
BEGIN
SELECT COUNT(*)
INTO V_MONSTER_COUNT
FROM MONSTER
WHERE MONSTER_NAME = P_MONSTER_NAME;
RETURN (V_MONSTER_COUNT > 0);
END;
END MONSTER_PKG;
/
When a scenario like the one below occurs, any existing cache is invalidated and a new cache can then be built.
BEGIN
DBMS_OUTPUT.PUT_LINE('Is Kraken initially a monster?');
IF MONSTER_PKG.IS_THIS_A_MONSTER('Kraken')
THEN
DBMS_OUTPUT.PUT_LINE('Kraken is initially a monster');
ELSE
DBMS_OUTPUT.PUT_LINE('Kraken is not initially a monster');
END IF;
INSERT INTO MONSTER VALUES ('Kraken');
COMMIT;
DBMS_OUTPUT.PUT_LINE('Is Kraken a monster after update?');
IF MONSTER_PKG.IS_THIS_A_MONSTER('Kraken')
THEN
DBMS_OUTPUT.PUT_LINE('Kraken is now a monster');
ELSE
DBMS_OUTPUT.PUT_LINE('Kraken is not now a monster');
END IF;
END;
/
Is Kraken initially a monster?
Kraken is not initially a monster
Is Kraken a monster after update?
Kraken is now a monster
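If you want to watch the cache being populated and invalidated, you can query the result cache dictionary view (appropriate privileges required); entries whose dependencies have changed no longer show a published status:

select name, status
from v$result_cache_objects
where name like '%IS_THIS_A_MONSTER%';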
It is my understanding that you cannot use a collection in a WHERE clause unless it is defined at the database level. I have a distinct dislike for random type definitions lying about a schema. It's a religious thing, so don't try to dissuade me.
Types contained within a package are cool because they are easily found and are related to the work at hand. Having said that, I have a package that defines a structure (currently a table-type collection) that looks like:
TYPE WORD_LIST_ROW IS RECORD(
WORD VARCHAR(255));
TYPE WORD_LIST IS TABLE OF WORD_LIST_ROW;
There is a routine in the package that instantiates and populates an instance of this. It would be useful to be able to use the instantiated object, or some analog thereof, in a WHERE clause.
So, being the clever (or so I thought) programmer, I said: why don't I just create a pipelined function to make a table from the collection? Which I did, and it looks like:
FUNCTION WORD_LIST_TABLE(IN_WORD_LIST WORD_LIST) RETURN WORD_LIST PIPELINED
AS
OUT_WORD_LIST WORD_LIST := WORD_LIST();
BEGIN
FOR I IN 1 .. IN_WORD_LIST.COUNT
LOOP
PIPE ROW(IN_WORD_LIST(I));
END LOOP;
RETURN;
END WORD_LIST_TABLE;
Then in another routine I call the function that builds the collection, and finally I use the pipelined function, with the collection as input, in a cursor's WHERE clause, sort of like this:
cursor xyz
is
select * from x_stuff where fieldA in (select word from table(word_list_table(temp_word_list)));
In the loop for the cursor I get the Oracle error ORA-21700: object does not exist or is marked for delete.
Is there any easy way to build an oracle object that can be used in an Oracle where clause? Basically what I would like to do is;
select * from whatever where fielda in myobject;
The solution is simple: declare the type at schema level using a CREATE TYPE statement, and you will be able to use your collections in SQL statements inside PL/SQL blocks.
If you have declared your TYPE inside a PL/SQL package you cannot use it in your queries inside PL/SQL blocks.
Also, keep in mind that as of Oracle 11.2 only varray and nested table collections can be used in queries; you cannot use associative arrays in queries. In 12c these restrictions are lifted.
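A minimal sketch of the schema-level alternative, reusing the (approximate) names from the question; the type is now declared in the schema instead of the package, and the literal values are just placeholders:

create or replace type word_list as table of varchar2(255);
/
declare
  temp_word_list word_list := word_list('alpha', 'beta');
begin
  for r in (select *
            from x_stuff
            where fieldA in (select column_value from table(temp_word_list)))
  loop
    null; -- process each matching row here
  end loop;
end;
/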
For further reference go to Oracle Docs.
I'm writing some stored functions in Oracle. One of them is a really basic function that takes a string as a parameter and returns another string. Here is my function:
CREATE OR REPLACE
FUNCTION get_mail_custcode (
custcodeParam IN customer_table.custcode%TYPE)
RETURN VARCHAR2
IS
mail_rc contact_table.email%TYPE;
BEGIN
SELECT cc.email
INTO mail_rc
FROM contact_table cc, customer_table cu
WHERE cu.customer_id = cc.customer_id
AND cu.custcode like custcodeParam ;
RETURN mail_rc ;
END;
So it's not working. The function seems to compile fine, but it executes without ever ending. It just runs and runs, and I manually cancel the operation after 2 or 3 minutes (this query normally gives an instant result).
After writing the query again and again, I finally (and randomly) changed the cu.custcode like custcodeParam into cu.custcode = custcodeParam, and it works!
So my question is: why? Why can't I use a LIKE comparison in a stored function? Why does it raise no error, yet run indefinitely?
Thanks.
Cursors are all treated identically in Oracle. A query in a function will be treated exactly the same as a query you enter manually through SQL*Plus.
However, what may differ in your example is how Oracle works with variables. The following two queries are fundamentally different to the optimizer:
SELECT * FROM tab WHERE code LIKE 'FOO%';
and
variable v_code VARCHAR2(4)
EXEC :v_code := 'FOO%';
SELECT * FROM tab WHERE code LIKE :v_code;
In the first case the optimizer looks at the constant FOO% and can instantly tell that an index on code is perfectly suited to retrieve the rows rapidly via an index RANGE SCAN.
In the second case, the optimizer has to consider that :V_CODE is not constant. The purpose of the optimizer is to determine a plan for a query that will be shared by successive executions of the same query (because computing a plan is expensive).
The behaviour of the optimizer will depend upon your version of Oracle:
In old Oracle versions (9i and before), the value of the variable was ignored when building the plan. In effect Oracle had to build a plan that would be efficient whatever value was passed to it. In your case this would likely result in a full scan, because Oracle had to take the least risky option and consider that FOO% was as likely a value as %FOO (the latter can't be accessed efficiently via an index range scan).
In 10g Oracle introduced bind peeking: now the optimizer can access the value of the variable and produce a suitable plan. The main problem is that in most cases a query can only have one plan, which means that the value of the first variable ever passed to the function will force the execution plan for all further executions. If the first value to be passed is %FOO, a FULL SCAN will likely be chosen.
In 11g, Oracle has adaptive cursor sharing: a single query can share more than one plan, so in the example above the value FOO% would use a RANGE SCAN while %FOO would probably use a FULL SCAN.
What version of Oracle are you using?
Update:
In 10g and before, if this function is often used without wildcards, you should rewrite it to acknowledge the optimizer's behaviour:
BEGIN
IF instr(custcodeParam, '%') > 0 OR instr(custcodeParam, '_') > 0 THEN
SELECT cc.email
INTO mail_rc
FROM contact_table cc, customer_table cu
WHERE cu.customer_id = cc.customer_id
AND cu.custcode LIKE custcodeParam;
ELSE
SELECT cc.email
INTO mail_rc
FROM contact_table cc, customer_table cu
WHERE cu.customer_id = cc.customer_id
AND cu.custcode = custcodeParam;
END IF;
RETURN mail_rc;
END;