Oracle EXECUTE IMMEDIATE using references to multiple variables

I'm using Oracle Rest Data Services to build an app.
I can easily read and write with something like GET http://example.com/foo/bar, which runs the query SELECT * FROM bar, or
POST http://example.com/foo/bar
{
"first": "a'b",
"second": "c,d"
}
which runs the query INSERT INTO bar (first, second) VALUES (:first, :second), where the query parameters are bound from the request body.
Now, I'd like to build a route that runs a dynamic query.
I can do that with one bind parameter, e.g.:
POST http://example.com/foo/query
{
"query": "DELETE FROM bar WHERE first = :param",
"param": "a'b"
}
that runs the query
BEGIN EXECUTE IMMEDIATE :query USING :param; END;
But I don't know how to do it with multiple parameters. For example:
POST http://example.com/foo/query
{
"query": "DELETE FROM bar WHERE first = :first AND second = :second",
"bindings": "first,second",
"first": "a'b",
"second": "c,d"
}
The query should be something like
DECLARE
params ...? -- (the params variable should build the USING list from the :bindings request param)
BEGIN
EXECUTE IMMEDIATE :query USING params;
END;
Any idea?

It's only a small change from your previous example. See this example from the docs for more info.
BEGIN EXECUTE IMMEDIATE :query USING :first, :second; END;
As a warning, the bind variable names in the query don't need to match the names in the EXECUTE IMMEDIATE block - they're matched by the order in which they occur, not by name. So in this example:
:query := 'DELETE FROM bar WHERE first = :a and second = :b';
execute immediate :query using :my_var1, :my_var2;
The value of the PL/SQL variable my_var1 is assigned to the SQL bind variable :a, since both come first. It can get more complicated if you want to repeat variable names... but I think that's enough to answer your question.
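To make the positional matching concrete, here is a small Python sketch (purely illustrative, outside the database): it pairs the :name placeholders with the USING values strictly by position, the way EXECUTE IMMEDIATE does, ignoring the names themselves.

```python
import re

def pair_binds(sql, values):
    # Collect placeholder names in order of appearance and pair them
    # with the supplied values strictly by position, ignoring names.
    names = re.findall(r":(\w+)", sql)
    return list(zip(names, values))

pairs = pair_binds("DELETE FROM bar WHERE first = :a AND second = :b",
                   ["a'b", "c,d"])
# :a receives the first value and :b the second, whatever they are called
print(pairs)
```

Renaming :a and :b changes nothing about which value each one receives; only their order in the statement matters.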

Related

How to give Sequel#execute a block on a simple select?

I'm executing a simple query via Sequel like so:
result = db.execute("SELECT some_int_colum FROM some_system_table WHERE some_column = 'some_value';")
In a psql session, it returns the expected value but when run through the Sequel Postgres adapter it returns the number of resulting rows, not the value.
From the source (reference):
# Execute the given SQL with this connection. If a block is given,
# yield the results, otherwise, return the number of changed rows.
That clearly explains the why, but how is a block given to the execute method in this scenario?
Database#fetch (reference) is a more appropriate method for executing arbitrary SQL and returning a single value or a set of values. In the above example, where only a single return value is expected, it would look something like this:
result = db.fetch("SELECT some_int_colum FROM some_system_table WHERE some_column = 'some_value'").all.first.values.first
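The docstring's rule ("yield the results if a block is given, otherwise return the number of changed rows") can be sketched in Python; this execute and its canned row are hypothetical stand-ins for the Sequel adapter, not its real code:

```python
def execute(sql, block=None):
    # Stand-in for the driver result; the real adapter talks to Postgres.
    rows = [{"some_int_colum": 42}]
    if block is not None:
        # A block was given: yield the result set to it.
        return block(rows)
    # No block: only the number of rows is reported.
    return len(rows)

print(execute("SELECT ..."))                                      # row count
print(execute("SELECT ...", lambda rows: rows[0]["some_int_colum"]))  # the value
```

This is why the bare `db.execute` call returned a row count: without the block, the value itself is never surfaced.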

What is the name of the 'resultset' argument in an ADODB.Command calling an Oracle stored procedure in Classic.ASP?

I have the pleasure of maintaining a legacy application using Classic.ASP for the frontend and an Oracle database for the backend.
We have an ongoing issue where we need to routinely update queries like the following to use an ever-increasing value for the 'resultset' parameter:
Set cmdStoredProc = Server.CreateObject("ADODB.Command")
cmdStoredProc.CommandText = "{call package_name.Procedure_Name(?,{resultset 1500, v_out_one, v_out_two})}"
It started at 500, then a bug fix made it 1000, then 1500, and now it has become an issue again on my watch.
Rather than follow in my predecessor's footsteps and arbitrarily increase it I'd like to know as much as possible about this feature but am struggling to find any documentation on it.
Is there a specific name given to this feature / argument / parameter? Knowing this should be enough to allow me to find out more about it but a brief explanation of it or link to documentation on it would be advantageous.
From the comments / answers it has become apparent that having the definition of the procedure that is being called could be useful:
PROCEDURE Procedure_Name
(n_site_id_in IN TABLENAME.site_org_id%TYPE,
v_out_one OUT t_c_out_one,
v_out_two OUT t_c_out_two)
IS
--Select the CC and account code and descriptions into a cursor
CURSOR c1 IS
SELECT a.out_one,
a.out_two
FROM TABLENAME a
WHERE a.site_org_id = n_site_id_in
ORDER BY a.out_one, a.out_two;
i INTEGER DEFAULT 1;
BEGIN
FOR get_c1 IN c1 LOOP
v_out_one(i) := get_c1.out_one;
v_out_two(i) := get_c1.out_two;
i := i + 1;
END LOOP;
EXCEPTION
WHEN NO_DATA_FOUND THEN
DBMS_OUTPUT.PUT_LINE('no data found');
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('sqlerrm '||SQLERRM);
RAISE;
END Procedure_Name;
From this we can see the procedure has 3 parameters defined, 1 IN and 2 OUT, yet the call to the procedure seems to convert the 2 OUT parameters to a collection based on resultset.
The driver in use is 'Microsoft ODBC for Oracle' (MSORCL32.DLL)
Your procedure package_name.Procedure_Name must return a cursor as an OUT parameter.
This resultset parameter makes me think of a parameter defining the number of cursors that can be open at the same time.
In fact it does not seem to be the right way of doing things, because it means that each time the procedure is called, the cursor is not closed.
In your code you must have something like
Set myRecordSet = cmdStoredProc.Execute()
This recordset is used to read the cursor content.
Please check that it is closed after usage with
myRecordSet.Close()
Set myRecordset = Nothing
The 'resultset' argument does not have any special name; it is just known as the resultset parameter.
There are multiple ways it can be used:
Return all the columns in a single result set (as it currently is):
Set cmdStoredProc = Server.CreateObject("ADODB.Command")
cmdStoredProc.CommandText = "{call package_name.Procedure_Name(?,{resultset 1500, v_out_one, v_out_two})}"
Return each column as a single result set (to return 2 separate result sets):
Set cmdStoredProc = Server.CreateObject("ADODB.Command")
cmdStoredProc.CommandText = "{call package_name.Procedure_Name(?,{resultset 1500, v_out_one}, {resultset 1500, v_out_two})}"
Read more about it here: https://learn.microsoft.com/en-us/sql/odbc/microsoft/returning-array-parameters-from-stored-procedures
As assumed, it is used to set a limit on the number of records that can be returned from the procedure call.
The definition of the procedure shows that it returns 2 arrays as output, so an error will be thrown if either of them exceeds the limit set in the resultset parameter.
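As an illustration of why the number keeps needing to be raised: the resultset N argument behaves like a fixed-size buffer for the OUT arrays. This hypothetical Python sketch (not the actual driver) mimics that overflow behavior:

```python
def fetch_resultset(rows, limit):
    # Mimic the {resultset N, ...} escape: room is allocated for at most
    # `limit` rows, and overflow raises instead of growing the buffer.
    if len(rows) > limit:
        raise OverflowError(
            f"procedure returned {len(rows)} rows, resultset limit is {limit}")
    return rows

fetch_resultset(list(range(1400)), 1500)   # fits within the limit
# fetch_resultset(list(range(1600)), 1500) would raise OverflowError
```

Every time the underlying table grows past the hard-coded limit, the call starts failing again, which matches the 500 → 1000 → 1500 history described above.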

Cannot clear one of the page items via dynamic action

I have a couple of text fields that are filled from the database when the value of a select list changes.
I added another action to the list-change dynamic action to execute PL/SQL code:
IF :P2_SELECT_LIST1 LIKE '%ABC%' AND :P2_NAME = 'WWW' THEN
:P2_NAME := NULL;
END IF;
Nothing happens on the page when I change the value of the select list, but the session value of P2_NAME gets cleared.
I also tried:
IF :P2_SELECT_LIST1 LIKE '%ABC%' AND :P2_NAME = 'WWW' THEN
:P2_NAME := '';
END IF;
But I got the same result.
In this dynamic action there are two fields next to your PL/SQL code:
Items to Submit: list the items here whose session values your PL/SQL code reads (if necessary; the values are usually already in session).
Items to Return: list the items here whose values need to be refreshed on the HTML page after your PL/SQL code changes them.
I think this solves the problem.

Update document after printing result

I'm trying to retrieve a list of documents, do something with the returned documents, then update their status to flag that they've been processed. This is what I have:
cursor = r.db("stuff").table("test").filter(r.row["subject"] == "books").run()
for document in cursor:
    print(document["subject"])
    document.update({"processed": True})
This seems to run OK but the "processed" field does not get updated as I would have expected. I'm probably approaching this incorrectly, so any pointers would be appreciated here.
UPDATE
This seems to work OK, but I can't help thinking it's somewhat inefficient:
cursor = r.db("stuff").table("test").filter(r.row["subject"] == "books").run()
for document in cursor:
    print(document["subject"])
    r.db("certs").table("test").get(document['id']).update({"tpp_processed": True}).run()
1. Using a for_each
Instead of doing an update with a run every time you want to update a single document, you can save the changes in an array and then use for_each to update all the documents in one query. This would look something like this:
cursor = r.table('30693613').filter(r.row["subject"] == "book").run(conn)
arr = list(cursor)
for row in arr:
    row['processed'] = True
r.expr(arr) \
    .for_each(lambda row: r.table('30693613').get(row["id"]).update(row)) \
    .run(conn)
Instead of doing N network calls, one per update, this executes a single network call.
2. Building an update array and using for_each
You can also do something similar where you build an array of just the changed fields and run one query at the end:
cursor = r.db("stuff").table("test").filter(r.row["subject"] == "books").run()
updated_rows = []
for document in cursor:
    print(document["subject"])
    updated_rows.append({"id": document["id"], "tpp_processed": True})
# Afterwards...
r.expr(updated_rows) \
    .for_each(lambda row: r.table('30693613').get(row["id"]).update(row)) \
    .run(conn)
3. Using noreply
Finally, you can keep your query exactly as it is and just run it with noreply. That way, your code keeps running without waiting for the database to send back a response.
cursor = r.db("stuff").table("test").filter(r.row["subject"] == "books").run()
for document in cursor:
    print(document["subject"])
    r.db("certs").table("test").get(document['id']).update({"tpp_processed": True}).run(conn, noreply=True)

Tuning query with VARCHAR2 column

There is a stored procedure that builds a dynamic query string and then executes it. The SP works fine in the development and testing environments, but the DBA of the client company has informed us that this query is hitting the database really hard in production. The IT area has asked us to tune the query. So far so good: we've moved almost all of this SP from building the query string dynamically into a single big query that performs really fast (compared to the old query).
We have found (among other things) that the SP built the WHERE clause of the query string by evaluating whether a parameter has a default value or a real value, i.e.:
IF P_WORKFLOWSTATUS <> 0 THEN
L_SQL := TRIM(L_SQL) || ' AND WORKFLOW.STATUS = ' || TO_CHAR(P_WORKFLOWSTATUS);
END IF;
So we optimized this behavior to
WHERE
...
AND (WORKFLOW.STATUS = P_WORKFLOWSTATUS OR P_WORKFLOWSTATUS = 0)
This kind of change has improved the parts of the query involving numeric columns, but we have found a problem with a VARCHAR2 parameter and column. The current behavior is:
--CLIENT.CODE is a VARCHAR2(14) column and there is an unique index for this column.
--The data stored in this field is like 'N0002077123', 'E0006015987' and similar
IF NVL(P_CLIENT_CODE, '') <> '' THEN
L_SQL := TRIM(L_SQL) || ' AND CLIENT.CODE = ''' || P_CLIENT_CODE || '''';
END IF;
We tried to change this to our optimized version of the query by doing
WHERE
...
AND (CLIENT.CODE = P_CLIENT_CODE OR NVL(P_CLIENT_CODE, '') = '')
but this change made the query lose performance. Is there a way to optimize this part of the query, or should we turn our big query back into a dynamic query just to evaluate whether this VARCHAR2 parameter should be added to the WHERE clause?
Thanks in advance.
Oracle treats empty strings '' as NULL, so the condition NVL(P_CLIENT_CODE, '') = '' doesn't really make much sense. Moreover, it will always be false, because it checks two NULLs for equality, which never evaluates to true. To that end you probably should recode that part of the query as:
WHERE
...
AND ( (CLIENT.CODE = P_CLIENT_CODE) OR (P_CLIENT_CODE IS NULL) )
I recommend either moving these VARCHAR2 parameters back to dynamic SQL, or using the following:
WHERE
...
AND CLIENT.CODE = nvl(P_CLIENT_CODE,CLIENT.CODE)
and be sure you have an index on CLIENT.CODE (or the table partitioned on CLIENT.CODE, if possible).
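The semantics of that NVL trick can be illustrated in plain Python (a hypothetical sketch, not Oracle itself): with a NULL parameter every row satisfies the predicate, and with a real value only the matching code does.

```python
def nvl(value, default):
    # Oracle-style NVL: substitute default when value is NULL (None here).
    return default if value is None else value

# Sample rows shaped like the CLIENT.CODE column described above
rows = [{"code": "N0002077123"}, {"code": "E0006015987"}]

def matches(row, p_client_code):
    # CLIENT.CODE = NVL(:P_CLIENT_CODE, CLIENT.CODE)
    return row["code"] == nvl(p_client_code, row["code"])

print([r["code"] for r in rows if matches(r, None)])           # every row
print([r["code"] for r in rows if matches(r, "N0002077123")])  # just that client
```

Collapsing the two cases into one predicate is what lets the statement stay static instead of being rebuilt dynamically per call.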
Of course, as has already been said, you need to perform correct NULL checks.
However, the trick is, the difference between
AND (CLIENT.CODE = P_CLIENT_CODE OR NVL(P_CLIENT_CODE, '') = '')
and
AND ( (CLIENT.CODE = P_CLIENT_CODE) OR (P_CLIENT_CODE IS NULL) )
is very unlikely to cause performance problems by itself. I would even say that the query with the second clause could perform worse than with the first one, as it will yield true for more rows, resulting in a larger result set for subsequent joins/sorts/filters etc.
I'd bet that adding this clause to your query somehow breaks its optimal execution plan. For instance, with stale statistics the optimizer could make a sub-optimal decision and choose an unselective index on CLIENT.CODE instead of other available indexes.
However, it is hard to tell for sure without seeing the actual execution plan of the slow query (not the expected one, which you obtain with the EXPLAIN PLAN command!) and your table structure.
