PreparedStatement - set param to DEFAULT (keyword) - JDBC

Although this question seems to be close to this one, it is actually different.
Question
Is there any way to specify DEFAULT value as a parameter in JDBC's PreparedStatement?
Use-case
I'd like to have a single statement used for several inserts (or batch) into the table having some column defined as, say:
updated TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
Now, assume that I get a non-uniform set of entries to insert: some of them DO have a value for that column, while others DON'T (effectively relying on the DB to generate it).
Instead of a 'divide and conquer' pattern (which obviously may become exponentially complex if there are more columns like this), I'm looking to run the same PreparedStatement in a single batch, while specifying the DEFAULT value for all those entries that DON'T have the required values.

Well, it seems that the comment by @a_horse_with_no_name is straightforwardly to the point.
I've gone over the PreparedStatement Java 9 docs again and found no hint of anything even close to this.
What's missing is the ability to set a parameter to a DB function/keyword like DEFAULT, CURRENT_TIMESTAMP, etc., but that's the state of PreparedStatement as of now.
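Since there is no setter for the DEFAULT keyword, the usual fallback is a second PreparedStatement that omits the column entirely, so the database applies its default. A minimal sketch (the entries table and the Map-based row source are made-up assumptions):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.Map;

public class DefaultOrValueBatch {

    // Inserts into a hypothetical table:
    //   entries(name VARCHAR NOT NULL,
    //           updated TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP)
    // Rows whose timestamp is null are routed to a statement that omits
    // the column, so the DB fills in its DEFAULT.
    static void insertAll(Connection con, Map<String, Timestamp> rows)
            throws SQLException {
        try (PreparedStatement withValue = con.prepareStatement(
                 "INSERT INTO entries (name, updated) VALUES (?, ?)");
             PreparedStatement useDefault = con.prepareStatement(
                 "INSERT INTO entries (name) VALUES (?)")) {

            for (Map.Entry<String, Timestamp> row : rows.entrySet()) {
                if (row.getValue() != null) {
                    withValue.setString(1, row.getKey());
                    withValue.setTimestamp(2, row.getValue());
                    withValue.addBatch();
                } else {
                    useDefault.setString(1, row.getKey());
                    useDefault.addBatch();
                }
            }
            // Two batches instead of one, but each statement stays uniform.
            withValue.executeBatch();
            useDefault.executeBatch();
        }
    }
}

With n such columns this does grow to 2^n statement variants, which is exactly the 'divide and conquer' blow-up described above; within the limits of PreparedStatement there seems to be no way around it.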

Related

avoid triggering auto-update and calculated fields when updating FileMaker data through JDBC

I have noticed two things I want to avoid when updating data in my FileMaker solution through JDBC:
field calculations are run while inserting/updating data, some of which considerably slow down the update process
last-modified fields are changed (but created fields are not, even on INSERT)
So I am looking for a way to update/insert through JDBC without triggering script triggers or the "changed on/by" auto-calculation (because I am merging data from another DB and want the change fields to represent the actual last change, not the copy).
For case #2 I have tried both the built-in checkbox for changing the field when edited, and the calculated-field solution with Let ( trigger = GetField ( "" ) ; If ( $$SilentSync > 0 ; Self ; Get ( CurrentDate ) ) ), as was answered, e.g., in my related question about avoiding auto-calculations when working in the solution itself. Sadly, both get triggered (and the global-variable solution doesn't avoid it) when using JDBC.
Is there a way to say "don't update this field when I change it via JDBC"? Either globally or with an improved field calculation? I've searched the official JDBC guide and Google and found nothing.
For example, what would help is if using a calculated field I can somehow determine that data was changed not through the FM solution, but through JDBC.
For anyone finding this question in the future:
The solution is actually fairly simple: modify the code snippet I posted to check for the account name instead of a global variable. As I have made a special user for JDBC access, that works well.
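For example, a hedged sketch of the adjusted auto-enter calculation ("jdbc_sync" stands in for whatever account name you created for the JDBC user):

Let ( trigger = GetField ( "" ) ;
  If ( Get ( AccountName ) = "jdbc_sync" ; Self ; Get ( CurrentDate ) )
)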

Something about the optimizer

I created a database and connected to it. But when I execute
select optimizer;
it returns
SELECT: identifier 'optimizer' unknown
What's the problem? Also, I can't find the sys tables in the database using \d.
If I want to add an optimizer myopt, are the steps below enough?
Write opt_myopt.h and opt_myopt.c in /monetdb5/optimizer/
Add the code into the codes[] array in /monetdb5/optimizer/opt_wrapper.c
Add the function into optimizer_init_funcs in /monetdb5/optimizer/optimizer.c
Add a new pipe in /monetdb5/optimizer/opt_pipes.c
Since Oct2020, variables have a schema (to keep them consistent with other SQL objects). In your session, 'sys' is not the session's schema; that's why it cannot find the 'optimizer' variable, and the same goes for the tables.
In the default branch (which will be available in the next release) I added a "schema path" property on the user to search SQL objects beyond the current session's schema. By default it includes the 'sys' schema.
For your first question: if your current_schema is not sys, you need to use select sys.optimizer;.
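For instance (a small sketch, assuming an mclient session whose current schema is not sys):

SELECT sys.optimizer;  -- qualify the variable with its schema
SET SCHEMA sys;        -- or switch the session schema first
SELECT optimizer;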
For your second question: the best existing example is probably in monetdb5/extras/mal_optimizer_template. Next to that, it's basically a matter of checking the source code to see how other optimisers have been implemented. NB: although it doesn't happen often, the internals of MonetDB can change between (major) versions. I'd recommend using Oct2020 or newer.
Concerning your second question,
You also have to create and add an optimizer pipeline to opt_pipes.c. Look for the default_pipe and then copy/paste that one to a new pipeline and add your optimizer to it.
There are some more places where you might need to add your optimizer, like the codes[] array in opt_wrapper.c. Just mimic one of the standard optimizers like "reorder".

Is it possible to traverse rowtype fields in Oracle?

Say I have something like this:
somerecord SOMETABLE%ROWTYPE;
Is it possible to access the fields of somerecord without knowing the field names?
Something like somerecord[i], such that the order of fields would be the same as the column order in the table?
I have seen a few examples using dynamic SQL but I was wondering if there is a cleaner way of doing this.
What I am trying to do is generate/get the DML (insert query) for a specific row in my table, but I haven't been able to find anything on this.
If there is another way of doing this I'd be happy to use it, but I would also be very curious to know how to do the former part of this question - it's more versatile.
Thanks
This doesn't exactly answer the question you asked, but might get you the result you want...
You can query the USER_TAB_COLUMNS view (or the other similar *_TAB_COLUMN views) to get information like the column name (COLUMN_NAME), position (COLUMN_ID), and data type (DATA_TYPE) on the columns in a table (or a view) that you might use to generate DML.
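For example, a sketch along these lines (SOMETABLE is a placeholder for your table name):

SELECT column_name, column_id, data_type
  FROM user_tab_columns
 WHERE table_name = 'SOMETABLE'
 ORDER BY column_id;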
You would still need to use dynamic SQL to execute the generated DML (or at least generate static SQL separately).
However, this approach won't work for identifying the columns in an arbitrary query (unless you create a view of it). If you need that, you might need to resort to DBMS_SQL (or other tools).
Hope this helps.
As far as I know there is no clean way of referencing record fields by their index.
However, if you have a lot of different kinds of updates of the same table, each with its own column set to update, you might want to avoid dynamic SQL and look in the direction of statically populating your record with values and then issuing update someTable set row = someTableRecord where someTable.id = someTableRecord.id;.
This approach has its own drawbacks (like updating every column, even unchanged ones, and thus creating additional redo log data), but I believe it should be considered.
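A hedged sketch of that pattern (a someTable with columns id and name is assumed):

DECLARE
  someTableRecord someTable%ROWTYPE;
BEGIN
  SELECT * INTO someTableRecord FROM someTable WHERE id = 1;
  someTableRecord.name := 'new value';        -- statically populate fields
  UPDATE someTable SET ROW = someTableRecord  -- writes every column back
   WHERE id = someTableRecord.id;
END;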

How to inline a variable in PL/SQL?

The Situation
I have some trouble with my query execution plan for a medium-sized query over a large amount of data in Oracle 11.2.0.2.0. In order to speed things up, I introduced a range filter that does roughly something like this:
PROCEDURE DO_STUFF(
org_from VARCHAR2 := NULL,
org_to VARCHAR2 := NULL)
-- [...]
JOIN organisations org
ON (cust.org_id = org.id
AND ((org_from IS NULL) OR (org_from <= org.no))
AND ((org_to IS NULL) OR (org_to >= org.no)))
-- [...]
As you can see, I want to restrict the JOIN of organisations using an optional range of organisation numbers. Client code can call DO_STUFF with (supposed to be fast) or without (very slow) the restriction.
The Trouble
The trouble is, PL/SQL will create bind variables for the above org_from and org_to parameters, which is what I would expect in most cases:
-- [...]
JOIN organisations org
ON (cust.org_id = org.id
AND ((:B1 IS NULL) OR (:B1 <= org.no))
AND ((:B2 IS NULL) OR (:B2 >= org.no)))
-- [...]
The Workaround
Only in this case, I measured the query execution plan to be a lot better when I just inline the values, i.e. when the query executed by Oracle is actually something like
-- [...]
JOIN organisations org
ON (cust.org_id = org.id
AND ((10 IS NULL) OR (10 <= org.no))
AND ((20 IS NULL) OR (20 >= org.no)))
-- [...]
By "a lot", I mean 5-10x faster. Note that the query is executed very rarely, i.e. once a month. So I don't need to cache the execution plan.
My questions
How can I inline values in PL/SQL? I know about EXECUTE IMMEDIATE, but I would prefer to have PL/SQL compile my query, and not do string concatenation.
Did I just measure something that happened by coincidence or can I assume that inlining variables is indeed better (in this case)? The reason why I ask is because I think that bind variables force Oracle to devise a general execution plan, whereas inlined values would allow for analysing very specific column and index statistics. So I can imagine that this is not just a coincidence.
Am I missing something? Maybe there is an entirely other way to achieve query execution plan improvement, other than variable inlining (note I have tried quite a few hints as well but I'm not an expert on that field)?
In one of your comments you said:
"Also I checked various bind values.
With bind variables I get some FULL
TABLE SCANS, whereas with hard-coded
values, the plan looks a lot better."
There are two paths. If you pass in NULL for the parameters then you are selecting all records. Under those circumstances a Full Table Scan is the most efficient way of retrieving data. If you pass in values then indexed reads may be more efficient, because you're only selecting a small subset of the information.
When you formulate the query using bind variables the optimizer has to take a decision: should it presume that most of the time you'll pass in values or that you'll pass in nulls? Difficult. So look at it another way: is it more inefficient to do a full table scan when you only need to select a sub-set of records, or to do indexed reads when you need to select all records?
It seems as though the optimizer has plumped for full table scans as being the least inefficient operation to cover all eventualities.
Whereas when you hard-code the values, the optimizer knows immediately that 10 IS NULL evaluates to FALSE, and so it can weigh the merits of using indexed reads to find the desired sub-set of records.
So, what to do? As you say this query is only run once a month, I think it would only require a small change to business processes to have separate queries: one for all organisations and one for a sub-set of organisations.
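A minimal sketch of that split, keeping both queries statically compiled (bodies elided to placeholders):

PROCEDURE do_stuff(
    org_from VARCHAR2 := NULL,
    org_to   VARCHAR2 := NULL)
IS
BEGIN
  IF org_from IS NULL AND org_to IS NULL THEN
    NULL;  -- placeholder: the unrestricted query, no predicates on org.no
  ELSE
    NULL;  -- placeholder: the range-restricted query using org_from/org_to
  END IF;
END do_stuff;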
"Btw, removing the :R1 IS NULL clause
doesn't change the execution plan
much, which leaves me with the other
side of the OR condition, :R1 <=
org.no where NULL wouldn't make sense
anyway, as org.no is NOT NULL"
Okay, so the thing is you have a pair of bind variables which specify a range. Depending on the distribution of values, different ranges might suit different execution plans. That is, this range would (probably) suit an indexed range scan...
WHERE org.id BETWEEN 10 AND 11
...whereas this is likely to be more fitted to a full table scan...
WHERE org.id BETWEEN 10 AND 1199999
That is where Bind Variable Peeking comes into play (depending on the distribution of values, of course).
Since the query plans are actually consistently different, that implies that the optimizer's cardinality estimates are off for some reason. Can you confirm from the query plans that the optimizer expects the conditions to be insufficiently selective when bind variables are used? Since you're using 11.2, Oracle should be using adaptive cursor sharing, so it shouldn't be a bind variable peeking issue (assuming you are calling the version with bind variables many times with different NO values in your testing).
Are the cardinality estimates on the good plan actually correct? I know you said that the statistics on the NO column are accurate but I would be suspicious of a stray histogram that may not be updated by your regular statistics gathering process, for example.
You could always use a hint in the query to force a particular index to be used (though using a stored outline or optimizer plan stability would be preferable from a long-term maintenance perspective). Any of those options would be preferable to resorting to dynamic SQL.
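For illustration, a hedged sketch of the hint route, as it might appear inside DO_STUFF (ORG_NO_IDX is a made-up index name on organisations.no, and customers stands in for the elided table):

SELECT /*+ INDEX(org ORG_NO_IDX) */ cust.*
  FROM customers cust
  JOIN organisations org
    ON (cust.org_id = org.id
        AND ((org_from IS NULL) OR (org_from <= org.no))
        AND ((org_to IS NULL) OR (org_to >= org.no)));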
One additional test to try, however, would be to replace the SQL 99 join syntax with Oracle's old syntax, i.e.
SELECT <<something>>
FROM <<some other table>> cust,
organisations org
WHERE cust.org_id = org.id
AND ( ((org_from IS NULL) OR (org_from <= org.no))
AND ((org_to IS NULL) OR (org_to >= org.no)))
That obviously shouldn't change anything, but there have been parser issues with the SQL 99 syntax so that's something to check.
It smells like Bind Peeking, but I am only on Oracle 10, so I can't claim the same issue exists in 11.
This looks a lot like a need for Adaptive Cursor Sharing, combined with SQL plan stability.
I think what is happening is that the optimizer_capture_sql_plan_baselines parameter is true, and the same for optimizer_use_sql_plan_baselines. If both are true, the following is happening:
The first time a query is parsed, it gets a new plan.
The second time, this plan is stored in the sql_plan_baselines as an accepted plan.
All following runs of this query use this plan, regardless of what the bind variables are.
If Adaptive Cursor Sharing is already active, the optimizer will generate a new/better plan and store it in the sql_plan_baselines, but it is not able to use it until someone accepts this newer plan as an acceptable alternative. Check dba_sql_plan_baselines and see if your query has entries with accepted = 'NO' and verified = null.
You can use dbms_spm.evolve_sql_plan_baseline to evolve the new plan and have it automatically accepted if the performance of the plan is at least 1.5 times better than without the new plan.
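A hedged sketch of that check and evolution (the sql_handle literal is a placeholder, and the dbms_spm calls require appropriate privileges):

SELECT sql_handle, plan_name, accepted
  FROM dba_sql_plan_baselines
 WHERE accepted = 'NO';

DECLARE
  report CLOB;
BEGIN
  -- evolves the unaccepted plan; accepts it if it verifies as faster
  report := dbms_spm.evolve_sql_plan_baseline(
              sql_handle => 'SQL_0123456789abcdef');
  dbms_output.put_line(report);
END;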
I hope this helps.
I added this as a comment, but will offer it up here as well. Hope this isn't overly simplistic, and looking at the detailed responses I may be misunderstanding the exact problem, but anyway...
It seems your organisations table has a column no (org.no) that is defined as a NUMBER. In your hardcoded example, you use numbers to do the compares.
JOIN organisations org
ON (cust.org_id = org.id
AND ((10 IS NULL) OR (10 <= org.no))
AND ((20 IS NULL) OR (20 >= org.no)))
In your procedure, you are passing in varchar2:
PROCEDURE DO_STUFF(
org_from VARCHAR2 := NULL,
org_to VARCHAR2 := NULL)
So to compare VARCHAR2 to NUMBER, Oracle will have to do implicit conversions, and this may cause the full scans.
Solution: change the procedure to pass in numbers.
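A one-line sketch of that change (matching org.no's assumed NUMBER type, so no implicit conversion happens in the join predicates):

PROCEDURE DO_STUFF(
    org_from NUMBER := NULL,  -- was VARCHAR2
    org_to   NUMBER := NULL)  -- was VARCHAR2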

Oracle empty strings

How do you guys treat empty strings with Oracle?
Statement #1: Oracle treats an empty string (e.g. '') as NULL in VARCHAR2 fields.
Statement #2: We have a model that defines an abstract 'table structure', where we have fields that can't be NULL but can be "empty". This model works with various DBMSs; almost everywhere, all is just fine, but not with Oracle. You just can't insert an empty string into a "not null" field.
Statement #3: non-empty default value is not allowed in our case.
So, would someone be so kind to tell me - how can we resolve it?
This is why I've never understood why Oracle is so popular. They don't actually follow the SQL standard, based on a silly decision they made many years ago.
The Oracle 9i SQL Reference states (this has been there for at least three major versions):
Oracle currently treats a character value with a length of zero as null. However, this may not continue to be true in future releases, and Oracle recommends that you do not treat empty strings the same as nulls.
But they don't say what you should do. The only ways I've ever found to get around this problem are either:
have a sentinel value that cannot occur in your real data to represent NULL (e.g., "deoxyribonucleic" for a surname field, and hope that the movie stars don't start giving their kids weird surnames as well as weird first names :-).
have a separate field to indicate whether the first field is valid or not, basically what a real database does with NULLs (a sketch of this option follows).
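A hedged sketch of the second option (table and column names are made up):

CREATE TABLE person (
  id          NUMBER PRIMARY KEY,
  surname     VARCHAR2(100),               -- stored as NULL when "empty"
  surname_set CHAR(1) DEFAULT 'Y' NOT NULL
              CHECK (surname_set IN ('Y', 'N'))  -- 'N' marks empty-but-valid
);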
Are we allowed to say "Don't support Oracle until it supports the standard SQL behaviour"? It seems the least pain-laden way in many respects.
If you can't force (use) a single blank, or maybe a Unicode Zero Width Non-Break Space (U+FEFF), then you probably have to go the whole hog and use something implausible such as 32 Z's to indicate that the data should be blank but isn't because the DBMS in use is Orrible.
Empty string and NULL in Oracle are the same thing. You want to allow empty strings but disallow NULLs.
You have put a NOT NULL constraint on your table, which is the same as a not-an-empty-string constraint. If you remove that constraint, what are you losing?
