I have a table like this one in Oracle:
create table suppliers(name varchar2(100));
With its corresponding index on upper(name):
create index supplier_name_upper_idx on suppliers(upper(name));
I would like to populate an autocomplete through AJAX, getting the information from a Servlet running JDBC queries.
This works:
PreparedStatement ps = conn.prepareStatement(
    "select * from suppliers where upper(name) like ?"
);
ps.setString(1, "SOMETHING%"); // pattern must be uppercase to match upper(name)
The problem is that, as far as I know, the PreparedStatement won't use the index, because at statement compile time it can't know whether the parameter is going to be 'something%' (which could take advantage of the index) or '%something%' (which couldn't).
So, my question is:
Should I use a Statement instead? If so, what's the best way to escape the input parameter (since it will come from an AJAX request)?
Is there something I can use to make the PreparedStatement use the index?
I think a prepared statement is better because you will have fewer parses on the server side, and as a result fewer 'latch: library cache' waits.
SELECT /*+ INDEX(suppliers supplier_name_upper_idx) */ * from suppliers .... should make the statement use the index "supplier_name_upper_idx" (note the hint syntax: the index name goes inside the parentheses, after the table name).
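For example, a sketch that combines the hint with a bind variable and an ESCAPE clause (the index name comes from the question; the choice of backslash as the escape character is an assumption):

select /*+ INDEX(suppliers supplier_name_upper_idx) */ *
from suppliers
where upper(name) like ? escape '\'

In the servlet, before binding, prefix any literal %, _ or \ in the user's input with a backslash, uppercase it, and append the trailing % yourself, so the only wildcard the database sees is the one you added.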
I'm running Apex 19.2 and I would like to create a classic or interactive report based on a dynamic query.
The query I'm using is not known at design time. It depends on a page item value.
So I have a function that generates the SQL, called like this:
GetSQLQuery(:P1_MyItem);
This function may return something like
select Field1 from Table1
or
Select field1,field2 from Table1 inner join Table2 on ...
So it's not a SQL query that always has the same number of columns. It's completely variable.
I tried using a PL/SQL Function Body returning SQL Query, but it seems like Apex needs to parse the query at design time.
Does anyone have an idea how to solve this?
Thanks.
Enable the Use Generic Column Names option, as Koen said.
Then set Generic Column Count to the upper bound of the number of columns the query might return.
If you need dynamic column headers too, go to the region attributes and set Type (under Heading) to the appropriate value. PL/SQL Function Body is the most flexible and powerful option, but it's also the most work. Just make sure you return the correct number of headings as per the query.
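For example, a minimal sketch of such a heading function body, assuming (as in the question) that :P1_MyItem determines which query GetSQLQuery returns; the item value and headings here are illustrative:

begin
  -- must return a colon-separated list with one heading
  -- per column of the query generated by GetSQLQuery
  if :P1_MyItem = 'SIMPLE' then
    return 'Field1';
  else
    return 'Field1:Field2';
  end if;
end;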
Is there any possibility of avoiding full scans when dealing with incoming NULL query parameters in a stored procedure? Suppose I have 4 parameters that the user sends from a form, looking for an exact match in the table, like this:
SELECT *
FROM table1 t1
WHERE ((:qParam1 is null) OR (t1.col1 = :qParam1)) AND
((:qParam2 is null) OR (t1.col2 = :qParam2)) AND
((:qParam3 is null) OR (t1.col3 = :qParam3)) AND
((:qParam4 is null) OR (t1.col4 = :qParam4));
So when this part of the procedure executes, because of the NULL checks it will do a full table scan (FTS), since the procedure has already been compiled and the execution plan determined. It would need 2^4 different queries written inside the procedure to always use the most efficient plan for the incoming query parameters (and considerably more as the number of input parameters increases). My question is: is there any way, except for dynamic SQL, to avoid the FTS in this type of query?
Maybe not. Oracle does not store entirely-NULL keys in a B-tree index, so it cannot use the index when the predicate might have to deal with NULLs. If your columns are nullable, then no. Having said that, there's a good chance the full scan is the best plan anyway: your query is so vague (ok, flexible) that the optimizer would be hard pressed to build a single, useful plan for it. Oracle is quite smart with flexible plans, but there's not really much to go on here.
If you do have nullable columns and an index, you might be able to bodge it with
t1.col1 = :qParam1 and t1.col1 is not null
in case the optimizer is not smart enough to work that out for itself.
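Applied to the query from the question, that might look like the sketch below. Whether it actually changes the plan depends on your data and indexes, so treat it as something to test rather than a guarantee:

select *
from table1 t1
where ((:qParam1 is null) or (t1.col1 = :qParam1 and t1.col1 is not null)) and
      ((:qParam2 is null) or (t1.col2 = :qParam2 and t1.col2 is not null)) and
      ((:qParam3 is null) or (t1.col3 = :qParam3 and t1.col3 is not null)) and
      ((:qParam4 is null) or (t1.col4 = :qParam4 and t1.col4 is not null));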
Well, this problem is general in SQL Server CE.
I have indexes on all the fields.
Also, the same query but with ID IN (list of int ids) is pretty fast.
I tried to change the query to an OUTER JOIN, but that just made it worse.
So, any hints on why this happens and how to fix this problem?
That's because the index is not really helpful for that kind of query, so the database has to do a full table scan. If the query is (for some reason) slower than a simple "SELECT * FROM TABLE", do that instead and filter the unwanted IDs in the program.
EDIT: from your comment, I see that you use a subquery instead of a list. Because of that, there are three possible ways to do the same thing (hopefully one of them is faster):
Original statement:
select * from mytable where id not in (select id from othertable);
Alternative 1:
select * from mytable where not exists
(select 1 from othertable where mytable.id=othertable.id);
Alternative 2:
select * from mytable
minus
select mytable.* from mytable inner join othertable on mytable.id=othertable.id;
Alternative 3: (ugly and hard to understand, but if everything else fails...)
select * from mytable
left outer join othertable on (mytable.id=othertable.id)
where othertable.id is null;
This is not a problem specific to SQL Server CE, but to databases in general.
The IN operation is sargable and NOT IN is non-sargable.
What does this mean?
Sargable stands for Search ARGument ABLE: the DBMS engine can take advantage of an index for such a predicate. For a non-sargable predicate, the index can't be used.
The solution might be to filter those IDs out in the application instead.
More in SQL Performance Tuning by Peter Gulutzan.
ammoQ is right: an index does not help much with your query. Depending on the distribution of values in your ID column, you could optimise the query by specifying which IDs to select rather than which ones not to select. If you end up requesting more than roughly 25% of the table, the index will not be used anyway, because for non-clustered indexes (the only type SQL CE supports, if memory serves) it would be cheaper to scan the table. Otherwise (if the query is actually selective) you could rewrite the query with ID ranges to select ('union all' may work better than 'or' to combine ranges, if SQL CE supports 'union all'; not sure).
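For example, if the IDs you want to keep fall into known contiguous ranges, the rewrite could look like this (the ranges below are made up for illustration):

select * from mytable where id between 1 and 99
union all
select * from mytable where id between 200 and 349;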
To optimize SELECT queries, I run them both with and without an index and measure the difference. I run a bunch of different similar queries and try to select different data to make sure that caching doesn't throw off the results. However, on very large tables, indexes take a really long time to create, and I have several different ideas about what indexes would be appropriate.
Is it possible in Oracle (or any other database for that matter) to perform a query but tell the database to not use a certain index when performing the query? Or just turn off the index entirely, but be able to easily switch it back on without having to re-index the entire table? This would make it much easier to test, since I can create all the indexes I'm thinking about all at once, then try my queries using different ones.
Alternatively, is there any better way to go about optimizing queries on large tables and know which indexes would be best to create?
You can set index visibility in 11g:
ALTER INDEX idx1 [ INVISIBLE | VISIBLE ]
This makes the index unusable by the optimizer, but Oracle still maintains it when data is added or removed. That makes it easy to test performance with the index disabled, without having to drop and rebuild the whole index.
See the Oracle docs on index visibility for details.
You can use the NO_INDEX hint in the queries to ignore the indexes - see docs for further details. The SQL Access Advisor is an Oracle utility that will recommend indexing strategies.
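For example, a sketch of the NO_INDEX hint (the table, alias and index names here are illustrative):

select /*+ NO_INDEX(t idx1) */ *
from mytable t
where t.col1 = 'result';

Comparing the plan and timing of this against the unhinted query shows you what the index is actually buying you.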
Well, you can write the query in such a way that it won't use the index (by applying an expression to the column instead of comparing it to a value directly).
For example
Select * from foobar where column1 = 'result' --uses index on column1
To avoid using the index on a number or a varchar column:
Select * from foobar where column1 + 0 = 5 -- simple expression to disable the index
Select * from foobar where column1 || '' = 'result' --simple expression to disable the index
Or you can just use NVL to disable the index in the query without worrying about the column's data type
Select * from foobar where nvl(column1,column1) = 'result' --i love this way :D
Similarly, you can use index hints
like /*+ INDEX(E employee_id) */ to use indexes (note the + sign; the hint takes the table alias followed by the index name).
P.S. This is all paraphrased from Dan Tow's book SQL Tuning. I started reading it a few days back :)
Given a table name and column name in a pair of variables, can I perform a select query without using dynamic sql?
for example, I'd like something nicer than this:
CREATE PROCEDURE spTest (@table NVARCHAR(30), @column NVARCHAR(30)) AS
DECLARE @sql NVARCHAR(2000)
SELECT @sql = N'SELECT ' + @column + N' FROM ' + @table
PRINT @sql
EXEC sp_executesql @sql
I'd like to do this because my dynamic sql version is 3x slower than the non-dynamic version (which doesn't support a programmable table/column name, hence this question).
The dynamic version is going to be 3x slower solely because you're swapping the table name in and out, and the parser can't optimize for that.
You might want to use a nested switch statement of sorts in your routine and use that to select your SELECT statement from pre-defined tables/columns, especially if your schema won't be changed frequently. That should run lots faster, but you'll lose the true dynamic-ness.
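A minimal sketch of that idea; the table and column names are illustrative, and you would add one branch per table/column pair you want to support:

CREATE PROCEDURE spTestStatic (@table NVARCHAR(30))
AS
BEGIN
    -- each branch is plain, compiled SQL that the optimizer can plan for
    IF @table = N'Suppliers'
        SELECT name FROM Suppliers;
    ELSE IF @table = N'Customers'
        SELECT name FROM Customers;
END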
So it sounds like the question is whether you can build and run a query at runtime without using dynamic SQL?
I'd say the answer is no. The choices are:
dynamic SQL - calling sp_executesql or Exec() with a user-built string, whether that's pass-through SQL using an ADO library and a Command object, or a string built right within a sproc.
compiled SQL - using the usual objects (sprocs and UDFs) with known good statements interacting with known good objects. Parsing is done beforehand in this case.
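If you do stay with the dynamic version, one safeguard worth considering is QUOTENAME, which brackets the identifiers so user input can't inject extra SQL. A sketch of the question's procedure hardened that way (the procedure name spTestSafe is made up for illustration):

CREATE PROCEDURE spTestSafe (@table sysname, @column sysname)
AS
BEGIN
    DECLARE @sql NVARCHAR(2000);
    -- QUOTENAME wraps each identifier in [brackets], neutralizing injection
    SET @sql = N'SELECT ' + QUOTENAME(@column) + N' FROM ' + QUOTENAME(@table);
    EXEC sp_executesql @sql;
END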