Main Query:
PreparedStatement pstmt = con.prepareStatement("DELETE FROM employee WHERE eno = 1");
Working: with string concatenation
PreparedStatement pstmt = con.prepareStatement("DELETE FROM "+tn+" WHERE "+cn+" = ?");
Not working: with positional parameters
PreparedStatement pstmt = con.prepareStatement("DELETE FROM ? WHERE ? = ?");
Can we use positional parameters only for table data?
Why am I not able to use them for table_name, column_name, etc.?
The point of a prepared statement is to let the database prepare an execution plan (i.e. compute what needs to be done, using which tables, indices, joining strategies, algorithms, etc.), and then to execute that plan one or several times with various parameters.
If the database doesn't even know which table it must delete from, based on which criteria, there's no way it can prepare the execution plan.
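Since identifiers cannot be bound, a common workaround is to validate the table and column names against a whitelist before concatenating them, so only the value stays a parameter. A minimal sketch, assuming Java 9+ (the whitelist contents and the method name deleteByColumn are illustrative, not from the question):
// Hypothetical whitelist of identifiers this code is allowed to touch.
static final Set<String> ALLOWED_TABLES = Set.of("employee", "department");
static final Set<String> ALLOWED_COLUMNS = Set.of("eno", "dno");

static void deleteByColumn(Connection con, String tn, String cn, int value) throws SQLException {
    if (!ALLOWED_TABLES.contains(tn) || !ALLOWED_COLUMNS.contains(cn)) {
        throw new IllegalArgumentException("unknown table or column");
    }
    // Identifiers are concatenated only after validation; the value is still bound.
    PreparedStatement pstmt = con.prepareStatement("DELETE FROM " + tn + " WHERE " + cn + " = ?");
    pstmt.setInt(1, value);
    pstmt.executeUpdate();
}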
Related
I have the function below
CREATE OR REPLACE FUNCTION BUTCE_REPORT_Fun (birim_id IN VARCHAR2)
RETURN sys_refcursor
IS
  retval sys_refcursor;
BEGIN
  OPEN retval FOR
    select *
    from ifsapp.butce_gerceklesme
    WHERE budget_year = '2018'
    AND USER_GROUP = birim_id;
  RETURN retval;
END BUTCE_REPORT_Fun;
and I am trying to execute the function this way:
SELECT * from table(IFSAPP.BUTCE_REPORT_FUN('3008'))
The line above generates this exception:
ORA-22905: cannot access rows from a non-nested table item
Keep in mind that ifsapp.butce_gerceklesme is a view (which I do not think matters).
So how can I solve this? Any help is appreciated.
Actually, I am trying to create a function that returns rows from the view above according to the parameters provided, so if I can achieve that in another way, that would be better.
Ref Cursors are for use in program calls: they map to JDBC or ODBC ResultSet classes. They can't be used as an input to a table() call. Besides, there is no value in calling your function in SQL because you can simply execute the embedded query in SQL.
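For completeness, this is roughly how such a ref cursor function is consumed from JDBC, which is where it belongs; a sketch assuming the Oracle driver's oracle.jdbc.OracleTypes class is available:
// Register the function's ref cursor return value as an out parameter.
CallableStatement cs = conn.prepareCall("{ ? = call IFSAPP.BUTCE_REPORT_FUN(?) }");
cs.registerOutParameter(1, OracleTypes.CURSOR);
cs.setString(2, "3008");
cs.execute();

// The ref cursor maps straight onto a ResultSet.
ResultSet rs = (ResultSet) cs.getObject(1);
while (rs.next()) {
    // process each row of ifsapp.butce_gerceklesme here
}
rs.close();
cs.close();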
The main table is huge, and the inner query that supplies USER_GROUP gets executed every time.
So maybe what you want is subquery factoring AKA the WITH clause?
with ug as (
  select con2.CODE_PART_VALUE
  from IFSAPP.ACCOUNTING_ATTRIBUTE_CON2 con2
  where COMPANY = 'XYZ'
  and ATTRIBUTE = 'ABC'
  and CODE_PART = 'J'
  and con2.ATTRIBUTE_VALUE = 407
  and rownum = 1
)
select *
from ifsapp.butce_gerceklesme t
join ug on t.USER_GROUP = ug.CODE_PART_VALUE
where t.budget_year = '2018'
Tuning queries on StackOverflow is a mug's game, because there are so many things which might be responsible for sub-optimal performance. But as a rule of thumb you should try to tune the whole query. Encapsulating a part of it in PL/SQL is unlikely to improve response times, and indeed may degrade them.
I have the following query where the first argument itself is a subquery
The Java code is:
String query = "select * from (?) where ROWNUM < ?";
PreparedStatement statement = conn.prepareStatement(query);
statement.setString(1, "select * from foo_table");
statement.setInt(2, 3);
When I run the Java code, I get an exception. What alternatives do I have for turning the first subquery, statement.setString(1, "select * from foo_table"), into a parameter?
This is not possible, parameter placeholders can only represent values, not object names (like table names, column names, etc) nor subselects or other query elements.
You will need to dynamically create the query to execute using string concatenation, or other string formatting/templating options.
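For example, a minimal sketch of the concatenation route, keeping the row limit as a real bind parameter; the subquery string must come from trusted code, never from user input:
// The subquery is spliced in by concatenation; only the value keeps a placeholder.
String subquery = "select * from foo_table"; // must be trusted, not user input
PreparedStatement statement = conn.prepareStatement(
    "select * from (" + subquery + ") where ROWNUM < ?");
statement.setInt(1, 3);
ResultSet rs = statement.executeQuery();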
I have a scenario where I need to fetch rows matching wildcarded names, and there can be more than 75k wildcarded names.
Case 1:
I have tried an IN clause with LIKE, but it doesn't allow more than 2500 parameterized names in the IN clause using Spring JDBC. So I used parallel async requests, with each request containing 2500 wildcarded names:
SELECT NAME FROM DB.TABLE_A
WHERE NAME LIKE ANY (:wildcarded_names)
Map<String, Object> paramMap = new HashMap<String, Object>();
paramMap.put("wildcarded_names", wildcarded_names);
SqlRowSet rowSet = getNamedParameterJdbcTemplate().queryForRowSet(query, paramMap);
Set<String> names = new HashSet<String>();
while (rowSet.next()) {
    names.add(rowSet.getString("NAME"));
}
Case 2:
Creating a volatile table in Teradata, inserting all the wildcarded names, and using an inner join with a LIKE condition:
SELECT NAME
FROM DB.TABLE tbl
INNER JOIN volatile_table vtbl ON tbl.NAME LIKE vtbl.NAME -- vtbl holds the wildcard patterns
Which one is more efficient and gives better performance?
If you can have up to 75K values, a volatile table will likely be your better option. The optimizer included an enhancement around TD 13.10 to improve IN list performance, but there are still limits. With the volatile table you should include statistics on the PI and join column, if it differs from the PI.
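A rough sketch of the volatile-table route with Spring's JdbcTemplate; the table name vt_patterns, the list wildcardedNames, and the getJdbcTemplate() accessor are assumptions for illustration:
JdbcTemplate jdbc = getJdbcTemplate(); // assumed accessor, analogous to the Spring JDBC snippet above

// A volatile table lives only for this session; PRESERVE ROWS keeps data across commits.
jdbc.execute("CREATE VOLATILE TABLE vt_patterns (NAME VARCHAR(200)) ON COMMIT PRESERVE ROWS");

// Load all wildcard patterns in one batch instead of splitting into IN lists.
List<Object[]> rows = new ArrayList<>();
for (String pattern : wildcardedNames) {
    rows.add(new Object[] { pattern });
}
jdbc.batchUpdate("INSERT INTO vt_patterns (NAME) VALUES (?)", rows);

// One join replaces thousands of parallel parameterized requests.
List<String> names = jdbc.queryForList(
    "SELECT tbl.NAME FROM DB.TABLE_A tbl JOIN vt_patterns vtbl ON tbl.NAME LIKE vtbl.NAME",
    String.class);
Note that a volatile table exists only in the session that created it, so with a connection pool all three statements must run on the same connection.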
I have a Java app with an Oracle database backend that I need to insert multiple rows into. I've seen the discussion about inserting multiple rows into Oracle, but I'm also interested in how the performance is affected when JDBC is thrown into the mix.
I see a few possibilities:
Option 1:
Use a single-row insert PreparedStatement and execute it multiple times:
String insert = "Insert into foo(bar, baz) values (?, ?)";
PreparedStatement stmt = conn.prepareStatement(insert);
for (MyObject obj : someList) {
    stmt.setString(1, obj.getBar());
    stmt.setString(2, obj.getBaz());
    stmt.execute();
}
Option 2:
Build an Oracle INSERT ALL statement:
String insert = "INSERT ALL " +
"INTO foo(bar, baz), (?, ?) " +
"INTO foo(bar, baz), (?, ?) " +
"SELECT * FROM DUAL";
PreparedStatement stmt = conn.prepareStatement(insert);
int i = 1;
for (MyObject obj : someList) {
    stmt.setString(i++, obj.getBar());
    stmt.setString(i++, obj.getBaz());
}
stmt.execute();
Option 3:
Use the addBatch functionality of PreparedStatement:
String insert = "Insert into foo(bar, baz) values (?, ?)";
PreparedStatement stmt = conn.prepareStatement(insert);
for (MyObject obj : someList) {
    stmt.setString(1, obj.getBar());
    stmt.setString(2, obj.getBaz());
    stmt.addBatch();
}
stmt.executeBatch();
I guess another possibility would be to create a CSV file and use the SQL Loader, but I'm not sure that would really be faster if you add in the overhead of creating the CSV file...
So which option would perform the fastest?
Use the addBatch() functionality of PreparedStatement for anything below 1,000,000 rows.
Each additional component you add to your code increases the dependencies and points of failure.
If you go down that route (external tables, SQL*Loader, etc.) make sure it is really worth it.
Serializing data to a CSV file and moving it into a location readable by the database will easily take a second or so.
During that time, I could have inserted 20,000 rows if I just sucked it up and started inserting with JDBC.
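Building on the addBatch() recommendation above, a minimal sketch that flushes the batch in chunks so a very large list does not accumulate unbounded; the chunk size of 1,000 is an illustrative assumption:
String insert = "Insert into foo(bar, baz) values (?, ?)";
PreparedStatement stmt = conn.prepareStatement(insert);
int count = 0;
for (MyObject obj : someList) {
    stmt.setString(1, obj.getBar());
    stmt.setString(2, obj.getBaz());
    stmt.addBatch();
    if (++count % 1000 == 0) {
        stmt.executeBatch(); // flush a chunk to the database
    }
}
stmt.executeBatch(); // flush the remainder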
SQL*Loader seems to be the better way even without direct-path loading, but it's hard to maintain.
Batch inserts are 2-4 times faster than single insert statements.
INSERT ALL performs just like a batch insert, and both would be faster than a PL/SQL implementation.
Also you may want to read this AskTom topic.
Using batching can be transparent to the programmer. Here is a quote from the Oracle JDBC documentation:
Setting the Connection Batch Value
You can specify a default batch value for any Oracle prepared statement in your Oracle connection. To do this, use the setDefaultExecuteBatch() method of the OracleConnection object. For example, the following code sets the default batch value to 20 for all prepared statement objects associated with the conn connection object:
((OracleConnection)conn).setDefaultExecuteBatch(20);
Even though this sets the default batch value for all the prepared statements of the connection, you can override it by calling setExecuteBatch() on individual Oracle prepared statements.
The connection batch value will apply to statement objects created after this batch value was set.
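A short sketch of that statement-level override, using the legacy Oracle-specific batching API; setExecuteBatch() and sendBatch() are OraclePreparedStatement extensions that are deprecated in recent drivers, so treat this as illustrative only:
OraclePreparedStatement ops =
    (OraclePreparedStatement) conn.prepareStatement("insert into foo(bar) values (?)");
ops.setExecuteBatch(50); // override the connection-level default for this statement
for (String bar : bars) { // bars is a hypothetical list of values
    ops.setString(1, bar);
    ops.executeUpdate(); // queued; actually sent every 50th call
}
ops.sendBatch(); // send any remaining queued rows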
I am trying to refer to a column name to order a query in an application communicating with an Oracle database. I want to use a bind variable so that I can dynamically change what to order the query by.
The problem that I am having is that the database seems to be ignoring the order by column.
Does anyone know if there is a particular way to refer to a database column via a bind variable or if it is even possible?
e.g. my query is:
SELECT * FROM PERSON ORDER BY :1
(where :1 will be bound to PERSON.NAME)
The query is not returning results in alphabetical order. I am worried that the database is interpreting this as:
SELECT * FROM PERSON ORDER BY 'PERSON.NAME'
which will obviously not work.
Any suggestions are much appreciated.
No. You cannot use bind variables for table or column names.
This information is needed to create the execution plan. Without knowing what you want to order by, it would be impossible to figure out what index to use, for example.
Instead of bind variables, you have to directly interpolate the column name into the SQL statement when your program creates it. Assuming that you take precautions against SQL injection, there is no downside to that.
Update: If you really wanted to jump through hoops, you could probably do something like
order by decode(?, 'colA', colA, 'colB', colB)
but that is just silly. And slow. Don't.
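As for the precautions against SQL injection mentioned above, a common approach is to map user input onto a fixed set of known column names, so the input itself never reaches the SQL. A minimal sketch, assuming Java 9+ (the map contents and userInput are illustrative):
// User-facing sort keys mapped to real column names; unknown input falls back safely.
Map<String, String> sortColumns = Map.of("name", "NAME", "id", "ID");
String orderBy = sortColumns.getOrDefault(userInput, "NAME");
String sql = "SELECT * FROM PERSON ORDER BY " + orderBy;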
As you are using JDBC, you can rewrite your code to something without bind variables. This way you can also change the ORDER BY dynamically, e.g.:
String query = "SELECT * FROM PERS ";
if (condition1){
query = query+ " order by name ";
// insert more if/else or case statements
} else {
query = query+ " order by other_column ";
}
Statement select = conn.createStatement();
ResultSet result = select.executeQuery(query);
Or even:
String columnName = getColumnName(input);
Statement select = conn.createStatement();
ResultSet result = select.executeQuery("SELECT * FROM PERS ORDER BY "+columnName);
ResultSet result = select.executeQuery(
    "SELECT * FROM PERS ORDER BY " + columnName
);
will always be a new statement to the database.
That means it is, like Thilo already explained, impossible to "reorder" an already bound, calculated, prepared, parsed statement. If you use this result set over and over in your application and the only thing that changes over time is the order of presentation, try to order the set in your client code, as sketched below.
Otherwise, dynamic SQL is fine, but comes with a huge footprint.
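If client-side ordering fits your case, a sketch of re-sorting an already fetched result with a Comparator; the Person bean and the fetchPeople() loader are illustrative assumptions:
// Fetch once without ORDER BY, then re-sort in memory as the presentation changes.
List<Person> people = fetchPeople(); // hypothetical loader that reads the ResultSet into beans
people.sort(Comparator.comparing(Person::getName));
// Later, reorder the same list by another column without another database round trip:
people.sort(Comparator.comparing(Person::getId));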