Oracle and JDBC performance: INSERT ALL vs preparedStatement.addBatch

I have a Java app with an Oracle database backend that I need to insert multiple rows into. I've seen the discussion about inserting multiple rows into Oracle, but I'm also interested in how the performance is affected when JDBC is thrown into the mix.
I see a few possibilities:
Option 1:
Use a single-row insert PreparedStatement and execute it multiple times:
String insert = "Insert into foo(bar, baz) values (?, ?)";
PreparedStatement stmt = conn.prepareStatement(insert);
for (MyObject obj : someList) {
    stmt.setString(1, obj.getBar());
    stmt.setString(2, obj.getBaz());
    stmt.execute();
}
Option 2:
Build an Oracle INSERT ALL statement:
String insert = "INSERT ALL " +
    "INTO foo(bar, baz) VALUES (?, ?) " +
    "INTO foo(bar, baz) VALUES (?, ?) " +
    "SELECT * FROM DUAL";
PreparedStatement stmt = conn.prepareStatement(insert);
int i = 1;
for (MyObject obj : someList) {
    stmt.setString(i++, obj.getBar());
    stmt.setString(i++, obj.getBaz());
}
stmt.execute();
Option 3:
Use the addBatch functionality of PreparedStatement:
String insert = "Insert into foo(bar, baz) values (?, ?)";
PreparedStatement stmt = conn.prepareStatement(insert);
for (MyObject obj : someList) {
    stmt.setString(1, obj.getBar());
    stmt.setString(2, obj.getBaz());
    stmt.addBatch();
}
stmt.executeBatch();
I guess another possibility would be to create a CSV file and use SQL*Loader, but I'm not sure that would really be faster once you add in the overhead of creating the CSV file...
So which option would perform the fastest?

Use the addBatch() functionality of PreparedStatement for anything below 1,000,000 rows.
Each additional component you add to your code increases the dependencies and points of failure.
If you go down that route (external tables, SQL*Loader, etc.), make sure it is really worth it.
Serializing data to a CSV file and moving it into a location readable by the database will easily take a second or so.
During that time, I could have inserted 20,000 rows if I had just sucked it up and started inserting with JDBC.
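Option 3 with periodic flushes can be sketched end to end. This is a minimal sketch, not the poster's actual code: `MyObject`, the table `foo`, and the batch size of 500 are assumptions; the key point is calling executeBatch() periodically so the driver doesn't buffer an unbounded number of rows:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchInsert {
    static final int BATCH_SIZE = 500; // assumption: tune per workload and driver

    // hypothetical value object matching the question's getBar()/getBaz()
    record MyObject(String bar, String baz) {}

    // number of executeBatch() round trips for a given row count
    static int roundTrips(int rows, int batchSize) {
        return (rows + batchSize - 1) / batchSize;
    }

    static void insertAll(Connection conn, List<MyObject> someList) throws SQLException {
        String insert = "INSERT INTO foo(bar, baz) VALUES (?, ?)";
        try (PreparedStatement stmt = conn.prepareStatement(insert)) {
            int pending = 0;
            for (MyObject obj : someList) {
                stmt.setString(1, obj.bar());
                stmt.setString(2, obj.baz());
                stmt.addBatch();
                if (++pending == BATCH_SIZE) { // flush a full chunk
                    stmt.executeBatch();
                    pending = 0;
                }
            }
            if (pending > 0) {
                stmt.executeBatch(); // flush the remainder
            }
        }
    }
}
```

With BATCH_SIZE = 500, inserting 20,000 rows costs 40 round trips instead of 20,000 single executes.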

SQL*Loader seems to be the better way even without direct-path loading, but it's hard to maintain.
Batch inserts are 2-4 times faster than single insert statements.
INSERT ALL performs about the same as a batch insert, and both would be faster than a PL/SQL implementation.
Also you may want to read this AskTom topic.

Using batching can be transparent to the programmer. Here is a quote from the Oracle JDBC documentation:
Setting the Connection Batch Value
You can specify a default batch value for any Oracle prepared statement in your Oracle connection. To do this, use the setDefaultExecuteBatch() method of the OracleConnection object. For example, the following code sets the default batch value to 20 for all prepared statement objects associated with the conn connection object:
((OracleConnection)conn).setDefaultExecuteBatch(20);
Even though this sets the default batch value for all the prepared statements of the connection, you can override it by calling setExecuteBatch() on individual Oracle prepared statements.
The connection batch value will apply to statement objects created after this batch value was set.

Related

Parametrizing a sub query with jdbc PreparedStatement

I have the following query where the first argument itself is a subquery
The java code is:
String query = "select * from (?) where ROWNUM < ?";
PreparedStatement statement = conn.prepareStatement(query);
statement.setString(1, "select * from foo_table");
statement.setInt(2, 3);
When I run the java code, I get an exception. What alternatives do I have for making the first subquery statement.setString(1, "select * from foo_table") a parameter?
This is not possible, parameter placeholders can only represent values, not object names (like table names, column names, etc) nor subselects or other query elements.
You will need to dynamically create the query to execute using string concatenation, or other string formatting/templating options.
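A minimal sketch of that concatenation, assuming the subquery text comes from your own code (never from user input); `buildQuery` is a hypothetical helper, and the row limit stays a real bind parameter:

```java
public class SubqueryBuilder {
    // splice a trusted subquery into the outer statement; the ROWNUM
    // limit remains a ? placeholder and is bound normally afterwards
    static String buildQuery(String trustedSubquery) {
        return "SELECT * FROM (" + trustedSubquery + ") WHERE ROWNUM < ?";
    }
}
```

Usage would then look like:

```java
// PreparedStatement statement = conn.prepareStatement(
//         SubqueryBuilder.buildQuery("select * from foo_table"));
// statement.setInt(1, 3);
```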

Performance on IN clause or inner join with volatile table with Teradata

I have a scenario to perform a fetch operation for some wildcarded names; there can be more than 75k of them.
Case 1:
I have tried an IN clause with LIKE, but Spring JDBC doesn't allow more than 2500 parameterized names in the IN clause. So, I used parallel async requests, with every request containing 2500 wildcarded names:
SELECT NAME FROM DB.TABLE_A
WHERE NAME LIKE ANY (:wildcarded_names)
Map<String, Object> paramMap = new HashMap<String, Object>();
paramMap.put("wildcarded_names", wildcarded_names);
SqlRowSet rowSet = getNamedParameterJdbcTemplate().queryForRowSet(query, paramMap);
Set<String> names = new HashSet<String>();
while (rowSet.next()) {
    names.add(rowSet.getString("NAME"));
}
Case 2:
Creating a volatile table in Teradata, inserting all the wildcarded names, and using an inner join with a LIKE clause:
SELECT NAME
FROM DB.TABLE tbl
INNER JOIN volatile_table vtbl ON vtbl.NAME LIKE tbl.NAME
Which one is more efficient and gives better performance?
If you can have up to 75K values, a volatile table will likely be your better option. The optimizer included an enhancement around TD 13.10 to improve IN list performance, but there are still limits. With the volatile table you should include statistics on the PI and join column, if it differs from the PI.

Can we use table objects as positional parameters in jdbc?

Main Query:
PreparedStatement pstmt = con.prepareStatement("DELETE FROM employee WHERE eno = 1");
Working: by concatenation
PreparedStatement pstmt = con.prepareStatement("DELETE FROM "+tn+" WHERE "+cn+" = ?");
Not Working: when used positional parameter
PreparedStatement pstmt = con.prepareStatement("DELETE FROM ? WHERE ? = ?");
Can we use positional parameters only for table data?
Why am I not able to use them for table_name, column_name, etc.?
The point of a prepared statement is to let the database prepare an execution plan (i.e. compute what needs to be done, using which tables, indices, joining strategies, algorithms, etc.), and then to execute that plan one or several times with various parameters.
If the database doesn't even know which table it must delete from, based on which criteria, there's no way it can prepare the execution plan.
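Since identifiers can't be bound, the usual workaround is to validate them before concatenating. A minimal sketch, assuming the set of deletable tables is known in advance (the names here are hypothetical):

```java
import java.util.Set;

public class SafeDelete {
    // assumption: the tables this code may delete from are fixed
    static final Set<String> ALLOWED_TABLES = Set.of("employee", "department");

    static String deleteSql(String table, String column) {
        if (!ALLOWED_TABLES.contains(table)) {
            throw new IllegalArgumentException("table not allowed: " + table);
        }
        // crude identifier check: letters, digits, underscore only
        if (!column.matches("[A-Za-z_][A-Za-z0-9_]*")) {
            throw new IllegalArgumentException("bad column name: " + column);
        }
        return "DELETE FROM " + table + " WHERE " + column + " = ?";
    }
}
```

The value itself (eno = 1) is still bound afterwards with a real positional parameter, so the database can prepare and reuse the plan per table.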

Why a select query is running slower in Oracle than sql server

I am reading data from an Oracle database using ODP.NET with the following code:
OracleConnection con = new OracleConnection(connectionString);
OracleCommand cmd = new OracleCommand("SELECT ID, RECORD FROM tbl_Name", con); // RECORD is an XMLType column
con.Open();
OracleDataReader _dataReader = cmd.ExecuteReader();
while (_dataReader.Read())
{
    string rowId = _dataReader[0].ToString();
    string xmlString = _dataReader[1].ToString();
    // add this data into a queue for further processing
}
It works fine, but it reads only 10,000 records per minute. If I use a SQL Server database with a table of the same schema, it reads 500,000 records per minute.
Please help me if I am missing something to read data faster using ODP.NET.
Thank you.
ANSWER:
I have tried the GetClobVal() and GetStringVal() functions, and now it is working fine.
select t.RECID, t.XMLRECORD.GetClobVal() from tableName t
select x.RECID, x.XMLRECORD.getStringVal() from tableName x
If we use these queries with the Oracle command, it runs fast, but still not as fast as the SQL Server query.

Is it possible to refer to column names via bind variables in Oracle?

I am trying to refer to a column name to order a query in an application communicating with an Oracle database. I want to use a bind variable so that I can dynamically change what to order the query by.
The problem that I am having is that the database seems to be ignoring the order by column.
Does anyone know if there is a particular way to refer to a database column via a bind variable or if it is even possible?
e.g my query is
SELECT * FROM PERSON ORDER BY :1
(where :1 will be bound to PERSON.NAME)
The query is not returning results in alphabetical order. I am worried that the database is interpreting this as:
SELECT * FROM PERSON ORDER BY 'PERSON.NAME'
which will obviously not work.
Any suggestions are much appreciated.
No. You cannot use bind variables for table or column names.
This information is needed to create the execution plan. Without knowing what you want to order by, it would be impossible to figure out what index to use, for example.
Instead of bind variables, you have to directly interpolate the column name into the SQL statement when your program creates it. Assuming that you take precautions against SQL injection, there is no downside to that.
Update: If you really wanted to jump through hoops, you could probably do something like
order by decode(?, 'colA', colA, 'colB', colB)
but that is just silly. And slow. Don't.
As you are using JDBC, you can rewrite your code without bind variables. This way you can also change the order-by dynamically, e.g.:
String query = "SELECT * FROM PERS ";
if (condition1) {
    query = query + " order by name ";
    // insert more if/else or case statements
} else {
    query = query + " order by other_column ";
}
Statement select = conn.createStatement();
ResultSet result = select.executeQuery(query);
Or even:
String columnName = getColumnName(input);
Statement select = conn.createStatement();
ResultSet result = select.executeQuery("SELECT * FROM PERS ORDER BY " + columnName);
Note that
ResultSet result = select.executeQuery(
    "SELECT * FROM PERS ORDER BY " + columnName
);
will always be a new statement to the database.
That means it is, as Thilo already explained, impossible to "reorder" an already bound, calculated, prepared, parsed statement. When you are using this result set over and over in your application, and the only thing that changes over time is the order of presentation, try to order the set in your client code.
Otherwise, dynamic SQL is fine, but comes with a huge footprint.
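Re-ordering in client code, as suggested above, can be sketched like this; the Person bean and its fields are hypothetical stand-ins for rows already fetched from PERS:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ClientSort {
    // hypothetical bean for one fetched PERS row
    record Person(String name, String otherColumn) {}

    // re-order already-fetched rows without another database round trip
    static List<Person> orderBy(List<Person> rows, Comparator<Person> order) {
        List<Person> copy = new ArrayList<>(rows); // leave the original untouched
        copy.sort(order);
        return copy;
    }
}
```

Calling ClientSort.orderBy(rows, Comparator.comparing(Person::name)) switches the presentation order without parsing any new SQL.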