Spring JdbcTemplate execute vs update

What is the difference between execute(String sql) and update(String sql) in JdbcTemplate?
If my statement is straight CRUD and not object-creation DDL (as the execute javadoc implies), does it make sense to use execute vs. the seemingly more lightweight update?

The method execute(String sql) returns void; it simply completes if the call succeeds without errors (see the execute(..) JavaDoc). As with plain JDBC, it should/can be used to define database schema elements (DDL), for instance with CREATE TABLE ... statements.
By contrast, update(String sql) is typically used for DML statements corresponding to SQL INSERT/UPDATE/DELETE operations. In these cases, where data are manipulated, it is important from a programmer's perspective to know how many rows were added/changed/deleted by the respective DML operation.
For this reason, the update(...) method returns a non-negative int value to let you know:
Returns:
the number of rows affected
As the JavaDoc indicates by using the term "typically" in its description, you could, however, also use execute(String sql) to manipulate data if you don't need the affected-rows count. In theory, and for some DBMS implementations, this call could be some nanoseconds quicker, as no return value needs to be transferred.
Still, from my personal and a programmer's perspective, you should use both operations with the DDL vs. DML distinction in mind, since update by its nature signals that a data manipulation operation is being conducted.
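For illustration, a minimal sketch of the two calls side by side (jdbcTemplate is assumed to be an already configured org.springframework.jdbc.core.JdbcTemplate, and the person table is made up for the example):

// DDL: there is no meaningful row count, so execute(..), which returns void, is the natural fit
jdbcTemplate.execute("CREATE TABLE person (id INT PRIMARY KEY, name VARCHAR(50))");

// DML: update(..) returns the number of affected rows, which you usually want to check
int inserted = jdbcTemplate.update("INSERT INTO person (id, name) VALUES (?, ?)", 1, "Alice");
int deleted  = jdbcTemplate.update("DELETE FROM person WHERE id = ?", 1);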
Hope it helps.

Related

Is LINQ or a SQL query better for complex calculations and aggregations?

We must create and show at runtime (ASP.NET MVC) some complex reports from Oracle tables with millions of records. The report data must be obtained from groupings and somewhat complex calculations.
So is it better for performance and maintainability of the code to do these groupings and calculations via a SQL query (PL/SQL) or via LINQ?
Thanks for your kind reply.
So is it better for performance and maintainability of the code to do these groupings and calculations via a SQL query (PL/SQL) or via LINQ?
It depends on what you mean by via linq. If you mean that you fetch the complete table to local memory and then use linq statements to extract the result that you want, then of course SQL statements are faster.
However, if you mean that you use Entity Framework, or something similar, then the answer is not as easy to give.
If you use Entity Framework (or some clone), your tables will be represented by IQueryable<...> instead of IEnumerable<...>. An IQueryable has an Expression and a Provider. The Expression represents the query that must be performed. The Provider knows which system must execute the query (usually a Database Management System) and how to communicate with this system. When the query must be executed, it is the task of the Provider to translate the Expression into the language that the system knows (usually something SQL-like) and to execute the SQL-query.
There are two kinds of IQueryable LINQ statements: those that return an IQueryable<...> of something, and those that return a TResult. The ones that return IQueryable only change the Expression. They are functions that use deferred execution.
Functions that do not return an IQueryable are ToList(), FirstOrDefault(), Any(), Max(), etc. Internally they call functions that call GetEnumerator() (usually via a foreach), which orders the Provider to translate the Expression and execute the query.
Back to your question
So which one is more efficient, entity framework or SQL? Efficiency is not only the time to perform the queries, it is also the development/testing time, for the first version and for future changes in the software.
If you use Entity Framework (or a clone), the SQL queries created from the Expressions are pretty efficient, depending on the framework manufacturer. If you look at the generated SQL, it is sometimes not optimal, although you'd have to be a pretty good SQL programmer to improve most queries.
The big advantage of using Entity Framework and LINQ queries over SQL statements is that development times will be shorter. The syntax of LINQ statements is checked at compile time, SQL statements only at run time, so development and test periods will be shorter.
It is easy to reuse LINQ statements, while SQL statements almost always have to be written specifically for the query you want to execute. LINQ statements can be tested without a database, on any sequence of items that represents your tables.
My Advice
For most queries you won't notice any difference in execution time between the entity framework query or the SQL query.
If you expect complicated queries and future changes, I'd go for Entity Framework, the main arguments being shorter development time, better testing possibilities, and better maintainability.
If you detect some queries where you notice that the execution time is too long, you can always decide to bypass entity framework by executing a SQL query instead of using LINQ.
If you've wrapped your DbContext in a proper repository, where you hide the use cases from their implementations, the users of your repository won't notice the difference.

JOOQ vs JDBC+tests

What advantage does jOOQ have over JDBC plus tests?
With JDBC you can write SQL queries directly in code; with jOOQ we call methods, so jOOQ is by default slower.
With jOOQ it is harder to make mistakes, but not impossible. These mistakes can be caught in tests, and with jOOQ you should also write these tests, so no advantage here for jOOQ.
I completely agree with you. It's always better to have tests, regardless of whether you're using a "dynamic language" (SQL as an external DSL, e.g. JDBC) or a "static language" (SQL as an internal DSL, e.g. jOOQ).
But there's much more than that:
You seem to have scratched only the surface of what jOOQ can do for you. Sure, type safe, embedded SQL is a great feature, but once you have that, you get for free (list is far from exhaustive):
Active records: With JDBC, you're back to spelling out each individual boring INSERT, UPDATE, DELETE statement manually. jOOQ's UpdatableRecord greatly simplifies this, while offering things like:
RecordListener for record lifecycle management
Optimistic locking
Batching of inserts, updates, deletes
Dynamic SQL is very easy. Instead of the JDBC string concatenation mess that you'd be getting otherwise, you can just dynamically add clauses to your SQL statements (see the sketch after this list).
Multi tenancy: You can easily switch schema references and/or table references at runtime in order to run the same query against a different schema.
Standardisation: The same jOOQ query runs on up to 21 RDBMS because the jOOQ API standardises the generated SQL. This can be seen in the jOOQ manual's section about the LIMIT clause, for instance - one of SQL's most poorly standardised clauses
Query Lifecycle: There's a simple SPI called ExecuteListener that allows you to hook into the various JDBC interaction steps, including:
SQL generation
Prepared statement creation
Variable binding
Execution
Result fetching
Exceptions
SQL transformation: The VisitListener SPI allows you to intercept the SQL generation at any arbitrary position in your query expression tree. This can be very useful, e.g. to implement powerful things like row level security.
Stored procedures: These are rather tedious to bind to with JDBC, especially if you're using more advanced features like:
Oracle's TABLE and OBJECT types (imagine implementing SQLData et al.)
Oracle's PL/SQL types
Implicit cursors
Table-valued functions
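As a rough sketch of the dynamic SQL point above (my own illustration, not taken from the jOOQ manual): ctx is assumed to be an org.jooq.DSLContext, PERSON with its NAME and ID columns stands in for a code-generated table reference, and name/minId are optional filter values that may be null:

// Condition and DSL come from org.jooq and org.jooq.impl respectively
Condition condition = DSL.noCondition();   // neutral starting point

if (name != null)
    condition = condition.and(PERSON.NAME.eq(name));
if (minId != null)
    condition = condition.and(PERSON.ID.ge(minId));

// The WHERE clause is assembled dynamically, without any string concatenation
Result<?> result =
    ctx.selectFrom(PERSON)
       .where(condition)
       .fetch();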
Of course, you get the compile-time type safety that you've mentioned (and IDE autocompletion) for free. And if you don't want to go all in on the internal DSL that jOOQ is offering, you can still just use plain SQL and then use jOOQ's API in a non-type safe way (which still has tons of features):
// Just one example: CSV exports
String csvExport =
    ctx.fetch("SELECT * FROM my_table WHERE id = ?", 3)
       .formatCSV();
TL;DR:
JDBC is a wire protocol abstraction API
jOOQ is a SQL API
Disclaimer:
(Of course, this answer is biased as I work for the company behind jOOQ)

Why "recursive sql" is used in "Dictionary-Managed Tablespaces" in oracle?

The Oracle documentation points out that locally managed tablespaces are better than dictionary-managed tablespaces in several respects. One is that recursive SQL is used when the database allocates free blocks in a dictionary-managed tablespace.
Table fet$ has columns (TS#, FILE#, BLOCK#, LENGTH)
Could anyone explain why recursive SQL is used for allocation with fet$?
You seem to be interpreting 'recursive' in the normal programming sense; but it can have slightly different meanings:
1. drawing upon itself, referring back.
   "The recursive nature of stories which borrow from each other"
2. (mathematics, not comparable) of an expression, each term of which is determined by applying a formula to preceding terms
3. (computing, not comparable) of a program or function that calls itself
...
If you interpret it as a recursive function (meaning 3) then it doesn't quite make sense; fet$ isn't updated repeatedly and an SQL statement doesn't re-execute itself. Here 'recursive' is used more generally (meaning 1, sort of), in the sense that the SQL you run generates another layer of SQL 'under the hood'. Not the same SQL or the same function called by itself, but 'SQL drawing upon SQL', or 'SQL referring back to SQL', if you like.
The concepts guide - which is where I think you got your question from - says:
Avoids using the data dictionary to manage extents
Recursive operations can occur in dictionary-managed tablespaces if consuming or releasing space in an extent results in another operation that consumes or releases space in a data dictionary table or undo segment.
With a table in a dictionary managed tablespace (DMT), when you insert data Oracle has to run SQL statements against the dictionary tables to identify and allocate blocks. You don't normally notice that, but you can see it in trace files and other performance views. SQL statements will be run against fet$ etc. to manage the space.
The 'recursive' part is that one SQL statement has to execute another (different) SQL statement; and that may in turn have to execute yet another (different again) SQL statement.
With a locally managed tablespace (LMT), block information is held in a bitmap within the tablespace itself. There is no dependence on the dictionary (for this, anyway). That extra layer of SQL is not needed, which saves time - both from the dictionary query itself and from potential concurrency delays, as multiple queries (across the database, for all tablespaces) access the dictionary at the same time. Managing that local block is much simpler and faster.
The concepts guide also says:
Note: Oracle strongly recommends the use of locally managed tablespaces with Automatic Segment Space Management.
As David says, there's not really any benefit to ever using a dictionary managed tablespace any more, and unless you've inherited an old database that still uses them - in which case migrating to LMT should be considered - or are just learning for the sake of it, you can pretty much forget about them; anything new should be using LMT really, and references to DMTs are hopefully only of historic significance.
I wanted to demonstrate the difference by running a trace on the same insert statement against an LMT and a DMT, and showing the extra SQL statements from the trace file in the DMT version; but I can't find a DMT on any database I have access to, going back to 9i, which kind of backs up David's point I suppose. Instead I'll point you to yet more documentation:
Sometimes, to execute a SQL statement issued by a user, Oracle Database must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, then Oracle Database makes recursive calls to allocate the space dynamically. Recursive calls are also generated when data dictionary information is not available in the data dictionary cache and must be retrieved from disk.
You can use the tracing tools described in that document to compare for yourself, if you have access to a DMT; or you can search for examples.
You can see recursive SQL referred to elsewhere, usually in errors; the error isn't directly in the SQL you are executing, but in extra SQL Oracle issues internally in order to fulfil your request. LMTs just remove one instance where recursive SQL used to be necessary, and in the process can remove a significant bottleneck.

JDBC Statement or PreparedStatement without WHERE

If I don't have a WHERE clause in a query, should I use Statement or PreparedStatement? Which one will be more efficient?
For example:
SELECT ID, NAME FROM PERSON
A prepared statement is precompiled to enhance efficiency. Also, the database caches the statement, which gains performance on later executions. Both can be of use even if you don't have variables in your statement, especially if the statement is executed often.
If it is executed once or very seldom, I'd say a normal Statement is fine. Otherwise I would use a PreparedStatement. But there's no way of being sure about it without benchmarking.
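A minimal sketch of both variants, assuming con is an open java.sql.Connection (with java.sql.* imported) and the PERSON table from the question exists:

// One-off query: a plain Statement is the simplest choice
try (Statement stmt = con.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT ID, NAME FROM PERSON")) {
    while (rs.next()) {
        System.out.println(rs.getInt("ID") + " " + rs.getString("NAME"));
    }
}

// Repeated execution: a PreparedStatement lets the driver and database reuse the compiled statement
try (PreparedStatement ps = con.prepareStatement("SELECT ID, NAME FROM PERSON")) {
    for (int i = 0; i < 3; i++) {
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // process each row
            }
        }
    }
}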
It depends on the implementation of the JDBC driver. Some vendors cache the statement, regardless of whether it is an instance of java.sql.Statement or java.sql.PreparedStatement. For simplicity, you could use java.sql.Statement. On the other hand, if you plan to add a parameter and execute the statement several times (in the same connection), use an instance of java.sql.PreparedStatement.
The javadoc for java.sql.PreparedStatement says:
This object can then be used to efficiently execute this statement multiple times.
Apart from what has been mentioned by stonedsquirrel, another point is that if in future you want to add a WHERE condition, it is easy to make the change; all you need to do is add the following to your code:
PreparedStatement ps = con.prepareStatement("SELECT ID, NAME FROM PERSON WHERE NAME= ?");
ps.setString(1, getName(""));
....
...
However, if you are using Statement, then you need to make more changes in your code.
So by using PreparedStatement you will make minimal changes if you need to add WHERE conditions.
On the other hand, by using Statement it is quite easy to log or print the SQL query, whereas if PreparedStatement is used, logging or printing the SQL statement is quite difficult, as there are no direct approaches available.

How performance can change using greedy LINQ operators?

Is it wise, performance-wise, to use greedy LINQ operators such as ToList, ToLookup, Distinct, etc.?
What would be best practice(s) for LINQ query execution?
You often use List<> for your objects, or expose all your object lists as IEnumerable<>. I know the latter gives more flexibility.
When working in memory (LINQ to Objects) it's OK to always use deferred loading, because you can access the data whenever you need it without fear that the data has been changed, added, or inserted, as the reference will execute the query as soon as you need access. But this changes with database LINQ queries such as LINQ to EF.
I would like Stack Overflow users' opinions.
Thank you!
What would be best practice(s) for LINQ query execution?
A List may be accessed by index, a Lookup may be accessed by Key. These types are obviously serializable across a WCF boundary. A deferred IEnumerable doesn't do these things well.
For EF or LinqToSql, one must run their queries before the DataContext or whatever holds the SqlConnection gets disposed.
In my code, I use deferred IEnumerables only for method scoped variables when convenient. I use List for properties (sometimes the property constructs the List, but usually it's just backed by an instance) and method return types. Since I'm doing comparatively expensive things (like accessing the database or using WCF), the performance of eagerly executing in-memory Linq queries has never been an issue.
The final authority on any performance question is: how does it measure?
