How to set lock timeout in JDBC

Is there a way to set the lock timeout in JDBC? It should work for PostgreSQL, Oracle, SQL Server, and MySQL.
I found the method setQueryTimeout in the Statement class. Is this the right thing? Or is it a general timeout, so that when a long update takes longer than the query timeout an exception occurs, even if the query is not waiting for a lock?
What is the best way to set a lock timeout in JDBC?

There is no standard JDBC option to configure a lock timeout. This is database specific and not covered by the JDBC specification. You will need to find out how each database supports lock timeouts and how they are configured through its driver, and then handle the differences between drivers yourself.
The query timeout is not a lock timeout. It specifies the time a query is allowed to run (if supported by the driver and database); it is intended to kill or prevent long-running queries.
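
As a sketch of what the per-database handling could look like, the snippet below applies a lock wait timeout using each vendor's session-level SQL. The helper name setLockTimeout and the dispatch on getDatabaseProductName() are my own illustration, not a standard API; verify the exact semantics and units against your database versions.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public final class LockTimeouts {

    // Illustrative helper: apply a lock wait timeout to the current session.
    static void setLockTimeout(Connection conn, int seconds) throws SQLException {
        String product = conn.getMetaData().getDatabaseProductName();
        try (Statement stmt = conn.createStatement()) {
            switch (product) {
                case "PostgreSQL":
                    // lock_timeout is in milliseconds and is session-scoped.
                    stmt.execute("SET lock_timeout = " + (seconds * 1000));
                    break;
                case "Microsoft SQL Server":
                    // LOCK_TIMEOUT is in milliseconds and is session-scoped.
                    stmt.execute("SET LOCK_TIMEOUT " + (seconds * 1000));
                    break;
                case "MySQL":
                    // InnoDB row-lock wait timeout, in seconds.
                    stmt.execute("SET SESSION innodb_lock_wait_timeout = " + seconds);
                    break;
                case "Oracle":
                    // Oracle has no session-wide lock wait timeout for DML;
                    // DDL_LOCK_TIMEOUT covers DDL only. For DML, a per-statement
                    // SELECT ... FOR UPDATE WAIT n clause is the usual approach.
                    stmt.execute("ALTER SESSION SET DDL_LOCK_TIMEOUT = " + seconds);
                    break;
                default:
                    throw new SQLException("No lock timeout strategy for " + product);
            }
        }
    }
}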

Related

Snowflake: cancel query when ABORT_DETACHED_QUERY=true at the session level

I am using the Snowflake JDBC driver to execute multi-statement scripts on Snowflake. My application is started by an Oozie job on a Hadoop cluster (in migration phase). The requirement is that when the Oozie job is killed, thereby killing the running application instance, the query that was submitted via JDBC should be cancelled by Snowflake.
I have added ABORT_DETACHED_QUERY=true to the JDBC connection URL, which looks like jdbc:snowflake://<account>.snowflakecomputing.com/?warehouse=<WH>&db=<DB>&schema=<SCHEMA>&ABORT_DETACHED_QUERY=true.
Even after 25 minutes, the script execution is not cancelled by Snowflake. I tried to find the underlying problem: I queried the SESSIONS view using the session ID, but the session was not there, and I could not find a way to list active connections.
So I have two questions:
Is this the right way to configure the ABORT_DETACHED_QUERY parameter?
How do you check for active JDBC connections on Snowflake? SHOW CONNECTIONS didn't return any connection belonging to my application.
Also, I am using commons-dbcp BasicDataSource as the datasource manager and commons-dbutils to submit queries via the QueryRunner.execute(String) method.
This is a session parameter, not a connection string parameter, so the proper way to set it is with an ALTER command:
ALTER SESSION SET ABORT_DETACHED_QUERY=TRUE;
After 5 minutes, your queries should be aborted if connectivity is lost due to abrupt termination of the session.
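
Since you are already on commons-dbcp, one way to ensure every pooled connection runs that ALTER is dbcp's connection-initialization SQL. A minimal sketch, assuming commons-dbcp2 (dbcp 1.x has an equivalent setConnectionInitSqls) and the same placeholder URL as above:

import java.sql.Connection;
import java.sql.Statement;
import java.util.List;
import org.apache.commons.dbcp2.BasicDataSource;

public final class SnowflakePool {
    public static void main(String[] args) throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:snowflake://<account>.snowflakecomputing.com/?warehouse=<WH>&db=<DB>&schema=<SCHEMA>");
        // Executed on every physical connection dbcp opens, so each
        // Snowflake session gets the parameter before any work runs.
        ds.setConnectionInitSqls(List.of("ALTER SESSION SET ABORT_DETACHED_QUERY=TRUE"));

        try (Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.execute("..."); // placeholder: the multi-statement script as before
        }
    }
}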

Consuming cockroachdb changefeed via JDBC

Is it possible to consume "EXPERIMENTAL CHANGEFEED FOR" (core) type queries over JDBC?
Is it possible to consume "CREATE CHANGEFEED FOR" (enterprise) type queries over JDBC?
Thanks for your interest in CockroachDB changefeeds. Enterprise changefeeds should work fine with JDBC or any other SQL driver: the CREATE CHANGEFEED statement sets up the changefeed to deliver data to a Kafka or cloud storage sink and immediately returns a job ID that you can use to monitor the health of the changefeed via the SHOW JOBS statement or the web UI.
Core changefeeds work a little differently from other SQL statements: when you issue a CHANGEFEED FOR statement, CockroachDB streams results back indefinitely and never returns unless something goes wrong or the query is canceled. Currently, this streaming behavior isn't implemented in the way that the Postgres JDBC driver expects (see #4035 and the linked work-in-progress PRs), so consuming results using Postgres JDBC cursors won't work. We're working on adding support for this.
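
For the enterprise case, starting a changefeed over JDBC and capturing the job ID could look like the sketch below; the connection string, table name, and Kafka sink are placeholder assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public final class ChangefeedStarter {
    public static void main(String[] args) throws Exception {
        // CockroachDB speaks the Postgres wire protocol, so pgJDBC is used here.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:26257/defaultdb", "root", "");
             Statement stmt = conn.createStatement();
             // Placeholder table and Kafka sink.
             ResultSet rs = stmt.executeQuery(
                 "CREATE CHANGEFEED FOR TABLE my_table INTO 'kafka://broker:9092'")) {
            if (rs.next()) {
                long jobId = rs.getLong("job_id"); // returned immediately
                System.out.println("changefeed job id: " + jobId);
                // Monitor the changefeed later with SHOW JOBS, filtering on this id.
            }
        }
    }
}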

Sybase and JDBC: could not commit JDBC transaction, read timed out

After my app has been committing many transactions for several minutes, I get the following exception:
Could not commit JDBC transaction; nested exception is
java.sql.SQLException: JZ006: Caught IOException:
java.net.SocketTimeoutException: Read timed out..."
I'm using Sybase with the JDBC 4 driver with Spring JDBC, and I found this link: http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc39001.0707/html/prjdbc0707/prjdbc070714.htm
Could I just use any of the following:
SESSION_TIMEOUT
DEFAULT_QUERY_TIMEOUT
INTERNAL_QUERY_TIMEOUT
One idea is to batch the transactions, but I have no time to develop that.
What options are there to avoid this error?
Check whether your processes are blocking each other when they execute (or ask your DBA if you're not sure how to check). Depending on the connection properties (specifically, autocommit being set to off), you may not actually be committing each transaction fully before the next one is attempted, and transactions may block each other if you're using a connection pool with multiple threads. Talk to your DBA and check the table's locking scheme: if it's set to allpages locking, for example, you will hold locks at the page rather than the row level. You can also check this yourself via sp_help. More information on the various locking schemes can be found at http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc20021_1251/html/locking/X25549.htm (an old version of the docs, but still valid on current versions).
You can check for locks via sp_who, sp_lock, or against the system tables directly (select spid, blocked from master..sysprocesses where blocked != 0 is a very simple query that returns each blocked process and its blocker; add more columns as required).
You should also ask your DBA to check that the transactions are optimal: a table scan on an update, for example, could lock out the whole table to other transactions and lead to exactly the timeout issues you're seeing here.
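
If you want to watch for blocking from the application side, the sysprocesses query above can also be run over JDBC. A minimal sketch, with a placeholder jConnect URL and credentials:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public final class BlockCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder host, port, and credentials.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:sybase:Tds:dbhost:5000", "user", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "select spid, blocked from master..sysprocesses where blocked != 0")) {
            while (rs.next()) {
                System.out.printf("spid %d is blocked by spid %d%n",
                        rs.getInt("spid"), rs.getInt("blocked"));
            }
        }
    }
}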

GlassFish JDBC connection pooling and Oracle global temporary table: same session ID

Before I start with my question, I would like to clarify that I am a Java/J2EE developer with a limited understanding of things on the Oracle side.
I am using GlassFish server with JDBC connection pooling and, on the back end, an Oracle database.
I am also using an Oracle global temporary table to execute some workflow, storing session-specific data in it.
Now my issue is that most of the time I get the same session ID for each connection.
Does that mean I can't use a global temporary table with GlassFish JDBC connection pooling?
Another interesting thing: if I remove connection pooling, I get a different session ID for each connection.
Please provide your suggestions.
When using connection pooling it's always best not to leave state behind in the database session when the connection is released into the pool, because there is no guarantee that you'll get back the same connection the next time you need one. A global temporary table (GTT) is an example of such state: its contents belong to one database session, i.e. to one JDBC connection (there is a 1:1 mapping between database session and JDBC connection), and won't be visible from any other JDBC connection.
So if your business logic requires a GTT, you should not release the connection back to the pool until you're done using the GTT. Note that this goes against the best practice of releasing the connection back to the pool as soon as possible. As an alternative, you may use a normal table and commit your temporary results into it so that they can be accessed through any other connection.
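
A sketch of the first option, holding one pooled connection for the whole GTT workflow; the JNDI name, table, and SQL are placeholders:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public final class GttWorkflow {
    public void run() throws Exception {
        // Placeholder JNDI name for the GlassFish connection pool.
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myPool");
        try (Connection conn = ds.getConnection()) { // hold for the whole workflow
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("INSERT INTO my_gtt (id, payload) VALUES (1, 'step 1')");
                // All reads of the GTT must go through this same connection.
                try (ResultSet rs = stmt.executeQuery("SELECT payload FROM my_gtt")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("payload"));
                    }
                }
            }
        } // close() returns the connection to the pool; the GTT data is not visible to later users
    }
}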

HikariCP: SELECT queries execute rollback due to dirty commit state on close()

I am trying to use HikariCP in a legacy system. I configured autocommit to false, which is what we want, and realized that my logs are filled with
[c.z.h.p.ProxyConnection][ProxyConnection.java:232] ora - Executed rollback on connection net.sf.log4jdbc.ConnectionSpy#3f2bba67 due to dirty commit state on close().
This happens when a connection acquired from the pool is closed after issuing a finder query. No inserts/updates/deletes happen within the life of the connection. Is this how it should be for SELECT queries? Should I be doing a COMMIT after each SELECT?
Yes, you should be committing. Even SELECT queries initiate transactions and can acquire locks, particularly at higher isolation levels, and depending on the database even with TRANSACTION_READ_COMMITTED.
HikariCP treats a missing explicit commit when autocommit is false as an application error. Some other pools support configuring "commit-on-close", but HikariCP considers that risky, and a hack to support applications that were never written properly.
The JDBC specification is explicitly silent on whether a Connection without auto-commit should automatically commit or roll back on close; that is an implementation detail left up to the driver developers.
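
A sketch of the pattern this answer recommends, assuming a DataSource backed by HikariCP with autoCommit=false and a placeholder query:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public final class FinderQuery {
    private final DataSource dataSource; // e.g. a HikariDataSource with autoCommit=false

    public FinderQuery(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public String findName(long id) throws Exception {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT name FROM users WHERE id = ?")) { // placeholder query
            ps.setLong(1, id);
            String name = null;
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    name = rs.getString("name");
                }
            }
            // End the read-only transaction explicitly so close() finds a
            // clean commit state and HikariCP doesn't log a rollback.
            conn.commit();
            return name;
        }
    }
}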
