After several minutes of my app committing many transactions, I get the following exception:
Could not commit JDBC transaction; nested exception is
java.sql.SQLException: JZ006: Caught IOException:
java.net.SocketTimeoutException: Read timed out...
I'm using Sybase with the JDBC 4 driver and Spring JDBC, and I found this link: http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc39001.0707/html/prjdbc0707/prjdbc070714.htm
Could I just use any of the following:
SESSION_TIMEOUT
DEFAULT_QUERY_TIMEOUT
INTERNAL_QUERY_TIMEOUT
One idea is to batch the transactions, but I don't have time to implement that.
What options are there to avoid getting that error?
Check whether your processes are blocking each other when they execute (or ask your DBA if you're not sure how to check). Depending on the connection properties (specifically if autocommit is set to off), you may not actually be committing each transaction fully before the next one is attempted, and transactions may block each other if you're using a connection pool with multiple threads. Talk to your DBA and check the table's locking scheme: for example, if it's set to allpages locking, you will hold locks at the page level rather than the row level. You can also check this yourself via sp_help. More information on the various locking schemes can be found at http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc20021_1251/html/locking/X25549.htm (an old version of the docs, but still valid for current versions).
You can check for locks via sp_who, sp_lock, or against the system tables directly (select spid, blocked from master..sysprocesses where blocked != 0 is a very simple query that returns each blocked process and the process blocking it; you can add more columns as required).
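For what it's worth, a bare-bones JDBC version of that check might look like the sketch below. Host, port and credentials are placeholders, and it assumes the jConnect driver is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Prints each blocked spid and the spid blocking it, using the same
// sysprocesses query as above. Connection details are placeholders.
public class BlockingCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sybase:Tds:dbhost:5000/master"; // placeholder host/port
        try (Connection conn = DriverManager.getConnection(url, "sa", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "select spid, blocked from master..sysprocesses where blocked != 0")) {
            while (rs.next()) {
                System.out.println("spid " + rs.getInt("spid")
                        + " is blocked by spid " + rs.getInt("blocked"));
            }
        }
    }
}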
You should also ask your DBA to check that the transactions are optimal: for example, a table scan on an UPDATE could well lock the whole table against other transactions and would lead to the timeout issues you're seeing here.
Related
On a particularly heavily used DB2 table, accessed by distributed Java desktop applications via JDBC, I'm running into the following scenario several times a day:
Client A wants to INSERT new rows and gets an IX lock on the table and X locks on each new row;
Other client(s) want to perform a SELECT and are granted an IS lock on the table, but the application gets stuck;
Client A continues to work, but the INSERT and UPDATE queries are not committed, the locks are not released, and it keeps collecting X locks on each row;
Client A exits and its work is not committed. The other clients finally get their SELECT result set.
This used to work well, and it still does most of the time, but the lock situations are becoming more and more frequent.
Auto-commit is ON.
There are no exceptions thrown or errors detected in the logs.
DB2 9.5 / JDBC Driver 9.1 (JDBC 3 specification)
If the JDBC applications are not performing a COMMIT then the locks will persist until a rollback or commit happens. If an application quits with uncommitted inserts then a rollback will happen in all recent versions of Db2. This is expected behaviour for Db2 on Linux/Unix/Windows.
If the JDBC application is failing to commit then it is broken or misconfigured, so you must get to the root cause of that if you want a permanent solution.
If the other clients wish to ignore the insert row-locks then they should choose the correct isolation level, and you can configure Db2 to skip insert locks. See the documentation for DB2_SKIPINSERTED at this link.
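As a rough sketch of the client side of that advice (host, port and credentials are placeholders; DB2_SKIPINSERTED itself is a server-side registry variable set with db2set, not a driver property):

import java.sql.Connection;
import java.sql.DriverManager;

// Sketch only: reader-side settings that pair with DB2_SKIPINSERTED=ON on the
// server (set with "db2set DB2_SKIPINSERTED=ON"). Host, port and credentials
// below are placeholders.
public class ReaderConnection {
    static Connection open() throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/MYDB", "appuser", "secret");
        conn.setAutoCommit(true); // readers commit immediately, releasing their locks
        // CS (Cursor Stability) maps to READ_COMMITTED in JDBC terms;
        // DB2_SKIPINSERTED only applies under CS or RS isolation.
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        return conn;
    }
}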
It turns out that sometimes auto-commit, and I don't know why, gets turned off on a random single instance of the application.
The following validation seems to solve the problem (but not the root of it):
// Defensive check: if auto-commit has been switched off somehow, commit explicitly
if (!conn.getAutoCommit()) {
    conn.commit();
}
I use DataGrip to move some data from a MySQL installation to another PostgreSQL database.
That worked like a charm for 3 other tables. The next one, more than 500,000 rows, could not be imported.
I use the function "Copy Table To... (F5)".
This is the log.
16:28  Connected
16:30  user#localhost: tmp_post imported to forum_post: 1999 rows (1m 58s 206ms)
16:30  Can't save current transaction state. Check connection and database settings and try again.
For other errors, like wrong data types or null data in NOT NULL columns, a very helpful log is created. But not this time.
The problem also occurs when using the database plugin in IntelliJ-based IDEs, not only in DataGrip.
The simplest way to solve the issue is just to add "prepareThreshold=0" to your connection string as in this answer:
jdbc:postgresql://ip:port/db_name?prepareThreshold=0
Or, for example, if you are using several settings in the connection string:
jdbc:postgresql://hostmaster.com:6432,hostsecond.com:6432/dbName?targetServerType=master&prepareThreshold=0
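If you run into the same error from your own JDBC code rather than from the IDE, the same setting can be passed programmatically, for example like the sketch below (credentials are placeholders; the URL mirrors the one above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Sketch: the same fix applied from application code. "prepareThreshold=0"
// tells the PostgreSQL JDBC driver never to switch to server-side prepared
// statements. Host, database and credentials are placeholders.
public class BulkLoadConnection {
    static Connection open() throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "app_user");      // placeholder
        props.setProperty("password", "secret");    // placeholder
        props.setProperty("prepareThreshold", "0"); // disable server-side prepares
        return DriverManager.getConnection("jdbc:postgresql://ip:port/db_name", props);
    }
}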
It's a well-known problem when connecting to the PostgreSQL server via PgBouncer, rather than a problem with IntelliJ itself. When loading a large amount of data into the database, IntelliJ splits the data into chunks and loads them sequentially, each time executing the query and committing the data. By default, the PostgreSQL JDBC driver starts using server-side prepared statements after 5 executions of a query.
The driver uses server side prepared statements by default when
PreparedStatement API is used. In order to get to server-side prepare,
you need to execute the query 5 times (that can be configured via
prepareThreshold connection property). An internal counter keeps track
of how many times the statement has been executed and when it reaches
the threshold it will start to use server side prepared statements.
Your PgBouncer probably runs with transaction pooling, and PgBouncer does not support prepared statements in transaction pooling mode.
How to use prepared statements with transaction pooling?
To make prepared statements work in this mode would need PgBouncer to
keep track of them internally, which it does not do. So the only way
to keep using PgBouncer in this mode is to disable prepared statements
in the client.
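If you would rather not turn off server-side prepares for the whole connection, the PostgreSQL driver also exposes the threshold per statement through its PGStatement extension interface. A sketch, with made-up table and column names:

import java.sql.Connection;
import java.sql.PreparedStatement;
import org.postgresql.PGStatement;

// Sketch: disable server-side prepares for a single statement instead of the
// whole connection. Table and column names are hypothetical.
public class ChunkInserter {
    static PreparedStatement prepareInsert(Connection conn) throws Exception {
        PreparedStatement ps = conn.prepareStatement(
                "insert into forum_post (id, body) values (?, ?)");
        // Threshold 0 means this statement is never promoted to a named
        // server-side prepared statement, which transaction-pooling PgBouncer
        // cannot route correctly.
        ps.unwrap(PGStatement.class).setPrepareThreshold(0);
        return ps;
    }
}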
You can verify that the issue really is caused by the incorrect use of prepared statements with PgBouncer by looking at the IntelliJ log files. Go to Help -> Show Log in Explorer and search for the "org.postgresql.util.PSQLException: ERROR: prepared statement" exception.
2022-04-08 12:32:56,484 [693272684] WARN - j.database.dbimport.ImportHead - ERROR: prepared statement "S_3649" does not exist
java.sql.SQLException: ERROR: prepared statement "S_3649" does not exist
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
at org.postgresql.jdbc.PgConnection.executeTransactionCommand(PgConnection.java:755)
at org.postgresql.jdbc.PgConnection.commit(PgConnection.java:777)
I am trying to use HikariCP in a legacy system. I configured autocommit to false, which is what we want, and realized that my logs are filled with
[c.z.h.p.ProxyConnection][ProxyConnection.java:232] ora - Executed rollback on connection net.sf.log4jdbc.ConnectionSpy#3f2bba67 due to dirty commit state on close().
This happens when a connection acquired from the pool is closed after issuing a finder query. No inserts/updates/deletes happen within the life of the connection. Is this how it should be for SELECT queries? Should I be doing a COMMIT after each SELECT?
Yes, you should be committing. Even SELECT queries start transactions and can acquire locks, particularly at some isolation levels and, depending on the database, even with TRANSACTION_READ_COMMITTED.
HikariCP treats a missing explicit commit when autocommit is false as an application error. Some other pools can be configured to "commit on close", but HikariCP considers that risky and a hack to support applications that were never written properly.
The JDBC specification is explicitly silent on whether a Connection without auto-commit should automatically commit or roll back. That is an implementation detail left up to the driver developers.
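In practice that means finishing every unit of work explicitly, reads included, before the connection goes back to the pool. A minimal sketch, with a hypothetical table and query:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Sketch: with auto-commit off, even a read-only unit of work ends with an
// explicit commit (or rollback) before the connection returns to the pool,
// so HikariCP never sees a "dirty commit state on close()". The table and
// column are hypothetical.
public class CustomerFinder {
    static String findName(DataSource ds, long id) throws SQLException {
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "select name from customer where id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                String name = rs.next() ? rs.getString(1) : null;
                conn.commit(); // close out the read-only transaction cleanly
                return name;
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }
}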
Is there a way to set the lock timeout in JDBC? It should work for PostgreSQL, Oracle, SQL Server and MySQL.
I found the method setQueryTimeout in the Statement class. Is this the right thing? Or is it a general timeout, so that an exception occurs when a long update takes longer than the query timeout, even if the query is not waiting for a lock?
What is the best way to set the lock timeout in JDBC?
There is no standard JDBC option to configure a lock timeout; this is database-specific and not supported by the JDBC standard. You will need to find out how each database supports lock timeouts and how that is configured in its driver, and then handle the differences between the drivers yourself.
The query timeout is not a lock timeout. It specifies the time a query is allowed to run (if supported by the driver and database); it is intended to kill or prevent long-running queries.
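So in practice you end up issuing a vendor-specific session setting right after obtaining the connection, along the lines of the sketch below (these are the statements I'm aware of; Oracle has no general session-wide DML lock wait timeout, so it is handled per statement with SELECT ... FOR UPDATE WAIT n):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: apply a ~5 second lock wait timeout per vendor right after
// connecting. There is no portable JDBC call for this, so the statement
// differs for every database; the "vendor" key used here is made up.
public class LockTimeouts {
    static void apply(Connection conn, String vendor) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            switch (vendor) {
                case "postgresql":
                    stmt.execute("SET lock_timeout = '5s'");                  // per session
                    break;
                case "sqlserver":
                    stmt.execute("SET LOCK_TIMEOUT 5000");                    // milliseconds
                    break;
                case "mysql":
                    stmt.execute("SET SESSION innodb_lock_wait_timeout = 5"); // seconds
                    break;
                case "oracle":
                    // No session-wide DML lock wait timeout; use
                    // SELECT ... FOR UPDATE WAIT 5 on the statements that need it.
                    break;
                default:
                    throw new IllegalArgumentException("unknown vendor: " + vendor);
            }
        }
    }
}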
My app needs to recover automatically from failures. I test it as follows:
Start app
In the middle of processing, kill the application server host (shutdown -r -f)
On host reboot, application server restarts (as a windows service)
Application restarts
Application tries to process, but is blocked by an incomplete 2-phase commit transaction in the Oracle DB from the previous session.
Somewhere between 10 and 30 minutes later the DB resolves the prior txn and processing continues OK.
I need it to continue processing faster than this. My DBA advises that I should prefix my statement with
ALTER SESSION ADVISE COMMIT;
But he can't give me guarantees or details about the potential for data loss doing this.
Luckily the statement in question is simply updating a datetime value to SYSDATE every second or so, so if there was some data corruption it would last < 1 second before it was overwritten.
But, to my question: what exactly does the statement above do? How does Oracle resolve data synchronisation issues when it is used?
Can you clarify the role of the 'local' and 'remote' databases in your scenario?
Generally, a multi-DB transaction does the following:
1. Starts the transaction
2. Makes a change on one database
3. Makes a change on the other database
4. Gets the other database to 'promise to commit'
5. Commits locally
6. Gets the remote DB to commit
In-doubt transactions happen if step 4 has completed and then something fails. The general practice is to get the remote database back up and confirm whether it committed. If so, step 5 goes ahead. If the remote component of the transaction can't be committed, the local component is rolled back.
Your description seems to refer to an app server failure, which is a different kettle of fish. In your case, I think the scenario is as follows:
App server takes a connection and starts a transaction
App server dies without committing
App server restarts and make a new database connection
App server starts a new transaction on the new connection
New transaction gets 'stuck' waiting for a lock held by the old connection/transaction
After 20 minutes, the dead connection is terminated and the transaction rolled back
New transaction then continues
In which case the solution is to kill off the old connection quicker, either with a shorter timeout (e.g. SQLNET.EXPIRE_TIME in the server's sqlnet.ora) or with a manual ALTER SYSTEM KILL SESSION.
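For the manual route, a rough sketch of how one might spot the stale blocking session before killing it (run with a user that can read v$session; the sid and serial# it prints feed straight into the KILL command):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: list sessions that are currently blocking others and print the KILL
// command a DBA could run manually. Nothing is actually killed here.
public class BlockerReport {
    static void print(Connection dbaConn) throws SQLException {
        String sql = "select sid, serial# from v$session"
                   + " where sid in (select blocking_session from v$session"
                   + "               where blocking_session is not null)";
        try (Statement stmt = dbaConn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println("ALTER SYSTEM KILL SESSION '"
                        + rs.getInt(1) + "," + rs.getInt(2) + "'");
            }
        }
    }
}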