Connection ended abnormally - Spring JDBC

I'm running a Spring Boot batch job with two different connections: an Oracle DB and a DB2 DB, both accessed through SimpleJdbcTemplate.
The problem happens when I run a query a second time, about an hour after the first one, because 300,000 inserts run before the second query.
The Oracle connection sits idle for so long that I believe that is what triggers the IOException.
I'm thinking about closing the Oracle connection before those 300,000 inserts...
Here is the stack trace:
Io exception: EDC8120I Connection ended abnormally.; nested exception
is java.sql.SQLException: Io exception: EDC8120I Connection ended
abnormally.
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:84)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:660)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:695)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:727)

Use batch inserts of 1,000 items per iteration, as in the sketch below.
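For example, a minimal sketch of chunked batch inserts with Spring's JdbcTemplate (the table and column names are placeholders):

import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;

public class ChunkedInserter {
    private static final int CHUNK_SIZE = 1000;
    // MY_TABLE, COL_A and COL_B are placeholder names.
    private static final String SQL = "INSERT INTO MY_TABLE (COL_A, COL_B) VALUES (?, ?)";

    public void insertInChunks(JdbcTemplate jdbcTemplate, List<Object[]> rows) {
        for (int i = 0; i < rows.size(); i += CHUNK_SIZE) {
            List<Object[]> chunk = rows.subList(i, Math.min(i + CHUNK_SIZE, rows.size()));
            jdbcTemplate.batchUpdate(SQL, chunk); // one JDBC batch per 1,000 rows
        }
    }
}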

Related

Sybase JZ0R2 issue related to connection pool refresh on Weblogic

We have an application running on WebLogic. At times (once every 2-3 weeks), all of a sudden I start getting the stack trace below. Even after multiple tries I only get "JZ0R2: No result set for this query", but the data is there in the DB for the row. To resolve it, I simply refresh the WebLogic connection pools and things start working as expected. Can someone help with a tentative reason for this behavior?
DB : Sybase DB version 15.7
Java: 1.7
org.springframework.dao.DataAccessResourceFailureException: Error retrieving database meta-data; nested exception is org.springframework.jdbc.support.MetaDataAccessException: Error while extracting DatabaseMetaData; nested exception is java.sql.SQLException: JZ0R2: No result set for this query.
at org.springframework.jdbc.core.metadata.CallMetaDataProviderFactory.createMetaDataProvider(CallMetaDataProviderFactory.java:142)
at org.springframework.jdbc.core.metadata.CallMetaDataContext.initializeMetaData(CallMetaDataContext.java:243)
at org.springframework.jdbc.core.simple.AbstractJdbcCall.compileInternal(AbstractJdbcCall.java:304)
at org.springframework.jdbc.core.simple.AbstractJdbcCall.compile(AbstractJdbcCall.java:289)
at org.springframework.jdbc.core.simple.AbstractJdbcCall.checkCompiled(AbstractJdbcCall.java:349)
at org.springframework.jdbc.core.simple.AbstractJdbcCall.doExecute(AbstractJdbcCall.java:364)
at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:197)
Try one of these:
conn.setAutoCommit(true);
or set an explicit transaction isolation level, for instance:
conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
(Note: passing the raw value 0, i.e. TRANSACTION_NONE, is rejected by most drivers, so use one of the named levels.)
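A minimal sketch putting both settings together on a freshly obtained connection (the DataSource lookup is an assumption):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ConnectionSettings {
    public void configure(DataSource dataSource) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(true);
            // Named constant rather than a raw int; READ_COMMITTED is only an example level.
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            // ... run the SimpleJdbcCall / metadata lookup on this connection ...
        }
    }
}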

JDBC Source Connector Error: Transaction was aborted. It was wounded by a higher priority transaction due to conflict on keys in range

I am using the JDBC source connector with the JDBC driver to collect data from Google Cloud Spanner into Kafka.
I am using "timestamp+incrementing" mode on a table. The primary key of the table includes two columns (order_item_id and order_id).
I used order_item_id as the incrementing column, and a column named "updated_time" as the timestamp column.
When I started the connector, I sometimes got the following errors, but I could still get the data in the end.
ERROR Failed to run query for table TimestampIncrementingTableQuerier{table="order_item", query='null',
topicPrefix='test_', incrementingColumn='order_item_id', timestampColumns=[updated_time]}: {}
(io.confluent.connect.jdbc.source.JdbcSourceTask:404)
com.google.cloud.spanner.jdbc.JdbcSqlExceptionFactory$JdbcAbortedDueToConcurrentModificationException:
The transaction was aborted and could not be retried due to a concurrent modification
...
Caused by: com.google.cloud.spanner.AbortedDueToConcurrentModificationException:
The transaction was aborted and could not be retried due to a concurrent modification
...
Suppressed: com.google.cloud.spanner.connection.AbstractBaseUnitOfWork$SpannerAsyncExecutionException:
Execution failed for statement:
SELECT * FROM `order_item` WHERE `order_item`.`updated_time` < #p1 AND ((`order_item`.`updated_time` = #p2 AND `order_item`.`order_item_id` > #p3) OR `order_item`.`updated_time` > #p4) ORDER BY `order_item`.`updated_time`,`order_item`.`order_item_id` ASC
...
Caused by: com.google.cloud.spanner.AbortedException: ABORTED: io.grpc.StatusRuntimeException:
ABORTED: Transaction was aborted. It was wounded by a higher priority transaction due to conflict on keys in range [[5587892845991837697,5587892845991837702], [5587892845991837697,5587892845991837702]), column adjust in table order_item.
retry_delay {
nanos: 12974238
}
- Statement: 'SELECT * FROM `order_item` WHERE `order_item`.`updated_time` < #p1 AND ((`order_item`.`updated_time` = #p2 AND `order_item`.`order_item_id` > #p3) OR `order_item`.`updated_time` > #p4) ORDER BY `order_item`.`updated_time`,`order_item`.`order_item_id` ASC'
...
I am wondering how this error happens in my case. By the way, even with the error, the connector can still collect the data in the end. Can anyone help with it? Thank you so much!
I'm not sure exactly how your entire pipeline is set up, but the error indicates that you are executing the query in a read/write transaction. Any read/write transaction on Cloud Spanner can be aborted by Cloud Spanner, and may result in the error that you are seeing.
If your pipeline is only reading from Cloud Spanner, the best thing to do is to set your JDBC connection to read-only and autocommit mode. You can do this directly in your JDBC connection URL by adding the readonly=true and autocommit=true properties to the URL.
Example:
jdbc:cloudspanner:/projects/my-project/instances/my-instance/databases/my-database;readonly=true;autocommit=true
It could also be that the framework(s) that you are using change the JDBC connection after it has been opened. In that case you should look at whether you can change that in the framework(s). But changing the JDBC URL as in the example above may very well be enough in this case.
Background information:
If the JDBC connection is opened with autocommit turned off and the connection is in read/write mode, a read/write transaction is started automatically when a query is executed. All subsequent queries will use the same read/write transaction until commit() is called on the connection. This is the least efficient way to read large amounts of data on Cloud Spanner and should be avoided whenever possible. It will also cause aborted transactions, as the read operations take locks on the data they are reading.
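A minimal sketch of setting the same modes programmatically, in case the URL cannot be changed (project, instance, and database names are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ReadOnlySpannerConnection {
    public Connection open() throws SQLException {
        String url = "jdbc:cloudspanner:/projects/my-project/instances/my-instance/databases/my-database";
        Connection conn = DriverManager.getConnection(url);
        conn.setAutoCommit(true); // no long-lived read/write transaction is kept open
        conn.setReadOnly(true);   // queries run as read-only reads that take no locks
        return conn;
    }
}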

"NzSQLException: The update count exceeded Integer.MAX_VALUE" ERROR only on JDBC connection

When constructing a rather large table in Netezza, I get the following error when using a JDBC connection:
org.netezza.error.NzSQLException: The update count exceeded Integer.MAX_VALUE.
The table does get created properly, but the code throws an exception. When I try running the same SQL using nzsql I get:
INSERT 0 2395423258
i.e. no thrown exceptions. It seems the variable storing the count of records in JDBC is not large enough?
Has anyone else encountered this error? How did you deal with it?
Modify your connection string to include ignoreUpdateCount=on as a parameter and try again.
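A minimal sketch, assuming the driver accepts the property through a Properties object (host, port, database, and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class NetezzaConnect {
    public Connection open() throws SQLException {
        Properties props = new Properties();
        props.setProperty("user", "admin");           // placeholder credentials
        props.setProperty("password", "secret");
        props.setProperty("ignoreUpdateCount", "on"); // skip the 32-bit update count
        return DriverManager.getConnection("jdbc:netezza://myhost:5480/mydb", props);
    }
}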

Can we insert into an external table?

I am debugging Big Data code in my company's production environment. Hive returns the following error:
Exception: org.apache.hadoop.hive.ql.lockmgr.LockException: No record of lock could be found, may have timed out
Killing DAG...
Execution has failed.
Exception in thread "main" java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask.
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:282)
at org.apache.hive.jdbc.HiveStatement.executeUpdate(HiveStatement.java:392)
at HiveExec.main(HiveExec.java:159)
After investigation, I found that this error could be caused by BoneCP (selected via the connectionPoolingType property), but the cluster support team told me they fixed this bug by upgrading BoneCP.
My question is: can we INSERT INTO an external table in Hive? I have doubts about the insertion script.
Yes, you can insert into an external table.
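A minimal sketch of such an insert over Hive JDBC (the URL, credentials, and table are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveExternalInsert {
    public void insert() throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://myhost:10000/default", "user", "");
             Statement stmt = conn.createStatement()) {
            // External tables accept INSERT like managed tables;
            // the rows land under the table's external LOCATION.
            stmt.executeUpdate("INSERT INTO my_external_table VALUES (1, 'a')");
        }
    }
}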

JDBC call to Oracle 11.1.0.7.0 DB blocked

The JDBC call blocks and does not return; below is the stack trace.
Oracle server: 11.1.0.7
Oracle thin driver used at the client.
Would appreciate your help.
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:140)
at oracle.net.ns.Packet.receive(Packet.java:240)
at oracle.net.ns.DataPacket.receive(DataPacket.java:92)
at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:172)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:117)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:92)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:77)
at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1034)
at oracle.jdbc.driver.T4CMAREngine.unmarshalSB1(T4CMAREngine.java:1010)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:588)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:183)
at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:780)
at oracle.jdbc.driver.T4CStatement.executeMaybeDescribe(T4CStatement.java:855)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1377)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1377)
at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:387)
at oracle.jdbc.driver.OracleDatabaseMetaData.getTypeInfo(OracleDatabaseMetaData.java:670)
There might be a few reasons:
This thread is blocked in the DB, waiting for another thread to commit (or roll back).
It can be a firewall issue; a firewall may handle stale connections inappropriately.
You can see more information here: http://forums.oracle.com/forums/thread.jspa?messageID=4354229
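Whatever the root cause, a statement timeout makes the call fail fast instead of blocking in socketRead0 indefinitely; a minimal sketch (conn is an assumed existing Oracle connection):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TimedQuery {
    public void run(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.setQueryTimeout(60); // seconds; the driver raises an exception instead of hanging
            try (ResultSet rs = stmt.executeQuery("SELECT 1 FROM DUAL")) {
                // ... process rows ...
            }
        }
    }
}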
If you are running a stored procedure, it is apparently a JDBC Thin driver bug.
Update your JDBC driver to 11.2.0.2; it's supposed to fix the hang-on-getNextPacket issue:
Oracle JDBC Drivers release 11.2.0.2 Readme.txt
===============================================
Note: this readme is specific to fixes in 11.2.0.2 patch-set; see
the master readme of the 11.2 release at http://download.oracle.com/otn/utilities_drivers/jdbc/112/Readme.txt
Bug# Description
======= ===========================================================================================
2935043 SQLException Invalid conversion error when binding short JDBC type to Oracle number column
5071698 PropertyCheckInterval of zero causes high CPU
6748410 registerOutParameter does not perform well in JDBC Thin
7208615 NUMBER column shows as precision 22 in JDBC
7281435 ORA-30006 during xa_commit cause XAER_RMERR in JDBC
8588311 PreparedStatement.setBinaryStream() does not read all the data from the stream
8592559 JDBC Thin cannot fetch PlsqlIndexByTable with more than 32k items
8617180 ORA-1458 error with large batch sizes using JDBC OCI driver
8832811 Non ASCII characters inserted into US7ASCII DB using JDBC Thin
8834344 Binding date of the Julian calendar throws IllegalArgumentException
8873676 JDBC Thin throws SQLException while reading invalid characters
8874882 ORA-22922 reusing large string bind for a LOB in JDBC
8889839 XA_RMERR being thrown on the recover(TMNOFLAGS) call from JDBC
8891187 JDBC does not close the connection after a fatal error
8980899 JDBC Thin new property enableDataInLocator for LOB data
8980918 JDBC Thin should use "data in locator" feature to save round-trips for small Lobs
8982104 Add JDBC support for SQLXML
9045206 11.2 JDBC driver returns zero rows with REF CURSOR OUT parameter
9099863 ps.setbytes on BLOB columns in batch does not inherit value to following lines
9105438 ORA-22275 during ps.executeBatch with LOBs
9121586 ORA-22925 getting large LOB via JDBC Thin 11.2
9139227 Wrong error code on JDBC connection failure
9147506 Named parameter in callable statement not working from JDBC
9180882 JDBC Statement.Execute does not handle comments as first elements for INSERT
9197956 JDBC Data Change Notification fails with IllegalArgumentException
9240210 Silent truncation reading 4gb BLOB with JDBC Thin 11.2
9259830 DatabaseChangeNotification fails to clean up
9260568 isValidObjectName() rejects valid object names
9341542 getmetadata().getindexinfo fails with quoted table names (ORA-947)
9341742 setBinaryStream causes dump/ORA-24804 if an unread stream is bound to a DML
9374132 Territory is allowed to be NULL resulting in ORA-12705
9394224 Poor performance for batch PreparedStatement execute with XMLType or objects.
9445675 "No more data" / ORA-3137 using end to end metrics with JDBC Thin
9468517 JDBC OCI and OCI do not behave the same on TAF failover
9491385 Memory not released using registerIndexTableOutParameter in JDBC Thin
9491954 RuntimeException "Assertion botch: negative time" from Timestamp bind
9660015 JDBC Thin hangs by waiting getnextpacket when calling stored procedure
9767715 TIMESTAMPTZ stringvalue truncates leading zeros in decimal part
9786503 Cannot use OS authentication with OracleXADataSource
Check out bug #9660015.
Hope it helps.
It could be a firewall issue. Did you check whether the connections were active in the DB, or whether the DB sent the 10-byte ping data to these specific connections and it was OK?
If JDBC is able to create the PreparedStatement from the connection, the connection was OK from the client's perspective. However, what sits between the DB and the client? A firewall? A router? Check their settings.
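If a firewall is killing idle connections, enabling Oracle Net keepalive may help; a sketch of a thin-driver URL with (ENABLE=BROKEN) in the connect descriptor (host, port, and service name are placeholders):
jdbc:oracle:thin:@(DESCRIPTION=(ENABLE=BROKEN)(ADDRESS=(PROTOCOL=TCP)(HOST=myhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=myservice)))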