JDBC call to Oracle 11.1.0.7.0 DB blocked

The JDBC call blocks and never returns; the stack trace is below.
Oracle server: 11.1.0.7
Client: Oracle Thin JDBC driver
Would appreciate your help.
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:140)
at oracle.net.ns.Packet.receive(Packet.java:240)
at oracle.net.ns.DataPacket.receive(DataPacket.java:92)
at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:172)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:117)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:92)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:77)
at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1034)
at oracle.jdbc.driver.T4CMAREngine.unmarshalSB1(T4CMAREngine.java:1010)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:588)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:183)
at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:780)
at oracle.jdbc.driver.T4CStatement.executeMaybeDescribe(T4CStatement.java:855)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1377)
at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:387)
at oracle.jdbc.driver.OracleDatabaseMetaData.getTypeInfo(OracleDatabaseMetaData.java:670)

There might be a few reasons:
The thread is blocked in the DB, waiting for another session to commit (or roll back).
It could be a firewall issue; a firewall may mishandle stale connections and silently drop them.
You can find more information here: http://forums.oracle.com/forums/thread.jspa?messageID=4354229
In either case it helps to cap how long the call may wait, as sketched below.
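A minimal sketch of that defensive setup, assuming the Oracle Thin driver (the URL, credentials, and the oracle.jdbc.ReadTimeout property are assumptions; check your driver version's documentation for the exact property name):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class BlockedCallWorkaround {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "scott");       // placeholder
        props.setProperty("password", "tiger");   // placeholder
        // Socket read timeout in ms for the Thin driver; without it a
        // dead connection can block forever in socketRead0.
        props.setProperty("oracle.jdbc.ReadTimeout", "60000");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", props);
             Statement stmt = conn.createStatement()) {
            // Second line of defence: per-statement timeout in seconds.
            stmt.setQueryTimeout(30);
            try (ResultSet rs = stmt.executeQuery("SELECT 1 FROM DUAL")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }
}

Neither timeout fixes the underlying lock or firewall problem, but both turn an indefinite hang into a diagnosable SQLException.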

If you are running a stored procedure, it is apparently a JDBC Thin driver bug.
Update your JDBC driver to 11.2.0.2; it is supposed to fix the hang-on-getNextPacket issue:
Oracle JDBC Drivers release 11.2.0.2 Readme.txt
===============================================
Note: this readme is specific to fixes in 11.2.0.2 patch-set; see
the master readme of the 11.2 release at http://download.oracle.com/otn/utilities_drivers/jdbc/112/Readme.txt
Bug# Description
======= ===========================================================================================
2935043 SQLException Invalid conversion error when binding short JDBC type to Oracle number column
5071698 PropertyCheckInterval of zero causes high CPU
6748410 registerOutParameter does not perform well in JDBC Thin
7208615 NUMBER column shows as precision 22 in JDBC
7281435 ORA-30006 during xa_commit cause XAER_RMERR in JDBC
8588311 PreparedStatement.setBinaryStream() does not read all the data from the stream
8592559 JDBC Thin cannot fetch PlsqlIndexByTable with more than 32k items
8617180 ORA-1458 error with large batch sizes using JDBC OCI driver
8832811 Non ASCII characters inserted into US7ASCII DB using JDBC Thin
8834344 Binding date of the Julian calendar throws IllegalArgumentException
8873676 JDBC Thin throws SQLException while reading invalid characters
8874882 ORA-22922 reusing large string bind for a LOB in JDBC
8889839 XA_RMERR being thrown on the recover(TMNOFLAGS) call from JDBC
8891187 JDBC does not close the connection after a fatal error
8980899 JDBC Thin new property enableDataInLocator for LOB data
8980918 JDBC Thin should use "data in locator" feature to save round-trips for small Lobs
8982104 Add JDBC support for SQLXML
9045206 11.2 JDBC driver returns zero rows with REF CURSOR OUT parameter
9099863 ps.setbytes on BLOB columns in batch does not inherit value to following lines
9105438 ORA-22275 during ps.executeBatch with LOBs
9121586 ORA-22925 getting large LOB via JDBC Thin 11.2
9139227 Wrong error code on JDBC connection failure
9147506 Named parameter in callable statement not working from JDBC
9180882 JDBC Statement.Execute does not handle comments as first elements for INSERT
9197956 JDBC Data Change Notification fails with IllegalArgumentException
9240210 Silent truncation reading 4gb BLOB with JDBC Thin 11.2
9259830 DatabaseChangeNotification fails to clean up
9260568 isValidObjectName() rejects valid object names
9341542 getmetadata().getindexinfo fails with quoted table names (ORA-947)
9341742 setBinaryStream causes dump/ORA-24804 if an unread stream is bound to a DML
9374132 Territory is allowed to be NULL resulting in ORA-12705
9394224 Poor performance for batch PreparedStatement execute with XMLType or objects.
9445675 "No more data" / ORA-3137 using end to end metrics with JDBC Thin
9468517 JDBC OCI and OCI do not behave the same on TAF failover
9491385 Memory not released using registerIndexTableOutParameter in JDBC Thin
9491954 RuntimeException "Assertion botch: negative time" from Timestamp bind
9660015 JDBC Thin hangs by waiting getnextpacket when calling stored procedure
9767715 TIMESTAMPTZ stringvalue truncates leading zeros in decimal part
9786503 Cannot use OS authentication with OracleXADataSource
Check out bug #9660015.
Hope it helps.

It could be a firewall issue. Did you check whether the connections were still active in the DB, and whether the DB could send its 10-byte ping data over these specific connections successfully?
If JDBC was able to create the PreparedStatement from the connection, the connection was fine from the client's perspective. But what sits between the DB and the client? A firewall? A router? Check their settings. A quick client-side check is sketched below.
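A minimal client-side liveness check using the standard JDBC 4.0 API (connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectionCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass")) {
            // isValid() performs a round-trip to the server and returns
            // false if it does not complete within the 5-second timeout.
            System.out.println("Connection usable: " + conn.isValid(5));
        }
    }
}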

Related

JDBC Source Connector Error: Transaction was aborted. It was wounded by a higher priority transaction due to conflict on keys in range

I am using JDBC source connector with the JDBC Driver for collecting data from Google Cloud Spanner to Kafka.
I am using "timestamp+incrementing" mode on a table. The primary key of the table includes 2 columns (order_item_id and order_id).
I used order_item_id for the incrementing column, and a column named "updated_time" for timestamp column.
When I started the connector, I sometimes got the following errors, but the data still arrived in the end.
ERROR Failed to run query for table TimestampIncrementingTableQuerier{table="order_item", query='null',
topicPrefix='test_', incrementingColumn='order_item_id', timestampColumns=[updated_time]}: {}
(io.confluent.connect.jdbc.source.JdbcSourceTask:404)
com.google.cloud.spanner.jdbc.JdbcSqlExceptionFactory$JdbcAbortedDueToConcurrentModificationException:
The transaction was aborted and could not be retried due to a concurrent modification
...
Caused by: com.google.cloud.spanner.AbortedDueToConcurrentModificationException:
The transaction was aborted and could not be retried due to a concurrent modification
...
Suppressed: com.google.cloud.spanner.connection.AbstractBaseUnitOfWork$SpannerAsyncExecutionException:
Execution failed for statement:
SELECT * FROM `order_item` WHERE `order_item`.`updated_time` < #p1 AND ((`order_item`.`updated_time` = #p2 AND `order_item`.`order_item_id` > #p3) OR `order_item`.`updated_time` > #p4) ORDER BY `order_item`.`updated_time`,`order_item`.`order_item_id` ASC
...
Caused by: com.google.cloud.spanner.AbortedException: ABORTED: io.grpc.StatusRuntimeException:
ABORTED: Transaction was aborted. It was wounded by a higher priority transaction due to conflict on keys in range [[5587892845991837697,5587892845991837702], [5587892845991837697,5587892845991837702]), column adjust in table order_item.
retry_delay {
nanos: 12974238
}
- Statement: 'SELECT * FROM `order_item` WHERE `order_item`.`updated_time` < #p1 AND ((`order_item`.`updated_time` = #p2 AND `order_item`.`order_item_id` > #p3) OR `order_item`.`updated_time` > #p4) ORDER BY `order_item`.`updated_time`,`order_item`.`order_item_id` ASC'
...
I am wondering how this error happens in my case. By the way, even with the error, the connector still collects the data in the end. Can anyone help with it? Thank you so much!
I'm not sure exactly how your entire pipeline is set up, but the error indicates that you are executing the query in a read/write transaction. Any read/write transaction on Cloud Spanner can be aborted by Cloud Spanner, and may result in the error that you are seeing.
If your pipeline is only reading from Cloud Spanner, the best thing to do is to set your JDBC connection to read-only and autocommit mode. You can do this directly in your JDBC connection URL by adding the readonly=true and autocommit=true properties to the URL.
Example:
jdbc:cloudspanner:/projects/my-project/instances/my-instance/databases/my-database;readonly=true;autocommit=true
It could also be that the framework(s) you are using change the JDBC connection after it has been opened. In that case, check whether you can change that behaviour in the framework(s). But changing the JDBC URL as in the example above may well be enough in this case.
Background information:
If the JDBC connection is opened with autocommit turned off and the connection is in read/write mode, then a read/write transaction will be started automatically when a query is executed. All subsequent queries will also use the same read/write transaction, until commit() is called on the connection. This is the least efficient way to read large amounts of data on Cloud Spanner, and should therefore be avoided whenever possible. It will also cause aborted transactions, as the read operations will take locks on the data that it is reading.
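A minimal sketch of a read-only, autocommit connection as suggested above (project, instance, database, and column types are placeholders taken from the question):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SpannerReadOnly {
    public static void main(String[] args) throws Exception {
        // readonly=true + autocommit=true: each query runs as a
        // single-use read-only transaction, which concurrent writers
        // cannot abort and which takes no locks.
        String url = "jdbc:cloudspanner:/projects/my-project/instances/"
                + "my-instance/databases/my-database"
                + ";readonly=true;autocommit=true";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT order_item_id, updated_time FROM order_item")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1) + " " + rs.getTimestamp(2));
            }
        }
    }
}

If the URL is not under your control, the same can be done programmatically with conn.setReadOnly(true) and conn.setAutoCommit(true) before the first query.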

Jmeter 3.0 Cannot create PoolableConnectionFactory (ORA-00900: invalid SQL statement)

I am using an Oracle DB from JMeter and running a SELECT statement with a very small query, and I am getting the error below:
Response message: java.sql.SQLException: Cannot create PoolableConnectionFactory (ORA-00900: invalid SQL statement)
You changed the default values, and most importantly you defined zero wait time for a connection, so a connection cannot be created.
Set Max Wait to a valid value such as 10000:
Max Wait (ms) Pool throws an error if the timeout period is exceeded in the process of trying to retrieve a connection
Also, I'm not sure about your validation query; for Oracle it should be:
Select 1 from dual
Validation Query A simple query used to determine if the database is still responding. This defaults to the 'isValid()' method of the jdbc driver, which is suitable for many databases. However some may require a different query; for example Oracle something like 'SELECT 1 FROM DUAL' could be used.
You need to remove getData from the "Validation Query" field and replace it with select 1 from dual (see the sketch below).
Also consider upgrading to JMeter 4.0 at the next available opportunity. According to JMeter Best Practices you should always use the latest available JMeter version; newer versions normally contain bug fixes, new features, and performance improvements, so you may be suffering from a bug that has already been addressed.
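JMeter's JDBC Connection Configuration is backed by Apache Commons DBCP; the two settings above map roughly onto plain DBCP2 code like this (a sketch, assuming DBCP2 and placeholder connection details):

import java.sql.Connection;
import org.apache.commons.dbcp2.BasicDataSource;

public class PoolConfig {
    public static void main(String[] args) throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // placeholder
        ds.setUsername("user");                            // placeholder
        ds.setPassword("pass");                            // placeholder
        // JMeter's "Max Wait (ms)": how long getConnection() may block
        // before throwing, instead of failing immediately at zero.
        ds.setMaxWaitMillis(10000);
        // JMeter's "Validation Query": Oracle needs DUAL because it has
        // no parameterless SELECT.
        ds.setValidationQuery("SELECT 1 FROM DUAL");
        try (Connection conn = ds.getConnection()) {
            System.out.println("Pool handed out a connection: "
                    + !conn.isClosed());
        }
    }
}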

Cannot create PoolableConnectionFactory (ORA-00936: missing expression ): JDBC connection error in Jmeter3.0

I am running a JMeter script that reads data from a database (using a JDBC Request). I get the following error when running the script in JMeter 3.0:
Cannot create PoolableConnectionFactory (ORA-00936: missing expression)
But the same script runs fine with JMeter 2.13.
Do I need to change any property values?
Just modify the Validation Query in the JDBC Connection Configuration to select 1 from DUAL, as per the documentation:
A simple query used to determine if the database is still responding. This defaults to 'SELECT 1' which is suitable for many databases. However some may require a different query; for example Oracle requires something like 'SELECT 1 FROM DUAL'. Note this validation query is used on pool creation to validate it even if "Test While Idle" suggests query would only be used on idle connections. This is DBCP behaviour.

ColdFusion cfquery returning inserted Oracle ROWID

According to the CF9 cfquery documentation, I should be able to return the Oracle ROWID in the cfquery result.
I've failed on all counts; it simply does not return any identity or generated keys.
I am using the JDBC Oracle Thin client. Can anyone point me in the right direction here?
If you were using one of the Oracle drivers that ships with ColdFusion, then you should be able to access GENERATEDKEY from the RESULT struct within the ColdFusion query object. Since you are using the JDBC Oracle Thin client driver, where you set up a data source using "Add a new data source > Other" and then enter the JDBC configuration, you don't have access to the RESULT struct described in the documentation.
I ran into the same issue when we used the MS JDBC driver with CF8. After converting to CF9 with the built-in SQL Driver, we were able to update our code to correctly reference the RESULT struct.
You will have to write your INSERT statements to also SELECT the value of ROWID, which you should be able to retrieve from the final query object.
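If plain JDBC is an option at some layer, the standard generated-keys API can hand back the ROWID directly. A sketch (table and column names are placeholders, and passing "ROWID" as the requested key column is an assumption about the Oracle driver; verify against your version):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RowidInsert {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass")) {
            // Ask the driver to return ROWID as the generated key.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO my_table (name) VALUES (?)",
                    new String[] {"ROWID"})) {
                ps.setString(1, "example");
                ps.executeUpdate();
                try (ResultSet keys = ps.getGeneratedKeys()) {
                    if (keys.next()) {
                        System.out.println("ROWID: " + keys.getString(1));
                    }
                }
            }
        }
    }
}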

JDBC connection default autoCommit behavior

I'm working with JDBC to connect to Oracle. I tested connection.setAutoCommit(false) vs connection.setAutoCommit(true) and the results were as expected.
By default a connection is supposed to behave as if setAutoCommit(true) were in effect [correct me if I'm wrong], yet none of the records are inserted until connection.commit() is called. Any advice regarding the default behaviour?
// NOTE: NUMBER is a reserved word in Oracle and must be quoted as a column name.
String insert = "INSERT INTO MONITOR (\"NUMBER\", name, value) VALUES (?,?,?)";
conn = connection; // connection details avoided
preparedStmtInsert = conn.prepareStatement(insert);
preparedStmtInsert.setInt(1, 1);          // bind all three parameters
preparedStmtInsert.setString(2, "name");  // before executing
preparedStmtInsert.setString(3, "value");
preparedStmtInsert.execute();
conn.commit();
From the Oracle JDBC documentation:
When a connection is created, it is in auto-commit mode. This means that each individual SQL statement is treated as a transaction and is automatically committed right after it is executed. (To be more precise, the default is for a SQL statement to be committed when it is completed, not when it is executed. A statement is completed when all of its result sets and update counts have been retrieved. In almost all cases, however, a statement is completed, and therefore committed, right after it is executed.)
The other thing is: you omitted the connection-creation details, so I'm just guessing. If you are using some framework, or acquiring the connection from a DataSource or connection pool, autocommit may have been turned off by that framework/pool/DataSource. The solution is to never trust default settings ;-) A quick check is sketched below.
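A quick way to settle it is to ask the connection itself right after it is handed to you (standard JDBC; connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class AutoCommitCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass")) {
            // JDBC mandates autocommit ON for a fresh connection; if this
            // prints false, a pool, framework, or driver property between
            // you and DriverManager has switched it off.
            System.out.println("autoCommit = " + conn.getAutoCommit());
        }
    }
}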
