Weblogic Error - Method not supported : Statement.cancel - jdbc

I am running an application on WebLogic 9.2 MP3 and am currently having a problem with the connection pool.
ERROR - UserBean retrieving user record. weblogic.jdbc.extensions.PoolLimitSQLException:
weblogic.common.resourcepool.ResourceLimitException: No resources currently available in pool MyApp Data Source to allocate to applications, please increase the size of the pool and retry..
I also keep getting the error message below, saying "Method not supported : Statement.cancel()", which I think is the cause of the error above.
<Error> <JDBC> <BEA-001131> <Received an exception when closing a cached statement for the pool "MyApp Data Source": java.sql.SQLException: Method not supported : Statement.cancel()..>
I went through the app source code, and this method doesn't seem to be used by the app at all, so I thought it might be something to do with WebLogic itself.
Does anyone have any idea how to fix this error?

Firstly, I'd make sure that I was closing every java.sql.Connection variable.
e.g.
final Connection connection = dataSource.getConnection();
// do your database work here
if (connection != null) {
    connection.close();
}
You could probably make it even tighter by putting connection.close() into the finally block of a try/catch, for example:
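A minimal sketch of that shape, assuming a javax.sql.DataSource obtained from the pool; the query, table and column names are placeholders, and since WebLogic 9.2 predates try-with-resources the cleanup lives in a finally block:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public final class UserLookup {

    // Placeholder query, table and column names, just to illustrate the close-in-finally pattern.
    public String findUserId(DataSource dataSource, String userName) throws SQLException {
        Connection connection = null;
        PreparedStatement statement = null;
        ResultSet resultSet = null;
        try {
            connection = dataSource.getConnection();
            statement = connection.prepareStatement("SELECT id FROM app_user WHERE name = ?");
            statement.setString(1, userName);
            resultSet = statement.executeQuery();
            return resultSet.next() ? resultSet.getString("id") : null;
        } finally {
            // Close in reverse order; each close is guarded so one failure does not
            // prevent the connection from being returned to the pool.
            if (resultSet != null) { try { resultSet.close(); } catch (SQLException ignored) { } }
            if (statement != null) { try { statement.close(); } catch (SQLException ignored) { } }
            if (connection != null) { try { connection.close(); } catch (SQLException ignored) { } }
        }
    }
}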

Related

Getting IllegalStateException when starting the grpc server

I am using grpc for my API development.
I have been able to create and access APIs so far.
All of a sudden I am seeing this exception stack trace continuously, starting about 5 seconds after the "INFO: Server started. Listening on port 42420" message is displayed.
I have deployed this project and am bringing the server up on a GCE instance. Please let me know the reason for and the solution to this issue if anyone has faced it before.
Stack trace:
May 11, 2016 7:14:20 AM io.grpc.internal.AbstractServerStream deframeFailed
WARNING: Exception processing message
java.lang.IllegalStateException: MessageDeframer is already closed
at com.google.common.base.Preconditions.checkState(Preconditions.java:173)
at io.grpc.internal.MessageDeframer.checkNotClosed(MessageDeframer.java:222)
at io.grpc.internal.MessageDeframer.deframe(MessageDeframer.java:168)
at io.grpc.internal.AbstractStream.deframe(AbstractStream.java:283)
at io.grpc.internal.AbstractServerStream.inboundDataReceived(AbstractServerStream.java:199)
at io.grpc.netty.NettyServerStream.inboundDataReceived(NettyServerStream.java:77)
at io.grpc.netty.NettyServerHandler.onDataRead(NettyServerHandler.java:234)
at io.grpc.netty.NettyServerHandler.access$300(NettyServerHandler.java:95)
at io.grpc.netty.NettyServerHandler$FrameListener.onDataRead(NettyServerHandler.java:443)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:236)
at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onDataRead(Http2InboundFrameLogger.java:46)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:409)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:240)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:147)
at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:39)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:102)
at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:515)
at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:575)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:360)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:163)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:155)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:950)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:125)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:510)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:467)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:381)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
I was having a similar issue because my service implementation had something like the code below:
@Override
public void myMethod(MyRequest myRequest, StreamObserver<MyResponse> responseObserver) { // request/response types are placeholders
    // super.myMethod(myRequest, responseObserver); // removing this line resolved my issue
    ...
    responseObserver.onNext(response); // I had the exception here because of the super.myMethod call
    ...
}
I thought it was good practice to call the base implementation and then do my own, but it seems that was not a good idea: the generated base method terminates the call as unimplemented, so the later onNext() fails.
I hope this saves someone's time.
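For reference, a minimal sketch of the working shape, assuming a recent grpc-java where the generated base class is GreeterGrpc.GreeterImplBase; the Greeter/HelloRequest/HelloReply names come from the standard gRPC hello-world example, not from the question:

import io.grpc.examples.helloworld.GreeterGrpc;
import io.grpc.examples.helloworld.HelloReply;
import io.grpc.examples.helloworld.HelloRequest;
import io.grpc.stub.StreamObserver;

public class GreeterImpl extends GreeterGrpc.GreeterImplBase {
    @Override
    public void sayHello(HelloRequest request, StreamObserver<HelloReply> responseObserver) {
        // Do not call super.sayHello(...): the generated base implementation terminates
        // the call with UNIMPLEMENTED, and any onNext() after that fails with
        // "MessageDeframer is already closed".
        HelloReply reply = HelloReply.newBuilder()
                .setMessage("Hello " + request.getName())
                .build();
        responseObserver.onNext(reply);
        responseObserver.onCompleted();
    }
}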
The error looks like this issue. The user in that issue was hitting the exception because clients were calling a service method that was unimplemented, but there are other possible triggers. The fix is almost committed and should land in the next release.

java mongodb driver how do you catch exceptions?

I want to be able to detect whether a Mongo server is available from the Java driver, for the purpose of reacting to abnormal events as one would in JDBC land. It all works fine when the server is up, but I am struggling to understand why it is so difficult to detect errors. I have a feeling it's because the Mongo client runs in a different thread and doesn't re-throw to me, or something like that.
try {
    MongoClient mongoClient = new MongoClient("localhost", 27017);
    MongoDatabase db = mongoClient.getDatabase("mydb");
    // if db is down or there is an error getting the people collection, handle it in the catch block
    MongoCollection<Document> people = db.getCollection("people");
} catch (Exception e) {
    // handle server down or failed query here
}
The result is
INFO: Exception in monitor thread while connecting to server localhost:27017
The resulting stack trace contains a few different exceptions, which I have tried to catch, but my catch blocks still don't do anything.
com.mongodb.MongoSocketOpenException: Exception opening socket
Caused by: java.net.ConnectException: Connection refused
I am using the Java MongoDB driver 3.0.4. Most posts I read are based on an older API, with hacks like MongoClient.getDatabaseNames(), which throws a MongoException on error, but this is deprecated now and replaced by MongoClient.listDatabaseNames(), which doesn't have the same error-throwing semantics.
Is there a way to just execute a Mongo query from the Java driver in a try/catch block and actually have the exception caught?
In the new API, MongoException is a RuntimeException. You can either catch the generic MongoException or, I believe, listDatabaseNames() would ultimately end up throwing a MongoCommandException.
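A minimal sketch of that idea, assuming the 3.x sync driver: the MongoClient constructor does not connect eagerly, so the failure only surfaces (as a MongoException subclass such as MongoTimeoutException) once an operation forces a round trip to the server:

import com.mongodb.MongoClient;
import com.mongodb.MongoException;

public class MongoPing {
    public static void main(String[] args) {
        MongoClient mongoClient = new MongoClient("localhost", 27017);
        try {
            // Forces a round trip to the server; throws after the driver's timeout
            // if no server is reachable.
            mongoClient.listDatabaseNames().first();
            System.out.println("MongoDB is up");
        } catch (MongoException e) {
            System.out.println("MongoDB is down: " + e.getMessage());
        } finally {
            mongoClient.close();
        }
    }
}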
You can redirect System.err to a ByteArrayOutputStream buffer; anything the driver prints there, including exception stack traces, is then collected in the buffer where you can inspect it.
See an answer to a similar problem at: https://stackoverflow.com/a/47699292/7388679
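A minimal sketch of that redirection idea (whether the driver's monitor-thread message actually reaches System.err rather than a java.util.logging handler depends on your logging setup, so treat this as a fallback rather than a primary mechanism):

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class StderrCapture {
    public static void main(String[] args) {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        PrintStream originalErr = System.err;
        System.setErr(new PrintStream(buffer));
        try {
            // ... run the code whose error output you want to inspect ...
        } finally {
            System.setErr(originalErr); // always restore the original stream
        }
        if (buffer.toString().contains("Exception opening socket")) {
            System.out.println("MongoDB appears to be unreachable");
        }
    }
}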

How do I prevent a database call from hanging in jdbcTemplate.query when the database is not available?

I am using the Spring JDBC template and a BoneCP Connection Pool. When I purposely set the JDBC URL to an invalid value (to test how my system works on failed database connectivity), it throws an UnknownHostException on server start-up, but the server continues to start. When I submit a request, the jdbcTemplate.query(sql) method hangs. What is a good way to handle this so the system doesn't hang in jdbcTemplate.query(sql)?
This scenario is possible if the database is down due to network connectivity issues. I tried to play around with the idleMaxAgeInSeconds and maxConnectionAgeInSeconds values; I set them both to 10, but the code still hangs in jdbcTemplate.query().
I'm not sure I understand your question, but it seems to me that either you don't want the server to start if the exception occurs, or you want a flag that checks whether the database is up before trying to contact it (so there is no UnknownHostException when initializing JDBC).
Maybe this points you towards your answer:
http://www.cubrid.org/blog/dev-platform/understanding-jdbc-internals-and-timeout-configuration/
good luck!
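Along the lines of that article, a minimal sketch of capping how long a query can block, assuming Spring's JdbcTemplate; the DataSource wiring, table name and 5-second value are illustrative, not from the question:

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class UserDao {
    private final JdbcTemplate jdbcTemplate;

    public UserDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
        // Applied to each Statement via Statement.setQueryTimeout(); the JDBC driver has to
        // honour it, and it does not cover the initial connection attempt, so pair it with
        // a connect timeout on the pool/driver as well.
        this.jdbcTemplate.setQueryTimeout(5); // seconds
    }

    public int countUsers() {
        return jdbcTemplate.queryForObject("SELECT COUNT(*) FROM users", Integer.class);
    }
}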

SQLException like archiver error. Connect internal only, until freed

Can anyone give me an idea regarding the error below?
2011-02-11 05:48:42,858
-[c=STATS_VITALS] Error running system monitor for connectionCloseTime:
java.sql.SQLException: ORA-00257: archiver error.
Connect internal only, until freed.
java.sql.SQLException: ORA-00257: archiver error.
Connect internal only, until freed.
This usually occurs when the system encounters an error while trying to archive a redo log. Was this working previously and it just failed now for the first time, or is this a new installation? Do you know where your logs are being archived? If so, check to see if that location is out of space as a first step. Once you have more information we might be able to help you a little more effectively.

JDBC URL for Oracle XA client

Using the JDBC driver oracle.jdbc.xa.client.OracleXADataSource, what is the correct format of the JDBC URL? The thin format of
jdbc:oracle:thin:@host:port:sid
does not work. WebSphere is reporting that the given url (which is otherwise correct) is invalid.
The test connection operation failed for data source Oracle MyDB (XA) on
server nodeagent at node MY_node with the following exception:
java.sql.SQLException: Invalid Oracle URL specifiedDSRA0010E: SQL State = 99999,
Error Code = 17,067. View JVM logs for further details.
There is nothing in the JVM logs.
Whether you use an XA driver or not, the JDBC connection string is the same (and the format in your question is correct).
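For what it's worth, a minimal sketch of using the XA data source directly (outside WebSphere), just to show that the thin URL format is unchanged; the host, port, SID and credentials are placeholders:

import javax.sql.XAConnection;
import oracle.jdbc.xa.client.OracleXADataSource;

public class XaUrlCheck {
    public static void main(String[] args) throws Exception {
        OracleXADataSource xaDataSource = new OracleXADataSource();
        xaDataSource.setURL("jdbc:oracle:thin:@dbhost:1521:ORCL"); // same format as the non-XA thin driver
        xaDataSource.setUser("scott");
        xaDataSource.setPassword("tiger");

        // Obtaining an XAConnection verifies the URL, user and password end to end.
        XAConnection xaConnection = xaDataSource.getXAConnection();
        System.out.println("Connected: " + !xaConnection.getConnection().isClosed());
        xaConnection.close();
    }
}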
For me, the issue was resolved by adding an alias name, username and password under JAAS - J2C authentication data, and also selecting this entry as the component-managed authentication alias.
In case this happens to anyone else: the problem went away after restarting WebSphere.
In my case, the problem went away when I changed the authentication property of the JDBC resource reference from Authentication=Application to Authentication=Container.
I had the same issue. I don't know about simple deployments, but on a two-node cluster I restarted the first node, and the connection started working on it (not on the second). I restarted the second node, and the connection started working there too.
So just restart the nodes (I also restarted the nodeAgents, but I don't know if that's necessary).
If you are doing this using the wsadmin command, then you need to stop the manager, stop the node, start the manager, sync the node and then start the node (I mean a full sync). Hopefully this will resolve the issue. I don't know why, but this resolved my issue.
