I am using gRPC for my API development.
I have been able to create and access APIs so far.
All of a sudden, I am seeing the exception stack trace below continuously, starting about 5 seconds after the "INFO: Server started. Listening on port 42420" message is displayed.
I have deployed this project and am bringing the server up on a GCE instance. If anyone has faced this issue before, please let me know the reason and the solution.
Stack trace:
May 11, 2016 7:14:20 AM io.grpc.internal.AbstractServerStream deframeFailed
WARNING: Exception processing message
java.lang.IllegalStateException: MessageDeframer is already closed
at com.google.common.base.Preconditions.checkState(Preconditions.java:173)
at io.grpc.internal.MessageDeframer.checkNotClosed(MessageDeframer.java:222)
at io.grpc.internal.MessageDeframer.deframe(MessageDeframer.java:168)
at io.grpc.internal.AbstractStream.deframe(AbstractStream.java:283)
at io.grpc.internal.AbstractServerStream.inboundDataReceived(AbstractServerStream.java:199)
at io.grpc.netty.NettyServerStream.inboundDataReceived(NettyServerStream.java:77)
at io.grpc.netty.NettyServerHandler.onDataRead(NettyServerHandler.java:234)
at io.grpc.netty.NettyServerHandler.access$300(NettyServerHandler.java:95)
at io.grpc.netty.NettyServerHandler$FrameListener.onDataRead(NettyServerHandler.java:443)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:236)
at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onDataRead(Http2InboundFrameLogger.java:46)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:409)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:240)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:147)
at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:39)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:102)
at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:515)
at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:575)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:360)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:163)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:155)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:950)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:125)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:510)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:467)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:381)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
I was having a similar issue because in my service implementation I had something like the following:
@Override
public void myMethod(MyRequest myRequest, StreamObserver<MyResponse> responseObserver) {
    // super.myMethod(myRequest, responseObserver); // removing this line resolved my issue
    // ...
    responseObserver.onNext(response); // I had the exception here because of the super.myMethod call
    // ...
}
I thought it was good practice to call the base implementation and then do my own implementation, but it seems that was not a good idea: the generated base method simply closes the call with an UNIMPLEMENTED status, so the stream is already closed by the time onNext is called.
I hope this saves someone's time.
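In case a complete example helps, here is a minimal sketch of the working pattern, using the types from the standard gRPC hello-world example (GreeterGrpc, HelloRequest, HelloReply) purely for illustration:

import io.grpc.examples.helloworld.GreeterGrpc;
import io.grpc.examples.helloworld.HelloReply;
import io.grpc.examples.helloworld.HelloRequest;
import io.grpc.stub.StreamObserver;

public class GreeterImpl extends GreeterGrpc.GreeterImplBase {
    @Override
    public void sayHello(HelloRequest request, StreamObserver<HelloReply> responseObserver) {
        // Do NOT call super.sayHello(request, responseObserver) here; the generated
        // base method closes the call with UNIMPLEMENTED, and any later onNext()
        // would hit an already-closed stream.
        HelloReply reply = HelloReply.newBuilder()
                .setMessage("Hello " + request.getName())
                .build();
        responseObserver.onNext(reply);  // send exactly one response for a unary call
        responseObserver.onCompleted();  // then complete the call
    }
}

The key point is to build and send the response yourself and finish with onCompleted(), without ever delegating to the generated base method.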
The error looks like this issue. The user in that issue was seeing the exception because clients were accessing a service that was unimplemented, but there are other possible triggers. The fix is almost committed, and it should be included in the next release.
Related
Laravel Version: 5.7.28
PHP Version: 7.2.15
Database Driver & Version: MariaDB 10.2.23
I am struggling with a bug on my production server using Horizon.
ErrorException: Warning: PDO::prepare(): MySQL server has gone away
[internal] in unserialize
You can see a stack trace of the error here: https://sentry.io/share/issue/b105b7946b524a9e841f56f44445ea14/
As far as I can tell, this error should be caught by the Laravel framework. I'm not sure why it's not being caught and turned into a QueryException, which would then trigger a reconnection and/or kill the worker.
See: https://github.com/laravel/framework/blob/9fb420cc29a7dd5de5051f09c523ffc3ea01b969/src/Illuminate/Database/Connection.php#L663
And then: https://github.com/laravel/framework/blob/9fb420cc29a7dd5de5051f09c523ffc3ea01b969/src/Illuminate/Database/Connection.php#L735
My understanding is that any Exception should be caught and re-thrown as a QueryException, which the framework would then catch properly, triggering a reconnection to the database.
This is an occasional error, so it's difficult to reproduce; I've tried manually throwing a similar error, but it is caught and handled properly.
Any general guidance on why this error might be different in production and ideas on how I can isolate the error would be appreciated.
In case anyone else runs into this, the current theory is that Sentry is catching errors that are still being handled properly by the framework.
Essentially, the job still completes correctly, because MySQL connection errors are handled automatically by the framework. However, Sentry still catches an error in that error handling process, though the reason is currently unknown.
For reference, see this discussion on Github:
https://github.com/laravel/horizon/issues/583
As of today, JDBC MySQL stopped working and is throwing this error:
I added a timeout with stmt.setQueryTimeout(30), but it does not help.
Any ideas what happened?
Is it a Google service issue?
The issue can be tracked directly here: https://issuetracker.google.com/issues/120104426
You can star it or leave a comment; maybe it will get priority status.
Update: The Google team started a server rollback, but it's unclear how much time it will take (see comment #28).
Update 2: It's working again. Tested with several queries exceeding 20k lines.
I got the same error just now; the only solution is to disable V8, as described in some of the comments here: https://issuetracker.google.com/issues/120104426
I'm currently trying to deploy Eremetic (version 0.28.0) on top of Marathon using the configuration provided as an example. I actually have been able to deploy it once, but suddenly, after trying to redeploy it, the framework stays inactive.
By inspecting the logs I noticed a constant attempt to connect to some service that apparently never succeeds because of some authentication problem.
2017/08/14 12:30:45 Connected to [REDACTED_MESOS_MASTER_ADDRESS]
2017/08/14 12:30:45 Authentication failed: EOF
It looks like the service returning the error is ZooKeeper, and more precisely the error can be traced back to this line in the Go ZooKeeper library. ZooKeeper itself, however, seems to work: I've tried querying it directly with zkCli and running a small Spark job (where the Mesos master is given as a zk:// URL), and everything works.
Unfortunately, I'm not able to diagnose the problem further. What could it be?
It turned out to be a configuration problem: the master URL was simply wrong, and this is just how the error was reported.
I get this error sometimes when trying to save things to Parse or to fetch data from it.
This is not constant; it appears once in a while, causing the operation to fail.
I have contacted Parse for that. Here is their answer:
Starting on 4/28/2016, apps that have not migrated their database may see a "428" error code if the request cannot be handled by the remaining shared pool of resources. If you see this error in your logs, we highly recommend migrating the database for your app without delay.
This means that, starting on that date, all apps are on low priority except those that have started the DB migration. So migrating the database should resolve it.
I am using the Spring JDBC template and a BoneCP Connection Pool. When I purposely set the JDBC URL to an invalid value (to test how my system works on failed database connectivity), it throws an UnknownHostException on server start-up, but the server continues to start. When I submit a request, the jdbcTemplate.query(sql) method hangs. What is a good way to handle this so the system doesn't hang in jdbcTemplate.query(sql)?
This scenario is possible if the database is down due to network connectivity issues. I tried to play around with the idleMaxAgeInSeconds and maxConnectionAgeInSeconds values; I set them both to 10, but the code still hangs in jdbcTemplate.query().
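For context, here is a minimal sketch of roughly what my setup looks like (the class name, URL, and credentials are illustrative placeholders, not my real configuration):

import com.jolbox.bonecp.BoneCPDataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class DataAccessConfig {
    public JdbcTemplate buildTemplate() {
        BoneCPDataSource ds = new BoneCPDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");
        ds.setJdbcUrl("jdbc:mysql://db.example.com:3306/mydb"); // intentionally wrong host while testing failure handling
        ds.setUsername("user");
        ds.setPassword("secret");
        ds.setIdleMaxAgeInSeconds(10);      // values I experimented with
        ds.setMaxConnectionAgeInSeconds(10);
        ds.setConnectionTimeoutInMs(5000);  // how long getConnection() may block before giving up

        JdbcTemplate jdbcTemplate = new JdbcTemplate(ds);
        jdbcTemplate.setQueryTimeout(10);   // seconds; applies to statements once a connection is obtained
        return jdbcTemplate;
    }
}

Even with these pool settings, the call to jdbcTemplate.query(sql) still blocks when the host is unreachable.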
I'm not sure I understand your question, but it seems to me that either you don't want the server to start if the exception occurs, or you want to set a static variable that checks whether the database is up before trying to contact it (so there is no UnknownHostException when initializing JDBC).
Maybe this points you towards your answer:
http://www.cubrid.org/blog/dev-platform/understanding-jdbc-internals-and-timeout-configuration/
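The article above is mostly about driver-level timeouts. As a rough sketch (assuming MySQL Connector/J, with illustrative host and values), you can set connectTimeout and socketTimeout on the JDBC URL together with a statement query timeout, so an unreachable server fails fast instead of hanging:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TimeoutDemo {
    public static void main(String[] args) {
        // connectTimeout: max time to establish the TCP connection (ms)
        // socketTimeout: max time to wait for data on an established connection (ms)
        String url = "jdbc:mysql://db.example.com:3306/mydb"
                + "?connectTimeout=5000&socketTimeout=30000";
        try (Connection conn = DriverManager.getConnection(url, "user", "secret");
             Statement stmt = conn.createStatement()) {
            stmt.setQueryTimeout(30); // statement-level timeout in seconds, enforced by the driver
            try (ResultSet rs = stmt.executeQuery("SELECT 1")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        } catch (SQLException e) {
            // With the timeouts above, an unreachable host surfaces here quickly
            // instead of blocking the calling thread indefinitely.
            e.printStackTrace();
        }
    }
}

The same URL parameters can go on the pool's JDBC URL as well, since BoneCP passes them through to the driver.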
good luck!