I am using wso2esb + wso2mb + websockets to transform JMS messages from wso2mb into websocket messages. During my performance tests (I tried to send 7k messages from wso2mb to a websocket) I got the following error message:
java.util.ConcurrentModificationException
at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901)
at java.util.ArrayList$Itr.next(ArrayList.java:851)
at org.wso2.carbon.inbound.endpoint.protocol.websocket.management.WebsocketSubscriberPathManager.broadcastOnSubscriberPath(WebsocketSubscriberPathManager.java:98)
at org.wso2.carbon.inbound.endpoint.protocol.websocket.InboundWebsocketResponseSender.handleSendBack(InboundWebsocketResponseSender.java:117)
at org.wso2.carbon.inbound.endpoint.protocol.websocket.InboundWebsocketResponseSender.sendBack(InboundWebsocketResponseSender.java:86)
at org.apache.synapse.core.axis2.Axis2Sender.sendBack(Axis2Sender.java:214)
at org.apache.synapse.mediators.builtin.RespondMediator.mediate(RespondMediator.java:35)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:97)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:59)
at org.apache.synapse.config.xml.AnonymousListMediator.mediate(AnonymousListMediator.java:37)
at org.apache.synapse.config.xml.SwitchCase.mediate(SwitchCase.java:69)
at org.apache.synapse.mediators.filters.SwitchMediator.mediate(SwitchMediator.java:119)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:97)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:59)
at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:158)
at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:993)
at org.wso2.carbon.inbound.endpoint.protocol.websocket.InboundWebsocketSourceHandler.injectToSequence(InboundWebsocketSourceHandler.java:461)
at org.wso2.carbon.inbound.endpoint.protocol.websocket.InboundWebsocketSourceHandler.handleWebsocketPassthroughTextFrame(InboundWebsocketSourceHandler.java:346)
at org.wso2.carbon.inbound.endpoint.protocol.websocket.InboundWebsocketSourceHandler.handleWebSocketFrame(InboundWebsocketSourceHandler.java:242)
at org.wso2.carbon.inbound.endpoint.protocol.websocket.InboundWebsocketSourceHandler.channelRead(InboundWebsocketSourceHandler.java:132)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
Do you know what could be the reason?
Let me know if you would like to see my wso2esb project.
Thank you very much!
When a new client connects, it is added to a map. If you broadcast during a load test, the same map is used both for adding connections and for broadcasting.
This ends up in a ConcurrentModificationException. This is a known behavior, and we recommend that you stop broadcasting while you are running
performance tests. Instead, write only to one channel during the performance tests; that would solve this issue. You can find the code at [1] for further analysis.
[1] https://github.com/wso2/carbon-mediation/blob/master/components/inbound-endpoints/org.wso2.carbon.inbound.endpoint/src/main/java/org/wso2/carbon/inbound/endpoint/protocol/websocket/management/WebsocketSubscriberPathManager.java#L98
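To make the failure mode concrete, here is a minimal standalone sketch (not the WSO2 code itself; the names are made up for illustration). It iterates a plain ArrayList of subscriber channels while another thread keeps adding to it, which typically reproduces the ConcurrentModificationException, and then shows one common way to avoid it by broadcasting over a snapshot-style CopyOnWriteArrayList:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class BroadcastCmeDemo {

    public static void main(String[] args) throws InterruptedException {
        // Plain ArrayList shared by the "connect" thread and the "broadcast" loop.
        List<String> subscribers = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) {
            subscribers.add("channel-" + i);
        }

        // Simulates new clients being registered during the load test.
        Thread connector = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                subscribers.add("new-channel-" + i);
            }
        });
        connector.start();

        try {
            // "Broadcast": iterate the same list while the other thread mutates it.
            for (String channel : subscribers) {
                // send a frame to channel ...
            }
        } catch (RuntimeException e) {
            // Typically a java.util.ConcurrentModificationException.
            System.out.println("Reproduced: " + e);
        }
        connector.join();

        // One way out: CopyOnWriteArrayList gives every iteration a consistent
        // snapshot, so broadcasting never observes a half-finished add.
        List<String> safeSubscribers = new CopyOnWriteArrayList<>(subscribers);
        for (String channel : safeSubscribers) {
            // send a frame to channel ...
        }
    }
}
```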
During some load tests, under some circumstances I get:
session with status Status{code=CANCELLED, description=Failed to read message., cause=io.grpc.StatusRuntimeException: INTERNAL: Invalid protobuf byte sequence
at io.grpc.Status.asRuntimeException(Status.java:526)
at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parse(ProtoLiteUtils.java:218)
at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parse(ProtoLiteUtils.java:118)
at io.grpc.MethodDescriptor.parseResponse(MethodDescriptor.java:284)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1MessagesAvailable.runInternal(ClientCallImpl.java:661)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1MessagesAvailable.runInContext(ClientCallImpl.java:646)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).
at com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:125)
at com.google.protobuf.CodedInputStream$ArrayDecoder.readTag(CodedInputStream.java:633)
at c.n.r.m.c.v1.CMM.<init>(CMM.java:45)
at c.n.r.m.c.v1.CMM$1.parsePartialFrom(CMM.java:974)
at c.n.r.m.c.v1.CMM$1.parsePartialFrom(CMM.java:968)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:86)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parseFrom(ProtoLiteUtils.java:223)
at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parse(ProtoLiteUtils.java:215)
... 9 more
}.
The client and server are Java apps. Both have the same Maven dependencies, including the protos.
Any idea how I can debug this? The message is printed in the client stream's onError, so I assume it is the server that cannot decode the gRPC message.
A similar issue occurs if the server uses a Python gRPC implementation.
I cannot put a breakpoint in c.n.r.m.c.v1.CMM$1.parsePartialFrom(CMM.java:974) yet; some challenges with the IDE.
Protocol message contained an invalid tag (zero).
This means the received message was invalid. This normally happens if the message is corrupted or it wasn't a protobuf message at all. Normal schema evolution does not trigger this.
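You can convince yourself that this comes from the bytes on the wire rather than from the schema: feeding any generated message parser a buffer whose first byte is zero produces exactly this error, because a zero varint decodes to field tag 0, which is never valid. A minimal sketch using the well-known com.google.protobuf.Any type (the payload here is just made-up garbage, not your CMM message):

```java
import com.google.protobuf.Any;
import com.google.protobuf.InvalidProtocolBufferException;

public class InvalidTagDemo {
    public static void main(String[] args) {
        // Not a valid protobuf message: the first byte is read as tag 0.
        byte[] garbage = new byte[] {0x00, 0x01, 0x02, 0x03};
        try {
            Any.parseFrom(garbage);
        } catch (InvalidProtocolBufferException e) {
            // Prints: Protocol message contained an invalid tag (zero).
            System.out.println(e.getMessage());
        }
    }
}
```

So the thing to chase is how the bytes get corrupted under load, not the .proto definitions themselves.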
Kafka Streams 2.1.0 on MS Windows here.
I'm on macOS so I can't work on it myself, but people on MS Windows I worked with reported a java.nio.file.AccessDeniedException every time they started a Kafka Streams application that uses KafkaStreams.cleanUp (except the first time).
In Deleting topics throws exception #196 it was asked why a Kafka Streams application would fail with java.nio.file.AccessDeniedException when running EmbeddedSingleNodeKafkaCluster#deleteTopicsAndWait.
Caused by: java.nio.file.AccessDeniedException: C:\Users\gwade\AppData\Local\Temp\junit6747789160683566966\junit5490786451417386230\topic-0 -> C:\Users\gwade\AppData\Local\Temp\junit6747789160683566966\junit5490786451417386230\topic-0.a3c80cfca5e740bd8c1be434d817af2c-delete
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
at java.nio.file.Files.move(Files.java:1395)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:809)
at kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:728)
at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:726)
at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:726)
at kafka.log.Log.maybeHandleIOException(Log.scala:1927)
at kafka.log.Log.renameDir(Log.scala:726)
at kafka.log.LogManager.asyncDelete(LogManager.scala:842)
at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:353)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:259)
at kafka.cluster.Partition.delete(Partition.scala:347)
at kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:350)
at kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:380)
at kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:378)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:378)
at kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:200)
at kafka.server.KafkaApis.handle(KafkaApis.scala:111)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
at java.lang.Thread.run(Thread.java:748)
Any idea what the root cause is?
A workaround was to shut down ZooKeeper and remove /tmp/zookeeper (which simply wipes the entire state of the Kafka cluster, including the topics to be deleted and their local directories on the brokers).
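For what it's worth, the failing frame (Utils.atomicMoveWithFallback, i.e. Files.move) points at a Windows-specific limitation rather than at Kafka logic: Windows refuses to rename a file while another handle keeps it open without delete sharing, and the broker still holds its log segments open when it renames the topic directory for async deletion. A minimal standalone sketch of that java.nio behavior (file names made up, not Kafka code):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class WindowsRenameDemo {
    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("segment-", ".log");
        Path dst = src.resolveSibling(src.getFileName() + ".deleted");

        // Keep the file open while attempting the rename, the way a broker
        // still holds handles to log segments of a topic being deleted.
        try (RandomAccessFile handle = new RandomAccessFile(src.toFile(), "rw")) {
            // On Windows this typically fails with AccessDeniedException,
            // because the open handle blocks the rename.
            Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
            System.out.println("rename succeeded on this platform");
        } catch (IOException e) {
            System.out.println("rename failed: " + e);
        } finally {
            Files.deleteIfExists(src);
            Files.deleteIfExists(dst);
        }
    }
}
```

On Linux/macOS the same program renames the file without complaint, which is why the problem only shows up for the people on MS Windows.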
It compiles with no problem, but after I run it:
26183 [Thread-34] ERROR backtype.storm.util - Async loop died!
java.lang.UnsatisfiedLinkError: org.zeromq.ZMQ$Socket.finalize()V
at org.zeromq.ZMQ$Socket.finalize(Native Method)
at org.zeromq.ZMQ$Socket.close(ZMQ.java:339)
at storm.starter.spout.RandomSentenceSpout.nextTuple(RandomSentenceSpout.java:56)
at backtype.storm.daemon.executor$fn__3985$fn__3997$fn__4026.invoke(executor.clj:502)
at backtype.storm.util$async_loop$fn__465.invoke(util.clj:377)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:724)
26185 [Thread-34] ERROR backtype.storm.daemon.executor -
java.lang.UnsatisfiedLinkError: org.zeromq.ZMQ$Socket.finalize()V
at org.zeromq.ZMQ$Socket.finalize(Native Method)
at org.zeromq.ZMQ$Socket.close(ZMQ.java:339)
at storm.starter.spout.RandomSentenceSpout.nextTuple(RandomSentenceSpout.java:56)
at backtype.storm.daemon.executor$fn__3985$fn__3997$fn__4026.invoke(executor.clj:502)
at backtype.storm.util$async_loop$fn__465.invoke(util.clj:377)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:724)
Storm recommends using exactly version 2.1.7 of ZeroMQ.
https://github.com/xumingming/storm-wiki/blob/master/Installing-native-dependencies.md
Other versions are known to cause issues, as they contain some serious bugs. Which version of ZeroMQ are you using?
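If you are not sure which native libzmq your jzmq binding actually loads at runtime, you can print it; a small sketch, assuming the static version getters that org.zeromq.ZMQ exposes in the jzmq releases I have seen:

```java
import org.zeromq.ZMQ;

public class ZmqVersionCheck {
    public static void main(String[] args) {
        // Prints the version of the native libzmq that the jzmq JNI binding loaded.
        System.out.printf("libzmq version: %d.%d.%d%n",
                ZMQ.getMajorVersion(),
                ZMQ.getMinorVersion(),
                ZMQ.getPatchVersion());
        // Storm's install docs recommend that this prints 2.1.7.
    }
}
```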
It seems the OP is long gone, but I'm leaving this answer in case anyone else has this problem (this page had a high page rank when I googled for the same solution).
There's an issue with some versions of ZMQ (specifically the Java wrapper, jzmq) that throws UnsatisfiedLinkError when you try to close a socket, which is what Storm is doing in this case.
More info here:
https://github.com/zeromq/jzmq/issues/237
The obvious solution would be to upgrade your jzmq version (or, in this case, your Storm version).
Cheers.
I am running a JMeter script in version 2.8. After running the script I get the error below in the View Results Tree, but it is not tied to any specific request; it appears for random requests. On the first run I got the error for one PNG request; in the next run the same PNG request passed successfully and the error appeared for a different request.
Can someone please help me resolve this issue?
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:168)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
at sun.net.www.MeteredStream.read(MeteredStream.java:116)
at java.io.FilterInputStream.read(FilterInputStream.java:116)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:2676)
at org.apache.commons.io.input.ProxyInputStream.read(ProxyInputStream.java:99)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
at java.io.FilterInputStream.read(FilterInputStream.java:90)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.readResponse(HTTPSamplerBase.java:1620)
at org.apache.jmeter.protocol.http.sampler.HTTPAbstractImpl.readResponse(HTTPAbstractImpl.java:236)
at org.apache.jmeter.protocol.http.sampler.HTTPJavaImpl.readResponse(HTTPJavaImpl.java:282)
at org.apache.jmeter.protocol.http.sampler.HTTPJavaImpl.sample(HTTPJavaImpl.java:512)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:62)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1054)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1043)
at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:416)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:271)
at java.lang.Thread.run(Thread.java:662)
This reflects a server-side issue, as you are getting a connection reset error.
If you are doing a real load test, run it in non-GUI mode; GUI mode is meant for scripting only.
I'm running a Coherence cache server and it throws the following error after showing the member set and the member list info, and I'm not sure what's up. :(
Here is the exception that I'm getting.
Stopping cluster due to unhandled exception: com.tangosol.net.messaging.ConnectionException: Unable to refresh sockets: [InboundUnicastUdpSocket{State=STATE_OPEN, address:port=191.193.1.127:8088}, MulticastUdpSocket{State=STATE_OPEN, address:port=196.194.184.13:50110, InterfaceAddress=175.143.1.127, TimeToLive=12}, TcpSocketAccepter{State=STATE_OPEN, ServerSocket=191.193.1.127:8088}]; last failed socket: MulticastUdpSocket{State=STATE_OPEN, address:port=172.194.144.93:50110, InterfaceAddress=191.193.1.127, TimeToLive=12}
at com.tangosol.coherence.component.net.Cluster$SocketManager.refreshSockets(Cluster.CDB:91)
at com.tangosol.coherence.component.net.Cluster$SocketManager$MulticastUdpSocket.onInterruptedIOException(Cluster.CDB:9)
at com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:33)
at com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
at com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Receive timed out
at java.net.PlainDatagramSocketImpl.receive0(Native Method)
at java.net.PlainDatagramSocketImpl.receive(Unknown Source)
at java.net.DatagramSocket.receive(Unknown Source)
at com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:20)
at com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
at com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Unknown Source)
What could be the cause? I'm also interested to learn whether there are tools or techniques with which Coherence errors can be investigated and tackled. Please share.
-Thanks in advance,
Rose
That seems to be either a problem with your network/firewall setup (see http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/tune_multigramtest.htm) or with the JDK version (some versions behave badly with this specific network usage; one that works fine is 1.6.0_22).
As for general troubleshooting aspects see:
http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/tune_datagramtest.htm#CIHFAHFB
http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/tune_perftune.htm
http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/appendix_errormsgs.htm#sthref979
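Since the last failed socket in your log is the MulticastUdpSocket, the first thing worth confirming is that multicast traffic actually flows between the cluster machines on the group/port Coherence is using. The datagram/multicast test utilities linked above are the proper tools; as a rough standalone check you can also use plain java.net, for example (the group address and port below are placeholders; substitute the values from your cluster configuration):

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.nio.charset.StandardCharsets;

public class MulticastCheck {
    // Placeholders: use the cluster address and port from your Coherence config.
    private static final String GROUP = "239.1.2.3";
    private static final int PORT = 50110;

    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName(GROUP);
        try (MulticastSocket socket = new MulticastSocket(PORT)) {
            socket.joinGroup(group);
            socket.setSoTimeout(10_000);

            // Send a probe to the group; a second machine running this class
            // (or this one, if multicast loopback is enabled) should see it.
            byte[] payload = "coherence-multicast-probe".getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(payload, payload.length, group, PORT));

            byte[] buffer = new byte[1024];
            DatagramPacket received = new DatagramPacket(buffer, buffer.length);
            // Times out with SocketTimeoutException if multicast is blocked.
            socket.receive(received);
            System.out.println("received: "
                    + new String(received.getData(), 0, received.getLength(), StandardCharsets.UTF_8));

            socket.leaveGroup(group);
        }
    }
}
```

If this times out between your machines, look at switch/firewall multicast settings before anything Coherence-specific.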