I'm running a Coherence cache server, and after it prints the member set and member list info it throws the following error. I'm not sure what's wrong. :(
Here is the exception that I'm getting.
Stopping cluster due to unhandled exception: com.tangosol.net.messaging.ConnectionException: Unable to refresh sockets: [InboundUnicastUdpSocket{State=STATE_OPEN, address:port=191.193.1.127:8088}, MulticastUdpSocket{State=STATE_OPEN, address:port=196.194.184.13:50110, InterfaceAddress=175.143.1.127, TimeToLive=12}, TcpSocketAccepter{State=STATE_OPEN, ServerSocket=191.193.1.127:8088}]; last failed socket: MulticastUdpSocket{State=STATE_OPEN, address:port=172.194.144.93:50110, InterfaceAddress=191.193.1.127, TimeToLive=12}
at com.tangosol.coherence.component.net.Cluster$SocketManager.refreshSockets(Cluster.CDB:91)
at com.tangosol.coherence.component.net.Cluster$SocketManager$MulticastUdpSocket.onInterruptedIOException(Cluster.CDB:9)
at com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:33)
at com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
at com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Receive timed out
at java.net.PlainDatagramSocketImpl.receive0(Native Method)
at java.net.PlainDatagramSocketImpl.receive(Unknown Source)
at java.net.DatagramSocket.receive(Unknown Source)
at com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:20)
at com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
at com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Unknown Source)
What could be the cause? I'd also like to learn whether there are tools and techniques by which Coherence errors can be investigated and tackled. Please share.
-Thanks in advance,
Rose
That seems to be either a problem with your network/firewall setup (see http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/tune_multigramtest.htm) or with the JDK version (some versions behave badly in this specific network usage; one known to work fine is 1.6.0_22).
As for general troubleshooting aspects, see the links below; a minimal multicast probe is also sketched after them:
http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/tune_datagramtest.htm#CIHFAHFB
http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/tune_perftune.htm
http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/appendix_errormsgs.htm#sthref979
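If you want to confirm quickly whether multicast traffic actually flows between your hosts, a bare-bones JDK-only probe like the following can help. This is a minimal sketch: the group 239.1.1.1 and port 50110 stand in for your cluster's configured multicast address and port. Coherence also ships its own multicast test utility (com.tangosol.net.MulticastTest), which the first link above describes. Run the probe on two hosts; each should see the other's packet, and a SocketTimeoutException points to the same network/firewall problem as the stack trace above.

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

// Minimal multicast sanity check. The group address and port below are
// placeholders; substitute your cluster's configured multicast settings.
public class MulticastProbe {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.1.1");
        int port = 50110;
        try (MulticastSocket socket = new MulticastSocket(port)) {
            socket.joinGroup(group);
            socket.setSoTimeout(10000); // give up after 10s, like the timeout above

            byte[] msg = "probe".getBytes("UTF-8");
            socket.send(new DatagramPacket(msg, msg.length, group, port));

            byte[] buf = new byte[1024];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); // SocketTimeoutException here = multicast blocked
            System.out.println("Received \"" + new String(packet.getData(), 0, packet.getLength(), "UTF-8")
                    + "\" from " + packet.getAddress());
        }
    }
}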
Related
I am trying to set up a new JDBC connection to an InterSystems Cache data source, and I'm struggling to figure out whether it can even be done.
Since there was no InterSystems Cache option in the JDBC driver drop-down, I added the driver string manually -> com.intersys.jdbc.CacheDriver
I then added the URL manually in the following format -> jdbc:Cache://123.123.123.123:12345/namespace
I also found the JDBC driver and have added it to the Jar File Path -> cachedb.jar
Based on the error message below, I am wondering whether it's even possible to connect to InterSystems databases with the JDBC connector. What do you think?
When I try to connect, I get the following error:
Exception, if you want to see more information look into the details.
Reason: java.lang.ClassNotFoundException: com.intersys.jdbc.CacheDriver cannot be found by net.sf.jasperreports_6.2.1.final
The Details:
net.sf.jasperreports.engine.JRRuntimeException: java.lang.ClassNotFoundException: com.intersys.jdbc.CacheDriver cannot be found by net.sf.jasperreports_6.2.1.final
at net.sf.jasperreports.data.jdbc.JdbcDataAdapterService.getConnection(JdbcDataAdapterService.java:173)
at net.sf.jasperreports.data.jdbc.JdbcDataAdapterService.contributeParameters(JdbcDataAdapterService.java:128)
at net.sf.jasperreports.data.AbstractDataAdapterService.test(AbstractDataAdapterService.java:128)
at com.jaspersoft.studio.data.wizard.AbstractDataAdapterWizard$3.runOperations(AbstractDataAdapterWizard.java:162)
at com.jaspersoft.studio.utils.jobs.CheckedRunnableWithProgress$1.run(CheckedRunnableWithProgress.java:59)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: com.intersys.jdbc.CacheDriver cannot be found by net.sf.jasperreports_6.2.1.final
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:439)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:352)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:344)
at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:160)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at net.sf.jasperreports.engine.util.JRClassLoader.loadClassForRealName(JRClassLoader.java:174)
at net.sf.jasperreports.data.jdbc.JdbcDataAdapterService.getConnection(JdbcDataAdapterService.java:145)
... 5 more
I have asked this on the JasperReports community page, but it doesn't get much activity there.
You say that you found cachedb.jar, but you should use cachejdbc.jar. You can find this file under dev/java/lib/JDK17 or dev/java/lib/JDK18 in the InterSystems installation folder.
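Once cachejdbc.jar is in place, a quick way to verify that the driver loads at all, independent of the Jaspersoft Studio OSGi classloader, is a plain JDBC smoke test like the sketch below. The URL is the placeholder from the question and the credentials are made up.

import java.sql.Connection;
import java.sql.DriverManager;

// Smoke test for the InterSystems Cache JDBC driver. Placeholders: the URL
// is taken from the question and the credentials are invented for illustration.
public class CacheJdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        Class.forName("com.intersys.jdbc.CacheDriver"); // fails fast if the jar is missing
        String url = "jdbc:Cache://123.123.123.123:12345/namespace";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected to: " + conn.getMetaData().getDatabaseProductName());
        }
    }
}

Run it with the driver jar on the classpath, e.g. java -cp .:cachejdbc.jar CacheJdbcSmokeTest. If this connects but Studio still fails, the problem is how the Studio data adapter resolves the driver jar, not the driver itself.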
Documentation
Under load, using Hazelcast 2.4, we encountered the following Hazelcast exception in a cluster. The underlying issue appears to have been addressed in Hazelcast 2.5. To validate that the upgrade indeed addresses the issue we encountered, we would like to reproduce it first. In our current setup it occurs only rarely. How can we reproduce it under lab conditions?
I noticed Hazelcast - OperationTimeoutException, which may be related.
com.hazelcast.core.OperationTimeoutException: [CONCURRENT_MAP_CONTAINS_KEY] Operation Timeout (with no response!): 0
at com.hazelcast.impl.BaseManager$ResponseQueueCall.waitAndGetResult(BaseManager.java:619)
at com.hazelcast.impl.BaseManager$ResponseQueueCall.getRedoAwareResult(BaseManager.java:641)
at com.hazelcast.impl.BaseManager$ResponseQueueCall.getResult(BaseManager.java:636)
at com.hazelcast.impl.BaseManager$RequestBasedCall.getResultAsBoolean(BaseManager.java:447)
at com.hazelcast.impl.BaseManager$ResponseQueueCall.getResultAsBoolean(BaseManager.java:555)
at com.hazelcast.impl.BaseManager$RequestBasedCall.booleanCall(BaseManager.java:432)
at com.hazelcast.impl.BaseManager$ResponseQueueCall.booleanCall(BaseManager.java:555)
at com.hazelcast.impl.ConcurrentMapManager$MContainsKey.containsKey(ConcurrentMapManager.java:622)
at com.hazelcast.impl.MProxyImpl$MProxyReal.containsKey(MProxyImpl.java:937)
at sun.reflect.GeneratedMethodAccessor322.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.hazelcast.impl.MProxyImpl$DynamicInvoker.invoke(MProxyImpl.java:66)
at com.sun.proxy.$Proxy180.containsKey(Unknown Source)
at com.hazelcast.impl.MProxyImpl.containsKey(MProxyImpl.java:312)
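One way to provoke this class of timeout in a lab is to freeze the JVM of one member while another member keeps issuing map operations against keys that the frozen member owns. Below is a rough sketch under stated assumptions: the hazelcast.max.operation.timeout property name is my assumption for the 2.x line (check GroupProperties in your exact version), and the freeze itself happens externally, e.g. kill -STOP on the second member's process.

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

// Lab sketch, not a definitive recipe. Start this member, start a second
// member in another JVM, then freeze the second JVM (kill -STOP <pid>, or
// suspend all threads in a debugger). Calls targeting partitions owned by
// the frozen member should eventually fail with OperationTimeoutException.
public class TimeoutReproducer {
    public static void main(String[] args) throws Exception {
        Config config = new Config();
        // Assumed property name for shortening the operation timeout in 2.x.
        config.setProperty("hazelcast.max.operation.timeout", "5000");
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        IMap<Integer, Integer> map = hz.getMap("repro");
        for (int i = 0; i < 1000; i++) {
            map.put(i, i); // spread keys across both members' partitions
        }
        while (true) {
            for (int i = 0; i < 1000; i++) {
                map.containsKey(i); // should throw once the owner is frozen
            }
            Thread.sleep(100);
        }
    }
}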
I created a classic BusinessWorks process using BW 5.10 (Tibco TRA version 5.7, Tibco Designer 5.7.4.4, Tibco Administrator 5.7). The process worked fine in test mode in Tibco Designer, and on the same machine I created a BW domain and deployed the ear file in Tibco Administrator without any error. However, the process can't be started; it fails with the error below (message code BW-TIBSS-100001):
Activation error with process starter [process/sendZugstandort_Hannover.process]
at com.tibco.pe.core.ProcessStarter.setState(Unknown Source)
at com.tibco.pe.core.JobPool.if(Unknown Source)
at com.tibco.pe.core.JobPool.resume(Unknown Source)
at com.tibco.pe.core.JobPool.a(Unknown Source)
at com.tibco.pe.core.JobPool.startNotFT(Unknown Source)
at com.tibco.pe.PEMain.a(Unknown Source)
at com.tibco.pe.PEMain.do(Unknown Source)
at com.tibco.pe.PEMain.a(Unknown Source)
at com.tibco.pe.PEMain.<init>(Unknown Source)
at com.tibco.pe.PEMain.main(Unknown Source)
Caused by: Cannot activate Event Source: Specified message type does not exist..
at com.tibco.smartsockets.plugin.SSEventSource.activate(SSEventSource.java:150)
at com.tibco.pe.core.ProcessStarter.setState(Unknown Source)
at com.tibco.pe.core.JobPool.if(Unknown Source)
at com.tibco.pe.core.JobPool.resume(Unknown Source)
at com.tibco.pe.core.JobPool.a(Unknown Source)
at com.tibco.pe.core.JobPool.startNotFT(Unknown Source)
at com.tibco.pe.PEMain.a(Unknown Source)
at com.tibco.pe.PEMain.do(Unknown Source)
at com.tibco.pe.PEMain.a(Unknown Source)
at com.tibco.pe.PEMain.<init>(Unknown Source)
at com.tibco.pe.PEMain.main(Unknown Source)
My questions are:
What is the essential difference between running a BW process in test mode of Designer and in Tibco Administrator?
Why does my process run only in test mode of Designer and not in Tibco Administrator?
I was able to resolve my issue by manually adding additional resources to the enterprise archive. Normally, all resources referenced in a process definition are automatically added to the enterprise archive, but for the SmartSockets palette, the SmartSockets message type definitions are not included.
Lesson learned: when a process works fine in test mode of Designer but not in Tibco Administrator, first check the ear file to see whether all resources are included in it.
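An ear file is just a zip archive, so you can inspect it without opening Designer. A tiny JDK-only lister like the one below (myProcess.ear is a placeholder path) makes it easy to check whether the SmartSockets message type definitions were actually packaged:

import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

// Lists every entry in an ear (zip) archive so you can verify which
// resources were packaged. Pass the ear path as the first argument.
public class ListEarEntries {
    public static void main(String[] args) throws Exception {
        String path = args.length > 0 ? args[0] : "myProcess.ear"; // placeholder default
        try (ZipFile ear = new ZipFile(path)) {
            Enumeration<? extends ZipEntry> entries = ear.entries();
            while (entries.hasMoreElements()) {
                System.out.println(entries.nextElement().getName());
            }
        }
    }
}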
When I run my HBase custom filter, I get this error:
org.apache.hadoop.hbase.client.RpcRetryingCaller#459c8c0a, java.io.IOException: java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1360)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:916)
at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3056)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28454)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1358)
... 9 more
Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException: java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.filter.FilterList.parseFrom(FilterList.java:406)
... 14 more
Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1360)
at org.apache.hadoop.hbase.filter.FilterList.parseFrom(FilterList.java:403)
... 14 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1358)
... 15 more
Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException: parseFrom called on base Filter, but should be called on derived type
at org.apache.hadoop.hbase.filter.Filter.parseFrom(Filter.java:267)
... 20 more
Does anybody know how I can fix it?
I also had this error when trying to make a custom filter. My problem was that I did not include the methods toByteArray and parseFrom in my filter. See here for where I found the solution, and links to examples. (It took me two weeks of digging to find; HBase could really use some better documentation...)
As for what needs to go into those methods, I'm still having trouble in that regard. Conceptually (as I understand it), their purpose is to encode and decode the identifying information for your filter instance (basically, the information you would send to the constructor) into a serialized string of bytes. That way the particular filter can be 'instantiated' wherever it's needed. A minimal sketch of that shape follows below.
For me, including these methods prevented the hang and error, and my program now runs through to completion. I don't think I entirely understand the methods, though, as it seems the filter still doesn't actually run, but that's another topic. (If you figured it out, let me know!)
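For illustration, here is roughly the shape those two methods take. This is a hypothetical sketch: MyPrefixFilter and its single byte[] argument are invented, and real filters usually serialize their arguments with protobuf rather than raw bytes. The crucial detail is that parseFrom must be a static method declared on your derived class, because HBase looks it up reflectively; if the lookup falls through to the base class, you get exactly the "parseFrom called on base Filter" error from the question.

import org.apache.hadoop.hbase.exceptions.DeserializationException;
import org.apache.hadoop.hbase.filter.FilterBase;

// Hypothetical custom filter showing only the serialization plumbing.
public class MyPrefixFilter extends FilterBase {
    private final byte[] prefix;

    public MyPrefixFilter(byte[] prefix) {
        this.prefix = prefix;
    }

    // Encodes the constructor arguments so the region server can rebuild
    // an equivalent instance on its side.
    @Override
    public byte[] toByteArray() {
        return prefix.clone();
    }

    // Must be static and declared on the derived type; HBase resolves it via
    // reflection. Without it the call lands on Filter.parseFrom, which throws
    // "parseFrom called on base Filter, but should be called on derived type".
    public static MyPrefixFilter parseFrom(byte[] pbBytes) throws DeserializationException {
        return new MyPrefixFilter(pbBytes);
    }
}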
I had one cluster server that was giving this same error. Note that toByteArray and parseFrom were already present, and the same jar file worked fine on other clusters. I was able to solve it by restarting the HBase and ZooKeeper services together, after first ensuring that the /hbase/lib folder and the custom filter jar file had the appropriate owner (set to the hbase user).
I'm not able to replicate the error, but what I did above solved it for me. I tried changing the owner and the HBase config for the /hbase/lib folder, and creating a new folder, but couldn't replicate it, so it could just come down to the HBase restart.
The missing link is now located here
It compiles with no problem, but when I run it:
26183 [Thread-34] ERROR backtype.storm.util - Async loop died!
java.lang.UnsatisfiedLinkError: org.zeromq.ZMQ$Socket.finalize()V
at org.zeromq.ZMQ$Socket.finalize(Native Method)
at org.zeromq.ZMQ$Socket.close(ZMQ.java:339)
at storm.starter.spout.RandomSentenceSpout.nextTuple(RandomSentenceSpout.java:56)
at backtype.storm.daemon.executor$fn__3985$fn__3997$fn__4026.invoke(executor.clj:502)
at backtype.storm.util$async_loop$fn__465.invoke(util.clj:377)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:724)
26185 [Thread-34] ERROR backtype.storm.daemon.executor -
java.lang.UnsatisfiedLinkError: org.zeromq.ZMQ$Socket.finalize()V
at org.zeromq.ZMQ$Socket.finalize(Native Method)
at org.zeromq.ZMQ$Socket.close(ZMQ.java:339)
at storm.starter.spout.RandomSentenceSpout.nextTuple(RandomSentenceSpout.java:56)
at backtype.storm.daemon.executor$fn__3985$fn__3997$fn__4026.invoke(executor.clj:502)
at backtype.storm.util$async_loop$fn__465.invoke(util.clj:377)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:724)
Storm recommends using exactly version 2.1.7 of ZeroMQ.
https://github.com/xumingming/storm-wiki/blob/master/Installing-native-dependencies.md
Other versions are known to cause issues, as they contain some serious bugs. Which version of ZeroMQ are you using?
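If you're not sure which native library the Java binding actually loaded, a one-liner like this can tell you. Caveat: ZMQ.getVersionString() is my assumption about the jzmq API; some builds may only expose numeric accessors such as getMajorVersion()/getMinorVersion()/getPatchVersion().

import org.zeromq.ZMQ;

// Prints the native ZeroMQ version the binding loaded. getVersionString()
// is assumed to exist in your jzmq build; adjust if your version differs.
public class ZmqVersionCheck {
    public static void main(String[] args) {
        System.out.println("Native ZeroMQ version: " + ZMQ.getVersionString());
    }
}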
It seems the OP is long gone, but I'm leaving this answer in case anyone else has this problem (this page ranked highly when I googled for the same solution).
There's an issue with some versions of ZMQ (specifically the Java wrapper) that throws UnsatisfiedLinkError when you try to close a socket, which is what Storm is doing in this case.
More info here:
https://github.com/zeromq/jzmq/issues/237
The obvious solution would be to upgrade your jzmq version (or, in this case, your Storm version).
Cheers.