I've been struggling with class-loading problems using Hazelcast and OSGi (Equinox), which I hope have been fixed in version 3.2-RC1 (ClassLoaderUtil).
My problem now is that since version 3.1.6 the hazelcast-client artifact's manifest no longer contains any OSGi bundle information; the same is true for 3.2-RC1.
I couldn't find any explanation for this in the issue tracker, so I assume it's an error?
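For reference, a quick way to check the jar for OSGi headers (the exact jar file name depends on the version you have) is:
unzip -p hazelcast-client-3.2-RC1.jar META-INF/MANIFEST.MF | grep -i bundle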
My workaround of using 3.2-RC1 on the cluster nodes and 3.1.5 on the clients (where I don't have class-loading issues) does not work; I'm getting this exception:
15:54:41.002 ERROR [hz.node1.cached.thread-1 ] ClientEngine - [127.0.0.1]:5701 [dev] [3.2-RC1] Unknown field name: 'cId' for ClassDefinition {id: 2, version: 0}
com.hazelcast.nio.serialization.HazelcastSerializationException: Unknown field name: 'cId' for ClassDefinition {id: 2, version: 0}
at com.hazelcast.nio.serialization.DefaultPortableReader.throwUnknownFieldException(DefaultPortableReader.java:226) ~[hazelcast-3.2-RC1.jar:3.2-RC1]
at com.hazelcast.nio.serialization.DefaultPortableReader.getPosition(DefaultPortableReader.java:269) ~[hazelcast-3.2-RC1.jar:3.2-RC1]
at com.hazelcast.nio.serialization.DefaultPortableReader.readInt(DefaultPortableReader.java:71) ~[hazelcast-3.2-RC1.jar:3.2-RC1]
at com.hazelcast.client.ClientRequest.readPortable(ClientRequest.java:85) ~[hazelcast-3.2-RC1.jar:3.2-RC1]
at com.hazelcast.nio.serialization.PortableSerializer.read(PortableSerializer.java:99) ~[hazelcast-3.2-RC1.jar:3.2-RC1]
at com.hazelcast.nio.serialization.PortableSerializer.read(PortableSerializer.java:29) ~[hazelcast-3.2-RC1.jar:3.2-RC1]
at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:59) ~[hazelcast-3.2-RC1.jar:3.2-RC1]
at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:221) ~[hazelcast-3.2-RC1.jar:3.2-RC1]
at com.hazelcast.client.ClientEngineImpl$ClientPacketProcessor.run(ClientEngineImpl.java:349) ~[hazelcast-3.2-RC1.jar:3.2-RC1]
at com.hazelcast.util.executor.ManagedExecutorService$Worker.run(ManagedExecutorService.java:178) [hazelcast-3.2-RC1.jar:3.2-RC1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_45]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
at com.hazelcast.util.executor.PoolExecutorThreadFactory$ManagedThread.run(PoolExecutorThreadFactory.java:59) [hazelcast-3.2-RC1.jar:3.2-RC1]
You're right, it seems the bundle information is missing from hazelcast-client.jar. I'm looking into fixing it for 3.2, as Peter said.
Chris
The client version needs to be exactly the same as the server version, so you can't use a 3.1.5 client in combination with a 3.2-RC1 server.
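As a sketch, keeping both artifacts on the same version in Maven might look like this (3.2-RC1 is just the version discussed here):
<properties>
  <hazelcast.version>3.2-RC1</hazelcast.version>
</properties>
<!-- server/member side -->
<dependency>
  <groupId>com.hazelcast</groupId>
  <artifactId>hazelcast</artifactId>
  <version>${hazelcast.version}</version>
</dependency>
<!-- client side -->
<dependency>
  <groupId>com.hazelcast</groupId>
  <artifactId>hazelcast-client</artifactId>
  <version>${hazelcast.version}</version>
</dependency>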
If there is an issue with the manifest file, please open a ticket here:
https://github.com/hazelcast/hazelcast/issues
And we'll get it fixed before the 3.2 release.
Related
I am trying to update a Spring Boot application which uses org.apache.zookeeper:zookeeper.
After updating the Spring Boot version, I get one of the two errors below, depending on the versions used.
Error 1 (for the new versions listed below):
Caused by: org.apache.zookeeper.KeeperException$UnimplementedException: KeeperErrorCode = Unimplemented for /service/**/test/**/************
at org.apache.zookeeper.KeeperException.create(KeeperException.java:106)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1836)
at org.apache.curator.framework.imps.CreateBuilderImpl$16.call(CreateBuilderImpl.java:1131)
at org.apache.curator.framework.imps.CreateBuilderImpl$16.call(CreateBuilderImpl.java:1113)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:1110)
at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:593)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:583)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:48)
at org.apache.curator.x.discovery.details.ServiceDiscoveryImpl.internalRegisterService(ServiceDiscoveryImpl.java:237)
at org.apache.curator.x.discovery.details.ServiceDiscoveryImpl.registerService(ServiceDiscoveryImpl.java:192)
at org.springframework.cloud.zookeeper.serviceregistry.ZookeeperServiceRegistry.register(ZookeeperServiceRegistry.java:71)
... 63 more
or
Error 2 (for some other ZooKeeper and Curator versions from the first thread linked below):
Caused by: java.lang.ClassNotFoundException: org.apache.zookeeper.admin.ZooKeeperAdmin
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 109 more
Old versions (working fine):
Java - 8
SpringBoot - 2.3.3.RELEASE
Zookeeper - 3.4.12
Curator - 4.0.1
New versions (Spring-managed versions):
Java - 8
SpringBoot - 2.7.4
Zookeeper - 3.6.0
Curator - 5.1.0
Many threads mention that the issue is caused by incompatible ZooKeeper and Curator versions.
There are already some threads about this issue:
Zookeeper : java.lang.ClassNotFoundException: org.apache.zookeeper.admin.ZooKeeperAdmin
I tried every solution provided in that thread, and also some other combinations, but none seems to work. I also tried keeping the old ZooKeeper version and updating the rest; that didn't work either.
Apache Curator Unimplemented Errors When Trying to Create zNodes
I am not accessing Curator directly as described in that thread; I believe the ZooKeeper integration uses Curator internally.
Is there any other dependency I need to upgrade, or do I need to upgrade Java?
Please let me know if you need more info.
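For completeness, this is the kind of Maven override I have been experimenting with to force a specific ZooKeeper client version (the 3.6.3 version below is only an example, not a confirmed fix):
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-zookeeper-discovery</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- pin the ZooKeeper client explicitly so it matches the server -->
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <version>3.6.3</version>
</dependency>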
I am building a flow that processes real-time data from a local server and sends the relevant data to Elasticsearch. I use MiNiFi, but when I run MiNiFi it returns the following error.
Does anyone know where the issue is?
Thanks
ERROR [Timer-Driven Process Thread-10] o.a.n.p.elasticsearch.PutElasticsearch5 PutElasticsearch5[id=4ed70cbe-9838-35cd-0000-000000000000] PutElasticsearch5[id=4ed70cbe-9838-35cd-0000-000000000000] failed to process due to java.lang.NoClassDefFoundError: Could not initialize class org.elasticsearch.Version; rolling back session: {}
java.lang.NoClassDefFoundError: Could not initialize class org.elasticsearch.Version
at org.elasticsearch.common.io.stream.StreamOutput.<init>(StreamOutput.java:73)
at org.elasticsearch.common.io.stream.BytesStreamOutput.<init>(BytesStreamOutput.java:60)
at org.elasticsearch.common.io.stream.BytesStreamOutput.<init>(BytesStreamOutput.java:57)
at org.elasticsearch.common.io.stream.BytesStreamOutput.<init>(BytesStreamOutput.java:47)
at org.elasticsearch.common.xcontent.XContentBuilder.builder(XContentBuilder.java:67)
at org.elasticsearch.common.settings.Setting.arrayToParsableString(Setting.java:698)
at org.elasticsearch.common.settings.Setting.lambda$listSetting$26(Setting.java:656)
at org.elasticsearch.common.settings.Setting$2.getRaw(Setting.java:660)
at org.elasticsearch.common.settings.Setting.get(Setting.java:300)
at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:164)
at org.elasticsearch.client.transport.TransportClient.newPluginService(TransportClient.java:81)
at org.elasticsearch.client.transport.TransportClient.buildTemplate(TransportClient.java:106)
at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:228)
at org.elasticsearch.transport.client.PreBuiltTransportClient.<init>(PreBuiltTransportClient.java:69)
at org.elasticsearch.transport.client.PreBuiltTransportClient.<init>(PreBuiltTransportClient.java:65)
at org.apache.nifi.processors.elasticsearch.AbstractElasticsearch5TransportClientProcessor.getTransportClient(AbstractElasticsearch5TransportClientProcessor.java:230)
at org.apache.nifi.processors.elasticsearch.AbstractElasticsearch5TransportClientProcessor.createElasticsearchClient(AbstractElasticsearch5TransportClientProcessor.java:170)
at org.apache.nifi.processors.elasticsearch.AbstractElasticsearch5Processor.setup(AbstractElasticsearch5Processor.java:94)
at org.apache.nifi.processors.elasticsearch.PutElasticsearch5.onTrigger(PutElasticsearch5.java:177)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
To reduce its footprint, MiNiFi Java ships with only the standard bundle of processors. To use the other processors that are present in a standard NiFi deployment, you need to put the appropriate NAR file into the "lib" directory of the MiNiFi deployment.
For "PutElasticsearch5" you need "nifi-elasticsearch-nar-<version>.nar", where "<version>" is the version of NiFi that your version of MiNiFi is built from. Version 0.4.0 of MiNiFi Java uses NiFi 1.5.0.
For more information, and a list of the processors that do come bundled with MiNiFi out of the box, see the "MiNiFi Java Agent Quick Start" documentation, section "Using Processors Not Packaged with MiNiFi" [1]. For more information on which versions of MiNiFi correspond to which versions of the NiFi framework, see here [2].
[1] https://nifi.apache.org/minifi/minifi-java-agent-quick-start.html
[2] https://cwiki.apache.org/confluence/display/MINIFI/MiNiFi+Versioning+and+Toolkit+Compatibility
Good afternoon everyone. The problem is this: I have a server with SonarQube, and when I try to start the Windows service it comes up but then stops.
The following error appears in the SonarQube log:
2017.11.14 11:04:52 WARN sea[o.e.transport.netty] [sonar-1510653879773] exception caught on transport layer [[id: 0x346b46fb, /127.0.0.1:59330 => /127.0.0.1:9001]], closing connection
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method) ~[na:1.8.0_152]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43) ~[na:1.8.0_152]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_152]
at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_152]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[na:1.8.0_152]
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [elasticsearch-1.1.2.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_152]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_152]
2017.11.14 11:04:52 INFO app[o.s.p.m.TerminatorThread] Process[search] is stopping
2017.11.14 11:04:52 INFO sea[o.s.p.StopWatcher] Stopping process
Do you know why this error occurs?
I have set sonar.properties correctly, including setting the sonar.search.port property to 0 as this link suggests: Sonar launch error, but the problem persists.
I hope you can give me a hand...
Regards!!!
Uncomment the line below in the sonar.properties file and change the port from 9001 to 0:
#sonar.search.port=9001
sonar.search.port=0
I had the same problem and I could fix it like this:
Go to this folder: sonarqube-x.x\conf
Open this file: sonar.properties
Find the line: #sonar.web.port
Change the value from 9000 to another port, like 9002, and uncomment the line
Save your changes
Start SonarQube again
Access the server on the new port: http://localhost:9002
The reason could be the port number of SonarQube, or that of the Elasticsearch instance used by SonarQube (I had a similar problem before). The steps to change one or both of those ports are:
Go to this folder: sonarqube-x.x\conf
Open this file: sonar.properties
For the SonarQube port:
Find: #sonar.web.port
Change the value from 9000 to another port, like 9123, and uncomment the line (remove the # at the beginning): sonar.web.port=9123
For SonarQube's Elasticsearch instance port:
Find: #sonar.search.port
Change this line to sonar.search.port=0 (this means it will pick any available port and bind to it)
Save your changes
Start SonarQube again
Access the server on the newly specified SonarQube port: http://localhost:9123
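Put together, the relevant lines in conf/sonar.properties would look roughly like this (the web port 9123 is only an example):
sonar.web.port=9123
sonar.search.port=0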
I experienced this error when upgrading SonarQube from version 5.6.7 to 6.7.1.
Originally I thought this was due to the port number but upon checking the web.log I noticed that there was an error relating to the LDAP plugin (2.2.0.608).
ERROR web[][o.s.s.p.Platform] Background initialization failed. Stopping SonarQube org.sonar.plugins.ldap.LdapException: The property 'ldap.url' is empty and no realm configured to try auto-discovery.
Updating the sonar.properties file with the correct configuration allowed SonarQube to start.
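For reference, the LDAP-related entries in sonar.properties look roughly like this (the URL and bind settings below are placeholders, not the values from my setup):
sonar.security.realm=LDAP
ldap.url=ldap://ldap.example.org:389
ldap.bindDn=cn=sonar,ou=users,dc=example,dc=org
ldap.bindPassword=secret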
I just ran into exactly the same issue as you did.
I started SonarQube with MariaDB 5.5, but I found some error messages in sonarqube-x.x/logs/web.log:
2021.01.21 14:36:17 INFO web[][o.s.p.ProcessEntryPoint] Starting web
......
2021.01.21 14:36:19 ERROR web[][o.s.s.p.Platform] Web server startup failed: Unsupported mysql version: 5.5. Minimal supported version is 5.6.
So I changed my database to MySQL 5.7 and it started successfully.
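For reference, the JDBC settings in sonar.properties for a MySQL setup look roughly like this (host, database name and credentials below are placeholders):
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8
sonar.jdbc.username=sonar
sonar.jdbc.password=sonar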
Not quite sure you had the same problem, but check these log files and see what actually happened during startup.
I am getting a java.lang.NoSuchFieldError: INT_8 error when trying to execute a Spark job using Oozie on Cloudera 5.5.1.
Any help on this will be appreciated.
Please find the error stack trace below.
16/01/28 11:21:17 WARN TaskSetManager: Lost task 0.2 in stage 20.0 (TID 40, Zlab-physrv1): java.lang.NoSuchFieldError: INT_8
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convertField(CatalystSchemaConverter.scala:327)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convertField(CatalystSchemaConverter.scala:312)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter$$anonfun$convertField$1.apply(CatalystSchemaConverter.scala:517)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter$$anonfun$convertField$1.apply(CatalystSchemaConverter.scala:516)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:51)
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:60)
at scala.collection.mutable.ArrayOps$ofRef.foldLeft(ArrayOps.scala:108)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convertField(CatalystSchemaConverter.scala:516)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convertField(CatalystSchemaConverter.scala:312)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convertField(CatalystSchemaConverter.scala:521)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convertField(CatalystSchemaConverter.scala:312)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter$$anonfun$convert$1.apply(CatalystSchemaConverter.scala:305)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter$$anonfun$convert$1.apply(CatalystSchemaConverter.scala:305)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at org.apache.spark.sql.types.StructType.foreach(StructType.scala:92)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at org.apache.spark.sql.types.StructType.map(StructType.scala:92)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convert(CatalystSchemaConverter.scala:305)
at org.apache.spark.sql.execution.datasources.parquet.ParquetTypesConverter$.convertFromAttributes(ParquetTypesConverter.scala:58)
at org.apache.spark.sql.execution.datasources.parquet.RowWriteSupport.init(ParquetTableSupport.scala:55)
at parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:277)
at parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:251)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetRelation.scala:94)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anon$3.newInstance(ParquetRelation.scala:272)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:233)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
As far as I understand, this error normally occurs whenever there is a mismatch between the jars used to compile the code and the jars on the runtime classpath.
Note: when I submit the same job using the spark-submit command, it runs fine.
Regards
Nisith
Finally able to debug and fix the issue. The issue was with the installation: one of the data nodes had an older version of the Parquet jars (from the CDH 5.2 distribution). After replacing them with the current-version jars, everything worked fine.
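A quick way to spot such a mismatch (the search path below is an assumption for a CDH-style layout) is to list the Parquet jars on every node and compare versions:
# run on each data node; adjust the search root to your installation
find /opt/cloudera/parcels -name 'parquet-*.jar' 2>/dev/null | xargs -n1 basename | sort -u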
I am getting the error below when my Storm topology receives its first message from Kafka, and the worker dies.
2015-08-13 12:44:58 b.s.d.executor [INFO] Finished loading executor hdfs-bolt:[3 3]
2015-08-13 12:44:58 b.s.util [ERROR] Async loop died!
java.lang.RuntimeException: Could not instantiate a class listed in config under section topology.metrics.consumer.register with fully qualified name org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink
at backtype.storm.metric.MetricsConsumerBolt.prepare(MetricsConsumerBolt.java:46) ~[storm-core-0.9.3.2.2.6.3-1.jar:0.9.3.2.2.6.3-1]
at backtype.storm.daemon.executor$fn__6414$fn__6427.invoke(executor.clj:732) ~[storm-core-0.9.3.2.2.6.3-1.jar:0.9.3.2.2.6.3-1]
at backtype.storm.util$async_loop$fn__451.invoke(util.clj:463) ~[storm-core-0.9.3.2.2.6.3-1.jar:0.9.3.2.2.6.3-1]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink
Can someone help me solve this issue?
Check the version of Ambari installed with "rpm -qa | grep ambari", and check "/usr/lib/storm/lib" on all hosts for the Ambari metrics jar matching that version
Example: ambari-metrics-storm-sink-with-common-2.0.0.151.jar
Run "yum reinstall ambari-metrics-hadoop-sink" on all Storm supervisor nodes
Restart the supervisors and re-deploy the topology
Check "/usr/lib/storm/lib" to ensure that the jar matching the Ambari version is present (see the consolidated command sketch after this list)
Hortonworks has posted a knowledge base article on exactly this issue: https://community.hortonworks.com/content/supportkb/49117/storm-worker-fails-with-javalangclassnotfoundexcep.html
In my case, I needed to install (not reinstall) ambari-metrics-hadoop-sink as it was not installed by default on the HDP sandbox.