Zookeeper: java.lang.ClassNotFoundException: org.apache.zookeeper.admin.ZooKeeperAdmin after updating Spring Boot

I am trying to update a Spring Boot application that uses org.apache.zookeeper:zookeeper.
After updating the Spring Boot version, I get one of the two errors below, depending on the versions used.
Error 1 (with the new versions listed below):
Caused by: org.apache.zookeeper.KeeperException$UnimplementedException: KeeperErrorCode = Unimplemented for /service/**/test/**/************
at org.apache.zookeeper.KeeperException.create(KeeperException.java:106)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1836)
at org.apache.curator.framework.imps.CreateBuilderImpl$16.call(CreateBuilderImpl.java:1131)
at org.apache.curator.framework.imps.CreateBuilderImpl$16.call(CreateBuilderImpl.java:1113)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:1110)
at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:593)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:583)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:48)
at org.apache.curator.x.discovery.details.ServiceDiscoveryImpl.internalRegisterService(ServiceDiscoveryImpl.java:237)
at org.apache.curator.x.discovery.details.ServiceDiscoveryImpl.registerService(ServiceDiscoveryImpl.java:192)
at org.springframework.cloud.zookeeper.serviceregistry.ZookeeperServiceRegistry.register(ZookeeperServiceRegistry.java:71)
... 63 more
or
Error 2 (with some of the other ZooKeeper and Curator versions suggested in the first thread linked below):
Caused by: java.lang.ClassNotFoundException: org.apache.zookeeper.admin.ZooKeeperAdmin
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 109 more
Old versions (working):
Java - 8
SpringBoot - 2.3.3.RELEASE
Zookeeper - 3.4.12
Curator - 4.0.1
New versions (Spring-managed):
Java - 8
SpringBoot - 2.7.4
Zookeeper - 3.6.0
Curator - 5.1.0
Many threads mention that the issue is caused by incompatible ZooKeeper and Curator versions.
There are some existing threads about this issue:
Zookeeper : java.lang.ClassNotFoundException: org.apache.zookeeper.admin.ZooKeeperAdmin
I tried every solution provided in this thread, as well as some other version combinations, but none of them worked. I also tried keeping the old ZooKeeper version and updating everything else; that didn't work either.
Apache Curator Unimplemented Errors When Trying to Create zNodes
I am not accessing Curator directly as described in this thread; I believe Spring Cloud Zookeeper uses Curator internally.
Is there any other dependency I need to upgrade, or do I need to upgrade Java?
Please let me know if you need more information. A sketch of the dependency pinning I have been experimenting with is below.
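From what I have read, org.apache.zookeeper.admin.ZooKeeperAdmin only exists in ZooKeeper 3.5+, and Curator 5.x no longer supports ZooKeeper 3.4.x, so a leftover 3.4.x zookeeper jar on the classpath would explain Error 2, and a ZooKeeper server still running 3.4.x could explain the Unimplemented error in Error 1. For reference, this is the kind of exclusion and pinning I have been trying in the pom (a sketch, assuming a Maven build and the standard spring-cloud-starter-zookeeper-discovery starter; adjust if your setup differs):

    <!-- Sketch: keep exactly one zookeeper client version on the classpath.
         Exclude the transitive client from the starter, then pin the
         Spring-managed version explicitly. -->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-zookeeper-discovery</artifactId>
        <exclusions>
            <exclusion>
                <groupId>org.apache.zookeeper</groupId>
                <artifactId>zookeeper</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <!-- 3.6.0 is the Spring-managed version listed above -->
        <version>3.6.0</version>
    </dependency>

Running mvn dependency:tree and checking that only one org.apache.zookeeper:zookeeper entry appears (and that the ZooKeeper server itself is 3.5+) is how I have been verifying each combination.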

Related

Apache Nifi Web Server keeps failing to start with Decryption exception

I have a setup in which the NiFi Web Server suddenly started failing to start after upgrading from version 1.15.3 to 1.16.1. The following exception keeps occurring on the Apache NiFi cluster:
2022-05-11 22:53:40,570 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
org.apache.nifi.encrypt.EncryptionException: Decryption Failed with Algorithm [PBEWITHMD5AND256BITAES-CBC-OPENSSL]
at org.apache.nifi.encrypt.CipherPropertyEncryptor.decrypt(CipherPropertyEncryptor.java:78)
at org.apache.nifi.fingerprint.FingerprintFactory.decrypt(FingerprintFactory.java:931)
at org.apache.nifi.fingerprint.FingerprintFactory.getLoggableRepresentationOfSensitiveValue(FingerprintFactory.java:561)
at org.apache.nifi.fingerprint.FingerprintFactory.addParameter(FingerprintFactory.java:330)
at org.apache.nifi.fingerprint.FingerprintFactory.addParameterContext(FingerprintFactory.java:302)
at org.apache.nifi.fingerprint.FingerprintFactory.addFlowControllerFingerprint(FingerprintFactory.java:210)
at org.apache.nifi.fingerprint.FingerprintFactory.createFingerprint(FingerprintFactory.java:153)
at org.apache.nifi.fingerprint.FingerprintFactory.createFingerprint(FingerprintFactory.java:127)
at org.apache.nifi.controller.inheritance.FlowFingerprintCheck.checkInheritability(FlowFingerprintCheck.java:45)
at org.apache.nifi.controller.XmlFlowSynchronizer.sync(XmlFlowSynchronizer.java:200)
at org.apache.nifi.controller.serialization.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:43)
at org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1524)
at org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:104)
at org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:815)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:457)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1086)
at org.apache.nifi.NiFi.<init>(NiFi.java:170)
at org.apache.nifi.NiFi.<init>(NiFi.java:82)
at org.apache.nifi.NiFi.main(NiFi.java:330)
Caused by: javax.crypto.BadPaddingException: pad block corrupted
at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher$BufferedGenericBlockCipher.doFinal(Unknown Source)
at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher.engineDoFinal(Unknown Source)
at javax.crypto.Cipher.doFinal(Cipher.java:2168)
at org.apache.nifi.encrypt.CipherPropertyEncryptor.decrypt(CipherPropertyEncryptor.java:74)
... 18 common frames omitted
relevant nifi.properties:
nifi.sensitive.props.key=<hidden>
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.additional.keys=
I have already tried tearing it all down and re-installing 1.15.3 with no other changes, but the same issue persists. Can someone please share any ideas on how to fix this? A sketch of the key-migration command I also considered is below.
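In case it is relevant: since the flow fingerprint fails to decrypt with the configured sensitive properties key, I also looked at migrating that key with the NiFi Toolkit's encrypt-config tool. A sketch, assuming the standard toolkit layout; the paths and the new key value are placeholders, and NiFi 1.16 expects the key to be at least 12 characters:

    # Sketch with placeholder paths/key: re-encrypt nifi.properties and
    # flow.xml.gz under a new sensitive properties key, migrating from
    # the existing key already present in nifi.properties.
    ./nifi-toolkit-1.16.1/bin/encrypt-config.sh \
        -n /opt/nifi/conf/nifi.properties \
        -f /opt/nifi/conf/flow.xml.gz \
        -s 'newKeyAtLeast12Chars' \
        -m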

SAP Vora Thrift Server Error: Instantiating dialect 'sapsql' failed

I have deployed a Cloudera CDH 5.13.1 cluster with SAP Vora 1.4 Patch 4.
When I start the Vora Thrift Server everything looks fine, but as soon as I start the SAP Vora tools and log in, the following error shows up:
17/12/20 11:26:52 ERROR thriftserver.SparkExecuteStatementOperation: Error executing query, currentState RUNNING,
org.apache.spark.sql.catalyst.errors.package$DialectException: Instantiating dialect 'sapsql' failed.
Reverting to default dialect 'sapsql'
at org.apache.spark.sql.SQLContext.getSQLDialect(SQLContext.scala:225)
at org.apache.spark.sql.hive.HiveContext.getSQLDialect(HiveContext.scala:577)
at org.apache.spark.sql.hive.SapHiveContext$$anonfun$1.apply(SapHiveContext.scala:54)
at org.apache.spark.sql.hive.SapHiveContext$$anonfun$1.apply(SapHiveContext.scala:54)
at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
at org.apache.spark.sql.hive.SapHiveContext$$anonfun$2.apply(SapHiveContext.scala:58)
at org.apache.spark.sql.hive.SapHiveContext$$anonfun$2.apply(SapHiveContext.scala:58)
at org.apache.spark.sql.execution.datasources.DDLParser.parse(DDLParser.scala:43)
at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:231)
at org.apache.spark.sql.hive.HiveContext.parseSql(HiveContext.scala:334)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:829)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:211)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.extension.SapSQLDialect
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.util.Utils$.classForName(Utils.scala:177)
at org.apache.spark.sql.SQLContext.getSQLDialect(SQLContext.scala:215)
... 54 more
The installation guide says I need to grant the vora user authorization for the Hive Metastore.
Since this is only a test setup, authorization is disabled in Hive; the vora user can create and drop tables in the default database and has write access to Hive's warehouse location.
How can I solve this?
This issue is caused by an incompatibility between CDH 5.13 and Vora 1.4 Patch 4. The issue is currently being investigated by SAP.
Is it an option for you to move to a newer Vora version? The current version is Vora 2.1. Since version 2.0, Vora is deployed in a Kubernetes cluster instead of the Hadoop cluster, which could help you overcome this CDH dependency issue.

How to compile Nutch 2.3.1 with Hbase 1.2.6

I have to set up a Hadoop stack with Nutch 2.3.1. The supported HBase version for Hadoop 2.7.4 is 1.2.6, which I have configured and tested successfully. But when I compile Nutch and crawl a sample page, I get the following error.
/usr/local/nutch/runtime/local/bin/nutch inject urls/ -crawlId kics
InjectorJob: starting at 2017-09-21 14:20:10
InjectorJob: Injecting urlDir: urls
Exception in thread "main" java.lang.NoSuchFieldError: HBASE_CLIENT_PREFETCH_LIMIT
at org.apache.hadoop.hbase.client.HConnectionKey.<clinit>(HConnectionKey.java:43)
at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:267)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:194)
at org.apache.gora.hbase.store.HBaseStore.initialize(HBaseStore.java:115)
at org.apache.gora.store.DataStoreFactory.initializeDataStore(DataStoreFactory.java:102)
at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:161)
at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:135)
at org.apache.nutch.storage.StorageUtils.createWebStore(StorageUtils.java:78)
at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:218)
at org.apache.nutch.crawl.InjectorJob.inject(InjectorJob.java:252)
at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:275)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.crawl.InjectorJob.main(InjectorJob.java:284)
Error running:
According to my search (threads such as this and this), Nutch 2.3.1 can be compiled against HBase 1.x, but I have no idea how. Can someone please guide me (steps etc.)?
Apache Gora 0.7 is the one supporting HBase 1.2.3(+): https://issues.apache.org/jira/browse/GORA-443
You can take a look at https://stackoverflow.com/a/39837926/582789 where I wrote how to modify Nutch 2.3.1 to work with Apache Gora 0.7. About the patch https://paste.apache.org/jjqz in that answer, use "0.7" where it shows "0.7-SNAPSHOT".
By the way, Apache Gora 0.8 was released yesterday :) Just changing 0.7 to 0.8 should work; see the sketch below.
http://gora.apache.org/#20-september-2017-apache-gora-08-release
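For illustration, a sketch of the kind of change involved, assuming Nutch 2.3.1's default ivy/ivy.xml and that gora-core and gora-hbase are the modules the patch touches (the authoritative list of edits is in the linked answer and patch):

    <!-- ivy/ivy.xml sketch: bump the Gora modules to the released 0.7 -->
    <dependency org="org.apache.gora" name="gora-core" rev="0.7" conf="*->default"/>
    <dependency org="org.apache.gora" name="gora-hbase" rev="0.7" conf="*->default"/>

After editing, rebuild with ant runtime from the Nutch source root so runtime/local picks up the new jars.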

WSO2 DAS 3.0.0 with API Manager 1.9.0 not working

I am trying to use DAS 3.0.0 as a replacement for BAM with WSO2 API Manager 1.9.0/1.9.1, with Oracle for the WSO2AM_STATS_DB.
I am following http://blog.rukspot.com/2015/09/publishing-apim-runtime-statistics-to.html
I can see data in DAS's carbon dashboard in Data Explorer tables ORG_WSO2_APIMGT_STATISTICS_REQUEST and ORG_WSO2_APIMGT_STATISTICS_RESPONSE.
But the data is not stored in Oracle, so I am not able to see statistics in the API Manager publisher. It keeps saying "Data publishing is enabled. Generate some traffic to see statistics."
I am getting following error in log:
[2015-12-08 13:00:00,022] INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Executing the schedule task for: APIM_STAT_script for tenant id: -1234
[2015-12-08 13:00:00,037] INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Executing the schedule task for: Throttle_script for tenant id: -1234
Exception in thread "dag-scheduler-event-loop" java.lang.NoClassDefFoundError: org/xerial/snappy/SnappyInputStream
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:274)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:66)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1291)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:874)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:815)
at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1426)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Caused by: java.lang.ClassNotFoundException: org.xerial.snappy.SnappyInputStream cannot be found by spark-core_2.10_1.4.1.wso2v1
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 15 more
Am I missing something?
Can anyone please help me to figure out this issue?
Thanks in advance.
Move all the libraries (jars) into your project's /WEB-INF/lib; everything under /WEB-INF/lib is then on the classpath.
Use the snappy-java jar and it will work as you want; a sketch of the coordinates is below.
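For reference, a sketch of the missing library in Maven coordinates; the version shown is illustrative, so match it to the Spark 1.4.1 build that DAS 3.0.0 ships:

    <!-- snappy-java provides org.xerial.snappy.SnappyInputStream,
         the class missing in the NoClassDefFoundError above -->
    <dependency>
        <groupId>org.xerial.snappy</groupId>
        <artifactId>snappy-java</artifactId>
        <version>1.1.1.7</version>
    </dependency>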

WebSocket with Spring 4.0 with MessageBroker and WebServers

I've downloaded the latest version of the websocket with spring example from
https://github.com/rstoyanchev/spring-websocket-portfolio.
I'm running the application on Tomcat 8.0 RC5 and using the message broker RabbitMQ 3.1.5.
The simulation runs only 2-3 times; after that it stops working due to the error below. I also tried running the application on GlassFish 4.0, but the same error occurs. Can you please help me understand what the reason could be?
15:47:35 [BrokerWebSocketChannel-1] ExceptionWebSocketHandlerDecorator - Unhandled error for ExceptionWebSocketHandlerDecorator [delegate=LoggingWebSocketHandlerDecorator [delegate=org.springframework.messaging.handler.websocket.SubProtocolWebSocketHandler#1d2bde4]]
java.lang.IllegalStateException: The WebSocket session has been closed and no method (apart from close()) may be called on a closed session
at org.apache.tomcat.websocket.WsSession.checkState(WsSession.java:642)
at org.apache.tomcat.websocket.WsSession.getNegotiatedSubprotocol(WsSession.java:297)
at org.springframework.web.socket.adapter.StandardWebSocketSession.getAcceptedProtocol(StandardWebSocketSession.java:113)
at org.springframework.web.socket.sockjs.transport.session.WebSocketServerSockJsSession.getAcceptedProtocol(WebSocketServerSockJsSession.java:91)
at org.springframework.messaging.handler.websocket.SubProtocolWebSocketHandler.findProtocolHandler(SubProtocolWebSocketHandler.java:149)
at org.springframework.messaging.handler.websocket.SubProtocolWebSocketHandler.afterConnectionClosed(SubProtocolWebSocketHandler.java:224)
at org.springframework.web.socket.support.WebSocketHandlerDecorator.afterConnectionClosed(WebSocketHandlerDecorator.java:69)
at org.springframework.web.socket.support.LoggingWebSocketHandlerDecorator.afterConnectionClosed(LoggingWebSocketHandlerDecorator.java:74)
at org.springframework.web.socket.support.ExceptionWebSocketHandlerDecorator.afterConnectionClosed(ExceptionWebSocketHandlerDecorator.java:89)
at org.springframework.web.socket.sockjs.transport.session.AbstractSockJsSession.close(AbstractSockJsSession.java:237)
at org.springframework.messaging.simp.stomp.StompProtocolHandler.handleMessageToClient(StompProtocolHandler.java:183)
at org.springframework.messaging.handler.websocket.SubProtocolWebSocketHandler.handleMessage(SubProtocolWebSocketHandler.java:194)
at org.springframework.messaging.support.channel.ExecutorSubscribableChannel$1.run(ExecutorSubscribableChannel.java:80)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
