Consumer thread error, thread abort on SimpleMessageListenerContainer - spring

I have an application that has been running for a few months, but recently it started throwing an error while receiving messages from MQ.
When the error is thrown, the application stops consuming messages. After restarting the application, message consumption runs normally again.
Error:
[2018-10-02 10:34:31,068] ERROR [SimpleAsyncTaskExecutor-1] o.s.a.r.l.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1473) - Consumer thread error, thread abort.
java.lang.NoClassDefFoundError: org/springframework/classify/SubclassClassifier$ClassComparator
    at org.springframework.classify.SubclassClassifier.classify(SubclassClassifier.java:115)
    at org.springframework.classify.BinaryExceptionClassifier.classify(BinaryExceptionClassifier.java:104)
    at org.springframework.retry.policy.SimpleRetryPolicy.retryForException(SimpleRetryPolicy.java:191)
    at org.springframework.retry.policy.SimpleRetryPolicy.canRetry(SimpleRetryPolicy.java:143)
    at org.springframework.retry.support.RetryTemplate.canRetry(RetryTemplate.java:357)
    at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:291)
    at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:172)
    at org.springframework.retry.interceptor.RetryOperationsInterceptor.invoke(RetryOperationsInterceptor.java:98)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213)
    at com.sun.proxy.$Proxy89.invokeListener(Unknown Source)
    at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.invokeListener(SimpleMessageListenerContainer.java:1238)
    at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.executeListener(AbstractMessageListenerContainer.java:727)
    at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:1192)
    at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:1176)
    at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$1100(SimpleMessageListenerContainer.java:99)
    at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1370)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.springframework.classify.SubclassClassifier$ClassComparator

You are missing spring-retry on the classpath. What are you using for dependency management? It should be added to the classpath automatically when using Maven or Gradle, since it is a transitive dependency of spring-amqp.
"After restarting the application, message consumption runs normally again."
That makes no sense unless you have some kind of unusual classloader problem.
Try running with -verbose:class to get logs for all the class loading.
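If the build is Maven-based, one way to make the requirement explicit is to declare spring-retry directly in the pom.xml; a minimal sketch (let your existing dependency management/BOM supply the version where possible):

<!-- spring-retry is normally pulled in transitively by spring-amqp;
     declaring it explicitly documents the requirement and forces it onto the classpath.
     Omit the version element if a Spring BOM manages it. -->
<dependency>
    <groupId>org.springframework.retry</groupId>
    <artifactId>spring-retry</artifactId>
</dependency>

For the class-loading logs, start the JVM with java -verbose:class and check whether, and from which jar, SubclassClassifier$ClassComparator is loaded.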

Related

Apache Nifi Web Server keeps failing to start with Decryption exception

I have a setup in which the NiFi Web Server suddenly started failing to start after upgrading from version 1.15.3 to 1.16.1. The following exception keeps occurring on the Apache NiFi cluster:
2022-05-11 22:53:40,570 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
org.apache.nifi.encrypt.EncryptionException: Decryption Failed with Algorithm [PBEWITHMD5AND256BITAES-CBC-OPENSSL]
at org.apache.nifi.encrypt.CipherPropertyEncryptor.decrypt(CipherPropertyEncryptor.java:78)
at org.apache.nifi.fingerprint.FingerprintFactory.decrypt(FingerprintFactory.java:931)
at org.apache.nifi.fingerprint.FingerprintFactory.getLoggableRepresentationOfSensitiveValue(FingerprintFactory.java:561)
at org.apache.nifi.fingerprint.FingerprintFactory.addParameter(FingerprintFactory.java:330)
at org.apache.nifi.fingerprint.FingerprintFactory.addParameterContext(FingerprintFactory.java:302)
at org.apache.nifi.fingerprint.FingerprintFactory.addFlowControllerFingerprint(FingerprintFactory.java:210)
at org.apache.nifi.fingerprint.FingerprintFactory.createFingerprint(FingerprintFactory.java:153)
at org.apache.nifi.fingerprint.FingerprintFactory.createFingerprint(FingerprintFactory.java:127)
at org.apache.nifi.controller.inheritance.FlowFingerprintCheck.checkInheritability(FlowFingerprintCheck.java:45)
at org.apache.nifi.controller.XmlFlowSynchronizer.sync(XmlFlowSynchronizer.java:200)
at org.apache.nifi.controller.serialization.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:43)
at org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1524)
at org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:104)
at org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:815)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:457)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1086)
at org.apache.nifi.NiFi.<init>(NiFi.java:170)
at org.apache.nifi.NiFi.<init>(NiFi.java:82)
at org.apache.nifi.NiFi.main(NiFi.java:330)
Caused by: javax.crypto.BadPaddingException: pad block corrupted
at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher$BufferedGenericBlockCipher.doFinal(Unknown Source)
at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher.engineDoFinal(Unknown Source)
at javax.crypto.Cipher.doFinal(Cipher.java:2168)
at org.apache.nifi.encrypt.CipherPropertyEncryptor.decrypt(CipherPropertyEncryptor.java:74)
... 18 common frames omitted
relevant nifi.properties:
nifi.sensitive.props.key=<hidden>
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.additional.keys=
I have already tried tearing it all down and re-installing 1.15.3 without any other changes, but the same issue still persists. Can someone please share any ideas on how to fix this?

worker is getting restarted continuously with closedchannel exception in supervisor

A worker on one of the supervisors keeps getting restarted with a ClosedChannelException. If I run the same topology on another Storm cluster in a different environment, it runs without any errors.
Below is the error I can see in the Storm UI.
java.lang.RuntimeException: java.nio.channels.ClosedChannelException
    at org.apache.storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103)
    at org.apache.storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69)
    at org.apache.storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:129)
    at org.apache.storm.daemon.executor$fn__7990$fn__8005$fn__8036.invoke(executor.clj:648)
    at org.apache.storm.util$async_loop$fn__624.invoke(util.clj:484)
    at clojure.lang.AFn.run(AFn.java:22)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.ClosedChannelException
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
    at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:78)
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
    at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127)
    at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79)
    at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:75)
    at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:65)
    at org.apache.storm.kafka.PartitionManager.<init>(PartitionManager.java:94)
    at org.apache.storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98)
    ... 6 more
Can anyone please help me find out the exact issue? Please let me know if you need any more information.

I faced this issue, and the problem was that the ZooKeeper host names were not being resolved from the worker host.
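One quick way to confirm that from the worker host itself is a small resolution check like the sketch below (the ZooKeeper host names are placeholders; substitute the storm.zookeeper.servers entries from your storm.yaml):

import java.net.InetAddress;
import java.net.UnknownHostException;

// Run on the worker host: prints whether each ZooKeeper host name resolves there.
public class ZkHostCheck {
    public static void main(String[] args) {
        // Placeholder host names; use the values from storm.zookeeper.servers.
        String[] zkHosts = {"zk1.example.com", "zk2.example.com", "zk3.example.com"};
        for (String host : zkHosts) {
            try {
                InetAddress addr = InetAddress.getByName(host);
                System.out.println(host + " -> " + addr.getHostAddress());
            } catch (UnknownHostException e) {
                System.out.println(host + " -> NOT RESOLVABLE");
            }
        }
    }
}

If a name does not resolve, fixing DNS or the worker host's hosts file addresses the cause described in this answer.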

log4j2 shutdownHook="disable" not working

I get the shutdown hook error in my Spring Boot application, in which I use log4j2.
I saw that it was a bug in log4j2 and that shutdownHook="disable" was supposed to resolve it, but in spite of using it I still get the same error.
I am using Spring Boot version 1.3.7.RELEASE, and the log4j2 dependency is
spring-boot-starter-log4j2.
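For reference, the attribute in question sits on the root Configuration element of log4j2.xml; a minimal sketch (the appender and pattern are illustrative only):

<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal log4j2.xml with the shutdown hook disabled; appender and pattern are illustrative. -->
<Configuration shutdownHook="disable">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>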
This is the error I get when I try to close the session using Ctrl-C:
2016-08-23 13:38:56,098 Thread-7 ERROR No log4j2 configuration file found. Using default configuration: logging only errors to the console.
2016-08-23 13:38:56,105 Thread-7 FATAL Unable to register shutdown hook because JVM is shutting down. java.lang.IllegalStateException: Cannot add new shutdown hook as this is not started. Current state: STOPPED
    at org.apache.logging.log4j.core.util.DefaultShutdownCallbackRegistry.addShutdownCallback(DefaultShutdownCallbackRegistry.java:113)
    at org.apache.logging.log4j.core.impl.Log4jContextFactory.addShutdownCallback(Log4jContextFactory.java:271)
    at org.apache.logging.log4j.core.LoggerContext.setUpShutdownHook(LoggerContext.java:256)
    at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:216)
    at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:146)
    at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:41)
    at org.apache.logging.log4j.LogManager.getContext(LogManager.java:185)
    at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:103)
    at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
    at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:42)
    at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
    at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:358)
    at com.worldline.frm.dataloader.policy.CustomQueueRoutePolicy.onStop(CustomQueueRoutePolicy.java:90)
    at org.apache.camel.impl.RouteService.doStop(RouteService.java:249)
WSO2 DAS 3.0.0 with API Manager 1.9.0 not working

I am trying to use DAS 3.0.0 as a replacement for BAM with WSO2 API Manager 1.9.0/1.9.1, with Oracle for WSO2AM_STATS_DB.
I am following http://blog.rukspot.com/2015/09/publishing-apim-runtime-statistics-to.html
I can see data in DAS's Carbon dashboard in the Data Explorer tables ORG_WSO2_APIMGT_STATISTICS_REQUEST and ORG_WSO2_APIMGT_STATISTICS_RESPONSE.
But the data is not stored in Oracle, so I am not able to see statistics in the API Manager publisher. It keeps saying "Data publishing is enabled. Generate some traffic to see statistics."
I am getting the following error in the log:
[2015-12-08 13:00:00,022] INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Executing the schedule task for: APIM_STAT_script for tenant id: -1234
[2015-12-08 13:00:00,037] INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Executing the schedule task for: Throttle_script for tenant id: -1234
Exception in thread "dag-scheduler-event-loop" java.lang.NoClassDefFoundError: org/xerial/snappy/SnappyInputStream
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:274)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:66)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
    at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
    at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
    at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
    at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
    at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1291)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:874)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:815)
    at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1426)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Caused by: java.lang.ClassNotFoundException: org.xerial.snappy.SnappyInputStream cannot be found by spark-core_2.10_1.4.1.wso2v1
    at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
    at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
    at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
    at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 15 more
Am I missing something?
Can anyone please help me to figure out this issue?
Thanks in advance.
Move all the libraries (jars) into your project's /WEB-INF/lib; everything under /WEB-INF/lib then ends up on the classpath.
Use the snappy-java jar and it will work as you want.
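If the project is built with Maven, the snappy-java jar can be pulled in with the dependency below (a sketch; the version shown is illustrative, pick one matching your environment):

<!-- snappy-java provides org.xerial.snappy.SnappyInputStream. -->
<dependency>
    <groupId>org.xerial.snappy</groupId>
    <artifactId>snappy-java</artifactId>
    <version>1.1.1.7</version> <!-- illustrative version -->
</dependency>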

WebSocket with Spring 4.0 with MessageBroker and WebServers

I've downloaded the latest version of the Spring WebSocket portfolio example from
https://github.com/rstoyanchev/spring-websocket-portfolio.
I'm running the application on Tomcat 8.0 RC5 and using the message broker RabbitMQ 3.1.5.
The simulation runs only 2-3 times; after that it stops working due to the error below. I also tried running the application on GlassFish 4.0, but the same error occurs. Can you please help me figure out what the reason could be?
15:47:35 [BrokerWebSocketChannel-1] ExceptionWebSocketHandlerDecorator - Unhandled error for ExceptionWebSocketHandlerDecorator [delegate=LoggingWebSocketHandlerDecorator [delegate=org.springframework.messaging.handler.websocket.SubProtocolWebSocketHandler#1d2bde4]]
java.lang.IllegalStateException: The WebSocket session has been closed and no method (apart from close()) may be called on a closed session
    at org.apache.tomcat.websocket.WsSession.checkState(WsSession.java:642)
    at org.apache.tomcat.websocket.WsSession.getNegotiatedSubprotocol(WsSession.java:297)
    at org.springframework.web.socket.adapter.StandardWebSocketSession.getAcceptedProtocol(StandardWebSocketSession.java:113)
    at org.springframework.web.socket.sockjs.transport.session.WebSocketServerSockJsSession.getAcceptedProtocol(WebSocketServerSockJsSession.java:91)
    at org.springframework.messaging.handler.websocket.SubProtocolWebSocketHandler.findProtocolHandler(SubProtocolWebSocketHandler.java:149)
    at org.springframework.messaging.handler.websocket.SubProtocolWebSocketHandler.afterConnectionClosed(SubProtocolWebSocketHandler.java:224)
    at org.springframework.web.socket.support.WebSocketHandlerDecorator.afterConnectionClosed(WebSocketHandlerDecorator.java:69)
    at org.springframework.web.socket.support.LoggingWebSocketHandlerDecorator.afterConnectionClosed(LoggingWebSocketHandlerDecorator.java:74)
    at org.springframework.web.socket.support.ExceptionWebSocketHandlerDecorator.afterConnectionClosed(ExceptionWebSocketHandlerDecorator.java:89)
    at org.springframework.web.socket.sockjs.transport.session.AbstractSockJsSession.close(AbstractSockJsSession.java:237)
    at org.springframework.messaging.simp.stomp.StompProtocolHandler.handleMessageToClient(StompProtocolHandler.java:183)
    at org.springframework.messaging.handler.websocket.SubProtocolWebSocketHandler.handleMessage(SubProtocolWebSocketHandler.java:194)
    at org.springframework.messaging.support.channel.ExecutorSubscribableChannel$1.run(ExecutorSubscribableChannel.java:80)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
