I am using a NiFi process flow that consumes data from Kafka; the messages then go through multiple processors which internally call a Spring Boot service to perform the business logic.
I am facing a thread-blocking issue while running this flow on a 3-node NiFi cluster. Please find the thread dump below.
"Timer-Driven Process Thread-490" #903 prio=5 os_prio=0 tid=0x00007fdf181bb000 nid=0xc99 waiting for monitor entry [0x00007fde879fa000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.nifi.wali.LengthDelimitedJournal.fsync(LengthDelimitedJournal.java:356)
- waiting to lock <0x0000000748ad8080> (a org.apache.nifi.wali.LengthDelimitedJournal)
at org.apache.nifi.wali.SequentialAccessWriteAheadLog.update(SequentialAccessWriteAheadLog.java:124)
at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.updateRepository(WriteAheadFlowFileRepository.java:300)
at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.updateRepository(WriteAheadFlowFileRepository.java:257)
at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:406)
at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:342)
- locked <0x000000073e98e7f8> (a org.apache.nifi.controller.repository.StandardProcessSession)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I have a case where the application consumes messages and produces messages in response to the consumed messages. This is done using Kafka transactions, but the app also has a scheduled job that sends Kafka messages at regular intervals (also using transactions, since it sends to two topics).
When the scheduled job starts sending, I get this exception:
org.apache.kafka.common.KafkaException: TransactionalId aura-transaction-1: Invalid transition attempted from state IN_TRANSACTION to state IN_TRANSACTION
Does anyone know what the reason might be?
I'm considering trying separate KafkaTemplates (each with its own producer factory) to see if that fixes the issue, since that would let me assign a distinct transaction-id-prefix to the scheduled job. Currently they share the same prefix.
The consumer uses a basic @KafkaListener that is already enrolled in a transaction started by the KafkaMessageListenerContainer. It then produces a message using KafkaTemplate.send(Object).
The scheduled job uses the KafkaTemplate.executeInTransaction functionality and sends to two topics.
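For context, the two producing paths look roughly like the sketch below. This is only a minimal sketch: the class, topic and payload names are invented, and configuration such as the transaction-id-prefix and @EnableScheduling is omitted.

// Minimal sketch of the two producing paths described above (names are made up).
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ProducingPaths {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public ProducingPaths(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Path 1: the listener container has already started a Kafka transaction,
    // so this send participates in that transaction.
    @KafkaListener(topics = "inbound-topic")
    public void onMessage(String payload) {
        kafkaTemplate.send("response-topic", payload);
    }

    // Path 2: runs on a scheduler thread and starts its own local transaction.
    @Scheduled(fixedDelay = 60000)
    public void scheduledJob() {
        kafkaTemplate.executeInTransaction(ops -> {
            ops.send("topic-a", "payload-a");
            ops.send("topic-b", "payload-b");
            return null;
        });
    }
}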
Versions:
Spring Boot 2.1.1
Spring Kafka: 2.2.2
StackTrace:
org.apache.kafka.common.KafkaException: TransactionalId person-identhendelse-lager-1.privat-person-fregIdenthendelse-v1.0: Invalid transition attempted from state IN_TRANSACTION to state IN_TRANSACTION
at org.apache.kafka.clients.producer.internals.TransactionManager.transitionTo(TransactionManager.java:758)
at org.apache.kafka.clients.producer.internals.TransactionManager.transitionTo(TransactionManager.java:751)
at org.apache.kafka.clients.producer.internals.TransactionManager.beginTransaction(TransactionManager.java:216)
at org.apache.kafka.clients.producer.KafkaProducer.beginTransaction(KafkaProducer.java:606)
at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.beginTransaction(DefaultKafkaProducerFactory.java:459)
at org.springframework.kafka.core.KafkaTemplate.executeInTransaction(KafkaTemplate.java:278)
at no.nav.person.identhendelse.lager.app.aggregat.AggregatIdenthendelsePublisher.sendForPerson(AggregatIdenthendelsePublisher.java:52)
at no.nav.person.identhendelse.lager.app.aggregat.AggregatScheduledTask.aggregate(AggregatScheduledTask.java:54)
at no.nav.person.identhendelse.lager.app.aggregat.AggregatScheduledTask$$FastClassBySpringCGLIB$$7f682c33.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:88)
at io.micrometer.core.aop.TimedAspect.timedMethod(TimedAspect.java:77)
at sun.reflect.GeneratedMethodAccessor58.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:644)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:633)
at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:294)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at no.nav.person.utils.precondition.feature.annotation.PreconditionMethodInterceptor.invoke(PreconditionMethodInterceptor.java:22)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
at no.nav.person.identhendelse.lager.app.aggregat.AggregatScheduledTask$$EnhancerBySpringCGLIB$$e0b597f7.aggregate(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Added example code:
https://github.com/Lg87/kafka-transaction-example
See the readme.md and search for "KafkaException" to see the exception that occurs.
When asking questions like this, always provide version information.
Show your code and the complete stack trace.
You mentioned transactionTemplate - don't use a template as well as executeInTransaction - they are redundant since they both start a transaction.
We recently fixed a problem where such "nested" transactions were broken.
EDIT
I found the problem: when using producerPerConsumerPartition (default true), producers used by the container should not be added to the cache for use by arbitrary KafkaTemplate operations.
As a work-around, use a different DefaultKafkaProducerFactory for the stand-alone template operations.
https://github.com/spring-projects/spring-kafka/issues/908
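A rough sketch of that work-around is below; the bean names, bootstrap servers and transaction-id prefix are illustrative, not taken from the linked project. The idea is simply to give the stand-alone/scheduled sends their own producer factory and KafkaTemplate, so they never reuse producers cached for the listener container.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class SchedulerKafkaConfig {

    // A dedicated producer factory for the scheduled/stand-alone sends.
    @Bean
    public ProducerFactory<String, String> schedulerProducerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> factory =
                new DefaultKafkaProducerFactory<>(props);
        // A distinct transactional-id prefix keeps these transactions separate
        // from the ones managed by the listener container.
        factory.setTransactionIdPrefix("scheduler-tx-");
        return factory;
    }

    // Inject this template into the scheduled job instead of the shared one.
    @Bean
    public KafkaTemplate<String, String> schedulerKafkaTemplate(
            ProducerFactory<String, String> schedulerProducerFactory) {
        return new KafkaTemplate<>(schedulerProducerFactory);
    }
}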
I am trying to insert rows into my Oracle table using the Kafka JDBC sink connector. I have messages in my Kafka topic (JSON) like the one below:
[{"f1":"qws","f2":"zcz","f3":"SDFF","f4":"f33bfed577bcd7c4625479bd3cd13323--1132061303","f5":null,"f6":null,"f7":"ghSDAgh/akdjytfd/jhsgd","f8":"hsfgd/sdfjghsfjd/jsg","f9":null,"f10":"ASD","f11":"sdfg/vbnm","f12":"S","startTime":"2018-01-30T05:24:41.162","_startTime":"DATE","f13":219,"f14":"http://192.168.0.1:1234/asd/fgh/jkl/zxc/vbn/qwe/rty","f15":"fe80:0:0:0:7501:14d9:b44b:2a95%eth5","f16":1234,"f17":"ABCD-1234","f18":"192.168.0.1","f19":"sdfgd","dfgVO":{"fa1":null,"fa2":"formats","fa3":""qwe.rty.uiop.asd.fgh.jkl.zxc.vbn.asdf#61e97f29"","fa4":7,"fa5":79,"fa6":null,"fa7":"{}","fa8":1517289881381},"f20":null,"f21":"http-drte-1234-uik-7","f22":false,"f23":false,"f24":false}]
My connector configuration is as below:
name=jdbc-sink-2
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=my_topic_1
connection.url=jdbc:oracle:thin:@192.168.0.1:1521:user01
connection.user=USER1
connection.password=PASSWD1
auto.create=true
table.name.format=MY_TABLE_2
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
producer.retries=1
When I start the connector, I get the error below:
[2018-01-30 11:16:55,417] ERROR Task jdbc-sink-2 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:308)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:406)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:250)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2018-01-30 11:16:55,422] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:149)
Then I added the configurations below to my existing connector configuration:
key.converter.schemas.enable=false
value.converter.schemas.enable=false
Now I am getting another error:
[2018-01-30 11:36:58,118] ERROR Task jdbc-sink-2 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:455)
org.apache.kafka.connect.errors.ConnectException: No fields found using key and value schemas for table: MY_TABLE_2
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:190)
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:58)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:65)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:62)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:66)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:435)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2018-01-30 11:36:58,123] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerSinkTask:456)
[2018-01-30 11:36:58,124] ERROR Task jdbc-sink-2 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:457)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2018-01-30 11:36:58,125] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:149)
This says that I need to modify my Kafka messages to include key/value schema information. I cannot modify the message format, since the messages are published by someone else. How can I fix this error?
Thank you.
Per the documentation, if you want to use the JDBC sink, you need to provide a schema. You can do this either using Avro with the Schema Registry, or using JSON with an embedded schema. You can see a sample of the expected JSON structure here.
Where is your data coming from? If it's a Kafka Connect source, you can just use Avro or JSON with schemas enabled. If it's produced elsewhere, you'll need to amend the producer so that the data includes a schema; the Avro serialiser provided with the Schema Registry can do just this for you.
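For illustration, the envelope that the JsonConverter expects when schemas.enable=true looks roughly like the following. The schema here covers only a made-up subset of the fields in the message above, so treat it as a shape to follow rather than a ready-to-use schema.

{
  "schema": {
    "type": "struct",
    "name": "my_topic_1_value",
    "fields": [
      { "field": "f1", "type": "string", "optional": true },
      { "field": "f13", "type": "int32", "optional": true },
      { "field": "startTime", "type": "string", "optional": true }
    ]
  },
  "payload": {
    "f1": "qws",
    "f13": 219,
    "startTime": "2018-01-30T05:24:41.162"
  }
}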
The Grails project that I am working on has its WAR deployed to multiple Tomcat instances with session replication enabled.
We have made few changes recently related to sessions, except for starting to use useToken in all g:form tags. (There may be others, but an initial look at the recent code submissions could not find anything related.)
I understand that the error below is related to session objects not being serializable, but I am not sure where to begin looking.
Does anyone have experience with the error below in Grails?
Could it be related to the useToken in the forms?
Is this a known limitation of useToken in g:form, namely that the tokens do not support serialization?
Alternatively, how can I find all objects in the session to see which one is causing this error?
org.apache.catalina.ha.session.DeltaManager.requestCompleted Unable to serialize delta request for sessionid [6F6A26B3FF57A901F5D868FB68CA4A6F]
java.io.NotSerializableException: groovy.lang.MapWithDefault
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1183)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
at org.apache.catalina.ha.session.DeltaRequest$AttributeInfo.writeExternal(DeltaRequest.java:384)
at org.apache.catalina.ha.session.DeltaRequest.writeExternal(DeltaRequest.java:277)
at org.apache.catalina.ha.session.DeltaRequest.serialize(DeltaRequest.java:291)
at org.apache.catalina.ha.session.DeltaManager.serializeDeltaRequest(DeltaManager.java:617)
at org.apache.catalina.ha.session.DeltaManager.requestCompleted(DeltaManager.java:1000)
at org.apache.catalina.ha.session.DeltaManager.requestCompleted(DeltaManager.java:965)
at org.apache.catalina.ha.tcp.ReplicationValve.send(ReplicationValve.java:525)
at org.apache.catalina.ha.tcp.ReplicationValve.sendMessage(ReplicationValve.java:513)
at org.apache.catalina.ha.tcp.ReplicationValve.sendSessionReplicationMessage(ReplicationValve.java:495)
at org.apache.catalina.ha.tcp.ReplicationValve.sendReplicationMessage(ReplicationValve.java:406)
at org.apache.catalina.ha.tcp.ReplicationValve.invoke(ReplicationValve.java:329)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:537)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1085)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:658)
at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:222)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1556)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1513)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Found the answer: This is a known issue with Grails 2.2.1.
See https://jira.grails.org/browse/GRAILS-9923
So essentially, if you are using Grails 2.2.1 with session replication enabled across the cluster and are using form tokens (useToken), it will break session replication.
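To address the "how can I find the offending object" part of the question: a crude, temporary check such as the sketch below (plain Servlet API; the filter name and logging are hypothetical, and it still needs to be registered in web.xml or equivalent) can log which session attributes fail Java serialization.

// Hypothetical diagnostic filter: tries to serialize every session attribute
// after each request and logs the ones that fail, i.e. the ones that would
// break delta session replication.
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.util.Enumeration;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class SessionSerializabilityFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        chain.doFilter(req, res);
        HttpSession session = ((HttpServletRequest) req).getSession(false);
        if (session == null) {
            return;
        }
        Enumeration<String> names = session.getAttributeNames();
        while (names.hasMoreElements()) {
            String name = names.nextElement();
            Object value = session.getAttribute(name);
            try (ObjectOutputStream oos = new ObjectOutputStream(new NullOutputStream())) {
                oos.writeObject(value);
            } catch (Exception e) {
                System.err.println("Session attribute '" + name + "' ("
                        + (value == null ? "null" : value.getClass().getName())
                        + ") is not serializable: " + e);
            }
        }
    }

    // Discards everything written to it; we only care whether serialization throws.
    private static final class NullOutputStream extends OutputStream {
        @Override
        public void write(int b) {
        }
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}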
I am trying to use direct binding on the following stream.
stream create --definition "time | log" --name ticktock
stream deploy ticktock --properties module.*.count=0
Deployment fails with this exception on both the admin and the container nodes:
java.lang.IllegalArgumentException: Module count cannot be zero
at org.springframework.xd.dirt.integration.kafka.KafkaMessageBus$KafkaPropertiesAccessor.getNumberOfKafkaPartitionsForProducer(KafkaMessageBus.java:799)
at org.springframework.xd.dirt.integration.kafka.KafkaMessageBus.bindProducer(KafkaMessageBus.java:500)
at org.springframework.xd.dirt.plugins.AbstractMessageBusBinderPlugin.bindMessageProducer(AbstractMessageBusBinderPlugin.java:287)
at org.springframework.xd.dirt.plugins.AbstractMessageBusBinderPlugin.bindConsumerAndProducers(AbstractMessageBusBinderPlugin.java:143)
at org.springframework.xd.dirt.plugins.stream.StreamPlugin.postProcessModule(StreamPlugin.java:73)
at org.springframework.xd.dirt.module.ModuleDeployer.postProcessModule(ModuleDeployer.java:238)
at org.springframework.xd.dirt.module.ModuleDeployer.doDeploy(ModuleDeployer.java:218)
at org.springframework.xd.dirt.module.ModuleDeployer.deploy(ModuleDeployer.java:200)
at org.springframework.xd.dirt.server.container.DeploymentListener.deployModule(DeploymentListener.java:365)
at org.springframework.xd.dirt.server.container.DeploymentListener.deployStreamModule(DeploymentListener.java:334)
at org.springframework.xd.dirt.server.container.DeploymentListener.onChildAdded(DeploymentListener.java:181)
at org.springframework.xd.dirt.server.container.DeploymentListener.childEvent(DeploymentListener.java:149)
at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:509)
at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:503)
at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92)
at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:500)
at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35)
at org.apache.curator.framework.recipes.cache.PathChildrenCache$10.run(PathChildrenCache.java:762)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I have a Spring XD (1.2.0) cluster with one admin and two container nodes, using Kafka as the message bus.
Am I doing something wrong? Or is there a problem with direct binding and the Kafka message bus?
Per the documentation, the XD KafkaMessageBus does not currently support direct binding...
NOTE: The Kafka message bus does not support count=0 for module deployments, and therefore, it does not support direct binding of modules. This feature will be available in a future release. In the meantime, if direct communication between modules is necessary for Kafka deployments, composite modules should be used instead.
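If direct, in-process communication between adjacent modules is what you need, composing them into a single module achieves that without a message bus hop. A rough example of the Spring XD 1.x shell commands is below; the module and stream names are made up, and the exact options may vary by release.

xd:> module compose timesource --definition "time | transform --expression=payload"
xd:> stream create --name ticktock2 --definition "timesource | log" --deploy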
Yesterday we had a power outage and were able to get all of our machines back online with the exception of one box.
When firing up our application, we see the following in the log:
Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [com.levelsbeyond.search.elasticsearch.ElasticSearchTransportClientProvider]: Constructor threw exception; nested exception is org.elasticsearch.client.transport.NoNodeAvailableException: No node available (org.mule.api.lifecycle.InitialisationException)
at org.mule.config.builders.AbstractConfigurationBuilder.configure(AbstractConfigurationBuilder.java:52)
at org.mule.config.builders.AbstractResourceConfigurationBuilder.configure(AbstractResourceConfigurationBuilder.java:78)
at org.mule.context.DefaultMuleContextFactory.createMuleContext(DefaultMuleContextFactory.java:97)
at org.mule.config.builders.MuleXmlBuilderContextListener.createMuleContext(MuleXmlBuilderContextListener.java:169)
at org.mule.config.builders.MuleXmlBuilderContextListener.initialize(MuleXmlBuilderContextListener.java:98)
at org.mule.config.builders.MuleXmlBuilderContextListener.contextInitialized(MuleXmlBuilderContextListener.java:74)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4939)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5434)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:983)
at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1660)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
and then, consistently repeated, we see:
2014-09-19 11:35:19,200 WARN [org.elasticsearch.transport.netty] (elasticsearch[Dominic Fortune][transport_client_worker][T#5]{New I/O worker #5}) - <[Dominic Fortune] Message not fully read (response) for [12] handler future(org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler$1#625badaa), error [true], resetting>
Everything was working perfectly fine until the power outage. This is a single node in the cluster, and it is running on the same machine as the Java application (CentOS 6.5), so I know this isn't the same issue you keep finding on SO and on Google that is caused by mismatched versions of Elasticsearch and/or Java.
Does anyone know how to recover from this and get back up and running?
Thanks.
It turns out that when the power went out, the restart triggered an auto-update of Elasticsearch, and the upgraded version didn't support the transport client the application was using.