Direct binding deployment with Kafka as message bus - spring-xd

I am trying to use direct binding on the following stream.
stream create --definition "time | log" --name ticktock
stream deploy ticktock --properties module.*.count=0
Deployment fails with this exception on both the admin and the container nodes:
java.lang.IllegalArgumentException: Module count cannot be zero
at org.springframework.xd.dirt.integration.kafka.KafkaMessageBus$KafkaPropertiesAccessor.getNumberOfKafkaPartitionsForProducer(KafkaMessageBus.java:799)
at org.springframework.xd.dirt.integration.kafka.KafkaMessageBus.bindProducer(KafkaMessageBus.java:500)
at org.springframework.xd.dirt.plugins.AbstractMessageBusBinderPlugin.bindMessageProducer(AbstractMessageBusBinderPlugin.java:287)
at org.springframework.xd.dirt.plugins.AbstractMessageBusBinderPlugin.bindConsumerAndProducers(AbstractMessageBusBinderPlugin.java:143)
at org.springframework.xd.dirt.plugins.stream.StreamPlugin.postProcessModule(StreamPlugin.java:73)
at org.springframework.xd.dirt.module.ModuleDeployer.postProcessModule(ModuleDeployer.java:238)
at org.springframework.xd.dirt.module.ModuleDeployer.doDeploy(ModuleDeployer.java:218)
at org.springframework.xd.dirt.module.ModuleDeployer.deploy(ModuleDeployer.java:200)
at org.springframework.xd.dirt.server.container.DeploymentListener.deployModule(DeploymentListener.java:365)
at org.springframework.xd.dirt.server.container.DeploymentListener.deployStreamModule(DeploymentListener.java:334)
at org.springframework.xd.dirt.server.container.DeploymentListener.onChildAdded(DeploymentListener.java:181)
at org.springframework.xd.dirt.server.container.DeploymentListener.childEvent(DeploymentListener.java:149)
at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:509)
at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:503)
at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92)
at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:500)
at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35)
at org.apache.curator.framework.recipes.cache.PathChildrenCache$10.run(PathChildrenCache.java:762)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I have a Spring XD (1.2.0) cluster with one admin and two container nodes using Kafka as the message bus.
Am I doing something wrong? Or is there a problem with direct binding and the Kafka message bus?

Per the documentation, the XD KafkaMessageBus does not currently support direct binding...
NOTE: The Kafka message bus does not support count=0 for module deployments, and therefore, it does not support direct binding of modules. This feature will be available in a future release. In the meantime, if direct communication between modules is necessary for Kafka deployments, composite modules should be used instead.
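In the meantime, the composite-module workaround from that note can be applied to the stream in the question. A rough sketch using the XD shell follows (the composite name ticktockcomposite is just an illustrative choice); modules inside a composed module communicate over local channels, so no message-bus binding, and hence no partition count, is involved:
module compose ticktockcomposite --definition "time | log"
stream create --definition "ticktockcomposite" --name ticktock
stream deploy ticktock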

Related

NiFi processor threads are getting blocked during session commit

I am using a NiFi process flow to consume data from Kafka; the messages then go through multiple processors, which internally call a Spring Boot service to perform the business logic.
I am facing a thread-blocking issue while running this flow on a 3-node NiFi cluster. Please find the thread dump below.
"Timer-Driven Process Thread-490" #903 prio=5 os_prio=0 tid=0x00007fdf181bb000 nid=0xc99 waiting for monitor entry [0x00007fde879fa000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.nifi.wali.LengthDelimitedJournal.fsync(LengthDelimitedJournal.java:356)
- waiting to lock <0x0000000748ad8080> (a org.apache.nifi.wali.LengthDelimitedJournal)
at org.apache.nifi.wali.SequentialAccessWriteAheadLog.update(SequentialAccessWriteAheadLog.java:124)
at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.updateRepository(WriteAheadFlowFileRepository.java:300)
at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.updateRepository(WriteAheadFlowFileRepository.java:257)
at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:406)
at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:342)
- locked <0x000000073e98e7f8> (a org.apache.nifi.controller.repository.StandardProcessSession)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Spring-Kafka with transactions, multiple threads

I have a case where the application consumes messages and produces messages in response to the consumed messages. This is done using Kafka transactions, but the app also has a scheduled job that sends Kafka messages at regular intervals (also using transactions, since it sends to two topics).
When the scheduled job starts sending, I get this exception:
org.apache.kafka.common.KafkaException: TransactionalId aura-transaction-1: Invalid transition attempted from state IN_TRANSACTION to state IN_TRANSACTION
Anyone know what might be the reason?
I'm considering trying different KafkaTemplates (plus a separate producer factory) to see if that fixes the issue, since that would let me assign a new transaction-id-prefix to the scheduled job. Currently they share the same prefix.
The consumer uses a basic @KafkaListener that is already running in a transaction started by the KafkaMessageListenerContainer. It then produces a message using KafkaTemplate.send(Object).
The scheduled job uses the KafkaTemplate.executeInTransaction functionality and sends to two topics.
Versions:
Spring Boot 2.1.1
Spring Kafka: 2.2.2
StackTrace:
org.apache.kafka.common.KafkaException: TransactionalId person-identhendelse-lager-1.privat-person-fregIdenthendelse-v1.0: Invalid transition attempted from state IN_TRANSACTION to state IN_TRANSACTION
at org.apache.kafka.clients.producer.internals.TransactionManager.transitionTo(TransactionManager.java:758)
at org.apache.kafka.clients.producer.internals.TransactionManager.transitionTo(TransactionManager.java:751)
at org.apache.kafka.clients.producer.internals.TransactionManager.beginTransaction(TransactionManager.java:216)
at org.apache.kafka.clients.producer.KafkaProducer.beginTransaction(KafkaProducer.java:606)
at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.beginTransaction(DefaultKafkaProducerFactory.java:459)
at org.springframework.kafka.core.KafkaTemplate.executeInTransaction(KafkaTemplate.java:278)
at no.nav.person.identhendelse.lager.app.aggregat.AggregatIdenthendelsePublisher.sendForPerson(AggregatIdenthendelsePublisher.java:52)
at no.nav.person.identhendelse.lager.app.aggregat.AggregatScheduledTask.aggregate(AggregatScheduledTask.java:54)
at no.nav.person.identhendelse.lager.app.aggregat.AggregatScheduledTask$$FastClassBySpringCGLIB$$7f682c33.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:88)
at io.micrometer.core.aop.TimedAspect.timedMethod(TimedAspect.java:77)
at sun.reflect.GeneratedMethodAccessor58.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:644)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:633)
at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:294)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at no.nav.person.utils.precondition.feature.annotation.PreconditionMethodInterceptor.invoke(PreconditionMethodInterceptor.java:22)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
at no.nav.person.identhendelse.lager.app.aggregat.AggregatScheduledTask$$EnhancerBySpringCGLIB$$e0b597f7.aggregate(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Added example code:
https://github.com/Lg87/kafka-transaction-example
See readme.md and FIND KafkaException to see the exception that occurs.
When asking questions like this, always provide version information.
Show your code and the complete stack trace.
You mentioned transactionTemplate - don't use a template as well as executeInTransaction - they are redundant since they both start a transaction.
We recently fixed a problem where such "nested" transactions were broken.
EDIT
I found the problem; when using producerPerConsumerPartition (default true), producers used by the container should not be added to the cache for use by arbitrary KafkaTemplate operations.
As a work-around, use a different DefaultKafkaProducerFactory for the stand-alone template operations.
https://github.com/spring-projects/spring-kafka/issues/908
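As a rough illustration of that work-around (a sketch only; the property values and the prefix scheduled-tx- are assumptions, not taken from the example repository), the scheduled job gets its own DefaultKafkaProducerFactory and KafkaTemplate with a distinct transaction-id-prefix:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;

@Configuration
public class ScheduledProducerConfig {

    // Separate producer factory so the scheduled job does not reuse the
    // transactional producers cached for the listener container.
    @Bean
    public DefaultKafkaProducerFactory<String, String> scheduledProducerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> factory = new DefaultKafkaProducerFactory<>(props);
        factory.setTransactionIdPrefix("scheduled-tx-"); // distinct prefix for the scheduled job
        return factory;
    }

    @Bean
    public KafkaTemplate<String, String> scheduledKafkaTemplate(
            DefaultKafkaProducerFactory<String, String> scheduledProducerFactory) {
        return new KafkaTemplate<>(scheduledProducerFactory);
    }
}

The scheduled job would then call scheduledKafkaTemplate.executeInTransaction(...) exactly as before, sending to both topics inside the callback, while the listener container keeps using its own transactional producers.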

Schema exception on Kafka jdbc sink connect

I am trying to insert rows into my Oracle table using the Kafka JDBC sink connector. I have messages in my Kafka topic (JSON) like the one below:
[{"f1":"qws","f2":"zcz","f3":"SDFF","f4":"f33bfed577bcd7c4625479bd3cd13323--1132061303","f5":null,"f6":null,"f7":"ghSDAgh/akdjytfd/jhsgd","f8":"hsfgd/sdfjghsfjd/jsg","f9":null,"f10":"ASD","f11":"sdfg/vbnm","f12":"S","startTime":"2018-01-30T05:24:41.162","_startTime":"DATE","f13":219,"f14":"http://192.168.0.1:1234/asd/fgh/jkl/zxc/vbn/qwe/rty","f15":"fe80:0:0:0:7501:14d9:b44b:2a95%eth5","f16":1234,"f17":"ABCD-1234","f18":"192.168.0.1","f19":"sdfgd","dfgVO":{"fa1":null,"fa2":"formats","fa3":""qwe.rty.uiop.asd.fgh.jkl.zxc.vbn.asdf#61e97f29"","fa4":7,"fa5":79,"fa6":null,"fa7":"{}","fa8":1517289881381},"f20":null,"f21":"http-drte-1234-uik-7","f22":false,"f23":false,"f24":false}]
My connector configuration is shown below:
name=jdbc-sink-2
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=my_topic_1
connection.url=jdbc:oracle:thin:#192.168.0.1:1521:user01
connection.user=USER1
connection.password=PASSWD1
auto.create=true
table.name.format=MY_TABLE_2
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
producer.retries=1
When I start the connector, I get the error below:
[2018-01-30 11:16:55,417] ERROR Task jdbc-sink-2 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:308)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:406)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:250)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2018-01-30 11:16:55,422] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:149)
Then I added the following settings to my existing connector configuration:
key.converter.schemas.enable=false
value.converter.schemas.enable=false
Now I am getting another error, shown below:
[2018-01-30 11:36:58,118] ERROR Task jdbc-sink-2 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:455)
org.apache.kafka.connect.errors.ConnectException: No fields found using key and value schemas for table: MY_TABLE_2
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:190)
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:58)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:65)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:62)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:66)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:435)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2018-01-30 11:36:58,123] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerSinkTask:456)
[2018-01-30 11:36:58,124] ERROR Task jdbc-sink-2 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:457)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2018-01-30 11:36:58,125] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:149)
This suggests that I need to reshape my Kafka messages into the key/value schema format. I cannot modify my Kafka message format since the messages are published by someone else. How can I fix this error?
Thank you.
Per the documentation, if you want to use the JDBC sink, you need to provide a schema. You can do this either using Avro with the Schema Registry, or using JSON with an embedded schema. You can see a sample of the expected JSON structure here.
Where is your data coming from? If it's a Kafka Connect source, you can just use Avro or JSON with schemas enabled. If it's coming from elsewhere, you'll need to amend the producer so that the data includes a schema; the Avro serialiser provided with the Schema Registry can do just this for you.
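For illustration, a sketch of what one of these messages would look like with an embedded schema (only two of the fields from the question are shown, and the types are assumptions):

{
  "schema": {
    "type": "struct",
    "name": "my_topic_1_record",
    "optional": false,
    "fields": [
      { "field": "f1", "type": "string", "optional": true },
      { "field": "f13", "type": "int32", "optional": true }
    ]
  },
  "payload": {
    "f1": "qws",
    "f13": 219
  }
}

With messages in this shape, value.converter.schemas.enable stays at its default of true and the sink can derive the table columns from the schema.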

How can I use spring-boot/spring-data-cassandra to connect to Cassandra 3.3.0? [duplicate]

I am trying to configure Spring Data with Cassandra, but I am getting the error below when my app is deployed in Tomcat.
When I check the connection, the given port (127.0.0.1:9042) is reachable. I have included the stack trace and Spring configuration below.
Does anyone have an idea what causes this error?
Full stack trace :
2015-12-06 17:46:25 ERROR web.context.ContextLoader:331 - Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cassandraSession': Invocation of init method failed; nested exception is com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1572)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:539)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:303)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:299)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:736)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:759)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:480)
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:434)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:306)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:106)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4994)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5492)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1245)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1895)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:223)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1230)
at com.datastax.driver.core.Cluster.init(Cluster.java:157)
at com.datastax.driver.core.Cluster.connect(Cluster.java:245)
at com.datastax.driver.core.Cluster.connect(Cluster.java:278)
at org.springframework.cassandra.config.CassandraCqlSessionFactoryBean.afterPropertiesSet(CassandraCqlSessionFactoryBean.java:82)
at org.springframework.data.cassandra.config.CassandraSessionFactoryBean.afterPropertiesSet(CassandraSessionFactoryBean.java:43)
===================================================================
Spring Configuration :
<?xml version="1.0" encoding="UTF-8"?>
<beans:beans ...>
    <cassandra:cluster id="cassandraCluster" contact-points="127.0.0.1" port="9042" />
    <cassandra:converter />
    <cassandra:session id="cassandraSession" cluster-ref="cassandraCluster" keyspace-name="blood" />
    <cassandra:template id="cqlTemplate" />
    <cassandra:repositories base-package="com.blood.dao.nosql" />
    <cassandra:mapping entity-base-packages="com.blood.domain.nosql" />
</beans:beans>
The problem is that Spring Data Cassandra (as of December 2015 when I write this) does not provide support for Cassandra 3.x. Here's an excerpt from a conversation with one of the developers in the #spring channel on freenode:
[13:49] <_amicable> Hi all, does anybody know if spring data cassandra supports cassandra 3.x? All dependencies & datastax drivers seem to be 2.x
[13:49] <#_ollie> amicable: Not in the near future.
[13:49] <_amicable> _ollie: thanks.
[13:50] <_amicable> I'll go and look at the relative merits of 2.x vs 3.x then
;)
[13:51] <#_ollie> SD Cassandra is a community project (so far) and its progress highly depends on how much time the developers can actually spend on it.
[13:51] <#_ollie> We will have someone joining the team in February 2016 to get the project more closely aligned to the core Spring Data projects.
It looks like you are using an older version of the driver with Cassandra 3.0. Cassandra 3.0 changed its internal schema metadata representation, and only the latest drivers can parse this metadata.
Use Java Cassandra driver 3.0.0-alpha5 to connect to Cassandra 3.0.
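For a Maven build that would mean overriding the driver version, roughly like this (assuming Maven; the coordinates are the standard DataStax Java driver artifact):

<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>3.0.0-alpha5</version>
</dependency>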

Elasticsearch no node available (after power outage)

Yesterday we had a power outage and were able to get all of our machines back online with the exception of one box.
When firing up our application, we see the following in the log:
Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [com.levelsbeyond.search.elasticsearch.ElasticSearchTransportClientProvider]: Constructor threw exception; nested exception is org.elasticsearch.client.transport.NoNodeAvailableException: No node available (org.mule.api.lifecycle.InitialisationException)
at org.mule.config.builders.AbstractConfigurationBuilder.configure(AbstractConfigurationBuilder.java:52)
at org.mule.config.builders.AbstractResourceConfigurationBuilder.configure(AbstractResourceConfigurationBuilder.java:78)
at org.mule.context.DefaultMuleContextFactory.createMuleContext(DefaultMuleContextFactory.java:97)
at org.mule.config.builders.MuleXmlBuilderContextListener.createMuleContext(MuleXmlBuilderContextListener.java:169)
at org.mule.config.builders.MuleXmlBuilderContextListener.initialize(MuleXmlBuilderContextListener.java:98)
at org.mule.config.builders.MuleXmlBuilderContextListener.contextInitialized(MuleXmlBuilderContextListener.java:74)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4939)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5434)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:983)
at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1660)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
and then, consistently repeated, we see:
2014-09-19 11:35:19,200 WARN [org.elasticsearch.transport.netty] (elasticsearch[Dominic Fortune][transport_client_worker][T#5]{New I/O worker #5}) - <[Dominic Fortune] Message not fully read (response) for [12] handler future(org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler$1#625badaa), error [true], resetting>
Everything was working perfectly fine until the power outage. This is a single-node cluster running on the same machine as the Java application (CentOS 6.5), so I know this isn't the same issue you keep finding on SO and on Google, where the problem is caused by mismatched versions of Elasticsearch and/or Java.
Does anyone know how to recover from this and get back up and running?
Thanks.
Turns out that when the power went out, the restart triggered an auto-update of Elasticsearch, and the upgraded version didn't support the transport drivers in use.
