Exception while consuming data from kafka source in Spring Cloud Data Flow - spring-xd

I am getting a Kafka exception when trying to use a Kafka topic as a source.
Here is how I created the stream:
stream create --definition ":myDestination > log" --name ingest_from_broker
stream deploy ingest_from_broker --properties "spring.cloud.stream.bindings.input.consumer.headerMode=raw"
When I run it, I get the following exception in the log file:
java.lang.StringIndexOutOfBoundsException: String index out of range: 113
at java.lang.String.checkBounds(String.java:385) ~[na:1.8.0_66]
at java.lang.String.<init>(String.java:425) ~[na:1.8.0_66]
at org.springframework.cloud.stream.binder.EmbeddedHeadersMessageConverter.oldExtractHeaders(EmbeddedHeadersMessageConverter.java:132) ~[spring-cloud-stream-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
at org.springframework.cloud.stream.binder.EmbeddedHeadersMessageConverter.extractHeaders(EmbeddedHeadersMessageConverter.java:105) ~[spring-cloud-stream-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
at org.springframework.cloud.stream.binder.AbstractBinder.extractMessageValues(AbstractBinder.java:153) ~[spring-cloud-stream-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder$ReceivingHandler.handleRequestMessage(KafkaMessageChannelBinder.java:698) [spring-cloud-stream-binder-kafka-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:99) [spring-integration-core-4.2.4.RELEASE.jar!/:na]
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127) [spring-integration-core-4.2.4.RELEASE.jar!/:na]
at org.springframework.integration.channel.FixedSubscriberChannel.send(FixedSubscriberChannel.java:69) [spring-integration-core-4.2.4.RELEASE.jar!/:na]
at org.springframework.integration.channel.FixedSubscriberChannel.send(FixedSubscriberChannel.java:63) [spring-integration-core-4.2.4.RELEASE.jar!/:na]
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115) [spring-messaging-4.2.7.RELEASE.jar!/:4.2.7.RELEASE]
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45) [spring-messaging-4.2.7.RELEASE.jar!/:4.2.7.RELEASE]
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105) [spring-messaging-4.2.7.RELEASE.jar!/:4.2.7.RELEASE]
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:105) [spring-integration-core-4.2.4.RELEASE.jar!/:na]
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$300(KafkaMessageDrivenChannelAdapter.java:43) [spring-integration-kafka-1.3.1.RELEASE.jar!/:na]
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$AutoAcknowledgingChannelForwardingMessageListener.doOnMessage(KafkaMessageDrivenChannelAdapter.java:171) [spring-integration-kafka-1.3.1.RELEASE.jar!/:na]
at org.springframework.integration.kafka.listener.AbstractDecodingMessageListener.onMessage(AbstractDecodingMessageListener.java:50) [spring-integration-kafka-1.3.1.RELEASE.jar!/:na]
at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder$4$1.doWithRetry(KafkaMessageChannelBinder.java:516) [spring-cloud-stream-binder-kafka-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:276) [spring-retry-1.1.3.RELEASE.jar!/:na]
at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:157) [spring-retry-1.1.3.RELEASE.jar!/:na]
at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder$4.onMessage(KafkaMessageChannelBinder.java:513) [spring-cloud-stream-binder-kafka-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
at org.springframework.integration.kafka.listener.QueueingMessageListenerInvoker$KafkaMessageDispatchingSubscriber.onNext(QueueingMessageListenerInvoker.java:221) [spring-integration-kafka-1.3.1.RELEASE.jar!/:na]
at org.springframework.integration.kafka.listener.QueueingMessageListenerInvoker$KafkaMessageDispatchingSubscriber.onNext(QueueingMessageListenerInvoker.java:209) [spring-integration-kafka-1.3.1.RELEASE.jar!/:na]
at reactor.core.processor.util.RingBufferSubscriberUtils.route(RingBufferSubscriberUtils.java:67) [reactor-core-2.0.8.RELEASE.jar!/:na]
at reactor.core.processor.RingBufferProcessor$BatchSignalProcessor.run(RingBufferProcessor.java:789) [reactor-core-2.0.8.RELEASE.jar!/:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_66]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_66]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_66]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_66]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66]
I saw some threads mentioning that setting the property 'spring.cloud.stream.bindings.input.consumer.headerMode=raw' would help, but somehow it is not working for me.

Try either
stream create --definition ":myDestination > log --spring.cloud.stream.bindings.input.consumer.headerMode=raw" --name ingest_from_broker
or
stream deploy ingest_from_broker --properties "apps.log.spring.cloud.stream.bindings.input.consumer.headerMode=raw"
i.e. the property must either be specified directly on the application in the stream definition, or, if supplied at deployment time, it must indicate which application it applies to.

Related

microservice showing errors on Dev environment

I have no idea what these errors are or how they relate to the microservice, since I am not doing any message conversion in the code. Any help would be appreciated. Thanks in advance! Any idea on the errors below?
2020-09-08 04:13:54.304 ERROR [uniban-service,129475cb8b32b7ad,0053b1e337a7defb,true] 11 --- [ask-scheduler-4] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Failed to invoke method; nested exception is org.springframework.messaging.converter.MessageConversionException: Could Not Convert Output
at org.springframework.integration.endpoint.MethodInvokingMessageSource.doReceive(MethodInvokingMessageSource.java:115)
at org.springframework.integration.endpoint.AbstractMessageSource.receive(AbstractMessageSource.java:167)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:250)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:359)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.pollForMessage(AbstractPollingEndpoint.java:328)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.lambda$null$1(AbstractPollingEndpoint.java:275)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.lambda$execute$0(ErrorHandlingTaskExecutor.java:57)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:55)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.lambda$createPoller$2(AbstractPollingEndpoint.java:272)
at org.springframework.cloud.sleuth.instrument.async.TraceRunnable.run(TraceRunnable.java:68)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:93)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.springframework.messaging.converter.MessageConversionException: Could Not Convert Output
at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.convertOutputValueIfNecessary(SimpleFunctionRegistry.java:674)
at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.doApply(SimpleFunctionRegistry.java:600)
at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.get(SimpleFunctionRegistry.java:463)
at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.get(SimpleFunctionRegistry.java:448)
at org.springframework.cloud.stream.function.PartitionAwareFunctionWrapper.get(PartitionAwareFunctionWrapper.java:86)
at sun.reflect.GeneratedMethodAccessor79.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:282)
at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:266)
at org.springframework.integration.endpoint.MethodInvokingMessageSource.doReceive(MethodInvokingMessageSource.java:112)
... 19 more
2020-09-08 04:13:54.304 DEBUG [uniban-service,129475cb8b32b7ad,0053b1e337a7defb,true] 11 --- [ask-scheduler-4] o.s.i.channel.PublishSubscribeChannel : postSend (sent=true) on channel 'bean 'errorChannel'', message: ErrorMessage [payload=org.springframework.messaging.MessagingException: Failed to invoke method; nested exception is org.springframework.messaging.converter.MessageConversionException: Could Not Convert Output, headers={b3=129475cb8b32b7ad-0053b1e337a7defb-1, id=35b1a363-aa5e-ada3-0c23-5347c337d951, timestamp=1599552834304}]
2020-09-08 04:13:54.304 DEBUG [uniban-service,129475cb8b32b7ad,0053b1e337a7defb,true] 11 --- [ask-scheduler-4] o.s.c.s.i.m.TracingChannelInterceptor : Will finish the current span after completion LazySpan(129475cb8b32b7ad/0053b1e337a7defb)
I found the bug: it was caused by jolokia-core. When I upgraded Spring Boot, I did not upgrade Spring Boot Admin at the same time.

Exception in publishing message through GCP pubsub

I am trying to publish a message using GCP Pub/Sub, but I am getting the exception below at the publisher's end. The exception happens while sending the message through the PubSubOutboundGateway interface.
2019-12-19 18:47:27.408 WARN 2856 --- [pool-1-thread-2] o.s.c.g.p.c.p.PubSubPublisherTemplate : Publishing to topic_name topic failed.
com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1070)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1138)
at com.google.common.util.concurrent.AbstractFuture.addListener(AbstractFuture.java:707)
at com.google.common.util.concurrent.ForwardingListenableFuture.addListener(ForwardingListenableFuture.java:45)
at com.google.api.core.ApiFutureToListenableFuture.addListener(ApiFutureToListenableFuture.java:52)
at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1051)
at com.google.api.core.ApiFutures.addCallback(ApiFutures.java:63)
at com.google.api.gax.grpc.GrpcExceptionCallable.futureCall(GrpcExceptionCallable.java:67)
at com.google.api.gax.rpc.AttemptCallable.call(AttemptCallable.java:81)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at io.grpc.Status.asRuntimeException(Status.java:526)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:482)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:678)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:397)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:546)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:467)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:584)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
... 7 common frames omitted
Caused by: java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at io.grpc.netty.shaded.io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288)
at io.grpc.netty.shaded.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1128)
at io.grpc.netty.shaded.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 common frames omitted
Is it some port issue? I am doing this for the first time. Can anyone please help?

NiFi ExecuteSQL connecting to HANA fails on creating Avro schema

Reviewing the code, the error gets thrown on an undefined data type that needs to be converted for the Avro schema. However, the only column I am selecting is of type NVARCHAR(5000), which is handled in the code.
2017-04-21 01:33:51,446 WARN [Timer-Driven Process Thread-1] o.a.n.c.t.ContinuallyRunProcessorTask
java.lang.IllegalArgumentException: createSchema: Unknown SQL type 2011 cannot be converted to Avro type
at org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:349) ~[na:na]
at org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:92) ~[na:na]
at org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:87) ~[na:na]
at org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:77) ~[na:na]
at org.apache.nifi.processors.standard.ExecuteSQL$2.process(ExecuteSQL.java:205) ~[na:na]
at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2329) ~[nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:199) ~[na:na]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) ~[nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.2.jar:1.1.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_102]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_102]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_102]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_102]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_102]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_102]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
In the JDBC API from JDK8, NCLOB is 2011 and NVARCHAR is -9:
public static final int NCLOB = 2011;
public static final int NVARCHAR = -9;
It looks like the driver you are using is returning 2011 for the column even though you believe the column is NVARCHAR. I'm not totally sure, but that seems like incorrect behavior from the HANA driver.
It could probably be handled on the NiFi side of things by adding NCLOB to this case statement in JdbcCommon:
case CHAR:
case LONGNVARCHAR:
case LONGVARCHAR:
case NCHAR:
case NVARCHAR:
case VARCHAR:
case CLOB:
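For illustration, here is a self-contained sketch (not the actual JdbcCommon source) of what mapping NCLOB to a nullable Avro string alongside the other character types could look like, using Avro's SchemaBuilder; the record and column names are placeholders:
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import java.sql.Types;

public class NclobToAvroSketch {

    // Hypothetical helper: build a one-field record schema, treating NCLOB
    // (java.sql.Types.NCLOB == 2011) the same way as the other character types,
    // i.e. as a nullable Avro string.
    static Schema schemaFor(String columnName, int sqlType) {
        SchemaBuilder.FieldAssembler<Schema> fields = SchemaBuilder.record("row").fields();
        switch (sqlType) {
            case Types.CHAR:
            case Types.LONGNVARCHAR:
            case Types.LONGVARCHAR:
            case Types.NCHAR:
            case Types.NVARCHAR:
            case Types.VARCHAR:
            case Types.CLOB:
            case Types.NCLOB: // the addition suggested above
                fields = fields.name(columnName).type()
                        .unionOf().nullType().and().stringType().endUnion()
                        .noDefault();
                break;
            default:
                throw new IllegalArgumentException(
                        "createSchema: Unknown SQL type " + sqlType + " cannot be converted to Avro type");
        }
        return fields.endRecord();
    }

    public static void main(String[] args) {
        // With NCLOB handled, a 2011-typed column no longer triggers the exception above.
        System.out.println(schemaFor("SOME_NVARCHAR_COLUMN", Types.NCLOB));
    }
}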

Error retrieving state map in kerberized cluster

HDP-2.5.3.0.
A custom processor uses the State API to persist some data.
try {
    stateMap = stateManager.getState(Scope.CLUSTER);
    stateMapProperties = new HashMap<>(stateMap.toMap());
    logger.debug("Retrieved the statemap : " + stateMapProperties);
    ...
    ...
    ...
} catch (IOException ioe) {
    logger.error("Couldn't load the state map", ioe);
    throw new ProcessException(ioe);
}
The processor works fine on my local machine's NiFi, but when I deploy it on our (Kerberized) dev cluster, which has 2 NiFi nodes, it fails with the following exception:
java.io.IOException: Failed to obtain value from ZooKeeper for component with ID d7fff389-015a-1000-ffff-ffffd04d1279 with exception code NOAUTH
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:420) ~[na:na]
at org.apache.nifi.controller.state.StandardStateManager.getState(StandardStateManager.java:63) ~[na:na]
at com.datalake.processors.SQLServerCDCProcessor.getDataFromChangeTables(SQLServerCDCProcessor.java:480) [nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at com.datalake.processors.SQLServerCDCProcessor.onTrigger(SQLServerCDCProcessor.java:191) [nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.2.jar:1.1.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_112]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
Caused by: org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /nifi/components/d7fff389-015a-1000-ffff-ffffd04d1279
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113) ~[na:na]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) ~[na:na]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155) ~[na:na]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184) ~[na:na]
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:403) ~[na:na]
...
org.apache.nifi.processor.exception.ProcessException: java.io.IOException: Failed to obtain value from ZooKeeper for component with ID d7fff389-015a-1000-ffff-ffffd04d1279 with exception code NOAUTH
at com.datalake.processors.SQLServerCDCProcessor.getDataFromChangeTables(SQLServerCDCProcessor.java:493) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at com.datalake.processors.SQLServerCDCProcessor.onTrigger(SQLServerCDCProcessor.java:191) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.2.jar:1.1.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_112]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
Caused by: java.io.IOException: Failed to obtain value from ZooKeeper for component with ID d7fff389-015a-1000-ffff-ffffd04d1279 with exception code NOAUTH
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:420) ~[na:na]
at org.apache.nifi.controller.state.StandardStateManager.getState(StandardStateManager.java:63) ~[na:na]
at com.datalake.processors.SQLServerCDCProcessor.getDataFromChangeTables(SQLServerCDCProcessor.java:480) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
... 13 common frames omitted
Caused by: org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /nifi/components/d7fff389-015a-1000-ffff-ffffd04d1279
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113) ~[na:na]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) ~[na:na]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155) ~[na:na]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184) ~[na:na]
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:403) ~[na:na]
... 15 common frames omitted
Following are the entries in the state-management.xml
<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String">l4373t.sss.se.scania.com:2181,l4283t.sss.se.scania.com:2181,l4284t.sss.se.scania.com:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">CreatorOnly</property>
</cluster-provider>
Any ideas?
*****Edit-1*****
Adding the ZooKeeper JAAS configuration:
bash-4.2$ cat zookeeper-jaas.conf
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/usr/local/nifi/keys/nifi_l4513t.sss.se.com.keytab"
    storeKey=true
    useTicketCache=true
    principal="nifi/l4513t.sss.se.com@GLOBAL.SCD.COM";
};
The entry (added as 'java.arg.16') in the bootstrap.conf file:
bash-4.2$ vi bootstrap.conf
#
# Java command to use when running NiFi
java=java
# Username to use when running NiFi. This value will be ignored on Windows.
run.as=
# Configure where NiFi's lib and conf directories live
lib.dir=./lib
conf.dir=./conf
# How long to wait after telling NiFi to shutdown before explicitly killing the Process
graceful.shutdown.seconds=20
# Disable JSR 199 so that we can use JSP's without running a JDK
java.arg.1=-Dorg.apache.jasper.compiler.disablejsr199=true
# JVM memory settings
java.arg.2=-Xms1024m
java.arg.3=-Xmx2048m
# Enable Remote Debugging
#java.arg.debug=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000
java.arg.4=-Djava.net.preferIPv4Stack=true
# allowRestrictedHeaders is required for Cluster/Node communications to work properly
java.arg.5=-Dsun.net.http.allowRestrictedHeaders=true
java.arg.6=-Djava.protocol.handler.pkgs=sun.net.www.protocol
java.arg.7=-Dorg.apache.nifi.bootstrap.config.log.dir=/var/log/nifi
# The G1GC is still considered experimental but has proven to be very advantageous in providing great
# performance without significant "stop-the-world" delays.
java.arg.13=-XX:+UseG1GC
#Set headless mode by default
java.arg.14=-Djava.awt.headless=true
java.arg.15=-Djava.security.auth.login.config=/usr/local/nifi/conf/kafka-jaas.conf
java.arg.16=-Djava.security.auth.login.config=/usr/local/nifi/conf/zookeeper-jaas.conf
# Master key in hexadecimal format for encrypted sensitive configuration values
nifi.bootstrap.sensitive.key=
###
# Notification Services for notifying interested parties when NiFi is stopped, started, dies
###
*****Edit-2***** Providing the existing kafka-jaas.conf
bash-4.2$ cat kafka-jaas.conf
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    renewTicket=true
    useTicketCache=true
    serviceName="kafka"
    keyTab="/usr/local/nifi/keys/nifi_l4513t.sss.se.com.keytab"
    principal="nifi/l4513t.sss.se.com@GLOBAL.SCD.COM";
};
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    useTicketCache=true
    renewTicket=true
    serviceName="kafka"
    keyTab="/usr/local/nifi/keys/nifi_l4513t.sss.se.com.keytab"
    principal="nifi/l4513t.sss.se.com@GLOBAL.SCD.COM";
};
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=true
    serviceName="kafka"
    keyTab="/usr/local/nifi/keys/nifi_l4513t.sss.se.com.keytab"
    principal="nifi/l4513t.sss.se.com@GLOBAL.SCD.COM";
};
If you are talking to a Kerberized ZooKeeper then there is additional config required beyond the state-management.xml. Take a look at the Admin Guide section on securing ZooKeeper, specifically the section "Kerberizing NiFi's ZooKeeper Client":
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#zk_kerberos_client
EDIT:
This article shows an example of the different JAAS scenarios:
https://community.hortonworks.com/content/kbentry/28180/how-to-configure-hdf-12-to-send-to-and-get-data-fr.html
Most of the examples in the article use an embedded ZK, so taking that out, I think you would need something like the following (note that the Client and KafkaClient contexts go into a single JAAS file, since only one java.security.auth.login.config setting will actually take effect):
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="./conf/nifi.keytab"
    storeKey=true
    useTicketCache=false
    principal="nifi@EXAMPLE.COM";
};
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true
    renewTicket=true
    serviceName="kafka"
    useKeyTab=true
    keyTab="./conf/nifi.keytab"
    principal="nifi@EXAMPLE.COM";
};

How to pass payload using HttpOutboundAdapter in Spring-Integration

We are trying to pass a payload to a POST call made using an HTTP outbound gateway. A file is read and its content is stored in the channel. In order to have the channel's content sent as the request body, we are setting extractPayload to true. We are using Spring Integration with the Java DSL, not the XML approach. Any guidance would be appreciated.
return IntegrationFlows.from("FileContentChannel")
        .channel("FileContentChannel")
        .enrichHeaders(h -> h.header("xxx", "xxx")
                .header("xxx", "xxx"))
        .handle(Http.outboundGateway("http://payment/gateway")
                .charset("UTF-8")
                .httpMethod(HttpMethod.POST)
                .extractPayload(true)
                .mappedRequestHeaders("pay*")
                .headerMapper(headerMapper())
                .expectedResponseType(String.class))
        .channel(MessageChannels.queue("paymentResponseChannel"))
        .get();
Error log :
2016-10-26 16:34:22.343 ERROR 1584 --- [ask-scheduler-6] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessageHandlingException: HTTP request execution failed for URI [http://payment/gateway]; nested exception is org.springframework.web.client.HttpClientErrorException: 404 Not Found
at org.springframework.integration.http.outbound.HttpRequestExecutingMessageHandler.handleRequestMessage(HttpRequestExecutingMessageHandler.java:409)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:109)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:148)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:121)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:423)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:373)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:292)
at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:212)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:129)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:115)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.endpoint.PollingConsumer.handleMessage(PollingConsumer.java:129)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:272)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.access$000(AbstractPollingEndpoint.java:58)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:190)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:186)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller$1.run(AbstractPollingEndpoint.java:353)
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:55)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:51)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:344)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: org.springframework.web.client.HttpClientErrorException: 404 Not Found
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:91)
at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:667)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:620)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:595)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:516)
at org.springframework.integration.http.outbound.HttpRequestExecutingMessageHandler.handleRequestMessage(HttpRequestExecutingMessageHandler.java:382)
... 35 more
The .handle() implements the Service Activator EI pattern, and it is exactly its responsibility to deliver the payload from the input channel to the target service. In your case it is a RestTemplate underneath, and the HTTP body is built from the payload.
Your configuration looks fully correct; I don't see where the problem is...
EDIT
Caused by: org.springframework.web.client.HttpClientErrorException: 404 Not Found
Not sure what makes you think there is something wrong with the payload. It looks like you are missing some HTTP header, or are sending one with the wrong value.
You should consult the server side to determine what else needs to be done.
There is no clue in your stack trace that anything is wrong with the payload.
On the other hand, since we get 404 Not Found, the payload is clearly being passed from the channel to the Http.outboundGateway().
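Since the gateway is just a RestTemplate call underneath, one way to narrow this down is to reproduce the same POST outside the integration flow. Below is a minimal, hedged sketch; the class name, body, and header values are placeholders, and the URL is the one from the question:
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.HttpClientErrorException;
import org.springframework.web.client.RestTemplate;

public class GatewaySmokeTest {

    public static void main(String[] args) {
        // Issue the same kind of POST the Http.outboundGateway() would issue,
        // so the 404 can be reproduced (or ruled out) independently of the flow.
        RestTemplate restTemplate = new RestTemplate();
        HttpHeaders headers = new HttpHeaders();
        headers.add("xxx", "xxx"); // placeholder for the headers set in enrichHeaders()
        HttpEntity<String> request = new HttpEntity<>("sample file content", headers);
        try {
            ResponseEntity<String> response = restTemplate.exchange(
                    "http://payment/gateway", HttpMethod.POST, request, String.class);
            System.out.println(response.getStatusCode() + " -> " + response.getBody());
        } catch (HttpClientErrorException e) {
            // A 404 here as well points at the URI or headers, not at the payload handling.
            System.out.println("Client error: " + e.getStatusCode());
        }
    }
}
If this standalone request also returns 404, the fix is on the URI/header side rather than in the Spring Integration configuration.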
