NiFi ExecuteSQL connecting to HANA fails on creating Avro schema - apache-nifi

Reviewing the code, the error is thrown when an unknown data type needs to be converted for the Avro schema. However, the only column I am selecting is of type NVARCHAR(5000), which is covered in the code.
2017-04-21 01:33:51,446 WARN [Timer-Driven Process Thread-1] o.a.n.c.t.ContinuallyRunProcessorTask
java.lang.IllegalArgumentException: createSchema: Unknown SQL type 2011 cannot be converted to Avro type
at org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:349) ~[na:na]
at org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:92) ~[na:na]
at org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:87) ~[na:na]
at org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:77) ~[na:na]
at org.apache.nifi.processors.standard.ExecuteSQL$2.process(ExecuteSQL.java:205) ~[na:na]
at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2329) ~[nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:199) ~[na:na]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) ~[nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.2.jar:1.1.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_102]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_102]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_102]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_102]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_102]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_102]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]

In the JDBC API from JDK 8, NCLOB is 2011 and NVARCHAR is -9:
public static final int NCLOB = 2011;
public static final int NVARCHAR = -9;
It looks like the driver you are using is returning 2011 for the column, even though you believe the column is NVARCHAR. I'm not totally sure, but that seems like incorrect behavior from the HANA driver.
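You can confirm what the driver actually reports with a quick ResultSetMetaData check (a minimal sketch; the URL, credentials, and query are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;
import java.sql.Types;

public class HanaTypeProbe {
    public static void main(String[] args) throws Exception {
        // WHERE 1 = 0 returns no rows but still exposes the column metadata
        try (Connection conn = DriverManager.getConnection("jdbc:sap://host:30015/", "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT my_nvarchar_col FROM my_table WHERE 1 = 0")) {
            ResultSetMetaData meta = rs.getMetaData();
            int type = meta.getColumnType(1);
            System.out.println("Reported type: " + type + " (" + meta.getColumnTypeName(1) + ")"
                    + ", Types.NCLOB=" + Types.NCLOB + ", Types.NVARCHAR=" + Types.NVARCHAR);
        }
    }
}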
It could probably be handled on the NiFi side of things by adding NCLOB to this case statement in JdbcCommon (shown here as it currently stands; a patched sketch follows below):
case CHAR:
case LONGNVARCHAR:
case LONGVARCHAR:
case NCHAR:
case NVARCHAR:
case VARCHAR:
case CLOB:
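A sketch of the patched branch (the case labels are from the source above; the body is paraphrased from what the surrounding string cases do in NiFi 1.1.x, so treat it as a proposal rather than the shipped code):

case CHAR:
case LONGNVARCHAR:
case LONGVARCHAR:
case NCHAR:
case NVARCHAR:
case VARCHAR:
case CLOB:
case NCLOB: // added: lets drivers that report NCLOB (2011) fall through to the string mapping
    // existing behavior for this group: map the column to a nullable Avro string
    builder.name(meta.getColumnName(i)).type().unionOf().nullBuilder().endNull()
            .and().stringType().endUnion().noDefault();
    break;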

Related

Using XML DSL route with camel-spring-boot: Misdeclaration of xmlns namespace

According to the documentation (http://camel.apache.org/spring-boot.html), we can use XML DSL routes with Spring Boot.
When I use XQuery as an endpoint with the syntax <to uri="xquery:xquery/test.xq" />, it works, but with syntax like the one below,
<transform>
    <xquery>resource:test.xq</xquery>
</transform>
or an inline XQuery transformation:
<transform>
    <xquery>concat('hello', 'world')</xquery>
</transform>
then I get the error:
org.apache.camel.RuntimeExpressionException:
java.lang.IllegalArgumentException: Misdeclaration of xmlns namespace
at org.apache.camel.component.xquery.XQueryBuilder.evaluate(XQueryBuilder.java:155)
~[camel-saxon-2.22.1.jar:2.22.1]
at org.apache.camel.component.xquery.XQueryBuilder.evaluate(XQueryBuilder.java:120)
~[camel-saxon-2.22.1.jar:2.22.1]
at org.apache.camel.processor.TransformProcessor.process(TransformProcessor.java:50)
~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:454)
[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:223)
[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:187)
[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:174)
[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:101)
[camel-core-2.22.1.jar:2.22.1]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[na:1.8.0_102]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
[na:1.8.0_102]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
[na:1.8.0_102]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
[na:1.8.0_102]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[na:1.8.0_102]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[na:1.8.0_102]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102] Caused by: java.lang.IllegalArgumentException: Misdeclaration of xmlns
namespace
at net.sf.saxon.query.StaticQueryContext.declareNamespace(StaticQueryContext.java:719)
~[Saxon-HE-9.8.0-12.jar:na]
at org.apache.camel.component.xquery.XQueryBuilder.initialize(XQueryBuilder.java:721)
~[camel-saxon-2.22.1.jar:2.22.1]
at org.apache.camel.component.xquery.XQueryBuilder.evaluateAsString(XQueryBuilder.java:208)
~[camel-saxon-2.22.1.jar:2.22.1]
at org.apache.camel.component.xquery.XQueryBuilder.evaluate(XQueryBuilder.java:130)
~[camel-saxon-2.22.1.jar:2.22.1]
... 19 common frames omitted
I tested the syntax with the sample project https://github.com/apache/camel/tree/master/examples/camel-example-spring, and it works, so maybe there is something wrong with the camel-spring-boot component?
A ticket was logged about this issue:
https://issues.apache.org/jira/browse/CAMEL-12994
The bug has now been fixed in Apache Camel.
As a workaround, you need to either downgrade Camel/Saxon to a previous version that works, or wait for the next release, which has the fix. The JIRA ticket declares the fixed version numbers.
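For what it's worth, the failure can be reproduced outside Spring Boot entirely, which points at camel-saxon rather than the camel-spring-boot component (a minimal sketch against the Camel 2.22.x API; on the affected Camel/Saxon combination the evaluate call throws the same "Misdeclaration of xmlns namespace"):

import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.component.xquery.XQueryBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.DefaultExchange;

public class XQueryRepro {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.start();
        Exchange exchange = new DefaultExchange(context);
        exchange.getIn().setBody("<root/>");
        // Same inline expression as the failing <transform><xquery> block
        String result = XQueryBuilder.xquery("concat('hello', 'world')")
                .evaluate(exchange, String.class);
        System.out.println(result);
        context.stop();
    }
}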

Spring Boot Admin illegal character while health check

I have deployed a few Spring Boot applications using Spring Cloud Netflix OSS and configured Spring Boot Admin to monitor those applications by connecting to the Eureka server. But for two of the applications, I am getting the error below. Could you please let me know what could be causing this error and how to mitigate it?
The health endpoint URL is provided by Spring Boot Actuator, and the output of the health URL is {"description":"Spring Cloud Eureka Discovery Client","status":"UP"}
Error in the log:
2017-10-02 18:29:31.790 ERROR 5976 --- [DiscoveryClient-CacheRefreshExecutor-0] d.c.b.a.d.ApplicationDiscoveryListener : Couldn't register application for service org.springframework.cloud.netflix.eureka.EurekaDiscoveryClient$EurekaServiceInstance#3e988b86
java.lang.IllegalArgumentException: Illegal character in path at index 61: http://xxxx:56412/manage/health
at java.net.URI.create(URI.java:852) ~[na:1.8.0_131]
at de.codecentric.boot.admin.discovery.EurekaServiceInstanceConverter.getHealthUrl(EurekaServiceInstanceConverter.java:46) ~[spring-boot-admin-server-1.5.0.jar!/:1.5.0]
at de.codecentric.boot.admin.discovery.DefaultServiceInstanceConverter.convert(DefaultServiceInstanceConverter.java:64) ~[spring-boot-admin-server-1.5.0.jar!/:1.5.0]
at de.codecentric.boot.admin.discovery.ApplicationDiscoveryListener.register(ApplicationDiscoveryListener.java:138) [spring-boot-admin-server-1.5.0.jar!/:1.5.0]
at de.codecentric.boot.admin.discovery.ApplicationDiscoveryListener.discover(ApplicationDiscoveryListener.java:94) [spring-boot-admin-server-1.5.0.jar!/:1.5.0]
at de.codecentric.boot.admin.discovery.ApplicationDiscoveryListener.discoverIfNeeded(ApplicationDiscoveryListener.java:85) [spring-boot-admin-server-1.5.0.jar!/:1.5.0]
at de.codecentric.boot.admin.discovery.ApplicationDiscoveryListener.onApplicationEvent(ApplicationDiscoveryListener.java:80) [spring-boot-admin-server-1.5.0.jar!/:1.5.0]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_131]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_131]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_131]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_131]
at org.springframework.context.event.ApplicationListenerMethodAdapter.doInvoke(ApplicationListenerMethodAdapter.java:253) [spring-context-4.3.9.RELEASE.jar!/:4.3.9.RELEASE]
at org.springframework.context.event.ApplicationListenerMethodAdapter.processEvent(ApplicationListenerMethodAdapter.java:174) [spring-context-4.3.9.RELEASE.jar!/:4.3.9.RELEASE]
at org.springframework.context.event.ApplicationListenerMethodAdapter.onApplicationEvent(ApplicationListenerMethodAdapter.java:137) [spring-context-4.3.9.RELEASE.jar!/:4.3.9.RELEASE]
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:167) [spring-context-4.3.9.RELEASE.jar!/:4.3.9.RELEASE]
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139) [spring-context-4.3.9.RELEASE.jar!/:4.3.9.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:393) [spring-context-4.3.9.RELEASE.jar!/:4.3.9.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:347) [spring-context-4.3.9.RELEASE.jar!/:4.3.9.RELEASE]
at org.springframework.cloud.netflix.eureka.CloudEurekaClient.onCacheRefreshed(CloudEurekaClient.java:98) [spring-cloud-netflix-eureka-client-1.3.1.RELEASE.jar!/:1.3.1.RELEASE]
at com.netflix.discovery.DiscoveryClient.fetchRegistry(DiscoveryClient.java:943) [eureka-client-1.6.2.jar!/:1.6.2]
at com.netflix.discovery.DiscoveryClient.refreshRegistry(DiscoveryClient.java:1451) [eureka-client-1.6.2.jar!/:1.6.2]
at com.netflix.discovery.DiscoveryClient$CacheRefreshThread.run(DiscoveryClient.java:1418) [eureka-client-1.6.2.jar!/:1.6.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_131]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Caused by: java.net.URISyntaxException: Illegal character in path at index 61: http://xxxxx:56412/manage/health
at java.net.URI$Parser.fail(URI.java:2848) ~[na:1.8.0_131]
at java.net.URI$Parser.checkChars(URI.java:3021) ~[na:1.8.0_131]
at java.net.URI$Parser.parseHierarchical(URI.java:3105) ~[na:1.8.0_131]
at java.net.URI$Parser.parse(URI.java:3053) ~[na:1.8.0_131]
at java.net.URI.<init>(URI.java:588) ~[na:1.8.0_131]
at java.net.URI.create(URI.java:850) ~[na:1.8.0_131]
... 26 common frames omitted
There was an extra (invisible) space at the end of the URL path property:
eureka.instance.healthCheckUrlPath=${management.context-path}/health <-- trailing space here
It should be, with the trailing space removed:
eureka.instance.healthCheckUrlPath=${management.context-path}/health
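This matches the stack trace: java.net.URI rejects a raw space anywhere in the path, so the trailing space alone is enough to break registration (a one-line reproduction; host and port are placeholders):

import java.net.URI;

public class TrailingSpaceDemo {
    public static void main(String[] args) {
        // Throws java.lang.IllegalArgumentException: Illegal character in path at index 38
        URI.create("http://example.com:56412/manage/health ");
    }
}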

Error retrieving state map in kerberized cluster

HDP-2.5.3.0.
A custom processor uses the State API to persist some data.
try {
    stateMap = stateManager.getState(Scope.CLUSTER);
    stateMapProperties = new HashMap<>(stateMap.toMap());
    logger.debug("Retrieved the statemap : " + stateMapProperties);
    ...
    ...
    ...
} catch (IOException ioe) {
    logger.error("Couldn't load the state map", ioe);
    throw new ProcessException(ioe);
}
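For completeness, the write path that goes with the snippet above (a hypothetical helper using the same StateManager API; both the read and the write go through the configured ZooKeeperStateProvider, so both are subject to the ZooKeeper ACL check that fails with NOAUTH below):

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.nifi.components.state.Scope;
import org.apache.nifi.components.state.StateManager;
import org.apache.nifi.components.state.StateMap;

public class ClusterStateHelper {
    // Read the cluster-scoped state, modify one entry, and write it back.
    public static void updateClusterState(StateManager stateManager, String key, String value)
            throws IOException {
        final StateMap stateMap = stateManager.getState(Scope.CLUSTER);
        final Map<String, String> newState = new HashMap<>(stateMap.toMap());
        newState.put(key, value);
        stateManager.setState(newState, Scope.CLUSTER);
    }
}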
The processor works fine on my local machine's NiFi, but when I put it on our (Kerberized) dev cluster, which has 2 NiFi nodes, it fails with the following exception:
java.io.IOException: Failed to obtain value from ZooKeeper for component with ID d7fff389-015a-1000-ffff-ffffd04d1279 with exception code NOAUTH
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:420) ~[na:na]
at org.apache.nifi.controller.state.StandardStateManager.getState(StandardStateManager.java:63) ~[na:na]
at com.datalake.processors.SQLServerCDCProcessor.getDataFromChangeTables(SQLServerCDCProcessor.java:480) [nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at com.datalake.processors.SQLServerCDCProcessor.onTrigger(SQLServerCDCProcessor.java:191) [nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.2.jar:1.1.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_112]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
Caused by: org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /nifi/components/d7fff389-015a-1000-ffff-ffffd04d1279
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113) ~[na:na]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) ~[na:na]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155) ~[na:na]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184) ~[na:na]
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:403) ~[na:na]
...
org.apache.nifi.processor.exception.ProcessException: java.io.IOException: Failed to obtain value from ZooKeeper for component with ID d7fff389-015a-1000-ffff-ffffd04d1279 with exception code NOAUTH
at com.datalake.processors.SQLServerCDCProcessor.getDataFromChangeTables(SQLServerCDCProcessor.java:493) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at com.datalake.processors.SQLServerCDCProcessor.onTrigger(SQLServerCDCProcessor.java:191) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.2.jar:1.1.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_112]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
Caused by: java.io.IOException: Failed to obtain value from ZooKeeper for component with ID d7fff389-015a-1000-ffff-ffffd04d1279 with exception code NOAUTH
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:420) ~[na:na]
at org.apache.nifi.controller.state.StandardStateManager.getState(StandardStateManager.java:63) ~[na:na]
at com.datalake.processors.SQLServerCDCProcessor.getDataFromChangeTables(SQLServerCDCProcessor.java:480) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
... 13 common frames omitted
Caused by: org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /nifi/components/d7fff389-015a-1000-ffff-ffffd04d1279
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113) ~[na:na]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) ~[na:na]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155) ~[na:na]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184) ~[na:na]
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:403) ~[na:na]
... 15 common frames omitted
Following are the entries in state-management.xml:
<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String">l4373t.sss.se.scania.com:2181,l4283t.sss.se.scania.com:2181,l4284t.sss.se.scania.com:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">CreatorOnly</property>
</cluster-provider>
Any ideas?
*****Edit-1*****
Adding the ZooKeeper JAAS configuration.
bash-4.2$ cat zookeeper-jaas.conf
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/usr/local/nifi/keys/nifi_l4513t.sss.se.com.keytab"
    storeKey=true
    useTicketCache=true
    principal="nifi/l4513t.sss.se.com@GLOBAL.SCD.COM";
};
The entry (as 'java.arg.16') in the bootstrap.conf file:
bash-4.2$ vi bootstrap.conf
#
# Java command to use when running NiFi
java=java
# Username to use when running NiFi. This value will be ignored on Windows.
run.as=
# Configure where NiFi's lib and conf directories live
lib.dir=./lib
conf.dir=./conf
# How long to wait after telling NiFi to shutdown before explicitly killing the Process
graceful.shutdown.seconds=20
# Disable JSR 199 so that we can use JSP's without running a JDK
java.arg.1=-Dorg.apache.jasper.compiler.disablejsr199=true
# JVM memory settings
java.arg.2=-Xms1024m
java.arg.3=-Xmx2048m
# Enable Remote Debugging
#java.arg.debug=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000
java.arg.4=-Djava.net.preferIPv4Stack=true
# allowRestrictedHeaders is required for Cluster/Node communications to work properly
java.arg.5=-Dsun.net.http.allowRestrictedHeaders=true
java.arg.6=-Djava.protocol.handler.pkgs=sun.net.www.protocol
java.arg.7=-Dorg.apache.nifi.bootstrap.config.log.dir=/var/log/nifi
# The G1GC is still considered experimental but has proven to be very advantageous in providing great
# performance without significant "stop-the-world" delays.
java.arg.13=-XX:+UseG1GC
#Set headless mode by default
java.arg.14=-Djava.awt.headless=true
java.arg.15=-Djava.security.auth.login.config=/usr/local/nifi/conf/kafka-jaas.conf
java.arg.16=-Djava.security.auth.login.config=/usr/local/nifi/conf/zookeeper-jaas.conf
# Master key in hexadecimal format for encrypted sensitive configuration values
nifi.bootstrap.sensitive.key=
###
# Notification Services for notifying interested parties when NiFi is stopped, started, dies
###
*****Edit-2***** Providing the existing kafka-jaas.conf:
bash-4.2$ cat kafka-jaas.conf
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    renewTicket=true
    useTicketCache=true
    serviceName="kafka"
    keyTab="/usr/local/nifi/keys/nifi_l4513t.sss.se.com.keytab"
    principal="nifi/l4513t.sss.se.com@GLOBAL.SCD.COM";
};
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    useTicketCache=true
    renewTicket=true
    serviceName="kafka"
    keyTab="/usr/local/nifi/keys/nifi_l4513t.sss.se.com.keytab"
    principal="nifi/l4513t.sss.se.com@GLOBAL.SCD.COM";
};
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=true
    serviceName="kafka"
    keyTab="/usr/local/nifi/keys/nifi_l4513t.sss.se.com.keytab"
    principal="nifi/l4513t.sss.se.com@GLOBAL.SCD.COM";
};
If you are talking to a Kerberized ZooKeeper, then there is additional configuration required beyond state-management.xml. Take a look at the Admin Guide section on securing ZooKeeper, specifically the section "Kerberizing NiFi's ZooKeeper Client":
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#zk_kerberos_client
EDIT:
This article shows an example of the different JAAS scenarios:
https://community.hortonworks.com/content/kbentry/28180/how-to-configure-hdf-12-to-send-to-and-get-data-fr.html
Most of the examples in the article use an embedded ZK, so taking that out, I think you would need something like the following. Note that the JVM only honors one -Djava.security.auth.login.config argument (a later duplicate overrides an earlier one), so the Client and KafkaClient sections need to live together in a single JAAS file:
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="./conf/nifi.keytab"
    storeKey=true
    useTicketCache=false
    principal="nifi@EXAMPLE.COM";
};
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true
    renewTicket=true
    serviceName="kafka"
    useKeyTab=true
    keyTab="./conf/nifi.keytab"
    principal="nifi@EXAMPLE.COM";
};

Sonarqube analysis fails with PersistenceException - Cannot insert duplicate key row in object 'dbo.issues' with unique index 'issues_kee'

This is a new SonarQube 5.2 installation on Windows Server 2012 R2 with an MS SQL Server 2012 database.
Driver: Microsoft JDBC Driver 4.1 for SQL Server, version 4.1.5605.100
The analysis files are uploaded OK, but when the job is processed I see errors like the following:
2015.12.03 17:11:22 ERROR [o.s.s.c.t.CeWorkerRunnableImpl] Failed to execute task AVFo0pZECnZ7xJR5PoU8
org.apache.ibatis.exceptions.PersistenceException:
### Error committing transaction. Cause: org.apache.ibatis.executor.BatchExecutorException: org.sonar.db.issue.IssueMapper.insert (batch index #1) failed. Cause: java.sql.BatchUpdateException: Cannot insert duplicate key row in object 'dbo.issues' with unique index 'issues_kee'. The duplicate key value is (AVFo0wtzCnZ7xJR5PrRx).
### Cause: org.apache.ibatis.executor.BatchExecutorException: org.sonar.db.issue.IssueMapper.insert (batch index #1) failed. Cause: java.sql.BatchUpdateException: Cannot insert duplicate key row in object 'dbo.issues' with unique index 'issues_kee'. The duplicate key value is (AVFo0wtzCnZ7xJR5PrRx).
at org.apache.ibatis.exceptions.ExceptionFactory.wrapException(ExceptionFactory.java:26) ~[mybatis-3.2.7.jar:3.2.7]
at org.apache.ibatis.session.defaults.DefaultSqlSession.commit(DefaultSqlSession.java:177) ~[mybatis-3.2.7.jar:3.2.7]
at org.apache.ibatis.session.defaults.DefaultSqlSession.commit(DefaultSqlSession.java:169) ~[mybatis-3.2.7.jar:3.2.7]
at org.sonar.db.DbSession.commit(DbSession.java:60) ~[sonar-db-5.2.jar:na]
at org.sonar.db.BatchSession.commit(BatchSession.java:176) ~[sonar-db-5.2.jar:na]
at org.sonar.db.BatchSession.increment(BatchSession.java:213) ~[sonar-db-5.2.jar:na]
at org.sonar.db.BatchSession.insert(BatchSession.java:133) ~[sonar-db-5.2.jar:na]
at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:51) ~[mybatis-3.2.7.jar:3.2.7]
at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:52) ~[mybatis-3.2.7.jar:3.2.7]
at com.sun.proxy.$Proxy103.insert(Unknown Source) ~[na:na]
at org.sonar.server.computation.step.PersistIssuesStep.execute(PersistIssuesStep.java:70) ~[sonar-server-5.2.jar:na]
at org.sonar.server.computation.step.ComputationStepExecutor.execute(ComputationStepExecutor.java:39) ~[sonar-server-5.2.jar:na]
at org.sonar.server.computation.taskprocessor.report.ReportTaskProcessor.process(ReportTaskProcessor.java:53) ~[sonar-server-5.2.jar:na]
at org.sonar.server.computation.taskprocessor.CeWorkerRunnableImpl.executeTask(CeWorkerRunnableImpl.java:78) [sonar-server-5.2.jar:na]
at org.sonar.server.computation.taskprocessor.CeWorkerRunnableImpl.run(CeWorkerRunnableImpl.java:55) [sonar-server-5.2.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [na:1.8.0_45]
at java.util.concurrent.FutureTask.runAndReset(Unknown Source) [na:1.8.0_45]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source) [na:1.8.0_45]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) [na:1.8.0_45]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.8.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.8.0_45]
at java.lang.Thread.run(Unknown Source) [na:1.8.0_45]
One of the projects has run a couple of times successfully and I can see results in the dashboard, but other projects fail 100% of the time.
Is this a bug, or something in the analysis being uploaded?
Try changing the SQL Server collation:
https://www.mssqltips.com/sqlservertip/3519/changing-sql-server-collation-after-installation/
You must use a case-sensitive, accent-sensitive collation (CS_AS).
It worked for me.
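To see what collation the SonarQube database is actually using before changing anything, a quick JDBC check works (a sketch; the connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CollationCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:sqlserver://localhost;databaseName=sonar", "sonar", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS collation")) {
            if (rs.next()) {
                // Expect a case- and accent-sensitive collation, e.g. Latin1_General_CS_AS
                System.out.println("Collation: " + rs.getString("collation"));
            }
        }
    }
}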

prestodb hive sql query error

Dear friends, I am able to configure Presto with Hive.
I can see results for "SHOW TABLES", and the "books" table appears in the results.
DESCRIBE books also shows all the column details.
I do have a "books" table and am able to query it through Hive and see results, e.g.:
hive> select * from books;
But when I try through Presto, I get the following error. Please guide me.
Error
presto:default> select * from books;
Query 20131121_025845_00004_qqe25, FAILED, 1 node
Splits: 1 total, 0 done (0.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]
Query 20131121_025845_00004_qqe25 failed: java.io.IOException: Failed on local exception: java.io.IOException: Broken pipe; Host Details : local host is: "ubuntu/192.168.56.101"; destination host is: "localhost":54310;
presto:default>
Exception on the server (query stage 20131121_025845_00004_qqe25.1):
java.lang.RuntimeException: java.io.IOException: Failed on local exception: java.io.IOException: Broken pipe; Host Details : local host is: "ubuntu/192.168.56.101"; destination host is: "localhost":54310;
at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-15.0.jar:na]
at com.facebook.presto.hive.HiveSplitIterable$HiveSplitQueue.computeNext(HiveSplitIterable.java:433) ~[na:na]
at com.facebook.presto.hive.HiveSplitIterable$HiveSplitQueue.computeNext(HiveSplitIterable.java:392) ~[na:na]
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-15.0.jar:na]
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-15.0.jar:na]
at com.facebook.presto.execution.SqlStageExecution.startTasks(SqlStageExecution.java:463) [presto-main-0.52.jar:0.52]
at com.facebook.presto.execution.SqlStageExecution.access$300(SqlStageExecution.java:80) [presto-main-0.52.jar:0.52]
at com.facebook.presto.execution.SqlStageExecution$5.run(SqlStageExecution.java:435) [presto-main-0.52.jar:0.52]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_45]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_45]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_45]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
Caused by: java.io.IOException: Failed on local exception: java.io.IOException: Broken pipe; Host Details : local host is: "ubuntu/192.168.56.101"; destination host is: "localhost":54310;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:763) ~[na:na]
at org.apache.hadoop.ipc.Client.call(Client.java:1229) ~[na:na]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) ~[na:na]
at com.sun.proxy.$Proxy155.getListing(Unknown Source) ~[na:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_45]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_45]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_45]
at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164) ~[na:na]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83) ~[na:na]
at com.sun.proxy.$Proxy155.getListing(Unknown Source) ~[na:na]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:441) ~[na:na]
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1526) ~[na:na]
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1509) ~[na:na]
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:406) ~[na:na]
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1462) ~[na:na]
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1502) ~[na:na]
at com.facebook.presto.hive.ForwardingFileSystem.listStatus(ForwardingFileSystem.java:298) ~[na:na]
at com.facebook.presto.hive.ForwardingFileSystem.listStatus(ForwardingFileSystem.java:298) ~[na:na]
at com.facebook.presto.hive.FileSystemWrapper$3.listStatus(FileSystemWrapper.java:146) ~[na:na]
at org.apache.hadoop.fs.FileSystem$4.<init>(FileSystem.java:1778) ~[na:na]
at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1777) ~[na:na]
at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1760) ~[na:na]
at com.facebook.presto.hive.util.AsyncRecursiveWalker$1.run(AsyncRecursiveWalker.java:58) ~[na:na]
at com.facebook.presto.hive.util.SuspendingExecutor$1.run(SuspendingExecutor.java:67) ~[na:na]
at com.facebook.presto.hive.util.BoundedExecutor.executeOrMerge(BoundedExecutor.java:82) ~[na:na]
at com.facebook.presto.hive.util.BoundedExecutor.access$000(BoundedExecutor.java:41) ~[na:na]
at com.facebook.presto.hive.util.BoundedExecutor$1.run(BoundedExecutor.java:53) ~[na:na]
... 3 common frames omitted
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.7.0_45]
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[na:1.7.0_45]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.7.0_45]
at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[na:1.7.0_45]
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487) ~[na:1.7.0_45]
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:62) ~[na:na]
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:143) ~[na:na]
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:153) ~[na:na]
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:114) ~[na:na]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[na:1.7.0_45]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[na:1.7.0_45]
at java.io.DataOutputStream.flush(DataOutputStream.java:123) ~[na:1.7.0_45]
at org.apache.hadoop.ipc.Client$Connection$3.run(Client.java:897) ~[na:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_45]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_45]
... 3 common frames omitted
2013-11-20T21:58:45.915-0500 DEBUG task-notification-1 com.facebook.presto.execution.TaskStateMachine Task 20131121_025845_00004_qqe25.0.0 is CANCELED
I got some answers from the Google group:
https://groups.google.com/forum/#!topic/presto-users/lVLvMGP1sKE
Dain Sundstrom
Nov 8
That is an error from within the HDFS client (org.apache.hadoop.ipc.Client:941 in my code), and after a quick review of that code, it looks like this means the client could not parse the server response. My guess is the client we bundle with the presto-hive-cdh4 plugin is not compatible with your version of Hadoop. This code includes Cloudera Hadoop version 2.0.0-cdh4.3.0. What version are you using?
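One way to confirm such a mismatch is to run a trivial listing against the same NameNode using exactly the hadoop-client version the plugin bundles (2.0.0-cdh4.3.0 per the reply above); if this probe fails with the same "Broken pipe", the RPC versions don't match (a sketch; the warehouse path is a placeholder):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsProbe {
    public static void main(String[] args) throws Exception {
        // Same NameNode address Presto is failing against (from the error message)
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:54310/"), new Configuration());
        for (FileStatus status : fs.listStatus(new Path("/user/hive/warehouse"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}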
