Impala JDBC connection: Error setting/closing session: Open Session Error - jdbc

If I have the following type of Impala connection, can I still use SQuirreL SQL?
self.impala_con = connect(host='sql.edamame.com', port=25003, use_ssl=True, auth_mechanism="PLAIN",
                          user='edamame1', password='edamamePass')
Here is how I set up the Alias in SQuirreL:
I got the following errors during the test connection:
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: [Simba][ImpalaJDBCDriver](500151) Error setting/closing session: Open Session Error.
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Update:
I also tried the URL below:
jdbc:impala://sql.edamame.com:25003/default;AuthMech=3;SSL=1
and got the following new errors:
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: [Simba][ImpalaJDBCDriver](500310) Invalid operation: null;
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
What else might I be missing? Thanks!

Short answer: RTFM.
Long answer: the Cloudera JDBC driver ships with an 80+ page PDF "Install and Config" guide. Don't be shy, open it. You can go directly to Appendix A (driver configuration options) and look at these entries:
AuthMech - The authentication mechanism to use. Set the value to one of the following numbers:
0 for No Authentication
1 for Kerberos
2 for User Name
3 for User Name and Password
SSL - When this property is set to 1, the driver communicates with the Impala server through an SSL-enabled socket. When this property is set to 0, the driver does not connect to SSL-enabled sockets.
So your URL should look like
jdbc:impala://blahblah:25003/default;AuthMech=3;SSL=1
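If you want to sanity-check that URL outside SQuirreL, here is a minimal Java sketch. It assumes the Cloudera/Simba Impala driver jar is on the classpath; the driver class name below is the JDBC 4.1 flavor used by older releases (adjust it to the jar you actually installed), and UID/PWD are the user name/password properties described in the same guide. Host and credentials are the placeholders from the question.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class ImpalaJdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        // Assumed driver class for the JDBC 4.1 build; newer jars may auto-register anyway.
        Class.forName("com.cloudera.impala.jdbc41.Driver");

        // AuthMech=3 -> user name and password, SSL=1 -> SSL-enabled socket (see Appendix A).
        String url = "jdbc:impala://sql.edamame.com:25003/default;AuthMech=3;SSL=1";

        Properties props = new Properties();
        props.setProperty("UID", "edamame1");    // user name
        props.setProperty("PWD", "edamamePass"); // password

        try (Connection conn = DriverManager.getConnection(url, props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println("Connected, got: " + rs.getInt(1));
            }
        }
    }
}
In SQuirreL the same split applies: the alias URL carries AuthMech and SSL, while the alias User Name and Password fields supply UID and PWD.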
One more thing: to troubleshoot SSL handshake issues, you can enable debug traces with this Java system property:
-Djavax.net.debug=ssl

Related

Not able to connect Apache drill/dremio with Nifi 1.8.0

I am not able to connect to Apache Drill with nifi-1.8.0, but I am able to connect with nifi-1.7.1.
I am using the ExecuteSQL processor to connect to Apache Drill via a JDBC connection string.
With nifi-1.8.0, the PoolableConnectionFactory exception below is thrown.
Unable to execute SQL select query SELECT * from test.wellRecords due to org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot create PoolableConnectionFactory (null). No FlowFile to route to failure: org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot create PoolableConnectionFactory (null)
org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot create PoolableConnectionFactory (null)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:308)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
at com.sun.proxy.$Proxy95.getConnection(Unknown Source)
at org.apache.nifi.processors.standard.AbstractExecuteSQL.onTrigger(AbstractExecuteSQL.java:195)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Cannot create PoolableConnectionFactory (null)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2385)
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2110)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:305)
... 19 common frames omitted
Caused by: java.sql.SQLFeatureNotSupportedException: null
at oadd.org.apache.calcite.avatica.Helper.unsupported(Helper.java:53)
at oadd.org.apache.calcite.avatica.AvaticaConnection.isValid(AvaticaConnection.java:325)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.isValid(DrillConnectionImpl.java:657)
at org.apache.commons.dbcp2.DelegatingConnection.isValid(DelegatingConnection.java:874)
at org.apache.commons.dbcp2.PoolableConnection.validate(PoolableConnection.java:270)
at org.apache.commons.dbcp2.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:389)
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2398)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2381)
... 22 common frames omitted
I want to connect from the nifi-1.8.0 processor because all of my NiFi configuration is built on this version.
I cannot comment yet on the question, so here are some thoughts that will hopefully be of help.
Are you using the same JDBC driver as in nifi-1.7?
NiFi dropped commons-dbcp 1.4 in favour of commons-dbcp2 2.5 in version 1.8.
Link to pom.xml
From reading the stack trace, there is a problem validating the connection to the database server. Maybe others have had similar issues? Are you setting a validation query?
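To illustrate the failure mode in the stack trace: when no validation query is configured, commons-dbcp2 validates new connections with Connection.isValid(), and the Drill JDBC driver (through Avatica) throws SQLFeatureNotSupportedException for that call, which surfaces as "Cannot create PoolableConnectionFactory (null)". Below is a rough sketch of the difference, with a placeholder drillbit URL and an assumed Drill-compatible check query:
import java.sql.Connection;
import java.sql.Statement;
import org.apache.commons.dbcp2.BasicDataSource;

public class DrillPoolValidationSketch {
    public static void main(String[] args) throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.apache.drill.jdbc.Driver");
        ds.setUrl("jdbc:drill:drillbit=localhost:31010"); // placeholder drillbit address

        // Without a validation query, DBCP2 falls back to Connection.isValid(),
        // which the Drill/Avatica connection rejects with SQLFeatureNotSupportedException.
        // With a validation query, DBCP2 runs the query instead:
        ds.setValidationQuery("SELECT 1 FROM (VALUES(1))"); // assumed to be accepted by Drill

        try (Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.execute("SELECT * FROM test.wellRecords LIMIT 1");
        }
    }
}
In NiFi terms, that would mean filling in the DBCPConnectionPool's Validation query property (if your 1.8.0 build exposes it) so the pool does not rely on isValid().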

Apache nifi: Timed out while waiting for OnScheduled of 'QueryDatabaseTable' processor to finish

I'm trying to create my first flow using the QueryDatabaseTable processor to incrementally extract rows from an Oracle database table.
I'm getting the errors below. I enabled full debug but nothing else useful is logged.
Thoughts on what to try next?
2017-07-10 14:43:52,280 WARN [StandardProcessScheduler Thread-4] o.a.n.controller.StandardProcessorNode Timed out while waiting for OnScheduled of 'QueryDatabaseTable' processor to finish. An attempt is made to cancel the task via Thread.interrupt(). However it does not guarantee that the task will be canceled since the code inside current OnScheduled operation may have been written to ignore interrupts which may result in a runaway thread. This could lead to more issues, eventually requiring NiFi to be restarted. This is usually a bug in the target Processor 'QueryDatabaseTable[id=1e535f00-015d-1000-236d-7adebe14958a]' that needs to be documented, reported and eventually fixed.
2017-07-10 14:43:52,280 ERROR [StandardProcessScheduler Thread-4] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=1e535f00-015d-1000-236d-7adebe14958a] QueryDatabaseTable[id=1e535f00-015d-1000-236d-7adebe14958a] failed to invoke #OnScheduled method due to java.lang.RuntimeException: Timed out while executing one of processor's OnScheduled task.; processor will not be scheduled to run for 30 seconds: java.lang.RuntimeException: Timed out while executing one of processor's OnScheduled task.
java.lang.RuntimeException: Timed out while executing one of processor's OnScheduled task.
at org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
at org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
at org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: null
at java.util.concurrent.FutureTask.get(FutureTask.java:205)
at org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1465)
... 9 common frames omitted
2017-07-10 14:43:52,280 ERROR [StandardProcessScheduler Thread-4] o.a.n.controller.StandardProcessorNode Failed to invoke #OnScheduled method due to java.lang.RuntimeException: Timed out while executing one of processor's OnScheduled task.
java.lang.RuntimeException: Timed out while executing one of processor's OnScheduled task.
at org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
at org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
at org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: null
at java.util.concurrent.FutureTask.get(FutureTask.java:205)
at org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1465)
... 9 common frames omitted
The #OnScheduled method of QueryDatabaseTable is trying to connect to your database and appears to be having problems, causing it to hit the 60-second processor scheduling timeout.
Can you verify your DBCPConnectionPool service is correctly configured and that the servers running NiFi can otherwise connect to the database with the same credentials?
I only have one NiFi server running. If I change the connection string, it throws an Oracle error, so I assume that part is working. Any tips on how I can debug this?
UPDATE: I checked, and I have no connection from NiFi to the DB. This error is misleading.
In my case, it was a firewall issue; I asked the security manager for permission. The connection can also be checked via telnet:
telnet database_server port_number
Expected output:
Trying database_server...
Connected to database_server.
Escape character is '^]'.
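If telnet reaches the port but the processor still times out, a minimal standalone JDBC check run from the NiFi host can confirm that the driver, URL, and credentials work end to end. This is only a sketch: the thin-driver URL, user, and password below are placeholders, and it assumes the same Oracle JDBC jar configured in the DBCPConnectionPool is on the classpath.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OracleConnectivityCheck {
    public static void main(String[] args) throws Exception {
        // Placeholders: use the same URL and credentials configured in DBCPConnectionPool.
        String url = "jdbc:oracle:thin:@//db_host:1521/ORCLPDB1";
        String user = "nifi_user";
        String password = "nifi_password";

        // Keep the login attempt from hanging indefinitely, mirroring the 60 s scheduling timeout.
        DriverManager.setLoginTimeout(30);

        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1 FROM dual")) {
            if (rs.next()) {
                System.out.println("Connection OK: " + rs.getInt(1));
            }
        }
    }
}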

Nifi Picking Server for Wait\Notify Processors

I am trying out the Wait/Notify processors in NiFi 1.2 for the first time. In the Distributed Cache Service property I chose "create new service".
Under the properties of that service I just picked the hostname of the local server where NiFi is running as the Server Hostname, and everything showed green.
But when I started the processors I got this error message:
2017-07-12 14:28:09,563 ERROR [Timer-Driven Process Thread-6] org.apache.nifi.processors.standard.Wait Wait[id=115238a2-299b-1267-98b6-14d1a4eb45e8] Failed to process session due to org.apache.nifi.processor.exception.ProcessException: Failed to get signal for TOC_2017cw14_WGS84_umts due to java.net.ConnectException: Connection refused: {}
org.apache.nifi.processor.exception.ProcessException: Failed to get signal for TOC_2017cw14_WGS84_umts due to java.net.ConnectException: Connection refused
at org.apache.nifi.processors.standard.Wait.onTrigger(Wait.java:354)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:111)
Manually add a DistributedMapCacheServer controller service with default parameters (port 4557) and enable it.
You might already have a DistributedMapCacheClientService, but you also need a DistributedMapCacheServer to resolve the issue.

Hive action fails in Oozie executed from Hue UI

I am executing an Oozie Hive action and it fails in the Hue UI with the following exception in the Hive logs:
2016-07-15 15:27:58,430 ERROR org.apache.thrift.server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Unsupported mechanism type PLAIN
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1651)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Unsupported mechanism type PLAIN
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:138)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
2016-07-15 15:27:58,432 ERROR org.apache.thrift.server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Unsupported mechanism type PLAIN
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1651)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Unsupported mechanism type PLAIN
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:138)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
Kerberos is enabled on the Hadoop cluster, and all the settings are added in hive-site.xml.
I am also able to connect to HiveServer2 from the command line.
Can someone help me understand what is going wrong here?
It seems like you are trying to use plain authentication to connect to a secured environment.
Unsupported mechanism type PLAIN
I am not sure whether it is the same in Hue, but normally I would suspect that you need to set the Hive authentication method to KERBEROS.
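"Unsupported mechanism type PLAIN" means the client opened a plain SASL connection against a server that only accepts Kerberos (GSSAPI). With the Hive JDBC driver the difference shows up in the connection URL; this is only an illustrative sketch with placeholder host and realm values, and it assumes a valid Kerberos ticket (kinit) on the client side:
import java.sql.Connection;
import java.sql.DriverManager;

public class HiveAuthUrlExamples {
    public static void main(String[] args) throws Exception {
        // Plain (user name/password) style URL - this is the kind of connection that a
        // Kerberized server rejects with "Unsupported mechanism type PLAIN":
        String plainUrl = "jdbc:hive2://hs2-host.example.com:10000/default";

        // Kerberos style URL - the principal must match the one HiveServer2 runs with,
        // and the client authenticates with its ticket cache rather than a password:
        String kerberosUrl =
            "jdbc:hive2://hs2-host.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM";

        try (Connection conn = DriverManager.getConnection(kerberosUrl)) {
            System.out.println("Connected via Kerberos: " + !conn.isClosed());
        }
    }
}
For the Oozie/Hue case the equivalent is making sure the action picks up the cluster's Kerberos-enabled hive-site.xml (hive.server2.authentication set to KERBEROS, SASL enabled for the metastore) rather than defaults that fall back to PLAIN.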

Squirrel Setup to connect to Phoenix - HBASE: Error java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.NoSuchMethodError:

I am a newbie to HBase & Phoenix. I am trying to connect to HBase via the Phoenix JDBC driver using the SQuirreL client. Somehow I get a strange error where the runtime complains of a NoSuchMethodError. I have included the relevant client jar, phoenix-4.4.0-HBase-1.0-client, in the lib folder of SQuirreL and have registered the driver successfully. When I try to connect I get the exception below, which seems odd: I have extracted the jar and confirmed that the method getCurrentUser() does exist in the org/apache/hadoop/security/UserGroupInformation.class file.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.NoSuchMethodError: org.apache.hadoop.security.UserGroupInformation.getCurrentUser()Lorg/apache/hadoop/security/UserGroupInformation;
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:202)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
I finally figured it out. hadoop-0.20.2-core.jar and phoenix-4.4.0-HBase-1.0-client.jar contain overlapping classes, and both were in the lib folder of the SQuirreL client. hadoop-0.20.2-core.jar is required to connect to Hive, whereas the Phoenix client jar is required to connect to HBase.
Whenever I need to connect to one or the other, I have to remove the conflicting jar from the lib folder before launching the SQuirreL client.
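A quick way to confirm this kind of classpath conflict is to ask the JVM which jar it actually loaded the disputed class from. This is just a diagnostic sketch; run it with the same lib folder on the classpath that SQuirreL uses:
import org.apache.hadoop.security.UserGroupInformation;

public class WhichJarLoadedIt {
    public static void main(String[] args) {
        // Prints the jar that UserGroupInformation was loaded from.
        // If this points at hadoop-0.20.2-core.jar instead of the Phoenix client jar,
        // an older copy of the class (likely without getCurrentUser()) is shadowing the newer one.
        System.out.println(
            UserGroupInformation.class.getProtectionDomain().getCodeSource().getLocation());
    }
}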
