I created a VMware virtual machine on my computer running Ubuntu. I set up Apache Knox on it using the demo LDAP, and I'm currently trying to set up a connection string to Knox through SQuirreL. I can't use the Hortonworks sandboxes because I need this to be compatible with Hive under Cloudera. Before I start configuring Knox, I want to be able to connect to it using the Hive JDBC driver. Here is the string that I have so far:
jdbc:hive2://<host>:8443/;ssl=1;sslTrustStore=/gateway.jks;trustStorePassword=<master secret>?hive.server2.transport.mode=http;httpPath=gateway/default/hive
My specific questions are:
What path should I be using for my sslTrustStore? It's currently located in /home/<user>/Downloads/knox-1.0.0/data/security/keystores/gateway.jks. I tried the same string with the full path but still no luck.
What should I be using for httpPath? My VM doesn't have Hive on it, since Knox will be connecting to a Hadoop node that runs Hive.
Is there anything else I'm missing in the connection string?
In SQuirreL, after I get the error message and click "stack trace", this is the general gist of what I get:
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: [Cloudera][HiveJDBCDriver](500164) Error initialized or created transport for authentication: \home\anudeep\Downloads\knox-1.0.0\data\security\keystores\gateway.jks (The system cannot find the path specified).
at java.util.concurrent.FutureTask.report(Unknown Source)
at java.util.concurrent.FutureTask.get(Unknown Source)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Can you connect to Hive directly, without Knox? Looking at the stack trace, it appears that the keystore (gateway.jks) is not found; this could be a permissions issue. Try installing Knox on the host machine. I had a lot of issues connecting to outside services (running on the host OS) from a VM, but that could just be me.
There are a few ways to debug this, but before that let me answer your questions:
You are right: you need to use the security/keystores/gateway.jks path so that Beeline (or any JDBC client) can trust the certificate presented by Knox.
It looks like you are using Apache Knox directly, so your path would be something like gateway/sandbox/hive (you need to update the HIVE service URL in the sandbox.xml topology). gateway/default/hive is mostly used by Knox instances configured by Ambari, which I don't think applies in your case.
Try making a few changes, such as ssl=true, and instead of a query string (?) use a semicolon (;) to attach the transport mode, i.e. ;transportMode=http
This is the connection string that works for me with Beeline:
beeline -u "jdbc:hive2://<knox-host>:8443/;ssl=true;sslTrustStore=/var/lib/knox/security/keystores/gateway.jks;trustStorePassword=<trustPassword>;transportMode=http;httpPath=gateway/sandbox/hive" -n admin -p admin-password
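To make the URL's structure explicit, here is a minimal sketch in plain Java that assembles the same kind of Knox JDBC URL from its parts. The host, truststore path, and password below are placeholders; substitute your own values:

```java
// Assembles a Knox-over-HTTP Hive JDBC URL from its parts.
// All arguments passed to build() below are placeholders.
public class KnoxUrl {
    public static String build(String host, int port,
                               String trustStore, String trustStorePassword) {
        return "jdbc:hive2://" + host + ":" + port + "/;"
                + "ssl=true;"                                // not ssl=1
                + "sslTrustStore=" + trustStore + ";"
                + "trustStorePassword=" + trustStorePassword + ";"
                + "transportMode=http;"                      // joined with ';', not '?'
                + "httpPath=gateway/sandbox/hive";
    }

    public static void main(String[] args) {
        System.out.println(build("knox.example.com", 8443,
                "/var/lib/knox/security/keystores/gateway.jks", "knox-secret"));
    }
}
```

Note that every option is joined with a semicolon; there is no query-string (`?`) segment at all.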
Now onto some debugging.
I think it will be easier if you simply download Knox onto your host OS (instead of the VM) and talk to Hive from there. Knox needs a line of sight to the services it proxies, and with VMs that can be tricky. I also find it more convenient to troubleshoot and check logs that way. You do not need Hive running on the same machine; a line of sight from Knox to Hive is enough.
Make sure hive-site.xml has the property hive.server2.transport.mode=http; this gets me every time :)
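In current Hive releases that setting is spelled hive.server2.transport.mode and lives in hive-site.xml; the fragment would look something like this:

```xml
<!-- hive-site.xml: make HiveServer2 listen over HTTP so Knox can proxy it -->
<property>
  <name>hive.server2.transport.mode</name>
  <value>http</value>
</property>
```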
This tutorial/example explains how to connect to HiveServer2 through Knox using JDBC; it uses Groovy scripting, but you can just look at the setup and connection strings.
Here is another example, using KnoxShell to connect to HiveServer2.
Hope this helps.
Related
For some weird reason, DBeaver 21.0.2 with Oracle driver 12.2.0.1 seems to take a configured IP address as the host, turn it into a hostname, and then use that hostname for the actual connection to the database.
That is clearly undesirable, as I access that database from an external workplace, and with all the additional networking involved, the IP address is clearly the way to go for me.
To make this even weirder, the resolution seems to take place after the connection is established. I mean, I had a typo in the service name at one point and got an ORA-12514 instead. As soon as I fixed that typo, I was back to the hostname resolution problem, which has no ORA number. So this might be something in DBeaver, then.
DBeaver's error protocol:
java.net.UnknownHostException: myunknowndbhost.sjngm.com
at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(Unknown Source)
at java.base/java.net.InetAddress.getAddressesFromNameService(Unknown Source)
at java.base/java.net.InetAddress$NameServiceAddresses.get(Unknown Source)
at java.base/java.net.InetAddress.getAllByName0(Unknown Source)
at java.base/java.net.InetAddress.getAllByName(Unknown Source)
at java.base/java.net.InetAddress.getAllByName(Unknown Source)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:126)
at oracle.net.nt.ConnOption.connect(ConnOption.java:161)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:470)
at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:521)
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:660)
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:639)
at oracle.net.ns.NSProtocolNIO.negotiateConnection(NSProtocolNIO.java:189)
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:317)
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1438)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:518)
at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:688)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:39)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:691)
at org.jkiss.dbeaver.model.impl.jdbc.JDBCDataSource.lambda$0(JDBCDataSource.java:176)
at org.jkiss.dbeaver.model.impl.jdbc.JDBCDataSource.openConnection(JDBCDataSource.java:195)
at org.jkiss.dbeaver.ext.oracle.model.OracleDataSource.openConnection(OracleDataSource.java:150)
at org.jkiss.dbeaver.model.impl.jdbc.JDBCExecutionContext.connect(JDBCExecutionContext.java:101)
at org.jkiss.dbeaver.model.impl.jdbc.JDBCRemoteInstance.initializeMainContext(JDBCRemoteInstance.java:100)
at org.jkiss.dbeaver.model.impl.jdbc.JDBCRemoteInstance.<init>(JDBCRemoteInstance.java:59)
at org.jkiss.dbeaver.model.impl.jdbc.JDBCDataSource.initializeRemoteInstance(JDBCDataSource.java:109)
at org.jkiss.dbeaver.model.impl.jdbc.JDBCDataSource.<init>(JDBCDataSource.java:97)
at org.jkiss.dbeaver.model.impl.jdbc.JDBCDataSource.<init>(JDBCDataSource.java:89)
at org.jkiss.dbeaver.ext.oracle.model.OracleDataSource.<init>(OracleDataSource.java:84)
at org.jkiss.dbeaver.ext.oracle.OracleDataSourceProvider.openDataSource(OracleDataSourceProvider.java:147)
at org.jkiss.dbeaver.registry.DataSourceDescriptor.connect(DataSourceDescriptor.java:896)
at org.jkiss.dbeaver.runtime.jobs.ConnectJob.run(ConnectJob.java:70)
at org.jkiss.dbeaver.runtime.jobs.ConnectionTestJob.run(ConnectionTestJob.java:103)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:105)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
So the question is: how do I turn off this hostname resolution?
This might be Oracle client behavior. During user authentication, all Oracle Database clients are required to report a "MACHINE" value to the server, describing the client's hostname/IP; this is what's shown in the machine column of the v$session database view. I think the JDBC thin client will do a reverse DNS lookup to try to fill in that value if necessary.
You can try setting the v$session.machine OracleConnection property: in DBeaver, edit your connection, click "Edit Driver Settings" on the Connection Settings page, open the "Driver properties" tab, and add a new property named v$session.machine with whatever name you want as the value.
Or you could try setting the ORACLE_HOSTNAME environment variable to your hostname. I'm unclear whether this would get picked up by the Oracle JDBC driver or not, but this documentation might be suggesting that it could help?
You could also try adding a hostname for your IP address in your HOSTS file. That seems the most likely to fix this, but it's the most annoying option to implement.
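For what it's worth, the first option can also be expressed in plain JDBC, by passing the same driver property in the Properties object handed to the driver. This is only a sketch, and the user, password, and machine name are placeholders (the commented-out URL is one too):

```java
import java.util.Properties;

public class MachineNameOverride {
    // Returns connection properties that pre-set the MACHINE value reported
    // to Oracle, using the same "v$session.machine" key as the DBeaver
    // driver property, so the thin driver has no need to resolve it itself.
    public static Properties oracleProps(String user, String password, String machine) {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        props.setProperty("v$session.machine", machine);
        return props;
    }

    public static void main(String[] args) {
        Properties props = oracleProps("scott", "tiger", "my-workstation");
        // Against a real database you would then call (URL is a placeholder):
        // Connection con = DriverManager.getConnection(
        //         "jdbc:oracle:thin:@//192.0.2.10:1521/ORCLPDB1", props);
        System.out.println(props.getProperty("v$session.machine"));
    }
}
```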
I have a Hadoop cluster running Hortonworks Data Platform 2.4.2 which has been running well for more than a year. The cluster is Kerberised and external applications connect via Knox. Earlier today, the cluster stopped accepting JDBC connections via Knox to Hive.
The Knox logs show no errors, but the Hive Server2 log shows the following error:
"Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User: knox is not allowed to impersonate org.apache.hive.service.cli.HiveSQLException: Failed to validate proxy privilege of knox for "
Having looked at other reports of this error, the suggestions mostly seem to revolve around correctly setting the hadoop.proxyuser configuration options for users and groups.
However, in my case I don't see how these settings could be the problem. The cluster has been running for over a year and we have a number of applications connecting to Hive via JDBC on a daily basis. The configuration of the server has not been changed and connections were previously succeeding on the current configuration. No changes had been made to the platform or environment and the cluster was not restarted or taken down for maintenance between the last successful JDBC connection and JDBC connections being declined.
I have now stopped and started the cluster, but after restart the cluster still does not accept JDBC connections.
Does anyone have any suggestions on how I should proceed?
Do you have Hive Impersonation turned on?
hive.server2.enable.doAs=true
This could be the issue, assuming the hadoop.proxyuser users and groups options are set properly.
Also, check whether the user 'knox' exists on the HiveServer2 node (and on any other nodes used for impersonation).
The known workaround seems to be to set:
hadoop.proxyuser.knox.groups = *
hadoop.proxyuser.knox.hosts = *
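For reference, those proxyuser entries live in core-site.xml on the cluster; the wildcard values below are the permissive workaround described above, not a recommendation:

```xml
<!-- core-site.xml: allow the 'knox' user to impersonate end users.
     '*' is the permissive workaround; restrict to real groups/hosts if you can. -->
<property>
  <name>hadoop.proxyuser.knox.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.knox.hosts</name>
  <value>*</value>
</property>
```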
I have yet to find a real fix that lets you keep this layer of added security.
I have set up a Hive environment with Kerberos security enabled on a Linux server (Red Hat), and I need to connect to Hive over JDBC from a remote Windows machine.
So I have HiveServer2 running on the Linux machine, and I have done a "kinit".
Now I try to connect from a Java program on the Windows side with a test program like this:
Class.forName("org.apache.hive.jdbc.HiveDriver");
String url = "jdbc:hive2://<host>:10000/default;principal=hive/_HOST@<YOUR-REALM.COM>";
Connection con = DriverManager.getConnection(url);
And I got the following error,
Exception due to: Could not open client transport with JDBC Uri:
jdbc:hive2://<host>:10000/;principal=hive/_HOST#YOUR-REALM.COM>:
GSS initiate failed
What am I doing wrong here? I checked many forums but couldn't find a proper solution. Any answer will be appreciated.
Thanks
If you were running your code on Linux, I would simply point you to that post, i.e. you must use System properties to define the Kerberos and JAAS configuration, from conf files with specific formats.
And you have to switch on the debug trace flags to understand subtle configuration issues (different flavors/versions of JVMs may have different syntax requirements, which are not documented; it's a trial-and-error process).
But on Windows there are additional problems:
the Apache Hive JDBC driver has some dependencies on Hadoop JARs, especially when Kerberos is involved (see that post for details)
these Hadoop JARs require "native libraries", i.e. a Windows port of Hadoop (which you have to compile yourself, or download from an untrusted source on the web!), plus the System properties hadoop.home.dir and java.library.path pointing to the Hadoop home dir and its bin sub-directory respectively
On top of that, the Apache Hive driver has compatibility issues: whenever there are changes in the wire protocol, newer clients cannot connect to older servers.
So I strongly advise you to use the Cloudera JDBC driver for Hive for your Windows clients. The Cloudera site just asks for your e-mail.
After that you have an 80+ page PDF manual to read, JARs to add to your CLASSPATH, and a JDBC URL to adapt according to the manual.
Side note: the Cloudera driver is a proper JDBC-4.x compliant driver, no need for that legacy Class.forName()...
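If you do go the Apache-driver route, the System properties mentioned above are usually set before the first connection attempt. A minimal sketch, where the two file paths are placeholders for your own krb5.conf and JAAS configuration files:

```java
public class KerberosClientSetup {
    public static void main(String[] args) {
        // Point the JVM at the Kerberos and JAAS configuration files.
        // Both paths are placeholders; substitute your own conf files.
        System.setProperty("java.security.krb5.conf", "C:/krb/krb5.conf");
        System.setProperty("java.security.auth.login.config", "C:/krb/jaas.conf");
        // Debug trace flag that helps diagnose "GSS initiate failed" errors.
        System.setProperty("sun.security.krb5.debug", "true");
        System.out.println(System.getProperty("java.security.krb5.conf"));
    }
}
```

These must be set (or passed as -D flags) before the driver first tries to authenticate; setting them after the first failed attempt has no effect on the already-initialized security layer.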
The key for us, when we ran into this problem, was the following:
On the server, certain Kerberos principals are listed that are allowed to operate on the data.
When we tried to run a query via JDBC, we hadn't done the proper kinit on the client side.
In this case the solution is obvious:
On the Windows client: do a kinit with the proper account before connecting.
String url = "jdbc:hive2://<host>:10000/default;principal=hive/_HOST@<YOUR-REALM.COM>";
Replace <YOUR-REALM.COM> with your real realm.
I just installed Oracle WebLogic 12.1.1 and followed this video's instructions:
youtube video
I entered everything the same as in the video, but when I tested it I got this exception:
Connection test failed.
IO exception: The Network Adapter could not establish the connection
oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:458)
oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:546)
oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:236)
oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:521)
weblogic.jdbc.common.internal.DataSourceUtil.testConnection(DataSourceUtil.java:298)
com.bea.console.utils.jdbc.JDBCUtils.testConnection(JDBCUtils.java:746)
com.bea.console.actions.jdbc.datasources.createjdbcdatasource.CreateJDBCDataSource.testConnectionConfiguration(CreateJDBCDataSource.java:474)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
java.lang.reflect.Method.invoke(Method.java:597)
org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:870)
org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:809)
org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:478)
org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:306)
org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:336)
org.apache.beehive.netui.pageflow.internal.FlowControllerAction.execute(FlowControllerAction.java:52)
org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)
org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.access$201(PageFlowRequestProcessor.java:97)
...
...
What could the error be, and how can I solve it? Please help me! Thank you!
Error: The Network Adapter could not establish the connection
The main cause of this error is that the database is down, not pingable, or otherwise unreachable. Check your DB services and make sure the database is running fine.
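A quick way to check "reachable" from the WebLogic host is a plain TCP probe of the listener port, before involving the driver at all. A small sketch; the host and port in main are placeholders for your own listener:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class DbReachability {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    public static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Placeholder host/port: replace with your database listener.
        System.out.println(canConnect("db.example.com", 1521, 2000));
    }
}
```

If this returns false, the problem is network/listener level (firewall, DB down, wrong host/port), and no amount of datasource configuration in WebLogic will help.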
First you have to create a DB with MySQL or Oracle Database (SQL*Plus), but that you have already done.
Then go into your IDE (Eclipse or NetBeans) and select the option to see services like Databases and Servers.
Under Databases, right-click (in NetBeans) and you will see the option "New Database Connection". Enter the credentials of the database you have already created, and this should resolve your problem if you are using NetBeans.
WebLogic gives us many database options to choose from. You have to make sure your database is up and running before you try to connect. Try creating a small DB table and querying it with a SELECT to check that your DB is running properly. Then connect, giving your DB details correctly, such as the name and type of the DB. Derby is the built-in database provided with WebLogic.
In my case, the error was in WebLogic 12.2.1.3.0.
I was creating a new datasource connection using a tnsnames entry that works perfectly fine in WebLogic 12.1.3.
The fix was to increase TRANSPORT_CONNECT_TIMEOUT (from 3 to 10) in the connection defined in tnsnames.ora, because apparently it wasn't enough to establish a connection.
After that, the error was:
Could not establish a connection because of java.lang.IllegalArgumentException: ONS configuration failed
I solved this by putting this in setDomainEnv:
-Doracle.jdbc.fanEnabled=false
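For reference, a tnsnames.ora entry with the longer transport timeout might look like this sketch (the alias, host, and service name are placeholders):

```
MYDB =
  (DESCRIPTION =
    (TRANSPORT_CONNECT_TIMEOUT = 10)
    (ADDRESS = (PROTOCOL = TCP)(HOST = db.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb.example.com))
  )
```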
I'm trying to connect PHPStorm to the database on the server of my website, hosted on a Linux box at BlueHost.
In order to do this, I clicked the Data Sources tab on the right side of the screen, then the + icon, and then DB Data Source; a Data Source Properties dialog popped up. I entered a name for the source and set Data Source Level to Project. Then I downloaded the MySQL Connector/J 5.1.18 JDBC driver files, which filled the JDBC Driver Class field with about 6-7 classes, including com.mysql.jdbc.Driver.
The problem, I believe, is with the Database URL. It's asking for a jdbc: URL, which I'm not familiar with. I used the example format jdbc:mysql://[host][,failoverhost...][:port]/[database] together with my username and password.
For the host, I've tried localhost and the name of the MySQL server, box###.bluehost.com; the port is 3306; and the name of the database was pretty straightforward.
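Filled in, the URL for a remote database on shared hosting would look something like the following sketch, where the host and database name are placeholders (BlueHost host names follow the box###.bluehost.com pattern mentioned above):

```
jdbc:mysql://box123.bluehost.com:3306/mydbname
```

The username and password are usually entered in their own fields in the IDE rather than embedded in the URL.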
I received this error when trying to test the connection:
Connection to Data Source failed
java.sql.SQLException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1116)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:344)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2332)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2369)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2153)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:792)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:381)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:305)
in RemoteDriverImpl.connect(RemoteDriverImpl.java:27)
in LocalDataSource.getConnection(LocalDataSource.java:105)
The major difference between this question and others here is that this is not about connecting to a WAMP or MAMP stack, but to a remote database.
If you are using MAMP and you are facing this problem, just open MAMP and uncheck "Allow local access only".
BlueHost required that I add my IP address range to the allowed access hosts list. I am on a shared hosting account, so BlueHost apparently allows remote connections even on shared hosting.
I did this by clicking "Remote MySQL" next to phpMyAdmin in the cPanel for my hosting account. cPanel automatically detected my IP range and suggested I add it. Once I did, everything else fell into place.
Thanks to LazyOne for pointing me in the right direction.