How to install JDBC driver on Databricks Cluster?

I'm trying to get data from my Oracle database into a Databricks cluster, but I think I'm doing it wrong.
On the cluster library I just installed ojdbc8.jar, and then I opened a notebook and ran this to connect:
CREATE TABLE oracle_table
USING org.apache.spark.sql.jdbc
OPTIONS (
  dbtable 'table_name',
  driver 'oracle.jdbc.driver.OracleDriver',
  user 'username',
  password 'password',
  url 'jdbc:oracle:thin://@<hostname>:1521/<db>')
And it says:
java.sql.SQLException: Invalid Oracle URL specified
Can someone help? I've been reading the documentation but there are no clear step-by-step instructions on how to actually install this jar. Am I using the wrong jar? Thanks!

I have managed to set this up in Python/PySpark as follows:
jdbcUrl = "jdbc:oracle:thin:@//hostName:port/databaseName"
connectionProperties = {
    "user": username,
    "password": password,
    "driver": "oracle.jdbc.driver.OracleDriver"
}
query = "(select * from mySchema.myTable) t"  # a subquery used as a table needs an alias
df = spark.read.jdbc(url=jdbcUrl, table=query, properties=connectionProperties)
I am using the Oracle JDBC Thin Driver from instantclient-basic-linux.x64-21.5.0.0.0, as available on the Oracle download pages. The current version is 21.7, I think, but it should work the same way.
See the Oracle documentation to understand the two different notations for JDBC thin URLs.
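For reference, here is a minimal sketch of the two thin URL notations (host, port, SID, and service name are placeholders):

# Older notation, addressing the database by SID:
sid_url = "jdbc:oracle:thin:@hostName:1521:mySID"

# Newer notation, addressing it by service name (note the // after the @):
service_url = "jdbc:oracle:thin:@//hostName:1521/myServiceName"

The "Invalid Oracle URL specified" error in the question comes from mixing the two forms: thin://@host is neither of them.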

Related

RobotFramework Oracle DB Connection Issue

I'm trying to check the Oracle DB connection using the keyword:
Connect To Database Using Custom Params
Following libraries are imported:
Database Library
JayDeBe API
This is the connection string used:
'oracle.jdbc.driver.OracleDriver', 'jdbc:oracle:thin:@//DBHostname:Port/DBName', ['user', 'pass']
We do not get any response that identifies whether the connection was established or rejected.
We see this message in RIDE:
'oracle.jdbc.driver.OracleDriver', 'jdbc:oracle:thin:@//DBHostName:Port/DBName', ['user', 'pass']
20170927 17:07:54.438 : INFO : Executing : Connect To Database Using Custom Params : jaydebeapi.connect(db_api_2.connect('oracle.jdbc.driver.OracleDriver', 'jdbc:oracle:thin:@//DBHostName:Port/DBName', ['user', 'pass']))
Can someone please help us?
We found the solution. In our case we have Python 2.7 (32-bit), therefore we needed the following:
1. Oracle Instant Client Downloads for Microsoft Windows (32-bit), which you can download from: http://www.oracle.com/technetwork/topics/winsoft-085727.html
2. Set the path to the Oracle Instant Client in the environment variables.
3. Download the cx_Oracle API (32-bit: cx_Oracle-6.0.2-cp27-cp27m-win32.whl) from: https://pypi.python.org/pypi/cx_Oracle/
4. Open a command prompt and run: pip install cx_Oracle --upgrade
5. Open RIDE and use the following keyword:
Connect To Database Using Custom Params cx_Oracle 'user', 'password', 'DBHOSTNAME:PORT/DBNAME'
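To sanity-check the driver and connection string outside RIDE, a minimal cx_Oracle sketch (host, port, and credentials are placeholders) looks like this:

import cx_Oracle

# DSN in EZConnect form: host:port/service_name
conn = cx_Oracle.connect('user', 'password', 'DBHOSTNAME:PORT/DBNAME')
cursor = conn.cursor()
cursor.execute('select 1 from dual')   # trivial query to confirm the session works
print(cursor.fetchone())
conn.close()

If this works, the same parameters should work with the Connect To Database Using Custom Params keyword.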

How to load table from SQL server using H2o in R?

I tried to load a table into R using h2o but got the following error:
my_data <- h2o.import_sql_table(my_sql_conn, table, username, password)
ERROR: Unexpected HTTP Status code: 500 Server Error (url = http://localhost:54321/99/ImportSQLTable)
java.lang.RuntimeException [1] "java.lang.RuntimeException: SQLException: No suitable driver found for jdbc:mysql://10.140.20.29/MySQL?&useSSL=false\nFailed to connect and read from SQL database with connection_url: jdbc:mysql://10.140.20.29/MySQL?&useSSL=false"
Can someone help me with this? Thank you so much!
You need a supported JDBC driver (built on JDBC 4.2 Core) to connect from H2O to SQL Server. You can download the Microsoft JDBC Driver 4.2 for SQL Server from the link below first:
https://www.microsoft.com/en-us/download/details.aspx?id=54671
After that, please follow the article below to first test the JDBC driver from the R/Python H2O client and then connect to your database:
https://aichamp.wordpress.com/2017/03/20/building-h2o-glm-model-using-postgresql-database-and-jdbc-driver/
The article above is for Postgres; however, you can use it with SQL Server by swapping in the appropriate driver.
For Windows, remember to use ; instead of : in the -cp argument.
java -Xmx4g -cp sqljdbc42.jar;h2o.jar water.H2OApp -port 3333
water.H2OApp is the main class in h2o.jar.
Important note: SQL Server is not supported so far (as of August 2017).
You may use MariaDB to load datasets:
From Windows console:
java -Xmx4G -cp mariadb-java-client-2.1.0.jar;h2o.jar water.H2OApp -port 3333
Note: for Linux, replace ";" with ":", i.e. java -Xmx4G -cp mariadb-java-client-2.1.0.jar:h2o.jar water.H2OApp -port 3333
From R:
sqlConn <- "jdbc:mariadb://10.106.7.46:3306/DBName"
userName <- "dbuser"
userPass <- "dbpass."
sql_Query <- "SELECT * FROM dbname.tablename;"
mydata <- h2o.import_sql_select( sqlConn, sql_Query, userName, userPass )
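The same call is available from the H2O Python client. A minimal sketch under the same setup (H2O started with the MariaDB driver on its classpath; host, port, and credentials are placeholders):

import h2o

h2o.connect(ip="localhost", port=3333)   # attach to the H2O instance started above

sql_conn = "jdbc:mariadb://10.106.7.46:3306/DBName"
sql_query = "SELECT * FROM dbname.tablename;"
mydata = h2o.import_sql_select(sql_conn, sql_query, "dbuser", "dbpass")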

Hortonworks Hive Username password configuration

I am attempting to get a connection to the hive server and keep getting:
HiveAccessControlException Permission denied: user [hue] does not have [CREATE] privilege on [default/testHiveDriverTable]
Not sure what username/password combination I should use.
I have Hortonworks running on a virtual machine. The code I am attempting to use to connect to the Hive server is:
Connection con = DriverManager.getConnection("jdbc:hive2://192.168.0.6:10000/default", "hue", "");
I have also tried connecting via the "root", "Hadoop" username, password configuration.
By the look of the error message, the Connection is OK, but you lack the privileges to execute the Statement just below it... because that "testHiveDriverTable" label smells like a copy/paste from some dummy code example such as:
Connection con = DriverManager.getConnection("jdbc:hive://10.0.2.15:10000/default", "", "");
Statement stmt = con.createStatement();
String tableName = "testHiveDriverTable";
stmt.executeQuery("drop table " + tableName);
stmt.executeQuery("create table " + tableName + " (key int, value string)");
The "no [CREATE] privilege on [default/testHiveDriverTable]" message is a good fit for a create table testHiveDriverTable attempt in database default.
Bottom line: congratulations, you actually could connect to Hive. Now you can work on how to configure the "default" database so that you can actually create tables :-)
Have you tried connecting without giving a username and password? I have the following connection string on my Cloudera VirtualBox image, which works:
Connection con = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "", "");
Here default is the DB name.

Not able to connect to hive on AWS EMR using java

I have set up an AWS EMR cluster with Hive. I want to connect to the Hive Thrift server from my local machine using Java. I tried the following code:
Class.forName("com.amazon.hive.jdbc3.HS2Driver");
con = DriverManager.getConnection("jdbc:hive2://ec2XXXX.compute-1.amazonaws.com:10000/default","hadoop", "");
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/HiveJDBCDriver.html
As mentioned in the developer guide, I added the jars for the Hive JDBC driver to the classpath.
But I am getting an exception when trying to get the connection.
I was able to connect to a Hive server on a plain Hadoop cluster using the above code (with a different JDBC driver).
Can someone please suggest if I am missing something?
Is it possible to connect to the Hive server on AWS EMR from a local machine using Hive JDBC?
(Merged Answer from the comments)
Hive is running on port 10000, but only locally; you have to create an SSH tunnel to the EMR master node.
The following is from the documentation for Hive 0.13.1:
Create Tunnel
ssh -o ServerAliveInterval=10 -i path-to-key-file -N -L 10000:localhost:10000 hadoop@master-public-dns-name
Connect to JDBC
jdbc:hive2://localhost:10000/default
You can create the tunnel from Java using the JSch library:
public static void portForwardForHive() {
    try {
        if (session != null && session.isConnected()) {
            return;   // tunnel already up
        }
        JSch jsch = new JSch();
        jsch.addIdentity(PATH_TO_SSH_KEY_PEM);
        String host = REMOTE_HOST;
        session = jsch.getSession(USER, host, 22);
        // username and password will be given via the UserInfo interface.
        UserInfo ui = new MyUserInfo();
        session.setUserInfo(ui);
        session.connect();
        // forward local port LPORT to RHOST:RPORT through the SSH session
        int assignedPort = session.setPortForwardingL(LPORT, RHOST, RPORT);
        System.out.println("Port forwarding done for the port : " + assignedPort);
    } catch (Exception e) {
        System.out.println(e);
    }
}
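Once the tunnel is up, you connect through its local end. For example, from Python with the pyhive package (an assumption here; any Hive client pointed at localhost:10000 would do):

from pyhive import hive   # assumes the pyhive package is installed

# Connect through the local end of the SSH tunnel created above
conn = hive.Connection(host="localhost", port=10000,
                       username="hadoop", database="default")
cursor = conn.cursor()
cursor.execute("SHOW TABLES")   # trivial statement to confirm the session works
print(cursor.fetchall())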
Not sure if you've resolved this yet, but it's a bug in EMR that has just bitten me.
For direct JDBC connectivity like you are doing, you must include the JDBC drivers in your shaded uber-jar. For JDBC access from within DataFrames, you cannot access the jar in your uber-jar (another, unrelated bug); instead you must specify it on the command line (S3 is a convenient place to keep them):
--files s3://mybucketJAR/postgresql-9.4-1201.jdbc4.jar
However, even after this you will run into another problem if you are specifically trying to access Hive. Amazon has built its own JDBC drivers with a different class hierarchy from the normal Hive driver (com.amazon.hive.jdbc41.HS2Driver); however, the EMR cluster includes the standard Hive JDBC driver on its standard path (org.apache.hive.jdbc.HiveDriver).
This is automatically registered as being capable of handling the jdbc:hive and jdbc:hive2 URLs, so when you try to connect to a Hive URL the DriverManager finds this one first and uses it - even if you specifically register the Amazon one. Unfortunately, this one is not compatible with Amazon's EMR build of Hive.
There are two possible solutions:
1: Find the offending driver and unregister it:
Scala example:
val jdbcDrv = Collections.list(DriverManager.getDrivers)
for (i <- 0 until jdbcDrv.size) {
  val drv = jdbcDrv.get(i)
  val drvName = drv.getClass.getName
  if (drvName == "org.apache.hive.jdbc.HiveDriver") {
    log.info(s"Deregistering JDBC Driver: ${drvName}")
    DriverManager.deregisterDriver(drv)
  }
}
Or
2: As I found out later, you can specify the driver as part of the connect properties when you attempt to connect:
Scala example:
val hiveCredentials = new java.util.Properties
hiveCredentials.setProperty("user", hiveDBUser)
hiveCredentials.setProperty("password", hiveDBPassword)
hiveCredentials.setProperty("driver", "com.amazon.hive.jdbc41.HS2Driver")
val conn = DriverManager.getConnection(hiveDBURL, hiveCredentials)
This is a more "correct" version as it should override any preregistered handlers even if they have completely different class hierarchies.
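The same idea can be sketched from Python with jaydebeapi (the library used earlier in this thread); the jar path and credentials below are placeholders:

import jaydebeapi

# The Amazon driver class and its jar are named explicitly here;
# jaydebeapi loads the class before asking for a connection.
conn = jaydebeapi.connect(
    "com.amazon.hive.jdbc41.HS2Driver",
    "jdbc:hive2://localhost:10000/default",
    ["hiveuser", "hivepass"],
    "/path/to/the/amazon-hive-jdbc.jar")
curs = conn.cursor()
curs.execute("show tables")
print(curs.fetchall())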

How to connect Hive in iReport?

I am using Hadoop-0.20.0 and Hive-0.8.0. Now I have data in a Hive table and I want to generate reports from it. For that I am using iReport-4.5.0, and I also downloaded HivePlugin-0.5.nbm for iReport.
Now I am going to connect Hive connection in iReport.
Create New Data source --> New --> Hive Connection
Jdbc Driver: org.apache.hadoop.hive.jdbc.HiveDriver
Jdbc URL: jdbc:hive://localhost:10000/default
Server Address: localhost
Database: default
user name: root
password: somepassword
Then click on Test connection button.
I am getting an error like:
Exception
Message:
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: java.lang.RuntimeException: Illegal Hadoop Version: Unknown (expected A.B.* format)
Level:
SEVERE
Stack Trace:
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException:
java.lang.RuntimeException: Illegal Hadoop Version: Unknown (expected A.B.* format)
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:226)
org.apache.hadoop.hive.jdbc.HiveConnection.<init>(HiveConnection.java:72)
org.apache.hadoop.hive.jdbc.HiveDriver.connect(HiveDriver.java:110)
com.jaspersoft.ireport.designer.connection.JDBCConnection.getConnection(JDBCConnection.java:140)
com.jaspersoft.ireport.hadoop.hive.connection.HiveConnection.getConnection(HiveConnection.java:48)
com.jaspersoft.ireport.designer.connection.JDBCConnection.test(JDBCConnection.java:447)
com.jaspersoft.ireport.designer.connection.gui.ConnectionDialog.jButtonTestActionPerformed(ConnectionDialog.java:335)
com.jaspersoft.ireport.designer.connection.gui.ConnectionDialog.access$300(ConnectionDialog.java:43)
Can anyone help me with this? Where am I going wrong, or what am I missing?
"I also download HivePlugin-0.5.nbm in iReport."
This isn't clear. iReport 4.5 has the Hadoop Hive connector pre-installed. Why did you download the connector separately? Did you install this plugin?
Create New Data source --> New --> Hive Connection
Jdbc Driver: org.apache.hadoop.hive.jdbc.HiveDriver
...
This isn't possible with the current Hadoop Hive connector. When you create a new "Hadoop Hive Connection" you are given only one parameter to fill out: the url.
I'm guessing that you created a JDBC connection when you meant to create a Hadoop Hive connection. This is a logical thing to do. Hive is accessed via JDBC. But the Hive JDBC driver is still pretty new. It has a number of shortcomings. That's why the Hive connector was added to iReport. It is based on the Hive JDBC driver, but it includes a wrapper around it to avoid some problems.
Or maybe you installed an old Hive connector over the top of the one that's already included with iReport 4.5. At some point in the past the Hive connector let you fill in extra information like the JDBC Driver.
Start with a fresh iReport installation, and make sure you use the Hadoop Hive Connection. That should clear it up.
The error "java.lang.RuntimeException: Illegal Hadoop Version: Unknown (expected A.B.* format)" happens because the VersionInfo class in hadoop-common.jar attempts to locate the version info using the current thread's class loader.
https://github.com/apache/hadoop/blob/release-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionInfo.java#L41-L58
The code in question looks like this...
package org.apache.hadoop.util;
...
public class VersionInfo {
  ...
  protected VersionInfo(String component) {
    info = new Properties();
    String versionInfoFile = component + "-version-info.properties";
    InputStream is = null;
    try {
      is = Thread.currentThread().getContextClassLoader()
          .getResourceAsStream(versionInfoFile);
      if (is == null) {
        throw new IOException("Resource not found");
      }
      info.load(is);
    } catch (IOException ex) {
      LogFactory.getLog(getClass()).warn("Could not read '" +
          versionInfoFile + "', " + ex.toString(), ex);
    } finally {
      IOUtils.closeStream(is);
    }
  }
If your tool attempts to connect to the datasource from a separate thread, whose context class loader may not be able to see hadoop-common.jar, it will generate this error.
The easiest way to work around the issue is to put the hadoop-common.jar library in $JAVA_HOME/lib/ext, or to use the command-line setting -Djava.endorsed.dirs to point to the hadoop-common.jar library. Then the thread's class loader will always be able to find this information.
