I cannot connect to my Google BigQuery dataset via the Simba JDBC driver.
I want to connect from an R application using the RJDBC package. I set the parameters as follows:
library(RJDBC)
driver <- JDBC(driverClass = "com.simba.googlebigquery.jdbc42.Driver", classPath = "~/JDBC/GoogleBigQueryJDBC42.jar", identifier.quote = "'")
conn <- dbConnect(driver,"jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=My_project_Id;OAuthType=1;")
but I receive an error saying:
Error in .jcall(drv#jdrv, "Ljava/sql/Connection;", "connect", as.character(url)[1], :
java.lang.NoClassDefFoundError: com/google/api/client/json/JsonFactory
Can you tell me what I am doing wrong?
I found the problem: I should have added the required libraries to the Java classpath. So in R I executed the following commands:
.jaddClassPath("jackson-core-2.1.3.jar")
.jaddClassPath("google-oauth-client-1.22.0.jar")
.jaddClassPath("google-http-client-jackson2-1.22.0.jar")
.jaddClassPath("google-http-client-1.22.0.jar")
.jaddClassPath("GoogleBigQueryJDBC41.jar")
.jaddClassPath("google-api-services-bigquery-v2-rev320-1.22.0.jar")
.jaddClassPath("google-api-client-1.22.0.jar")
I'm trying to get the data from my Oracle database into a Databricks cluster. But I think I'm doing it wrong:
In the cluster libraries I just installed ojdbc8.jar, and after that I opened a notebook and did this to connect:
CREATE TABLE oracle_table
USING org.apache.spark.sql.jdbc
OPTIONS (
dbtable 'table_name',
driver 'oracle.jdbc.driver.OracleDriver',
user 'username',
password 'password',
url 'jdbc:oracle:thin://#<hostname>:1521/<db>')
And it says:
java.sql.SQLException: Invalid Oracle URL specified
Can someone help? I've been reading the documentation, but there are no clear step-by-step instructions on how to actually install this jar. Am I maybe using the wrong jar? Thanks!
I have managed to set this up in Python/PySpark as follows:
jdbcUrl = "jdbc:oracle:thin:@//hostName:port/databaseName"
connectionProperties = {
"user" : username,
"password" : password,
"driver" : "oracle.jdbc.driver.OracleDriver"
}
query = "(select * from mySchema.myTable)"
df = spark.read.jdbc(url=jdbcUrl, table=query, properties=connectionProperties)
I am using the Oracle JDBC Thin Driver instantclient-basic-linux.x64-21.5.0.0.0, as available on the Oracle website. The current version is 21.7, I think, but it should work the same way.
See the Oracle JDBC documentation to understand the two different notations for JDBC URLs.
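For reference, the two thin-driver notations look like this (hostname, port, service name, and SID here are placeholders, not values from this setup):
jdbc:oracle:thin:@//<hostname>:<port>/<service_name> (service-name notation, as used above)
jdbc:oracle:thin:@<hostname>:<port>:<SID> (older SID notation)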
I tried to load a table into R using h2o but got the following error:
my_data <- h2o.import_sql_table(my_sql_conn, table, username, password)
ERROR: Unexpected HTTP Status code: 500 Server Error (url = http://localhost:54321/99/ImportSQLTable)
java.lang.RuntimeException [1] "java.lang.RuntimeException: SQLException: No suitable driver found for jdbc:mysql://10.140.20.29/MySQL?&useSSL=false\nFailed to connect and read from SQL database with connection_url: jdbc:mysql://10.140.20.29/MySQL?&useSSL=false"
Can someone help me with this? Thank you so much!
You need a supported JDBC driver (built on the JDBC 4.2 core) to connect from H2O to SQL Server. You can download Microsoft JDBC Driver 4.2 for SQL Server from the link below first:
https://www.microsoft.com/en-us/download/details.aspx?id=54671
After that, please follow the article below to first test the JDBC driver from the R/Python H2O client and then connect to your database:
https://aichamp.wordpress.com/2017/03/20/building-h2o-glm-model-using-postgresql-database-and-jdbc-driver/
The article above is for Postgres; however, you can use it with SQL Server and an appropriate driver.
On Windows, remember to use ; instead of : as the separator in the -cp argument.
java -Xmx4g -cp sqljdbc42.jar;h2o.jar water.H2OApp -port 3333
water.H2OApp is the main class in h2o.jar.
Important note: SQL Server is not supported so far (as of August 2017).
You may use MariaDB to load datasets:
From the Windows console:
java -Xmx4G -cp mariadb-java-client-2.1.0.jar;h2o.jar water.H2OApp -port 3333
Note: on Linux, replace ";" with ":".
From R:
sqlConn <- "jdbc:mariadb://10.106.7.46:3306/DBName"
userName <- "dbuser"
userPass <- "dbpass."
sql_Query <- "SELECT * FROM dbname.tablename;"
mydata <- h2o.import_sql_select( sqlConn, sql_Query, userName, userPass )
I am trying to connect to Teradata and DB2 from Pyspark.
I am using the following jars:
tdgssconfig-15.10.00.14.jar
teradata-connector-1.4.1.jar
terajdbc4-15.10.00.14.jar
&
db2jcc4.jar
Connection strings:
df1 = sqlContext.load(source="jdbc", driver="com.teradata.jdbc.TeraDriver", url=db_url,user="db_user",TMODE="TERA",password="db_pwd",dbtable="U114473.EMPLOYEE")
df = sqlContext.read.format('jdbc').options(url='jdbc:db2://10.123.321.9:50000/DB599641',user='******',password='*****',driver='com.ibm.db2.jcc.DB2Driver', dbtable='DSN1.EMPLOYEE')
Both give me a "driver not found" error.
Can we use JDBC drivers with PySpark?
As James Tobin said, use the pyspark2 --jars option when you start your pyspark session, or pass the same option when you submit your .py file to Spark.
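A sketch using the jars from the question (the /path/to locations and my_script.py are placeholders; --jars takes a comma-separated list):
pyspark2 --jars /path/to/terajdbc4-15.10.00.14.jar,/path/to/tdgssconfig-15.10.00.14.jar,/path/to/db2jcc4.jar
spark-submit --jars /path/to/terajdbc4-15.10.00.14.jar,/path/to/tdgssconfig-15.10.00.14.jar,/path/to/db2jcc4.jar my_script.py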
I have set up an AWS EMR cluster with Hive. I want to connect to the Hive thrift server from my local machine using Java. I tried the following code:
Class.forName("com.amazon.hive.jdbc3.HS2Driver");
con = DriverManager.getConnection("jdbc:hive2://ec2XXXX.compute-1.amazonaws.com:10000/default","hadoop", "");
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/HiveJDBCDriver.html
As mentioned in the developer guide above, I added the jars related to the Hive JDBC driver to the classpath, but I am getting an exception when trying to get the connection.
I was able to connect to the Hive server on a plain Hadoop cluster using the above code (with a different JDBC driver).
Can someone please suggest if I am missing something?
Is it possible to connect to the Hive server on AWS EMR from a local machine using Hive JDBC?
(Merged Answer from the comments)
Hive is running on port 10000, but only locally; you have to create an SSH tunnel to the EMR master node.
The following is from the documentation for Hive 0.13.1.
Create Tunnel
ssh -o ServerAliveInterval=10 -i path-to-key-file -N -L 10000:localhost:10000 hadoop@master-public-dns-name
Connect to JDBC
jdbc:hive2://localhost:10000/default
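With the tunnel up, the connection code from the question should work against localhost. A minimal sketch (same driver class and credentials as in the question; the Amazon driver jars are assumed to be on the classpath):
import java.sql.Connection;
import java.sql.DriverManager;
Class.forName("com.amazon.hive.jdbc3.HS2Driver");
// localhost:10000 is forwarded through the SSH tunnel to the EMR master node.
Connection con = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "hadoop", "");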
Alternatively, you can create the tunnel programmatically using the JSch library:
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import com.jcraft.jsch.UserInfo;

private static Session session;

public static void portForwardForHive() {
    try {
        // Reuse an existing tunnel if one is already up.
        if (session != null && session.isConnected()) {
            return;
        }
        JSch jsch = new JSch();
        jsch.addIdentity(PATH_TO_SSH_KEY_PEM);
        session = jsch.getSession(USER, REMOTE_HOST, 22);
        // Username and password are supplied via the UserInfo interface.
        UserInfo ui = new MyUserInfo();
        session.setUserInfo(ui);
        session.connect();
        // Forward local port LPORT to RHOST:RPORT through the SSH session.
        int assignedPort = session.setPortForwardingL(LPORT, RHOST, RPORT);
        System.out.println("Port forwarding done for the port : " + assignedPort);
    } catch (Exception e) {
        System.out.println(e);
    }
}
Not sure if you've resolved this yet, but it's a bug in EMR that's just bitten me.
For direct JDBC connectivity like you are doing, you must include the JDBC drivers in your shaded uber-jar. For JDBC access from within DataFrames, you cannot access the jar in your uber-jar (another, unrelated bug); instead, you must specify it on the command line (S3 is a convenient place to keep them):
--files s3://mybucketJAR/postgresql-9.4-1201.jdbc4.jar
However, even after this you will run into another problem if you are specifically trying to access Hive. Amazon has built its own JDBC drivers with a different class hierarchy from the normal Hive driver (com.amazon.hive.jdbc41.HS2Driver), but the EMR cluster includes the standard Hive JDBC driver on its standard path (org.apache.hive.jdbc.HiveDriver).
The standard driver is automatically registered as being capable of handling the jdbc:hive and jdbc:hive2 URLs, so when you try to connect to a Hive URL, DriverManager finds this one first and uses it, even if you specifically register the Amazon one. Unfortunately, this one is not compatible with Amazon's EMR build of Hive.
There are two possible solutions:
1: Find the offending driver and unregister it:
Scala example:
val jdbcDrv = Collections.list(DriverManager.getDrivers)
for(i <- 0 until jdbcDrv.size) {
val drv = jdbcDrv.get(i)
val drvName = drv.getClass.getName
if(drvName == "org.apache.hive.jdbc.HiveDriver") {
log.info(s"Deregistering JDBC Driver: ${drvName}")
DriverManager.deregisterDriver(drv)
}
}
Or
2: As I found out later, you can specify the driver as part of the connection properties when you attempt to connect:
Scala example:
val hiveCredentials = new java.util.Properties
hiveCredentials.setProperty("user", hiveDBUser)
hiveCredentials.setProperty("password", hiveDBPassword)
hiveCredentials.setProperty("driver", "com.amazon.hive.jdbc41.HS2Driver")
val conn = DriverManager.getConnection(hiveDBURL, hiveCredentials)
This is a more "correct" version as it should override any preregistered handlers even if they have completely different class hierarchies.
I am using Hadoop-0.20.0 and Hive-0.8.0. I now have data in a Hive table and I want to generate reports from it, so I am using iReport-4.5.0. For that I also downloaded HivePlugin-0.5.nbm for iReport.
Now I am trying to create a Hive connection in iReport:
Create New Data source --> New --> Hive Connection
Jdbc Driver: org.apache.hadoop.hive.jdbc.HiveDriver
Jdbc URL: jdbc:hive://localhost:10000/default
Server Address: localhost
Database: default
username: root
password: somepassword
Then click on Test connection button.
I am getting an error like:
Exception
Message:
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: java.lang.RuntimeException: Illegal Hadoop Version: Unknown (expected A.B.* format)
Level:
SEVERE
Stack Trace:
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException:
java.lang.RuntimeException: Illegal Hadoop Version: Unknown (expected A.B.* format)
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:226)
org.apache.hadoop.hive.jdbc.HiveConnection.<init>(HiveConnection.java:72)
org.apache.hadoop.hive.jdbc.HiveDriver.connect(HiveDriver.java:110)
com.jaspersoft.ireport.designer.connection.JDBCConnection.getConnection(JDBCConnection.java:140)
com.jaspersoft.ireport.hadoop.hive.connection.HiveConnection.getConnection(HiveConnection.java:48)
com.jaspersoft.ireport.designer.connection.JDBCConnection.test(JDBCConnection.java:447)
com.jaspersoft.ireport.designer.connection.gui.ConnectionDialog.jButtonTestActionPerformed(ConnectionDialog.java:335)
com.jaspersoft.ireport.designer.connection.gui.ConnectionDialog.access$300(ConnectionDialog.java:43)
Can any one help me in this? Where i am wrong or missing something?
"I also download HivePlugin-0.5.nbm in iReport."
This isn't clear. iReport 4.5 has the Hadoop Hive connector pre-installed. Why did you download the connector separately? Did you install this plugin?
Create New Data source --> New --> Hive Connection
Jdbc Driver: org.apache.hadoop.hive.jdbc.HiveDriver
...
This isn't possible with the current Hadoop Hive connector. When you create a new "Hadoop Hive Connection" you are given only one parameter to fill out: the url.
I'm guessing that you created a JDBC connection when you meant to create a Hadoop Hive connection. This is a logical thing to do. Hive is accessed via JDBC. But the Hive JDBC driver is still pretty new. It has a number of shortcomings. That's why the Hive connector was added to iReport. It is based on the Hive JDBC driver, but it includes a wrapper around it to avoid some problems.
Or maybe you installed an old Hive connector over the top of the one that's already included with iReport 4.5. At some point in the past the Hive connector let you fill in extra information like the JDBC Driver.
Start with a fresh iReport installation, and make sure you use the Hadoop Hive Connection. That should clear it up.
The error "java.lang.RuntimeException: Illegal Hadoop Version: Unknown (expected A.B.* format)" happens because the VersionInfo class in hadoop-common.jar attempts to locate its version-info properties file using the current thread's context class loader.
https://github.com/apache/hadoop/blob/release-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionInfo.java#L41-L58
The code in question looks like this...
package org.apache.hadoop.util;
...
public class VersionInfo {
  ...
  protected VersionInfo(String component) {
    info = new Properties();
    String versionInfoFile = component + "-version-info.properties";
    InputStream is = null;
    try {
      is = Thread.currentThread().getContextClassLoader()
        .getResourceAsStream(versionInfoFile);
      if (is == null) {
        throw new IOException("Resource not found");
      }
      info.load(is);
    } catch (IOException ex) {
      LogFactory.getLog(getClass()).warn("Could not read '" +
          versionInfoFile + "', " + ex.toString(), ex);
    } finally {
      IOUtils.closeStream(is);
    }
  }
  ...
}
If your tool attempts to connect to the data source from a separate thread whose context class loader cannot see this properties file, it will generate this error.
The easiest way to work around the issue is to put the hadoop-common.jar library in $JAVA_HOME/lib/ext, or to use the command-line setting -Djava.endorsed.dirs to point to the hadoop-common.jar library. Then the thread's class loader will always be able to find this information.
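If you control the connecting code, another possible workaround (a sketch, assuming hadoop-common.jar is already on the application classpath; hiveUrl, user, and password are placeholder variables) is to point the connecting thread's context class loader at the one that loaded the Hadoop classes before opening the connection:
import java.sql.Connection;
import java.sql.DriverManager;
// Assumption: hadoop-common.jar is on the application classpath, so the class
// loader that loaded VersionInfo can also see the *-version-info.properties resource.
Thread.currentThread().setContextClassLoader(
    org.apache.hadoop.util.VersionInfo.class.getClassLoader());
Connection conn = DriverManager.getConnection(hiveUrl, user, password);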