Can JDBC be used with Cassandra?

I have generic functions in my project that handle each database type (MySQL, Oracle, and others) using a JDBC Connection and Statement.
However, when I try to use them with Cassandra they don't work, so I created a separate function just for Cassandra that connects via Cluster.
Is there a way to use the following, instead of Cluster, without getting exceptions?
Connection conn = DriverManager.getConnection("jdbc:cassandra://TheIp:9042", userName, passWord);
Statement stmt = conn.createStatement();
The dependencies I have in my Maven POM are:
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>3.11.0</version>
</dependency>
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-mapping</artifactId>
    <version>3.11.0</version>
</dependency>

It is possible to connect via JDBC using the Simba JDBC Driver for Apache Cassandra provided by DataStax for free. You can download the driver and the manual from DataStax Downloads.
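For illustration, here is a minimal sketch of what a plain JDBC connection could look like with that driver. The driver class name below follows the Simba documentation conventions and may differ in your driver version, so treat it as an assumption and check the manual:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SimbaCassandraExample {
    public static void main(String[] args) throws Exception {
        // Assumed driver class name for the Simba Cassandra JDBC driver;
        // verify it against the manual shipped with your driver version.
        Class.forName("com.simba.cassandra.jdbc42.Driver");

        // Plain JDBC URL, user name, and password, as in the question.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:cassandra://TheIp:9042", "userName", "passWord");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT release_version FROM system.local")) {
            while (rs.next()) {
                System.out.println(rs.getString("release_version"));
            }
        }
    }
}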
However, if you're already developing your app in Java, have a look at the Cassandra Java driver. Here's some sample code that shows how to connect to and read from a Cassandra cluster:
import java.net.InetSocketAddress;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.*;

public class ConnectDatabase {
    public static void main(String[] args) {
        // Create the CqlSession object:
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("1.2.3.4", 9042))
                .withLocalDatacenter("DC1")
                .build()) {
            // Select the release_version from the system.local table:
            ResultSet rs = session.execute("select release_version from system.local");
            Row row = rs.one();
            // Print the result of the CQL query to the console:
            if (row != null) {
                System.out.println(row.getString("release_version"));
            } else {
                System.out.println("An error occurred.");
            }
        }
        System.exit(0);
    }
}
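Note that this sample uses the newer DataStax Java driver 4.x (the com.datastax.oss packages) rather than the 3.x cassandra-driver-core artifact in the question's POM. A matching dependency would look roughly like this (the version is only an example):
<dependency>
    <groupId>com.datastax.oss</groupId>
    <artifactId>java-driver-core</artifactId>
    <version>4.17.0</version>
</dependency>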
Cheers!

Related

How to fix mysql jdbc error: java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/test

I have been looking for a solution to my problem, but I haven't gotten anywhere. I am using Visual Studio Code to connect Java (JDK 19) to MySQL 8.0, and I installed mysql-connector-j v8.0.31 (I am on Windows).
I am trying to connect my database to Java and I keep getting this error:
java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/test
This is my code:
import java.sql.*;
import java.util.Properties;

public class App {
    public static void main(String[] args) throws Exception {
        Connection dbConnection = null;
        try {
            String url = "jdbc:mysql://localhost:3306/test";
            Properties info = new Properties();
            info.put("user", "root");
            info.put("password", "test");
            dbConnection = DriverManager.getConnection(url, info);
            if (dbConnection != null) {
                System.out.println("Successfully connected to MySQL database test");
            }
        } catch (SQLException e) {
            System.out.println("An error occurred while connecting to the MySQL database");
            e.printStackTrace();
        }
    }
}
I also added the mysql-connector-j-8.0.31 jar file to the referenced libraries.
I also added:
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.31</version>
</dependency>
to pom.xml
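For what it's worth, a minimal diagnostic sketch (my addition, not part of the original post): "No suitable driver found" usually means the connector jar is not on the runtime classpath, and loading the driver class explicitly makes that visible:
public class DriverCheck {
    public static void main(String[] args) throws Exception {
        // If this throws ClassNotFoundException, mysql-connector-j is not
        // on the runtime classpath, which would explain the JDBC error.
        Class.forName("com.mysql.cj.jdbc.Driver");
        System.out.println("MySQL driver class found on the classpath.");
    }
}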

How to run Drill on a system where it is not installed, using a jar file?

I'm building a program that uses Apache Drill 1.8.
I'm trying to run this program on an HDFS cluster where Drill is not installed.
My idea is to use a jar file: a jar that contains Drill should be able to run the program, since it runs in a virtual machine.
But I'm not confident about this approach. Can it work?
If it works, how do I bundle Drill into the jar file?
If not, what is the right way?
An additional question: how do I change the storage configuration using Java code?
It does not matter whether Drill and HDFS are running on the same machine or not.
Why do you need to create a jar?
If you are using Maven as your build tool, add the Drill JDBC driver dependency:
<dependency>
    <groupId>org.apache.drill.exec</groupId>
    <artifactId>drill-jdbc</artifactId>
    <version>1.8.0</version>
</dependency>
Sample code:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.apache.drill.jdbc.Driver;

public class TestJDBC {
    // JDBC URI pointing at the host on which the Drillbit is running
    public static final String DRILL_JDBC_LOCAL_URI = "jdbc:drill:drillbit=192.xxx.xxx.xxx";
    public static final String JDBC_DRIVER = "org.apache.drill.jdbc.Driver";

    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(JDBC_DRIVER);
        } catch (ClassNotFoundException ce) {
            ce.printStackTrace();
        }
        try (Connection conn = new Driver().connect(DRILL_JDBC_LOCAL_URI, null);
             Statement stmt = conn.createStatement()) {
            String sql = "select employee_id,first_name,last_name from cp.`employee.json` limit 10";
            ResultSet rs = stmt.executeQuery(sql);
            while (rs.next()) {
                System.out.print(rs.getInt("employee_id") + "\t");
                System.out.print(rs.getString("first_name") + "\t");
                System.out.print(rs.getString("last_name") + "\t");
                System.out.println();
            }
            rs.close();
        } catch (SQLException se) {
            se.printStackTrace();
        }
    }
}
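The storage-configuration sub-question is not covered by the answer above. One option (an assumption on my part, not from the original answer) is Drill's storage plugin REST endpoint, which accepts the plugin definition as JSON via HTTP POST; the host, plugin name, and configuration below are placeholders:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class UpdateDrillStorage {
    public static void main(String[] args) throws Exception {
        // Hypothetical Drillbit address; 8047 is Drill's default web port.
        URL url = new URL("http://192.xxx.xxx.xxx:8047/storage/dfs.json");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // Placeholder plugin configuration; the actual JSON depends on your setup.
        String config = "{\"name\":\"dfs\",\"config\":{\"type\":\"file\",\"enabled\":true,"
                + "\"connection\":\"hdfs://namenode:8020/\"}}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(config.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP response code: " + conn.getResponseCode());
    }
}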

Maven, can't see my h2 class

I have a problem with my university project.
I'm trying to connect to my H2 database but I'm failing at it.
I included it in my Maven dependencies but it still doesn't work.
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.186</version>
</dependency>
I only get a ClassNotFoundException.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class main {
    public static void main(String[] a) throws ClassNotFoundException {
        Class.forName("org.h2.Driver");
        try (Connection conn = DriverManager.getConnection("jdbc:h2:~/test");
             Statement stat = conn.createStatement()) {
            stat.execute("create table test(id int primary key, name varchar(255))");
            stat.execute("insert into test values(1, 'Hello')");
            try (ResultSet rs = stat.executeQuery("select * from test")) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Any suggestions?
This is failing because the H2 driver classes are not on the classpath at runtime. You should bundle them with your app's jar file if they are not otherwise made part of the app's classpath.
Use the maven-shade-plugin or the maven-jar-plugin to bundle the dependencies with your app's jar (in this case it is called an uber jar) so as to make it a single independent runnable jar, as in the sketch below.
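A minimal sketch of a maven-shade-plugin configuration that produces such an uber jar (the plugin version and main class are placeholders):
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.4.1</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <transformers>
                            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                <mainClass>main</mainClass>
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>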

Integrating Spark SQL and Apache Drill through JDBC

I would like to create a Spark SQL DataFrame from the results of a query performed over CSV data (on HDFS) with Apache Drill. I successfully configured Spark SQL to make it connect to Drill via JDBC:
Map<String, String> connectionOptions = new HashMap<String, String>();
connectionOptions.put("url", args[0]);
connectionOptions.put("dbtable", args[1]);
connectionOptions.put("driver", "org.apache.drill.jdbc.Driver");
DataFrame logs = sqlc.read().format("jdbc").options(connectionOptions).load();
Spark SQL performs two queries: the first one to get the schema, and the second one to retrieve the actual data:
SELECT * FROM (SELECT * FROM dfs.output.`my_view`) WHERE 1=0
SELECT "field1","field2","field3" FROM (SELECT * FROM dfs.output.`my_view`)
The first one is successful, but in the second one Spark encloses fields within double quotes, which is something that Drill doesn't support, so the query fails.
Did someone manage to get this integration working?
Thank you!
You can add a JDBC dialect for this and register the dialect before using the JDBC connector:
case object DrillDialect extends JdbcDialect {
  def canHandle(url: String): Boolean = url.startsWith("jdbc:drill:")

  override def quoteIdentifier(colName: java.lang.String): java.lang.String = {
    return colName
  }

  def instance = this
}

JdbcDialects.registerDialect(DrillDialect)
This is how the accepted answer code looks in Java:
import org.apache.spark.sql.jdbc.JdbcDialect;

public class DrillDialect extends JdbcDialect {
    @Override
    public String quoteIdentifier(String colName) {
        return colName;
    }

    @Override
    public boolean canHandle(String url) {
        return url.startsWith("jdbc:drill:");
    }
}
Before creating the Spark session, register the dialect:
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.jdbc.JdbcDialects;

public class DrillDialectExample {
    public static void main(String[] args) {
        JdbcDialects.registerDialect(new DrillDialect());
        SparkSession spark = SparkSession
                .builder()
                .appName("Drill Dialect")
                .getOrCreate();
        // More Spark code here..
        spark.stop();
    }
}
Tried and tested with Spark 2.3.2 and Drill 1.16.0. Hope it helps you too!

Hortonworks hiveserver2 jdbc error

I installed Hortonworks and tried to access HiveServer2 via JDBC, but I got this error:
ERROR : unsupported hive2 protocol
Code:
private static String driverName = "org.apache.hive.jdbc.HiveDriver";

try {
    Class.forName(driverName);
} catch (ClassNotFoundException e) {
    e.printStackTrace();
    System.out.println("error");
    System.exit(1);
}
java.sql.Connection con = DriverManager.getConnection("jdbc:hive2://192.168.0.96:10000/db", "id", "pwd");
Program versions:
Hadoop - 2.2.0
Hive - 0.12.0
Is there any solution for this situation?
As referred to by @climbage, there is a mismatch between the protocol versions used by your client code and the Hive server.
Here is the specific Hive source code that is rejecting the request (in the HiveConnection source):
private void openSession(Map sessVars) throws SQLException {
    TOpenSessionReq openReq = new TOpenSessionReq();
    ..
    try {
        TOpenSessionResp openResp = client.OpenSession(openReq);
        // validate connection
        Utils.verifySuccess(openResp.getStatus());
        if (!supportedProtocols.contains(openResp.getServerProtocolVersion())) {
            throw new TException("Unsupported Hive2 protocol");
        }
My suggestion: look at the src/test code in the core-hive module of the specific version of Hive that is deployed on your servers. It will have JDBC tests that you can "lift" into your client code.
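In practice the fix is usually to align the client driver version with the server. A hedged example, assuming a Maven build against the Hive 0.12.0 server from the question:
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>0.12.0</version>
</dependency>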
Also, have you simply tried hive and not hive2?
Connection con = DriverManager.getConnection("jdbc:hive://192.168.56.101:10000/default", "root", "");
