I was attempting to use StreamSets to query a MySQL database and publish the data into Elasticsearch (localhost).
I downloaded the StreamSets tarball on my Mac and unzipped it into my home directory. Running streamsets dc, it started up on my first try. I then followed the instructions here to add the JDBC driver, and the instructions here to configure my StreamSets job. However, I got an error:
JDBC_00 - Cannot connect to specified database:
com.streamsets.pipeline.api.StageException: JDBC_06 - Failed to
initialize connection pool:
com.zaxxer.hikari.pool.PoolInitializationException: Exception during
pool initialization: Connection.isValid() is not supported, configure
connection test query.
Are you using an old MySQL JDBC driver (pre-JDBC 4.0)?
Based on the error, you need to go to the Legacy configuration tab and specify a test query yourself, such as SELECT USER() or SELECT 1, so that connections can be validated.
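For intuition, here is a rough sketch of what a connection pool does when validating connections. This is not StreamSets or HikariCP source code, just an illustration: the pool prefers JDBC 4.0's Connection.isValid(), and when a pre-4.0 driver cannot answer that call, it needs a configured test query to fall back on. The "legacy" connection below is a hypothetical stand-in built with a dynamic proxy purely to simulate an old driver.

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ConnectionValidation {

    // Roughly what a pool such as HikariCP does when validating a pooled
    // connection: prefer JDBC 4.0's Connection.isValid(), and fall back to
    // a configured test query (e.g. "SELECT 1") if the driver can't do that.
    static boolean validate(Connection conn, String testQuery) {
        try {
            return conn.isValid(5); // JDBC 4.0 path
        } catch (SQLException | AbstractMethodError unsupported) {
            if (testQuery == null) {
                // This is the situation behind the JDBC_06 error:
                // no isValid() support and no test query configured.
                return false;
            }
            try (Statement st = conn.createStatement()) {
                st.execute(testQuery); // legacy path: run the test query
                return true;
            } catch (SQLException e) {
                return false;
            }
        }
    }

    public static void main(String[] args) {
        // Hypothetical stand-in for a pre-JDBC-4.0 connection: isValid()
        // fails, but plain statements still work.
        Connection legacy = (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[] { Connection.class },
            (proxy, method, margs) -> {
                if (method.getName().equals("isValid")) {
                    throw new SQLException("Connection.isValid() is not supported");
                }
                if (method.getName().equals("createStatement")) {
                    return Proxy.newProxyInstance(
                        Statement.class.getClassLoader(),
                        new Class<?>[] { Statement.class },
                        (p, m, a) -> m.getName().equals("execute") ? true : null);
                }
                return null;
            });

        System.out.println(validate(legacy, null));       // no test query -> cannot validate
        System.out.println(validate(legacy, "SELECT 1")); // test query -> validates fine
    }
}
```

Setting the test query in the Legacy tab corresponds to supplying that fallback, which is why it resolves the error.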
I'm new to OBIEE. I have version 12.2.1.4 installed on a Linux server. I installed the client tools on my Windows PC. Using the Administration tool I created a simple RPD which uses only two tables. For this I had to first create an ODBC DSN to connect to my DB/2 database.
Next, I uploaded the RPD to the OBIEE server using the datamodel cli tool. When I go to the http://hostname:9502/analytics page and select to create a new analysis, it shows me the name of the repository and the two tables. I selected a couple of columns and clicked on the Results tab.
At this point, I get an error message: ODBC error state: IM002 code: 0 message: [DataDirect][ODBC lib] Data source name not found and no default driver specified
I had used the EM console to create a JNDI connection to DB/2. But from the message, it seems that it is trying to use the ODBC connection that was used when creating the RPD on my PC.
How do I change the connection that the server is using?
The server needs to be able to reach the data source. The EM JNDI connections have nothing to do with it; it is the server OS itself, not the application, that has to be able to reach the source.
You need to update your ODBC settings on the Linux server: https://support.oracle.com/epmos/faces/DocContentDisplay?id=2570997.1
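As a reference point, a DSN on Linux is typically defined in an odbc.ini file. The sketch below is entirely hypothetical: the DSN name, driver path, and key names are placeholders, and the exact keys depend on the DataDirect driver bundled with your OBIEE installation (see the support note above). The essential point is that the DSN name on the server must match the one referenced by the RPD connection pool.

```ini
; Hypothetical odbc.ini entry on the OBIEE Linux server.
; DSN name must match the one used in the RPD connection pool.
[MYDB2DSN]
Driver=/path/to/datadirect/lib/db2-driver.so
HostName=db2host.example.com
PortNumber=50000
Database=MYDB
```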
I want to use the JDBC connector on Confluent. It doesn't work when I start Connect with the Confluent CLI:
confluent local start connect
and it gives this error:
Caused by: java.sql.SQLException: No suitable driver found for jdbc:oracle:thin:#10.10.10.10:1954/MYSERVICE
If I stop Connect and manually start connect-distributed or standalone, it gives the same error:
./bin/connect-distributed etc/schema-registry/connect-avro-distributed.properties
but when I set CLASSPATH, the above command works fine and transfers data to Oracle:
export CLASSPATH=/home/my_confluent/confluent-5.4.1/share/java/kafka-connect-jdbc/ojdbc6.jar
But I still cannot do the same thing with the Connect service. When I bring up Confluent Connect with
confluent local start connect
it gives the same error.
The Confluent CLI uses Golang scripts under the hood, which may explain why exporting Java-specific variables doesn't always carry through. That said, if you export CLASSPATH=/any/path/to/jdbc-drivers/*.jar and then run any process in the same terminal session, it should inherit that variable.
confluent local start connect internally calls something like exec.Command("connect-distributed"), which launches a Java process through kafka-run-class.sh, and that process does inherit the CLASSPATH variable.
The JDBC driver needs to be within the same folder as the JDBC sink connector.
So if your JDBC sink connector (kafka-connect-jdbc-5.4.1.jar) is in etc/kafka-connect-jdbc then put ojdbc6.jar in that folder.
Edit: after placing the JDBC driver there, you must restart the Kafka Connect worker.
📹 I've written up and recorded details of this whole process here: https://rmoff.dev/fix-jdbc-driver
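For context, "No suitable driver" just means that java.sql.DriverManager has no registered driver whose acceptsURL() matches the JDBC URL, which is exactly what happens when the Oracle driver jar is missing from the worker's classpath. A minimal reproduction on a bare JVM with no Oracle driver installed (the URL below uses @, the usual thin-driver syntax; host and service name are illustrative):

```java
import java.sql.DriverManager;
import java.sql.SQLException;

public class NoSuitableDriver {
    public static void main(String[] args) {
        // With no Oracle driver jar on the classpath, DriverManager has no
        // registered driver that accepts this URL, so it fails immediately
        // without even attempting a network connection.
        try {
            DriverManager.getConnection("jdbc:oracle:thin:@10.10.10.10:1954/MYSERVICE");
        } catch (SQLException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Once ojdbc6.jar is on the worker's classpath, the driver registers itself via the service-loader mechanism and the same URL is matched.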
Finally, I found the solution after many tries.
I copied the ojdbc6.jar file into the /home/ersin/confluent-5.4.1/share/java/kafka/ folder, restarted the Connect service, and it works like a charm.
I am new to Pentaho and big data. Every time I try to connect my Windows Pentaho installation to HDFS on my Linux-based virtual machine, this error pops up. I've tried a couple of solutions but haven't had any luck with them. I would really appreciate it if any of you could come up with a solution.
Thanks in advance!
Error connecting to database [hadoop] :org.pentaho.di.core.exception.KettleDatabaseException:
Error occurred while trying to connect to the database
Error connecting to database: (using class org.apache.hadoop.hive.jdbc.HiveDriver)
No suitable driver found for jdbc:hive://(virtual machine's ip address):10000/test
You must have the Hive JDBC driver on your classpath. It can be included by extending your CLASSPATH to point at the Hive JDBC jar:
set CLASSPATH=%CLASSPATH%;%HIVE_HOME%\lib\hive-jdbc-1.1.0-cdh5.10.1.jar
You should be through if there is no other error!
If you are using a Java application, you can use the following to obtain the connection object:
Connection con = DriverManager.getConnection("jdbc:hive2://172.16.149.158:10000/default", "hive", "");
where 172.16.149.158 is the Hive server address and 10000 is the default Hive port.
Do check whether the connection is successful using the telnet command:
$ telnet 'hive-server' 'hive-port'
It should connect successfully.
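If telnet isn't available on your machine, the same reachability check can be done from Java with a plain socket. A minimal sketch (the host/port values are placeholders; 10000 is HiveServer2's default port, and the demo below uses a throwaway local listener rather than a real Hive server):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class PortCheck {

    // Equivalent of `telnet host port`: true if a TCP connection can be
    // established within timeoutMs milliseconds, false otherwise.
    static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo against a throwaway local listener, since a real Hive
        // server may not be running on this machine.
        try (ServerSocket server = new ServerSocket(0)) {
            System.out.println(canConnect("127.0.0.1", server.getLocalPort(), 1000)); // open port
        }
        System.out.println(canConnect("127.0.0.1", 1, 500)); // port 1 is almost certainly closed
    }
}
```

If this check fails against your Hive host, the problem is network reachability (firewall, wrong IP, HiveServer2 not running), not the JDBC driver.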
You can also use the Pentaho wizard to connect with hive db. Link from Pentaho wiki : http://wiki.pentaho.com/display/BAD/Create+Hive+Database+Connection
I am trying to make a connection to the Cloudera VM 5.10 (CDH 5.10) Hue Hive interface sample database, to test the right driver, using the DbVisualizer tool (https://www.dbvis.com/download).
I checked that CDH 5.10 has Hive version 1.1; I got this information from
I tested two drivers
1) Hive JDBC 1.1.0 standalone from
https://repo1.maven.org/maven2/org/apache/hive/hive-jdbc/1.1.0/
2) "hive-jdbc-1.1.0-cdh5.10.0-standalone.jar" directly getting from cdh VM.
The connection URL I am giving is:
jdbc:hive://http://127.0.0.1:10000
When I connect I get this error message
"An error occurred while establishing the connection:
The selected Driver cannot handle the specified Database URL.
The most common reason for this error is that the database URL
contains a syntax error preventing the driver from accepting it.
The error also occurs when trying to connect to a database
with the wrong driver. Correct this and try again."
Please let me know if I am doing something wrong here.
I am running Neo4j 2.1.6, and tried Neo4j 2.2.0 as well.
I cannot connect to it with DbVisualizer 9.1.13.
And I cannot find ANY clear step-by-step explanation of how to do it.
First, I got the JDBC Neo4j-2.0.1-SNAPSHOT binary here.
I can run my just-installed Neo4j instance from the browser at localhost:7474,
but I don't know what the REST API is all about or whether it is turned on by default.
I can run Neo4j 2.2.0 the same way; it comes with a new user-authorization feature, and I am not sure whether that JDBC driver is compatible with it. My user:pass is neo4j:neo.
So in DbVisualizer I clicked Tools -> Driver Manager and filled it out like this:
My connection properties are as follows:
I've got the error on connect:
Product: DbVisualizer Pro 9.1.13
Build: #2310 (2015-01-11 11:26:27)
Java Version: 1.8.0_25
OS Name: Windows Server 2012 R2
An error occurred while establishing the connection:
The selected Driver cannot handle the specified Database URL.
The most common reason for this error is that the database URL
contains a syntax error preventing the driver from accepting it.
The error also occurs when trying to connect to a database
with the wrong driver.
If you look at the documentation for the JDBC driver, you will see that the database URL is:
jdbc:neo4j://localhost:7474/
Please try to make it work with 2.1.6 first.
For the 2.2 auth, you have to use the token you got back as the password.