I'm currently trying to test a JDBC connection to SAP HANA with AWS Glue.
Testing the connection results in the following error message:
com.amazonaws.glue.jobexecutor.commands.exception.CommandExecutorException: java.lang.IllegalArgumentException: No enum constant com.amazonaws.glue.jobexecutor.commands.jdbc.SupportedDriver.SAP
My JDBC URL looks like this: jdbc:sap://ip:port/?databaseName=tdb
After checking the JDBC connection properties in the AWS developer guide, it looks like the protocol required for HANA is not supported right now.
So is it the case that you currently can't connect to an SAP HANA database with AWS Glue, or am I missing something in my connection configuration?
The HANA JDBC driver does not seem to be supported at the moment. AWS describes a workaround via S3 that, depending on your requirements, might be enough for you:
https://aws.amazon.com/de/blogs/awsforsap/extracting-data-from-sap-hana-using-aws-glue-and-jdbc/
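For illustration, here is a minimal sketch of the Spark JDBC read at the core of that workaround. Glue ETL scripts are normally written in Python or Scala; Spark's Java API is used here only to show the same options. It assumes the SAP HANA driver jar (ngdbc.jar) is supplied to the job, and the table and bucket names are hypothetical placeholders.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class HanaToS3Sketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("hana-extract")
                .getOrCreate();

        // Read from HANA through the plain Spark JDBC source instead of a
        // Glue connection, which is where the SupportedDriver check fails.
        Dataset<Row> df = spark.read()
                .format("jdbc")
                .option("url", "jdbc:sap://ip:port/?databaseName=tdb")
                .option("driver", "com.sap.db.jdbc.Driver")
                .option("dbtable", "MY_SCHEMA.MY_TABLE") // hypothetical table
                .option("user", "username")
                .option("password", "****")
                .load();

        // Land the extract in S3 as Parquet.
        df.write().parquet("s3://my-bucket/hana-extract/"); // hypothetical bucket
    }
}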
Related
Trying to set up a JDBC connection to reverse engineer tables into Hibernate mappings using JBoss Tools. I need a JDBC connection for this, but Window -> Preferences -> Data Management -> Connectivity -> Driver Definitions only shows "Generic JDBC Driver" as a connection option. I need a SQL Server driver option.
Connection to Oracle 12c from a standalone java application succeeds when ojdbc6.jar or ojdbc5.jar is used.
Connection string: jdbc:oracle:thin:@serverName:port:sid
Whereas the same connection string fails when connecting through WebSphere with the following exception.
java.sql.SQLException: ORA-28040: No matching authentication protocol
DSRA0010E: SQL State = 99999, Error Code = 28,040
Note: tried ojdbc8.jar and ojdbc6.jar.
The ORA-28040: No matching authentication protocol error generally indicates that you are using an older JDBC driver with a newer database. You should either update your JDBC driver so that it is the same version as the database or update your sqlnet.ora file with the appropriate SQLNET.ALLOWED_LOGON_VERSION_SERVER/SQLNET.ALLOWED_LOGON_VERSION_CLIENT values. See Oracle's SQLNET documentation for more information.
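As a rough illustration (the right values depend on your client and server versions; treat these as placeholders, not a recommendation), the relevant sqlnet.ora entries on the database server look like this:

# sqlnet.ora on the database server
# Allow logons from clients using 11g-style password verifiers and up
SQLNET.ALLOWED_LOGON_VERSION_SERVER=11
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=11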
Note that if you think you are using the same JDBC driver version as the database, it is possible that a different JDBC driver is being picked up in the WebSphere environment (the sketch after this list shows one way to check). If that is the case:
Check that there are no additional JDBC drivers packaged with your application.
Check if there are other Oracle JDBC providers configured in WebSphere using an older JDBC driver. If so, either modify your configuration so all of your providers use the same Oracle JDBC driver version, or isolate your JDBC providers.
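As a minimal sketch of how to check which driver actually gets loaded, you can ask the connection's metadata at runtime. This assumes a WebSphere data source looked up via JNDI; the JNDI name is hypothetical, and the code should run inside the application (e.g. from a test servlet) so it uses the same classloader as your data source.

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DriverVersionCheck {
    // Call this from inside the application server, e.g. a test servlet.
    public static void printDriverInfo() throws Exception {
        DataSource ds = (DataSource) new InitialContext()
                .lookup("jdbc/myOracleDS"); // hypothetical JNDI name
        try (Connection conn = ds.getConnection()) {
            DatabaseMetaData md = conn.getMetaData();
            // If this prints an older version than you expect, a different
            // driver jar is being picked up by WebSphere.
            System.out.println("Driver:   " + md.getDriverName()
                    + " " + md.getDriverVersion());
            System.out.println("Database: " + md.getDatabaseProductVersion());
        }
    }
}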
I'm struggling to get Confluent's kafka connector to connect to DB2.
I am running an Ubuntu instance inside Docker for testing purposes. The solution needs to be deployed to Kubernetes, so Docker it is.
I have installed the Confluent Platform using apt-get after adding their repos. All services are running: Kafka, ZooKeeper, Schema Registry, and Kafka REST.
I have created my kafka connect properties file as described in this article: https://www.progress.com/blogs/build-an-etl-pipeline-with-kafka-connect-via-jdbc-connectors
I assumed that this would work the same way for DB2. The step I'm missing from the above tutorial is this one:
java -jar PROGRESS_DATADIRECT_JDBC_POSTGRESQL_ALL.jar
I tried to run it like this:
java -jar /usr/share/java/kafka-connect-jdbc/db2jcc.jar
I get this error:
no main manifest attribute, in /usr/share/java/kafka-connect-jdbc/db2jcc.jar
I proceeded anyway, but of course I get an error:
No suitable driver found for jdbc:datadirect:db2://db2-server:50000;User=db2admin;Password=pwd;Database=test_db
This is my command to start the connector:
/usr/bin/connect-standalone /etc/kafka/connect-standalone.properties /etc/kafka-connect-jdbc/db2.properties
This is my properties file:
name=test-db2-jdbc
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:datadirect:db2://db2-server:50000;User=db2admin;Password=pwd;Database=test_db
mode=timestamp+incrementing
incrementing.column.name=id
timestamp.column.name=modified_time
topic.prefix=test_jdbc_
table.whitelist=data_log
I am sure I'm close. I just need the DB2 driver to register inside Java, or for Kafka Connect to pick it up and be able to use it.
I have tried other values for connector.class, but if I change it to the driver class name, as I would in other Java apps, I get this error:
java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Class com.ibm.db2.jcc.DB2Jcc does not implement Connector
Any help or suggestions will be appreciated.
I am the author of the tutorial you mentioned. I just noticed this thread, and I see that you are using the IBM-supplied DB2 driver (db2jcc.jar) with the DataDirect DB2 connection string (jdbc:datadirect:db2://db2-server:50000;User=db2admin;Password=pwd;Database=test_db). The jdbc:datadirect: URL is only understood by the DataDirect driver, which is why, as soon as you changed the connection string to the IBM driver's format, you were able to connect properly.
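For reference, here is a sketch of how the properties file might look with the IBM driver's URL format (jdbc:db2://host:port/database); connection.user and connection.password are standard Confluent JDBC source connector properties, and the host, credentials, and column names are carried over from the question. Incidentally, the earlier "no main manifest attribute" error just means db2jcc.jar is a plain library jar with no Main-Class, so there is nothing for java -jar to run; the jar only needs to sit on the connector's classpath (e.g. in /usr/share/java/kafka-connect-jdbc/).

name=test-db2-jdbc
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
# IBM driver URL format: jdbc:db2://host:port/database
connection.url=jdbc:db2://db2-server:50000/test_db
connection.user=db2admin
connection.password=pwd
mode=timestamp+incrementing
incrementing.column.name=id
timestamp.column.name=modified_time
topic.prefix=test_jdbc_
table.whitelist=data_log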
I am using logstash to create a pipeline from Postgres to CockroachDB. Below is the config.
The input plugin (source is Postgres) is working fine, but I am unable to establish a connection in the output plugin (CockroachDB) using JDBC. I am facing the error below.
JDBC - Connection is not valid. Please check connection string or that your JDBC endpoint is available. {:level=>:error, :file=>"logstash/outputs/jdbc.rb", :line=>"154", :method=>"setup_and_test_pool!"}
The destination (CockroachDB) is open for connections at the specified IP and port.
As the CockroachDB JDBC connection string is very similar to Postgres, I tried the connection strings below, and still got the same error.
jdbc:postgresql://host/database
jdbc:postgresql://host/database?sslmode=disable
jdbc:postgresql://host:port/database
jdbc:postgresql://host:port/database?sslmode=disable
How do I connect to CockroachDB through JDBC from the Logstash output plugin?
Your JDBC connection strings are OK.
Do not forget that with JDBC the driver must be registered beforehand. You can do this either with Class.forName("org.postgresql.Driver") before your first JDBC call, or by invoking java.sql.DriverManager.registerDriver(new org.postgresql.Driver()); before you create your connection. Perhaps you forgot to register the driver?
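A minimal sketch in plain Java (host, database, and credentials are placeholders; with JDBC 4.0+ drivers registration usually happens automatically via the service loader, so explicit registration mainly matters for older drivers or unusual classloader setups):

import java.sql.Connection;
import java.sql.DriverManager;

public class CockroachConnectSketch {
    public static void main(String[] args) throws Exception {
        // Explicitly register the Postgres driver (a no-op if the
        // JDBC 4 service loader already picked it up).
        Class.forName("org.postgresql.Driver");

        // CockroachDB speaks the Postgres wire protocol, so the standard
        // Postgres JDBC URL applies; 26257 is CockroachDB's default port.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://host:26257/database?sslmode=disable",
                "user", "password")) {
            // The same validity check the Logstash output plugin performs.
            System.out.println("Connection valid: " + conn.isValid(5));
        }
    }
}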
For posterity, this should be working now. The problem was that JDBC's isValid() method failed because CockroachDB could not prepare empty statements, which has since been fixed in CockroachDB.
I have a database username and password to access an Oracle DB, and I also have a service URL like https://X-X.X.X.oraclecloudapps.com/apex/.
Does anybody know how to connect to this DB using a JDBC connection?
I tried using the Oracle thin driver, but somehow it fails.
Sample Java code:
Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@//X.X.X.X.oraclecloudapps.com:1521/sid", "username", "****");
It throws
Exception in thread "main" java.sql.SQLRecoverableException: Io exception: The Network Adapter could not establish the connection
I don't know the SID here; it would be helpful if anybody could give steps to find the SID/service name from the Oracle Cloud dashboard.
You can't use JDBC to connect to the Database Schema Service. You can only connect using tools that utilize the REST API. For data upload to the Oracle Database Schema Service, use Oracle SQL Developer, the Oracle Application Express SQL Workshop Data Upload utility, or the Oracle Application Express Data Load utility. Read more here: http://docs.oracle.com/cloud/latest/dbcs_schema/CSDBU/GUID-3B14CF7A-637B-4019-AAA7-A6DC5FF3D2AE.htm#CSDBU177
There are only three ways to connect to the Database Schema Service:
From an Oracle Application Express application running in Database Schema Service
From a Java application running in an Oracle Java Cloud Service
Through RESTful Web services
Try the following JDBC URL to resolve the issue:
jdbc:oracle:thin:@host-address:1521/service_name
Note that / (service name syntax) is used after the port, not : (SID syntax).
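For clarity, a small sketch showing both URL forms side by side (host, service name, and SID are hypothetical placeholders; as noted above, this applies to Oracle databases reachable over SQL*Net, not to the Database Schema Service):

import java.sql.Connection;
import java.sql.DriverManager;

public class OracleUrlForms {
    public static void main(String[] args) throws Exception {
        // Service name form: '/' after the port.
        try (Connection byService = DriverManager.getConnection(
                "jdbc:oracle:thin:@//db-host:1521/my_service",
                "username", "****")) {
            System.out.println("Connected via service name");
        }

        // SID form: ':' after the port.
        try (Connection bySid = DriverManager.getConnection(
                "jdbc:oracle:thin:@db-host:1521:MYSID",
                "username", "****")) {
            System.out.println("Connected via SID");
        }
    }
}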