I have set up Kafka and Debezium on an Ubuntu server and created a SQL Server connector, which works perfectly fine. I then created an Oracle connector, and I am getting this error: "Unable to connect: Failed to resolve Oracle database version"
I have strictly followed the Debezium Oracle connector documentation here:
https://debezium.io/documentation/reference/connectors/oracle.html
I have doubts about the following configuration attributes:
"database.server.name": is this the same as the host name?
"database.hostname": the host name of the server where the Oracle database is running (myserver.domain.com)
"database.user": a user with all the required permissions (except FLASHBACK ANY TABLE)
"database.out.server.name": is this required?
The connector configuration options you're asking about are described here. That said, I'll cover them below for completeness.
database.server.name
This acts as a logical, unique name for the specific Oracle database that is being captured. If you deploy multiple connectors, each connector should have a unique name, since this is used as a prefix for all Kafka topic names created by or associated with this connector deployment. Because it is used as part of the Kafka topic names, Kafka topic name restrictions apply, meaning it should contain only alphanumeric characters and underscores.
database.hostname
This should contain the IP address or hostname at which the Oracle database can be reached.
database.user
This is the name of the user that the connector will use to connect to and interact with the Oracle database. In the documentation, this is the user you created by following these steps.
database.out.server.name
This setting is only applicable if you intend to use the Oracle XStream adapter, which requires setting the database.connection.adapter=xstream in your connector configuration. If you aren't specifying this alternative adapter, the connector will use the native Oracle LogMiner database tool and this setting can safely be omitted.
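Putting these together, a minimal LogMiner-based connector configuration might look like the sketch below. The hostname, port, credentials, database name, and Kafka addresses are placeholders to adapt to your environment, and note that database.out.server.name is simply omitted since the XStream adapter is not used:

```json
{
  "name": "oracle-connector",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "database.server.name": "myserver_oracle",
    "database.hostname": "myserver.domain.com",
    "database.port": "1521",
    "database.user": "c##dbzuser",
    "database.password": "dbz",
    "database.dbname": "ORCLCDB",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.oracle"
  }
}
```

Here "myserver_oracle" satisfies the topic-name restriction (alphanumerics and underscores), while "database.hostname" carries the actual network address; the two serve different purposes and need not match.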
Using a working JDBC connect string to connect to a remote Oracle database via Oracle JDBC, like
jdbc:oracle:thin:@myremotehost:1521:mysid
and mapping it inside the SAP Connector with the nomenclature
jdbc:oracle:thin:@<virtual host name>:<virtual port>:mysid
with
<virtual host name> = oracle
<virtual port> = 61521
resulting in
jdbc:oracle:thin:@oracle:61521:mysid
will end up with the error
org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection: DataSource returned null from getConnection(): org.ops4j.pax.jdbc.impl.DriverDataSource@45d186af
To check whether the issue was related to a problem in the Oracle JDBC driver, we:
Installed SAP JDK 8 on the remote database machine
Used Maven to create a standalone Spring Boot/Spring JDBC/HikariCP/Oracle JDBC JAR reflecting the stack of the SAP Connector / SAP VM used with iFlows
Tested the JAR successfully with the connect string
jdbc:oracle:thin:@myremotehost:1521:mysid
Added to /etc/hosts on the remote database machine an entry like
oracle XXX.XXX.XXX.XXX
where XXX.XXX.XXX.XXX is the IP of the machine "myremotehost"
Tested the recreated JAR successfully with the new connect string
jdbc:oracle:thin:@oracle:1521:mysid
Replacing "oracle" as the virtual host name with "myoracle", like
jdbc:oracle:thin:@myoracle:61521:mysid
fixed the issue with the SAP Connector.
Based on our findings, we think the error is caused inside the SAP Connector by the Java String.replace() function, which replaces in the connect string
jdbc:oracle:thin:@oracle:61521:mysid
any occurrence of "oracle" (virtual host name) with the remote host name "myremotehost", and
any occurrence of "61521" (virtual port number) with the remote port "1521",
ending up with a connect string like
jdbc:myremotehost:thin:@myremotehost:1521:mysid
which is invalid, because the first occurrence of "oracle" in the original connect string is required to resolve the driver classes; hence the error above, since no connection could be established.
For the developers of the SAP Connector at SAP: a valid workaround in the Java code of the SAP Connector would be to use a regexp, or to restrict the replacement to the substring right of the at-sign "@" in the connect string, so that class resolution still works ;-)
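A minimal sketch of the suspected behavior and of the proposed workaround (the method names are ours for illustration, not SAP's actual code):

```java
public class ConnectStringFix {

    // Naive replacement over the whole string, as we suspect the
    // connector does it: this also rewrites the "oracle" that is part
    // of the driver prefix "jdbc:oracle:thin".
    static String naiveReplace(String url, String vHost, String vPort,
                               String realHost, String realPort) {
        return url.replace(vHost, realHost).replace(vPort, realPort);
    }

    // Proposed workaround: only replace in the part after the '@',
    // leaving the "jdbc:oracle:thin" prefix intact.
    static String anchoredReplace(String url, String vHost, String vPort,
                                  String realHost, String realPort) {
        int at = url.indexOf('@');
        String prefix = url.substring(0, at + 1);
        String rest = url.substring(at + 1);
        return prefix + rest.replace(vHost, realHost).replace(vPort, realPort);
    }

    public static void main(String[] args) {
        String url = "jdbc:oracle:thin:@oracle:61521:mysid";
        // Broken: jdbc:myremotehost:thin:@myremotehost:1521:mysid
        System.out.println(naiveReplace(url, "oracle", "61521", "myremotehost", "1521"));
        // Valid:  jdbc:oracle:thin:@myremotehost:1521:mysid
        System.out.println(anchoredReplace(url, "oracle", "61521", "myremotehost", "1521"));
    }
}
```

This reproduces exactly the invalid connect string we observed, and shows why choosing "myoracle" as the virtual host name avoids the problem: it no longer collides with the driver prefix.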
kind regards
Frank Scherie
Expectation:
SAP fixes the issue and creates a bug/note for the issue
I'm trying to install Informatica 10.1 on OCI and connect it to ADW for INFA user access.
I've successfully established connectivity with ADW through sqlplus using the wallet keys. However, during the Informatica installation I'm not able to connect to the ADW database. Below is an excerpt of the connection that is being tried by the installer.
Configure the database for the domain configuration repository:
Database type:
* 1->Oracle
2->SQLServer
3->DB2
4->Sybase
(Default: 1):
Database user ID: (default :- dbadmin) :
User password: (default :- ) :
Configure the database connection
1->JDBC URL
* 2->Custom JDBC Connection String
(Default: 2):
I'm unsure about the custom JDBC connection string that is being asked for. Usually the default string is something like this:
jdbc:informatica:oracle://somestringfromtnsnames.oraclecloud.com:1521;ServiceName=somestringfromtnsnames.adb.oraclecloud.com
But in this case I'm connecting to ADW via a wallet, so ideally the wallet information should be provided; I'm just not sure how. I've prepared a string accordingly, which I thought was correct, but it doesn't work.
jdbc:informatica:oracle:#tnsnamesalias?TNS_ADMIN=/path/to/my/wallet/store
Has anyone got any idea on this? Any pointers would be helpful.
From what I understand, the DataDirect JDBC drivers used by Informatica do not support Oracle's encryption, which is required to access ADW. It appears that you can use Oracle Client on an existing Informatica installation to add ADW as a target, but not using JDBC or ODBC. There appear to be limitations to this in terms of metadata access, and some import steps will need to be completed manually.
In spite of what it implies in "Autonomous Database 3rd Party Tools and Applications" for Informatica, the only way to complete a new installation - according to the steps in Appendix A of the doc - is to first disable the SQL*Net encryption. This requires a level of access to the Oracle configuration files and processes that does not exist for Autonomous Database services (i.e. access to sqlnet.ora and lsnrctl). It only exists if you are running your own VM host (Infrastructure as a Service) with a stand-alone installation of Oracle Database that you fully control.
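For context, disabling SQL*Net encryption on a self-managed database would normally mean editing sqlnet.ora on the server, with something like the illustrative fragment below; on Autonomous Database you have no access to this file, which is exactly why the documented workaround is unavailable there:

```
# sqlnet.ora (server side) -- illustrative; not possible on ADW
SQLNET.ENCRYPTION_SERVER = REJECTED
```

Valid values for this parameter on a self-managed database are ACCEPTED, REJECTED, REQUESTED, and REQUIRED; Autonomous Database forces encryption on and does not expose the setting.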
I have created an Autonomous Transaction Processing database in Oracle Cloud. There are no ready-for-use JDBC links around, but there are "wallets". There is an instance wallet and regional wallet. Oracle says one of them, preferably instance wallet, should be used to connect to this DB instance.
A wallet is a ZIP file with a dozen files inside. I've downloaded an instance wallet and unzipped it. Now I'm trying to connect DataGrip to this instance.
There is a TNS connection type in DataGrip, and there is the famous tnsnames.ora in the wallet, so I guess I should use them. The TNS connection type accepts a TNSADMIN parameter, which, I guess, is the directory of that wallet. tnsnames.ora from the wallet lists a few service names; AFAIU they differ by priority, e.g. one for low-priority queries, another for medium priority, and one for the highest priority. I'm OK with medium priority, so I did this:
As you see, I'm getting an error:
[08006][17002] IO Error: The Network Adapter could not establish the connection
SSO KeyStore not available.
I've googled around, but this topic seems complicated. Oracle has a lot of connection parameters, with certificates involved in the connection process, and I'm really new; I just want to connect to this instance. Why does it have to be so complicated? Can I use this wallet directly in DataGrip?
It turns out I had done everything correctly, and the only problem was actually the driver version.
As of today, 2021-02-02, the latest available Oracle driver version in DataGrip is 19.8.0.0:
To fix the issue I've just created another Oracle driver in DataGrip and manually provided the latest JARs:
Go to the Oracle Database 21c (21.1) JDBC Driver & UCP Downloads
Download the zipped JDBC driver and companion JARs corresponding to your Java version, 8 or 11. Or just download the version for Java 8 (ojdbc8-full.tar.gz); it should work with any modern Java.
Create a new subdirectory for your driver in DataGrip's drivers directory, something like ~/.config/JetBrains/DataGrip2020.3/jdbc-drivers/Oracle/21.1 on Linux.
Unzip the driver in that directory.
Configure new driver in DataGrip. Just clone the existing Oracle driver and replace the "Driver Files" with the ones from the ZIP.
Use this new driver to connect to the instance:
DataGrip 2021.1 provides Oracle JDBC Driver 21.1.0.0 with all required jar files.
Also, read the DataGrip article about connecting to Oracle using wallets.
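To rule out the IDE entirely, a quick way to test the wallet and driver is a small standalone Java program compiled against the same ojdbc JAR. This is a sketch: the service alias, wallet path, user, and password below are placeholders you must replace with your own values from tnsnames.ora and your ADMIN credentials:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class WalletConnectTest {

    // Build a thin-driver URL that points TNS_ADMIN at the unzipped wallet
    // directory; the alias must match an entry in the wallet's tnsnames.ora.
    static String walletUrl(String tnsAlias, String walletDir) {
        return "jdbc:oracle:thin:@" + tnsAlias + "?TNS_ADMIN=" + walletDir;
    }

    public static void main(String[] args) throws Exception {
        // Placeholder values -- replace with your own alias, path, user, password.
        String url = walletUrl("mydb_medium", "/path/to/unzipped/wallet");
        Properties props = new Properties();
        props.put("user", "ADMIN");
        props.put("password", "your-password");
        try (Connection conn = DriverManager.getConnection(url, props)) {
            System.out.println("Connected: "
                    + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}
```

If this program connects but DataGrip does not, the driver JARs DataGrip is using are the likely culprit, which is consistent with the version issue described above.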
I have an on-premises Oracle database and an Azure SQL Database, and I want to use a Copy Data activity to transfer data between them.
I have now created a Self Hosted IR for the Oracle database, and I am able to connect to it and preview data from Data Factory editor:
I have an Azure SQL Database that should receive the data; it is set up with the AutoResolveIntegrationRuntime, and its connection test succeeds. I am also able to preview data from this database:
When I try to run the Copy Data activity, I get the following error message:
ErrorCode=SqlFailedToConnect,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot connect to SQL Database: 'sqlsrv', Database: 'database', User: 'user'. Check the linked service configuration is correct, and make sure the SQL Database firewall allows the integration runtime to access.
Based on all the docs/tutorials I have read, this should not be failing. I have even tried opening up the SQL Server firewall rules to allow all IP addresses.
Any ideas what I might be doing wrong here?
Since the integration runtime is a bridge between on-premises and cloud, you need to check whether you can access both the on-premises database and the Azure SQL database from the VM on which the IR is installed.
Whenever either the source or the sink uses the self-hosted runtime, the VM hosting the IR must be able to reach both of them for the copy activity to succeed.
So the issue is not with the Azure SQL database itself but with the VM hosting the IR.
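As an illustrative sketch, connectivity from the IR host can be checked with standard tools; the server, database, and user names below are the placeholders from the error message, and the commands assume a Windows VM with sqlcmd installed:

```
# From the VM hosting the self-hosted IR (PowerShell):
Test-NetConnection sqlsrv.database.windows.net -Port 1433

# Or with sqlcmd, using the same credentials as the linked service:
sqlcmd -S sqlsrv.database.windows.net -d database -U user -P <password>
```

If the TCP test on port 1433 fails from that VM, the problem is network egress or firewall configuration on the IR host, not Data Factory itself.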
I am trying to get an Other Databases (JDBC) connection from Tableau to SAS using SAS's Integrated Object Model (sasiom JDBC), but I am running into this error:
Error:
Bad Connection: Tableau could not connect to the data source.
Trying to connect an http1.x server
Generic JDBC connection error
Trying to connect an http1.x server
Configuration Details
I believe my configuration is mostly correct so far, but I think Tableau is not identifying the correct driver class to use when making a JDBC connection to SAS.
At a high level here is what a JDBC connection to SAS looks like:
JDBC Connection String: jdbc:sasiom://companyserver.company.com:port
Driver class name: com.sas.rio.MVADriver
Driver jar files location for Tableau to access: C:\Program Files\Tableau\Drivers
In the extract below from the Tableau Desktop logs, the 'dialect' and 'class' being used are genericjdbc. I think I want the class to be com.sas.rio.MVADriver, the class name for sasiom, but I'm not certain.
{"attributes":{":protocol-customizations":"","class":"genericjdbc","dbname":"","dialect":"genericjdbc","jdbcproperties":"","jdbcurl":"jdbc:sasiom://companyserver.company.com:8591","password":"***","schema":"","username":"username","warehouse":""},"closed-protocols-count":"0","connection-limit":"16","group-id":"3","in-construction-count":"0","protocols-count":"0","this":"0x0000018511611140"}}
Properties file attempted without success
I've tried to add a properties file to force the class to be com.sas.rio.MVADriver in the hopes that I get a connection successful or at least a different error if anything else needs to change but no luck with a properties file.
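Roughly, the attempted properties file looked like the sketch below. This is illustrative only: the single key is a guess at a way to force the class, and it did not change the behavior:

```properties
# Attempt to force the JDBC driver class for the sasiom connection
driver=com.sas.rio.MVADriver
```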
If anyone has successfully created a custom JDBC connection in Tableau, please share some help or direction on how you got it working. What configuration steps am I missing?
Is there a way to verify that Tableau is using the correct driver class for the JDBC connection? I have not seen the expected class, com.sas.rio.MVADriver, referenced anywhere in the Tableau logs.
The answer to this question is that a Type 4 JDBC driver is needed, because it automatically registers the driver class name with the JDBC DriverManager. The public SAS .jar files contain a Type 2 JDBC driver, which requires specifying the JDBC driver class manually, and to my knowledge Tableau does not allow that. It appears this is not currently possible.