I'm starting to study Apache Kafka and Kafka Connect.
I'm trying to get data from a remote Oracle database where my user only has read privileges and can't list tables (I don't have permission to change that). Every query has to go through a dblink, but I couldn't find an option in the JDBC Connector to pass a dblink.
I can run the query if I put a specific query in the connector configuration, but I want to fetch a lot of tables, and specifying the query per connector would force me to create a lot of connectors.
Is there a way to pass the dblink in the connector configuration or in the JDBC URL?
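For what it's worth, the dblink can't go in the JDBC URL: in Oracle it is part of the SQL text itself (table@dblink). A hedged sketch of a JDBC Source Connector that embeds the dblink in its query (the connector class and property names are the Confluent JDBC connector's; the host, credentials, table, and dblink names are placeholders):

{
  "name": "oracle-dblink-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:oracle:thin:@//db-host:1521/SERVICE",
    "connection.user": "reader",
    "connection.password": "secret",
    "mode": "bulk",
    "query": "SELECT * FROM remote_table@my_dblink",
    "topic.prefix": "oracle-remote_table",
    "poll.interval.ms": "60000"
  }
}

Note that in query mode topic.prefix is used as the full topic name, and each query still needs its own connector, which is exactly the limitation described above.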
I have been given access to the HUE Hive platform by my client. I have also raised all the access requests for the database, and all of them have been approved. But I can't see any databases or tables in the Hive interface. Is there a procedure for connecting to a database, or should it appear in the Hive interface automatically?
I'm trying to learn about streaming services and am reading the Kafka docs:
https://kafka.apache.org/quickstart
https://kafka.apache.org/24/documentation/streams/quickstart
To take a simple example, I'm attempting to refactor a Spring web service GET request which accepts an ID parameter and returns a list of attributes associated with that ID. The DB backend is Oracle.
What is the approach for loading a single Oracle DB table so that it can be served by Kafka? The above docs don't cover this. Do I need to replicate the Oracle DB to a NoSQL DB such as MongoDB? (Why do we require Apache Kafka with NoSQL databases?)
Kafka is an event streaming platform. It is not a database. Instead of thinking about "loading a single Oracle DB table which can be served by Kafka", you need to think in terms of what events you are looking for that will trigger processing.
Change Data Capture (CDC) products like Oracle GoldenGate (there are other products too) will detect changes to rows and send a message into Kafka each time a row changes.
Alternatively, you could configure a Kafka JDBC Source Connector to execute a query and pull data into Kafka.
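For the single-table case in the question, a minimal sketch of such a connector (the property names are the Confluent JDBC Source Connector's; the table, key column, and connection details are assumptions):

{
  "name": "oracle-table-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:oracle:thin:@//db-host:1521/SERVICE",
    "connection.user": "app_user",
    "connection.password": "secret",
    "table.whitelist": "ATTRIBUTES",
    "mode": "incrementing",
    "incrementing.column.name": "ID",
    "topic.prefix": "oracle-"
  }
}

Each poll picks up rows whose ID is higher than the last one seen and writes them to the topic oracle-ATTRIBUTES; a consumer or Kafka Streams application can then serve lookups by ID from that stream.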
I am able to connect to Hive using the hive-jdbc client and also using Beeline. A typical URL is:
jdbc:hive2://hive_thrift_ip:10000/custom_schema;principal=hive/hive_thrift_ip#COMPANY.COM
Unfortunately, the connection is always established to the 'default' schema of Hive, and the configured schema name in the URL is not taken into account. I use the org.apache.hive.jdbc.HiveDriver class.
It always takes me to the tables of the default schema. Still, I am able to access tables from the other schema by prefixing the table name with the schema name, like custom_schema.test_table.
Kindly let me know if I missed any property or configuration when creating the connection that would give me a session scoped to the schema configured in the URL.
Many thanks.
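(A possible workaround sketch, assuming the driver really is ignoring the schema segment of the URL: Hive resolves unqualified table names against the session's current database, so issuing USE right after connecting should pin the session to the intended schema.)

USE custom_schema;
-- unqualified names now resolve against custom_schema
SELECT * FROM test_table LIMIT 10;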
I have a PostgreSQL database, and I need it to connect to Oracle, read data from an Oracle view, and store that data in a custom table.
The PostgreSQL database should connect to Oracle automatically every day to read the latest updates from the Oracle view.
How can I set this up?
It sounds like you probably want a SQL/MED foreign data wrapper. Check out oracle_fdw. You could also use the generic odbc_fdw or jdbc_fdw wrappers via Oracle's ODBC or JDBC drivers.
Another option is DBI-Link.
Combine these with a cron job if you want to copy the data into a local table on a schedule.
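A minimal oracle_fdw sketch (the option names are oracle_fdw's; the server, credentials, column list, and object names are assumptions):

CREATE EXTENSION oracle_fdw;

CREATE SERVER oradb FOREIGN DATA WRAPPER oracle_fdw
  OPTIONS (dbserver '//ora-host:1521/SERVICE');

CREATE USER MAPPING FOR postgres SERVER oradb
  OPTIONS (user 'reader', password 'secret');

-- expose the remote Oracle view as a local foreign table
CREATE FOREIGN TABLE oracle_view_ft (
  id   integer,
  name text
) SERVER oradb OPTIONS (schema 'APP', table 'MY_VIEW');

-- the daily cron job can then run something like:
INSERT INTO custom_table
SELECT id, name FROM oracle_view_ft;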
I am trying to create a db link from Oracle 11g to SQL Server 2005 using the DG4MSQL gateway.
After creating the db link, I am not able to query SQL Server system views (sys.services or sys.objects) through the JDBC driver, but I am able to query all user tables through it.
If I use SQL*Plus, I am able to query all tables, including system tables. Since my project is a Java project, I am bound to the JDBC driver.
One more observation: if I use DG4ODBC instead of the DG4MSQL gateway, I am able to query all SQL Server tables, including system tables, through the JDBC driver.
Is there any way I can query SQL Server system tables using DG4MSQL and the JDBC driver?
This one is a little bit tricky to explain.
An Oracle Gateway performs three types of operations:
SQL translations (when you query regular tables, views, etc.)
Data Dictionary translations (when you query system views)
Data type transformations (for example, SQL Server's DATETIME to Oracle's DATE)
In the case of JDBC, the JDBC-ODBC bridge makes the JDBC driver fully compatible with the drivers used by DG4ODBC. Therefore, JDBC plus DG4ODBC lets you perform all of the translations above.
The problem is that DG4MSQL uses a proprietary driver, and only SQL translations can be bridged to JDBC.
As a workaround, you could try creating local views in your Oracle schema based on the remote SQL Server system views. Depending on your requirements, you could even create them as materialized views (a sketch follows the example below).
CREATE OR REPLACE VIEW sys_services
AS SELECT *
FROM sys.services@dblink;
and then query sys_services instead of directly querying sys.services@dblink.
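If you take the materialized view route, a sketch with a daily complete refresh (the refresh schedule and dblink name are assumptions):

CREATE MATERIALIZED VIEW sys_services_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE NEXT SYSDATE + 1
AS SELECT * FROM sys.services@dblink;

This keeps a local, periodically refreshed copy, so queries against sys_services_mv never touch the gateway at query time.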