How to configure Kafka JDBC Source Connector with stored procedure?

I am very new to Kafka; I'm doing a PoC (my first Kafka app) and was curious whether anyone has come across or worked on a custom JDBC Source Connector that uses a stored procedure to export data from Oracle into Kafka.
I know this could be considered an open-ended question by the SO community, but thanks for your patience and I'd appreciate any feedback.
Thank you!
Some background:
I already have stored procedures in Oracle which are used by an existing Spring app to extract and load data into a new DB (for a new version of an existing enterprise application).
The table structure in the new database is different from that of the old one.
The Spring app (which uses Spring JDBC and works with stored procedures, row mappers, etc.) was developed as a PoC to check connectivity and integration and to load data into the new DB.
Now that it works, we're trying to introduce Kafka to actually store the data and then load it into the new DB. The JDBC Sink Connector would be developed later.
I am looking for an example (using Java, Maven, Spring) that would help me get started with building a custom connector. Most of the examples/docs show curl commands, which don't fit my existing app. I may be missing something here.

There is a way to call a stored procedure through the source connector's query property: have the procedure populate a temporary (staging) table and insert the data there, then make the connector's query select from that table.
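To make that concrete, a standalone-worker config for Confluent's JDBC source connector might look something like the sketch below. This is not a tested setup: the staging table STG_EXPORT, the connection details, and the assumption that the procedure is run out of band (e.g. by a scheduled job) are all placeholders.

name=oracle-sp-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1
connection.user=kafka_user
connection.password=secret
# The connector's query must be a plain SELECT -- it cannot execute
# "CALL my_proc()" directly -- so the stored procedure is assumed to
# populate STG_EXPORT beforehand; the connector then polls that table.
query=SELECT * FROM STG_EXPORT
mode=incrementing
incrementing.column.name=ID
topic.prefix=oracle-export
poll.interval.ms=60000

With mode=incrementing the connector wraps the query and appends a WHERE clause on the incrementing column, so only rows added to the staging table since the last poll are published.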

Related

Is Spring Data Jdbc recommended for Oracle 18c?

Is Spring Data JDBC v1.1.5 recommended for Oracle Database and enterprise applications? Lots of the samples around the net are based on open-source RDBMSs (H2 or PostgreSQL). We are using Spring Data JDBC in a Spring Boot microservice application and are facing the following problems:
Forced to write custom converters for oracle.sql.TIMESTAMP, oracle.sql.TIMESTAMPTZ, oracle.sql.DATE, oracle.sql.ROWID, etc.
Can't cast oracle.sql.ROWID to java.lang.Number.
Identity must not be null after save.
Spring Data JDBC is absolutely recommended for Enterprise Applications.
Not so much for use with Oracle.
Since the necessary resources (database & JDBC driver) weren't available in a form that could be easily used in integration tests on public platforms, Oracle isn't included in regular builds.
Therefore it is likely that one encounters issues when working with Oracle.
Some are already known; for others, issues in Jira or even PRs are highly appreciated.
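For the converter pain point specifically, here is a minimal sketch of a reading converter registered by overriding AbstractJdbcConfiguration (class and field names are illustrative, and exact registration details vary between Spring Data JDBC versions):

import java.sql.SQLException;
import java.time.LocalDateTime;
import java.util.List;

import org.springframework.context.annotation.Configuration;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;
import org.springframework.data.jdbc.core.convert.JdbcCustomConversions;
import org.springframework.data.jdbc.repository.config.AbstractJdbcConfiguration;

@Configuration
public class OracleJdbcConfig extends AbstractJdbcConfiguration {

    // Maps the Oracle driver's TIMESTAMP wrapper onto LocalDateTime so
    // entity fields don't have to expose oracle.sql.* types.
    @ReadingConverter
    static class OracleTimestampConverter
            implements Converter<oracle.sql.TIMESTAMP, LocalDateTime> {
        @Override
        public LocalDateTime convert(oracle.sql.TIMESTAMP source) {
            try {
                return source.timestampValue().toLocalDateTime();
            } catch (SQLException e) {
                throw new IllegalStateException("Failed to convert oracle.sql.TIMESTAMP", e);
            }
        }
    }

    // Registers the converter with Spring Data JDBC.
    @Override
    public JdbcCustomConversions jdbcCustomConversions() {
        return new JdbcCustomConversions(List.of(new OracleTimestampConverter()));
    }
}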

Can ElasticSearch be used as a persistent store for Apache Ignite?

I want to know if there's a way to configure Elasticsearch as the datasource for Ignite. I browsed the web but did not find a solution.
I want to implement this integration for a Java application.
If I understand your idea correctly, there's a way to do it. As far as I can see, Elasticsearch supports SQL table-like data access, and it's available through a JDBC connection. On Ignite's side we have 3rd-party persistence, which uses JDBC to connect to an underlying store. To be honest I haven't tested it, but I suppose it should work. A rough sketch of the Ignite side is below.
I should also mention that you can use the GridGain WebConsole to generate a simple Ignite project from an existing JDBC connection. This functionality can be found on the Configuration tab -> Create Cluster Configuration.
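Here is that untested sketch of Ignite 3rd-party persistence over a JDBC data source. The jdbc:es URL, the index/field names, and the Person class are placeholders; Spring's DriverManagerDataSource is used only to keep the data-source factory short. Note that Elasticsearch's JDBC endpoint is read-only, so only read-through can work.

import java.sql.Types;

import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.cache.store.jdbc.JdbcType;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
import org.apache.ignite.configuration.CacheConfiguration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class IgniteEsConfig {

    public static CacheConfiguration<Integer, Person> personCache() {
        CacheJdbcPojoStoreFactory<Integer, Person> store = new CacheJdbcPojoStoreFactory<>();
        // Elasticsearch SQL is exposed over JDBC; the ES JDBC driver jar
        // must be on the classpath for this URL to resolve.
        store.setDataSourceFactory(() ->
                new DriverManagerDataSource("jdbc:es://http://localhost:9200"));

        // Describe how rows of the "person" index/table map onto cache entries.
        JdbcType type = new JdbcType();
        type.setCacheName("personCache");
        type.setKeyType(Integer.class);
        type.setValueType(Person.class);
        type.setDatabaseTable("person");
        type.setKeyFields(new JdbcTypeField(Types.INTEGER, "id", Integer.class, "id"));
        type.setValueFields(new JdbcTypeField(Types.VARCHAR, "name", String.class, "name"));
        store.setTypes(type);

        CacheConfiguration<Integer, Person> cfg = new CacheConfiguration<>("personCache");
        cfg.setCacheStoreFactory(store);
        cfg.setReadThrough(true); // load missing entries from Elasticsearch
        // write-through is left off: the ES JDBC endpoint does not accept INSERTs
        return cfg;
    }

    public static class Person {
        public Integer id;
        public String name;
    }
}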

Make BIRT report using data from Gemfire cache

I am relatively new to both BIRT and GemFire, but I know the basics. Though I wasn't able to find out how BIRT can fetch data from a GemFire cache.
Can someone please describe the procedure, or tell me whether it is possible at all?
It looks like BIRT uses JDBC (amongst other means) to talk to a backend datasource. Unfortunately GemFire does not provide a JDBC driver. You would need to develop a custom ODA Data Source as described here: https://wiki.eclipse.org/Use_Case_-_Create_a_Custom_ODA_Data_Source
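For orientation, whatever custom ODA driver you write would ultimately execute OQL through the GemFire Java client, along these lines. The region name and locator address are placeholders, and the package names assume Apache Geode / recent GemFire versions.

import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.query.Query;
import org.apache.geode.cache.query.SelectResults;

public class GemfireOqlSketch {

    public static void main(String[] args) throws Exception {
        // Connect to the cluster through a locator.
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("localhost", 10334)
                .create();

        // Run an OQL query; a custom ODA driver would hand these
        // results to BIRT as its result set.
        Query query = cache.getQueryService()
                .newQuery("SELECT * FROM /Customers WHERE status = 'active'");
        SelectResults<?> results = (SelectResults<?>) query.execute();
        results.forEach(System.out::println);

        cache.close();
    }
}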

Small embedded database for Spring Data

I am trying to write a small web application with a RESTful frontend to manage a small amount of data (around 30 records). I want to create a PDF file from the records (using iText, but that is not the problem). I am now looking for a small database that I can embed in my application and that persists the data somewhere on my hard disk (if possible, no client/server database), but I can't find an example or tutorial for this. All the tutorials I found use a database in in-memory mode, which is not what I need. Is there a good tutorial that could help me? Which database would you suggest for my situation?
Thanks for your help and
Kind regards,
Andreas Grund
You can use H2, HSQLDB, or Derby as an embedded database. For example, an H2 datasource URL looks like this:
jdbc:h2:~/Desktop/Database/test;DB_CLOSE_ON_EXIT=FALSE;
And in Spring Boot you can set this up easily; see the documentation: Spring Boot - Embedded Databases.
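If you are on Spring Boot, pointing the datasource at a file-backed H2 database is only a matter of properties. A sketch, with illustrative paths and credentials:

# application.properties -- persistent, file-based H2 instead of in-memory
spring.datasource.url=jdbc:h2:file:~/data/demo;DB_CLOSE_ON_EXIT=FALSE
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
# if you use JPA: keep the schema between restarts instead of recreating it
spring.jpa.hibernate.ddl-auto=update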

How to read and write data from multiple databases by using spring batch update?

I am working on a Spring batch update. I searched on Google but didn't find any solution for my problem.
I have two databases (MySQL and Oracle). I want to read data from MySQL and write it into Oracle using batch update.
Your problem is unclear.
You can first read the data from MySQL with one Spring JdbcTemplate object initialized with the MySQL data source, and then use another JdbcTemplate object, initialized with the Oracle data source, to write the data.
If you want to do it in one transaction, you will have to use distributed-transaction/XA libraries, such as Atomikos, and Spring's distributed transaction manager. See https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-jta.html for details on Spring's integration with distributed transaction libraries.
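For the non-XA case, a minimal sketch of that two-template approach (connection details and the customer table are made up for illustration):

import java.util.List;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class MySqlToOracleCopy {

    public static void main(String[] args) {
        // One DataSource/JdbcTemplate per database.
        JdbcTemplate mysql = new JdbcTemplate(new DriverManagerDataSource(
                "jdbc:mysql://localhost:3306/sourcedb", "user", "pw"));
        JdbcTemplate oracle = new JdbcTemplate(new DriverManagerDataSource(
                "jdbc:oracle:thin:@//localhost:1521/ORCLPDB1", "user", "pw"));

        // Read the rows from MySQL...
        List<Object[]> rows = mysql.query(
                "SELECT id, name FROM customer",
                (rs, rowNum) -> new Object[] { rs.getLong("id"), rs.getString("name") });

        // ...and write them to Oracle in a single JDBC batch.
        oracle.batchUpdate("INSERT INTO customer (id, name) VALUES (?, ?)", rows);
    }
}

Note there is no transaction spanning both databases here; if the process dies between the read and the write you can end up with a partial copy, which is exactly what the XA/Atomikos setup mentioned above addresses.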
