Kafka source/sink connectors - multiple tables with a single topic (JDBC)

I am populating messages from different tables into a single topic, but I need to separate the messages back out and sink them into a destination table set with the same names.
Using debezium/connect, I populate the topic from the source table with the JDBC source connector, and with the JDBC sink connector I populate the destination table (from topic to destination table). In this scenario I need one topic per table. Since there are hundreds of tables, I need a single topic to sync many tables.
Any idea how to achieve this?

You can use Kafka Connect transforms to modify the topic / table names in flight.
It seems you may want to use a RegexRouter to collect multiple topics together.
https://docs.confluent.io/current/connect/transforms/regexrouter.html
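A minimal sketch of the source-connector side, assuming the JDBC source connector prefixes its topics with jdbc- (the prefix, regex, and combined topic name below are all hypothetical):
transforms=route
transforms.route.type=org.apache.kafka.connect.transforms.RegexRouter
# Collapse every jdbc-<table> topic into one combined topic
transforms.route.regex=jdbc-(.*)
transforms.route.replacement=combined-tables
Note that once all tables share one topic, the topic name no longer carries the table name, so the sink side needs some other way (e.g. a field in the message) to route each record back to its destination table.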

Related

Different Microservices Using Different Relational Databases

My goal is to create a social media app, and I'm planning to use a microservice architecture. I want to create a user-service and a post-service. The post-service will take a user id and return that user's posts.
On the database side, I was thinking about using a relational database, with a "post" table that has user_id as a foreign key. However, I have heard that microservices should have different databases, so the "user" and "post" tables should be in separate databases.
In that case, why would I choose a relational database over a non-relational one? Referring to a primary key in a different database with a foreign key in the current database makes no sense to me.
In most cases each service will require multiple tables (e.g. your user-service would have many other tables related to users, not just one), and a relational database helps maintain the relations among the tables of a single service.

Couchbase Elasticsearch Connector checkpoint issue

Our configuration has two connectors, each connected to its own Elasticsearch cluster, but both reading from the same Couchbase bucket. We have noticed that if one of the connectors is started first and reads all of the documents from the bucket, the second connector is unable to feed anything into its Elasticsearch after starting. Could this be due to the checkpoint document the first connector adds to the source bucket?
Make sure the two connectors have different group names, otherwise they will share the same replication checkpoint (and weird things will happen if they run at the same time).
Here's the relevant section of the config file:
[group]
name = 'example-group'
Each connector group must be assigned a unique name (in order to keep its replication checkpoints separate). The group name is required even if there is only one connector instance in the group.
Reference: https://docs.couchbase.com/elasticsearch-connector/4.2/configuration.html#group-membership
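For example, giving each connector its own group name (the names below are hypothetical):
# Config file for the first connector
[group]
name = 'es-cluster-a'
# Config file for the second connector
[group]
name = 'es-cluster-b'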

How to store constant data in a DB

We have some queries that always return the same result.
For example, a query that retrieves all user roles of our system; the result is constant while the server is up.
I'm looking for the best way to store such data.
I am thinking about calling a table-creation script on server startup, or writing a stored function that creates and fills a table if it doesn't exist and retrieves the data if it does.
Maybe there are better alternatives?
Sounds like you should look into Materialised views.
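For example, in Oracle a complete-refresh materialized view can cache such a query. A minimal sketch, with hypothetical table and column names:
-- Populate at creation time; rebuild only when explicitly requested
CREATE MATERIALIZED VIEW user_roles_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
  SELECT u.user_id, r.role_name
  FROM   users u
  JOIN   user_roles r ON r.user_id = u.user_id;
Whenever the underlying data actually changes (e.g. on deployment), DBMS_MVIEW.REFRESH('USER_ROLES_MV') rebuilds the cached result.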

Lotus Notes to Oracle database migration

I have an NSF file of a Lotus database. The objective is to give up the legacy Lotus Notes database and migrate it to a relational Oracle database. Does anyone have expertise in this area and can give a step-wise process to carry out the migration from Lotus Notes to Oracle?
Ten years ago I integrated Domino and Oracle, and it was pretty impressive.
I googled "migrate from domino to oracle" and didn't find much more than LEI (or DECS), which allows connecting the data between the two systems.
Some steps:
1. Analyze the NSF: its size (MB or GB?), the number of forms/views, and the logic in its code. (My five cents: find someone who really uses the database and can explain what they use in it!)
2. Forms/views will become tables and queries in Oracle.
3. Data migration: all text, dates, etc. will be straightforward, BUT attachments, rich text, and inline images will be painful.
4. Logic: you will have to rewrite all the formula/LotusScript/XPages code in J2EE or something else.
Read also http://searchdomino.techtarget.com/answer/Migrating-from-Domino-to-Java-and-Oracle

Creating event-driven SQL scripts

I am creating a database that stores GPS data. As soon as the database is updated with a data point, I want the server to check whether that point is within a certain area and then send a message or update another database (I haven't decided what action it should take yet). Is this kind of event-driven operation possible in PL/SQL? I am only familiar with passive querying and running scheduled scripts.
Yes, there is such a feature: database triggers. On insert or update of the data (there are actually many more event types), you can check whether some conditions are met and call a PL/SQL procedure to handle the event.
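A minimal sketch of such a trigger, assuming a gps_points table with latitude/longitude columns; the table, column, and bounding-box values are all hypothetical:
CREATE OR REPLACE TRIGGER trg_gps_point_in_area
AFTER INSERT ON gps_points
FOR EACH ROW
BEGIN
  -- Simple bounding-box check; a real geofence would use Oracle Spatial (SDO_GEOMETRY)
  IF :NEW.latitude BETWEEN 40.70 AND 40.80
     AND :NEW.longitude BETWEEN -74.02 AND -73.93 THEN
    INSERT INTO area_events (point_id, entered_at)
    VALUES (:NEW.id, SYSTIMESTAMP);
  END IF;
END;
/
For actions outside the database (such as sending a message), it is usually better to have the trigger write to a queue table that an external process polls, since slow work inside the trigger delays the original insert.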
