Define multiple LookupOperations for a hierarchical collection relationship in a Spring Boot application to get the data

I am new to MongoDB and I am writing a DB call that involves multiple collections.
I am trying to understand how I can look up the data when I have this type of relationship.
E.g. - User has a Subscription -> Subscription has plan details -> Subscription plan details collection.
I want to write a lookup operation in Java.
I need pointers on how I can do that.
How can I combine multiple different lookup operations?
Or is there a better way to get the data from these collections?
Thanks,
Atul
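
For reference, here is a minimal sketch of how chained lookups can be expressed with Spring Data MongoDB's aggregation framework. The collection names (users, subscriptions, plans) and field names (subscriptionId, planId) are assumptions standing in for the actual schema:

```java
import org.bson.Document;

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationResults;
import org.springframework.data.mongodb.core.aggregation.LookupOperation;
import org.springframework.data.mongodb.core.query.Criteria;

public class UserSubscriptionLookup {

    private final MongoTemplate mongoTemplate;

    public UserSubscriptionLookup(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    public AggregationResults<Document> fetchUserWithPlan(String userId) {
        // First $lookup: user -> subscription
        LookupOperation subscriptionLookup = LookupOperation.newLookup()
                .from("subscriptions")          // assumed collection name
                .localField("subscriptionId")   // assumed field on the user document
                .foreignField("_id")
                .as("subscription");

        // Second $lookup: subscription -> plan details
        LookupOperation planLookup = LookupOperation.newLookup()
                .from("plans")                  // assumed collection name
                .localField("subscription.planId")
                .foreignField("_id")
                .as("planDetails");

        // Lookups combine by listing them as consecutive pipeline stages;
        // unwind flattens the joined subscription array so the second
        // lookup can reference subscription.planId.
        Aggregation aggregation = Aggregation.newAggregation(
                Aggregation.match(Criteria.where("_id").is(userId)),
                subscriptionLookup,
                Aggregation.unwind("subscription"),
                planLookup);

        return mongoTemplate.aggregate(aggregation, "users", Document.class);
    }
}
```

Each lookup is just another pipeline stage, so combining them is simply a matter of ordering them inside Aggregation.newAggregation.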

Related

How to read multiple tables using Spring Batch

I am looking to read data from multiple tables (in different databases), aggregate them, and create a final result set. In my case, each query returns a List of objects. I went through the web many times and found no link other than "Spring Batch - How to read multiple tables (queries) as Reader and write to a flat file", but it returns only a single object.
Is there any way we can do this? Any working sample example would help a lot.
Example -
One query gives a List of Departments - from Oracle DB
One query gives a List of Employees - from Postgres
Now I want to build the Employee and Department relationship and send the final object to a processor for a further lookup against MongoDB, then send the final object to the writer.
The question should rather be "how to join three tables from three different databases and write the result to a file". There is no built-in reader in Spring Batch that reads from multiple tables. You either need to create a custom reader, or decompose the problem at hand into tasks that can be implemented using Spring Batch tasklet/chunk-oriented steps.
I believe you can use the driving query pattern in a single chunk-oriented step. The reader reads employee items, then a processor enriches each item with 1) the department from Postgres and 2) other info from Mongo. This should work for small/medium datasets. If you have a lot of data, you can use partitioning to parallelize things and improve performance.
Another option, if you want to avoid a query per item, is to load all departments into a cache (I guess there should be fewer departments than employees) and enrich items from the cache rather than with individual queries to the db; a sketch of this combined approach follows.
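
As a rough illustration of the driving query pattern with a department cache, here is a hedged sketch; the Employee, Department, ExtraInfo, and EnrichedEmployee types are hypothetical stand-ins for the real domain model:

```java
import java.util.Map;

import org.springframework.batch.item.ItemProcessor;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

// Hypothetical domain types standing in for the real ones.
record Employee(long id, long departmentId) {}
record Department(long id, String name) {}
record ExtraInfo(long employeeId, String payload) {}
record EnrichedEmployee(Employee employee, Department department, ExtraInfo extra) {}

// Driving query pattern: the reader streams employees from Oracle; this
// processor enriches each one with its department (preloaded from Postgres)
// and a further lookup against MongoDB before handing off to the writer.
public class EmployeeEnrichmentProcessor implements ItemProcessor<Employee, EnrichedEmployee> {

    private final Map<Long, Department> departmentCache; // loaded once up front, e.g. in an earlier tasklet step
    private final MongoTemplate mongoTemplate;

    public EmployeeEnrichmentProcessor(Map<Long, Department> departmentCache,
                                       MongoTemplate mongoTemplate) {
        this.departmentCache = departmentCache;
        this.mongoTemplate = mongoTemplate;
    }

    @Override
    public EnrichedEmployee process(Employee employee) {
        // 1) Enrich from the cache instead of issuing one Postgres query per item
        Department department = departmentCache.get(employee.departmentId());

        // 2) Further lookup against MongoDB
        ExtraInfo extra = mongoTemplate.findOne(
                Query.query(Criteria.where("employeeId").is(employee.id())),
                ExtraInfo.class);

        return new EnrichedEmployee(employee, department, extra);
    }
}
```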

AppSync update two tables with one mutation

I am a little confused about the best approach to update two tables with one GraphQL mutation; I am using AWS AppSync.
I have an application where I need a User to be able to register for an Event. Given that I am using DynamoDB as the database, I had thought about a denormalized data structure for the User and Event tables. I am thinking of storing an array of brief Event details, such as eventID and title, in the User table and an array of entrants in the Events table, holding only brief user info, such as userID and name. Firstly, is this a good approach, or should I have a third join table to hold these 'relationships'?
If it's OK, I need to update both tables during the signUp mutation, but I am struggling to get my head around how to update two tables with one mutation and, in turn, one request mapping template.
Am I right in thinking I need to use a Pipeline resolver? Or is there another way to do this?
There are multiple options for this:
AppSync supports BatchWrite operations to update multiple DynamoDB tables at the same time
AppSync supports DynamoDB transactions to update multiple DynamoDB tables transactionally at the same time (see the sketch after this list)
Pipeline resolvers
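
To make the transactional option concrete, here is a rough AWS SDK for Java v2 sketch of the kind of two-table write the resolver performs; the table names, key attributes, and list attributes are assumptions, and in AppSync itself this would be expressed in a resolver mapping template rather than SDK code:

```java
import java.util.Map;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.TransactWriteItem;
import software.amazon.awssdk.services.dynamodb.model.TransactWriteItemsRequest;
import software.amazon.awssdk.services.dynamodb.model.Update;

public class SignUpTransaction {

    public static void signUp(DynamoDbClient dynamoDb, String userId, String eventId) {
        // Append the entrant to the Events table (hypothetical schema)
        Update addEntrant = Update.builder()
                .tableName("Events")
                .key(Map.of("eventID", AttributeValue.builder().s(eventId).build()))
                .updateExpression(
                        "SET entrants = list_append(if_not_exists(entrants, :empty), :entrant)")
                .expressionAttributeValues(Map.of(
                        ":entrant", AttributeValue.builder()
                                .l(AttributeValue.builder().s(userId).build()).build(),
                        ":empty", AttributeValue.builder().l().build()))
                .build();

        // Append the event to the Users table (hypothetical schema)
        Update addEvent = Update.builder()
                .tableName("Users")
                .key(Map.of("userID", AttributeValue.builder().s(userId).build()))
                .updateExpression(
                        "SET events = list_append(if_not_exists(events, :empty), :event)")
                .expressionAttributeValues(Map.of(
                        ":event", AttributeValue.builder()
                                .l(AttributeValue.builder().s(eventId).build()).build(),
                        ":empty", AttributeValue.builder().l().build()))
                .build();

        // Both updates succeed or fail together
        dynamoDb.transactWriteItems(TransactWriteItemsRequest.builder()
                .transactItems(
                        TransactWriteItem.builder().update(addEntrant).build(),
                        TransactWriteItem.builder().update(addEvent).build())
                .build());
    }
}
```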

How to fetch the data from database using spring boot without mapping

I have a database, and in that database there are many tables of data. I want to fetch the data from any one of those tables by entering a query from the front-end application. I'm not doing any manipulation of the data; I'm just retrieving it from the database.
Also, mapping the data requires writing many entity or POJO classes, so I don't want to map the data to any object. How can I achieve this?
In this case, assuming the mapping of tables is not relevant, you don't need to use JPA/Hibernate at all.
You can use the old, battle-tested JdbcTemplate, which can execute a query of your choice (that you'll pass from the client), serialize the response to a JSONObject, and return it as a response from your controller.
The client side will be responsible for rendering the result.
You might also query the database metadata to obtain information about column names, types, etc., so that the client side gets this information too and can show the results in a more convenient / "advanced" way.
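
A minimal sketch of that idea, with a hypothetical /query endpoint; JdbcTemplate returns each row as a column-name -> value map, which Spring serializes to JSON without any entity classes:

```java
import java.util.List;
import java.util.Map;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GenericQueryController {

    private final JdbcTemplate jdbcTemplate;

    public GenericQueryController(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Executes the query sent from the front end and returns the rows as
    // generic maps - no entity or POJO classes involved.
    @PostMapping("/query")
    public List<Map<String, Object>> runQuery(@RequestBody String sql) {
        return jdbcTemplate.queryForList(sql);
    }
}
```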
Beware of security implications, though. Basically it means that the client will be able to delete all the records from the database by a simple query and you won't be able to avoid it :)

Kafka Connect Jdbc Source Data Schema

Does the connector generate a generic schema, such as a record with a map of column -> value in it, or does each table get its own schema that maps to a different class through binding? That is, in the first scenario, through binding, all records across all tables would bind to one record class that contains a map.
I implemented some similar functionality in the past, and what I did was create a generic record class containing a map of field to value, which is consistent with what the underlying JDBC API returns.
Although this seems to defy the purpose of a schema, hence I wonder how it works.
If anyone could give me some hints and documentation to investigate, that would be much appreciated.
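
For context, the generic record described above might look like this minimal sketch (a hypothetical class, not part of the connector):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// A schema-agnostic record: one instance per row, regardless of which table
// it came from, mirroring what a JDBC ResultSet exposes.
public class GenericRecord {

    private final String table;
    private final Map<String, Object> columns;

    public GenericRecord(String table, Map<String, Object> columns) {
        this.table = table;
        // Preserve the column order returned by the ResultSet
        this.columns = Collections.unmodifiableMap(new LinkedHashMap<>(columns));
    }

    public String table() { return table; }

    public Object column(String name) { return columns.get(name); }

    public Map<String, Object> columns() { return columns; }
}
```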

Generic Entity in Spring Data for Couchbase Documents

One advantage of document DBs like Couchbase is schemaless entities. It gives me the freedom to add new attributes within the document without any schema change.
Using Couchbase's JsonObject and JsonDocument, my code remains generic and performs CRUD operations without any need to modify it whenever a new attribute is added to the document. Refer to this example where no entities are created.
However, if I follow the usual Spring Data approach of creating entity classes, I do not take full advantage of this flexibility. I will end up with a code change whenever I add a new attribute to my document.
Is there an approach to having a generic entity using Spring Data? Or is Spring Data not really suitable for schemaless DBs? Or is my understanding incorrect?
I would argue the opposite is true.
One way or another if you introduce a new field you have to handle the existing data that doesn't have that field.
Either you update all your documents to include that field. That is what schema-based stores basically force you to do.
Or you leave your store as it is and let your application handle the issue. With Spring Data you have some nice and obvious ways to handle that in a consistent fashion, e.g. by having a default value in the entity or handling it in a listener.
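
The default-value option could look like this minimal sketch, assuming a hypothetical Spring Data Couchbase entity; documents saved before loyaltyLevel existed simply load with the default, so no bulk migration is needed:

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.couchbase.core.mapping.Document;

@Document
public class Customer {

    @Id
    private String id;

    private String name;

    // Attribute added later: older documents that lack this field keep the
    // initializer's value when loaded, so existing data needs no update.
    private String loyaltyLevel = "BASIC";

    // Getters and setters omitted for brevity
}
```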
