Read datasource details from a database or a file and, based on an API identifier, save data in that database - which tool to use? - spring

In a Spring Boot application, there is a method mapped to a POST API for saving certain data in a database. The issue is that the data source changes based on an API URL parameter.
The API looks like: {baseURL}/api/{someIdentifier}/addUser
Now, there is another file (or a database) which maps database connection details (datasource URL, username, password, driver) to this {someIdentifier}. There could be many such identifiers, each corresponding to its own database and connection parameters.
When this API is hit, a method should fetch the connection details for that identifier, make the connection, and save the data in that database. Creating a new connection on every API call is not feasible.
Can anyone please suggest which tool or technology could help solve this problem, especially with Spring Boot?
Thanks in advance!

You are looking for the AbstractRoutingDataSource.
From its documentation:
Abstract DataSource implementation that routes getConnection() calls to one of various target DataSources based on a lookup key. The latter is usually (but not necessarily) determined through some thread-bound transaction context.
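A minimal sketch of how this could be wired up, assuming the {someIdentifier} from the URL is stored per request (TenantContext, TenantRoutingDataSource, the bean names and the two example identifiers are made up for illustration):

import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Holds the identifier extracted from the request URL for the current thread.
final class TenantContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();
    static void set(String tenant) { CURRENT.set(tenant); }
    static String get() { return CURRENT.get(); }
    static void clear() { CURRENT.remove(); }
}

// Routes every getConnection() call to the DataSource registered for the current identifier.
class TenantRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return TenantContext.get();
    }
}

@Configuration
class RoutingDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        // One pooled DataSource per identifier; in the real application these would be
        // built from the connection details looked up in the mapping file/database.
        Map<Object, Object> targets = new HashMap<>();
        targets.put("tenantA", pooled("jdbc:mysql://hostA/dbA", "userA", "passA"));
        targets.put("tenantB", pooled("jdbc:mysql://hostB/dbB", "userB", "passB"));

        TenantRoutingDataSource routing = new TenantRoutingDataSource();
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(targets.get("tenantA"));
        return routing;
    }

    private DataSource pooled(String url, String user, String password) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(url);
        config.setUsername(user);
        config.setPassword(password);
        return new HikariDataSource(config);
    }
}

A filter or the controller itself would call TenantContext.set(someIdentifier) before touching the repository (and clear() afterwards), so connections stay pooled per database and no new connection has to be created on each request.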

Related

Which connection pool implementation has the behaviour that I want?

So I am running a Spring Boot server which I use to query a MySQL database. So far I have been using the auto-configured HikariCP connection pool with JOOQ, so I had almost nothing to do with the connection pool. But now I need to query two different schemas (on the same server) and it seems I can't auto-configure two connection pools, so I have to tinker with the DataSource myself. I would like to keep the native behaviour of the connection pool, i.e. have a set of persistent connections so that the server can dispatch the queries and, once a query is resolved, the connection is still there and free to use again. I have found multiple connection pool implementations allowing multiple DataSources to query multiple servers, but I don't know whether each of them has the behaviour I just described.
Implementation #1:
https://www.ru-rocker.com/2018/01/28/configure-multiple-data-source-spring-boot/
Implementation #2:
https://www.stubbornjava.com/posts/database-connection-pooling-in-java-with-hikaricp
I feel like #2 is the most straightforward solution, but I am sceptical about the idea of creating a new DataSource every time I want to query. If I don't close it, am I just opening new connections over and over again? So obviously I would have to close them once finished, but then it's not really a connection pool anymore. (Or am I misunderstanding this?)
Meanwhile #1 seems more reliable, but again, I would be calling new HikariDataSource every time, so is that what I am looking for?
(Or is there a simpler solution that I have been missing, given that I need to query two different schemas but still on the same server and dialect?)
Ok, so it turns out I don't have to set up multiple connections in my case. As I am querying the same server with the same credentials, I don't have to set up a connection for each schema. I just removed the schema that I specified in my JDBC URL config:
spring.datasource.url=jdbc:mysql://localhost:5656/db_name?useUnicode=true&serverTimezone=UTC
Becomes
spring.datasource.url=jdbc:mysql://localhost:5656/?useUnicode=true&serverTimezone=UTC
And then, as I had already generated the POJOs with the JOOQ generator, I could reference my tables through the schema object, i.e. Client.CLIENT.ID.as("idClient") becomes ClientSchema.CLIENTSCHEMA.CLIENT.ID.as("idClient"). This way I can query multiple schemas without setting up any additional connection.
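A rough sketch of what a cross-schema query could then look like, assuming jOOQ-generated classes for two schemas (OrderSchema and the table and column names are illustrative, not the actual generated ones):

import org.jooq.DSLContext;
import org.jooq.Record2;
import org.jooq.Result;

public Result<Record2<Integer, String>> clientsWithOrders(DSLContext dsl) {
    // Both schemas are addressed through their generated schema objects,
    // all over the same auto-configured connection pool.
    return dsl
        .select(ClientSchema.CLIENTSCHEMA.CLIENT.ID.as("idClient"),
                OrderSchema.ORDERSCHEMA.ORDERS.REFERENCE)
        .from(ClientSchema.CLIENTSCHEMA.CLIENT)
        .join(OrderSchema.ORDERSCHEMA.ORDERS)
        .on(OrderSchema.ORDERSCHEMA.ORDERS.CLIENT_ID
            .eq(ClientSchema.CLIENTSCHEMA.CLIENT.ID))
        .fetch();
}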
How to configure MAVEN and JOOQ to generate sources from multiple schemas:
https://www.jooq.org/doc/3.13/manual/code-generation/codegen-advanced/codegen-config-database/codegen-database-catalog-and-schema-mapping/

Transaction management of JPA and external API calls

I'm new to Spring and started using Spring Boot for the project. We have a use case of implementing database changes and a few external API calls as one transaction. Please suggest: is this possible with Spring's @Transactional?
Do the API calls need to be part of the transaction?
If the answer is no, I would advise using a TransactionTemplate (doing the database work inside its doInTransaction() callback) and leaving the API requests outside of the transaction.
If you need to make the API requests inside a transaction, I would advise against it: you would be locking DB resources for the duration of those requests.
You can also search and find out more about the eventual consistency model.
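A minimal sketch of that split, assuming an injected TransactionTemplate and a hypothetical repository and API client (UserRepository, PartnerApiClient and the method names are made up):

import org.springframework.stereotype.Service;
import org.springframework.transaction.support.TransactionTemplate;

@Service
public class RegistrationService {

    private final TransactionTemplate transactionTemplate;
    private final UserRepository userRepository;  // hypothetical JPA repository
    private final PartnerApiClient partnerApi;    // hypothetical REST client

    public RegistrationService(TransactionTemplate transactionTemplate,
                               UserRepository userRepository,
                               PartnerApiClient partnerApi) {
        this.transactionTemplate = transactionTemplate;
        this.userRepository = userRepository;
        this.partnerApi = partnerApi;
    }

    public void register(User user) {
        // The database changes commit (or roll back) together inside the callback ...
        User saved = transactionTemplate.execute(status -> userRepository.save(user));

        // ... while the external call happens outside the transaction,
        // so no DB resources are locked while waiting for the remote system.
        partnerApi.notifyUserCreated(saved);
    }
}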
Using @Transactional for multiple database changes as one transaction is of course doable with the annotation, but not so much for the external API calls. You would have to implement some custom logic for that: there would have to be endpoints to undo your last actions, and you would have to call them manually, in a try-catch block for example. For instance, if the external API call creates an item, there would also have to be an endpoint to delete that item, and so on.
So to summarise: using the @Transactional annotation for implementing database changes as one transaction is fine, but it is not enough for external API calls.
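A rough sketch of that manual compensation, with entirely hypothetical repository, client and endpoint names:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
class OrderPersistenceService {

    private final OrderRepository orderRepository;  // hypothetical JPA repository

    OrderPersistenceService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    @Transactional  // all database changes here are one transaction
    public void saveOrder(Order order, String reservationId) {
        order.setReservationId(reservationId);
        orderRepository.save(order);
    }
}

@Service
class OrderOrchestrator {

    private final OrderPersistenceService persistence;
    private final InventoryApiClient inventoryApi;  // hypothetical external API client

    OrderOrchestrator(OrderPersistenceService persistence, InventoryApiClient inventoryApi) {
        this.persistence = persistence;
        this.inventoryApi = inventoryApi;
    }

    public void placeOrder(Order order) {
        // External call: create the item on the remote system.
        String reservationId = inventoryApi.createReservation(order);
        try {
            persistence.saveOrder(order, reservationId);
        } catch (RuntimeException e) {
            // The DB transaction rolled back, so undo the external call
            // via its "delete" counterpart before re-throwing.
            inventoryApi.cancelReservation(reservationId);
            throw e;
        }
    }
}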

Spring Data: when does it connect to the database?

I have been researching Spring Data REST, especially for Cassandra, and one of the questions my coworkers and I had was when Spring Data connects to the database. We don't always want a REST controller to connect to the database, so when does Spring establish a connection if, say, we had a class extend CrudRepository? Does it connect to the database during the start of the application itself? Is that something we can control?
For example, I implemented this example on Spring's website:
https://spring.io/guides/gs/accessing-data-rest/
At what point in the code does spring connect to the database?
Spring will connect to the DB as soon as the DataSource is initialized. Basically, the Spring context comes alive somehow (web listeners, manual bootstrapping) and starts creating beans. As soon as it reaches the DataSource, the connection is made and the connection pool is populated.
Of course the above is based on a normal out-of-the-box configuration, and everything can be set up to your taste.
So unless you decide to control the connections yourself, DB connections will be sitting there waiting to be used.
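If you want to influence that, one option is to define the pool yourself instead of relying on auto-configuration; a minimal sketch, assuming the default HikariCP pool (URL and credentials are placeholders):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

@Configuration
class DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/db_name");  // placeholder
        config.setUsername("user");                                // placeholder
        config.setPassword("secret");                              // placeholder
        config.setMinimumIdle(0);                // don't keep idle connections open
        config.setInitializationFailTimeout(-1); // skip the initial connection attempt at startup
        return new HikariDataSource(config);
    }
}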
I disagree with the above answer.
As part of my research I initialized the DataSource using a bean configuration and then changed my database password (not in my Spring application, but the real DB username/password).
The connection keeps working for a while and then at some point (maybe after idle time) it stops working and throws a credentials exception.
This is enough to say that JPA does not keep the connection sitting and waiting to be used, but uses some mechanism to acquire/release the DB connection as needed.

Are Spring XA transactions across non-transactional backends feasible at all?

I'm writing an application that has to communicate across 3 different platforms. Two expose their DB via a REST API (no JDBC driver) and one is a native JDBC connection (e.g. Derby, MySQL, Oracle, etc.).
My problem is that I have no way of assuring any ACIDity when updating data, given that the three should be updated at the same time.
I've tried reading up on Spring XA, but it seems that both 2PC and 1PC require some form of transactional backend. Given that 2 of my 3 destinations are REST APIs, I don't have any transactions, just a save/update option.
Are there techniques I can use to ensure that the 3 sources are synchronized and that I don't run into inconsistent states if a write fails (e.g. a REST endpoint is unavailable)?
A transaction example would be:
Read from DB
Write to REST-1 endpoint
Update DB
Write to REST-2 endpoint
Is there some form of XA I could employ to wrap everything in such a way I can be assured of consistency?

Cache solution for JAXB

I have a requirement in which I need to cache data coming back as the response from a SOAP WS. I am using Spring with JAXB and JAX-WS to call the web service. I am using Ehcache for caching.
What I would ideally want is that the user data, for example, is cached as the Java bean (JAXB bean). We can use the ID of a bean (JAXB bean) as the cache key. Whenever data is requested, the cache should be checked first; if the data is not available there, the SOAP WS should be called and the data stored in the cache.
I am not aware of an existing solution for this in Spring, Ehcache, or maybe JAXB. Can someone please help me out?
In my view, the more important question to ask is how you would keep the server data in sync with what you have cached on the client side. If you really want to do this, then you would have to create some sort of mechanism to notify clients that the original data has been updated.
For example, one solution could be to create a JMS topic (pub/sub) and have the server publish an event to it. All clients listen to this topic and reload the cache by making the web service call at that point.
In terms of the cache itself at the client end, why not just choose a ConcurrentHashMap to begin with?
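A minimal sketch of that client-side cache-aside flow with a ConcurrentHashMap, using a hypothetical JAX-WS port and JAXB bean (UserPort, UserData and the method names are made up):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class UserDataCache {

    private final ConcurrentMap<String, UserData> cache = new ConcurrentHashMap<>();
    private final UserPort userPort;  // hypothetical JAX-WS generated port

    public UserDataCache(UserPort userPort) {
        this.userPort = userPort;
    }

    public UserData getUser(String id) {
        // Check the cache first; on a miss, call the SOAP WS and keep the JAXB bean, keyed by its ID.
        return cache.computeIfAbsent(id, userPort::getUserById);
    }

    public void evict(String id) {
        // Called e.g. from a JMS listener when the server signals that the original data changed.
        cache.remove(id);
    }
}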
