I have a single DB with three identical schemas in PostgreSQL.
Now I need to select a particular schema for each DB operation based on a locale key (stored in the user session). I found somewhere that this is similar to dynamic data source routing.
Does anyone have an idea how to implement this in Spring?
Will this affect transaction management in any way?
Please do share any sample code if possible.
Any suggestion would be appreciated.
Thanks & Regards.
I suggest using this approach:
First, define locale handling in your Spring MVC configuration: either a session-based locale resolver or a locale change interceptor.
You can then use LocaleContextHolder to fetch the locale attached to the current thread.
Use the referenced blog post to define your dynamic routing data source. The lookup key of your data source router will come from the locale: use LocaleContextHolder to determine the current locale and, from it, decide which data source should be used.
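A minimal sketch of such a router (the class names, lookup keys and JDBC URLs below are just illustrative assumptions, with both targets pointing at the same PostgreSQL database but different schemas):

import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.i18n.LocaleContextHolder;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

@Configuration
public class RoutingDataSourceConfig {

    // Picks the target DataSource key from the locale bound to the current thread.
    static class LocaleRoutingDataSource extends AbstractRoutingDataSource {
        @Override
        protected Object determineCurrentLookupKey() {
            return LocaleContextHolder.getLocale().getLanguage(); // e.g. "en", "de"
        }
    }

    @Bean
    public DataSource dataSource() {
        // Same database, different schema selected via the currentSchema URL parameter.
        DriverManagerDataSource en = new DriverManagerDataSource(
                "jdbc:postgresql://localhost:5432/mydb?currentSchema=schema_en", "user", "pass");
        DriverManagerDataSource de = new DriverManagerDataSource(
                "jdbc:postgresql://localhost:5432/mydb?currentSchema=schema_de", "user", "pass");

        Map<Object, Object> targets = new HashMap<>();
        targets.put("en", en);
        targets.put("de", de);

        LocaleRoutingDataSource routing = new LocaleRoutingDataSource();
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(en);
        return routing;
    }
}

Note that the lookup key is consulted when the connection is obtained (typically at transaction start), so the locale must already be resolved by the interceptor at that point.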
If you have a single DB, then the dynamic aspect should not be related to connection pooling: all connections still go to the single database. All you need to do is dynamically set the proper schema after starting the transaction.
This may be achieved using an aspect with an order higher than <tx:annotation-driven />. In this aspect you should acquire the current connection:
DataSourceUtils.getConnection(dataSource)
and issue the following PostgreSQL statement (see http://www.postgresql.org/docs/9.1/static/sql-set.html for information about the schema parameter):
set schema 'schemaname-on-the-basis-of-session-parameter';
See also using schemas in postgres.
As for transaction management: transactions are related to physical connections and sessions. Schemas, on the other hand, are a kind of namespace, so you don't have to change transaction management; just set the current schema at the beginning of each transaction during user request processing.
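A minimal sketch of such an aspect, assuming annotation-driven transactions and a hypothetical resolveSchemaFromSession() that maps the session's locale key to a schema name (the transaction aspect must be given a lower order, e.g. <tx:annotation-driven order="1" />, so that this advice runs inside the already-opened transaction):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.core.annotation.Order;
import org.springframework.jdbc.datasource.DataSourceUtils;
import org.springframework.stereotype.Component;

@Aspect
@Component
@Order(10) // higher order value than the transaction aspect => runs inside the transaction
public class SchemaSwitchAspect {

    private final DataSource dataSource;

    public SchemaSwitchAspect(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Matches methods annotated with @Transactional; adjust the pointcut if you
    // annotate classes instead of methods.
    @Before("@annotation(org.springframework.transaction.annotation.Transactional)")
    public void setSchema() throws SQLException {
        String schema = resolveSchemaFromSession();
        // Returns the connection bound to the current transaction.
        Connection connection = DataSourceUtils.getConnection(dataSource);
        try (Statement statement = connection.createStatement()) {
            statement.execute("set schema '" + schema + "'");
        } finally {
            DataSourceUtils.releaseConnection(connection, dataSource);
        }
    }

    private String resolveSchemaFromSession() {
        return "schema_en"; // stub: derive from the locale key stored in the user session
    }
}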
Related
We are working on a Spring Boot library to generate and validate OTPs. It uses a database to store the OTP.
We are using Spring Data JPA for database operations, as it makes it easy to handle different database systems depending on the project.
Now we have run into a problem: most of our projects use Oracle with a single database.
When the same lib is used in multiple projects there is a name conflict.
So we want the name of the OTP table to be configurable using a property file.
We tried @Table(name = "${otp-table-name}"), but it's not working.
We did a lot of research and found that the Hibernate naming strategy configuration can help.
But we don't want to require a lot of configuration for our library, as we need it to be easily usable in the projects.
Can someone help us with this?
Thanks in advance.
To dynamically determine the actual DataSource based on the current context, use Spring's AbstractRoutingDataSource class. You could write your own version of this class and configure it to use a different data source based on the property file.
This allows you to switch between databases or schemas without having to change the code in your library.
See: https://www.baeldung.com/spring-abstract-routing-data-source
Using a NamingStrategy is a good approach.
You could let it delegate to an existing NamingStrategy and add a prefix.
Use a library-specific default for the prefix, but also allow users of your library to specify an alternative prefix.
This way your library can be used without extra configuration, but can also handle the case of multiple applications using it in the same database schema.
Of course this involves the risk of someone using the default prefix without realizing that it is already in use.
It is not clear what the consequences of that scenario are.
If the consequences are really bad, you should drop the default value and require that a project-specific prefix is used.
When no prefix is specified, throw an exception with an instructive error message telling the user, i.e. the developer, how to pick a prefix and where to put it.
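A minimal sketch of such a delegating strategy for Hibernate 5+ (the class name, the assumption that the library's logical table name is otp, and the otp.table-prefix property are all mine):

import org.hibernate.boot.model.naming.Identifier;
import org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl;
import org.hibernate.engine.jdbc.env.spi.JdbcEnvironment;

// Delegates to the standard strategy and prefixes only the library's own table.
public class PrefixedOtpNamingStrategy extends PhysicalNamingStrategyStandardImpl {

    private final String prefix;

    public PrefixedOtpNamingStrategy(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Identifier toPhysicalTableName(Identifier logicalName, JdbcEnvironment context) {
        Identifier name = super.toPhysicalTableName(logicalName, context);
        if ("otp".equalsIgnoreCase(name.getText())) { // assumed logical name of the OTP table
            return Identifier.toIdentifier(prefix + name.getText(), name.isQuoted());
        }
        return name;
    }
}

Spring Boot applies a PhysicalNamingStrategy bean automatically, so the library's auto-configuration can expose this as a @ConditionalOnMissingBean bean whose prefix comes from a property such as otp.table-prefix, with a library-specific default that projects can override.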
Hi, I am trying to configure Quarkus to connect to an Oracle database. With the current configuration I am able to connect to the database, but I cannot specify the current schema.
I followed the documentation and tried to use new-connection-sql to set the current schema, but it doesn't seem to work.
quarkus.datasource.mydatasource.new-connection-sql=ALTER SESSION SET CURRENT_SCHEMA=SCHEMA_NAME
Here is my application.properties file
quarkus.datasource.mydatasource.db-kind=oracle
quarkus.datasource.mydatasource.jdbc.driver=oracle.jdbc.driver.OracleDriver
quarkus.datasource.mydatasource.jdbc.url=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.15.73.140)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=SN)))
quarkus.datasource.mydatasource.jdbc.min-size=3
quarkus.datasource.mydatasource.jdbc.max-size=20
quarkus.datasource.mydatasource.username=username
quarkus.datasource.mydatasource.password=password
quarkus.datasource.mydatasource.new-connection-sql=ALTER SESSION SET CURRENT_SCHEMA=SCHEMA_NAME
What could be the issue here?
Thank you.
This works fine if you add the jdbc sub-path to the property name:
quarkus.datasource.mydatasource.jdbc.new-connection-sql=ALTER SESSION SET CURRENT_SCHEMA=SCHEMA_NAME
You can refer to these Quarkus configuration references:
https://quarkus.io/guides/datasource#quarkus-agroal_quarkus.datasource.-datasource-name-.jdbc.new-connection-sql
https://quarkus.io/guides/datasource#quarkus-agroal_quarkus.datasource.jdbc.new-connection-sql
You could try to set the schema in the connection URL. But for what you are trying to achieve, basically routing each user request to a specific schema, you should check Hibernate's multitenancy support. By this means you can route each request to the database you want, but beware of the limitations regarding the parameters you can use to decide where to route each request.
Also check the Hibernate catalog and schema configuration parameters.
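At the Hibernate level the routing decision is made by a tenant identifier resolver; the sketch below only shows that piece, with a placeholder schema lookup. How it is registered in Quarkus (and the quarkus.hibernate-orm.multitenant=SCHEMA setting) is version specific, so follow the Quarkus Hibernate ORM multitenancy guide for the wiring.

import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

// Tells Hibernate which "tenant" (here: which Oracle schema) the current request belongs to.
public class SchemaTenantIdentifierResolver implements CurrentTenantIdentifierResolver {

    @Override
    public String resolveCurrentTenantIdentifier() {
        return "SCHEMA_NAME"; // placeholder: derive from the current request, user or config
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}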
I have multiple consumers of my API who post similar data into it. My API needs to consume this data and persist it into Cassandra tables identified by consumer name, e.g. consumername_tablename.
My Spring Boot entity is annotated with @Table, which doesn't let me change the table name dynamically. Most recommendations online suggest it's not something we should try to change.
But in my scenario, identifying all consumers and creating tables in advance doesn't sound right. In future I want to be able to add consumers to my API seamlessly.
I want to use a variable passed in my API call as the prefix for my Cassandra table names. Is this something I can achieve?
For starters: you cannot change annotations without recompiling; they are baked into the compiled class file. This is not the right approach.
Why not put everything in one table and make the consumer part of the key? This should give you identical functionality without any of the hassle.
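A minimal sketch of that single-table design with Spring Data Cassandra (entity, table and column names here are made up):

import org.springframework.data.cassandra.core.cql.PrimaryKeyType;
import org.springframework.data.cassandra.core.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.core.mapping.Table;

// One shared table for all consumers; the consumer name is the partition key,
// so each consumer's rows live in their own partition.
@Table("consumer_events")
public class ConsumerEvent {

    @PrimaryKeyColumn(name = "consumer_name", ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    private String consumerName;

    @PrimaryKeyColumn(name = "event_id", ordinal = 1, type = PrimaryKeyType.CLUSTERED)
    private String eventId;

    private String payload;

    // getters and setters omitted
}

New consumers then require no schema changes; repository queries simply filter by consumer_name.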
I need to attach a listener to a table in the DB which should call a Spring Boot method once a CRUD operation is performed on the table (pre-listeners and post-listeners).
The entry can be made from any source.
How can I do that in Spring Boot?
If the entity can be created from any source - e.g. manual insert - this is something which is outside of the scope and context of your running application.
What you're describing is known as the CDC (change data capture) pattern.
To implement CDC in this case you need to use the instrumentation of the underlying database - for example triggers.
As I see this is tagged with MongoDB: triggers are not an option, as MongoDB doesn't have support for them.
If you are using MongoDB v3.6+ you can leverage the new Change Streams feature. This is the official example with Java.
Change streams allow applications to access real-time data changes without the complexity and risk of tailing the oplog. Applications can use change streams to subscribe to all data changes on a single collection, a database, or an entire deployment, and immediately react to them. Because change streams use the aggregation framework, applications can also filter for specific changes or transform the notifications at will.
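A minimal sketch with the synchronous MongoDB Java driver (database, collection and connection string are placeholders; change streams require a replica set):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class OrderChangeListener {

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017/?replicaSet=rs0")) {
            MongoCollection<Document> orders = client.getDatabase("shop").getCollection("orders");

            // Blocks and reacts to every change on the collection; fullDocument may be
            // null for updates/deletes unless the fullDocument option is configured.
            orders.watch().forEach(change ->
                    System.out.println(change.getOperationType() + ": " + change.getFullDocument()));
        }
    }
}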
If you are using earlier versions of MongoDB you can monitor the oplog or use tailable cursors with capped collections.
Another approach would be to look into a third-party solution that turns everything happening in the DB into event streams, for example Debezium.
This article explains how to call any program from a DB trigger.
Therefore, you can just create a Spring Boot Java app and have the trigger make the system call to your app.
A similar mechanism is also available in Oracle and other DBs.
We have a Spring + Hibernate application (using Spring 2, from AppFuse 1.9) which is in desperate need of an update to Spring 3. We're slowly working on that.
In the meantime, I'd like to take some of the load off our primary database server, and set up the read-only controllers (which just display information) to read from our database slaves.
More specifically, we have multiple database servers (master + slaves), and I'd like to be able to set up multiple database connections and then specify that controller 1 uses db1, and controllers 2 and 3 use db2.
How can we achieve this?
You should be able to do that with the AbstractRoutingDataSource class in Spring. This blog post should help you. You can wire a data source for each of your controllers.
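A compact sketch of that router, assuming a ThreadLocal key that an interceptor (or the read-only controllers themselves) sets before the query runs:

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    // "master" by default; read-only request handling switches it to "slave".
    private static final ThreadLocal<String> KEY = ThreadLocal.withInitial(() -> "master");

    public static void useSlave() { KEY.set("slave"); }

    public static void reset() { KEY.remove(); }

    @Override
    protected Object determineCurrentLookupKey() {
        return KEY.get();
    }
}

Register it as the application's DataSource with setTargetDataSources mapping "master" and "slave" to the real connection pools, and have a HandlerInterceptor mapped to the read-only controllers call useSlave() (and reset() afterwards). The key is consulted when the connection is obtained, so it has to be set before the transaction starts.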