Datasource changes to secondary at runtime if primary is offline - performance

I have to deal with the following scenario in a Spring application with an Oracle database:
The Spring application uses the primary database. In the meantime, the secondary database stores data replicated from the primary for disaster recovery.
The first part is already in place. What I have to implement now is:
When the primary database goes offline, the application should switch the connection to the secondary database.
The switch should be done programmatically. How can I achieve that without changing the code that currently exists? Is there any working solution (library)?
I am thinking about AbstractRoutingDataSource plus pinging the databases (e.g. every 5 seconds), but I'm not sure about this solution.

So, to summarize the issue: I was unable to use Oracle RAC (Real Application Clusters). If the switch has to be done programmatically, you can try the AbstractRoutingDataSource approach.
I implemented a timer that pings the current database every second (you can use a validation query and check whether you can read from the database; if not, we assume the connection is gone and switch the datasource). A sketch of this approach is shown below.
Thanks to that I was able to change the datasource at runtime whenever the current one went offline. More importantly, it was automatic.
On the other hand, there are disadvantages:
For a short time, users may see errors if the database has not been switched yet.
Some parts of the application may stop working if they are not properly guarded against losing the database connection.
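
A minimal sketch of that approach, assuming two Spring DataSource beans for the primary and secondary databases (the lookup keys and the 1-second schedule are illustrative, and @EnableScheduling must be active somewhere in the configuration):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.scheduling.annotation.Scheduled;

// Routing datasource that a scheduled health check flips between PRIMARY and SECONDARY.
public class FailoverRoutingDataSource extends AbstractRoutingDataSource {

    private final AtomicReference<String> current = new AtomicReference<>("PRIMARY");
    private final DataSource primary;

    public FailoverRoutingDataSource(DataSource primary, DataSource secondary) {
        this.primary = primary;
        Map<Object, Object> targets = new HashMap<>();
        targets.put("PRIMARY", primary);
        targets.put("SECONDARY", secondary);
        setTargetDataSources(targets);
        setDefaultTargetDataSource(primary);
    }

    @Override
    protected Object determineCurrentLookupKey() {
        return current.get();
    }

    // Pings the primary every second; keep the driver's connect timeout short,
    // otherwise a dead network can make this check hang.
    @Scheduled(fixedDelay = 1000)
    public void pingPrimary() {
        try (Connection c = primary.getConnection();
             Statement s = c.createStatement()) {
            s.execute("SELECT 1 FROM DUAL"); // Oracle validation query
            current.set("PRIMARY");          // primary answered: fail back
        } catch (SQLException e) {
            current.set("SECONDARY");        // primary unreachable: fail over
        }
    }
}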

Related

It takes more than 10 minutes to generate entities from a database in IntelliJ IDEA. How can I improve the speed?

I am using IntelliJ IDEA and JPA Buddy to generate entities from my database. However, every time I open the Entity from DB wizard, it takes a very long time. Is that normal, or is something wrong with my database, IntelliJ IDEA, or JPA Buddy?
My setup is:
Database: Oracle (~2000 tables)
IntelliJ IDEA: 2022.3.1
JPA Buddy: 2022.5.3
I have tried recreating the DB connection and invalidating caches in IntelliJ IDEA, with the same result.
This may happen due to a slow internet connection or a large number of tables in the database (probably your case; 2000 is a lot). Also, some database drivers do not show their best side in this regard. One way to speed up your development process is the "schema cache" option in JPA Buddy (1). With it, you can generate the data model snapshot once and then use its local copy.
Just don't forget to refresh it when the database changes (2).

How to change database connections without restarting the server when using AbstractRoutingDataSource

I am using AbstractRoutingDataSource to hold data sources whose definitions are loaded from the database during Spring Boot application startup.
The system can switch to the correct database while the program is running.
When end users manually change the database connection information from the UI (for example, changing a password every 6 months), the system needs to reload the data source information.
In testing, even after the system resets the target data sources, the old JDBC connections are still used. A sketch of a reload that avoids this is shown below.
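
One likely cause is that the old connection pool is never closed, so its pooled connections (opened with the old credentials) keep being handed out. A hedged sketch, assuming HikariCP pools (the lookup-key holder and the method names are illustrative):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class ReloadableRoutingDataSource extends AbstractRoutingDataSource {

    // Holder for the currently active lookup key (illustrative).
    public static final AtomicReference<Object> ACTIVE_KEY = new AtomicReference<>("DEFAULT");

    private final Map<Object, Object> targets = new HashMap<>();

    @Override
    protected Object determineCurrentLookupKey() {
        return ACTIVE_KEY.get();
    }

    // Re-registers a datasource after its connection info changed in the UI.
    public synchronized void replaceTarget(Object key, HikariConfig newConfig) {
        Object old = targets.put(key, new HikariDataSource(newConfig));
        setTargetDataSources(targets);
        afterPropertiesSet(); // rebuilds the resolved target map used for routing
        if (old instanceof HikariDataSource oldPool) {
            oldPool.close();  // evicts pooled connections created with the old password
        }
    }
}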

Make a J2EE application avoid updating the DB

I have a JBoss 6 application running both EJB and Spring code (some legacy involved in this decision). It should communicate to Oracle and PostgreSQL databases, on demand.
JPA is the way DB operations are done, no direct JDBC is involved.
I would like to do the following: without altering the business logic, to be able to "silence" database updates/deletes from my application, without breaking the flow with any exceptions.
My current thoughts are:
Set the JDBC driver as read-only from the deployment descriptor - this works only with PostgreSQL (the Oracle driver does not support it).
Make a read-only user at the RDBMS level - this would probably flood the application with errors.
Make all transactions roll back instead of committing - is this possible? (A sketch of one way to do this follows below.)
Make the entity manager never persist anything - set the FlushMode to MANUAL and make sure flush() never gets called - but commit() still flushes everything.
Is there any other concise approach to this?
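
For the third idea: with Spring-managed transactions this is possible by marking every transaction rollback-only before it completes. A hedged sketch using an aspect (it only covers the Spring side, not EJB; the flag and the names are illustrative; the transaction advice must be given higher precedence, e.g. @EnableTransactionManagement(order = 0) or <tx:annotation-driven order="0"/>, so this aspect runs inside it; and marking an inner, participating transaction can surface as an UnexpectedRollbackException at the outer commit, so apply it at the outermost level):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.transaction.interceptor.TransactionAspectSupport;

// When the flag is on, every @Transactional method's transaction is marked
// rollback-only, so writes are discarded at commit time without exceptions.
@Aspect
public class SilenceWritesAspect {

    private volatile boolean silenceWrites = true; // toggle at runtime (JMX, REST, ...)

    public void setSilenceWrites(boolean silenceWrites) { this.silenceWrites = silenceWrites; }

    @Around("@annotation(org.springframework.transaction.annotation.Transactional)")
    public Object rollBackInsteadOfCommit(ProceedingJoinPoint pjp) throws Throwable {
        Object result = pjp.proceed();
        if (silenceWrites) {
            TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
        }
        return result;
    }
}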
If you want to make sure the application works as in production, work on a replica of the database. Use a scheduler that overwrites the replica DB every night.
My request also includes the need for this behavior to be activated or deactivated at runtime.
The solution I found (currently for a proof of concept) is:
create a new user and grant it rights on the default schema's tables;
with this user, create views for each of the tables, with the same names (without the schema prefix);
create a trigger for each view that does nothing on insert, update, or delete, using INSTEAD OF;
create a data source and persistence unit for this user;
inject two entity managers and, at runtime, use the one that is needed (see the sketch below).
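
A hedged sketch of the last step (the unit names and the toggle are illustrative; the "silenced" unit is the one bound to the view-owning user, whose INSTEAD OF triggers swallow writes):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Picks between the normal persistence unit and the one whose writes
// are silently discarded by the INSTEAD OF triggers.
public class RoutingPersistence {

    @PersistenceContext(unitName = "default")
    private EntityManager readWriteEm;

    @PersistenceContext(unitName = "silenced") // unit created in the step above
    private EntityManager silencedEm;

    private volatile boolean silenceWrites; // toggled at runtime

    public void setSilenceWrites(boolean silenceWrites) { this.silenceWrites = silenceWrites; }

    public EntityManager em() {
        return silenceWrites ? silencedEm : readWriteEm;
    }
}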
Thanks for your help!

H2 Database multiple connections

I have the following issue:
Two instances of an application on two different systems should share a small database.
The main problem is that both systems can only exchange data through a network folder.
I don't have the possibility to set up a database server anywhere.
Is it possible to place an H2 database on the network folder and let both instances connect to it (also concurrently)?
I could connect both instances to the DB using the embedded mode if I disable file locking, right?
The instances can perform either READ or INSERT operations on the DB. Do I risk data corruption when using multiple concurrent embedded connections?
As the documentation says (http://h2database.com/html/features.html#auto_mixed_mode):
Multiple processes can access the same database without having to start the server manually. To do that, append ;AUTO_SERVER=TRUE to the database URL. You can use the same database URL independent of whether the database is already open or not. This feature doesn't work with in-memory databases.
// Application 1: the first process to open the database starts the server automatically.
DriverManager.getConnection("jdbc:h2:/data/test;AUTO_SERVER=TRUE");
// Application 2: subsequent processes connect to that server over TCP.
DriverManager.getConnection("jdbc:h2:/data/test;AUTO_SERVER=TRUE");
From H2 documentation:
It is also possible to open the database without file locking; in this case it is up to the application to protect the database files. Failing to do so will result in a corrupted database.
I think that if your application always uses the same configuration (a shared file database on a network folder), you need to create an application layer that manages concurrency; a sketch of one such guard is shown below.
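
For illustration, a hedged sketch of such a layer using an exclusive OS-level lock on a sidecar file next to the database files (the paths match the example above; note that java.nio file locks are advisory and unreliable on some network filesystems, so this is only a starting point, not a guarantee):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Serializes access from both instances: only the process holding the lock
// may touch the H2 files while H2's own file locking is disabled.
public class GuardedH2Access {

    private static final Path LOCK_FILE = Path.of("/data/test.guard");

    public interface DbWork<T> { T run(Connection con) throws SQLException; }

    public static <T> T withDatabase(DbWork<T> work) throws IOException, SQLException {
        try (FileChannel ch = FileChannel.open(LOCK_FILE,
                 StandardOpenOption.CREATE, StandardOpenOption.WRITE);
             FileLock lock = ch.lock(); // blocks until the other instance finishes
             Connection con = DriverManager.getConnection(
                 "jdbc:h2:/data/test;FILE_LOCK=NO")) { // H2's own locking disabled
            return work.run(con);
        }
    }
}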

What should the approach be?

I will try to be clear; I am out of ideas on this problem, even though it sounds like a classic one.
My application runs on a WebLogic 10.3.3 application server, and the database is Oracle Database 11g. There is a table in the DB, let's say "user", and a column in this table, let's say "columnA". The table is updated by some module of the application.
What I want: when the value of the column is "abc", I have to show an alert on a console (identified by IP). {The IP can be retrieved from the DB, as it is configured there; it will be another Linux system, not the machine where the Oracle database is installed.} The table is updated continuously by the application module. Please tell me where I should start and what I should read; I cannot work out what the approach should be. Any help is much appreciated.
A trigger on the table can call UTL_HTTP to communicate with another machine (e.g. call a RESTful API); a sketch of the receiving side follows below.
The architectural questions are:
This will happen PRIOR to the commit, so you may get false alerts if a change is rolled back.
If you wait for a response, it will slow the system down.
What do you do if you get a non-standard response (e.g. the other server isn't available)?
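
Purely for illustration, a minimal sketch of the receiving endpoint on the console machine that the trigger's UTL_HTTP call could POST to (the port and path are assumptions):

import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Tiny listener on the alert machine; the Oracle trigger POSTs the alert here.
public class AlertListener {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/alert", exchange -> {
            String body = new String(exchange.getRequestBody().readAllBytes(),
                                     StandardCharsets.UTF_8);
            System.out.println("ALERT from DB: " + body); // show it on the console
            byte[] ok = "received".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, ok.length); // answer fast: the trigger is waiting
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(ok);
            }
        });
        server.start();
    }
}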
