Make a J2EE application avoid updating the DB - Oracle

I have a JBoss 6 application running both EJB and Spring code (some legacy is involved in this decision). It needs to communicate with Oracle and PostgreSQL databases, on demand.
All DB operations go through JPA; no direct JDBC is involved.
I would like to be able to "silence" database updates/deletes from my application without altering the business logic and without breaking the flow with exceptions.
My current thoughts are:
Set the JDBC driver as read-only from the deployment descriptor - this works only with PostgreSQL (the Oracle driver does not support it)
Create a read-only user at the RDBMS level - but this would flood the application with errors on every write
Make all transactions roll back instead of committing - is this possible?
Make the entity manager never persist anything - set the FlushMode to MANUAL and make sure flush() never gets called - but commit() still flushes everything.
Is there any other concise approach to this?

If you want to make sure the application behaves as it does in production, work against a replica of the database, and use a nightly scheduled job that refreshes the replica.

My request also includes the need for this behavior to be activated or deactivated at runtime.
The solution I found (currently for a proof-of-concept) is:
create a new user and grant it rights on the default schema's tables;
with this user, create a view for each table, with the same name (without the schema prefix);
for each view, create an INSTEAD OF trigger that does nothing on insert, update, or delete;
create a data source and persistence unit for this user;
inject both entity managers and, at runtime, use whichever one is needed (a sketch follows below).
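A minimal sketch of the last step, assuming two persistence units named mainPU and silentPU (hypothetical names), the second one bound to the data source of the view-only user:

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class SwitchableRepository {

    @PersistenceContext(unitName = "mainPU")
    private EntityManager mainEm;

    @PersistenceContext(unitName = "silentPU")
    private EntityManager silentEm;

    // Toggled at runtime, e.g. from a JMX bean or an admin screen.
    private static volatile boolean silenced = false;

    public static void setSilenced(boolean value) {
        silenced = value;
    }

    // All DAO code obtains the entity manager through this method.
    private EntityManager em() {
        return silenced ? silentEm : mainEm;
    }

    public void save(Object entity) {
        // When silenced, the write goes through the views and the
        // INSTEAD OF triggers swallow it without raising an exception.
        em().merge(entity);
    }
}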
Thanks for your help!


When and how should SpringData + JPA schema and DB initialization be used?

I'm working on the simple task of adding a new table to an existing SQL DB and wiring it into a Spring Boot API with Spring Data.
I would typically start by defining the DB table directly, creating the PK and FKs, etc., and then creating the Java bean that represents it, but I am curious about using the Spring Data initialization feature.
I am wondering when and where Spring Data + JPA's schema generation and DB initialization may be useful. There are many tutorials on how it can be implemented, but the when and why are not as clear to me.
For example:
Should I convert my existing lower-environment DBs (hand coded) to be initialized automatically? If so, by dropping the existing tables and allowing the app to execute DDL?
Should this feature be relied on at all in a production environment?
Should generation or initialization be run only once? Some tutorials mention this process running continually, but why would you choose to lose data that often?
What is the purpose of the drop-and-create JPA action? Why would you ever want to drop tables? How are things like UAT test data handled?
My two cents on these topics:
Most people may say that you should not rely on automated database creation because the database is a core part of your application and you might want to take over the task so that you know for sure what is really happening. I tend to agree with them. Unless it is a POC or something not production critical, I would prefer to define the database details myself.
In my opinion, no.
This might be OK in non-production environments, or during early and exploratory development. Definitely not in production.
On a POC or during early and exploratory development this is OK. In any other case I don't see this being useful. Test data might also be part of the initial setup of the database; Spring allows you to do that by defining an SQL script (e.g. a data.sql on the classpath in Spring Boot) that inserts data into the database on startup.
Bottom line: in my opinion you should not rely on this feature in production. Instead, you might want to take a look at Liquibase or Flyway (there is a nice article comparing both: https://dzone.com/articles/flyway-vs-liquibase), which are fully fledged database migration tools that you can rely on even in production.
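To give a flavour of the migration-tool approach (a minimal sketch, assuming Flyway with Spring Boot's default settings): you place versioned SQL scripts under db/migration on the classpath, and each script is applied exactly once, in version order, at startup:

src/main/resources/db/migration/V1__create_person_table.sql
src/main/resources/db/migration/V2__add_email_column.sql

The file names here are hypothetical; Flyway records each applied version in its own history table, so restarting the application never re-runs a script or drops data.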
My opinion in short:
No, don't rely on auto DDL. It can be a handy feature in development but should never be used in production. And be careful: it will change your database whenever you change something in your entities.
But, and this is why I answer, there is a possibility to have Hibernate write the SQL to a file instead of executing it. This gives you the ability to make use of the feature while still controlling how your database is changed. I frequently use this to generate scripts that I then use as blueprints for my own Liquibase migration scripts.
This way you can implement an entity in the code and run the application, which generates a Hibernate SQL file containing the CREATE TABLE statement for your newly added entity. Now you don't have to write all those column names and types for the database table yourself.
To achieve this, add the following properties to your application.properties:
spring.jpa.hibernate.ddl-auto=none
spring.jpa.properties.javax.persistence.schema-generation.scripts.create-target=build/generated_scripts/hibernate_schema.sql
spring.jpa.properties.javax.persistence.schema-generation.scripts.action=create
This will generate the SQL script in your project folder at build/generated_scripts/hibernate_schema.sql.
I know this is not exactly what you were asking for, but I thought this could be a nice hint on how to use auto DDL in a safer way.

Datasource changes to secondary at runtime if primary is offline

I have to deal with the following scenario for a Spring application with an Oracle database:
The Spring application uses the primary database. Meanwhile, the secondary database stores data replicated from the primary for disaster recovery.
The first part (replication) is already in place. Now I have to implement the failover:
when the primary database goes offline, the application should switch its connection to the secondary database.
The implementation should be programmatic. How can I achieve that without changing the code that currently exists? Is there any working solution (library)?
I am thinking about AbstractRoutingDataSource and pinging the databases (e.g. every 5 seconds), but I'm not sure about this solution.
So, to summarize the issue: I was unable to use Oracle RAC (Real Application Clusters). If the implementation has to be programmatic, you can try the AbstractRoutingDataSource approach.
I implemented a timer that pings the current database every second (you can use a validation query to check whether you can read from the database; if not, assume the connection is gone and switch the data source).
Thanks to that I was able to change the data source at runtime when the current one went offline. More importantly, the switch was automatic.
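A minimal sketch of that idea, assuming Spring's AbstractRoutingDataSource and @Scheduled support, with two target data sources registered under the hypothetical keys PRIMARY and SECONDARY (this is not the exact original code):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.concurrent.atomic.AtomicReference;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.scheduling.annotation.Scheduled;

public class FailoverRoutingDataSource extends AbstractRoutingDataSource {

    private final AtomicReference<String> current = new AtomicReference<>("PRIMARY");

    @Override
    protected Object determineCurrentLookupKey() {
        return current.get();
    }

    // Requires @EnableScheduling and this class registered as a Spring bean.
    @Scheduled(fixedDelay = 1000)
    public void ping() {
        DataSource ds = (DataSource) getResolvedDataSources().get(current.get());
        try (Connection c = ds.getConnection();
             Statement s = c.createStatement()) {
            s.execute("SELECT 1 FROM DUAL"); // Oracle validation query
        } catch (SQLException e) {
            current.set("SECONDARY"); // the primary stopped answering: switch over
        }
    }
}

The routing data source is initialized with setTargetDataSources(...) mapping those two keys to the real DataSource beans; every connection request then goes to whichever key determineCurrentLookupKey() returns.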
On the other hand, there are disadvantages:
For a short time, users can see errors if the database has not been switched yet.
Some parts of the application may stop working if they are not properly guarded against losing the database connection.

DB2 Exclusive Lock Not Released

On a heavily used DB2 table, accessed by distributed Java desktop applications via JDBC, I'm running into the following scenario several times a day:
Client A wants to INSERT new records and gets an IX lock on the table and an X lock on each new row;
Other clients want to perform a SELECT and are granted an IS lock on the table, but their applications get stuck;
Client A continues to work, but its INSERT and UPDATE statements are not committed, the locks are not released, and it keeps accumulating X locks on each new row;
Client A exits and its work is not committed. The other clients finally get their SELECT result sets.
This used to work well, and it still does most of the time, but the lock situations are getting more and more frequent.
Auto-commit is ON.
There are no exceptions thrown or errors detected in the logs.
DB2 9.5 / JDBC Driver 9.1 (JDBC 3 specification)
If the JDBC applications are not performing a COMMIT, then the locks will persist until a rollback or commit. If an application quits with uncommitted inserts, a rollback will happen in all recent versions of Db2. This is expected behaviour for Db2 on Linux/Unix/Windows.
If the JDBC application is failing to commit, then it is broken or misconfigured, so you must get to the root cause of that if you seek a permanent solution.
If the other clients wish to ignore the locks on inserted rows, they should choose the correct isolation level, and you can configure Db2 to skip uncommitted inserts. See the DB2_SKIPINSERTED registry variable in the Db2 documentation.
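On the reader side, that combination might look like the following minimal sketch (plain JDBC; the ORDERS table is a hypothetical stand-in, and DB2_SKIPINSERTED=ON is assumed to be set on the server via db2set):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SkipInsertedReader {

    // Run the SELECT under Cursor Stability (Db2's mapping of
    // TRANSACTION_READ_COMMITTED); with DB2_SKIPINSERTED=ON the scan
    // skips uncommitted inserted rows instead of blocking on their X locks.
    static void scan(Connection conn) throws SQLException {
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT ID FROM ORDERS")) {
            while (rs.next()) {
                System.out.println(rs.getLong("ID"));
            }
        }
    }
}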
It turns out that sometimes, and I don't know why, auto-commit gets turned off on a random single instance of the application.
The following validation seems to work around the problem (but does not address the root of it):
if (!conn.getAutoCommit()) {
conn.commit();
}

Tibco JDBC Update dry run

Is it possible to do a dry run of the JDBC Update activities in Tibco? Meaning that I want to run those activities without actually updating the database.
Even running in a test mode, if that's possible, would be good.
The only option I see is to keep a copy of the targeted tables in a separate schema, duplicate the data, and temporarily point the JDBC connection of your activity at this secondary, temporary/test database.
Since you can use global variables, no code is changed between test and delivery (a typical goal), and you can compare both DB tables to see if the update WOULD HAVE run well...
I think I found a way. I haven't tested it yet, but theoretically it should work.
The solution is to install P6Spy and create a custom module that throws an exception when an INSERT/UPDATE is about to be executed.
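The gist of such a module, sketched here with a hand-rolled guard rather than P6Spy's actual extension API (not shown in the original answer), is simply to inspect each statement before it reaches the driver and refuse writes:

import java.sql.SQLException;

public class DryRunGuard {

    // Call this before handing any SQL statement to the driver;
    // it rejects data-modifying statements so the database is never touched.
    public static void check(String sql) throws SQLException {
        String s = sql.trim().toUpperCase();
        if (s.startsWith("INSERT") || s.startsWith("UPDATE") || s.startsWith("DELETE")) {
            throw new SQLException("Dry run: write statements are disabled");
        }
    }
}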
You could wrap the activity into a transaction group and roll back whenever you only want to test the statement. Otherwise just exit the group normally so the data gets committed.
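At the JDBC level, the same rollback-for-a-dry-run idea looks like this minimal sketch (the ORDERS update is a hypothetical example):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class RollbackDryRun {

    static void dryRun(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps =
                 conn.prepareStatement("UPDATE ORDERS SET STATUS = ? WHERE STATUS = ?")) {
            ps.setString(1, "SHIPPED");
            ps.setString(2, "PACKED");
            int rows = ps.executeUpdate();
            System.out.println(rows + " rows would have been updated");
        } finally {
            conn.rollback(); // dry run: nothing is persisted
        }
    }
}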

Oracle DML with 2 phase commit not materialized

Today I was hit by a 2PC that reported success but was not materialized in Oracle. The other participant, MSMQ, materialized fine.
The problem is that I did not get an exception in the application (using C# ODP.NET). Later I found the in-doubt transactions in sys.dba_2pc_pending.
Could I somehow have detected this in my application?
EDIT: This is not about getting 2PC to work. It did work, for more than a year, until one day some rows were missing. Please read about in-doubt Oracle transactions link1 and pending transactions link2.
My first thought is to make sure that distributed transaction processing is enabled on the Oracle listener.
In my case no error was thrown. We use RAC, and the service did not have distributed transaction processing enabled. In a stand-alone system I'm not sure what this would do, but in the case of RAC it serves the purpose of identifying the primary node for handling the transaction. Without it, a second operation that was supposed to be in the same transaction just ended up starting a new transaction and deadlocked with the first.
I have also had significant amounts of time go by without an issue. By luck (there's probably more to it), it just so happened that transactions were never split over the nodes. But then a year later the same symptoms crept up, and in all cases either the service didn't have the DTP flag checked or the wrong service name (one without DTP) was being used.
From: http://docs.oracle.com/cd/B19306_01/rac.102/b14197/hafeats.htm#BABBBCFG

Enabling Distributed Transaction Processing for Services
For services that you are going to use for distributed transaction processing, create the service using Enterprise Manager, DBCA, or SRVCTL and define only one instance as the preferred instance. You can have as many AVAILABLE instances as you want. For example, the following SRVCTL command creates a singleton service for database crm, xa_01.service.us.oracle.com, whose preferred instance is RAC01:

srvctl add service -d crm -s xa_01.service.us.oracle.com -r RAC01 -a RAC02,RAC03

Then mark the service for distributed transaction processing by setting the DTP parameter to TRUE; the default is FALSE. Enterprise Manager enables you to set this parameter on the Cluster Managed Database Services: Create Service or Modify Service page. You can also use the DBMS_SERVICE package to modify the DTP property of the singleton service as follows:

EXECUTE DBMS_SERVICE.MODIFY_SERVICE(service_name => 'xa_01.service.us.oracle.com', DTP => TRUE);
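As for detecting this in the application: one option is to poll sys.dba_2pc_pending from a monitoring job. A minimal sketch in Java (the same query works from ODP.NET), assuming the monitoring user has SELECT privilege on that view:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InDoubtCheck {

    // Report distributed transactions left in doubt by a failed 2PC.
    static void report(Connection conn) throws SQLException {
        String sql = "SELECT local_tran_id, state, fail_time FROM sys.dba_2pc_pending";
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("in-doubt tx %s, state=%s, since %s%n",
                        rs.getString(1), rs.getString(2), rs.getTimestamp(3));
            }
        }
    }
}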
