Oracle DML with two-phase commit not materialized

Today I was hit by a successful 2PC that wasn't materialized in Oracle. The other participant was MSMQ, which materialized fine.
The problem is that I did not get an exception in the application (using C# with ODP.NET). Later I found the in-doubt transactions in sys.dba_2pc_pending.
Could I somehow have detected this in my application?
EDIT: This is not about getting 2PC to work. It did work, and had for more than a year, until one day some rows were missing. Please read about in-doubt Oracle transactions link1 and pending transactions link2
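For reference, this is roughly how in-doubt transactions can be inspected and, once their outcome is known, manually resolved from a DBA session (the transaction ID below is a placeholder; columns are as documented for DBA_2PC_PENDING, so verify against your Oracle version):

```sql
-- List in-doubt distributed transactions (run as a DBA)
SELECT local_tran_id, state, mixed, advice, fail_time
FROM   sys.dba_2pc_pending;

-- Force the outcome of a specific in-doubt transaction once it is
-- known; '1.21.17' is a placeholder local_tran_id. Use
-- ROLLBACK FORCE instead if the transaction must be undone.
COMMIT FORCE '1.21.17';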

My first thought is to make sure that distributed transaction processing is enabled on the Oracle service.
In my case no error was thrown. We use RAC, and the service did not have distributed transaction processing enabled. In a stand-alone system I'm not sure what this would do, but in the case of RAC it serves the purpose of identifying the primary node for handling the transaction. Without it, a second operation that was supposed to be in the same transaction just ended up starting a new transaction and deadlocked with the first.
I have also had significant amounts of time go by without an issue. By luck (there's probably more to it), it just so happened that transactions were never split over the nodes. But then a year later the same symptoms crept up, and in every case either the service didn't have the DTP flag checked or the wrong service name (one without DTP) was being used.
From: http://docs.oracle.com/cd/B19306_01/rac.102/b14197/hafeats.htm#BABBBCFG

Enabling Distributed Transaction Processing for Services: For services that you are going to use for distributed transaction processing, create the service using Enterprise Manager, DBCA, or SRVCTL and define only one instance as the preferred instance. You can have as many AVAILABLE instances as you want. For example, the following SRVCTL command creates a singleton service for database crm, xa_01.service.us.oracle.com, whose preferred instance is RAC01:

srvctl add service -d crm -s xa_01.service.us.oracle.com -r RAC01 -a RAC02,RAC03

Then mark the service for distributed transaction processing by setting the DTP parameter to TRUE; the default is FALSE. Enterprise Manager enables you to set this parameter on the Cluster Managed Database Services: Create Service or Modify Service page. You can also use the DBMS_SERVICE package to modify the DTP property of the singleton service as follows:

EXECUTE DBMS_SERVICE.MODIFY_SERVICE(service_name => 'xa_01.service.us.oracle.com', DTP => TRUE);
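To check whether an existing service already has the flag set, a query along these lines should work (the DTP column is part of the DBA_SERVICES view on 10g/11g; verify the column name against your Oracle version):

```sql
-- Show which registered services are marked for distributed
-- transaction processing (DTP = 'YES'/'NO')
SELECT name, dtp
FROM   dba_services;
```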

Related

How can I determine exactly how much downtime is needed during the scheduled maintenance window in Autonomous Database?

How can I determine exactly how much downtime is needed during the scheduled maintenance window in Autonomous Database? Also, how can I find out about what's included in a given patch?
As described in the doc, your database remains available during the maintenance window. Your existing database connections may get disconnected briefly; however, you can immediately reconnect and continue using your database. If your application uses Oracle Transparent Application Continuity (TAC), you avoid these brief disconnects altogether.
In order to see what bug fixes are delivered in these maintenance windows, you can run the following query:
-- To view patch information for all available patches
SELECT * FROM DBA_CLOUD_PATCH_INFO;
-- For patch ADBS-21.7.1.1
SELECT * FROM DBA_CLOUD_PATCH_INFO WHERE PATCH_VERSION = 'ADBS-21.7.1.1';
Disclaimer: I’m a Product Manager at Oracle.

Datasource changes to secondary at runtime if primary is offline

I have to deal with the following scenario for spring application with Oracle database:
The Spring application uses the primary database. In the meantime, the secondary database stores data for disaster recovery (replicated from the primary).
The first step is already in place. Now I have to implement the following:
When the primary database goes offline, the application should switch the connection to the secondary database.
The implementation should be programmatic. How can I achieve that without changing the code that currently exists? Is there any working solution (library)?
I'm thinking about AbstractRoutingDataSource and pinging the databases (e.g. every 5 seconds), but I'm not sure about this solution.
So, to summarize the issue: I was unable to use Oracle RAC (Real Application Cluster). If the implementation has to be programmatic, you can try the AbstractRoutingDataSource approach.
I implemented a timer that pings the current database every second (you can use a validation query and check whether you can read from the database; if not, we assume the connection is gone and switch the datasource).
Thanks to that I was able to change the datasource at runtime when the current one went offline. More importantly, it was automatic.
On the other hand, there are disadvantages:
For a short time, users may see errors if the datasource has not been switched yet.
Some parts of the application may stop working if they are not properly secured against the loss of the database connection.
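The validation query mentioned above can be as trivial as a side-effect-free statement; on Oracle the conventional choice is:

```sql
-- Lightweight liveness probe commonly used as a JDBC
-- validation query on Oracle
SELECT 1 FROM DUAL;
```

If the query throws or times out, the primary is treated as down and the routing key is flipped to the secondary datasource.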

Make a J2EE application avoid updating the DB

I have a JBoss 6 application running both EJB and Spring code (some legacy involved in this decision). It should communicate to Oracle and PostgreSQL databases, on demand.
JPA is the way DB operations are done, no direct JDBC is involved.
I would like to do the following: without altering the business logic, to be able to "silence" database updates/deletes from my application, without breaking the flow with any exceptions.
My current thoughts are:
Set the JDBC connection to read-only from the deployment descriptor - this works only with PostgreSQL (the Oracle driver does not support it)
Make a read-only user at the RDBMS level - this would likely flood the application with errors
Make all transactions roll back instead of committing - is this possible?
Make the entity manager never persist anything - set the FlushMode to MANUAL and make sure flush() never gets called - but commit() still flushes everything.
Is there any other concise approach to this?
If you want to make sure the application works as in production, work on a replica of the database. Use a scheduler that overwrites the replica DB every night.
My request also includes the need for this behavior to be activated or deactivated at runtime.
The solution I found (currently for a proof-of-concept) is:
create a new user and grant it rights on the default schema's tables;
with this user, create a view for each of the tables, with the same name (without the schema prefix);
create an INSTEAD OF trigger for each view that does nothing on insert, update, or delete;
create a data source and persistence unit for this user;
inject two entity managers, and at runtime use the one that is needed.
Thanks for your help!
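The view-plus-trigger part of the steps above can be sketched as follows (the schema APP, table ORDERS, and trigger names are made up for illustration):

```sql
-- In the new user's schema: a view shadowing the real table,
-- same name, no schema prefix needed by the application
CREATE OR REPLACE VIEW orders AS
  SELECT * FROM app.orders;

-- INSTEAD OF trigger that silently swallows all writes
CREATE OR REPLACE TRIGGER orders_noop
  INSTEAD OF INSERT OR UPDATE OR DELETE ON orders
BEGIN
  NULL;  -- intentionally ignore the DML
END;
/
```

Reads through the view still return live data, while inserts, updates, and deletes complete without error and without touching the underlying table.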

Select listener when using DBCA on Oracle RAC

I have a 2-node Oracle cluster (RAC). I created a listener and a database.
I want to create another listener and create a new database on the same cluster using the secondary listener for the new database.
In a single-node setup where I define more than one listener on the machine, when I use DBCA to create the database, a "Select Listener" page appears and I can choose the listener.
I created a new listener with the grid user for cluster-wide use, but when I use DBCA to create a database, the listener selection page does not appear.
Can anyone help me choose the secondary listener for the new database?
Technically, it doesn't make much sense to have two databases on the same set of RAC servers. Do not confuse the terminology of a database or database instance with that of a logical schema. I have seen SQL Server developers confuse schemas with Oracle's notion of a database.
Whether it is possible is a different question altogether; my concern is whether what you want to achieve is scalable and meaningful.
Could you explain more about your requirement and what exactly makes you want to do this? I would be happy to guide you further. As of now, it doesn't make much sense.
Update: to configure multiple listeners for your database using DBCA, please read
http://docs.oracle.com/cd/E11882_01/install.112/e48195/undrstnd.htm#BEIDJJAG
You said you have already created two listeners, but they were not listed in DBCA. Please check whether listener.ora has entries for both. If not, create a new unique entry in listener.ora and try DBCA again.
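For reference, an additional listener entry in listener.ora looks roughly like this (the listener name, host, and port below are placeholders):

```
LISTENER2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac-node1)(PORT = 1522))
    )
  )
```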

Oracle ALTER SESSION ADVISE COMMIT?

My app recovers automatically from failures. I test it as follows:
Start app
In the middle of processing, kill the application server host (shutdown -r -f)
On host reboot, application server restarts (as a windows service)
Application restarts
Application tries to process, but is blocked by an incomplete two-phase commit transaction in the Oracle DB from the previous session.
Somewhere between 10 and 30 minutes later the DB resolves the prior transaction and processing continues OK.
I need it to continue processing faster than this. My DBA advises that I should prefix my statement with
ALTER SESSION ADVISE COMMIT;
But he can't give me guarantees or details about the potential for data loss in doing this.
Luckily the statement in question simply updates a datetime column to SYSDATE every second or so, so if there were any data corruption it would last less than a second before being overwritten.
But, to my question: what exactly does the statement above do? How does Oracle resolve data synchronisation issues when it is used?
Can you clarify the role of the 'local' and 'remote' databases in your scenario?
Generally a multi-DB transaction does the following:
Starts the transaction
Makes a change on one database
Makes a change on the other database
Gets the other database to 'promise to commit'
Commits locally
Gets the remote DB to commit
In-doubt transactions happen if step 4 is completed and then something fails. The general practice is to get the remote database back up and confirm whether it committed. If so, step 5 goes ahead. If the remote component of the transaction can't be committed, the local component is rolled back.
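In Oracle SQL, the pattern above, driven over a database link, might look like this (the table names and the link name remote_db are illustrative):

```sql
-- One transaction spanning a local table and a table reached via
-- a database link; Oracle runs the two-phase commit transparently
-- when COMMIT is issued.
INSERT INTO local_orders (id) VALUES (1);
INSERT INTO orders@remote_db (id) VALUES (1);
COMMIT;  -- prepare on remote_db first, then commit both sides
```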
Your description seems to refer to an app server failure which is a different kettle of fish. In your case, I think the scenario is as follows :
App server takes a connection and starts a transaction
App server dies without committing
App server restarts and make a new database connection
App server starts a new transaction on the new connection
New transaction gets 'stuck' waiting for a lock held by the old connection/transaction
After 20 minutes, dead connection is terminated and transaction rolled back
New transaction then continues
In which case the solution is to kill off the old connection more quickly, with a shorter timeout (e.g. SQLNET.EXPIRE_TIME in the sqlnet.ora on the server) or a manual ALTER SYSTEM KILL SESSION.
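Both remedies are short; a sketch follows (the username, sid, and serial# values are placeholders you would take from V$SESSION, and EXPIRE_TIME is in minutes):

```sql
-- Find the dead session, then terminate it manually
SELECT sid, serial#, status
FROM   v$session
WHERE  username = 'APP_USER';

ALTER SYSTEM KILL SESSION '123,4567';
```

```
# sqlnet.ora on the server: probe clients every minute so dead
# connections are detected and cleaned up quickly
SQLNET.EXPIRE_TIME = 1
```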
