Is it possible for a JDBC application to receive Fast Application Notification (FAN) events from an Oracle Data Guard broker when a SWITCHOVER happens from a primary to a secondary database, so that the application can reconnect, without Oracle RAC/clusterware? That is, I have just a single-instance database on the primary site and a similar setup on the secondary site, and I want my Java-based application to detect the change and reconnect in case of a FAILOVER/SWITCHOVER.
Based on what I know about FAN, it depends on the Oracle Notification Service (ONS), which indirectly means Oracle RAC/Grid. Is this understanding correct? Oracle Data Guard, by itself, doesn't require either Oracle RAC or Oracle Grid.
If FAN is not available, what are the options for an application to get the connection to the new primary without requiring a restart?
You need either Oracle Grid or Oracle RAC. FAN cannot be configured with a simple Oracle Data Guard setup of single-instance, non-RAC databases.
Client failover
There are two things that we need to consider for client failover:
Detecting database failover
Reconnecting with the new primary
Detecting database failover
FAN mainly helps with the first point. Without FAN, the application has to rely on database connection errors to detect that the database has failed (or is no longer the primary database). This could happen in three different ways:
While establishing a new connection, the application will fail to connect to a failed database. If it was a SWITCHOVER, the application will also fail to establish a connection to the standby database with (plain) Oracle Data Guard. For Active Oracle Data Guard, it will fail to open a connection unless it's in read-only mode.
An existing connection that has already been closed from the database side will throw an error. The application needs to catch this error and respond accordingly.
If the connection is still considered valid (for some reason) but the database is no longer able to respond (as it has failed), the application has to rely on TCP/IP timeouts and OS-level handling of sockets. For this, Oracle recommends tuning the timeouts at the kernel level.
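The detection step above can be sketched as a small error classifier. This is a minimal sketch: the Oracle driver flags many outages as `SQLRecoverableException`, and the ORA error codes listed are illustrative, not an exhaustive list (12541 = no listener, 3113 = end-of-file on channel, 1033 = instance starting/shutting down, 12514 = service not registered).

```java
import java.sql.SQLException;
import java.sql.SQLRecoverableException;

public class FailoverErrors {
    // Illustrative ORA- codes that typically mean "database gone/not primary",
    // not an exhaustive list.
    private static final int[] CONNECTION_ERROR_CODES = {12541, 3113, 1033, 12514};

    // Returns true when the error looks like a lost/failed database rather
    // than an ordinary application error (bad SQL, constraint violation, ...).
    public static boolean isConnectionFailure(SQLException e) {
        if (e instanceof SQLRecoverableException) {
            return true; // the Oracle driver already flags these as recoverable
        }
        for (int code : CONNECTION_ERROR_CODES) {
            if (e.getErrorCode() == code) {
                return true;
            }
        }
        return false;
    }
}
```

Errors that pass this check are the ones after which the application should discard the connection and attempt a fresh connect, rather than report the error to the user.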
Once you detect connection failure, your application should be capable of creating a new connection to the database. This connection should be established with the new primary, as explained next.
Reconnecting with the new primary
Whether you use FAN or not, you can specify an address list (instead of a single database) in your connection string. This allows you to use a single connection string for multiple databases and can help you with Oracle Data Guard failover scenarios as well. A sample connection string for JDBC with multiple addresses is given below:
jdbc:oracle:thin:<userid>/<pwd>@(DESCRIPTION_LIST=
(LOAD_BALANCE=off)(FAILOVER=on)
(DESCRIPTION=
(CONNECT_TIMEOUT=6)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=2)
(ADDRESS_LIST=
(LOAD_BALANCE=on)
(ADDRESS=(PROTOCOL=TCP)(HOST=<primary-host-name>)(PORT=<port>))
)
(CONNECT_DATA=(SERVICE_NAME=<dbServiceNameOnPrimary>))
)
(DESCRIPTION=
(CONNECT_TIMEOUT=6)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=2)
(ADDRESS_LIST=
(LOAD_BALANCE=on)
(ADDRESS=(PROTOCOL=TCP)(HOST=<standby-host-name>)(PORT=<port>))
)
(CONNECT_DATA=(SERVICE_NAME=<dbServiceNameOnStandby>))
))
Source: Client Failover Best Practices for Highly Available Oracle Databases (pdf)
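Paired with such an address list, the application-side reconnect logic can be a bounded retry loop around the connect call: the address list takes care of trying both sites, and the loop rides out the window during which neither database accepts connections. The helper below is a hedged sketch; the method name, attempt count, and pause are assumptions, not from the cited paper.

```java
import java.util.concurrent.Callable;

public class ReconnectingClient {
    // Retries the given connection attempt up to maxAttempts times,
    // pausing between attempts to ride out the switchover window.
    // Typical use: connectWithRetry(() -> DriverManager.getConnection(url), 10, 3000)
    // where url is a multi-address DESCRIPTION_LIST string as shown above.
    public static <T> T connectWithRetry(Callable<T> connect, int maxAttempts, long pauseMillis)
            throws Exception {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return connect.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(pauseMillis); // give the switchover time to complete
                }
            }
        }
        throw last; // all attempts failed: surface the last error
    }
}
```

In a real application you would catch only connection-type failures (see the error classification discussed earlier) and let genuine application errors propagate immediately.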
On my Informatica server PC, in Informatica Administration I created a repository service giving Oracle database information. In Informatica I connect to this repository, and there I want to create a mapping importing a table from a remote SQL Server PC (on my home network domain) and then create a workflow to put the data into an Oracle target table.
Using the ODBC admin console I created and tested the connection, and I am also able to telnet the linked SQL Server host and port.
Within Informatica I created a relational connection for SQL Server, and when I run the workflow I get the error: reason (14007) failed to create and initiate OLE DB instance, and a database driver error: failed to connect to the database using the user I use in SSMS for Windows authentication, and connection string ().
First of all, I would like to know if I am doing something wrong in connecting to a repository with Oracle database information and then using a SQL Server table on a remote PC. Do I have to create another repository for SQL Server and use SQL Server tables there, or can I mix them? Secondly, I would like to know how to create a relational connection object in Informatica for my linked SQL Server so that it matches the relational connection created with the ODBC admin console. Last but not least, I would like to understand why it reports that I left the connection string empty, when I cannot see a place to put one when creating the relational connection object.
I might not be able to solve the problem completely, but here are a few remarks that might be helpful:
1. The PowerCenter Repository database is where PowerCenter stores all the metadata about the processes you create. It may be Oracle - that's perfectly fine. As it is not related to your data sources or targets, you do not need to create another one for different sources/targets. One is enough for all of them.
2. Using PowerCenter Workflow Manager, create the appropriate connections to all the systems you need. Here you create the connections (ODBC and others) that the Integration Service will use to actually connect to your data sources and targets.
3. Make sure the ODBC / other data sources are specified on the Integration Service. It is the IS that will run the process and connect to the systems specified in the process with the defined connections.
4. When you build the mappings, you create them in a client app (Mapping Designer), and you can connect to DB engines to create source and target definitions. Note that in this case you use the connection (e.g. an ODBC data source) defined on the client. Once you try to actually run the workflow with the given mapping, it is executed on the IS (mentioned above), where the appropriate connections need to be defined - and that's completely separate.
5. When editing a session in a workflow, for each source and target you need to pick a connection defined in the Informatica Repository, created as described in point 2 above (or use a variable to indicate one - but that's another story).
So the error you mention seems to be related to the connection created in Workflow Manager - it probably does not specify the connection string that should refer to the data source defined on the IS.
Currently, we are using Oracle 8i and we are working to decommission it.
I need to find out which other databases are connecting to our database using a database link.
Please note, I am not looking for connections from our database to other databases. I already got that information using ALL_DB_LINKS.
If you audit connections to the database or look at the listener log, that will tell you the machines that are connecting to the database and the application that is connecting (that information comes from the client, so it could be spoofed, but I'm assuming no one is actively trying to hide information from you). That should allow you to determine which connections are coming via database links. It may not tell you which database on a particular server is connecting if there are multiple databases on the same server using the same Oracle Home, but it should narrow it down to a relatively small number of databases that you can check manually.
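As a concrete starting point, incoming database-link sessions usually appear in `v$session` with a PROGRAM value like `oracle@<remote-host>`, because it is the remote server's own background process that opens the connection. The sketch below groups current sessions by that pattern; treat the PROGRAM filter as a heuristic (the value is supplied by the client, as noted above), and the column choice is an assumption rather than a documented db-link marker.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class DbLinkSessionScan {
    // Heuristic: sessions opened by a remote Oracle server (e.g. via a db link)
    // typically report PROGRAM as 'oracle@<remote-host>'.
    static final String QUERY =
        "SELECT machine, program, username, COUNT(*) AS sessions " +
        "FROM v$session " +
        "WHERE program LIKE 'oracle@%' " +
        "GROUP BY machine, program, username";

    // Prints one line per (machine, program, user) combination currently connected.
    public static void printCandidates(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(QUERY)) {
            while (rs.next()) {
                System.out.printf("%s  %s  %s  %d%n",
                    rs.getString("machine"), rs.getString("program"),
                    rs.getString("username"), rs.getLong("sessions"));
            }
        }
    }
}
```

This only shows sessions connected at the moment you run it, so on an 8i system you would want to sample it over time (or fall back to auditing/listener logs as described above) before declaring the database unused.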
Before I start with my question, I would like to clarify that I am a Java/J2EE developer and have a limited understanding of things on the Oracle side.
I am using GlassFish server with JDBC connection pooling and, on the back end, an Oracle database.
I am also using an Oracle global temporary table to execute some workflow.
I am storing session-specific data in the global temp table.
Now my issue is that most of the time I am getting the same session ID for each connection.
Does that mean I can't use a global temporary table with GlassFish JDBC connection pooling?
Another interesting thing is that if I remove connection pooling, I get a different session ID for each connection.
Please provide your suggestions.
When using connection pooling, it's always best not to leave state in the database session when the connection is released into the pool, because there is no guarantee that you'll get back the same connection the next time you need one. The data in a global temporary table (GTT) is an example of such state: it belongs to one database session, i.e. one JDBC connection (there is a 1-1 mapping between DB session and JDBC connection), and it won't be visible if you use another JDBC connection.
So if your business logic requires that you use a GTT, then you should not release the connection back to the pool until you're done using the GTT. Note that this goes against the best practice of releasing the connection back to the pool as soon as possible. As an alternative, you may use a normal table and commit your temporary results into it so that they can be accessed through any other connection.
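To see why pooling makes GTT data "stick" to a connection (and why you keep seeing the same session ID), here is a deliberately simplified toy model in plain Java collections - not real JDBC: each pooled connection maps 1-1 to a session, and each session holds its own private copy of the GTT's rows.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class GttPoolDemo {
    // Stand-in for one DB session / one JDBC connection.
    static class Session {
        final int id;
        final List<String> gttRows = new ArrayList<>(); // session-private GTT contents
        Session(int id) { this.id = id; }
    }

    // Stand-in for a connection pool: hands back an idle session if one
    // exists, otherwise opens a new one.
    static class Pool {
        private final Deque<Session> idle = new ArrayDeque<>();
        private int nextId = 1;
        Session borrow() { return idle.isEmpty() ? new Session(nextId++) : idle.poll(); }
        void release(Session s) { idle.push(s); }
    }
}
```

Borrowing, releasing, and borrowing again returns the very same session (hence the same session ID and still-visible GTT rows), while a second concurrent borrow gets a fresh session whose GTT is empty; that is exactly the behavior described in the answer above.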
I have the following issue:
Two instances of an application on two different systems should share a small database.
The main problem is that both systems can only exchange data through a network-folder.
I don't have the possibility to set up a database server somewhere.
Is it possible to place a H2 database on the network-folder and let both instances connect to the database (also concurrently)?
I could connect with both instances to the db using the embedded mode if I disable the file-locking, right?
The instances can perform either READ or INSERT operations on the db. Do I risk data corruption using multiple concurrent embedded connections?
As the documentation says (http://h2database.com/html/features.html#auto_mixed_mode):
Multiple processes can access the same database without having to start the server manually. To do that, append ;AUTO_SERVER=TRUE to the database URL. You can use the same database URL independent of whether the database is already open or not. This feature doesn't work with in-memory databases.
// Application 1:
DriverManager.getConnection("jdbc:h2:/data/test;AUTO_SERVER=TRUE");
// Application 2:
DriverManager.getConnection("jdbc:h2:/data/test;AUTO_SERVER=TRUE");
From H2 documentation:
It is also possible to open the database without file locking; in this case it is up to the application to protect the database files. Failing to do so will result in a corrupted database.
I think that if your application always uses the same configuration (shared file database on a network folder), you need to create an application layer that manages concurrency.
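One hedged sketch of such an application layer: serialize access through an OS file lock placed next to the database files on the share. The class and lock-file name below are assumptions, and note that `java.nio.channels.FileLock` behavior on network file systems (NFS/SMB) varies by server and mount options, so test it on your actual share before relying on it.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Cross-process guard: holds an exclusive lock on a file next to the
// database; the second instance blocks in the constructor until the
// first instance releases the lock.
public class NetworkFolderLock implements AutoCloseable {
    private final FileChannel channel;
    private final FileLock lock;

    public NetworkFolderLock(Path lockFile) throws IOException {
        channel = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        lock = channel.lock(); // blocks until the other instance releases it
    }

    public boolean isHeld() {
        return lock.isValid();
    }

    @Override
    public void close() throws IOException {
        lock.release();
        channel.close();
    }
}
```

Usage would be `try (NetworkFolderLock guard = new NetworkFolderLock(lockPath)) { /* open embedded connection, do INSERTs */ }` around each write; with AUTO_SERVER=TRUE (shown above) this extra layer is typically unnecessary, so it is mainly relevant to the no-file-locking scenario the documentation warns about.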
We are trying to reproduce an Oracle deadlock issue in our Grails / JBoss 5 / Windows Server 2003 application with The Grinder. We are simulating 800 concurrent users using 8 VM Grinder nodes, but we are only seeing one database connection per VM, so somewhere along the line there appears to be some sort of limit.
How can we lift this limit to allow more than one database connection per VM?
Are you trying to connect directly from the Grinder to Oracle? Normally you'd use the Grinder to apply load against your JBoss server, and let JBoss worry about the Oracle connections.
If you really want to go from The Grinder to Oracle, and you want to control exactly how many DB connections you open, it can be done by opening a separate connection for each Grinder thread. Instantiate a new connection in the __init__ method of your TestRunner class. You'll want to avoid using any ORM tools (Hibernate, iBATIS, ...) since they do connection pooling for you and won't let you directly control the number of DB connections you open. Use the JDBC API (via Jython) instead.