Preventing the Oracle connection from being lost - oracle

How can I prevent the connection to the Oracle server from being lost if it is kept idle for some time?

If you use JDBC 4.0 or newer, the Connection interface provides an isValid() method that lets you check whether the connection is still usable; if it is not, obtain a new connection (reconnect) and execute your SQL.
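A rough sketch of that approach (the connection URL and credentials below are placeholders, and the 5-second validation timeout is just an example):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class OracleConnectionHolder {
    // Placeholder connection details; substitute your own.
    private static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL";
    private static final String USER = "scott";
    private static final String PASSWORD = "tiger";

    private Connection conn;

    // Returns the cached connection, reconnecting if it is no longer valid.
    public synchronized Connection getConnection() throws SQLException {
        // isValid(5) waits at most 5 seconds while checking the connection.
        if (conn == null || !conn.isValid(5)) {
            conn = DriverManager.getConnection(URL, USER, PASSWORD);
        }
        return conn;
    }
}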

One possible way I know of to keep the database connection from being lost is to send a dummy query after the threshold time. By threshold I mean the time after which the connection to the database is expected to become idle or get dropped.
Something like this (in Java, assuming statement is an open java.sql.Statement and the times are in seconds):
long pingTimeToDb = 60;
if (currentTime - lastPingTime > pingTimeToDb) {
    // send a dummy query to keep the connection alive
    statement.execute("SELECT 1 FROM dual");
    lastPingTime = currentTime;
}

Related

BizTalk send port returns ORA-01013: user requested cancel of current operation

I have an application that inserts many rows into an Oracle database. The send port returns "ORA-01013: user requested cancel of current operation".
The send port is "WCF-Custom" using OracleDbBinding for the connection to the database.
I had the same issue in the past. My problem was using a "WCF-Custom" port with OracleDBBinding to invoke an Oracle PL/SQL procedure. That procedure was very slow to respond, and eventually I received the error "ORA-01013: user requested cancel of current operation".
My issue was resolved by changing the procedure. I think the error was produced by the "ReceiveTimeout" property of the send port. The documentation for this property says it "Specifies the WCF message receive timeout. Essentially, this means the maximum amount of time the adapter waits for an inbound message." I suspect that when the ReceiveTimeout elapses, the WCF-Custom port cancels the operation and Oracle then raises the error.
What's happening:
When inserting large numbers of records, WCF makes several parallel requests to insert the data. The default 'UseAmbientTransaction' setting wraps all the inserts within a single transaction. If one of the inserted rows breaks a database constraint, WCF tries to roll back the transaction for all the inserts. The parallel operations all return the Oracle 1013 exception and the real cause of the failure is lost.
Solution:
On the send port 'Transport Advanced Options' tab set the 'Ordered Delivery' check box. This prevents the parallel insert operations and the real cause of the error will be logged.

How to refresh DB connection with Sequel

I use Sequel::Model.db to interact with my DB, but at some point the DB structure was changed outside the application, for example via the DB console.
This method:
Sequel::Model.db.schema('table_name')
still returns the old schema, cached from the first connection I guess.
How can I reset that cache or, ideally, ensure the actual DB connection on each request?
I tried to use a new connection every time:
def db
  @db ||= Sequel.connect(Sequel::Model.db.opts)
end
but, predictably, I got this error eventually:
Sequel::DatabaseConnectionError - PG::ConnectionBad: FATAL: sorry, too many clients already
You shouldn't be changing the structure of the database in an incompatible way while Sequel is running. The easiest way to solve this issue is just to restart the process after changing the database schema, and Sequel will pick up the new database structure.
If you really wanted to try to do this without restarting the process, you could remove the cached schemas (DB.instance_variable_get(:@schemas).clear), and reset the dataset for all model classes (ModelClass.dataset = ModelClass.dataset for each Sequel::Model). However, that doesn't necessarily result in the same thing, since if you remove a column, the old column name will still have a method defined for it.

Load file into Azure DB with MSSQLConnection

I just want to load some files into an Azure DB.
I am using the "Microsoft SQL Server" DB Type for the connection.
The problem is that when I insert more than 10,000 rows, I sometimes (90% of the time) get an error:
Exception in component tMSSqlOutput_5
java.sql.BatchUpdateException: I/O Error: Connection reset
at net.sourceforge.jtds.jdbc.JtdsStatement.executeBatch(JtdsStatement.java:1091)
at dev_storch.extractgc_child2_0_1.extractGC_child2.tFileInputDelimited_5Process(extractGC_child2.java:28852)
at dev_storch.extractgc_child2_0_1.extractGC_child2.tFileList_6Process(extractGC_child2.java:32386)
at dev_storch.extractgc_child2_0_1.extractGC_child2.tFileList_5Process(extractGC_child2.java:31540)
at dev_storch.extractgc_child2_0_1.extractGC_child2.tMSSqlRow_1Process(extractGC_child2.java:30657)
at dev_storch.extractgc_child2_0_1.extractGC_child2.tLoop_2Process(extractGC_child2.java:30440)
at dev_storch.extractgc_child2_0_1.extractGC_child2.tFileList_4Process(extractGC_child2.java:29664)
at dev_storch.extractgc_child2_0_1.extractGC_child2.tJava_3Process(extractGC_child2.java:34020)
at dev_storch.extractgc_child2_0_1.extractGC_child2.tMSSqlInput_1Process(extractGC_child2.java:33593)
at dev_storch.extractgc_child2_0_1.extractGC_child2.tFTPConnection_2Process(extractGC_child2.java:33154)
[FATAL]: dev_storch.extractgc_child2_0_1.extractGC_child2 - tMSSqlOutput_5 I/O Error: Connection reset
[FATAL]: dev_storch.extractgc_child2_0_1.extractGC_child2 - tMSSqlRow_7 Invalid state, the Connection object is closed.
But when the volume of data inserted is lower, I don't receive any error.
My configuration looks like this:
A tMSSqlConnection, then some components that read files from a folder and load them into a table.
The error occurs at the tMSSqlOutput. The components that follow in the job just fill in logs.
I tried changing the batch size and not using a shared DB connection, but that didn't work.
I tried with a generic JDBC component and it seems to work every time. But I don't want to use the generic JDBC components because on the output components you cannot choose the column DB type (but maybe someone knows how that is possible):
MSSQL: [screenshot]
Generic JDBC: [screenshot]
Thank you in advance...
Here is one solution that may work for you:
Be aware that the batch size must be lower than or equal to the limit of parameter markers authorized by the JDBC driver (generally 2000) divided by the number of columns.
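For example, if the driver allows 2000 parameter markers and the target table has 25 columns (illustrative numbers), the batch size should be at most 2000 / 25 = 80 rows.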

oracle: connection timeout when calling a plsql procedure from Java

We have a daily batch job executing an Oracle PL/SQL function. The Quartz scheduler invokes a Java program which makes a call to the PL/SQL function. This function deletes data older than 6 months from 4 tables and then commits the transaction.
This batch job was running successfully in the test environment but started failing when new data was loaded into the tables two weeks ago (the code is supposed to go into production this week). Earlier the number of rows in each table was not more than 0.1 million; now it is 1 million in 3 tables and 2.4 million in the other table.
After running for 3 hours, we get an error in Java (written to the log file): "...Connection reset; nested exception is java.sql.SQLException: Io exception: Connection reset....". When the row counts on the tables were checked, it was clear that no record had been deleted from any of the tables.
Is it possible in an Oracle database for the PL/SQL procedure/function to be automatically terminated/killed when the connection times out and the invoking session is no longer active?
Thanks in advance,
Pradeep.
The PL/SQL call won't be terminated for being inactive, because by definition it isn't inactive - it is still doing something. It won't be generating any network traffic back to your client, though.
It appears something at the network level is causing the connection to be terminated. This could be a listener timeout, a firewall timeout, or something else. If it's consistently after three hours then it will almost certainly be a timeout configured somewhere rather than a network glitch, which would be more random (and possibly recoverable).
When the network connection is interrupted, Oracle will notice at some point and terminate the session. That will cause the PL/SQL call to be terminated, and that will cause any work it has done to be rolled back, which may take a while.
Three hours seems a long time for your deletes, though, even for a few million records. Perhaps you're deleting inefficiently, with row-by-row deletes inside your procedure - which doesn't really help you here, of course. It might be worth pointing out that your production environment might not have whatever setting is killing your connection, or might have a shorter timeout, so even reducing the runtime might not make it bullet-proof in live. You probably need to find the source of the timeout and check the equivalent in the live environment to try to pre-empt similar problems there.
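To make the failure mode concrete, here is a minimal sketch of the kind of call involved (the function name delete_old_data, the connection details, and the parameter are all hypothetical). While execute() blocks for the full runtime, the session is busy server-side but the client socket carries no traffic, which is exactly what an idle-connection timeout in a firewall or listener will kill:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Types;

public class PurgeJob {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details.
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL";
        try (Connection conn = DriverManager.getConnection(url, "batch_user", "secret");
             // delete_old_data is a hypothetical stand-in for the real PL/SQL function.
             CallableStatement cs = conn.prepareCall("{? = call delete_old_data(?)}")) {
            cs.registerOutParameter(1, Types.INTEGER);
            cs.setInt(2, 6); // months of data to keep
            // Blocks here for the whole runtime; the session is busy server-side,
            // but the client socket is silent, so an idle timeout can reset it.
            cs.execute();
            System.out.println("Rows deleted: " + cs.getInt(1));
        }
    }
}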

Server issue when searching Oracle database

I have a JEE application searching a large Oracle database for data. The application uses JDBC to query the database.
The issue I am having is that the results page is unable to be displayed. I get the following error:
The connection to the server was reset while the page was loading.
This happens after 60 seconds. When I run the SQL query manually using a SQL client, the results return in 3 seconds.
I have checked the logs and there are no exceptions that I can see.
Do any of you know the best way to find what is causing the connection to be reset? If I break my search date range into 2, and search both ranges individually, both return results. So it seems that it's the larger result set causing the issue.
Any help is welcome.
You are probably right about the larger result set. Often when running a query from a SQL client, you'll get the first set of records right away; if you page down to force it to pull all the records, then it bogs down. Perhaps you're hitting the same issue with the JDBC client, where it takes more than 60 seconds to get all the rows. I've not done JDBC in a while, but can you get it to stream the result set?
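As a minimal sketch of streaming with plain JDBC (the query, table, and connection details are placeholders): setFetchSize() hints to the driver how many rows to fetch per round trip, so you can process rows as they arrive instead of buffering the whole result set:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class StreamingSearch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "app", "secret");
             PreparedStatement ps = conn.prepareStatement(
                 // Hypothetical query standing in for the real search.
                 "SELECT id, payload FROM search_results WHERE created BETWEEN ? AND ?")) {
            ps.setDate(1, java.sql.Date.valueOf("2012-01-01"));
            ps.setDate(2, java.sql.Date.valueOf("2012-06-30"));
            ps.setFetchSize(500); // fetch rows in batches of ~500 per round trip
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Process each row as it arrives instead of buffering everything.
                    System.out.println(rs.getLong("id") + ": " + rs.getString("payload"));
                }
            }
        }
    }
}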
Regards,
Roger
