Cancel execution of a big query on Oracle submitted by SQL Developer - oracle

Perhaps this is a naive question. When we use SQL Developer to execute a big query against Oracle and then cancel the task, does that also cancel the execution on the server, or not?
Thanks,

Yes: if the DB server finds the time to handle the protocol message, it cancels the execution of the statement and returns an
ORA-01013: user requested cancel of current operation
instead of a SQL result set.
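
The same protocol-level cancel can be issued programmatically from JDBC. A minimal sketch, assuming a hypothetical connection string and table (neither is from the question): a second thread calls Statement.cancel(), and the blocked executeQuery then fails with ORA-01013.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class CancelDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
             Statement stmt = conn.createStatement()) {

            // Ask the server to abandon the statement after 5 seconds.
            Thread canceller = new Thread(() -> {
                try {
                    Thread.sleep(5_000);
                    stmt.cancel(); // sends the cancel message to the server
                } catch (InterruptedException | SQLException ignored) {
                }
            });
            canceller.start();

            try {
                stmt.executeQuery("SELECT * FROM big_table ORDER BY some_col");
            } catch (SQLException e) {
                // Expect ORA-01013: user requested cancel of current operation
                System.out.println("Cancelled: " + e.getMessage());
            }
        }
    }
}

SQL Developer's cancel button triggers the same kind of protocol message; whether the server honours it promptly depends on when it next checks for it, as noted above.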

Related

In the case of a batch read in JDBC using setFetchSize, if we apply a timeout to the statement, is that a batch-level timeout?

PreparedStatement statement = connection.prepareStatement("SELECT * FROM BOOKS");
statement.setFetchSize(100);   // fetch 100 rows per network round trip
statement.setQueryTimeout(10); // timeout of 10 seconds
Is the timeout here applied per fetched batch, or to the total time for all records to be fetched?
The JDBC API documentation of Statement.setQueryTimeout says:
Sets the number of seconds the driver will wait for a Statement object
to execute to the given number of seconds. By default there is no
limit on the amount of time allowed for a running statement to
complete. If the limit is exceeded, an SQLTimeoutException is
thrown. A JDBC driver must apply this limit to the execute,
executeQuery and executeUpdate methods.
Note: JDBC driver implementations may also apply this limit to ResultSet methods (consult your driver vendor documentation for
details).
Note: In the case of Statement batching, it is implementation defined as to whether the time-out is applied to individual SQL
commands added via the addBatch method or to the entire batch of SQL
commands invoked by the executeBatch method (consult your driver
vendor documentation for details).
In other words, this will depend on the database and driver you use.
Some drivers might apply it to execute only, so that fetching rows after execute is not subject to the timeout. Others might also apply it to the individual fetches triggered by ResultSet.next(). Yet others might even use the wall-clock time since query execution started and forcibly close the cursor if executing, fetching and processing the rows takes longer than the specified timeout; that could trigger the timeout even when the server responds quickly but the client takes a long time to process the entire result set.
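
Below is a hedged sketch of guarding both places the timeout can surface, reusing the BOOKS query from the question; whether the SQLTimeoutException comes out of executeQuery or out of a later ResultSet.next() is, per the Javadoc above, driver-dependent.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.SQLTimeoutException;

class TimeoutDemo {
    // Assumes an already-open Connection, as in the question's snippet.
    static void readWithTimeout(Connection connection) throws SQLException {
        try (PreparedStatement statement =
                 connection.prepareStatement("SELECT * FROM BOOKS")) {
            statement.setFetchSize(100);   // rows per network round trip
            statement.setQueryTimeout(10); // seconds

            try (ResultSet rs = statement.executeQuery()) { // may time out here...
                while (rs.next()) {                         // ...or on a later fetch
                    // consume the row
                }
            } catch (SQLTimeoutException e) {
                // Driver-dependent: raised from execute, from a fetch, or both.
                System.err.println("Query timed out: " + e.getMessage());
            }
        }
    }
}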

BizTalk send port returns ORA-01013: user requested cancel of current operation

I have an application that inserts many rows into an Oracle database. The send port returns "ORA-01013: user requested cancel of current operation".
The send port is "WCF-Custom" using OracleDbBinding for the connection to the database.
I had the same issue in the past. My problem was using a "WCF-Custom" port with OracleDBBinding to invoke an Oracle PL/SQL procedure. That procedure was very slow to respond, and eventually I received the error "ORA-01013: user requested cancel of current operation".
My issue was resolved by changing the procedure. I think the error was produced by the "ReceiveTimeout" property of the send port. Its documentation says it "Specifies the WCF message receive timeout. Essentially, this means the maximum amount of time the adapter waits for an inbound message." I suspect that when the ReceiveTimeout elapses, WCF-Custom cancels the operation and Oracle then raises the error.
What's happening:
When inserting large numbers of records, WCF makes several parallel requests to insert the data. The default 'UseAmbientTransaction' setting wraps all the inserts within a single transaction. If one of the inserted rows breaks a database constraint, it tries to roll back the transaction for all the inserts. The transactions all return the Oracle 1013 exception, and the real cause of the failure is lost.
Solution:
On the send port's 'Transport Advanced Options' tab, set the 'Ordered Delivery' check box. This prevents the parallel insert operations, and the real cause of the error will be logged.

How to handle Oracle bulk inserts with transactions if a "transaction status unknown" error is received

In my application I use Oracle (OCI) bulk execution via the following function:
OCIStmtExecute
Under all normal conditions it works as expected. But when an Oracle node failover happens, commits are rejected with errors like ORA-25405:
ORA-25405: transaction status unknown
All the available guidelines say the same thing: "The user must determine the transaction's status manually".
My question is: could there be a scenario where my bulk insert/update is applied only partially while giving the above error?
From http://docs.oracle.com/cd/B10500_01/appdev.920/a96584/oci16m89.htm
With global transactions, it is possible that the transaction is now in-doubt, meaning that it is neither committed nor aborted.
This is exactly your case: the transaction is in doubt, so you cannot assume it was committed.
OCITransCommit() attempts to retrieve the status of the transaction from the server, and that status is returned.
The solution then is to try committing the transaction again with OCITransCommit(), and to check the return value it gives.
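
In JDBC terms the same pattern looks roughly like the sketch below. It is a hypothetical illustration, not the OCI code (which would call OCITransCommit directly), and the BATCH_LOG marker table and openNewConnection helper are invented for it: on an ambiguous commit, verify the outcome from a fresh session before deciding whether to replay the batch.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class AmbiguousCommit {
    private static final int ORA_TRANSACTION_STATUS_UNKNOWN = 25405;

    // Returns true once the batch is known to be committed.
    static boolean commitOrVerify(Connection conn, String batchId) throws SQLException {
        try {
            conn.commit();
            return true;
        } catch (SQLException e) {
            if (e.getErrorCode() != ORA_TRANSACTION_STATUS_UNKNOWN) {
                throw e; // a definite failure: safe to roll back and retry
            }
            // Outcome unknown: check from a new session whether a marker row
            // written inside the same transaction is now visible.
            try (Connection check = openNewConnection();
                 PreparedStatement ps = check.prepareStatement(
                     "SELECT 1 FROM BATCH_LOG WHERE BATCH_ID = ?")) {
                ps.setString(1, batchId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next(); // visible => the commit did succeed
                }
            }
        }
    }

    static Connection openNewConnection() throws SQLException {
        throw new UnsupportedOperationException("supplied by the application");
    }
}

Either way, the commit is all-or-nothing within the one transaction, so a partial insert/update is not the thing to fear here; the unknown is whether the whole batch committed or none of it did.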

How to cause an "ORA-01555: snapshot too old" error without updates

I am running into ORA-01555: snapshot too old errors with Oracle 9i, but am not running any updates with this application at all.
The error occurs after the application has been connected for some hours without issuing any queries; then every query (which would otherwise be a subsecond query) comes back with ORA-01555: snapshot too old: rollback segment number 6 with name "_SYSSMU6$" too small.
Could this be caused by the transaction isolation being set to TRANSACTION_SERIALIZABLE? Or some other bug in the JDBC code? It could be a bug in the jdbc-go driver, but everything I've read about this error leads me to believe it would not occur in scenarios where no DML statements are issued.
Read below for a very good insight into this error by Tom Kyte. The problem in your case may come from what is called 'delayed block cleanout', a case where selects generate redo. However, the root cause is almost surely improperly sized rollback segments (though Tom adds correlated causes: committing too frequently, a very large read after many updates, etc.).
Snapshot too old error (Tom Kyte)
When you run a query on an Oracle database the result will be what Oracle calls a "Read consistent snapshot".
What it means is that all the data items in the result will be represented with the value as of the time the query was started.
To achieve this the DBMS looks into the rollback segments to get the original value of items which have been updated since the start of the query.
The DBMS uses the rollback segment in a circular way and will eventually wrap around - overwriting the old data.
If your query needs data that is no longer available in the rollback segment you will get "snapshot too old".
This can happen if your query is running for a long time on data being concurrently updated.
You can prevent it by either extending your rollback segments or avoid running the query concurrently with heavy updaters.
I also believe newer versions of Oracle provide better dynamic management of rollback segments than Oracle 9i does.
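
As an aside on the questioner's TRANSACTION_SERIALIZABLE suspicion: in Oracle a serializable transaction reads everything as of the transaction's start, so a connection that quietly opened a transaction and then idled for hours will need hours-old undo for its next query, which can produce exactly this error without any local DML. A minimal sketch of ruling that out, assuming a hypothetical pool hook:

import java.sql.Connection;
import java.sql.SQLException;

class IsolationCheck {
    // Hypothetical hook: avoid reusing a connection that is stuck in
    // SERIALIZABLE with a transaction pinned to an hours-old snapshot.
    static void ensureReadCommitted(Connection conn) throws SQLException {
        if (conn.getTransactionIsolation() == Connection.TRANSACTION_SERIALIZABLE) {
            if (!conn.getAutoCommit()) {
                conn.rollback(); // end the transaction, releasing its snapshot
            }
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        }
    }
}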

Session timeout issues while executing Oracle stored procedures from QTP

We have automated the process of capturing baseline metrics for various queries as part of an Oracle tuning project. The automation is carried out by a QTP script which executes a stored procedure; the procedure in turn runs the query a specified number of times with different input parameters. Once the execution of the stored procedure is complete, the script opens OEM and saves the reports by searching for the particular SQL ID.
We are facing an issue when running stored procedures whose queries take a long time to execute. In such cases QTP executes the stored procedure for some time and then appears to stop. When I check OEM after a certain amount of time, QTP has terminated the execution of the stored procedure and the session seems to have timed out.
Since QTP uses ADO, do I need to set the "CommandTimeout" property of the connection to some large value when executing stored procedures that take a long time? Doesn't QTP throw an error on such a timeout? In our case the QTP status was still displayed as "Running" even when nothing was happening in the backend.
