To check execution in DB - Oracle

Is there a way to know whether any of the (say) 100 DDL and DML statements that I am executing through JDBC is stuck?
I need this for a progress bar: I want to confirm that the SQL statements are actually executing in the database and are not hung, so that I can inform the user.
Is there a way to find this out?

There are any number of Oracle data dictionary views that you might use given the relatively vague requirements (what, exactly, is your definition of "hung", for example).
I'd probably start, though, with v$session, assuming you can identify the sid and serial# of the particular session that you want to monitor (which may be aided by looking at the various columns in v$session). STATUS will tell you whether the session is actively executing a SQL statement at that particular instant. The sql_id will (generally) let you join to v$sqlarea or other views that tell you what statement is currently executing. The event column will tell you what the session is waiting on (e.g. reading from disk, waiting on CPU, waiting to acquire a lock, etc.).
The sql_id from v$session will also let you join to v$sqlstats which periodically updates with things like the amount of logical I/O a particular SQL statement has generated which would let you see that the currently active statement is doing something (whether that something is useful or whether it will terminate in our lifetime would be much more difficult).
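For example, a minimal monitoring query along those lines might look like this (a sketch; the SID 1234 is a placeholder, and note that v$sqlstats can carry one row per plan for the same sql_id):
select s.sid, s.serial#, s.status, s.event, s.sql_id,
       q.executions, q.buffer_gets, q.cpu_time
from v$session s
     left join v$sqlstats q on q.sql_id = s.sql_id
where s.sid = 1234;
Re-running that every few seconds and watching buffer_gets or cpu_time climb is a reasonable "it is still doing something" check.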
Depending on what the code is doing, there may be one or more rows in v$session_longops that you can use to track the progress of longer-running operations. Using this effectively, though, will require that the third-party code issues the sort of long-running SQL operations that Oracle can monitor automatically (e.g. table scans of tables that have a reasonable amount of data) or that the code is instrumented to use v$session_longops to track its own progress.
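If you control the PL/SQL, instrumenting it yourself is straightforward; here is a minimal sketch using DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS (the operation name and units are made up):
DECLARE
  l_rindex BINARY_INTEGER := dbms_application_info.set_session_longops_nohint;
  l_slno   BINARY_INTEGER;
  l_total  NUMBER := 100;
BEGIN
  FOR i IN 1 .. l_total LOOP
    -- ... process one unit of work here ...
    dbms_application_info.set_session_longops(
      rindex    => l_rindex,
      slno      => l_slno,
      op_name   => 'My batch step',  -- appears as OPNAME in v$session_longops
      sofar     => i,
      totalwork => l_total,
      units     => 'statements');
  END LOOP;
END;
/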
Depending on what version of Oracle you're using, you might also be able to use the v$sql_monitor view to monitor SQL in real time.
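On 11g and later, a quick check might look like this (a sketch; note that real-time SQL monitoring only captures statements that run long enough or in parallel, and it requires the appropriate licensing):
select sid, sql_id, status, elapsed_time, buffer_gets
from v$sql_monitor
where status = 'EXECUTING';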

Related

PL/SQL - retrieve output

Is there a way to retrieve output from PL/SQL continuously, rather than waiting until the stored procedure completes its execution? By "continuously" I mean as it executes each EXECUTE IMMEDIATE.
Is there any other mechanism to retrieve PL/SQL output?
As per Oracle docs
Output that you create using PUT or PUT_LINE is buffered in the SGA. The output cannot be retrieved until the PL/SQL program unit from which it was buffered returns to its caller. So, for example, Enterprise Manager or SQL*Plus do not display DBMS_OUTPUT messages until the PL/SQL program completes.
As far as I know, there is a way, but not with DBMS_OUTPUT.PUT_LINE. The technique I use is:
create a log table which will accept the values you'd normally display using DBMS_OUTPUT.PUT_LINE. The columns I use are
ID (a sequence, to be able to sort data)
Date (to know what happened when; might not be enough for sorting purposes because operations that take a very short time to finish might have the same timestamp)
Message (a VARCHAR2 column, large enough to accept the whole information)
create a logging procedure which inserts values into that table. It should be an autonomous transaction so that you can COMMIT within it (and access the data from other sessions) without affecting the main transaction (a minimal sketch follows these steps)
Doing so, you'd
start your PL/SQL procedure
call the logging procedure whenever appropriate (basically, where you'd put the DBMS_OUTPUT.PUT_LINE call)
in another session, periodically query the log table as select * from log_table order by ID desc
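Here is a minimal sketch of such a table and procedure (all names are illustrative):
create sequence log_seq;
create table log_table (
  id      number,
  datum   timestamp,
  message varchar2(4000)
);
create or replace procedure p_log (p_message in varchar2) is
  pragma autonomous_transaction;  -- commits independently of the main transaction
begin
  insert into log_table (id, datum, message)
  values (log_seq.nextval, systimestamp, p_message);
  commit;
end;
/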
Additionally, you could write a simple Apex application with one report page which selects from the logging table and refreshes periodically (for example, every 10 seconds or so) and view the main PL/SQL procedure's execution.
The approach that Littlefoot has provided is what I normally use as well.
However, there is another approach that you can try for a specific use case. Let's say you have a long-running batch job (like a payroll process for example). You do not wish to be tied down in front of the screen monitoring the progress. But you want to know as soon as the processing of any of the rows of data hits an error so that you can take action or inform a relevant team. In this case, you could add code to send out emails with all the information from the database as soon as the processing of a row hits an error (or meets any condition you specify).
You can do this using the functions and procedures provided in the UTL_MAIL package (see the UTL_MAIL documentation from Oracle).
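A hedged sketch of such a call (UTL_MAIL has to be installed by the DBA and the smtp_out_server parameter configured; the addresses and message are made up):
begin
  utl_mail.send(
    sender     => 'batch@example.com',
    recipients => 'oncall@example.com',
    subject    => 'Payroll batch: row failed',
    message    => 'Processing of row id 12345 failed: <error details here>');
end;
/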
For monitoring progress without the overhead of logging to tables and autonomous transactions, I use:
DBMS_APPLICATION_INFO.SET_CLIENT_INFO( TO_CHAR(SYSDATE, 'HH24:MI:SS') || ' On step A' );
and then monitor v$session.client_info for your session. It's all in memory and won't persist, of course, but it's a quick, easy, near-zero-cost way of posting progress.
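Then, from another session, something like this (the filter is just an example):
select sid, serial#, client_info
from v$session
where client_info is not null;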
Another option (Linux/UNIX) that I like for centralised logging, which is persistent, avoids writing to the database, and is more generally viewable, is interfacing to syslog and having Splunk or similar pick the messages up. If you have Splunk or similar, this makes the monitoring viewable without having to connect to the database and query it directly. See this post for how to do it:
https://community.oracle.com/thread/2343125

V$SQL stats for PLSQL table function

I recently moved a set of my Oracle SQL queries to a package with table functions where I am returning data pipelined.
However, I started observing something unusual: V$SQL stats like buffer_gets, fetches, cpu_time, elapsed_time, etc. have started showing cumulative numbers, and they keep getting higher with every execution of the queries.
Is this usual behavior?
That's expected behavior.
There are a lot of shared resources involved. If you have an application with a dozen database connections in a pool, all doing the same sort of work, then you can have SQLs that have multiple users fetching from them at the same time (e.g. Connection 1 getting the invoices for ABC while Connection 2 is getting the invoices for XYZ). V$SQL (or GV$SQL in a RAC environment) will show the cumulative totals for most details (and things like USERS_EXECUTING will be current values across all sessions).
The only time they'll get reset is when the SQL ages out of the shared pool, typically when it hasn't been used for some time and the resources are needed for another SQL, or on rare events like a database shutdown.
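If you want per-execution figures rather than running totals, divide by EXECUTIONS yourself; a sketch (the sql_text filter is hypothetical):
select sql_id, executions, buffer_gets,
       round(buffer_gets / nullif(executions, 0)) as gets_per_exec
from v$sql
where sql_text like '%my_table_function%';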

SAS connection to Oracle hung up for 2 hours

In SAS we have a library which is actually an ORACLE schema, and today I faced a strange event when trying to query a table in this library.
A regular SAS SQL query:
proc sql;
delete from table where id=123;
quit;
was hung up for two hours, while it usually takes a few seconds:
NOTE: PROCEDURE SQL used (Total process time):
real time 2:00:33.49
cpu time 0.03 seconds
While this operation was being performed, I tried to delete a nearby row in ORACLE SQL DEVELOPER, but it hung processing the delete request too. However, deleting a row that was not near these rows did not cause any problems. How can I find out the possible reason? I guess it was a sort of deadlock.
It sounds like someone has locked a row that your session is trying to delete. You should be able to spot this by querying v$session:
select sid, schemaname, osuser, terminal, program, event
from v$session
where type != 'BACKGROUND';
and checking if your session has an event of "enq: TX - row lock contention" (or similar). If so, then you'll have to work out who has the blocking lock (if you have access to Toad's session browser, this is easy to do, but Google should throw up something that can help. Or, if your database is Oracle 11.2, there's a view: v$session_blockers that ought to pinpoint the blocking session), and then get them to either commit or rollback their transaction.
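For example, on 10g and later v$session also exposes a BLOCKING_SESSION column, so a quick first check might be (a sketch, not a full lock-tree report):
select sid, serial#, event, blocking_session
from v$session
where blocking_session is not null;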

Oracle - Can't Find Long Running Queries

What am I missing here? I am trying to test identifying long running queries.
I have a test table with about 400 million rows called mytest.
I ran select * from mytest in sqlplus
In another window, I ran the script below to see my long running query
select s.username, s.sid, s.serial#, s.schemaname,
s.program, s.osuser, s.status, s.last_call_et
from v$session s
where last_call_et >= 1 -- this is just for testing
My long running query does not show up in the result from the query above. If I change the criteria to be >=0, then I see my query showing the status as INACTIVE and last_call_et of 0 despite the fact that the query is still running. What can I do to see my long running queries like the select * from... above so that I can kill it?
Thanks
First, you need to understand what a query like select * from mytest is really doing under the covers because that's generally not going to be a long-running query. Oracle doesn't ever need to materialize that result set and isn't going to read all the data as the result of a single call. Instead, what goes on is a series of calls each of which cause Oracle to do a little bit of work. The conversation goes something like this.
Client: Hey Oracle, run the query for me: select * from mytest
Oracle: Sure thing (last_call_et resets to 0 to reflect that a new call started). I've generated a query plan and opened a cursor; here's a handle. (Note that no work has been done yet to actually execute the query.)
Client: Cool, thanks. Using this cursor handle, fetch me the next 50 rows (the fetch size is a client-side setting).
Oracle: Will do (last_call_et resets to 0 to reflect that a new call started). I started full scanning the table, read a couple of blocks, and got 50 rows. Here you go.
Client: OK, I've processed those. Using this cursor handle, fetch the next 50 rows.
Repeat until all the data is fetched.
At no point in this process is Oracle ever really being asked to do more than read a handful of blocks to get the 50 rows (or whatever fetch size the client is requesting). At any point, the client could simply not request the next batch of data, so Oracle doesn't need to do anything long-running. Oracle doesn't track the application think time between requests for more data: it has no idea whether the client is a GUI in a tight loop fetching data or whether it is displaying a result to a human and waiting for them to hit the "next" button. The vast majority of the time, the session is going to be INACTIVE because it's mostly waiting for the client to request the next batch of data (which it generally won't do until it has formatted the last batch of data for display and done the work to display it).
When most people talk about a long-running query, they're talking about a query that Oracle is actively processing for a relatively long time with no waits on a client to fetch the data.
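So to catch sessions that are genuinely mid-call for a long time, filter on ACTIVE sessions; for example (the 60-second threshold is arbitrary):
select sid, serial#, username, status, sql_id, last_call_et
from v$session
where status = 'ACTIVE'
and type != 'BACKGROUND'
and last_call_et >= 60;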
You can use the script below to find long-running operations:
select *
from (
    select opname,
           start_time,
           target,
           sofar,
           totalwork,
           units,
           elapsed_seconds,
           message
    from v$session_longops
    order by start_time desc
)
where rownum <= 1;

How to find out when an Oracle table was updated the last time

Can I find out when the last INSERT, UPDATE or DELETE statement was performed on a table in an Oracle database and if so, how?
A little background: The Oracle version is 10g. I have a batch application that runs regularly, reads data from a single Oracle table and writes it into a file. I would like to skip this if the data hasn't changed since the last time the job ran.
The application is written in C++ and communicates with Oracle via OCI. It logs into Oracle with a "normal" user, so I can't use any special admin stuff.
Edit: Okay, "Special Admin Stuff" wasn't exactly a good description. What I mean is: I can't do anything besides SELECTing from tables and calling stored procedures. Changing anything about the database itself (like adding triggers) is sadly not an option if I want to get it done before 2010.
I'm really late to this party but here's how I did it:
SELECT SCN_TO_TIMESTAMP(MAX(ora_rowscn)) from myTable;
It's close enough for my purposes.
Since you are on 10g, you could potentially use the ORA_ROWSCN pseudocolumn. That gives you an upper bound of the last SCN (system change number) that caused a change in the row. Since this is an increasing sequence, you could store off the maximum ORA_ROWSCN that you've seen and then look only for data with an SCN greater than that.
By default, ORA_ROWSCN is actually maintained at the block level, so a change to any row in a block will change the ORA_ROWSCN for all rows in the block. This is probably quite sufficient if the intention is to minimize the number of rows you process multiple times with no changes if we're talking about "normal" data access patterns. You can rebuild the table with ROWDEPENDENCIES which will cause the ORA_ROWSCN to be tracked at the row level, which gives you more granular information but requires a one-time effort to rebuild the table.
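A sketch of the incremental pattern (the table and bind names are illustrative):
-- at the end of a run, remember the high-water mark
select max(ora_rowscn) from mytest;
-- on the next run, fetch only rows changed since then
select * from mytest where ora_rowscn > :last_seen_scn;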
Another option would be to configure something like Change Data Capture (CDC) and to make your OCI application a subscriber to changes to the table, but that also requires a one-time effort to configure CDC.
Ask your DBA about auditing. He can start an audit with a simple command like:
AUDIT INSERT ON user.table
Then you can query the table USER_AUDIT_OBJECT to determine if there has been an insert on your table since the last export.
Google "Oracle auditing" for more info...
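Once the audit is active, a check along these lines should work (a sketch; the exact columns may vary by version, and MYTABLE is a placeholder):
select obj_name, action_name, timestamp
from user_audit_object
where obj_name = 'MYTABLE'
and timestamp > sysdate - 1;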
SELECT * FROM all_tab_modifications;
Could you run a checksum of some sort on the result and store that locally? Then when your application queries the database, you can compare its checksum and determine if you should import it?
It looks like you may be able to use the ORA_HASH function to accomplish this.
Update: Another good resource: 10g’s ORA_HASH function to determine if two Oracle tables’ data are equal
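A sketch of the checksum idea (the column names are made up; summing per-row hashes gives an order-insensitive value):
select sum(ora_hash(id || '|' || col1 || '|' || col2)) as table_checksum
from mytest;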
Oracle can watch tables for changes and, when a change occurs, can execute a callback function in PL/SQL or OCI. The callback gets an object that is a collection of the tables which changed, each with a collection of the rowids which changed and the type of action (insert, update, delete).
So you don't even go to the table, you sit and wait to be called. You'll only go if there are changes to write.
It's called Database Change Notification. It's much simpler than CDC as Justin mentioned, but both require some fancy admin stuff. The good part is that neither of these require changes to the APPLICATION.
The caveat is that CDC is fine for high volume tables, DCN is not.
If auditing is enabled on the server, simply use
SELECT *
FROM ALL_TAB_MODIFICATIONS
WHERE TABLE_NAME IN ()
You would need to add a trigger on insert, update, and delete that sets a value in another table to sysdate.
When you run the application, it would read that value and save it somewhere so that the next time it runs it has a reference to compare against.
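For what it's worth, a minimal sketch of that idea (the marker table and trigger names are illustrative):
create table mytest_last_change (last_change date);
insert into mytest_last_change values (sysdate);
create or replace trigger trg_mytest_track
after insert or update or delete on mytest
begin
  update mytest_last_change set last_change = sysdate;
end;
/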
Would you consider that "Special Admin Stuff"?
It would be better to describe what you're actually doing so you get clearer answers.
How long does the batch process take to write the file? It may be easiest to let it go ahead and then compare the file against a copy of the file from the previous run to see if they are identical.
If anyone is still looking for an answer, they can use the Oracle Database Change Notification feature, available since Oracle 10g. It requires the CHANGE NOTIFICATION system privilege. You can register listeners that trigger a notification back to the application.
Please use the statement below:
select * from all_objects ao where ao.OBJECT_TYPE = 'TABLE' and ao.OWNER = 'YOUR_SCHEMA_NAME'
