Oracle: how to run a stored procedure "later"

We have a system that allows users interfacing data into the database to set up various rules that are used to alter data before it is merged into the main table. For example, an order might have a rule that sets up which delivery company to use based on a customer's address.
This is originally intended to operate only on the data being loaded in, so it's limited to functions that you can call from a select statement. An important point to note is that the data is not in the destination table yet.
Now, I have a project that requires an update to another table (fine - I can use an autonomous_transaction pragma for that). However, there are some functions I need to run that require the data to be inserted before they run (i.e. they are aggregating data).
So, I really just want to queue up my procedure to run at some later time (it isn't time-dependent).
How do I do that in Oracle? The wealth of documentation is rather overwhelming when I just want to do something simple.

BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'daily_tasks_job',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'prc_daily_tasks',
    repeat_interval => 'FREQ=DAILY; INTERVAL=1',
    enabled         => TRUE,
    comments        => 'Calls stored procedure once a day'
  );
END;
/
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'SHELL_JOB',
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',
    job_type        => 'EXECUTABLE',
    job_action      => '/u01/app/oracle/admin/tools/shell_job.sh',
    enabled         => TRUE,
    comments        => 'Perform stuff'
  );
END;
/

The standard approach for this would be to use DBMS_JOB (or DBMS_SCHEDULER) to schedule a job that calls the procedure.
If there is some precondition, the job can check it: if the precondition is fulfilled, the job continues; if not, it reschedules itself and exits.
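If the procedure should run only once rather than on a schedule, DBMS_SCHEDULER handles that too. A minimal sketch (the job and procedure names here are placeholders):

```sql
BEGIN
  DBMS_SCHEDULER.create_job (
    job_name   => 'run_once_job',        -- placeholder name
    job_type   => 'STORED_PROCEDURE',
    job_action => 'my_deferred_proc',    -- placeholder procedure
    start_date => SYSTIMESTAMP,          -- run as soon as possible
    enabled    => TRUE,
    auto_drop  => TRUE,                  -- remove the job once it has run
    comments   => 'One-off deferred call'
  );
END;
/
```

With no repeat_interval, the job fires once at start_date and, thanks to auto_drop, cleans itself up afterwards.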

Related

11g: invalid ROWID through V$LOGMNR_CONTENTS

I used LogMiner to get change data from the archived logs, but got the invalid ROWID 'AAAAAAAAAAAAAAAAAA'. How could this happen? It was just an INSERT operation.
-- copy catalog
begin
  sys.dbms_logmnr_d.build(options => dbms_logmnr_d.STORE_IN_REDO_LOGS);
end;
/
-- add logfile
begin
  sys.dbms_logmnr.add_logfile(LogFileName => '/arch/archlog/SZO1ABS9/ARC0000286133_0846017616.0001',
                              Options     => sys.dbms_logmnr.NEW);
end;
/
-- start logmnr
begin
  sys.dbms_logmnr.start_logmnr(Options => sys.dbms_logmnr.DICT_FROM_REDO_LOGS +
                                          sys.dbms_logmnr.COMMITTED_DATA_ONLY);
end;
/
-- fetch result
select scn, start_scn, commit_scn, timestamp, operation, row_id, sql_redo, sql_undo
from v$logmnr_contents
where row_id = 'AAAAAAAAAAAAAAAAAA' and scn = '7590067871061';
I'm sure you got around this by now lol, but I recently had to deal with the same problem.
I'm not sure what the underlying cause is, but I worked out two approaches to fix it.
In a post-processing pass, I fixed up these ROWIDs by issuing a new query that extracts the table's primary-key value via MINE_VALUE; you can then use that key to query the table itself for the real ROWID.
If you have flashback enabled, this fixup is apparently already taken care of: if you look at the same transaction in FLASHBACK_TRANSACTION_QUERY or a versions query on the table, you'll see the correct ROWID and no 'AAAA..'.
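The post-processing fix can be sketched as follows (the schema, table, and key column names are made up for illustration):

```sql
-- 1) Pull the primary-key value out of the redo record instead of trusting ROW_ID
SELECT scn, timestamp, operation,
       DBMS_LOGMNR.MINE_VALUE(redo_value, 'MYSCHEMA.MYTABLE.ID') AS pk_value
FROM   v$logmnr_contents
WHERE  seg_owner  = 'MYSCHEMA'
AND    table_name = 'MYTABLE'
AND    row_id     = 'AAAAAAAAAAAAAAAAAA';

-- 2) Resolve the real ROWID from the live table via that key
SELECT rowid FROM myschema.mytable WHERE id = :pk_value;
```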

How to analyze a table using DBMS_STATS package in PL/SQL?

Here's the code I'm working on:
begin
  DBMS_STATS.GATHER_TABLE_STATS (
    ownname          => 'appdata',
    tabname          => 'TRANSACTIONS',
    cascade          => true,
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'for all indexed columns size 1',
    granularity      => 'ALL',
    degree           => 1);
end;
/
After executing the block, "PL/SQL procedure successfully completed" is displayed.
How do I view the statistics gathered by DBMS_STATS for this particular table?
You can see table-level information in DBA_TABLES:
SELECT *
FROM DBA_TABLES where table_name='TRANSACTIONS';
e.g. the LAST_ANALYZED column shows when the table was last analyzed.
There is also per-column information in
SELECT * FROM all_tab_columns where table_name='TRANSACTIONS';
where you can find the min value, max value, etc.
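The dedicated statistics views expose the same figures without the rest of the table metadata; for example:

```sql
-- Table-level statistics gathered by DBMS_STATS
SELECT num_rows, blocks, avg_row_len, last_analyzed
FROM   dba_tab_statistics
WHERE  owner = 'APPDATA' AND table_name = 'TRANSACTIONS';

-- Column-level statistics (distinct values, histograms, etc.)
SELECT column_name, num_distinct, histogram, last_analyzed
FROM   dba_tab_col_statistics
WHERE  owner = 'APPDATA' AND table_name = 'TRANSACTIONS';
```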

Associating job with job class

I'm attempting to create a job using DBMS_SCHEDULER in an Oracle 11g DB but having some trouble setting the job class attribute. I have already looked in the SYS schema and there is a job class named "SCHED$_LOG_ON_ERRORS_CLASS" that only outputs to the log if a job fails, which is what I want instead of having it log every time the job succeeds. Here is the script I am using to create the job:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'DIRXML.CHECK_EVENTLOG',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'DIRXML.P_Check_Eventlog',
    job_class       => 'DIRXML.SCHED$_LOG_ON_ERRORS_CLASS',
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=30',
    enabled         => TRUE
  );
END;
/
The script will execute without errors if I remove the job_class attribute but when I add it I get the following error:
ORA-27476: "SYS.SCHED$_LOG_ON_ERRORS_CLASS" does not exist
ORA-06512: at "SYS.DBMS_ISCHED", line 124
ORA-06512: at "SYS.DBMS_SCHEDULER", line 271
ORA-06512: at line 2
The only thing I can think of is that permissions aren't set up correctly for my user?
It turned out there was no public EXECUTE grant on that specific job class, which explains why the scheduler couldn't find it.
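If a missing grant is indeed the cause, a privileged user can grant EXECUTE on the class directly. (Job classes live in the SYS schema, which also explains why the error message reports SYS rather than DIRXML.)

```sql
-- Run as SYS or another suitably privileged user
GRANT EXECUTE ON sys.SCHED$_LOG_ON_ERRORS_CLASS TO dirxml;
```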

Create trigger in Oracle 11g

I want to create a trigger in Oracle 11g. The problem is that I want a trigger which runs every time when there is a SELECT statement. Is this possible or is there other way to achieve the same result. This is the PL/SQL block:
CREATE TRIGGER time_check
  BEFORE INSERT OR UPDATE OF users, passwd, last_login ON table
  FOR EACH ROW
BEGIN
  delete from table where last_login < sysdate - 30/1440;
END;
I'm trying to implement a table where I can store user data, and I want to "flush" rows that are older than one hour. Are there alternative ways I could implement this?
P.S. Can you tell me whether this PL/SQL block is correct? Are there any mistakes?
BEGIN
  sys.dbms_scheduler.create_job(
    job_name        => '"ADMIN"."USERSESSIONFLUSH"',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin
                          -- Insert PL/SQL code here
                          delete from UserSessions where last_login < sysdate - 30/1440;
                        end;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=2',
    start_date      => systimestamp at time zone 'Asia/Nicosia',
    job_class       => '"DEFAULT_JOB_CLASS"',
    comments        => 'Flushes expired user sessions',
    auto_drop       => FALSE,
    enabled         => FALSE);
  sys.dbms_scheduler.set_attribute( name => '"ADMIN"."USERSESSIONFLUSH"', attribute => 'job_priority', value => 5);
  sys.dbms_scheduler.set_attribute( name => '"ADMIN"."USERSESSIONFLUSH"', attribute => 'logging_level', value => DBMS_SCHEDULER.LOGGING_FAILED_RUNS);
  sys.dbms_scheduler.enable('"ADMIN"."USERSESSIONFLUSH"');
END;
/
I'm not aware of a way to have a trigger on SELECT. From the documentation, the only statements you can trigger on are INSERT/DELETE/UPDATE (and some DDL).
For what you want to do, I would suggest a simpler solution: use the DBMS_SCHEDULER package to schedule a cleanup job every so often. It won't add overhead to your SELECT queries, so it should have less overall performance impact.
You'll find lots of examples in: Examples of Using the Scheduler

How to audit deletes in a certain table with Oracle?

I'm trying to record DELETE statements in a certain table using Oracle's auditing features. I ran:
SQL> AUDIT DELETE TABLE BY TPMDBO BY ACCESS;
Audit succeeded.
I'm unclear whether this audits the deletion of the table itself (i.e., dropping the table) or the deletion of rows within any table (i.e., the DELETE command). If the latter, how do I limit the auditing to only a table called Foo? Thanks!
UPDATE:
SQL> show parameter audit
NAME TYPE VALUE
------------------------------------ ----------- -------------
audit_file_dest string /backup/audit
audit_sys_operations boolean TRUE
audit_syslog_level string
audit_trail string XML, EXTENDED
There is a newer feature called fine-grained auditing (FGA) that stores its log in SYS.FGA_LOG$ instead of SYS.AUD$. Here is the FGA manual.
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'FOO',
    policy_name     => 'my_policy',
    policy_owner    => 'SEC_MGR',
    enable          => TRUE,
    statement_types => 'DELETE',
    audit_condition => 'USER = ''myuser''',
    audit_trail     => DBMS_FGA.DB);
END;
/
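Once the policy is in place, the captured DELETE statements can be read back from the FGA audit-trail view:

```sql
SELECT db_user, timestamp, sql_text
FROM   dba_fga_audit_trail
WHERE  object_schema = 'HR'
AND    object_name   = 'FOO'
ORDER  BY timestamp DESC;
```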
Yes, your original command audits DELETE operations (not DROP) by that user on all tables. Check the output of show parameter audit to see where the audit trail is written.