What's the use of calling dbms_scheduler.auto_purge()? The procedure is not listed in the official PL/SQL Packages and Types Reference.
What happens when I execute this procedure?
I've never used it. But here's what Morgan's Library and Burleson Consulting say about it (I didn't find any reference in the "official" Oracle documentation).
AUTO_PURGE purges from the logs based on class and global log_history.
The AUTO_PURGE procedure uses the log_history values defined at the scheduler, job class and job log_history level to determine which logs should be purged. This procedure runs as part of the scheduled purge process, but it can also be run manually.
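If you want to trigger it by hand, the call is simply the parameterless procedure shown below; since it is undocumented, treat this as a sketch whose behavior may vary by version. The log_history retention it honors is set through the documented SET_SCHEDULER_ATTRIBUTE interface:

BEGIN
    -- Documented: set the global scheduler log retention to 30 days
    dbms_scheduler.set_scheduler_attribute('log_history', '30');

    -- Undocumented: run the purge now instead of waiting for the
    -- scheduled purge process
    dbms_scheduler.auto_purge;
END;
/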
CacheLoader is not getting called while trying to find an entity using GemfireRepository.
As a workaround, I am using Region<K,V> directly for lookups, which does invoke the CacheLoader. So I wanted to know whether there is any restriction in the Spring Data Repository abstraction that prevents the CacheLoader from being called when an entry is not present in the cache.
Also, is there any other alternative? I have one more scenario where my cache key is a combination of id1 & id2 and I want to get all entries based on id1. If no entry is present in the cache, it should call the CacheLoader to load all entries from the Cassandra store.
There are no limitations or restrictions in SDG when using the SD Repository abstraction (and SDG's Repository extension) that would prevent a CacheLoader from being invoked, so long as the CacheLoader was properly registered on the target Region. Once control is handed over to GemFire/Geode to complete the data access operation (CRUD), it is out of SDG's hands.
However, you should know that GemFire/Geode only invokes CacheLoaders on gets (i.e. Region.get(key) operations), never on (OQL) queries. OQL queries are executed for derived query methods and for custom, user-defined query methods declared with the @Query annotation in the application Repository interface.
NOTE: See Apache Geode CacheLoader Javadoc and User Guide for more details.
For a simple CrudRepository.findById(key) call, the call stack follows from...
SimpleGemfireRepository.findById(key)
GemfireTemplate.get(key)
And then, Region.get(key).
By way of example, and to illustrate this behavior, I added the o.s.d.g.repository.sample.RepositoryDataAccessOnRegionUsingCacheLoaderIntegrationTests to the SDG test suite as part of DATAGEODE-308. You can provide additional feedback in this JIRA ticket, if necessary.
Cheers!
I am calling two different procedures of the same package from two different execution points on an Oracle APEX page. This package has an anonymous block in it (written at the end of the package body).
Execution point: On Load - Before "Body" regions.
Calls: pkg_name.proc_1;
Execution point: Item Read Only section.
Calls: pkg_name.proc_2;
I have put logging in the anonymous block in the package to check its execution, and observed that it executes only once during page rendering.
But after the page rendering, every time I make a call to the package (through dynamic actions), the anonymous block gets executed.
I have read that in Oracle APEX, every time a call is made to the database, it gets a new connection from the pool.
How does the anonymous block in a package execute: once per database session, or every time the package is called?
If once per database session, does that mean the entire page rendering happens in one database session, unlike dynamic actions?
Please help!
Thanks.
Yes. All rendering work is done together before the page is delivered to your browser - hence the single run of your package's anonymous block. But each PL/SQL dynamic action executes as a separate AJAX request, which means each one connects to the database and disconnects independently.
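To illustrate the mechanism, here is a minimal sketch of a package with an initialization block (the spec for pkg_name and the log_table are assumed to exist):

CREATE OR REPLACE PACKAGE BODY pkg_name AS

    PROCEDURE proc_1 IS
    BEGIN
        NULL;  -- real work here
    END proc_1;

    PROCEDURE proc_2 IS
    BEGIN
        NULL;  -- real work here
    END proc_2;

-- Initialization block: runs once per database session, the first time
-- the package is referenced in that session.
BEGIN
    INSERT INTO log_table (logged_at, msg)
    VALUES (SYSTIMESTAMP, 'pkg_name initialized');
END pkg_name;
/

Because all rendering happens in one session, this block fires once per page render; each dynamic action's fresh session fires it again.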
This is a follow-up to this question.
I was having trouble with Oracle performing the eventcreate Windows command from DBMS_SCHEDULER.
As a workaround, I instead created a basic C# application to perform the same eventcreate function. It works on a basic level but I'm facing a few roadblocks.
Here is the program. (I'm not tagging C# in this question because the question is not about C#. I am just providing this as information only.)
using System;
using System.Diagnostics;

class myEventCreateClass
{
    public static void Main(String[] args)
    {
        // Write the first command-line argument to the Application log
        // under the source "MySource", as warning event ID 218.
        using (EventLog eventLog = new EventLog("Application"))
        {
            eventLog.Source = "MySource";
            eventLog.WriteEntry(args[0], EventLogEntryType.Warning, 218);
        }
    }
}
I modified the DBMS_SCHEDULER job to this:
BEGIN
    sys.dbms_scheduler.create_job(
        job_name            => 'SYS.TESTJOB',
        job_type            => 'EXECUTABLE',
        job_action          => 'C:\myEventCreate.exe',
        job_class           => 'DEFAULT_JOB_CLASS',
        number_of_arguments => 1,
        auto_drop           => FALSE,
        enabled             => FALSE);
    sys.dbms_scheduler.set_job_argument_value('SYS.TESTJOB', 1, 'testing123');
    sys.dbms_scheduler.enable('SYS.TESTJOB');
END;
/
When I run this job manually under the SYS schema, it successfully places an event into the Windows event log that says:
testing123
This is where my success ends...
If I create the same job under a different schema (e.g. change all instances of SYS.TESTJOB to MYSCHEMA.TESTJOB), it creates the job in that schema, but when I attempt to run the job (from any schema) I get the following long list of errors:
ORA-27370: job slave failed to launch a job of type EXECUTABLE
ORA-27300: OS system dependent operation:accessing job scheduler service failed with status: 2
ORA-27301: OS failure message: The system cannot find the file specified.
ORA-27302: failure occurred at: sjsec 6a
ORA-27303: additional information: The system cannot find the file specified.
ORA-06512: at "SYS.DBMS_ISCHED", line 185
ORA-06512: at "SYS.DBMS_SCHEDULER", line 486
ORA-06512: at line 1
And when I try to run SYS.TESTJOB from MYSCHEMA, it tells me the job doesn't exist:
ORA-27476: "SYS.TESTJOB" does not exist
ORA-06512: at "SYS.DBMS_ISCHED", line 185
ORA-06512: at "SYS.DBMS_SCHEDULER", line 486
ORA-06512: at line 1
How can I get this job working from a schema other than SYS?
One more problem (probably the bigger issue): I am trying to run this job from inside a trigger.
According to this question, changing the settings of a DBMS_SCHEDULER job (in my case, I'm attempting to change the job arguments each time before I run the job) causes an implicit COMMIT in Oracle, which is not allowed in triggers.
To me it seems misleading for Oracle to even label these as "arguments", because the values of the arguments are fixed inside the job, and changing the arguments means changing the job itself.
Anyway, the accepted answer to that question says to use DBMS_JOB, since it does not implicitly COMMIT, but I can't find a way to use DBMS_JOB to run an external .exe file.
Therefore, is it possible to modify this job somehow to allow dynamic job arguments?
I'm also open to other solutions, but from what I have read, DBMS_SCHEDULER seems to be the best way to accomplish this.
As requested, here is some context for what I am trying to accomplish:
At my company, we have it set up such that if an entry is placed into the Windows event log under a certain source (in this case, MySource, as shown in the provided C# application), a text message containing the content of the log message is automatically sent to the cell phones of myself and a few other admins.
This is extremely useful as it gives us an immediate notification that some event of importance happened, and we can control exactly which events we want to include and what specific information about these events we want to be notified of.
Here are some examples of what we currently get notified about via text message:
The stopping or starting of any of our custom applications (and who stopped/started it if it didn't crash).
When any of our custom applications are taken into or out of watchdog control (and who did this).
When certain "known issues" arise or are about to arise that we haven't fully fixed yet. This allows us to get "ahead of the game" so that we can deal with it proactively rather than waiting for someone to tell us about it.
I want to extend this functionality to some events in our Oracle database (which is why I am trying to place an event into the event log based on a trigger in Oracle).
Here are some things I have in mind as of now that we want to be notified of via text message, all of which can be determined inside a trigger:
When anyone not in a certain "approved" list of users (which would be our admins plus the custom applications with connections to Oracle) connects to our Oracle database. This can be accomplished with a logon trigger. (Actually, I already have this one working, since logon triggers are called by the SYS schema, so I'm not having issues with other schemas not being able to run the job. But... since I still can't change any arguments, the best I can currently do is just say "Someone not approved logged into the Oracle database." It would be a lot more useful if I could pass the username to the Windows event log.)
When anything besides our custom applications changes data in our Oracle database. (Our custom applications handle all of the inserts/updates/deletes etc. Only in very rare cases would we need to manually modify something. We want to be notified when anyone [including myself or other admins] modifies anything in the database.) This can be accomplished with an update/insert/delete trigger on each table.
The reason it works under SYS is that SYS is a special, privileged account. You need to create a new credential and map it to the job.
The solution is to create a credential with DBMS_SCHEDULER.CREATE_CREDENTIAL for an OS account that has enough privileges, and assign this new credential to your job.
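A minimal sketch (the credential name and Windows account are placeholders for your environment):

BEGIN
    -- Create a credential holding the OS account the job should run as
    dbms_scheduler.create_credential(
        credential_name => 'MYSCHEMA.WIN_JOB_CRED',
        username        => 'MYDOMAIN\jobrunner',
        password        => 'itsASecret');

    -- Attach the credential to the job
    dbms_scheduler.set_attribute(
        name      => 'MYSCHEMA.TESTJOB',
        attribute => 'credential_name',
        value     => 'MYSCHEMA.WIN_JOB_CRED');
END;
/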
For the trigger issue, to be honest, I don't know yet.
Edit - solution using Oracle's autonomous transaction facility
After the OP's update and reaction to comments:
Based on the workflow, I think it is better to use Oracle's internal notification facilities to do the responsive audit. Trying to hack your way into the Windows event log via an external application adds another unnecessary layer of complexity.
I would create a table in the database to store all the events, and on top of that table I would create a job with notifications (SMS, mail, etc.) that runs whenever a change to the log table occurs.
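A sketch of that shape (all names are hypothetical; notify_new_events would be a procedure you write against your mail/SMS gateway):

CREATE TABLE app_event_log (
    event_time TIMESTAMP DEFAULT SYSTIMESTAMP,
    event_user VARCHAR2(128),
    event_msg  VARCHAR2(4000)
);

BEGIN
    -- Poll for new rows every minute and send notifications about them
    dbms_scheduler.create_job(
        job_name        => 'NOTIFY_EVENTS',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'notify_new_events',
        repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
        enabled         => TRUE);
END;
/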
To do commit-triggering work from a trigger, use PRAGMA AUTONOMOUS_TRANSACTION in the procedure called from your main scope (it gives you an independent transaction). This allows you to commit any DML you have in that scope, while the rest can still be rolled back.
The permissions problem is already resolved in the other answer. For the 'commit inside a trigger' problem, there is PRAGMA AUTONOMOUS_TRANSACTION. See the bottom of this link for an example: https://docs.oracle.com/cd/B14117_01/appdev.101/b10807/13_elems002.htm. It does exactly what you want.
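For illustration, a minimal sketch (the procedure, trigger, and table names are hypothetical): the scheduler calls run in their own transaction, so their implicit COMMITs never touch the trigger's transaction:

CREATE OR REPLACE PROCEDURE run_event_job (p_message IN VARCHAR2) IS
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    -- The implicit COMMITs from these scheduler calls stay inside
    -- this autonomous transaction.
    dbms_scheduler.set_job_argument_value('MYSCHEMA.TESTJOB', 1, p_message);
    dbms_scheduler.run_job('MYSCHEMA.TESTJOB');
END;
/

CREATE OR REPLACE TRIGGER trg_notify_change
    AFTER INSERT OR UPDATE OR DELETE ON some_table
BEGIN
    run_event_job('some_table changed by ' || USER);
END;
/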
I have a process flow, built by someone before me, which calls a very simple stored procedure. Upon completion of the procedure, the process flow has two transitions: one if the stored procedure was successful, the other if not. However, the stored procedure itself does not return anything that can be directly evaluated by the process flow, like a return result. If the procedure fails (with the ubiquitous max-extents problem), the flow takes the branch that calls a stored procedure for sending a failure email message. If it succeeds, the contrary occurs.
I had to tweak the procedure, so I created a new one. Now, whether it fails or succeeds, the success branch is called regardless. I have checked all of Oracle's docs on how to make this work and for the life of me cannot determine how to make it work correctly. I first posted this on the Oracle forum and got no responses. Does anyone have an idea how to make this work?
According to the Oracle Warehouse Builder guide:
When you add a transition to the canvas, by default, the transition has no condition applied to it.
Make sure that you have correctly defined a conditional transition as described in the Defining Transition Conditions section of the documentation.
A User Defined Activity will return an ERROR outcome if:
it raises an exception, or
it returns the value 3 and the Use Return as Status option is set to true (see the sketch after this list).
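A minimal sketch of the second pattern (the function name and body are illustrative; per the rule above, 3 signals ERROR when Use Return as Status is true, and I believe 1 maps to SUCCESS):

CREATE OR REPLACE FUNCTION my_activity RETURN NUMBER IS
BEGIN
    -- ... do the real work here ...
    RETURN 1;  -- assumed SUCCESS status code
EXCEPTION
    WHEN OTHERS THEN
        RETURN 3;  -- ERROR outcome when Use Return as Status = true
END;
/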
"However, the stored procedure itself does not return anything that
can be directly evaluated by the process flow like a return result."
This is the crux: if the operating procedure procedure produces no signal how can you tell whether it was successful? Indeed, what is the definition of success under this circumstance?
I don't understand why, when you had to "tweak the procedure", you wrote a new one instead of, er, tweaking the original procedure. The only way you're going to solve this is to get some feedback out of the original procedure.
At this point we run out of details. The direct option is to edit the original procedure so it passes back result info, probably through OUT parameters or else by introducing some logging capability; alternatively, re-write it to raise an exception on failure. The indirect option is to write some queries to establish what the procedure achieved on a given run, and then figure out whether that constitutes success.
Personally, re-writing the original procedure seems the better option.
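For example, a minimal sketch of the exception-raising approach (the procedure name and the placeholder work are illustrative):

CREATE OR REPLACE PROCEDURE my_proc IS
BEGIN
    -- placeholder for the procedure's real work
    UPDATE some_table
       SET processed = 'Y'
     WHERE processed = 'N';

    IF SQL%ROWCOUNT = 0 THEN
        -- An unhandled exception makes the activity take the error transition
        RAISE_APPLICATION_ERROR(-20001, 'my_proc: nothing to process');
    END IF;
END;
/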
If this answer doesn't help you then you need to explain more about your scenario. What your procedure does, how you need to evaluate it, why you can't re-write it.
I am a newbie in Developer2000.
I have an Oracle PL/SQL procedure (say, proc_submit_request) that fetches thousands of requests and submits them to the dbms_job scheduler. The call to dbms_job is coded inside a loop, once for each request fetched.
Currently, I have a button (say, a SUBMIT button) on an Oracle Forms screen; clicking it calls proc_submit_request.
The problem here is that control does not return to my screen until ALL of the fetched requests are submitted to dbms_job (this takes hours to complete).
The screen grays out and just the hour-glass appears until the procedure proc_submit_request completes.
proc_submit_request returns a message to the screen saying "XXXX requests submitted".
My requirement now is: once the user clicks the SUBMIT button, the screen should no longer gray out. The user should be able to navigate to other screens, not just be stuck on the submit screen until the called procedure completes.
I suggested running listeners (shell scripts and Perl stuff) that can listen for messages in a pipe and run the requests as background processes. But the user is asking me to fix the issue in the application rather than running listeners.
I've heard a little about the OPEN_FORM built-in.
Suppose, I have two forms namely Form-1 and Form-2. Form-1 calls Form-2 using OPEN_FORM.
Now are the following things possible using OPEN_FORM?
On calling open_form('Form-2', OTHER-ARGUMENTS...), control must remain in Form-1 (i.e. the user should not know that another form is being opened) and Form-2 should call proc_submit_request.
The user must be able to navigate to other screens in the application, but Form-2 must still be running until proc_submit_request is completed.
What happens if the user closes (exits) Form-1? Will Form-2 still be running?
Please provide answers or suggest a good solution.
Good thought on the Form-1, Form-2 scenario - I'm not sure if that would work or not. But here is a much easier way without having to fumble around with coordinating forms being hidden and running stuff, and coming to the forefront when the function actually returns...etc, etc.
Rewrite your function that runs the database jobs to run as an AUTONOMOUS_TRANSACTION. Have a look at the compiler directive PRAGMA AUTONOMOUS_TRANSACTION for more details on that. You must use this within a database function/package/procedure - it is not valid with Forms (at least forms 10, not sure about 11).
You can then store the job's result somewhere from your function (package variable, table, etc.) and use the CREATE_TIMER built-in in conjunction with the form-level trigger WHEN-TIMER-EXPIRED to check that storage location every 10 seconds or so - you can then display a message to the user about the jobs, and kill the timer using DELETE_TIMER.
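A minimal sketch of the timer part (the timer name and the job_status table are hypothetical):

-- In WHEN-BUTTON-PRESSED, after kicking off the work:
DECLARE
    l_timer TIMER;
BEGIN
    l_timer := CREATE_TIMER('check_jobs', 10000, REPEAT);  -- fires every 10 seconds
END;

-- Form-level WHEN-TIMER-EXPIRED trigger:
DECLARE
    l_done NUMBER;
BEGIN
    SELECT COUNT(*) INTO l_done FROM job_status WHERE status = 'DONE';
    IF l_done > 0 THEN
        MESSAGE('Requests submitted.');
        DELETE_TIMER('check_jobs');
    END IF;
END;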
You could create a single DBMS_JOB to call proc_submit_request. That way your form will only have to make one call; and the creation of all the other jobs will be done in a separate session.
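For example (a sketch; the COMMIT makes the submitted job visible to the job queue):

DECLARE
    l_job BINARY_INTEGER;
BEGIN
    dbms_job.submit(
        job  => l_job,
        what => 'proc_submit_request;');
    COMMIT;
END;
/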