This is a follow-up to this question.
I was having trouble with Oracle performing the eventcreate Windows command from DBMS_SCHEDULER.
As a workaround, I instead created a basic C# application to perform the same eventcreate function. It works on a basic level but I'm facing a few roadblocks.
Here is the program. (I'm not tagging C# on this question because the question is not about C#; the program is provided for context only.)
using System;
using System.Diagnostics;

class myEventCreateClass
{
    public static void Main(String[] args)
    {
        using (EventLog eventLog = new EventLog("Application"))
        {
            eventLog.Source = "MySource";
            eventLog.WriteEntry(args[0], EventLogEntryType.Warning, 218);
        }
    }
}
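For completeness: this assumes the event source already exists on the machine (as it evidently does here, since the program works). If it didn't, a one-time registration step, which typically requires admin rights, would be needed - something like:

// One-time setup; only needed if "MySource" is not already
// registered against the Application log (requires elevation).
if (!EventLog.SourceExists("MySource"))
{
    EventLog.CreateEventSource("MySource", "Application");
}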
I modified the DBMS_SCHEDULER job to this:
BEGIN
  sys.dbms_scheduler.create_job(
    job_name            => 'SYS.TESTJOB',
    job_type            => 'EXECUTABLE',
    job_action          => 'C:\myEventCreate.exe',
    job_class           => 'DEFAULT_JOB_CLASS',
    number_of_arguments => 1,
    auto_drop           => FALSE,
    enabled             => FALSE);
  sys.dbms_scheduler.set_job_argument_value('SYS.TESTJOB', 1, 'testing123');
  sys.dbms_scheduler.enable('SYS.TESTJOB');
END;
When I run this job manually under the SYS schema, it successfully places an event into the Windows event log that says:
testing123
This is where my success ends...
If I create the same job under a different schema (e.g. changing all instances of SYS.TESTJOB to MYSCHEMA.TESTJOB), it creates the job in that schema, but when I attempt to run the job (from any schema) I get the following long list of errors:
ORA-27370: job slave failed to launch a job of type EXECUTABLE
ORA-27300: OS system dependent operation:accessing job scheduler service failed with status: 2
ORA-27301: OS failure message: The system cannot find the file specified.
ORA-27302: failure occurred at: sjsec 6a
ORA-27303: additional information: The system cannot find the file specified.
ORA-06512: at "SYS.DBMS_ISCHED", line 185
ORA-06512: at "SYS.DBMS_SCHEDULER", line 486
ORA-06512: at line 1
And when I try to run SYS.TESTJOB from MYSCHEMA, it tells me the job doesn't exist:
ORA-27476: "SYS.TESTJOB" does not exist
ORA-06512: at "SYS.DBMS_ISCHED", line 185
ORA-06512: at "SYS.DBMS_SCHEDULER", line 486
ORA-06512: at line 1
How can I get this job working from a schema other than SYS?
One more problem (probably the bigger issue): I am trying to run this job from inside a trigger.
According to this question, changing the settings of a DBMS_SCHEDULER job (in my case, I'm attempting to change the job arguments each time before I run the job) causes an implicit COMMIT in Oracle, which is not allowed in triggers.
To me it seems misleading for Oracle to even label these as "arguments", because the values of the arguments are fixed inside the job, and changing the arguments means changing the job itself.
Anyway, the accepted answer in this question says to use DBMS_JOB since this does not implicitly COMMIT, but I can't find a way to use DBMS_JOB to run an external .exe file.
Therefore, is it possible to modify this job somehow to allow dynamic job arguments?
I'm also open to other solutions, but from what I have read, DBMS_SCHEDULER seems to be the best way to accomplish this.
As requested, here is some context for what I am trying to accomplish:
At my company, we have it set up such that if an entry is placed into the Windows event log under a certain source (in this case, MySource as shown in the provided C# application), a text message containing the content of the logged message is automatically sent to the cell phones of myself and a few other admins.
This is extremely useful as it gives us an immediate notification that some event of importance happened, and we can control exactly which events we want to include and what specific information about these events we want to be notified of.
Here are some examples of what we currently get notified about via text message:
The stopping or starting of any of our custom applications (and who stopped/started it if it didn't crash).
When any of our custom applications are taken into or out of watchdog control (and who did this).
When certain "known issues" arise or are about to arise that we haven't fully fixed yet. This allows us to get "ahead of the game" so that we can deal with it proactively rather than waiting for someone to tell us about it.
I want to extend this functionality to some events in our Oracle database (which is why I am trying to place an event into the event log based on a trigger in Oracle).
Here are some things I have in mind as of now that we want to be notified of via text message, all of which can be determined inside a trigger:
When anyone not in a certain "approved" list of users (which would be our admins plus the custom applications with connections to Oracle) connects to our Oracle database. This can be accomplished with a logon trigger. (Actually, I already have this one working, since logon triggers are called by the SYS schema, so I'm not hitting the problem of other schemas being unable to run the job. But... since I still can't change any arguments, the best I can currently do is say "Someone" not approved logged into the Oracle database.... It would be a lot more useful if I could pass the username to the Windows event log.)
When anything besides our custom applications changes data in our Oracle database. (Our custom applications handle all of the inserts/updates/deletes etc. Only in very rare cases would we need to manually modify something. We want to be notified when anyone [including myself or other admins] modifies anything in the database.) This can be accomplished with an update/insert/delete trigger for each table.
The reason it works under SYS is that SYS is a specially privileged account. For other schemas you need to create a credential and map it to the job.
The solution is to create a credential with DBMS_SCHEDULER.CREATE_CREDENTIAL, using an OS account that has sufficient privileges, and assign this credential to your job.
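For example, something along these lines (the credential name, OS account, and password are placeholders; the account needs sufficient Windows rights, e.g. "Log on as a batch job"):

BEGIN
  DBMS_SCHEDULER.CREATE_CREDENTIAL(
    credential_name => 'WIN_CRED',              -- placeholder name
    username        => 'MYDOMAIN\oracle_jobs',  -- placeholder OS account
    password        => 'secret');

  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'MYSCHEMA.TESTJOB',
    attribute => 'credential_name',
    value     => 'WIN_CRED');
END;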
As for the trigger issue, to be honest, I don't know yet.
Edit - solution using Oracle's autonomous transaction facility
Following the OP's update and reaction to the comments:
Based on the workflow, I think it is better to use Oracle's internal notification mechanisms to do this responsive auditing. Trying to hack your way into the Windows event log via an external application adds another unnecessary layer of complexity.
I would create a table within the DB where I would store all the events, and on top of that table I would create a job with notifications (SMS, mail, etc.) which would run whenever a change to the log table occurs.
In order to use triggers when an error occurs, you should use PRAGMA AUTONOMOUS_TRANSACTION in the routine the trigger calls (it effectively gives you a subtransaction). This allows you to commit the logging DML you need while the rest of the transaction is rolled back.
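A minimal sketch of that pattern (the table, procedure, and trigger names are all made up):

CREATE TABLE app_event_log (
  logged_at DATE DEFAULT SYSDATE,
  username  VARCHAR2(128),
  message   VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE log_event (p_message IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO app_event_log (username, message)
  VALUES (SYS_CONTEXT('USERENV', 'SESSION_USER'), p_message);
  COMMIT;  -- commits only this autonomous transaction, not the caller's work
END;
/

-- e.g. from the logon trigger mentioned in the question:
CREATE OR REPLACE TRIGGER trg_notify_logon
AFTER LOGON ON DATABASE
BEGIN
  log_event('Logon by ' || SYS_CONTEXT('USERENV', 'SESSION_USER'));
END;
/

The job watching app_event_log can then do the actual notification, outside the triggering transaction.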
The permissions problem is already resolved in the other answer. For the "commit inside a trigger" problem, there is PRAGMA AUTONOMOUS_TRANSACTION. See the bottom of this link for an example: https://docs.oracle.com/cd/B14117_01/appdev.101/b10807/13_elems002.htm. It does exactly what you want.
Related
Currently there is one place in our code that uses RAISE_APPLICATION_ERROR:
IF v_cl_type_row.code = 'B_DOCUMENT' THEN
  RAISE_APPLICATION_ERROR(-20000, '*A message for the user*');
END IF;
It's not an error that happens here; the user just gets an informational message. Because it is currently raised as an error, it also gets logged as an error, which it is not. So I have been tasked with somehow returning this as an info message instead, to avoid the unnecessary logging.
But is that even possible with Oracle inbuilt functions? I haven't been able to find a way how to do something like that.
We are using Oracle 19c.
IF-THEN-ELSE means PL/SQL. Code written in that language is executed within the database. PL/SQL isn't "interactive".
If you use a tool like SQL*Plus (command line) or a GUI (such as SQL Developer, TOAD, etc.), you could use DBMS_OUTPUT.PUT_LINE instead of RAISE_APPLICATION_ERROR because - as you already noticed - RAISE stops execution; the user sees the message, but there's nothing they can do to make the procedure continue its work.
In your example, it would be
IF v_cl_type_row.code = 'B_DOCUMENT' THEN
  DBMS_OUTPUT.PUT_LINE('*A message for the user*');
END IF;
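In SQL*Plus, remember that output has to be enabled first, otherwise the messages are silently discarded (your_procedure below stands in for whatever procedure contains that IF):

SET SERVEROUTPUT ON
EXEC your_procedure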
Note that you'd have to wait until the procedure finishes to see all the messages at once; Oracle won't display them one by one as the code reaches that point.
However, while that's fine for debugging purposes, users most probably don't use any of the tools I previously mentioned. If it were e.g. an Oracle Forms or Apex application, that code would work (i.e. it wouldn't raise a compilation error), but nobody would see anything because these front ends aren't capable of displaying such a message.
In Forms, for example, you could use the MESSAGE built-in (the text would then be displayed in the status line at the bottom of the screen) or an alert (a pop-up window which lets the user acknowledge the message and move on, i.e. code execution would proceed).
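For illustration (INFO_ALERT is a hypothetical alert object that would have to be defined in the form):

-- status line, bottom of the screen
MESSAGE('*A message for the user*');

-- or a pop-up the user has to acknowledge before execution proceeds
DECLARE
  v_choice NUMBER;
BEGIN
  v_choice := SHOW_ALERT('INFO_ALERT');
END;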
Therefore, it depends on which tool you actually use to develop that application and its capabilities of interaction with end users.
We have a legacy application, and many users connect to it. We know about our own jobs and our DB maintenance activities, but we also see many other users accessing the production system. We want to capture a bare minimum of extended events to see who these third-party users are and what queries they are running.
Our Extended Events Session Current Configuration:
We added the events below. We have applied filters for our databases on the server. We are writing to a disk file target with a 5 GB limit and recycling the files, to avoid bloating the file system.
module_end (additional event field: statement)
rpc_completed (additional event field: statement)
sql_batch_completed (additional event field: batch text)
We are capturing the global fields below.
client_app_name
database_id
nt_username
sql_text
username
But even this is overwhelming for the production system, so we are trying to reduce the amount we capture.
Our Planned Changes for minimal extended events capture:
Apply a filter to exclude the known users from the event capture, in addition to the database filters
Capture only the rpc_completed and sql_batch_completed events
Capture only the client_app_name, database_id, and username global fields, since we can get the SQL text from the event field statement
Our Question:
Please suggest whether we have configured our Extended Events session minimally, or whether you would recommend further changes to the event session.
Thanks for your help.
UPDATE: Our modification script for reference
ALTER EVENT SESSION [Audit_UserActivities] ON SERVER
    DROP EVENT sqlserver.module_end,
    DROP EVENT sqlserver.rpc_completed,
    DROP EVENT sqlserver.sql_batch_completed

ALTER EVENT SESSION [Audit_UserActivities] ON SERVER
    ADD EVENT sqlserver.rpc_completed (
        ACTION (sqlserver.client_app_name, sqlserver.database_id, sqlserver.username)
        WHERE ([sqlserver].[like_i_sql_unicode_string]([sqlserver].[database_name], N'DBPrefix%')
            OR ([sqlserver].[equal_i_sql_unicode_string]([sqlserver].[database_name], N'DBName')
                AND [sqlserver].[username] <> N'DBSysadminUser'))),
    ADD EVENT sqlserver.sql_batch_completed (
        SET collect_batch_text = (1)
        ACTION (sqlserver.client_app_name, sqlserver.database_id, sqlserver.username)
        WHERE ([sqlserver].[like_i_sql_unicode_string]([sqlserver].[database_name], N'DBPrefix%')
            OR ([sqlserver].[equal_i_sql_unicode_string]([sqlserver].[database_name], N'DBName')
                AND [sqlserver].[username] <> N'DBSysadminUser')))
GO
I would not expect the Extended Events session in your question, with a file target, to generally be impactful on a healthy server. There are a few additional things you can do to mitigate impact, though.
There is a known issue capturing TVP RPC events that's fixed in SQL Server 2016+, including Azure SQL Database. I believe the problem still exists in older versions and is very costly with large TVPs. Your recourse in SQL Server 2012 is to exclude TVP RPC events with a filter.
Consider specifying a larger buffer size (e.g. MAX_MEMORY=100MB, depending on your instance memory). Also specify ALLOW_MULTIPLE_EVENT_LOSS to mitigate the impact of tracing on your workload for high-frequency events, since some event loss is acceptable in this tracing scenario.
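For example (a sketch only; session-level options can generally only be changed while the session is stopped, and the right memory figure depends on your instance):

ALTER EVENT SESSION [Audit_UserActivities] ON SERVER STATE = STOP
GO
ALTER EVENT SESSION [Audit_UserActivities] ON SERVER
WITH (MAX_MEMORY = 100 MB, EVENT_RETENTION_MODE = ALLOW_MULTIPLE_EVENT_LOSS)
GO
ALTER EVENT SESSION [Audit_UserActivities] ON SERVER STATE = START
GO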
I am running DB queries in the UFT framework. During execution, if data is deleted in the backend tables, UFT throws an error message box like the one below. I want to skip the error, continue, and report the error description in the XML result sheet.
Error Message:
Run Error:
Either BOF or EOF is True, or the current record has been deleted. Requested operation requires a current record
I am trying to add a recovery scenario that calls an error function and proceeds to the next step. My error function is as below:
Function RecoveryFunction1(Object, Method, Arguments, retVal)
    If Err.Number <> 0 Then
        error1 = Err.Description
        Reporter.ReportEvent micWarning, "Error occurred:", "", "Error Details: " & error1
        Err.Clear
    End If
End Function
I have added the above function to the function library and associated the recovery scenario in the test settings, but the error message still pops up.
Please help me handle the error message at run time instead of writing an 'On Error Resume Next' statement in each and every function.
You need to change the way you request the view of the database so that later modifications or deletions will not affect your query.
Take a look at the docs for the ADO Recordset. It specifies different cursor types, some of which will allow you to keep a static view of the data as it was when you ran the query, while others give live views. I'm not sure which one is best for your specific use case, so you should experiment and see which one works for you.
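For instance, a static snapshot would look roughly like this in UFT's VBScript (the connection string, query, and column name are placeholders):

' ADO cursor/lock constants (plain VBScript doesn't predefine them)
Const adOpenStatic = 3
Const adLockReadOnly = 1

Set conn = CreateObject("ADODB.Connection")
conn.Open "DSN=MyDataSource"   ' placeholder connection string

Set rs = CreateObject("ADODB.Recordset")
rs.Open "SELECT * FROM my_table", conn, adOpenStatic, adLockReadOnly

' A static cursor is a snapshot: rows deleted in the backend afterwards
' no longer invalidate the current record. Still guard with BOF/EOF:
Do While Not rs.EOF
    ' process rs.Fields("some_column").Value here
    rs.MoveNext
Loop

rs.Close
conn.Close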
Failing that, you can try a more heavy-handed approach, which is to begin a database transaction before you do your select; that should isolate your data processing from any external changes. However, that may be undesirable if your process takes a long time, as it may lock out other processes until you end your transaction and yield the locks on the rows you're looking at. Again, it depends on your specific database environment and the systems that interact with it.
Update
I should have added from the outset - this is in Microsoft Dynamics CRM 2011
I know CRM well, but I'm at a loss to explain behaviour on my current deployment.
Please read the outline of my scenario to help me understand which of my presumptions / understandings is wrong (and therefore what is causing this error). It's not consistent with my expectations.
Basic Scenario
Requirement demands that a web service is called every X minutes (it adds pending items to a database index)
I've opted to use a workflow / custom entity trigger model (i.e. I have a custom entity which has a CREATE plugin registered. The plugin executes my logic. An accompanying workflow is started when "completed" time + [timeout period] expires. On expiry, it creates a new trigger record and the workflow ends).
The plugin logic works just fine. The workflow concept works fine to a point, but after a period of time the workflow stalls with a failure:
This workflow job was canceled because the workflow that started it included an infinite loop. Correct the workflow logic and try again. For information about workflow logic, see Help.
So in a nutshell - standard infinite loop detection. I understand the concept and why it exists.
Specific deployment
Firstly, I think it's quite safe for us to ignore the content of the plugin code in this scenario. It works fine, it's atomic and hardly touches CRM (to be clear, it is a pre-event plugin which runs the remote web service, awaits a response and then sets the "completed on" date/time attribute on my Trigger record before passing the Target entity back into the pipeline). So long as a Trigger record is created, this code runs and does what it should.
Having discounted the content of the plugin, there might be an issue that I don't appreciate in having the plugin registered on the pre-create step of the entity...
So that leaves the workflow itself. It's a simple one. It runs thusly:
On creation of a new Trigger entity...
it has a Timeout of Trigger.new_completedon + 15 minutes
on timeout, it creates a new Trigger record (with no "completed on" value - this is set by the plugin remember)
That's all - no explicit "end workflow" (though I've just added one now and will set it testing...)
With this set-up, I manually create a new Trigger record and the process spins nicely into action. Roll forward 1h 58m (based on the last cycle I ran - remembering that my plugin code may take a minute to finish running): after 7 successful execution cycles (i.e. new workflow jobs being created and completed), the 8th one fails with the aforementioned error.
What I already know (correct me where I'm wrong)
Recursion depth, by default, is set to 8. If a workflow / plugin calls itself 8 times then an infinite loop is detected.
Recursion depth is reset every one hour (or 10 minutes - see "Warnings" in linked blog?)
Recursion depth settings can be set via PowerShell or SDK code using the Deployment Web Service in an on-premise deployment only (via the Set-CrmSetting Cmdlet)
What I don't want to hear (please)
"Change recursion depth settings"
I cannot change the Deployment recursion depth settings as this is not an option in an online scenario - ultimately I will be deploying to CRM Online too.
"Increase the timeout period on your workflow"
This is not an option either - the reindex needs to occur every 15 minutes, ideally sooner.
Update
@Boone suggested below that the recursion depth counter is reset after 60 minutes of inactivity, rather than every 60 minutes. Therein lies the first misunderstanding.
While discussing with @alex, I suggested that there may be some persistence of CorrelationId between creating an entity via the workflow and the workflow that ultimately gets spawned... Well, there is. The CorrelationId is the same in both the plugin and the workflow, and in any records that spool from that thread. I am now looking at ways to decouple the CorrelationId (or perhaps the creation of records) from the entity and the workflow.
For the one-hour "reset" to take place, you have to have NO activity for an hour; it doesn't reset just one hour after the original activity. So since you have activity every 15 minutes, it never has a chance to reset. I don't know that this is set in stone anywhere... but that's my experience.
In CRM 4 it was possible to create a CRM Service (Google creating a CRM service in the child pipeline) and reset the correlation ID (using CorrelationToken.NewToken()). I don't see anything as easy in the 2011 SDK. No idea whether this trick worked in the online environment. Is 2011 Online backwards compatible with CRM 4 plug-ins?
One thing you could try would be to use the IExecutionContext.CorrelationId to scavenge the asyncoperation (System Job) table. But according to the metadata, the attributes I think might be useful (CorrelationId, CorrelationUpdatedTime, Depth) are NOT valid for update. Maybe you could delete the rows? Even that may not help.
I doubt this can be solved like this.
I'd suggest a different approach: deploy a simple application alongside CRM and let it call the web service, which in turn can use the XRM endpoints in order to change the records.
UPDATE
Or, you can try something like this upon your CRM service initialization in the plugin (dug up from one of my plugins), leaving your workflow untouched:
CrmService service = new CrmService();
// initialize the service here, then...

CorrelationToken corToken = new CorrelationToken();
corToken.CorrelationId = context.CorrelationId;
corToken.CorrelationUpdatedTime = context.CorrelationUpdatedTime;

// WILD GUESS: enforce unlimited depth?
corToken.Depth = 0; // THIS WAS: context.Depth;

// updating the correlation token
service.CorrelationTokenValue = corToken;
I admit I don't really remember much about this (code dates back to about 2 years ago), but it might help.
I am a newbie in Developer2000.
I have an Oracle PL/SQL procedure (say, proc_submit_request) that fetches thousands of requests and submits them to the dbms_job scheduler. The call to dbms_job is coded inside a loop, once for each request fetched.
Currently, I have a button (say, a SUBMIT button) on an Oracle Forms screen, and clicking it calls proc_submit_request.
The problem here is that control does not return to my screen until ALL of the fetched requests have been submitted to dbms_job (this takes hours to complete).
The screen grays out and just the hourglass appears until proc_submit_request completes.
proc_submit_request then returns a message to the screen saying "XXXX requests submitted".
My requirement now is that once the user clicks the SUBMIT button, the screen should no longer gray out. The user should be able to navigate to other screens and not be stuck on the submit screen until the called procedure completes.
I suggested running listeners (shell scripts and Perl) that could listen for messages on a pipe and run the requests as background processes.
But the user is asking me to fix the issue in the application rather than by running listeners.
I've heard a little about the OPEN_FORM built-in.
Suppose, I have two forms namely Form-1 and Form-2. Form-1 calls Form-2 using OPEN_FORM.
Now are the following things possible using OPEN_FORM?
On calling open_form('Form-2', OTHER-ARGUMENTS...), control must stay in Form-1 (i.e. the user should not know that another form is being opened), and Form-2 should call proc_submit_request.
The user must be able to navigate to other screens in the application, but Form-2 must still be running until proc_submit_request is completed.
What happens if the user closes (exits) Form-1? Will Form-2 still be running?
Please provide me answers or suggest a good solution.
Good thought on the Form-1 / Form-2 scenario - I'm not sure whether that would work or not. But here is a much easier way, without having to fumble around with coordinating hidden forms running things and coming to the forefront when the function actually returns... etc.
Rewrite the function that runs the database jobs to run as an AUTONOMOUS_TRANSACTION. Have a look at the compiler directive PRAGMA AUTONOMOUS_TRANSACTION for more details. You must use this within a database function/package/procedure - it is not valid in Forms (at least Forms 10; not sure about 11).
You can then store the jobs' result somewhere from your function (package variable, table, etc.) and use the CREATE_TIMER built-in in conjunction with the form-level trigger WHEN-TIMER-EXPIRED to check that storage location every 10 seconds or so - you can then display a message to the user about the jobs and kill the timer using DELETE_TIMER.
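Roughly like this (a sketch; the timer name, polling table, and message are placeholders):

-- in the SUBMIT button's WHEN-BUTTON-PRESSED trigger, after starting the work:
DECLARE
  timer_id TIMER;
BEGIN
  timer_id := CREATE_TIMER('CHECK_JOBS', 10000, REPEAT);  -- fires every 10 seconds
END;

-- form-level WHEN-TIMER-EXPIRED trigger:
DECLARE
  v_done NUMBER;
BEGIN
  IF GET_APPLICATION_PROPERTY(TIMER_NAME) = 'CHECK_JOBS' THEN
    SELECT COUNT(*) INTO v_done
    FROM   job_run_status            -- hypothetical table your function writes to
    WHERE  status = 'COMPLETE';
    IF v_done > 0 THEN
      MESSAGE(v_done || ' requests submitted');
      DELETE_TIMER('CHECK_JOBS');
    END IF;
  END IF;
END;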
You could create a single DBMS_JOB that calls proc_submit_request. That way your form only has to make one call, and the creation of all the other jobs is done in a separate session.
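A minimal sketch of that, assuming proc_submit_request takes no parameters:

DECLARE
  l_job BINARY_INTEGER;
BEGIN
  DBMS_JOB.SUBMIT(
    job  => l_job,
    what => 'proc_submit_request;');
  COMMIT;  -- DBMS_JOB is transactional: the job only becomes visible after commit
END;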