Quartz: get the list of completed jobs - Spring

In Quartz 2.2, how can we find out whether a job has finished, and when? I have the job key.
First I tried to get the job's triggers and check their status:
// Look up all triggers associated with the job and inspect each one.
List<? extends Trigger> triggers = sched.getTriggersOfJob(jobKey);
for (Trigger trigger : triggers) {
    // e.g. check sched.getTriggerState(trigger.getKey()) here
}
But I found that Quartz deletes the job's trigger from the database when the trigger finishes successfully.
I googled and found http://forums.terracotta.org/forums/posts/list/6791.page
Quartz cleans up all of its own unused data so that an administrator
doesn't have to delete records filling up the database (many users
have millions of triggers firing repeatedly; it is impractical and
performance-hindering to keep all that data around).
If you want a history of when triggers have fired, implement a
TriggerListener and record the info yourself, much as the
LoggingTriggerHistoryPlugin does.
Quartz 2.2 is likely to add a history feature with new api for
retrieving the data.
On the other hand, I reviewed the Quartz API (http://www.quartz-scheduler.org/api/2.2.1/org/quartz/Trigger.CompletedExecutionInstruction.html) and found that the completed-execution instruction can be set to NOOP, RE_EXECUTE_JOB, SET_TRIGGER_COMPLETE, DELETE_TRIGGER, SET_ALL_JOB_TRIGGERS_COMPLETE, SET_TRIGGER_ERROR, or SET_ALL_JOB_TRIGGERS_ERROR.
I think this enumeration is mostly used for failed triggers, but I wonder if there is a way to make the trigger persist.
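Following the quoted advice, here is a minimal sketch of a TriggerListener that records completions; the class name is hypothetical and the println is a stand-in for persisting to your own history table:

import org.quartz.JobExecutionContext;
import org.quartz.Trigger;
import org.quartz.Trigger.CompletedExecutionInstruction;
import org.quartz.listeners.TriggerListenerSupport;

// Records when each trigger completes, since Quartz deletes finished triggers.
public class HistoryTriggerListener extends TriggerListenerSupport {
    @Override
    public String getName() { return "historyTriggerListener"; }

    @Override
    public void triggerComplete(Trigger trigger, JobExecutionContext context,
                                CompletedExecutionInstruction instruction) {
        // Stand-in for an insert into your own history table.
        System.out.println("Job " + trigger.getJobKey() + " completed at "
                + new java.util.Date() + " with instruction " + instruction);
    }
}

Register it with scheduler.getListenerManager().addTriggerListener(new HistoryTriggerListener()); before the triggers start firing.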

Related

Oracle scheduler JOBS: get argument_value used in a specific job instance

I have an Oracle Scheduler job configured in my 19c DB. It is an event-based job, with a queue table used in this process.
I need to analyse a possible problem in one of these executions. In the DBA_SCHEDULER_JOB_LOG table I have the instance and a JOB_SUBNAME called SCHED$_EVTPARINST_89121.
I don't have DBA privileges. I have two questions:
How can I get the parameter values passed to this scheduler job instance?
How can I get the log (history) of enqueued messages, with the enqueue date and the set_job_argument_value associated with each enqueued message?
I've checked the Oracle documentation (https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/scheduling-jobs-with-oracle-scheduler.html#GUID-E9B234FF-C7F3-44B0-AEC8-A997353C414F and https://docs.oracle.com/database/121/ARPLS/d_aq.htm#ARPLS004) without success.
Thanks.

Dynamics CRM Plugin can't retrieve records created earlier in the pipeline

I have a chain of synchronous events that take place.
a custom control calls an action
action creates a couple of records
action then triggers a plugin which tries to retrieve records that were created in step 2, but the query returns nothing
I suspect this is happening because all the events are in the same transaction and therefore the records they create are not yet committed to the database. Is this correct?
Is there an easy way to retrieve records that were created earlier in the pipeline, or am I stuck having to stuff the OutputParameter object into SharedVariables?

best practices with Oracle DBMS_Scheduler

Which is a better practice with Oracle DBMS_Scheduler:
keeping a job scheduled (but disabled) all the time, and enabling and running it when needed, or
creating the job, running it, and dropping it each time?
I have a table x, and whenever a record gets inserted into that table I need a job to process that record.
We may or may not always have record insertions.
Keeping this in mind, which is better?
Processing rows as they appear in a table in an asynchronous process can be done in a number of different ways, choose the way that suits you:
Add a trigger to the table which creates a one-off job to process the row using DBMS_JOB. This is suitable if the volume of data being inserted to the table is quite low, and you don't want your job running all the time. The advantage of DBMS_JOB is that the job will not start until the insert is committed; if it is rolled back, the job is also rolled back so doesn't run. The disadvantage is that if there is a sustained spike of activity, all the jobs created will crowd out any other jobs that are running.
Create a single job using DBMS_SCHEDULER which runs regularly, polls the table for new records and processes them. This method would need a column on the table that it can update to mark each record as "processed". For example, add a VARCHAR2(1) flag which is set to 'Y' on insert and set to NULL by the job after processing. You could add an index to that flag which will only store entries for unprocessed rows (so it will be small and fast). This method is much more efficient, especially for large data volumes, because each run of the job can effectively process large chunks of data in bulk at a time.
Use Oracle Advanced Queueing. http://docs.oracle.com/cd/E11882_01/server.112/e11013/aq_intro.htm#ADQUE0100
For (1), a separate job is created for each record in the table. You don't need to create the jobs. You do need to monitor them, however; if one fails, you would need to investigate and re-run manually.
For (2), you just create one job and let it run regularly. If one record fails, it can be picked up by the next iteration of the job. I would process each record in a separate transaction so the failure of one record doesn't affect the processing of the other records still in the queue (see the sketch after this answer).
For (3), you still create a job like (2) but instead of reading the table it pulls requests off a queue.
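A minimal sketch of option (2) in plain JDBC, assuming table x has an ID column and a PROCESSED_FLAG VARCHAR2(1) set to 'Y' on insert as described above (the column and class names are hypothetical):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class PollAndProcess {
    // Collect ids of unprocessed rows, then handle each in its own transaction.
    public static void run(Connection conn) throws SQLException {
        List<Long> pending = new ArrayList<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT id FROM x WHERE processed_flag = 'Y'")) {
            while (rs.next()) {
                pending.add(rs.getLong(1));
            }
        }
        conn.setAutoCommit(false);
        for (long id : pending) {
            try {
                // ... process the record identified by id ...
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE x SET processed_flag = NULL WHERE id = ?")) {
                    ps.setLong(1, id);
                    ps.executeUpdate();
                }
                conn.commit();   // one transaction per record, as suggested above
            } catch (SQLException e) {
                conn.rollback(); // a failed record is retried on the next run
            }
        }
    }
}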

Quartz scheduler: how do I set up dynamic job arguments

I'm setting up a Quartz-driven job in Spring. The job needs a single argument: the id of a database record that it can use to locate the data it needs to process.
The sequence is:
Job starts,
locates next available record id,
processes data.
Because the record id is unknown until the job starts, I cannot set it when I create the job. I also need to account for restarts if things go bad. From reading the Quartz docs it appears that if I store the record id in the trigger's JobDataMap, then when the server restarts, the job will automatically restart with the same record id it was originally started with.
This is where things get tricky: I'm trying to figure out where and when to get the record id so I can store it in the trigger's JobDataMap. I'm thinking I need to implement a TriggerListener and use it to set the record id in the JobDataMap when the triggerFired() callback is called. This will involve a call to the database to get the record id.
I'm not really sure if this approach is the correct one, or whether I'm barking up the wrong tree. Can someone with some Quartz experience tell me if this is correct, or if there is a better way to configure a job's arguments so that they can be set dynamically and preserved across a restart?
Thanks
Derek
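A minimal sketch of the TriggerListener approach described in the question; lookupNextRecordId() is a hypothetical DAO call, and whether a JDBC job store persists the updated map across restarts should be verified separately:

import org.quartz.JobExecutionContext;
import org.quartz.Trigger;
import org.quartz.listeners.TriggerListenerSupport;

// Puts the record id into the trigger's JobDataMap when the trigger fires,
// so the job can read it from its execution context.
public class RecordIdTriggerListener extends TriggerListenerSupport {
    @Override
    public String getName() { return "recordIdTriggerListener"; }

    @Override
    public void triggerFired(Trigger trigger, JobExecutionContext context) {
        long recordId = lookupNextRecordId(); // hypothetical database lookup
        trigger.getJobDataMap().put("recordId", recordId);
    }

    private long lookupNextRecordId() {
        // Query the database for the next available record id; 0 is a placeholder.
        return 0L;
    }
}

The job would then read context.getMergedJobDataMap().getLong("recordId") inside its execute() method.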

Quartz org.quartz.jobStore.selectWithLockSQL row lock

I am using Quartz in clustered mode.
I have some row lock contention at the DB level caused by excessive calls to:
org.quartz.jobStore.selectWithLockSQL
"SELECT * FROM QRTZ_LOCKS WHERE SCHED_NAME = :"SYS_B_0" AND LOCK_NAME = :1 FOR UPDATE"
I read the Quartz docs and it is still not very clear to me why the above query is executed.
What is the purpose of this row lock?
Regards
The locks table is used by Quartz to coordinate multiple schedulers when deployed in cluster mode. In a cluster only one node should fire each trigger, so a lock is used to prevent multiple nodes from acquiring the same trigger.
From the clustering section of the documentation (http://quartz-scheduler.org/generated/2.2.1/html/qs-all/#page/Quartz_Scheduler_Documentation_Set%2Fre-cls_cluster_configuration.html%23):
Clustering currently only works with the JDBC-Jobstore (JobStoreTX or
JobStoreCMT), and essentially works by having each node of the cluster
share the same database. Load-balancing occurs automatically, with
each node of the cluster firing jobs as quickly as it can. When a
trigger's firing time occurs, the first node to acquire it (by placing
a lock on it) is the node that will fire it.
In my case, I was experiencing a similar issue. I was using Quartz for running jobs whose logic involved fetching data from a foreign DB. Whenever the connection between the application DB and the foreign DB dropped for some reason and then came back up, the lock issue surfaced and we would get messages like this in the database logs:
2021-01-14 12:06:17.935 KST [46836] STATEMENT:
SELECT * FROM HVACQRTZ_LOCKS WHERE SCHED_NAME = 'schedulerFactoryBean' AND LOCK_NAME = $1 FOR UPDATE
2021-01-14 12:06:18.937 KST [46836] ERROR: current transaction is aborted, commands ignored until end of transaction block
To solve this issue I overrode this Quartz property, and after doing so the issue went away. By default the FOR UPDATE clause is at the end of the query, but since the default query is replaced by the one I wrote in the property file, the FOR UPDATE portion is gone; no locks appear now and everything seems to be working smoothly.
selectWithLockSQL: SELECT * FROM {0}LOCKS WHERE LOCK_NAME = ?
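For reference, a minimal sketch of setting this property programmatically rather than in quartz.properties; the class name is hypothetical, and the JDBC job store and data source settings are omitted and assumed to be configured as in the clustered setup:

import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class NoRowLockConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("org.quartz.scheduler.instanceName", "clusteredScheduler");
        // Replace the default lock query, which ends in FOR UPDATE.
        props.setProperty("org.quartz.jobStore.selectWithLockSQL",
                "SELECT * FROM {0}LOCKS WHERE LOCK_NAME = ?");
        // JDBC job store and data source properties omitted; assumed configured.
        Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
        scheduler.start();
    }
}

Note that, as the first answer above explains, this FOR UPDATE lock is what stops two clustered nodes from acquiring the same trigger, so removing it trades that coordination away.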
