Oracle scheduler job sometimes failing, without error message

I have created a DBMS Scheduler job that should write a short string to the alert log of each instance of the 9 databases on our two-node Oracle 11g RAC cluster, once every 24 hours.
The job action is:
'dbms_system.ksdwrt(2, to_char(sysdate+1,''rrrr-mm-dd'') || ''_First'');'
which should write a line like:
2014-08-27_First
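For completeness, here is a minimal sketch of how such a job might be defined end to end (the job name and schedule below are illustrative, not my exact definitions):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'WRITE_ALERT_MARKER',   -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'dbms_system.ksdwrt(2, to_char(sysdate+1,''rrrr-mm-dd'') || ''_First'');',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY',           -- once every 24 hours
    enabled         => TRUE);
END;
/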
The job runs successfully according to its log, and it does write what it's supposed to, but not always. It has only been scheduled for a few days, so I can't be certain, but it looks as if each run only writes to one instance's alert log. Logs on both sides do seem to be getting written to over time, but when an entry appears on one side it is missing from the other. There is, however, no indication whatever of any failure in the job itself.
Can anyone shed any light on this behaviour? Thanks.

Related

What is the correct use case for SchedulerLock lockAtMostFor?

I am using SchedulerLock in Spring Boot, and I am running 2 servers.
What I'm curious about is why "lockAtMostFor" exists as an option at all.
Take an example: on one of my 2 servers, the scheduled task runs first and takes the lock.
But something went wrong while it was running, and that server went down.
At this moment, my scheduled task ends incompletely.
Every guide I read is full of vague answers about "lock time in case a node dies".
When a node dies, it can no longer execute schedules, so why keep holding a lock for a dead node?
Even if I urgently try to execute the schedule manually on the 2nd server, that lock makes it impossible.
What is this option for?
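For context, with ShedLock's JDBC-based lock provider the lock is just a row in a database table, and lockAtMostFor is written into that row as an expiry timestamp. A sketch of the standard table, assuming the default table name shedlock from the ShedLock documentation:

CREATE TABLE shedlock (
  name       VARCHAR(64)  NOT NULL,  -- lock name, one row per scheduled task
  lock_until TIMESTAMP    NOT NULL,  -- set to now + lockAtMostFor when the lock is taken
  locked_at  TIMESTAMP    NOT NULL,
  locked_by  VARCHAR(255) NOT NULL,
  PRIMARY KEY (name)
);

A node acquires the lock by inserting this row, or by updating it when lock_until is already in the past. So nothing actively "releases" a dead node's lock; it simply expires once lock_until passes, and lockAtMostFor is the upper bound that guarantees the lock cannot be held forever by a node that died mid-run.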

Oracle job alert for DBMS job

I am trying to create an alert for a dbms scheduler job if it is running for a duration longer than expected. For example, if a job that usually takes 2 hours to run is now running for more than 2.5 hours, I want to be notified.
What would be the best way to do this? Can I use Oracle Enterprise Manager for this?
I achieved this by setting the max_run_duration attribute on the scheduler job.
An event is raised if the job's run time exceeds the value set in that attribute.
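A minimal sketch, assuming a job named MY_LONG_JOB (the name and duration are illustrative):

BEGIN
  -- raise a JOB_OVER_MAX_DUR event if a run exceeds 2.5 hours
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'MY_LONG_JOB',
    attribute => 'max_run_duration',
    value     => INTERVAL '150' MINUTE);
END;
/

The job keeps running past the threshold; the Scheduler just raises the JOB_OVER_MAX_DUR event on its event queue, which you can consume yourself or pick up in Oracle Enterprise Manager to drive the notification.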

Why is my Laravel batch failing randomly with a ModelNotFoundException?

I have a batch that is scheduled to run every day at 00:05.
On some days, a job in the batch fails immediately with a ModelNotFoundException.
However, the model that was not found does exist.
There were no changes to any of the models concerned (Field, Category, Condition) in the database.
There is also no code in the application that can delete the category.
Retrying the job manually from the Horizon dashboard makes the job pass.
The DBA said there are no errors in the logs and no scheduled maintenance at that time.
What could possibly cause this?

Run Flink with parallelism greater than 1

Maybe I'm just missing something, but I have no more ideas where to look.
I read messages from 2 sources, join them on a common key, and sink it all to Kafka.
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(3)
...
source1
  .keyBy(_.searchId)                        // partition the first stream by the join key
  .connect(source2.keyBy(_.searchId))       // connect to the second stream, keyed the same way
  .process(new SearchResultsJoinFunction)   // co-process function that joins matching records
  .addSink(KafkaSink.sink)                  // write joined results to Kafka
It works perfectly when I launch it locally, and it also works on the cluster with parallelism set to 1, but with 3 it no longer does.
When I deploy it to 1 job manager and 3 task managers and every task reaches the "RUNNING" state, after 2 minutes (during which nothing is coming to the sink) one of the task managers gets the following log:
https://gist.github.com/zavalit/1b1bf6621bed2a3848a05c1ef84c689c#file-gistfile1-txt-L108
and the whole thing just shuts down.
I'll appreciate any hint.
Thanks in advance.
The problem appears to be that this task manager -- flink-taskmanager-12-2qvcd (10.81.53.209) -- is unable to talk to at least one of the other task managers, namely flink-taskmanager-12-57jzd (10.81.40.124:46240). This is why the job never really starts to run.
I would check the logs of this other task manager to see what they say, and I would also review your network configuration. Perhaps a firewall is getting in the way?

How do I stop or drop a job from the Oracle Job Scheduler

Sounds easy, right? I have a job that is running that I'd like to stop (it's been running for way too long, and there is clearly a problem with it). Well, when I try to stop the job, I get this message:
ORA-27366: job "string.string" is not running. Cause: An attempt was made to stop a job that was not running.
However, when I try to drop the job entirely, because I REALLY don't want it running anymore, I get this message:
ORA-27478: job "string.string" is running. Cause: An attempt was made to drop a job that is currently running.
Really, Oracle? Make up your mind! Has anyone seen this before? How do I stop this rogue job without restarting the server?!?!
This has happened to us before, and we had to bounce the server. Very annoying.
You could try this:
DBMS_SCHEDULER.DROP_JOB(JOB_NAME => 'my_jobname');
OR
Try describing the job name as well; Oracle creates a table for a job as soon as it is created.
Try desc jobname;
Bear in mind the schema that created the job, and prefix the job name with it in the desc statement.
Then you can drop it with a DROP TABLE statement.
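If the job really is mid-run, a sketch of the usual way out (job name is illustrative; FORCE on STOP_JOB requires the MANAGE SCHEDULER privilege):

BEGIN
  -- force => TRUE terminates the running job slave instead of asking it
  -- to stop gracefully
  DBMS_SCHEDULER.STOP_JOB(job_name => 'MY_SCHEMA.MY_JOBNAME', force => TRUE);
  -- DROP_JOB also accepts force => TRUE, which stops the job first if needed
  DBMS_SCHEDULER.DROP_JOB(job_name => 'MY_SCHEMA.MY_JOBNAME', force => TRUE);
END;
/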
