Killing an Oracle job, 10g specific

We're using a job scheduling system that runs on top of DBMS_JOB. It uses a master job to create one-time jobs. We deploy the same set of jobs to all our clients, but can specify which jobs should only run at certain clients.
We get occasional problems with a process run by a job hanging. The main cause of this is UTL_TCP not timing out when it does not get an expected response. I want to be able to kill those jobs so that they can run again.
I'm looking at creating a new job that kills any of these one-time jobs that have been running for longer than a certain time.
We're stuck with Oracle 10g for a while yet, so I'm limited to what that can do.
There's an article that seems to cover most of this at
http://it.toolbox.com/blogs/database-solutions/killing-the-oracle-dbms_job-6498
I have a feeling that this is not going to cover all eventualities, including:
We can run jobs as several different users, and a user can only break/remove jobs they created. I believe that I may be able to use DBMS_IJOB to get around that, but I need to get the DBA to let me execute it.
We have Oracle RAC systems. I understand 10g limits ALTER SYSTEM KILL SESSION to killing sessions on the current instance. I could arrange for all jobs to run on the same instance, but I've not tried that yet.
Anything else I should consider? Stack Overflow needs a definitive answer on this.

You can get the PID from the job tables and kill the stuck process via the normal OS commands.
You can kill jobs on any instance. On 10g, you need to know on which instance the stuck job is running, and connect to that instance:
To get your instance and pid:
select inst_id, process from gv$session where ...
Connect to a specific instance:
sqlplus admin@node3 as sysdba
alter system kill session ...

There are more ways to kill a session in Oracle, depending on your platform. On Unix, sessions (background jobs too) are represented by processes: killing the process kills the session. On Windows, sessions are represented by threads: killing the thread using orakill kills the session. The process (or thread) ID is stored in gv$process.
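Joining gv$session to gv$process gives the OS process ID directly across RAC nodes; a sketch (the program filter is an assumption about how job-queue sessions are named, so verify it against your own gv$session output first):

```sql
-- Find the instance, session and OS process for running job-queue sessions.
select s.inst_id,
       s.sid,
       s.serial#,
       s.username,
       p.spid                          -- OS process id (Unix) / thread id (Windows)
from   gv$session s
       join gv$process p
         on  p.addr    = s.paddr
         and p.inst_id = s.inst_id
where  s.type = 'BACKGROUND'
and    s.program like '%(J%';          -- example filter for job-queue (Jnnn) sessions

-- Then, connected to the instance shown in inst_id:
--   alter system kill session '<sid>,<serial#>';
-- or at the OS level:
--   kill -9 <spid>                 (Unix)
--   orakill <ORACLE_SID> <spid>    (Windows)
```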

Related

What is the correct use case for SchedulerLock lockAtMostFor?

I am using SchedulerLock in Spring Boot, and I am running two servers.
What I'm curious about is why is "lockAtMostFor" an option that exists?
Take an example: on one of my 2 servers, the schedule runs first and then locks.
But something went wrong while running, and my server went down.
At this moment, my scheduled task ends incompletely.
Any guide I read is full of vague answers about "lock time in case a node dies".
When a node dies, it can no longer execute schedules.
But why keep holding a LOCK for a dead node?
Even if I urgently try to manually execute the schedule on the 2nd server, it is impossible to manually execute it because of the above lock.
What does this option exist for?

When an Oracle Scheduled Job is Enabled, is it run in separate session?

I've recently been learning to use Oracle's scheduler to run jobs asynchronously.
I'm trying to build jobs that only run once and then are automatically dropped.
The way I've accomplished this is to set the MAX_RUNS attribute of a job to 1 and the AUTO_DROP attribute to TRUE.
All my jobs are DISABLED by default as I only kick them off manually.
I noticed that the jobs were not being dropped and this Ask Tom thread explained why.
Thus I must enable my jobs first before running them if they are to be dropped automatically.
However when a job is enabled it is scheduled immediately.
My question is:
When a scheduled job is enabled, and thus immediately scheduled, does it execute in a separate session?
I need the jobs to all be scheduled asynchronously, and thus I am hoping to achieve the same behavior as:
DBMS_SCHEDULER.RUN_JOB(V_JOB_NAME, FALSE);
FALSE indicating asynchronous scheduling of the job in a separate session.
I am fine with the approach of enabling a job and having it immediately scheduled, as long as it's asynchronous in a separate session.
Alternatively, if there was a way to enforce that a job is NOT scheduled when it is enabled, that would work as well.
I am currently on Oracle 11gR2
If I had a bunch of asynchronous 1-time jobs that needed to be run, I would seriously consider using the old dbms_job package to do my scheduling rather than dbms_scheduler. Then you're just doing something like this
declare
   l_jobno pls_integer;
begin
   dbms_job.submit( l_jobno,
                    '<<what you want the job to do>>',
                    sysdate );
end;
/
This will cause a new session to be opened as soon as you commit the submit that runs whatever code you specified. It'll run once and then get removed from dba_jobs.
Alternatively, I'd try to re-architect the system so that there is a background "widget processor" job that runs every few minutes, reads a table to determine what widgets need to be processed, and processes those widgets. That probably makes more sense than spawning a separate job to process each widget.
dbms_scheduler is wonderful when you want more sophisticated processing where one job depends on another and jobs get started based on events. But old-school dbms_job can be really handy when you just need a very lightweight framework for very simple jobs.
I have found that by setting the START_DATE of a new job to a future date and the REPEAT_INTERVAL to something like 'FREQ=YEARLY' then the job will not run immediately when it is enabled.
While not ideal, this will allow the job to be dropped when run manually via the DBMS_SCHEDULER.RUN_JOB(V_JOB_NAME, FALSE); command.
However this does imply that the job is scheduled to run at some point in the future. So just don't forget to run it manually. That's fine in my case since everything is happening in a single call, but good to call out.
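As a sketch of the full workaround (the job name and action below are placeholders, and MAX_RUNS is set via SET_ATTRIBUTE since CREATE_JOB does not take it as a direct parameter):

```sql
begin
  dbms_scheduler.create_job(
    job_name        => 'MY_ONE_SHOT_JOB',                  -- placeholder name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin null; end;',                 -- placeholder action
    start_date      => systimestamp + interval '10' year,  -- far future, so no immediate run
    repeat_interval => 'FREQ=YEARLY',
    auto_drop       => true,
    enabled         => false);

  dbms_scheduler.set_attribute('MY_ONE_SHOT_JOB', 'max_runs', 1);
  dbms_scheduler.enable('MY_ONE_SHOT_JOB');

  -- Kick it off asynchronously in a separate session; auto_drop then removes it.
  dbms_scheduler.run_job('MY_ONE_SHOT_JOB', use_current_session => false);
end;
/
```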

How to make an Agent work in the system session?

I'm trying to write a background program that needs to connect to the window server, which is not allowed in a daemon. But my job is otherwise best suited to the [System] session in which daemons run.
I have tried setting the session type of the Agent (Aqua by default):
LimitLoadToSessionType: System
But it didn't work. Is this wrong? How can I do it?
Maybe [LoginWindow] + [Aqua] together could match the right sessions, but between the two sessions the program would need to be shut down and relaunched.
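For reference, the session type is set in the agent's launchd plist, either as a single string or as an array to target several session types; a minimal sketch (the label and program path are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.myagent</string>          <!-- placeholder label -->
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/myagent</string>   <!-- placeholder path -->
    </array>
    <!-- Load in both the pre-login and post-login GUI sessions -->
    <key>LimitLoadToSessionType</key>
    <array>
        <string>LoginWindow</string>
        <string>Aqua</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```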

Open a JDBC connection in a specific AS400 subsystem

I have a web service that calls some stored procedures on an AS400 via JTOpen.
What I would like is for the connections used to call the stored procedures to be opened in a specific subsystem with a specific user, instead of the current default (qusrwrk/quser).
I think I should be able to clone the qusrwrk subsystem so that it starts with a specific user, but what I cannot figure out is the mechanism to open the connection in that specific subsystem.
I guess there should be a connection-level property saying subsystem=MySubsystem.
But unfortunately I haven't found such a property.
Any hint would be appreciated.
Flavio
Let the system take care of the subsystem the database server job is started in.
You should just focus on the application (which is what IBM i excels in).
If need be, you can tweak subsystem parameters for QUSRWRK to improve performance by allocating memory, etc.
The system uses a pool of prestarted jobs as described in the FAQ: When I do WRKACTJOB, why is the host server job running under QUSER instead of the profile specified on the AS400 object?
To improve performance, the host server jobs are prestarted jobs running under QUSER. When the Toolbox connects to a host server job in order to perform an API call, run a command, etc, a request is sent from the Toolbox to an available prestarted job. This request includes the user profile specified on the AS400 object that represents the connection. The host server job receives the request and swaps to the specified user profile before it runs the request. The host server itself originally runs under the QUSER profile, so output from the WRKACTJOB command will show the job as being owned by QUSER. However, the job is in fact running under the profile specified on the request. To determine what profile is being used for any given host server job, you can do one of three things:
1. Display the job log for that job and find the message indicating which user profile is used as a result of the swap.
2. Work with the job and display job status attributes to view the current user profile.
3. Use Navigator for i to view all of the server jobs, which will list the current user of each job. You can also use Navigator for i to look at the server jobs being used by a particular user.

How to run only failed sessions in a workflow

In a workflow there are sessions connected in parallel and in sequence. Suppose some of the sessions, both parallel and sequential, have failed. How do I restart the workflow running only the failed sessions? How can I design this in Informatica?
Turn on 'suspend on error' for the workflow.
Turn on 'restart on recovery' for each session in the workflow.
Now if any session fails, the workflow will be suspended until you fix the problem and hit Recover on the workflow in the Monitor. When you do so, only the failed sessions are restarted.
A large publishing client asked us to implement something similar to what you asked. We created a database table to keep track of successful sessions within a workflow. Each session has a mapping at the end that adds an entry to the database saying whether it passed or failed. When we run in recovery mode, we query the database at the beginning of each session to find out whether we need to run that session or not.
We also provided a web interface to this table where business users can manually choose which sessions to run or skip based on their needs.
The recovery option will work only if you have "workflow recovery" turned on in the repository. If you don't, then you can check the option "fail workflow if task fails" at the individual session level and create conditions on the links that connect the sessions to each other. The disadvantage of this method is that your workflow will appear failed and won't execute subsequent sessions until the failed ones are fixed.
