Job action string too long - Oracle

I'm trying to create a job that will sync two databases at midnight. There are 10 tables that need to be synced, and it's a very long PL/SQL script. When I set this script as the JOB_ACTION and try to create the job, I get "string value too long for attribute job action". What do you suggest I do? Should I separate the script into 10? Isn't there a way to make the job run the code as a script? If I do it manually, all 10 anonymous blocks get executed one after another. I need something that will kind of press F5 for me at midnight.

What you need is a DBMS_SCHEDULER chain, in which each action is a separate step, and the steps can even be executed at the same time.
http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_sched.htm
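For illustration, a minimal sketch of such a chain (all names are hypothetical; sync_table1 and sync_table2 stand for procedures wrapping each table's sync block, and the program/step calls would be repeated for all ten tables):

BEGIN
  -- One short program per table's sync logic; the long script is split up
  DBMS_SCHEDULER.CREATE_PROGRAM(
    program_name   => 'sync_table1_prog',
    program_type   => 'PLSQL_BLOCK',
    program_action => 'BEGIN sync_table1; END;',
    enabled        => TRUE);
  -- ...create sync_table2_prog .. sync_table10_prog the same way...

  DBMS_SCHEDULER.CREATE_CHAIN(chain_name => 'sync_chain');
  DBMS_SCHEDULER.DEFINE_CHAIN_STEP('sync_chain', 'step1', 'sync_table1_prog');
  DBMS_SCHEDULER.DEFINE_CHAIN_STEP('sync_chain', 'step2', 'sync_table2_prog');

  -- Starting several steps from one rule runs them at the same time
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE('sync_chain', 'TRUE', 'START step1, step2');
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE('sync_chain',
    'step1 COMPLETED AND step2 COMPLETED', 'END');
  DBMS_SCHEDULER.ENABLE('sync_chain');

  -- The job's action is just the chain name, so it stays well under the limit
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'nightly_sync_job',
    job_type        => 'CHAIN',
    job_action      => 'sync_chain',
    repeat_interval => 'FREQ=DAILY;BYHOUR=0',
    enabled         => TRUE);
END;
/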

Related

Should I create SYNC jobs only in SQLake?

Should we always be just creating sync jobs as our general rule of thumb in Upsolver SQLake?
In most cases, yes, you want to use sync jobs. The only case where you don't want a sync job is when a table has an input that you don't want to wait for.
Example: you have 5 jobs that write to a table and some jobs that read from that table. If you don't want the entire pipeline to get stuck when one of the 5 jobs is stuck, then your pipeline needs to be unsync (or at least the specific job that you think may get stuck should be unsync).
Note: unsync is not a keyword. CREATE JOB creates an unsync job by default; CREATE SYNC JOB creates a sync job.
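For illustration only, a sketch of the two forms, loosely following the quickstart pattern in Upsolver's docs (the connection, bucket, and table names are placeholders, and the exact COPY FROM options are an assumption to verify against the SQLake documentation):

-- Default: an unsync job; readers of the target table do not wait for it
CREATE JOB load_orders_unsync
  CONTENT_TYPE = JSON
AS COPY FROM S3 my_s3_connection
  BUCKET = 'my-bucket'
  PREFIX = 'orders/'
INTO my_catalog.my_db.orders_raw;

-- With SYNC, downstream jobs reading orders_raw wait for this job's input
CREATE SYNC JOB load_orders_sync
  CONTENT_TYPE = JSON
AS COPY FROM S3 my_s3_connection
  BUCKET = 'my-bucket'
  PREFIX = 'orders/'
INTO my_catalog.my_db.orders_raw;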

Informatica PC restart workflow with different sql query

I am using Informatica PC.
I have a workflow which has a SQL query.
The query is like "select t1, t2, t3 from table where t1 between date '2020-01-01' and date '2020-01-31'".
I need to download all data between 2020 and 2022, but I can't write that in one query because I will get an ABORT SESSION from Teradata.
I want to write something which will restart the workflow with different dates automatically.
The first run should take 01.2020, the second 02.2020, the third 03.2020, and so on.
How can I solve this problem?
This is a long solution and can be achieved in two ways. Using only a shell script will give you a lot of flexibility.
First of all, parameterize your mapping with two mapping parameters. Use them in the SQL like below.
select t1, t2, t3 from table where t1 between date '$$START_DT' and date '$$END_DT'
The idea is to change them at each run.
Using only a shell script - It's flexible because you can handle as many runs as you want with this method. You need to call the shell script from a Command task.
Create a master file which has data like this:
2020-01-01,2020-01-31
2020-02-01,2020-02-29
2020-03-01,2020-03-31
Create three Informatica parameter files using the above entries. The first file (file1) should look like this:
[folder.workflow.session_name]
$$START_DT=2020-01-01
$$END_DT=2020-01-31
Use file1 in a pmcmd call to kick off the Informatica workflow. Please add -wait so it waits for the workflow to complete.
Loop the above steps until all entries of the master file are processed, as sketched below.
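A sketch of that loop, assuming a master file like the one above (the service, domain, folder, workflow, credential, and path names are placeholders):

#!/bin/bash
# One pass per master-file line; each line is "start_date,end_date"
while IFS=',' read -r start_dt end_dt; do
    # Rewrite the parameter file for this run ($ is escaped so the
    # literal $$ parameter names land in the file)
    cat > /infa/params/monthly.param <<EOF
[folder.workflow.session_name]
\$\$START_DT=${start_dt}
\$\$END_DT=${end_dt}
EOF
    # -wait blocks until the workflow completes; stop looping if a run fails
    pmcmd startworkflow -sv INT_SVC -d DOMAIN -u "$INFA_USER" -p "$INFA_PWD" \
        -f folder -paramfile /infa/params/monthly.param -wait workflow \
        || { echo "run for ${start_dt} failed" >&2; exit 1; }
done < master.txt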
Using the Informatica-only method - This method is not as flexible as the above and is applicable only to your question.
Create a shell script that creates three parameter files using the above master file.
Create three sessions or three worklets which use the above three parameter files. You need to be careful to use the correct parameter file for the correct session.
You can attach those sessions/worklets one after another or run them in parallel.

Create One Time Scheduled Job, Run When Others Not Running

I want to create a Scheduled Job in Oracle11g Express.
I am new to job scheduling and my search so far points to chains, but as I want to create a job out of a function that runs irregularly at a yet-unknown date, I believe chains won't work for my case.
The job will be created when a procedure finishes, which determines its scheduled date X.
The job will perform some critical changes, which is why I don't want it to start while other regularly scheduled jobs are running.
I want it to wait until the other jobs finish.
I only want to have it run once and then drop the job.
Is there some good practice for this case, or an option I have missed?

Shell script for hourly run to pull data if exists

I am trying to optimize our batch process for pulling and inserting data into a database. Currently, we pull our data from a data source, create a text file, and load it into our reporting database. That runs on a time schedule in Autosys, since most of the time the data is available by a certain time. However, lately the data source has been late, we are not able to pull the data during the scheduled time, and we have to run the shell script manually.
I want a shell script that runs the queries every hour and, if the data exists, spools it to a file to be loaded into the DB. If the data isn't there, it should try again the next hour, so that we can eliminate any manual intervention.
I know I can set up a file trigger in Autosys to run the load into the database if the file exists, but I am having issues setting up the shell script to only pull the data once it is available and not repeat the next hour if the file has already been spooled. I am new to UNIX, so I am not sure how to proceed. Any help would be great.
You haven't stated your priority clearly. The priorities could be:
load the data as soon as it is available
load the data at least once every x minutes or hours
eliminate any need for manual intervention (which is clear from your question)
This is what you could do, assuming there is no need to load the data as soon as it is available:
increase the frequency of the Autosys job (instead of hourly, maybe make it every 30 or 15 minutes)
change the script so that:
it attempts a load only if it has been x minutes since the last successful load; otherwise it does nothing and ends in success
it stores the last successful load timestamp in a file (which is touched only upon a successful load)
If data doesn't arrive even after x + some buffer minutes, it might make more sense for the load job to fail so that it gets the required attention. A sketch of such a script follows.
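A sketch of that guard, assuming the spool and load steps already exist as their own scripts (all paths, script names, and the 60-minute gap are placeholders):

#!/bin/bash
# Called by Autosys every 15-30 minutes; loads at most once per gap
STAMP=/data/load/last_success.stamp    # touched only after a successful load
MIN_GAP=60                             # minutes that must pass between loads

# Nothing to do if the last successful load is recent enough
if [ -f "$STAMP" ] && [ -z "$(find "$STAMP" -mmin +"$MIN_GAP")" ]; then
    exit 0
fi

# Spool the data; spool_data.sh stands in for your extract query
/data/load/spool_data.sh /data/load/extract.txt || exit 1

# Load and stamp only if the spool produced a non-empty file
if [ -s /data/load/extract.txt ]; then
    /data/load/load_to_db.sh /data/load/extract.txt || exit 1
    touch "$STAMP"
fi
exit 0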

Writing autosys job information to Oracle DB

Here's my situation: we have no access to the Autosys server other than using the autorep command. We need to keep detailed statistics on each of our jobs. I have created some Oracle database tables that will store start/end times, exit codes, JIL, etc.
What I need to know is: what is the easiest way to output the data we require (all of which is available in the Autosys tables that we do not have access to) to an Oracle database?
Here are the technical details of our system:
autosys version - I cannot figure out how to get this information
Oracle version - 11g
We have two separate environments - one for UAT/QA/IT and several PROD servers
Do something like below:
Create a table with the columns you want to capture. Add a key column which should be auto-generated. The JIL column should be able to handle large values. Also add a column for SYSDATE.
Create a shell script. Inside it, do as follows:
Run autorep -j <job_pattern> -l0 to get all the jobs you want and put them in a file. -l0 is to ignore duplicate jobs: if a box contains a job, then without -l0 you will get the job twice.
Create a loop and read the job names one by one.
In the loop, set variables for job name/start time/end time/status (all of which you can get from autorep -j <job>). Then use a variable to hold the JIL, from autorep -q -j <job>.
Append all these variable values to a flat file.
End the loop. After exiting the loop you will end up with a file with all the job details.
Then use SQL*Loader to put the data into your Oracle table. You can hardcode a control file and use it for every run, but the content of the data file will change with every run. A sketch of the whole script is below.
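A sketch of that script (the job pattern, DB credentials, and the awk field positions are assumptions to check against your actual autorep output format):

#!/bin/bash
DATAFILE=/tmp/job_stats.dat
> "$DATAFILE"

# -l0 so a job inside a box is not listed twice; skip autorep's header lines
autorep -j MYAPP% -l0 | awk 'NR > 3 && NF > 0 {print $1}' > /tmp/jobs.txt

while read -r job; do
    # Field positions assume the usual "job  last-start  last-end  status" layout
    line=$(autorep -j "$job" | awk -v j="$job" \
        '$1 == j {print $2" "$3 "|" $4" "$5 "|" $6; exit}')
    IFS='|' read -r start_t end_t status <<< "$line"
    # Flatten the JIL to one line so it fits a single delimited record
    jil=$(autorep -q -j "$job" | tr '\n' ' ')
    echo "${job}|${start_t}|${end_t}|${status}|${jil}" >> "$DATAFILE"
done < /tmp/jobs.txt

# job_stats.ctl is the hardcoded control file mapping the pipe-delimited
# fields to the table's columns
sqlldr userid=scott/tiger@orcl control=job_stats.ctl data="$DATAFILE" log=/tmp/job_stats.log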
Let me know if any part is not clear.
