In Autosys R11, I need job_b to run only if job_a succeeded within the last X hours. Apparently, R11 allows you to set a look-back dependency so that a job runs only if its predecessor has run to SUCCESS within X hours. What would be the syntax?
s(job_a)
What would I add if I want my job to run only if job_a succeeded within the last 12 hours, for example?
Should be:
condition: success(job_a,12.00)
The general form is status(job_name, hhhh.mm), where hhhh.mm is the look-back window in hours and minutes.
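Put together in JIL, a minimal sketch would look like this (the machine and command values are placeholders for your environment):
/* sketch: job_b runs only if job_a went to SUCCESS within the last 12 hours */
insert_job: job_b
job_type: CMD
machine: somehost
command: /path/to/job_b.sh
condition: s(job_a,12.00)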
Currently, I have a sequence job in DataStage.
Here is the flow:
StartLoop Activity --> UserVariables Activity --> Job Activity --> Execute Command --> Endloop Activity
The job will run every 30 minutes (8 AM - 8 PM) to get real data. The first loop iteration will load data from 8 PM the previous day to 8 AM the current day, and the others will load data that happens in the last 30 minutes.
The UserVariables Activity passes a variable (an SQL statement) that filters the data retrieved in the Job Activity. In the first iteration it passes variable A (SQL statement 1) to the Job Activity; from the second iteration on, it passes variable B (SQL statement 2).
In the Execute Command activity I currently run 'sleep 1800' so the job sleeps 30 minutes to end each loop iteration. But I realized that this is affected by how long each iteration actually runs. Since I know next to nothing about shell scripting, I searched for solutions and ended up with the script below, which sleeps until the next :00 or :30 minute mark (it can be off by up to a minute, but that's fine).
The shell script is below. It runs fine on my system, but I have had no success making it part of the job.
#!/bin/bash
# Sleep until the next :00 or :30 minute mark.
minute=$((10#$(date +%M)))   # force base 10 so minutes like 08 or 09 are not read as invalid octal
num_1=30
num_2=60
if [ $minute -le 30 ]
then
    wait=$(( (num_1 - minute) * num_2 ))
else
    wait=$(( (num_2 - minute) * num_2 ))
fi
sleep $wait
I am now facing two problems that I need your help with.
1. The job runs the first iteration fine with variable A below:
select * from my_table where created_date between trunc(sysdate-1) + 20/24 and trunc(sysdate) + 8/24;
But from the second iteration on, the Job Activity fails with variable B below:
select * from my_table where created_date between trunc(sysdate-1/48, 'hh') + 30*trunc(to_number(to_char(sysdate-1/48,'MI'))/30)/1440 and trunc(sysdate, 'hh') + 30*trunc(to_number(to_char(sysdate,'MI'))/30)/1440;
In the parallel job, the log said:
INPUT,0: The following SQL statement failed: select * from my_table where created_date between trunc(sysdate-1/48, hh) + 30*trunc(to_number(to_char(sysdate-1/48,MI))/30)/1440 and trunc(sysdate, hh) + 30*trunc(to_number(to_char(sysdate,MI))/30)/1440.
I suspect the parallel job failed because the single quotes around hh and MI were removed.
Is it because passing a variable from the UserVariables Activity to the Job Activity strips all the quotes? And how can I fix this?
2. How can I make the shell script above part of the job, e.g. as an Execute Command activity or some other stage? I have searched for solutions and I think it involves the ExecSH Before/After Routine Activity, but after reading the IBM pages I still don't know where to start.
Sorry for putting two questions in one post and making it so long, but they are closely related, so answering them separately would take more time and you would need more background for each.
Thank you!
Try escaping the single quote characters (precede each with a backslash).
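For example, variable B would then be passed as something like this (a sketch following the suggestion above; the exact escaping may depend on how the sequence evaluates the expression):
select * from my_table where created_date between trunc(sysdate-1/48, \'hh\') + 30*trunc(to_number(to_char(sysdate-1/48,\'MI\'))/30)/1440 and trunc(sysdate, \'hh\') + 30*trunc(to_number(to_char(sysdate,\'MI\'))/30)/1440;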
Execute the shell script from an Execute Command activity ahead of the Job activity.
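As a rough sketch (the script path is a placeholder, and the property names refer to the Execute Command activity): set Command to /bin/sh and Parameters to /path/to/wait_until_half_hour.sh, so each iteration waits for the next :00/:30 mark before the Job activity runs.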
I'm new to Spike and RISC-V. I'm trying to do a dynamic instruction trace with Spike. The instructions come from a simple.c file. I have tried the following commands:
$ riscv64-unknown-elf-gcc simple.c -g -o simple.out
$ riscv64-unknown-elf-objdump -d --line-numbers -S simple.out
But these commands only show the statically assembled instructions from the output file, which is not what I want. I need to trace the dynamically executed instructions at runtime. I found only two related options among Spike's host options:
-g - track histogram of PCs
-l - generate a log of execution
I'm not sure whether these give the result I expect.
Does anyone have an idea how to do the dynamic instruction trace in spike?
Thanks a lot!
Yes, you can call spike with -l to get a trace of all executed instructions.
Example:
$ spike -l --isa=RV64gc ~/riscv/pk/riscv64-unknown-elf/bin/pk ./hello 2> ins.log
Note that this trace also contains all instructions executed by the proxy-kernel - rather than just the trace of your user program.
The trace can still be useful, e.g. you can search for the start address of your code (i.e. look it up in the objdump output) and consume the trace from there.
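For example, something like this (a shell sketch; '<main>' and the address are placeholders you would take from your own objdump output, and note that spike prints addresses zero-padded to 16 hex digits as in the excerpt below):
$ riscv64-unknown-elf-objdump -d simple.out | grep -A2 '<main>:'
$ grep -n -m1 '0x0000000000010178' ins.log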
Also, when your program invokes a syscall you see something like this in the trace:
[.. inside your program ..]
core 0: 0x0000000000010088 (0x00000073) ecall
core 0: exception trap_user_ecall, epc 0x0000000000010088
core 0: 0x0000000080001938 (0x14011173) csrrw sp, sscratch, sp
[.. inside the pk ..]
sret
[.. inside your program ..]
That means you can skip over the syscall instructions (which are executed in the pk) by searching for the next sret.
Alternatively, you can call spike with -d to enter debug mode. Then you can set a breakpoint on the first instruction of interest in your program (until pc 0 YOURADDRESS - look up the address in the objdump output) and single step from there (by hitting return multiple times). See also the help screen by entering h at the spike prompt.
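For illustration, such a session might look roughly like this (a sketch reusing the pk/hello paths from above; the breakpoint address is a placeholder taken from the objdump output):
$ spike -d --isa=RV64gc ~/riscv/pk/riscv64-unknown-elf/bin/pk ./hello
: until pc 0 0x10178
: reg 0 a0
: q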
I'm making a WebSocket application, and need to get the current Pause Time of an Agent.
When I call the QueueStatus action, the response is a QueueMember event,
and in JSON it comes back as something like this:
{
  "ActionID": "WelcomeStatus/7000",
  "CallsTaken": "0",
  "Event": "QueueMember",
  "InCall": "0",
  "LastCall": "0",
  "LastPause": "1568301325",
  "Location": "Agent/7000",
  "Membership": "dynamic",
  "Name": "Agent/7000",
  "Paused": "1",
  "PausedReason": "Almoço",
  "Penalty": "0",
  "Queue": "queue1",
  "StateInterface": "Agent/7000",
  "Status": "4"
}
Note that "LastPause", "PausedReason" and "Paused" are returned.
"LastPause" always shows some crazy number that I don't understand.
So, how can I get the current pause time of an agent from Asterisk 15?
--EDIT:
By retesting, I found that the cause is that I am also submitting a pause reason.
If I do not send the reason, the pause time works normally.
Thanks for your help.
Searching the Asterisk forums, I found this release note:
Bugs fixed in this release:
ASTERISK-27541 - app_queue: Queue paused reason was (big number) secs ago when reason is set (Reported by César Benjamín García Martínez)
But this release is for Asterisk 16, not for Asterisk 15.
I decided to look for the issue in the C source files, and I found the bug.
Remember, I have to recompile Asterisk, because I am changing things directly in the source code.
So if you need to perform this procedure, do it in a test environment before rolling it out to production.
Open the file:
/usr/src/asterisk-15.7.3/apps/app_queue.c
And search for this line:
mem->reason_paused, (long) (time(NULL) - mem->lastcall), ast_term_reset());
Change it to:
mem->reason_paused, (long) (time(NULL) - mem->lastpause), ast_term_reset());
And on this line:
"LastPause", (int)mem->lastpause,
Change to:
"LastPause", (long) (time(NULL) - mem->lastpause),
I think that's it... All AMI requests and CLI commands now return the correct information for me, and it works nicely with my AMI socket.
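For reference, after editing app_queue.c the rebuild is roughly the usual cycle (a sketch; adjust paths and the restart method to your installation):
cd /usr/src/asterisk-15.7.3
make
make install        # run as root
asterisk -rx "core restart now"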
I have an infinite loop which uses the AWS CLI to get the microservice names and their parameters, like desired tasks, number of running tasks, etc., for an environment.
There are hundreds of microservices running in an environment. I have a requirement to compare the value of the ECS running-tasks metric for a particular microservice in the current loop iteration with that of the previous iteration.
Say a microservice X has a running-tasks value of 5. As it is an infinite loop, after some time the loop comes around to microservice X again. Now, let's assume the running-tasks value is 4. I want to compare the running-tasks value of the current iteration, which is 4, with the value from the previous iteration, which is 5.
If you are asking a generic question of how to keep a previous value around so it can be compared to the current value, just store it in a variable. You can use the following as a starting point:
#!/bin/bash
previousValue=0
while read v; do
echo "Previous value=${previousValue}; Current value=${v}"
previousValue=${v}
done
exit 0
Say the above script is called testval.sh and you have an input file called test.in with the following values:
2
1
4
6
3
0
5
Then running
./testval.sh <test.in
will generate the following output:
Previous value=0; Current value=2
Previous value=2; Current value=1
Previous value=1; Current value=4
Previous value=4; Current value=6
Previous value=6; Current value=3
Previous value=3; Current value=0
Previous value=0; Current value=5
If the skeleton script works for you, feel free to modify it for however you need to do comparisons.
Hope this helps.
I don't know how your input looks exactly, but something like this might be useful for you:
The script
#!/bin/bash
# Remember the last seen task count per app in an associative array.
declare -A app_stats
while read app tasks
do
    if [[ ${app_stats[$app]} -ne $tasks && ! -z ${app_stats[$app]} ]]
    then
        echo "Number of tasks for $app has changed from ${app_stats[$app]} to $tasks"
    fi
    app_stats[$app]=$tasks
done < input.txt
The input
App1 2
App2 5
App3 6
App1 6
The output
Number of tasks for App1 has changed from 2 to 6
Regards!
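Applied to ECS, the same idea might look roughly like this (a sketch; the cluster name and polling interval are placeholders, and it assumes the standard aws ecs list-services / describe-services calls):
#!/bin/bash
# Sketch: remember the previous runningCount per service and report changes.
cluster="my-cluster"                       # placeholder cluster name
declare -A prev_running
while true
do
    for service in $(aws ecs list-services --cluster "$cluster" --query 'serviceArns[]' --output text)
    do
        running=$(aws ecs describe-services --cluster "$cluster" --services "$service" \
            --query 'services[0].runningCount' --output text)
        if [[ -n ${prev_running[$service]} && ${prev_running[$service]} -ne $running ]]
        then
            echo "Running tasks for $service changed from ${prev_running[$service]} to $running"
        fi
        prev_running[$service]=$running
    done
    sleep 60                               # placeholder polling interval
done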
I'm having a lot of trouble automating my .R files, and trouble understanding the information about it. But here goes:
I'm using Windows 7 and simply want to automatically run an R script every morning at 08:00. The .R file produces its output by itself, so I don't want a separate output file. I've created a .bat file like this:
"C:\R\R-3.0.1\bin\x64\Rscript.exe" "C:\R\R-3.0.1\bin\x64\Scripts\24AR_v1bat.R"
Echo %DATE% %TIME% %ERRORLEVEL% >> C:\R\R-3.0.1\bin\x64\scripts\24AR_v1.txt
When I run this manually, it works perfectly, both with and without the:
--default-packages=list
When I run it through the cmd window, it works perfectly. Yet when I run it through the Task Scheduler, it runs but does not work (I get either a 1 or a 2 in my error-message file).
I've looked at R Introduction - Invoking R from the command line, and help(Rscript) but I still can't manage to get it to work.
NEW EDIT: I found that removing the MS SQL call lets my code run from the scheduler. Not sure if I should make this a new question?
EDIT: Adding the R-script
# 24 Hour AR-model, v1 ----------------------------------------------------
#Remove all variables from the workspace
#rm(list=ls())
# Loading Packages
library(forecast)
#Get spot-prices System from 2012-01-01 to today
source("/location/Scripts/SQL_hourlyprices.R")
sys <- data.frame()
sys <- spot
rm(spot)
# Ordering the data, first making a matrix with names: SYS
colnames(sys) <- c("date","hour","day","spot")
hour <-factor(sys[,2])
day <-factor(sys[,3])
dt<-sys[,1]
dt<-as.Date(dt)
x<-sys[,4]
q <-ts(x, frequency=24)
x0<- q[hour==0]
x1<- q[hour==1]
x0 <-ts(x0, frequency=7)
x1 <-ts(x1, frequency=7)
# ARIMA MODELS
y0<-Arima(x0,order=c(2,1,0))
y1<-Arima(x1,order=c(2,1,1))
fr0 <- forecast.Arima(y0,h=1)
fr1 <- forecast.Arima(y1,h=1)
h1<-as.numeric(fr0$mean)
h2<-as.numeric(fr1$mean)
day1 <-Sys.Date()+1
atable<-data.frame()
runtime<-Sys.time()
atable<-cbind(runtime,day1,h1,h2)
options(digits=4)
write.table(atable, file="//location/24ar_v1.csv",
append=TRUE,quote=FALSE, sep=",", row.names=F, col.names=F)
But as I said, I can manually run the code with the batch-file and have it work perfectly, yet with the scheduler it won't work.
After hours of trying everything, it seems the problem was that I had:
source("/location/Scripts/SQL_hourlyprices.R")
Where I simply had a SQL-call inside:
sqlQuery(dbdata2, "SELECT CONVERT(char(10), [lokaldatotid],126) AS date,
DATEPART(HOUR,lokaldatotid) as hour,
DATENAME(DW,lokaldatotid) as dag,
pris as spot
FROM [SpotPriser] vp1
WHERE (vp1.boers_id=0)
AND (vp1.omraade_id=0)
AND lokaldatotid >='2012-01-01'
GROUP BY lokaldatotid, pris
ORDER BY lokaldatotid, hour desc") -> spot
When I moved this directly into the script and deleted the source line, the script would run with the scheduler.
I have no idea why....
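For completeness, inlining the call means the script itself has to hold the connection setup as well; a rough sketch (assuming the connection uses RODBC, which is where sqlQuery() comes from; the DSN name and credentials are placeholders):
library(RODBC)
dbdata2 <- odbcConnect("MyDSN", uid = "user", pwd = "password")  # placeholder DSN and credentials
spot <- sqlQuery(dbdata2, "SELECT ...")                          # the query shown above
odbcClose(dbdata2)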