Export all failed jobs from Control-M Enterprise - export-to-excel

I'm looking for a way to export all failed jobs (resolved and not) of a day into a file (text, CSV, XML, ...).
As it stands, I will not be able to check all the resolved/forced-OK jobs that failed throughout the day unless I do it manually by placing them in a spreadsheet.
Does anybody know if there is such a utility? We're currently using Control-M/Server version 7.0.

You can schedule a job to do so:
Run the script below as a command-line job, passing two arguments, %%PARM1 %%PARM2.
You need to update three fields in it:
1. The NDP time of your environment; I have used 0930.
2. The Control-M environment name.
3. Your email ID in the last line (the mailx call) and the file path as per your system.
Note: you can use mutt -a if mailx -a is not working on your system for sending email with an attached file.
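For reference, a hedged equivalent mutt invocation (mutt attaches with -a and expects -- before the recipient) would look like:
echo " Last 24 Hour failed job list " | mutt -s "Failed Job list for $1" -a "/absolute/path/to/${1}_failed.csv" -- youremail@domain.com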
----------------------------------
Now the job definition:
Job Type: Command
File Path: not required
Command: path/report.sh %%PARM1 %%PARM2
Leave everything else as normal, but don't forget to define PARM1 and PARM2 as AutoEdit variables:
PARM1 = %%$PREV
PARM2 = %%$DATE
-------------------------
Script
***********************************
report.sh
------------------------------------------------
#!/bin/bash
env="<Control-M environment name>"   # use your Control-M environment name
# list the log between NDP times (update 0930 to the NDP time of your environment) and keep only failed jobs
ctmlog list $1 0930 $2 0930 | grep NOTOK > ${1}_failedjob.txt
# keep date, time, job name, order ID and status, converting the delimiter to commas
cut -d'|' -f2,3,4,5,8 ${1}_failedjob.txt | sed 's/|/,/g' > ${1}_failed.csv
# add a header and trailer, writing the result back so the attached CSV contains them
awk 'BEGIN { print "DATE,TIME,JOBNAME,ORDERID,STATUS" }
     { print }
     END { print "Report generated" }' ${1}_failed.csv > ${1}_report.csv && mv ${1}_report.csv ${1}_failed.csv
rm ${1}_failedjob.txt
echo " Last 24 Hour failed job list " | mailx -s "Failed Job list for $1" -a "/absolute/path/to/${1}_failed.csv" youremail@domain.com
exit 0
-------------------------
Apart from using this, you can always ask your ops team to send a report by exporting the failed jobs for a particular time and date from the Control-M EM GUI.

Related

Create a dynamic header based from the output on the file (BASH)

I have a file that is quite dynamic, in the sense that Host and iSCSI_Name may be short or long. The formatting works well for hosts with a long iqn name like the sample below, but it doesn't work for hosts with shorter names.
I would appreciate it if someone has done this or has a previous script that creates a dynamic header that follows the length of the given data, or vice versa.
With a long iqn name:
MAPPING
==================================================================================
Host Status iSCSI_Name State
==================================================================================
irefr-esz-011 online iqn.2000-01.com.vmware:irefr-esz-011-312901 active
==================================================================================
With a short iqn and host name the output looks like this; the data is not aligned:
MAPPING
==================================================================================
Host Status iSCSI_Name State
==================================================================================
esz1 online irefr-esz-011-312901 active
==================================================================================
Currently, I have this in my script:
echo -e "\e[96m MAPPING\e[0m"
echo "=================================================================================="
printf '%-12s %-30s %-21s %-30s %-20s\n' Host Status iSCSI_Name State
echo "=================================================================================="
cat outputfile | column -t
echo "=================================================================================="
echo
output file:
irefr-esz-011 online iqn.2000-01.com.vmware:irefr-esz-011-312901 active
esz1 online irefr-esz-011-312901 active
irefr-esz-011 online iqn.2000-01.com.vmware:irefr-esz-011-312901 active
esz1 online irefr-esz-011-312901 active
irefr-esz-011 online iqn.2000-01.com.vmware:irefr-esz-011-312901 active
esz1 online irefr-esz-011-312901 active
What about:
SEP="===================================================================================="
(echo Host Status iSCSI_Name State; cat outputfile) | column -t | \
sed "1 s/\(.*\)/$SEP\n\1\n$SEP/";echo $SEP
sed picks the first line and decorates it with a leading and a trailing separator.
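If you also want the separator length to follow the data instead of being fixed, a hedged variation on the same idea (same outputfile and header text as above) is to measure the widest line that column -t produces:
body=$( { echo "Host Status iSCSI_Name State"; cat outputfile; } | column -t )
width=$(printf '%s\n' "$body" | awk 'length > w { w = length } END { print w }')
sep=$(printf '%*s' "$width" '' | tr ' ' '=')
echo "$sep"
printf '%s\n' "$body" | sed "1 s/.*/&\n$sep/"
echo "$sep"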

Adding Job Array elements in Slurm after submission

I'm trying to use a Slurm-operated cluster to run LS-DYNA (a finite-element simulation program with a limited number of licenses available on my cluster). I am trying to write my batch scripts so that I do not waste processing time due to this license limit (as well as to improve legibility when running 'squeue' commands) by using job arrays, but I'm having trouble making that work.
I want to run identical Bash scripts on a variety of FEM meshes, each of which I have organized into a different subfolder.
Given this folder structure on my cluster...
cluster root
|
...
|
|-+ my scratch space's root
|
|-+ this project
|
|--+ lat_-5mm
| |- runCurrentLine.bash
| |- other files
|
|--+ lat_-4.75mm
| |- runCurrentLine.bash
| |- other files
|
|--+ lat_-4.5mm
| |- runCurrentLine.bash
| |- other files
|
...
|
|--+ lat_5mm
| |- runCurrentLine.bash
| |- other files
|
|
|-sendDynaRuns.bash
|-other dependencies
...I'm trying to submit "runCurrentLine.bash" in each folder by running the following script in my login node.
#!/bin/bash
iter=0
for foldernow in */; do
# change to subdirectory for current line iteration
cd "./${foldernow}";
# make Slurm and user happy
echo "sending LS Dyna simulation for ${pos}mm line..."
sleep 1
# first line only: send batch, and get job ID
if [ "${iter}" == 0 ];then
# send the batch...
jobID=$(sbatch -J "Dyna" --array="${iter}"%15 runCurrentLine.bash)
# ...ensure that Slurm's output shows on console (which includes the job ID)...
echo "${jobID}"
# ...and extract the job ID and save as a variable
jobID=$(echo "${jobID}" | grep -Eo '[+-]?[0-9]+([.][0-9]+)?')
# subsequent lines: add current line to job array
else
scontrol update --jobid="${jobID}" --array="${iter}"%15 runCurrentLine.bash
fi
# prepare to move onto next position
iter=$((iter+1))
cd ../
done
This setup properly sends the batch job for the first line, at -0.25mm*. However, from the second line onwards, it doesn't do the same thing. This is what I end up getting on my console:
*: I intended the lat_xmm folders to be in numerical order, but Unix doesn't seem to sort them that way
$ ./sendDynaRuns.bash
sending LS Dyna simulation for -0.25mm line...
Submitted batch job 1081040
sending LS Dyna simulation for 0.25mm line...
sbatch: error: Batch job submission failed: Invalid job id specified
sending LS Dyna simulation for -0.5mm line...
sbatch: error: Batch job submission failed: Invalid job id specified
I know that runCurrentLine.bash runs just fine if I manually send it as a batch (and it runs to completion within the time limit I specified in-file, mainly since it doesn't have to compete with other lines for open licenses). What should I do to be able to get my code to work?
Thank you in advance!
As stated by @Poshi, you cannot add jobs to an existing array.
I would create a submission script like this one:
#!/bin/bash
#SBATCH --array=1-<nb of folders>%15
# ALL OTHER SLURM SBATCH DIRECTIVES HERE
folders=(lat_*)
# SLURM_ARRAY_TASK_ID starts at 1 here, while bash arrays are 0-indexed
foldernow=${folders[$((SLURM_ARRAY_TASK_ID - 1))]}
cd "$foldernow" && ./runCurrentLine.bash
The only drawback is that you need to set the size of the array explicitly, based on the number of folders.
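One hedged way to avoid hard-coding that number (assuming the folder names contain no whitespace, and using submitArray.bash as a hypothetical name for the script above) is to count the folders at submission time:
nfolders=$(ls -d lat_*/ | wc -l)
sbatch -J "Dyna" --array=1-${nfolders}%15 submitArray.bash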

How to read every line from a txt file and print starting from the line which starts with "Created_Date" in shell scripting [duplicate]

This question already has answers here:
How to get the part of a file after the first line that matches a regular expression
(12 answers)
Closed 4 years ago.
5G_Fixed_Wireless_Dashboard_TestScedule||||||||||||||||^M
Report Run Date||08/07/2018|||||||||||||||||||||^M
Requesting User Company||NEW|||||||||||||||||||||^M
Report Criteria|||||||||||||||||||||||^M
" Service Job Updated from Date:
Service Job Updated to Date:
Service Job Created from Date: 08/06/2018
Service Job Created to Date:
Service Job Status:
Resolution Code:"|||||||||||||||||||||||^M
Created Date|Job Status|Schedule Date|Job
Number|Service Job Type|Verizon Customer Order
Number|Verizon Location Code|Service|Installation
Duration|Part Number
I want to print starting from the Created Date line. The result file should be something like below.
Created Date|Job Status|Schedule Date|Job
Number|Service Job Type|Verizon Customer Order
Number|Verizon Location Code|Service|Installation
Duration|Part Number
I have tried the following lines after being linked to some other questions, but my requirement is to print the result to the same file.
FILELIST=$(find $MFROUTDIR -maxdepth 1 -name "XXXXXX_5G_Order_*.txt")
for nextFile in $FILELIST;do
cat $nextFile | sed -n -e '/Created Date/,$p'
done
With the lines above, the output is printed on the console. Could you please suggest some way to print it to the same file?
This can be easily done with a simple awk command:
awk '/^Created Date/{p=1} p' file
Created Date|Job Status|Schedule Date|Job
Number|Service Job Type|Verizon Customer Order
Number|Verizon Location Code|Service|Installation
Duration|Part Number
We set a flag p to 1 when we encounter a line that starts with Created Date. From then on, awk's default action prints every line for which p==1.
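Since the requirement is to write the result back into the same file, a hedged sketch using the same awk program (the in-place option needs GNU awk 4.1+; otherwise go through a temporary file):
gawk -i inplace '/^Created Date/{p=1} p' "$nextFile"          # GNU awk 4.1+ only
awk '/^Created Date/{p=1} p' "$nextFile" > "$nextFile.tmp" && mv "$nextFile.tmp" "$nextFile"   # portable alternative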
References:
Effective AWK Programming
Awk Tutorial

Bash builtin read command difference from Korn shell

I use the following script to retrieve information about mounted file-systems on several hundred Solaris (v9,10,11) and Red Hat Enterprise Linux (v5,6,7) servers for analysis.
# retrieves for all mounted file-systems: server, device, allocated, used, available, percent_used, mount_directory, permissions, owner_name, and group_name
server=$(uname -n)
df -h | awk '
NF == 6 { print ($0); }
NF == 1 { device = $1; }
NF == 5 { print (device, " ", $0); }
' | while read device allocated used available percent mount
do
ls -ld "${mount}" | read permissions links owner_name group_name size month day time directory
echo "${server} ${device} ${allocated} ${used} ${available} ${percent} ${mount} ${permissions} ${owner_name} ${group_name}"
done
I perform this operation from Windoze using PuTTY "plink" utility.
plink -m filesys.script server_name >>filesys.txt
All worked as expected until my default shell was changed from ksh to bash on all servers. Now, the second read command that obtains ls output for permissions, owner_name, and group_name is not functioning and does not produce any error messages either. Therefore the result is that only seven tokens are in output (server through mount) and there is nothing for permissions, owner_name, or group_name.
I have confirmed that if I upload the script to the Unix server with a shebang (#!/bin/ksh) on the top line, the script works as expected. However, I do not want to push this script to hundreds of servers and maintain it in a distributed fashion. I would like to retain the script on the central Windoze workstation and call it with the -m parameter of plink. Placing a shebang at the top of the file does not make plink -m execute ksh.
The Bash shell versions that are in play are 3.2 and 4.1. I have also made certain that the Windoze script file has carriage returns removed. The awk utility is used to handle situations where the device name is too long and df breaks the output over two lines.
Again, the first read (from df/awk) is working fine but the second (ls output) is not. I confirmed this by placing a 'set' after the second read; those variables were not in the environment.
The read (as a pipe element) happens in a subshell, so even though it actually does execute perfectly, once that pipeline exits its results aren't available to the echo running on a separate line (as part of the parent process that originally spawned the pipeline). This is fully allowed by POSIX; which component of a pipeline, if any, is performed by the shell spawning that pipeline is unspecified by the standard and thus implementation-defined.
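A minimal bash illustration of that behavior (nothing specific to the job at hand):
echo "hello world" | read a b
echo "first: $a"                                        # prints "first: " - read ran in a subshell
echo "hello world" | { read a b; echo "second: $a"; }   # prints "second: hello" - same pipeline element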
You can address the issue by putting the echo inside of the same pipeline element as the read:
server=$(uname -n)
df -h | awk '
NF == 6 { print ($0); }
NF == 1 { device = $1; }
NF == 5 { print (device, " ", $0); }
' | while read device allocated used available percent mount
do
# NOTE: parsing output from "ls" is unreliable
ls -ld "${mount}" | {
read permissions links owner_name group_name size month day time directory
echo "${server} ${device} ${allocated} ${used} ${available} ${percent} ${mount} ${permissions} ${owner_name} ${group_name}"
}
done
References:
BashFAQ #24 (I set variables in a loop that's in a pipeline. Why do they disappear after the loop terminates? Or, why can't I pipe data to read?)
ParsingLs (Why you shouldn't parse the output of ls(1))
If you have GNU stat or find, either of which allows you to provide a format string to control metadata output, I would strongly suggest using them in place of ls -l for parsing metadata. Even perl is somewhat better for the purpose, having only a single universally available implementation with uniform stat behavior between releases.
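As a hedged sketch of that suggestion (assuming GNU coreutils stat is available on the target host, which is true for the RHEL servers but may not be for the Solaris ones), the ls/read pair inside the loop could become:
# %A = permissions, %U = owner name, %G = group name
read permissions owner_name group_name < <(stat -c '%A %U %G' "${mount}")
echo "${server} ${device} ${allocated} ${used} ${available} ${percent} ${mount} ${permissions} ${owner_name} ${group_name}"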

Use multiple column variables in bash script to pull output from routers

I have a script that logs on to routers and pulls output that is named routerauto. I would like to use data from a text file to automatically populate required commands to pull required info from a large number of routers.
Ultimately I would like the script to move through each line of the text file, filling in the gaps with the output from the columns as below. The text file uses tab as separator.
routerauto VARIABLE1 "sh service id VARIABLE2 sap VARIABLE4 detail"
Example data:
hostnamei serv-id cct sap
london-officei 123456 No987654321 8/1/4:100
Example output:
routerauto london-office "sh service id 123456 sap 8/1/4:100 detail"
Here is a bash-only solution:
#!/bin/bash
while read hostnamei servid cct sap; do
echo routerauto $hostnamei \"sh service id $servid sap $sap detail\"
done < <(tail -n +2 sample.data)
Producing given your sample file:
routerauto london-officei "sh service id 123456 sap 8/1/4:100 detail"
Please note this assumes no spaces are allowed in your various data fields.
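If any field may legitimately contain spaces, a hedged refinement (taking the question at its word that the file is tab-separated) is to split on tabs only:
while IFS=$'\t' read -r hostnamei servid cct sap; do
    echo routerauto "$hostnamei" \"sh service id "$servid" sap "$sap" detail\"
done < <(tail -n +2 sample.data)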
