Add string to default SAS log file name from bash script - bash

I'm trying to build a simple bash script to automate running some monthly SAS programs at work. The problem I'm running into is that we like to keep logs based on the day a program was run, in case the underlying data changes, but I can't find a way to append the date to the log file.
My base code is as follows:
#!/bin/bash
month=`date +%Y%m -d "1 month ago"` #Previous month for log folder
sysdate=`date "+%Y_%m_%d"` #today's date
sasbatdir=/c01/sasdata/public
sasdir=/n04/directory-where-programs-are
saslog=/n04/directory-where-programs-are/Log/$month
cd $sasdir
$sasbatdir/batchsas.sh -s PROGRAM_01.sas -o $saslog -k traditional
$sasbatdir/batchsas.sh -s PROGRAM_02.sas -o $saslog -k traditional
$sasbatdir/batchsas.sh -s PROGRAM_03.sas -o $saslog -k traditional
... etc
exit 0
So the above works, but it obviously only outputs log files with the name PROGRAM_01.log, PROGRAM_02.log, etc. format, which get overwritten the next time the script is run in that month.
Things I have tried:
$sasbatdir/batchsas.sh -s PROGRAM_01.sas -o $saslog/PROGRAM_01_"$sysdate".log -k traditional
and
$sasbatdir/batchsas.sh -s PROGRAM_01.sas -log $saslog/PROGRAM_01_"$sysdate".log -k traditional
Neither works: nohup returns an "Output directory not found" error and appears to treat the log name as a directory instead of a file.
$sasbatdir/batchsas.sh -s PROGRAM_01.sas -o -t 1 > $saslog/PROGRAM_01_"$sysdate".log -k traditional 2>&1
Mostly works, but produces two log files: one with the correct name but containing only the nohup output, and the other with the SAS log but with both the date (in the wrong format) and the job ID appended. Removing the 2>&1 prevents either from being written. I'd honestly take the second one if I could figure out how to produce it without the first, though I would prefer to stick to the Program_Name_YYYY_MM_DD.log format.
In case it's relevant, the command I'm using to test the programs is nohup /n04/directory-where-program-is-stored/test_script.sh

I would add the following command:
exec > "$saslog/PROGRAM_01_${sysdate}.log" 2>&1
It would be preferable to add this command inside the batchsas.sh script itself, but if that is not possible, you can add this line (with a new log file name) before every batchsas.sh call.
This command redirects stdout and stderr to the indicated file, regardless of whether the script was launched from the command line, from crontab, or with nohup.
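If it helps, here's a runnable sketch of that idea; the /tmp path and the echo lines are stand-ins for the real $saslog path and the batchsas.sh call. Wrapping the exec in a subshell scopes the redirection to a single program, so each call can get its own dated log:

```shell
#!/bin/bash
# Sketch only: /tmp and the echo commands are placeholders for the real
# $saslog directory and the batchsas.sh invocation from the question.
sysdate=$(date "+%Y_%m_%d")
logfile="/tmp/PROGRAM_01_${sysdate}.log"

# exec inside a subshell limits the redirection to that one program,
# so the next batchsas.sh call can be given its own log file.
(
    exec > "$logfile" 2>&1
    echo "stdout from the SAS run"
    echo "stderr from the SAS run" >&2
)
```

Without the subshell, a bare exec would redirect everything from that point to the end of the script into the same file.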

Bash check if script is running with exact options

I know how to check if a script is already running (if pidof -o %PPID -x "scriptname.sh"; then...). But now I have a script that accepts inputs as flags, so it can be used in several different scenarios, many of which will probably run at the same time.
Example:
/opt/scripts/backup/tar.sh -d /directory1 -b /backup/dir -c /config/dir
and
/opt/scripts/backup/tar.sh -d /directory2 -b /backup/dir -c /config/dir
The above runs a backup script that I wrote, and the flags are the parameters for the script: the directory being backed up, the backup location, and the configuration location. The above example are two different backups (directory 1 and directory 2) and therefore should be allowed to run simultaneously.
Is there any way for a script to check if it is being run and check if the running version is using the exact same parameters/flags?
The ps -Af command will show you all the processes running on your OS, along with the command line used to start them.
One solution:
if ps auxwww | grep '/[o]pt/scripts/backup/tar.*/directory2'; then
echo "running"
else
echo "NOT running"
fi
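The [o] in the pattern is what keeps grep from matching its own command line in the ps output. Here's a self-contained sketch of the same trick, using a disposable sleep process instead of the backup script:

```shell
# Start a disposable background process to look for.
sleep 300 &
pid=$!

# The regex '[s]leep 300' still matches "sleep 300", but the grep process's
# own command line (which literally contains "[s]leep") does not match it.
if ps -Af | grep -q '[s]leep 300'; then
    status="running"
else
    status="NOT running"
fi

kill "$pid"
echo "$status"
```

The same pattern applied to the question would be grep '/[o]pt/scripts/backup/tar.*/directory2', matching both the script path and the specific flag value.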

No display of variable data on mail output - Shell scripting

I've scheduled a task in a UNIX environment, which sends a report of services running/stopped using Shell scripting. Here is the code for same;
#!/bin/bash
echo -e "\t\tServer daily monitoring report\n">/home/user/MailLog.txt
echo -e "\t\t`date "+%Y-%m-%d %H:%M:%S"`\n">>/home/user/MailLog.txt
sudo bash /home/user/commands.sh>>/home/user/MailLog.txt
echo >>/home/user/MailLog.txt
cat /home/user/MailLog.txt>>/home/user/StatusLog.txt
rn=`grep -c "running" MailLog.txt`
sp=`grep -c "stopped" MailLog.txt`
echo -e "Server status report\n\nServices running:\t $rn \nServices stopped:\t $sp "|mailx -v -s "Services report." -a /home/user/MailLog.txt useremail1@domain.com,useremail2@domain.com
#echo $run $stp
#rm /home/user/MailLog.txt
As per scheduled task, I receive the mail and attachment alright. But I get a blank in front of 'Services running: ' and 'Services stopped: '.
When I manually run the script, I get the proper output (numbers + attachment).
Please tell me what I'm doing wrong.
Replace MailLog.txt with /home/user/MailLog.txt in both grep commands. It's very likely that you usually run the commands manually from the /home/user/ directory, but the script's working directory isn't /home/user, which makes the relative path MailLog.txt point to a nonexistent file.
rn=$(grep -c "running" /home/user/MailLog.txt)
sp=$(grep -c "stopped" /home/user/MailLog.txt)
Better yet, set the file path in a variable and reuse it each time you want to refer to the file:
work_file="/home/user/MailLog.txt"
#[...]
rn=$(grep -c "running" "$work_file")
sp=$(grep -c "stopped" "$work_file")
Note that your code could be improved in many other ways; I suggest you validate it with shellcheck (you can ignore the sudo+redirect warning, since your user has write permissions to the MailLog.txt file).
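For reference, grep -c simply prints the count of matching lines, which is what ends up in $rn and $sp. A quick throwaway demo (the /tmp path and the service lines are made up):

```shell
# Write a fake status report to a scratch file.
work_file="/tmp/MailLog.txt"
printf 'sshd service running\ncron service running\nftpd service stopped\n' > "$work_file"

# -c prints the number of matching lines, not the matches themselves.
rn=$(grep -c "running" "$work_file")
sp=$(grep -c "stopped" "$work_file")
echo "running=$rn stopped=$sp"   # prints: running=2 stopped=1
```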

How to run a series of vim commands from command prompt

I have four text files A.txt, B.txt, C.txt and D.txt
I have to perform a series of vim edits in all of these files.
Currently, I open each file and run the same vim commands one by one.
Is it possible to make a script file which I can run from the command prompt, i.e. without opening each file for vim editing?
for example, if I have to perform the below vim commands after opening the A.txt file in vim editor:
:g/^\s*$/d
:%s/^/[/
:%s/\(.*\)\(\s\+\)\(.*\)/\3\2\1
:%s/$/]/
:%s/design/test/
Is it possible to make a script file containing all these commands, including gvim A.txt (as the first command in the file), and then run that script from the command prompt?
If it is possible, please let me know how to do it, and how it can be done with single or multiple files at a time.
vim -c <command> Execute <command> after loading the first file
Does what you describe, but you'll have to do it one file at a time.
So, in a windows shell...
for %a in (A,B,C,D) do vim -c ":g/^\s*$/d" -c "<another command>" %a.txt
POSIX shells are similar, but I don't have a machine in front of me at the moment.
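For completeness, a POSIX-shell counterpart of the loop above might look like the following (a sketch that assumes vim is installed; it uses throwaway files in /tmp rather than the real A.txt through D.txt):

```shell
# Create sample files, each containing a blank line to delete.
for f in A B C D; do
    printf 'first\n\nlast\n' > "/tmp/$f.txt"
done

# -es runs silent Ex mode; -u NONE skips the vimrc;
# </dev/null stops vim from waiting for further input.
for f in A B C D; do
    vim -es -u NONE -c 'g/^\s*$/d' -c 'wq' "/tmp/$f.txt" < /dev/null
done
```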
I imagine you could load all the files at once and do it, but it would require repeating the commands on the vim command line for each file, similar to
vim -c "<command>" -c "<command>" -c ":n" (repeat the previous -c commands for each file.) <filenames go here>
EDIT: June 08 2014:
Just an FYI, I discovered this a few minutes ago.
vim has the command bufdo to do things to each buffer (file) loaded in the editor. Look at the docs for the bufdo command. In vim, :help bufdo
The number of -c commands that can be passed directly to Vim on the command line is limited to 10, and this is not very readable. Alternatively, you can put the commands into a separate script and pass that to Vim. Here's how:
Silent Batch Mode
For very simple text processing (i.e. using Vim like an enhanced 'sed' or 'awk', maybe just benefitting from the enhanced regular expressions in a :substitute command), use Ex-mode.
REM Windows
call vim -N -u NONE -n -es -S "commands.ex" "filespec"
Note: silent batch mode (:help -s-ex) messes up the Windows console, so you may have to do a cls to clean up after the Vim run.
# Unix
vim -T dumb --noplugin -n -es -S "commands.ex" "filespec"
Attention: Vim will hang waiting for input if the "commands.ex" file doesn't exist; better check beforehand for its existence! Alternatively, Vim can read the commands from stdin. You can also fill a new buffer with text read from stdin, and read commands from stderr if you use the - argument.
Full Automation
For more advanced processing involving multiple windows, and real automation of Vim (where you might interact with the user or leave Vim running to let the user take over), use:
vim -N -u NONE -n -c "set nomore" -S "commands.vim" "filespec"
Here's a summary of the used arguments:
-T dumb Avoids errors in case the terminal detection goes wrong.
-N -u NONE Do not load vimrc and plugins, alternatively:
--noplugin Do not load plugins.
-n No swapfile.
-es Ex mode + silent batch mode -s-ex
Attention: Must be given in that order!
-S ... Source script.
-c 'set nomore' Suppress the more-prompt when the screen is filled
with messages or output to avoid blocking.
With all the commands you want to run on each file saved in a script, say "script.vim", you can execute that script on one file like this (as others have mentioned):
vim -c "source script.vim" A.txt
Taking this one step further, you can save your file at the end of the script, either by putting a :w command inside the script itself, or passing it from the command-line:
vim -c "source script.vim | w" A.txt
Now, you can run any command in Vim on multiple files, by using the argdo command. So your command turns into:
vim -c "argdo source script.vim | w" A.txt B.txt C.txt D.txt
Finally, if you want to quit Vim after running your script on every file, just add another command to quit:
vim -c "argdo source script.vim | w" -c "qa" A.txt B.txt C.txt D.txt
Try the following syntax:
ex foo.txt <<-EOF
g/^\s*$/d
%s/^/[/
%s/\(.*\)\(\s\+\)\(.*\)/\3\2\1
%s/$/]/
%s/design/test/
wq " Update changes and quit.
EOF
The ex command is equivalent to vim -E. Add -V1 for verbose output.
Alternative one-liner syntax is for example:
ex +"g/^\s*$/d" +"%s/^/[/" +"%s/design/test/" -cwq foo.txt
To load commands from the file, use -s cmds.vim.
You can also use shebang for Vim to parse the file from the argument.
For more examples, see:
How to edit files non-interactively (e.g. in pipeline)?
BashFAQ/021
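As a quick sanity check, here's the heredoc form run end-to-end against a throwaway file (this assumes ex/vim is installed; the sample content is made up):

```shell
# Sample input: two "design" lines separated by a blank line.
printf 'design alpha\n\ndesign beta\n' > /tmp/foo.txt

# -s keeps ex quiet for batch use; the quoted 'EOF' prevents
# the shell from expanding anything inside the heredoc.
ex -s /tmp/foo.txt <<'EOF'
g/^\s*$/d
%s/design/test/
wq
EOF

cat /tmp/foo.txt
```

After the run, the blank line is gone and both lines read "test ..." instead of "design ...".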
JimR and Ingo have provided excellent answers for your use case.
Just to add one more way to do it, however, you could use my vimrunner plugin to script the interaction in ruby: https://github.com/AndrewRadev/vimrunner.
Example:
vim = Vimrunner.start
vim.edit "file.txt"
vim.insert "Hello"
vim.write
vim.kill
This can be useful for more complicated interactions, since you get the full power of a programming language.
Use vim -s ... to script not only colon commands, but also normal-mode commands such as = for formatting:
Create a file with all keystrokes that you want vim to execute.
Run vim -s SCRIPT-FILE FILE-TO-EDIT
For example: Let's say you want to use vim's = command to re-indent all the lines of myfile.html. First, using vim itself, make a file named myscript that has this:
gg=G:wq
(gg moves to the top of the file; =G re-indents from the current location to the end of the file; :wq<Enter> saves the file and exits.)
Then, run this to have vim launch, edit myfile.html, and save it:
vim -s myscript myfile.html

How to run a cron job and then email output if two files contents differ?

I'm trying to run a scheduled cron job and email the output to a few users. However, I only want to e-mail the users if something new happened.
Essentially, this is what happens:
I run a python script, which checks the filename on an FTP server. If the filename is different, it downloads the file and starts parsing the information. The filename of the previously downloaded file is stored in last.txt, and if the script does find a new file, it updates the filename in last.txt.
If the filename is the same, it stops processing and just reports that the file is the same.
Essentially, my thoughts were I could do something similar to:
cp last.txt temp.last.txt | python script.py --verbose > report.txt | diff last.txt temp.last.txt
That's where I got stuck, though. Essentially I want to diff the two files, and if they're the same - nothing happens. If they differ, though, I can e-mail the contents of report.txt to a couple of e-mail address via mail command.
Hopefully I was detailed enough, thanks in advance!
First of all, there's no need for the pipes (|) in your code; you should issue each command separately.
Either separate them with semicolons or write them on separate lines of the script.
For the problem itself, one solution would be to redirect the output of diff to a report file:
cp last.txt temp.last.txt
python script.py --verbose > report.txt
diff last.txt temp.last.txt > diffreport.txt
You can then check if the report file is empty or not as described here: http://www.cyberciti.biz/faq/linux-unix-script-check-if-file-empty-or-not/
Based on the result, you can send diffreport.txt and report.txt or just delete all of it.
Here is a quick example of what your cron job script could look like:
#!/bin/bash
# Run the python script
cp last.txt temp.last.txt
python script.py --verbose > report.txt
diff last.txt temp.last.txt > diffreport.txt
# Check if the diff output file is empty or not
if [ -s "diffreport.txt" ]
then
    # file is not empty: send a mail with the attachment,
    # e.g. by calling another script that takes care of that task
    # (send_report.sh is a hypothetical helper you would write)
    ./send_report.sh diffreport.txt report.txt
else
    # file is empty, clean up everything
    rm diffreport.txt report.txt temp.last.txt
fi
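The [ -s FILE ] test is true only when the file exists and has a size greater than zero, which is what routes the script into the mail branch or the cleanup branch. A minimal demo with throwaway files:

```shell
# An empty file: [ -s ] is false.
: > /tmp/diffreport.txt
[ -s /tmp/diffreport.txt ] && first="differs" || first="same"

# Give it some diff-style content: [ -s ] becomes true.
echo "3c3" > /tmp/diffreport.txt
[ -s /tmp/diffreport.txt ] && second="differs" || second="same"

echo "$first $second"   # prints: same differs
```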

How to backup filesystem with tar using a bash script?

I want to backup my ubuntu filesystem, and I wrote this little script. It is very basic, but being my first try I am afraid to do mistakes. And since it will take few hours to complete to see results, I think it is better to ask you as experienced programmers if I did something wrong.
I'm particularly interested in the > redirection: will it record only the output of mv, or will it also capture the output of tar?
Also, is the way I use variables inside the tar command correct?
#!/bin/bash
mybackupname="backup-fullsys-$(date +%Y-%m-%d).tar.gz"
{ time tar -cfpzv $mybackupname --exclude=/$mybackupname --exclude=/proc --exclude=/lost+found --exclude=/sys --exclude=/mnt --exclude=/media --exclude=/dev / && ls -gh $mybackupname && mv -v $mybackupname backups/filesystem/ ; } > backup-system.log
exit
Anything I should know before I run this?
Sandro, you might want to consider spacing things out in your script and producing individual errors. Makes things much easier to read.
#!/bin/bash
mybackupname="backup-fullsys-$(date +%Y-%m-%d).tar.gz"
# Record start time by epoch second
start=$(date '+%s')
# List of excludes in a bash array, for easier reading.
excludes=(--exclude=/$mybackupname)
excludes+=(--exclude=/proc)
excludes+=(--exclude=/lost+found)
excludes+=(--exclude=/sys)
excludes+=(--exclude=/mnt)
excludes+=(--exclude=/media)
excludes+=(--exclude=/dev)
if ! tar -czf "$mybackupname" "${excludes[@]}" /; then
status="tar failed"
elif ! mv "$mybackupname" backups/filesystem/ ; then
status="mv failed"
else
status="success: size=$(stat -c%s backups/filesystem/$mybackupname) duration=$((`date '+%s'` - $start))"
fi
# Log to system log; handle this using syslog(8).
logger -t backup "$status"
If you wanted to keep debug information (like the stderr of tar or mv), that could be handled with redirection to a tmpfile or debug file. But if the command is being run via cron and has output, cron should send it to you via email. A silent cron job is a successful cron job.
The series of ifs causes each program to be run as long as the previous one was successful. It's like chaining your commands with &&, but lets you run other code in case of failure.
Note that I've changed the order of options for tar, because the thing that comes after -f is the file you're saving things to. Also, the -p option is only useful when extracting files from a tar. Permissions are always saved when you create (-c) a tar.
Others might wish to note that this usage of the stat command works in GNU/Linux, but not other unices like FreeBSD or Mac OSX. In BSD, you'd use stat -f%z $mybackupname.
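The array technique is worth knowing on its own: each element stays a single word when expanded as "${excludes[@]}", so even paths containing spaces survive intact. A standalone sketch:

```shell
# Build the option list one element at a time, as in the answer above.
excludes=(--exclude=/proc)
excludes+=(--exclude=/sys)
excludes+=("--exclude=/path with spaces")

# Each element expands as exactly one word; the brackets show the boundaries.
printf '[%s]\n' "${excludes[@]}"
# prints:
# [--exclude=/proc]
# [--exclude=/sys]
# [--exclude=/path with spaces]
```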
The file redirection as you have it will only record the output of mv.
You can do
{ tar ... && mv ... ; } > logfile 2>&1
to capture the output of both, plus any errors that may occur.
It's a good idea to always be in the habit of quoting variables when they are expanded.
There's no need for the exit.
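A minimal demonstration of the grouping-plus-redirection pattern, with echo standing in for tar and mv:

```shell
# Both commands' stdout, and any stderr, land in the same log file
# because the redirection applies to the whole { ...; } group.
{ echo "tar output"; echo "mv error" >&2; } > /tmp/backup.log 2>&1

cat /tmp/backup.log
```

Without the 2>&1, only the two commands' stdout would be captured and error messages would still go to the terminal.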
