No display of variable data on mail output - Shell scripting - bash

I've scheduled a task in a UNIX environment that sends a report of running/stopped services using a shell script. Here is the code:
#!/bin/bash
echo -e "\t\tServer daily monitoring report\n">/home/user/MailLog.txt
echo -e "\t\t`date "+%Y-%m-%d %H:%M:%S"`\n">>/home/user/MailLog.txt
sudo bash /home/user/commands.sh>>/home/user/MailLog.txt
echo >>/home/user/MailLog.txt
cat /home/user/MailLog.txt>>/home/user/StatusLog.txt
rn=`grep -c "running" MailLog.txt`
sp=`grep -c "stopped" MailLog.txt`
echo -e "Server status report\n\nServices running:\t $rn \nServices
stopped:\t $sp "|mailx -v -s "Services report." -a /home/user/MailLog.txt
useremail1#domain.com,useremail2#domain.com
#echo $run $stp
#rm /home/user/MailLog.txt
When the scheduled task runs, I receive the mail and the attachment fine, but the values after 'Services running:' and 'Services stopped:' are blank.
When I run the script manually, I get the proper output (numbers and attachment).
Please tell me what I'm doing wrong.

Replace MailLog.txt with /home/user/MailLog.txt in both grep commands. You most likely run the commands manually from the /home/user/ directory, but the script's working directory under cron isn't /home/user, which makes the relative path MailLog.txt point to a nonexistent file.
rn=$(grep -c "running" /home/user/MailLog.txt)
sp=$(grep -c "stopped" /home/user/MailLog.txt)
Better yet, set the file path in a variable and reuse it each time you refer to the file:
work_file="/home/user/MailLog.txt"
#[...]
rn=$(grep -c "running" "$work_file")
sp=$(grep -c "stopped" "$work_file")
Note that your code could be improved in many other ways; I suggest validating it with shellcheck (you can ignore the sudo+redirect warning, since your user has write permission on the MailLog.txt file).
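For reference, here is a minimal cleaned-up sketch of the whole script with the absolute-path fix applied; paths and recipient addresses are taken from the question and may need adjusting:
#!/bin/bash
work_file="/home/user/MailLog.txt"

echo -e "\t\tServer daily monitoring report\n" > "$work_file"
echo -e "\t\t$(date "+%Y-%m-%d %H:%M:%S")\n" >> "$work_file"
sudo bash /home/user/commands.sh >> "$work_file"
echo >> "$work_file"
cat "$work_file" >> /home/user/StatusLog.txt

# Absolute path, so the counts work regardless of cron's working directory.
rn=$(grep -c "running" "$work_file")
sp=$(grep -c "stopped" "$work_file")

echo -e "Server status report\n\nServices running:\t $rn \nServices stopped:\t $sp" |
  mailx -v -s "Services report." -a "$work_file" useremail1@domain.com,useremail2@domain.com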

Related

Bash check if script is running with exact options

I know how to check if a script is already running (if pidof -o %PPID -x "scriptname.sh"; then...). But now I have a script that accepts inputs as flags, so it can be used in several different scenarios, many of which will probably run at the same time.
Example:
/opt/scripts/backup/tar.sh -d /directory1 -b /backup/dir -c /config/dir
and
/opt/scripts/backup/tar.sh -d /directory2 -b /backup/dir -c /config/dir
The above runs a backup script that I wrote, and the flags are the parameters for the script: the directory being backed up, the backup location, and the configuration location. The above examples are two different backups (directory 1 and directory 2) and therefore should be allowed to run simultaneously.
Is there any way for a script to check if it is being run and check if the running version is using the exact same parameters/flags?
The ps -Af command lists all the processes running on your OS, along with the command line used to start them.
One solution:
if ps auxwww | grep '/[o]pt/scripts/backup/tar.*/directory2'; then
    echo "running"
else
    echo "NOT running"
fi
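If you control the script itself, another approach (a sketch only; the lock path is hypothetical) is to derive a lock from the exact argument list, so identical invocations are blocked while different ones may run concurrently:
#!/bin/bash
# Near the top of tar.sh: hash the exact argument list into a lock name.
lock_dir="/tmp/tar.sh.$(printf '%s' "$*" | md5sum | cut -d' ' -f1).lock"

# mkdir is atomic: it fails if the lock directory already exists.
if ! mkdir "$lock_dir" 2>/dev/null; then
    echo "already running with these exact options" >&2
    exit 1
fi
trap 'rmdir "$lock_dir"' EXIT   # release the lock on any exit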

Adding printers by shell script; works in terminal but not as .command

I am trying to provide a clickable .command file to set up printers on Macs at my workplace. Since it is something I do very frequently, I thought I could write a shell script for each printer and save it on a shared server. Then, when I need to add a printer for someone, I can just find the script on the server and execute it. My current command works in Terminal, but once executed as a .command file, it fails with errors.
This is my script:
#!/bin/sh
lpadmin -p ‘PRINTERNAME’ -D PRINTER\ NAME -L ‘OFFICE’ -v lpd://xx.xx.xx.xx -P /Library/Printers/PPDs/Contents/Resources/Xerox\ WorkCentre\ 7855.gz -o printer-is-shared=false -E​
I get this error after running the script:
lpadmin: Unknown option “?”.
I find this strange, because there is no "?" in the script.
I have an idea: why not try it like this? There are big differences between sh shells, so let me know if it works; I have more ideas.
#!/bin/sh
PPD="PRINTERNAME"
INFO="PRINTER\ NAME"
LOC="OFFICE"
URI="lpd://xx.xx.xx.xx"
OP ="printer-is-shared=false"
# This parameter P is new to me. Is it the paper-name ?
P="/Library/Printers/PPDs/Contents/Resources/Xerox\ WorkCentre\ 7855.gz"
lpadmin -p "$PPD" -D "$INFO" -L "$LOC" -v "$URI" -P "$P" -o "$OP" -E;
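After making the script executable (chmod +x), lpstat can confirm that the queue was created; PRINTERNAME is the name used above:
lpstat -p PRINTERNAME   # should report the printer's status if it was added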

Linux source does not work in .sh file?

I have a .sh (start_sim.sh) and a .bash (sim_sources.bash) file.
The sim_sources.bash file is called from within start_sim.sh and should set an environment variable $ROBOT to a certain value. However, the ROBOT variable never changes when I call ./start_sim.sh. Is there a fundamental mistake in the way I am trying to do this?
start_sim.sh contains:
#!/bin/bash
echo -n "sourcing sim_sources.bash..."
source /home/.../sim_sources.bash
echo "done."
sim_sources.bash contains:
# set the robot id
export ROBOT=robot
EDIT: Could you also propose a way to work around this issue? I would still need to set variables from within the .bash file.
EDIT2:
Thanks for your replies!
I finally ended up solving it with a screen session and stuffing commands into it:
echo -n "starting screen..."
screen -dmS "sim_screen"
sleep 2
screen -S "sim_screen" -p 0 -X stuff "source /home/.../sim_sources.bash$(printf \\r)"
sleep 5
screen -S "sim_screen" -p 0 -X stuff "source /home/.../start_sim.sh$(printf \\r)"
You're setting the ROBOT variable in the start_sim.sh script, but that's not available to parent processes (your spawning shell/command-prompt).
Exporting a variable e.g. export ROBOT=robot makes the variable available to the current process and child processes. When you invoke ./start_sim.sh you're invoking a new process.
If you simply source start_sim.sh in your shell, that script runs as part of your shell process and then your variable will be available.
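For example, in the interactive shell:
source ./start_sim.sh   # or: . ./start_sim.sh
echo "$ROBOT"           # now prints "robot" in the current shell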
As Brian pointed out, the variables are not available outside of the script.
Here is an adapted script that shows this point:
#!/bin/bash
echo -n "sourcing sim_sources.bash..."
. sim_sources.bash
echo $ROBOT
echo "done."
The workaround you are asking for is to start a new shell from the current one, with the environment variables already set:
#!/bin/bash
echo -n "sourcing sim_sources.bash..."
. sim_sources.bash
echo "done."
bash
This results in:
bash-4.1$ printenv | grep ROBOT
ROBOT=robot
I am on Ubuntu 16.04.
I used /bin/sh instead of /bin/bash and it works!

Create a detailed self tracing log in bash

I know you can create a log of the output by typing in script nameOfLog.txt and exit in terminal before and after running the script, but I want to write it in the actual script so it creates a log automatically. There is a problem I'm having with the exec >>log_file 2>&1 line:
The code redirects all output to a log file, so the user can no longer interact with the script. How can I create a log that simply copies what appears in the output?
Also, is it possible to automatically record which files were copied? For example, if a file at /home/user/Desktop/file.sh was copied to /home/bckup, can that be printed to the log too, or will I have to write that manually?
Is it also possible to record the amount of time the whole process took and count the number of files and directories that were processed, or am I going to have to write that manually too?
My future self appreciates all the help!
Here is my whole code:
#!/bin/bash
collect()
{
    # xargs handles file names with spaces. Also gives an error of
    # "cp: will not overwrite just-created" even if the file didn't exist previously.
    find "$directory" -name "*.sh" -print0 | xargs -0 cp -t ~/bckup
}
echo "Starting log"
exec >>log_file 2>&1
timelimit=10
echo "Please enter the directory that you would like to collect.
If no input in 10 secs, default of /home will be selected"
read -t $timelimit directory
if [ ! -z "$directory" ]    # if directory doesn't have a length of 0
then
    echo -e "\nYou want to copy $directory."    # -e is so the \n will work and it won't show up as part of the string
else
    directory=/home/
    echo "Time's up. Backup will be in $directory"
fi
if [ ! -d ~/bckup ]
then
    echo "Directory does not exist, creating now"
    mkdir ~/bckup
fi
collect
echo "Finished collecting"
exit 0
To answer the "how to just copy the output" question: use a program called tee and then a bit of exec magic explained here:
redirect COPY of stdout to log file from within bash script itself
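The usual pattern (a sketch, assuming bash, since it relies on process substitution) replaces the plain exec redirect with tee:
# Mirror stdout and stderr to log_file while still printing to the terminal.
exec > >(tee -a log_file) 2>&1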
Regarding the analytics (time needed, files accessed, etc) -- this is a bit harder. Some programs that can help you are time(1):
time - run programs and summarize system resource usage
and strace(1):
strace - trace system calls and signals
Check the man pages for more info. If you have control over the script it will be probably easier to do the logging yourself instead of parsing strace output.
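For example (the script name here is just a placeholder):
time ./backup.sh                                    # summarize wall-clock and CPU time
strace -o strace.log -f -e trace=file ./backup.sh   # trace file-related system calls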

OSX bash script works but fails in crontab on SFTP

This topic has been discussed at length; however, I have a variant on the theme that I just cannot crack. I'm two days into this now and decided to ping the community. Thanks in advance for reading.
Executive summary: I have a script on OS X that runs fine and executes without issue or error when run manually. When I put the script in the crontab to run daily, it still runs, but it doesn't execute all of the commands (specifically sftp).
I have read enough posts to go down the path of environment issues, so as you will see below, I hard-coded the location of the sftp binary in case of a PATH issue.
The only thing I can think of is the IdentityFile. NOTE: I am putting this in the crontab for my user, not root, so I understand it should pick up the id_dsa.pub that I have created (and that has already been shared with the server).
I am not trying to do any funky expect commands to bypass the password, etc. I don't know why the sftp line is skipped when run from cron.
Please see the code below; any help is greatly appreciated.
#!/bin/bash
export DATE=`date +%y%m%d%H%M%S`
export YYMMDD=`date +%y%m%d`
PDATE=$DATE
YDATE=$YYMMDD
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
FEED="~/Dropbox/"
USER="user"
HOST="host.domain.tld"
A="/tmp/5nPR45bH"
>${A}.file1${PDATE}
>${A}.file2${PDATE}
BYEbye ()
{
rm ${A}.file1${PDATE}
rm ${A}.file2${PDATE}
echo "Finished cleaning internal logs"
exit 0
}
echo "get -r *" >> ${A}.file1${PDATE}
echo "quit" >> ${A}.file1${PDATE}
eval mkdir ${FEED}${YDATE}
eval cd ${FEED}${YDATE}
eval /usr/bin/sftp -b ${A}.file1${PDATE} ${USER}@${HOST}
BYEbye
exit 0
Not an answer, just comments about your code.
The way to handle filenames with spaces is to quote the variable: "$var" -- eval is not the way to go. Get into the habit of quoting all variables unless you specifically want to use the side effects of not quoting.
you don't need to export your variables unless there's a command you call that expects to see them in the environment.
you don't need to call date twice because the YYMMDD value is a substring of the DATE: YYMMDD="${DATE:0:6}"
just a preference: I use $HOME over ~ in a script.
you never use the "file2" temp file -- why do you create it?
since your sftp batch file is pretty simple, you don't really need a file for it:
printf "%s\n" "get -r *" "quit" | sftp -b - "$USER#$HOST"
Here's a rewrite, shortened considerably:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
FEED_DIR="$HOME/Dropbox/$(date +%Y%m%d)"
USER="user"
HOST="host.domain.tld"
mkdir "$FEED_DIR" || { echo "could not mkdir $FEED_DIR"; exit 1; }
cd "$FEED_DIR"
{
    echo "get -r *"
    echo quit
} | sftp -b - "${USER}@${HOST}"
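To schedule it, an entry like this in the user's crontab (edited with crontab -e) runs the script daily at 02:30; the script path is just an example:
30 2 * * * /Users/user/bin/sftp_feed.sh >> /tmp/sftp_feed.log 2>&1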
