output not reflecting in cronjob - shell

I have a script which sends the output of a command. The command takes a few seconds to execute. But when I run the script from cron, the output appears neither in the mail received nor in the file from which the script reads the output.
echo "$(date)" > /home/checks.txt
status=$(sysstatus)
echo "$status" >> /home/checks.txt
for MAIL in abc@xyz.com def@xyz.com
do
mailx -s "$(date) Daily check on system" "$MAIL" < /home/checks.txt
done
exit 0

Giving the full path to the sysstatus command in the script solved the issue: cron runs jobs with a minimal PATH, so commands that resolve in an interactive shell may not be found from cron.
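A minimal sketch of that fix. Since sysstatus is a site-specific command from the question, `ls` stands in for it here so the snippet runs anywhere; the PATH value is an illustrative assumption.

```shell
#!/bin/sh
# cron typically runs with PATH=/usr/bin:/bin, so either set PATH explicitly
# at the top of the script, or resolve the command's absolute path once.
PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
export PATH

cmd=$(command -v ls)   # "ls" stands in for the question's sysstatus
if [ -z "$cmd" ]; then
    echo "command not found in PATH" >&2
    exit 1
fi
echo "using $cmd"      # invoke via "$cmd" so lookup cannot fail under cron
```

Either approach works; hard-coding the absolute path (as the answer did) avoids depending on the environment cron provides.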

How to catch/write - success/failure logs for sFTP - PUT command

In my shell script, after the sFTP PUT process completes, I need to check whether the PUT succeeded or failed.
Don't use expect at all.
#!/usr/bin/env bash
batchfile=$(mktemp -t sftp-batchfile.XXXXXX) || exit
trap 'rm -f "$batchfile"' EXIT
cat >"$batchfile" <<EOF
put test_file.txt $dest_location
bye
EOF
if sftp -b "$batchfile" "user@hostname"; then
echo "The put succeeded"
else
echo "The put failed"
fi
As given in the SFTP man page, with emphasis added:
The final usage format allows for automated sessions using the -b option. In such cases, it is necessary to configure non-interactive authentication to obviate the need to enter a password at connection time (see sshd(8) and ssh-keygen(1) for details).
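Concretely, the one-time setup the man page alludes to looks something like the sketch below. `user@hostname` is a placeholder, and the `ssh-copy-id` step is left as a comment because it contacts the remote host.

```shell
#!/bin/sh
# Generate a passphrase-less key if one does not already exist,
# so that sftp -b can authenticate without prompting.
keyfile=$HOME/.ssh/id_ed25519
if [ ! -f "$keyfile" ]; then
    ssh-keygen -t ed25519 -N '' -f "$keyfile"
fi
# Then install the public key on the server (run once, interactively):
#   ssh-copy-id -i "$keyfile.pub" user@hostname
# After that, "sftp -b batchfile user@hostname" should not prompt.
```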

Output redirection to console in shell script, not reflecting realtime

I have encountered a weird problem with console output when calling a subscript from inside another script.
Below is the Main Script which is calling a TestScript.
The TestScript is an installation script written in perl which takes some time to execute and prints messages as the installation progresses.
My problem here is that the output from the called perl script is only shown on the console once the installation is completed and the script returns.
Oddly, I have used this kind of syntax successfully before for calling shell scripts; for those it works fine and the output is shown simultaneously, without waiting for the subscript to return.
I need to capture the output of the script so that I can grep it to check whether the installation was successful.
I do not control the perl script and cannot modify it in any way.
Any help would be greatly appreciated.
Thanks in advance.
#!/bin/sh
echo " Main script"
output=`/var/tmp/Packages/TestScript.pl | tee /dev/tty`
exitCode=$?
echo $output | grep -q "Installation completed successfully"
if [ $? -eq 0 ]; then
echo "Installation was successful"
fi
echo $exitCode
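No fix was given above, but the usual cause is block buffering: when stdout is a pipe rather than a tty, most programs flush output only at exit. Two common workarounds are sketched below, keeping the Perl script's path from the question. `stdbuf` (coreutils) only affects programs that use C stdio buffering, so it may or may not help with a Perl script; `unbuffer` (from the expect package) allocates a pseudo-tty, which defeats buffering more broadly.

```shell
#!/bin/sh
# Workaround 1: force line buffering with stdbuf (coreutils).
output=$(stdbuf -oL -eL /var/tmp/Packages/TestScript.pl | tee /dev/tty)

# Workaround 2: run the program on a pseudo-tty with unbuffer (expect package):
# output=$(unbuffer /var/tmp/Packages/TestScript.pl | tee /dev/tty)

printf '%s\n' "$output" | grep -q "Installation completed successfully" \
    && echo "Installation was successful"
```

Neither requires modifying the called script, which matches the constraint in the question.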

how can I change SLEEP time in a running bash script

Appendix: the code below runs fine, as Matthias pointed out. The error happened at another place. In short: if you want sleep to be changed during script runtime, e.g. due to a certain event, you can use the code below.
Original description:
My bash script ought to check a certain status - e.g. the existence of a file - every 5 minutes. If the status is as expected, everything is fine. But if the status is otherwise, the checks ought to happen in a shorter frequency, until everything is normal again.
Example:
NORMAL_SLEEP=300
SHORT_SLEEP=30
CUR_SLEEP=''
while :
do
if [ -f /tmp/myfile ]; then
logger "myfile still exists. Next check in 5min"
CUR_SLEEP=$NORMAL_SLEEP
else
logger "myfile disappeared. Check again in 30s!"
CUR_SLEEP=$SHORT_SLEEP
echo "/tmp/myfile was removed. Check this!" \
| mailx -s "alert: myfile missed" johndoe@somewhere.com
fi
trap 'kill $SLEEP_PID; exit 1' 15
sleep $CUR_SLEEP &
SLEEP_PID=$!
wait
done
Problem: the sleep time does not adapt...
Had a look at Bash Script: While-Loop Subshell Dilemma but unfortunately can't see how it could solve my problem.
The code ran fine on my machine. Here's what I ran (changed the time values just to test):
./script.sh --> "myfile disappeared. Check again in 30s!" printed at 2 sec intervals
touch /tmp/myfile
./script.sh --> "myfile still exists. Next check in 5min" printed at 5 sec intervals
The file, script.sh:
#!/bin/bash
NORMAL_SLEEP=5
SHORT_SLEEP=2
CUR_SLEEP=''
while :
do
if [ -f /tmp/myfile ]; then
echo "myfile still exists. Next check in 5min"
CUR_SLEEP=$NORMAL_SLEEP
else
echo "myfile disappeared. Check again in 30s!"
CUR_SLEEP=$SHORT_SLEEP
echo "/tmp/myfile was removed. Check this!" \
| mailx -s "alert: myfile missed" johndoe@somewhere.com
fi
trap 'kill $SLEEP_PID; exit 1' 15
sleep $CUR_SLEEP &
SLEEP_PID=$!
wait
done
And I probably sent some mail to johndoe@somewhere.com but that's okay.

bash script to accept log on stdin and email log if inputting process fails

I'm a sysadmin and I frequently have a situation where I have a script or command that generates a lot of output which I would only like to have emailed to me if the command fails. It's pretty easy to write a script that runs the command, collects the output and emails it if the command fails, but I was thinking I should be able to write a command that
1) accepts log info on stdin
2) waits for the inputting process to exit and checks its exit status
3a) if the inputting process exited cleanly, append the logging input to a normal log file
3b) if the inputting process failed, append the logging input to the normal log and also send me an email.
It would look something like this on the command line:
something_important | mailonfail.sh me@example.com /var/log/normal_log
That would make it really easy to use in crontabs.
I'm having trouble figuring out how to make my script wait for the writing process and evaluate how that process exits.
Just to be extra clear, here's how I can do it with a wrapper:
#! /bin/bash
something_important > output
ERR=$?
if [ "$ERR" -ne "0" ] ; then
mail -s "something_important failed" me@example.com < output
fi
cat output >> /var/log/normal_log
Again, that's not what I want, I want to write a script and pipe commands into it.
Does that make sense? How would I do that? Am I missing something?
Thanks Everyone!
-Dylan
Yes, it does make sense, and you are close.
Here is some advice:
#!/bin/sh
TEMPFILE=$(mktemp)
trap 'rm -f "$TEMPFILE"' EXIT
if ! something_important > "$TEMPFILE"; then
mail -s 'something goes oops' -a "$TEMPFILE" you@example.net
fi
cat "$TEMPFILE" >> /var/log/normal.log
I won't use bashisms, so /bin/sh is fine
create a temporary file to avoid conflicts, using mktemp(1)
use trap to remove the file when the script exits, normally or not
if the command fails
then attach the file, which may or may not be preferable to embedding it
if it's a big file you could even gzip it, but the attachment method will change:
# using mailx
gzip -c9 $TEMPFILE | uuencode fail.log.gz | mailx -s subject ...
# using mutt
gzip $TEMPFILE
mutt -a $TEMPFILE.gz -s ...
gzip -d $TEMPFILE.gz
etc.
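One caveat about the piped interface the question asks for: a process reading from a pipe has no way to learn the writer's exit status, which is why the answer runs the command inside the script instead. A hedged sketch of a reusable wrapper in that spirit follows; the name mailonfail.sh comes from the question, but taking the command as arguments rather than on stdin is my substitution.

```shell
#!/bin/bash
# Usage: mailonfail.sh me@example.com /var/log/normal_log something_important args...
recipient=$1; logfile=$2; shift 2

tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT

# Run the wrapped command, capturing all of its output.
if ! "$@" >"$tmpfile" 2>&1; then
    # $1 is now the wrapped command's name
    mail -s "$1 failed" "$recipient" < "$tmpfile"
fi
cat "$tmpfile" >> "$logfile"
```

This keeps the one-line crontab convenience (`mailonfail.sh me@example.com /var/log/normal_log something_important`) while making the exit status directly observable.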

Automate FTP using Shell Script

I am using a shell script to transfer files via FTP, and the script works fine.
But the problem is that my shell script hangs and does not exit if the FTP connection drops in the middle of a transfer.
This is what my shell script looks like:
echo "open $ip" > ${cmd_File}
echo "user $usrnm $psswd" >> ${cmd_File}
echo "cd $location" >> ${cmd_File}
echo "binary" >> ${cmd_File}
echo "put $filename" >> ${cmd_File}
echo "bye" >> ${cmd_File}
progress=$(ftp -vin < ${cmd_File} 2>&1) 1> /dev/null
I would be glad if someone could help me handle the error; my code works fine unless the connection drops in between.
When that happens the script just hangs, and I need it to exit instead.
Thanks,
Abhijit
Consider rewriting your script using "expect", which lets you set a timeout.
EDITED:
Alternatively, you could do the error checking pretty easily in Perl.
Ok, you can do it in the shell using something along these lines:
YOURTFPCMD & PID=$! ; (sleep $TIMEOUT && kill $PID 2> /dev/null & ) ; wait $PID
which starts your FTP command and saves its PID. It then immediately starts a subshell which will kill your FTP command after $TIMEOUT seconds if it hasn't finished, then waits for your FTP command to exit.
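If coreutils is available, timeout(1) packages the same kill-after-N-seconds bookkeeping into one word. A sketch, reusing the cmd_File batch file from the question; the 120-second limit is an illustrative assumption.

```shell
#!/bin/sh
# timeout exits with status 124 when it had to kill the command.
if progress=$(timeout 120 ftp -vin < "${cmd_File}" 2>&1); then
    echo "transfer finished"
else
    rc=$?
    if [ "$rc" -eq 124 ]; then
        echo "ftp timed out" >&2
    else
        echo "ftp failed (status $rc)" >&2
    fi
fi
```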
I resolved it using lftp instead of ftp.
In my case I was trying to upload files on GoDaddy Online Storage FTP. For some reason the transfer of the biggest file (500 MB) was hanging forever.
Install it as usual (present in main distros):
yum install lftp (CentOS)
zypper install lftp (openSuse)
...
Then create your script:
#!/bin/sh
echo FTP begin at : $(date)
lftp -u myUser,myPassword myFTPSite <<EOF
put myfile.gz
bye
EOF
echo $(date) : FTP ended
echo Validating RAID
cat /proc/mdstat
exit 0
Use -q quittime option in ftp command:
As per man ftp:
-q quittime
Quit if the connection has stalled for quittime seconds.
Try this command e.g.:
progress=$(ftp -q 30 -vin < ${cmd_File} 2>&1) 1> /dev/null
