Bash script with sendmail delivers email when executed manually but not from crontab

I wrote the following bash script to send me an alert if there is a problem with my website:
#!/bin/bash
# 1. download the page
BASE_URL="https://www.example.com/ja"
JS_URL="https://www.example.com"
# 2. search the page for the following URL: /sites/default/files/google_tag/google_tag.script.js?[FIVE-CHARACTER STRING WITH LETTERS AND NUMBERS]
curl -k -L "${BASE_URL}" 2>/dev/null | grep -Eo '/sites/default/files/google_tag/google_tag\.script\.js\?[^"<]+' | while read -r line
do
    # 3. download the js file
    if curl -k -L "${JS_URL}${line}" | grep gtm_preview >/dev/null 2>&1; then
        # 4. check if this js file has the text "gtm_preview" or not; if it does, send an email
        # echo "Error: gtm_preview found"
        sendmail error-ec2@example.com < email-gtm-live.txt
    else
        echo "No gtm_preview tag found."
    fi
done
I am running this from an Amazon EC2 Ubuntu instance. When I execute the script manually like ./script.sh, I receive an email in my webmail inbox for example.com.
However, when I configure this script to run via crontab, the mail does not get sent via the Internet; instead, it gets sent to /var/mail on the EC2 instance.
I don't understand why this is happening or what I can do to fix it. Why does sendmail behave differently when it is run from an interactive bash session versus from crontab?

Be aware that the PATH environment variable is different for crontab executions than it is for your typical interactive sessions, and not all of the same environment variables are set. Consider specifying the full path to the sendmail executable (which you can learn by running 'which sendmail'). Also note that cron does not run the script from your home directory, so the relative path email-gtm-live.txt may not resolve; use an absolute path there as well.
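For example, a minimal cron-safe version of the mail step pins down both the PATH and the file locations; a sketch, assuming sendmail lives at /usr/sbin/sendmail and the message file sits in /home/ubuntu (adjust both to your system):
#!/bin/bash
# Give the script an explicit PATH, since cron provides a very sparse one
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# Use absolute paths: cron does not run from your login directory,
# so a bare email-gtm-live.txt would not be found
/usr/sbin/sendmail error-ec2@example.com < /home/ubuntu/email-gtm-live.txt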

Related

AWS SES sendmail from CRON Fails

Following the Integrating Amazon SES with Sendmail guide, I configured SES to allow it to send emails from a verified email address. I was able to successfully send email from the command line using the verified email address:
sudo /usr/sbin/sendmail -f from@example.com to@example.com < file_to_send.txt
Next I setup a bash script to gather some daily report information.
#!/bin/bash
# copy the cw file
cp /var/log/cwr.log /cwr_analysis/cwr.log
# append the cw info to the subject file
cat /cwr_analysis/subject.txt /cwr_analysis/cwr.log > /cwr_analysis/daily.txt
# send the mail
/usr/sbin/sendmail -f from@example.com to@example.com < /cwr_analysis/daily.txt
If I run the bash script manually from the command line the report is gathered and emailed as it should be. I changed the permissions on the file to allow it to be executed by root (similar to other CRON jobs on the AWS instance):
-rwxr-xr-x 1 root root 375 Jan 6 17:37 cwr_email.sh
PROBLEM
I set up a cron job to run every 5 minutes for testing (the script is designed to run once per day once production starts):
*/5 * * * * /home/ec2-user/cwr_email.sh
The bash script copies and then appends the daily.txt file properly but does not send the email. There is no bounce in the email spool or any other errors.
I have spent the better part of today searching for an answer, and many of the searches end up in dead ends with little to no information about using cron to send email via AWS SES.
How can I fix this issue?
One "problem" with cron is that lack of environment variables (for obvious security reasons). You are probably missing PATH and HOME. You can define those in the script directly or in the crontab file.
Add PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/us‌​r/bin to the crontab before you call the sendmail script and it should work
#!/bin/bash
#Adding the path
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# copy the cw file
cp /var/log/cwr.log /cwr_analysis/cwr.log
# append the cw info to the subject file
cat /cwr_analysis/subject.txt /cwr_analysis/cwr.log > /cwr_analysis/daily.txt
# send the mail
/usr/sbin/sendmail -f from@example.com to@example.com < /cwr_analysis/daily.txt
You'll have to test until all the necessary variables are defined as required by the script.
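Alternatively, keep the script unchanged and define the environment in the crontab itself; a sketch (edit with crontab -e):
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
*/5 * * * * /home/ec2-user/cwr_email.sh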

Changing IP log with GPS information and mail. I need robustness

I've created a script in order to receive an email with the WAN IP information and GPS location of my MacBook Pro. The content of the script is this:
#!/bin/bash
# -*- ENCODING: UTF-8 -*-
if [ ! -e /tmp/ip ]; then
    curl -s icanhazip.com > /tmp/ip
fi
curl -s icanhazip.com > /tmp/ip2
newip=$(diff /tmp/ip /tmp/ip2 | wc -l)
if [ "$newip" -ne 0 ]; then
    mv -f /tmp/ip2 /tmp/ip
    date > IPlog.txt
    curl -s icanhazip.com >> IPlog.txt
    sudo ./Downloads/whereami >> IPlog.txt
    mailx mymailadress@mail.com < IPlog.txt
    rm IPlog.txt
else
    rm /tmp/ip2
fi
Every minute the system executes this script, which verifies whether the WAN IP has changed. If it has changed, the script sends me a mail with the new information. The problems are:
1. The mail is not always sent correctly. Sometimes I don't receive it.
2. The mail doesn't contain all the info. Sometimes it includes only the new WAN IP address.
3. Sometimes the mail is flagged as spam, and I don't know why, because the sender is always the same address.
I have some suggestions to debug your problems.
First, you should use a different location than /tmp to store the IP. If your system wipes /tmp on boot and gets a new WAN IP after boot, you would lose the previously recorded IP.
Check the exit code of mailx when sending, using $?; 0 means success. You could run a loop and keep trying to send until you get exit code 0, as in the sketch below.
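A minimal sketch of that retry loop (the cap of 5 attempts and the 10-second pause are arbitrary choices, not from the original script):
attempts=0
until mailx mymailadress@mail.com < IPlog.txt; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge 5 ]; then
        echo "mailx kept failing; giving up" >&2
        break
    fi
    sleep 10   # wait before retrying
done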
You could build the info for the mail in a local variable instead of a file:
IPLog=$(date)
IPLog+=$'\n'$(curl -s icanhazip.com)
(The $'\n' keeps the date and the IP on separate lines.)
The spam problem might be due to the IP address in the mail, or to whatever ./Downloads/whereami is adding to the file. Adding the sending email address as a trusted sender might help.
Check the email header for information about spam score.

How to execute shell script on two servers

I want to automate the scenario below:
I have two servers connected on a network. I want to fetch /var/log/messages on both servers. I tried to automate it as below, but I couldn't proceed from step 4: after logging into the other server, the rest of the script does not run.
1 echo "Hello $HOSTNAME"
2 date
3 echo -n > /var/log/messages
4 ssh 10.30.3.2;echo -n > /var/log/messages;exit
5 ls
6 cp /var/log/messages > my_bug_log.txt
7 ssh 10.30.3.2;cp /var/log/messages > my_bug_log.txt
How can I automate fetching the logs from both servers?
EDITED :
1 #!/bin/bash
2
3 echo "Hello $HOSTNAME"
4 date
5 echo -n > /var/log/messages
6 ssh 10.30.3.2 "echo -n > /var/log/messages ";exit
7 echo "welcome"
The echo "welcome" is not executed after exiting from the other host.
EDITED :
ssh 10.30.3.2 "cd /var/log" "touch bug_iteration_$i" "cp /var/log/messages > bug_iteration_$i"
Fetching both message logs from the remote servers is fairly easy; however, you don't seem to be using the correct tool for the job. This answer presumes that you are familiar with creating SSH keys with ssh-keygen to allow passwordless connections between the hosts, and that this is set up and working properly. I will also presume you have the permissions needed to copy the message logs.
The correct tool for the job is rsync (others will work, but rsync is the de facto standard). What you want to do to retrieve the files is:
rsync -uav 10.30.3.2:/var/log/messages /path/to/store/messages.10.30.3.2
This will get /var/log/messages on 10.30.3.2 and save it on the local machine at /path/to/store/messages.10.30.3.2.
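To cover the "both servers" part of the question, a sketch that collects the local copy alongside the remote one (/path/to/store is a placeholder directory):
# keep the local log next to the one fetched from 10.30.3.2
cp /var/log/messages /path/to/store/messages.local
rsync -uav 10.30.3.2:/var/log/messages /path/to/store/messages.10.30.3.2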
Now, if you want to modify the remote log in some way, as your echo -n > /var/log/messages suggests, before using rsync to retrieve it, remember that ssh will execute any command you tell it to on the remote host. So if you want to enter something in the remote log before retrieving it, you can use:
ssh 10.30.3.2 "echo -n 'something' > /var/log/messages"
(I'm not sure of your reason for suppressing the newline in echo, but to each his own.) Another trick for easily executing multiple commands on 10.30.3.2 is to create a script on 10.30.3.2 that does what you need and make sure it has the execute bit set. Then you can run the script on 10.30.3.2 from your machine via ssh:
ssh 10.30.3.2 /path/to/your/script.sh
If this hasn't answered your question, leave a comment. It was somewhat unclear from your post what you were actually attempting to do.
after comment
It is still unclear what you are trying to do. It appears that you want to echo the hostname and date, truncate the local messages file with echo -n > /var/log/messages, have ssh 10.30.3.2 truncate its /var/log/messages file, and then continue. But when ssh 10.30.3.2 "echo -n > /var/log/messages " completes, your next command, exit, causes the script you are running to exit before it can echo "welcome". You don't need that exit there.
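In other words, dropping the exit is enough for the rest of the script to run:
ssh 10.30.3.2 "echo -n > /var/log/messages"
echo "welcome"   # now reached, because the script no longer exits early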
second addendum:
Let's do it this way. You want to run the same commands on each host and you want to be able to run those commands on a remote host via ssh. So let's create a script on each box in /usr/local/bin/empty_message_log.sh that contains the following:
#!/bin/bash
echo "Hello $HOSTNAME"        # echoes Hello <hostname> to the terminal
date                          # echoes the date to the terminal
echo -n > /var/log/messages   # truncates /var/log/messages to (empty)
if [ "$HOSTNAME" == "fillin localhost hostname" ]; then
    # runs this same script on 10.30.3.2
    # only run if called on the local machine
    ssh 10.30.3.2 /usr/local/bin/empty_message_log.sh
fi
echo "welcome"                # echoes welcome and exits
Now make sure it has the execute bit set:
chmod 0755 /usr/local/bin/empty_message_log.sh
# adjust permissions as required
Put this script on all the hosts where you want this capability. Now you can call this script on your local machine, and it will run the same set of commands on the remote host at 10.30.3.2. It will only call and execute the script remotely if "fillin localhost hostname" matches the box it is run on.
Did you consider using scp to fetch the files you need?
Apart from that, if you need to perform the same actions on multiple machines, you might look at Ansible (http://www.ansibleworks.com).

Send shell script eval results by email

This question follows on from my own answer to one of my previous questions. The script I have given there works just fine. To round things off, I would like to grab the output of the eval'd rsync command and email it off. I haven't got the foggiest idea how that should be done.
Update
In response to the comments I am providing more details in this question. The script in question copied over from my own answer to my other question:
#! /bin/bash
# Backup to EVBackup Server
local="/var/lib/automysqlbackup/daily/dbname"
# replace dbname as appropriate
svr="$(uname -n)"
remote="${svr/.example.com/}"
# strip out the .example.com bit to get the destination folder on the remote server
remote+="/"
evb="user@user.evbackup.com:"
# user will have to be replaced with your username
evb+=$remote
cmd='rsync -avz -e "ssh -i /backup/ssh_key" '
# your ssh key location may well be different
cmd+="$local "
cmd+="$evb"
# at this point cmd will be something like
# rsync -avz -e "ssh -i /backup/ssh_key" /home bob@bob.evbackup.com:home-data
eval "$cmd"
When you run this script from the terminal, you see the output from the rsync command. What I would like to do is capture that output in a shell variable and then send it off by email, so I can keep a human aware of any issues that might have been encountered during the rsync.
I can probably figure out how to send email from a shell script. What has me stumped is how to capture the payload for that email, i.e. the rsync output.
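A minimal sketch of the usual approach, assuming a working sendmail on the box (the recipient address is a placeholder): capture stdout and stderr with command substitution, then pipe a composed message into sendmail.
# capture everything rsync prints (stdout and stderr) into a variable
output=$(eval "$cmd" 2>&1)
# compose a message with a Subject header and mail it
printf 'Subject: rsync backup report for %s\n\n%s\n' "$svr" "$output" | /usr/sbin/sendmail admin@example.com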

How to use SSH to run a local shell script on a remote machine?

I have to run a local shell script (Windows/Linux) on a remote machine.
I have SSH configured on both machine A and machine B. My script is on machine A, and it will run some of my code on the remote machine, machine B.
The local and remote computers can be either Windows or Unix based systems.
Is there a way to do this using plink/ssh?
If Machine A is a Windows box, you can use Plink (part of PuTTY) with the -m parameter, and it will execute the local script on the remote server.
plink root@MachineB -m local_script.sh
If Machine A is a Unix-based system, you can use:
ssh root@MachineB 'bash -s' < local_script.sh
You shouldn't have to copy the script to the remote server to run it.
This is an old question, and Jason's answer works fine, but I would like to add this:
ssh user@host <<'ENDSSH'
#commands to run on remote host
ENDSSH
This can also be used with su and with commands which require user input. (Note the quoted heredoc delimiter: <<'ENDSSH' prevents local variable expansion.)
Since this answer keeps getting traffic, I'll add even more info about this wonderful use of the heredoc:
You can nest commands with this syntax, and that's the only way nesting seems to work (in a sane way):
ssh user@host <<'ENDSSH'
#commands to run on remote host
ssh user@host2 <<'END2'
# Another bunch of commands on another host
wall <<'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
You can actually have a conversation with some services like telnet, ftp, etc., but remember that a heredoc just sends the stdin as text; it doesn't wait for a response between lines.
I just found out that you can indent the insides with tabs if you use <<-END:
ssh user@host <<-'ENDSSH'
#commands to run on remote host
ssh user@host2 <<-'END2'
# Another bunch of commands on another host
wall <<-'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<-'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
(I think this should work)
Also see
http://tldp.org/LDP/abs/html/here-docs.html
Also, don't forget to escape variables if you want them to be expanded on the destination host.
This has caught me out in the past.
For example:
user@host> ssh user2@host2 "echo \$HOME"
prints out /home/user2
while
user@host> ssh user2@host2 "echo $HOME"
prints out /home/user
Another example:
user@host> ssh user2@host2 "echo hello world | awk '{print \$1}'"
prints out "hello" correctly.
This is an extension of YarekT's answer, combining inline remote commands with passing ENV variables from the local machine to the remote host, so you can parameterize your scripts on the remote side:
ssh user@host ARG1="$ARG1" ARG2="$ARG2" 'bash -s' <<'ENDSSH'
# commands to run on remote host
echo $ARG1 $ARG2
ENDSSH
I found this exceptionally helpful by keeping it all in one script so it's very readable and maintainable.
Why this works. ssh supports the following syntax:
ssh user@host remote_command
In bash we can specify environment variables to define prior to running a command on a single line like so:
ENV_VAR_1='value1' ENV_VAR_2='value2' bash -c 'echo $ENV_VAR_1 $ENV_VAR_2'
That makes it easy to define variables prior to running a command. In this case, echo is the command we're running; everything before echo defines environment variables.
So we combine those two features and YarekT's answer to get:
ssh user@host ARG1="$ARG1" ARG2="$ARG2" 'bash -s' <<'ENDSSH'...
In this case we are setting ARG1 and ARG2 to local values and sending everything after user@host as the remote_command. When the remote machine executes the command, ARG1 and ARG2 are set to the local values, thanks to local command-line evaluation, which defines environment variables on the remote server; bash -s then executes the script using those variables. Voila.
<hostA_shell_prompt>$ ssh user@hostB "ls -la"
That will prompt you for a password, unless you have copied your hostA user's public key to the authorized_keys file in the ~/.ssh directory of the user's home on hostB. That will allow for passwordless authentication (if accepted as an auth method in the SSH server's configuration).
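The key is usually installed with OpenSSH's ssh-copy-id (a sketch; substitute the real user and host):
# appends your default public key to ~/.ssh/authorized_keys on hostB
ssh-copy-id user@hostB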
I've started using Fabric for more sophisticated operations. Fabric requires Python and a couple of other dependencies, but only on the client machine; the server need only be an SSH server. I find this tool to be much more powerful than shell scripts handed off to SSH, and well worth the trouble of setting up (particularly if you enjoy programming in Python). Fabric handles running scripts on multiple hosts (or hosts of certain roles), helps facilitate idempotent operations (such as adding a line to a config file, but not if it's already there), and allows construction of more complex logic (such as the Python language can provide).
cat ./script.sh | ssh <user>@<host>
chmod +x script.sh
ssh -i key-file root@111.222.3.444 < ./script.sh
Try running ssh user@remote sh ./script.unx.
Assuming you mean you want to do this automatically from a "local" machine, without manually logging into the "remote" machine, you should look into a TCL extension known as Expect; it is designed precisely for this sort of situation. I've also provided a link to a script for logging in and interacting via SSH.
https://www.nist.gov/services-resources/software/expect
http://bash.cyberciti.biz/security/expect-ssh-login-script/
ssh user@hostname ". ~/.bashrc; cd path-to-file; . filename.sh"
It is highly recommended to source the environment file (.bashrc/.bash_profile/.profile) before running anything on the remote host, because the target and source hosts' environment variables may differ.
I use this one to run a shell script on a remote machine (tested on /bin/bash):
ssh deploy@host . /home/deploy/path/to/script.sh
If you want to execute a command like this:
temp=`ls -a`
echo $temp
the command in backticks will cause errors (the local shell expands it before ssh runs). The command below solves this problem:
ssh user@host '''
temp=`ls -a`
echo $temp
'''
If the script is short and is meant to be embedded inside your script and you are running under bash shell and also bash shell is available on the remote side, you may use declare to transfer local context to remote. Define variables and functions containing the state that will be transferred to the remote. Define a function that will be executed on the remote side. Then inside a here document read by bash -s you can use declare -p to transfer the variable values and use declare -f to transfer function definitions to the remote.
Because declare takes care of the quoting and will be parsed by the remote bash, the variables are properly quoted and functions are properly transferred. You may just write the script locally, usually I do one long function with the work I need to do on the remote side. The context has to be hand-picked, but the following method is "good enough" for any short scripts and is safe - should properly handle all corner cases.
somevar="spaces or other special characters"
somevar2="!@#$%^"
another_func() {
    mkdir -p "$1"
}
work() {
    another_func "$somevar"
    touch "$somevar"/"$somevar2"
}
ssh user@server 'bash -s' <<EOT
$(declare -p somevar somevar2) # transfer variables values
$(declare -f work another_func) # transfer function definitions
work # call the function
EOT
The answer here (https://stackoverflow.com/a/2732991/4752883) works great if you're trying to run a script on a remote Linux machine using plink or ssh. It will work if the script has multiple lines on Linux.
However, if you are trying to run a batch script located on a local Linux/Windows machine, your remote machine is Windows, and the script consists of multiple lines, then
plink root@MachineB -m local_script.bat
won't work. Only the first line of the script will be executed. This is probably a limitation of plink.
Solution 1:
To run a multiline batch script (especially if it's relatively simple, consisting of a few lines):
If your original batch script is as follows:
cd C:\Users\ipython_user\Desktop
python filename.py
you can combine the lines together using the "&&" separator, as suggested in https://stackoverflow.com/a/8055390/4752883, in your local_script.bat file:
cd C:\Users\ipython_user\Desktop && python filename.py
After this change, you can then run the script as pointed out by @JasonR.Coombs in https://stackoverflow.com/a/2732991/4752883 with:
plink root@MachineB -m local_script.bat
Solution 2:
If your batch script is relatively complicated, it may be better to use a batch script which encapsulates the plink command as well, as pointed out by @Martin in https://stackoverflow.com/a/32196999/4752883:
rem Open tunnel in the background
start plink.exe -ssh [username]@[hostname] -L 3307:127.0.0.1:3306 -i "[SSH key]" -N
rem Wait a second to let Plink establish the tunnel
timeout /t 1
rem Run the task using the tunnel
"C:\Program Files\R\R-3.2.1\bin\x64\R.exe" CMD BATCH qidash.R
rem Kill the tunnel
taskkill /im plink.exe
This script does ssh into a target remote machine and runs some commands there. It is an expect script, so install expect before running it (on macOS: brew install expect):
#!/usr/bin/expect
set username "enterusernamehere"
set password "enterpasswordhere"
set hosts "enteripaddressofhosthere"
spawn ssh $username@$hosts
expect "$username@$hosts's password:"
send -- "$password\n"
expect "$"
send -- "somecommand on target remote machine here\n"
sleep 5
expect "$"
send -- "exit\n"
You can use runoverssh:
sudo apt install runoverssh
runoverssh -s localscript.sh user host1 host2 host3...
-s runs a local script remotely
Useful flags:
-g use a global password for all hosts (single password prompt)
-n use SSH instead of sshpass, useful for public-key authentication
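For example, to run the local script on several hosts with a single shared password prompt (a usage sketch built from the flags above; the user and host names are placeholders):
runoverssh -g -s localscript.sh admin host1 host2 host3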
If it's one script, the above solutions are fine.
For anything more, I would set up Ansible to do the job. It works in the same way (Ansible uses ssh to execute the scripts on the remote machine, for both Unix and Windows).
It will be more structured and maintainable.
It is unclear if the local script uses locally set variables, functions, or aliases.
If it does this should work:
myscript.sh:
#!/bin/bash
myalias $myvar
myfunction $myvar
It uses $myvar, myfunction, and myalias. Let us assume they are set locally and not on the remote machine.
Make a bash function that contains the script:
eval "myfun() { `cat myscript.sh`; }"
Set variable, function, and alias:
myvar=works
alias myalias='echo This alias'
myfunction() { echo This function "$#"; }
And "export" myfun, myfunction, myvar, and myalias to server using env_parallel from GNU Parallel:
env_parallel -S server -N0 --nonall myfun ::: dummy
Extending the answer from @cglotr: to write an inline command, use printf. It is useful for simple commands and supports multiline commands using the '\n' escape.
For example:
printf "cd /to/path/your/remote/machine/log \n tail -n 100 Server.log" | ssh <user>@<host> 'bash -s'
Don't forget to add bash -s.
I created a solution that works better for me by combining the use of a heredoc from Yarek T's answer with the piped-cat method from cglotr's answer, along with some other tricks for non-interactive login (using sshpass), using variables from the local and remote host in the script, and enabling sudo commands. The code is longer just because it includes some additional tricks that are likely desired, but the original questioner didn't ask for them.
The problem I have with Yarek's answer is that all the warnings and commands in the heredoc print to the screen. The problem I have with cglotr's answer is that it requires a script file and a complex command with additional interaction to execute the script. With my solution, I write a script that does everything by simply calling it with the remote host's IP address as the first argument, like this:
./MYSCRIPT REMOTE_IP_ADDRESS
The script to be run on the remote host is saved to a variable within the script on the local host using a heredoc, so you don't need to do any quote escaping. Then the variable containing the script is echo-piped to sshpass. If you indent the heredoc body, be sure to use tabs and not spaces, since <<- strips only tabs (you'll get spaces instead of tabs when you copy the code). Here is an example of the remote script within the local script:
#!/bin/bash
# Input argument 1 should be the target host IP address (required)
RX_IP="/(\b25[0-5]|\b2[0-4][0-9]|\b[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}/"
IS_IP=$(echo "$1" | sed -nr "${RX_IP}p" | wc -l)
if (( $IS_IP )); then
    USERNAME=remoteuser
    HOSTNAME=$1
    # Export the SSH password to an environment variable for sshpass and sudo.
    # The space before the command prevents saving the command to history.
     export SSHPASS=mypassword
    read -r -d '' SCRIPT <<-EOS
# Enable sudo commands with the following command.
# The space before echo prevents saving the command to history.
 echo $SSHPASS | sudo -Sv
# Do stuff here. Escape variables that should be expanded on the remote host.
# For example, escape the print variable in an awk command:
# This command lists all USB block device partitions.
ls -l /dev /dev/mapper | awk '/^b/ && /sd[a-z][1-9]/ {print \$10}'
exit
EOS
    echo "$SCRIPT" | sshpass -e ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${USERNAME}@${HOSTNAME} &>/dev/null
    echo 'DONE'
else
    echo "Missing IP address of target host."
    echo "Usage: ./SCRIPT_NAME IP_ADDRESS"
fi
You need to install sshpass on the local host, like this (for Debian-based distros):
sudo apt install sshpass
There is another approach: you can copy your script to the host with scp and then execute it easily.
First, copy the script over to Machine B using scp
[user@machineA]$ scp /path/to/script user@machineB:/home/user/path
Then, just run the script
[user@machineA]$ ssh user@machineB "/home/user/path/script"
This will work if you have given executable permission to the script.
