Following the Integrating Amazon SES with Sendmail guide, I configured SES so it can send emails from a verified email address. I was able to successfully send email from the command line using the verified address:
sudo /usr/sbin/sendmail -f from@example.com to@example.com < file_to_send.txt
Next, I set up a bash script to gather some daily report information.
#!/bin/bash
# copy the cw file
cp /var/log/cwr.log /cwr_analysis/cwr.log
# append the cw info to the subject file
cat /cwr_analysis/subject.txt /cwr_analysis/cwr.log > /cwr_analysis/daily.txt
# send the mail
/usr/sbin/sendmail -f from@example.com to@example.com < /cwr_analysis/daily.txt
If I run the bash script manually from the command line, the report is gathered and emailed as it should be. I changed the permissions on the file to allow it to be executed by root (similar to other CRON jobs on the AWS instance):
-rwxr-xr-x 1 root root 375 Jan 6 17:37 cwr_email.sh
PROBLEM
I set up a CRON job and set it to run every 5 minutes for testing (the script is designed to run once per day once production starts):
*/5 * * * * /home/ec2-user/cwr_email.sh
The bash script copies the log and builds the daily.txt file properly but does not send the email. There is no bounce in the email spool or any other error.
I have spent the better part of today searching for an answer, and many of the searches end up at dead ends with little to no information about using a CRON job to send email via AWS SES.
How can I fix this issue?
One "problem" with cron is that lack of environment variables (for obvious security reasons). You are probably missing PATH and HOME. You can define those in the script directly or in the crontab file.
Add PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin to the crontab before you call the sendmail script and it should work
#!/bin/bash
# Add the PATH so cron can find sendmail and the other binaries
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# copy the cw file
cp /var/log/cwr.log /cwr_analysis/cwr.log
# append the cw info to the subject file
cat /cwr_analysis/subject.txt /cwr_analysis/cwr.log > /cwr_analysis/daily.txt
# send the mail
/usr/sbin/sendmail -f from@example.com to@example.com < /cwr_analysis/daily.txt
You'll have to test until all the variables the script requires are defined.
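If it is still not clear which variable is missing, one debugging trick (a sketch; the file path is arbitrary) is to dump cron's environment and compare it with your interactive shell:
* * * * * env > /tmp/cron_env.txt
Compare /tmp/cron_env.txt with the output of env in your login shell; any variable the script relies on that is missing on the cron side needs to be set explicitly, just like PATH above.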
I'm attempting to have my Raspberry Pi use rsync to upload files to an SFTP server several times throughout the day. To do so, I created a bash script and installed a crontab entry to run it every couple of hours during the day. If I run the bash script manually, it works perfectly, but it never seems to run via crontab.
I did the following:
"sudo nano upload.sh"
Create the following bash script:
#!/bin/bash
sshpass -p "password" rsync -avh -e ssh /local/directory host.com:/remote/directory
"sudo chmod +x upload.sh"
Test running it with "./upload.sh"
Now, I have tried all of the following ways to add it to crontab ("sudo crontab -e"):
30 8,10,12,14,16 * * * ./upload.sh
30 8,10,12,14,16 * * * /home/picam/upload.sh
30 8,10,12,14,16 * * * bash /home/picam/upload.sh
None of these work, judging by the fact that new files are not uploaded. I have another bash script running using method 2 above without issue. I would appreciate any insight into what might be going wrong. I have done this on eight separate Raspberry Pi 3Bs that all take photos throughout the day, and the crontab upload works on none of them.
UPDATE:
Upon logging the crontab job, I found the following error:
Host key verification failed.
rsync error: unexplained error (code 255) at rsync.c(703) [sender=3.2.3]
This error also occurred if I ran the bash script without first connecting to the server and accepting its host key. How do I get around this when calling rsync from crontab?
Check whether the script works properly at all (paste it into a shell).
Check whether the cron daemon is running: systemctl status cron.service (crond.service on some distributions).
The output should show "active (running)".
Then you can try adding a simple test job to cron: * * * * * echo "test" >> /path/you/want/file.txt
and check whether that job runs properly.
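If the test job runs, the next step is to capture the real script's output so the actual error message ends up in a log (a sketch; adjust the log path as you like):
30 8,10,12,14,16 * * * /home/picam/upload.sh >> /home/picam/upload.log 2>&1
Checking that log after the next scheduled run should show why rsync fails under cron.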
Thanks to the logging recommendation from Gordon Davisson in the comments, I was able to identify the problem.
Logging revealed the error mentioned in the question update above, where rsync would choke on host key verification.
My solution: tell ssh (as invoked by rsync) not to check the host key. I simply changed the upload.sh bash file to the following:
#!/bin/bash
sshpass -p "password" rsync -avh -e "ssh -o StrictHostKeyChecking=no" /local/directory host.com:/remote/directory
Working perfectly now -- hope this helps someone.
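If you would rather keep strict host key checking enabled, an alternative (a sketch; run it as the same user whose crontab calls the script, since that user's ~/.ssh/known_hosts is consulted) is to pre-accept the server's key once:
ssh-keyscan host.com >> ~/.ssh/known_hosts
After that, the original upload.sh should work from cron without disabling the check.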
On a CentOS 7.2 server, the following command runs successfully when executed manually:
scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +"%Y-%m-%d").csv /home/vyv/data/AWS/ 2>&1 >> /home/vyv/data/AWS/scp-log.txt
This command copies the file whose name contains the current date from a directory on the remote server to a directory on the local server, and writes output to a log file in the same directory.
Public key authentication is set up, so there is no prompt for the password when run manually.
I have it configured in crontab to run 3 minutes after the top of every hour, in the following format:
3 * * * * scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +"%Y-%m-%d").csv /home/vyv/data/AWS/ 2>&1 >> /home/vyv/data/AWS/scp-log.txt
However, I wait patiently and don't see any files being downloaded automatically.
I've checked the /var/log/cron logs and see an entry on schedule like this:
Feb 9 17:30:01 intranet CROND[9380]: (wzw) CMD (scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +")
There are other similar jobs set in crontab that work perfectly.
Can anyone offer suggestions/clues on why this is not working?
Gratefully,
Rakesh.
Use the full path for scp (or any other binary) in crontab:
3 * * * * /usr/bin/scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +"%Y-%m-%d").csv /home/vyv/data/AWS/ 2>&1 >> /home/vyv/data/AWS/scp-log.txt
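Also worth checking: the truncated log line above stops right at the first % sign, and cron treats an unescaped % as the end of the command line. A sketch that combines the full path with escaped percent signs (and sends errors to the log as well):
3 * * * * /usr/bin/scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +"\%Y-\%m-\%d").csv /home/vyv/data/AWS/ >> /home/vyv/data/AWS/scp-log.txt 2>&1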
I wrote the following bash script to send me an alert if there is a problem with my website:
#!/bin/bash
# 1. download the page
BASE_URL="https://www.example.com/ja"
JS_URL="https://www.example.com/"
# 2. search the page for the following URL: /sites/default/files/google_tag/google_tag.script.js?[FIVE-CHARACTER STRING WITH LETTERS AND NUMBERS]
curl -k -L ${BASE_URL} 2>/dev/null | grep -Eo "/sites/default/files/google_tag/google_tag.script.js?[^<]+" | while read line
do
# 3. download the js file
if curl -k -L ${JS_URL}/$line | grep gtm_preview >/dev/null 2>&1; then
# 4. check if this js file has the text "gtm_preview" or not; if it does, send an email
# echo "Error: gtm_preview found"
sendmail error-ec2@example.com < email-gtm-live.txt
else
echo "No gtm_preview tag found."
fi
done
I am running this from an Amazon EC2 Ubuntu instance. When I execute the script manually like ./script.sh, I receive an email in my webmail inbox for example.com.
However, when I configure this script to run via crontab, the mail does not get sent via the Internet; instead, it gets sent to /var/mail on the EC2 instance.
I don't understand why this is happening or what I can do to fix it. Why does sendmail behave differently when run from bash versus when run from crontab?
Be aware that the PATH environment variable is different for crontab executions than it is for your typical interactive sessions. Also, not all of the same environment variables are set. Consider specifying the full path for the sendmail executable (which you can learn by issuing the 'which sendmail' command).
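For example, if 'which sendmail' prints /usr/sbin/sendmail in your interactive session, the mail line in the script could be written like this (a sketch; the path on your instance may differ):
/usr/sbin/sendmail error-ec2@example.com < email-gtm-live.txt
Note that email-gtm-live.txt is a relative path as well; cron runs jobs from the user's home directory, so an absolute path to that file may also be needed.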
I'm currently using Sikuli to upload a PDF file to a website server. This seems inefficient. Ideally, I would like to run a shell script and have it upload this file on a certain day/time (e.g. Sunday at 5 AM) without the use of Sikuli.
I'm currently running Mac OS Yosemite 10.10.1 and the FileZilla FTP Client.
Any help is greatly appreciated, thank you!
Create a bash file like this (replace all [variables] with actual values):
#!/bin/sh
cd [source directory]
# use binary mode so the PDF is not corrupted by an ASCII-mode transfer
ftp -n [destination host] <<END
user [user] [password]
binary
put [source file]
quit
END
Name it something like upload_pdf_to_server.sh
Make sure it has the right permissions to be executed:
chmod +x upload_pdf_to_server.sh
Set up a cron job to execute the file periodically, using the command crontab -e:
0 5 * * * /path/to/script/upload_pdf_to_server.sh >/dev/null 2>&1
(This one will execute the bash file every day at 5AM)
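If the upload ever fails silently, a variant that keeps a log instead of discarding the output makes debugging easier (a sketch; any writable log path will do):
0 5 * * * /path/to/script/upload_pdf_to_server.sh >> /path/to/script/upload_pdf.log 2>&1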
How to set cronjob
Cronjob generator
This is Srikanth from Hyderabad.
I am a Linux administrator at a corporate company. We have a Squid server, so I prepared a backup Squid server so that when the LIVE Squid server goes down I can put the backup server into LIVE.
My Squid servers run CentOS 5.5. I have prepared a script to back up all the configuration files in /etc/squid/ on the LIVE server to the backup server, i.e. it copies all files from the LIVE server's /etc/squid/ to the backup server's /etc/squid/.
Here's the script, saved as squidbackup.sh in the directory /opt/ with permissions 755 (rwxr-xr-x):
#! /bin/sh
username="<username>"
password="<password>"
host="Server IP"
expect -c "
spawn /usr/bin/scp -r <username>@Server IP:/etc/squid /etc/
expect {
"*password:*"{
send $password\r;
interact;
}
eof{
exit
}
}
"
Kindly note that this will be executed on the backup server, which will use the user mentioned in the script. I have created that user on the LIVE server and put the same in the script.
When I execute this script using the command below:
[root@localhost ~]# sh /opt/squidbackup.sh
everything works fine: the script downloads all the files from /etc/squid/ on the LIVE server to /etc/squid/ on the backup server.
Now the problem arises. If I set this in crontab as below (or with other timings):
50 23 * * * sh /opt/squidbackup.sh
I don't know what's wrong, but it does not download all the files; the cron job downloads only a few files from /etc/squid/ on the LIVE server to /etc/squid/ on the backup server.
Only a few files are downloaded when cron executes the script. If I run the script manually, it downloads all the files perfectly, without any errors or warnings.
If you have any more questions, please go ahead and post them.
I kindly request any available solutions.
Thank you in advance.
Thanks for your interest. I have tried what you said, and it shows the output below; previously I used to get the same output in the mail of the user on the Squid backup server.
Even the cron logs show the same thing, but I was not able to work out the exact error from the lines below.
Please note that only a few files are downloaded with cron.
spawn /usr/bin/scp -r <username>@ServerIP:/etc/squid /etc/
<username>@ServerIP's password:
Kindly check whether you can suggest anything else.
Try the simple options first. Capture stdout and stderr as shown below; that file should point to the problem.
Looking at the script, you may also need to specify the full path to expect. That could be an issue.
50 23 * * * sh /opt/squidbackup.sh >/tmp/cronout.log 2>&1
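Putting both suggestions together, here is a minimal sketch of the script with an absolute path to expect (verify it with 'which expect'; /usr/bin/expect is an assumption) and without relying on an interactive terminal, since cron provides none: the 'interact' in the original needs a terminal, so this sketch waits for eof instead and disables the expect timeout so a long transfer is not cut off.
#!/bin/sh
# sketch only: absolute paths so cron's limited PATH does not matter;
# assumes the password contains no characters special to the shell or Tcl
username="<username>"
password="<password>"
host="<Server IP>"
/usr/bin/expect -c "
set timeout -1
spawn /usr/bin/scp -r $username@$host:/etc/squid /etc/
expect {
    \"*password:*\" { send \"$password\r\"; exp_continue }
    eof { exit }
}
"
If only a few files still come across, the log from the crontab line above should show where the transfer stops.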