Automated Bash Upload Script with Crontab Raspberry Pi Not Running - bash

I'm attempting to have my Raspberry Pi use rsync to upload files to an SFTP server periodically throughout the day. To do so, I created a bash script and installed a crontab entry to run it every couple of hours during the day. If I run the bash script manually, it works perfectly, but it never seems to run via crontab.
I did the following:
"sudo nano upload.sh"
Create the following bash script:
#!/bin/bash
sshpass -p "password" rsync -avh -e ssh /local/directory host.com:/remote/directory
"sudo chmod +x upload.sh"
Test running it with "./upload.sh"
Now, I have tried all the following ways to add it to crontab ("sudo crontab -e")
30 8,10,12,14,16 * * * ./upload.sh
30 8,10,12,14,16 * * * /home/picam/upload.sh
30 8,10,12,14,16 * * * bash /home/picam/upload.sh
None of these work: new files are not uploaded. I have another bash script running via method 2 above without issue. I would appreciate any insight into what might be going wrong. I have done this on eight separate Raspberry Pi 3Bs that are all taking photos throughout the day; the crontab upload works on none of them.
UPDATE:
Upon logging the crontab job, I found the following error:
Host key verification failed.
rsync error: unexplained error (code 255) at rsync.c(703) [sender=3.2.3]
This error also occurred if I tried running my bash script without first connecting to the server via scp and accepting the host key. How do I get around this when calling rsync from crontab?

Check whether the script works properly at all (paste it into a shell).
Check whether your cron service is running properly: systemctl status crond.service (the service is named cron on Debian/Raspberry Pi OS and crond on Red Hat-based systems).
The output should be "active (running)".
Then you can try adding a simple test job to cron: * * * * * echo "test" >> /path/whichyou/want/file.txt
and check whether that job runs properly.
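For the actual job, it also helps to redirect the script's output to a log so errors like the one in the question update show up; a possible entry using the script path from the question (the log path is just an example):
30 8,10,12,14,16 * * * /home/picam/upload.sh >> /home/picam/upload.log 2>&1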

Thanks to the logging recommendation from Gordon Davisson in the comments, I was able to identify the problem.
The log showed the error mentioned in the question update above: rsync was choking on host key verification.
My solution: tell the ssh transport used by rsync not to check host keys. I simply changed the upload.sh bash file to the following:
#!/bin/bash
sshpass -p "password" rsync -avh -e "ssh -o StrictHostKeyChecking=no" /local/directory host.com:/remote/directory
Working perfectly now -- hope this helps someone.
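An alternative, if you would rather keep strict host key checking (not what the answer above does), is to accept the host's key once ahead of time so that the account the cron job runs under already trusts it, for example with ssh-keyscan:
# run once, as the same user the cron job runs under (host.com is the placeholder host from the question)
ssh-keyscan -H host.com >> ~/.ssh/known_hosts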

Related

Bash Command Runs Manually But NOT in Crontab

On a CentOS 7.2 server the following command runs successfully manually -
scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +"%Y-%m-%d").csv /home/vyv/data/AWS/ 2>&1 >> /home/vyv/data/AWS/scp-log.txt
This command simply takes the file that has the current date in its filename from a directory on the remote server, stores it in a directory on the local server, and writes a log file in the same directory.
Public key authentication is setup so there is no prompt for the password when run manually.
I have it configured in crontab to run 3 minutes after the top of every hour as in the following format -
3 * * * * scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +"%Y-%m-%d").csv /home/vyv/data/AWS/ 2>&1 >> /home/vyv/data/AWS/scp-log.txt
However, I wait patiently and don't see any files being downloaded automatically.
I've checked the /var/log/cron logs and see an entry on schedule like this -
Feb 9 17:30:01 intranet CROND[9380]: (wzw) CMD (scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +")
There are other similar jobs set in crontab that work perfectly.
Can anyone offer suggestions/clues on why this is not working?
Gratefully,
Rakesh.
Use the full path for scp (or any other binary) in crontab:
3 * * * * /usr/bin/scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +"%Y-%m-%d").csv /home/vyv/data/AWS/ 2>&1 >> /home/vyv/data/AWS/scp-log.txt
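Worth noting (not part of the answer above): in a crontab, an unescaped % is turned into a newline and everything after it is fed to the command's standard input, which matches the command being cut off right at the date format in the log line above. With the percent signs escaped (and 2>&1 placed after the >> so stderr also lands in the log), the entry might look like this:
3 * * * * /usr/bin/scp usrname@storage-server:/share/Data/homes/AWS/2301KM-$(date +"\%Y-\%m-\%d").csv /home/vyv/data/AWS/ >> /home/vyv/data/AWS/scp-log.txt 2>&1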

How to FTP a PDF file to a website server

I'm currently using Sikuli to upload a PDF file to a website server. This seems inefficient. Ideally I would like to run a shell script and have it upload this file on a certain day/time (i.e. Sunday at 5 AM) without the use of Sikuli.
I'm currently running Mac OS Yosemite 10.10.1 and the FileZilla FTP Client.
Any help is greatly appreciated, thank you!
Create a bash file like this (replace all [variables] with actual values):
#!/bin/sh
cd [source directory]
ftp -n [destination host]<<END
user [user] [password]
put [source file]
quit
END
Name it something like upload_pdf_to_server.sh
Make sure it has the right permissions to be executed:
chmod +x upload_pdf_to_server.sh
Set up a cron job to execute the file periodically, based on your needs, using crontab -e:
0 5 * * * /path/to/script/upload_pdf_to_server.sh >/dev/null 2>&1
(This one will execute the bash file every day at 5 AM.)
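Since the question asks for Sunday at 5 AM specifically, the day-of-week field (0 or 7 both mean Sunday) can restrict it:
0 5 * * 0 /path/to/script/upload_pdf_to_server.sh >/dev/null 2>&1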
How to set cronjob
Cronjob generator

Creating cron entry on server using ssh login within shell script

I need to upload a file (a bash script) to a remote server. I use the scp command. After the file has been copied to the remote server, I want to create a cron entry in the crontab file on that server.
However, the file upload and the writing of the cron entry need to occur within a single bash shell script, so that I only need to execute the script on my local machine; the script is then copied to the remote host and the cron entry is written to the crontab.
Is there a way that I can use an ssh command within the script that logs me into the remote server, opens the crontab file, and writes the cron entry?
Any help is very welcome
I would:
extract the user's crontab with crontab -l > somefile
modify that file with the desired job
import the new crontab with crontab somefile; see the sketch below for these steps combined over ssh
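A minimal sketch of those three steps run over ssh in one session (user@server, the temp file, and the job line are all placeholders):
ssh user@server '
  crontab -l > /tmp/current_cron 2>/dev/null        # dump the existing crontab (empty if none yet)
  echo "0 3 * * * /home/user/uploaded_script.sh" >> /tmp/current_cron   # append the desired job
  crontab /tmp/current_cron && rm /tmp/current_cron # install the modified crontab and clean up
'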
I just did something like this where I needed to create a multi-line crontab on a remote machine. By far the simplest solution was to pipe the content to the remote crontab command through ssh, like this:
echo "$CRON_CONTENTS" | ssh username@server crontab
mailo seemed almost right, but the command would be the second argument to the ssh command, like this:
ssh username@server 'echo "* * * * * /path/to/script/" >> /etc/crontab'
Or if your system doesn't automatically load /etc/crontab, you should be able to pipe to the crontab command like this:
ssh username@server 'echo "* * * * * myscript" | /usr/bin/crontab'
Say you want to copy $local to $remote on $host and add an hourly job there to run at 14 past every hour, using a single SSH session:
ssh "$host" "cat >'$remote' &&
    chmod +x '$remote' &&
    ( crontab -l;
      echo '14 * * * * $remote' ) | crontab" <"$local"
This could obviously be much more robust with proper error checking etc, but hopefully it should at least get you started.
The two keys here are that the ssh command accepts an arbitrarily complex shell script as the remote command, and gets its standard input from the local host.
(With double quotes around the script, all variables will be interpolated on the local host; so the command executed on the remote host will be something like cat >'/path/to/remote' && chmod +x '/path/to/remote' && ... With the single quotes, you could have whitespace in the file name, but I didn't put them in the crontab entry because it's so weird. If you need single quotes there as well, I believe it should work.)
You meant something like
ssh username@username.server.org && echo "* * * * * /path/to/script/" >> /etc/crontab
?

ssh-agent and crontab -- is there a good way to get these to meet?

I wrote a simple script which mails out svn activity logs nightly to our developers. Until now, I've run it on the same machine as the svn repository, so I didn't have to worry about authentication, I could just use svn's file:/// address style.
Now I'm running the script on a home computer, accessing a remote repository, so I had to change to svn+ssh:// paths. With ssh-key nicely set up, I don't ever have to enter passwords for accessing the svn repository under normal circumstances.
However, crontab did not have access to my ssh-keys / ssh-agent. I've read about this problem in a few places on the web, and it's also alluded to here, without resolution:
Why ssh fails from crontab but succeeds when executed from a command line?
My solution was to add this to the top of the script:
### TOTAL HACK TO MAKE SSH-KEYS WORK ###
eval `ssh-agent -s`
This seems to work under MacOSX 10.6.
My question is, how terrible is this, and is there a better way?
keychain is what you need! Just install it and add the following code to your .bash_profile:
keychain ~/.ssh/id_dsa
Then use the code below in your script to load the ssh-agent environment variables:
. ~/.keychain/$HOSTNAME-sh
Note: keychain also generates code for csh and fish shells.
In addition, if your key has a passphrase, keychain will ask for it once (valid until you reboot the machine or kill the ssh-agent).
Copied answer from https://serverfault.com/questions/92683/execute-rsync-command-over-ssh-with-an-ssh-agent-via-crontab
When you run ssh-agent -s, it launches a background process that you'll need to kill later. So, the minimum is to change your hack to something like:
eval `ssh-agent -s`
svn stuff
kill $SSH_AGENT_PID
However, I don't understand how this hack is working. Simply running an agent without also running ssh-add will not load any keys. Perhaps MacOS' ssh-agent is behaving differently than its manual page says it does.
I had a similar problem. My script (that relied upon ssh keys) worked when I ran it manually but failed when run with crontab.
Manually defining the appropriate key with
ssh -i /path/to/key
didn't work.
But eventually I found out that SSH_AUTH_SOCK was empty when crontab was running SSH. I wasn't exactly sure why, but I just ran
env | grep SSH
from a normal session, copied the returned value, and added this definition to the head of my crontab:
SSH_AUTH_SOCK="/tmp/value-you-get-from-above-command"
I'm out of my depth as to what's happening here, but it fixed my problem. The crontab runs smoothly now.
One way to recover the pid and socket of a running ssh-agent would be:
SSH_AGENT_PID=`pgrep -U $USER ssh-agent`
for PID in $SSH_AGENT_PID; do
    let "FPID = $PID - 1"
    FILE=`find /tmp -path "*ssh*" -type s -iname "agent.$FPID"`
    export SSH_AGENT_PID="$PID"
    export SSH_AUTH_SOCK="$FILE"
done
This of course presumes that you have pgrep installed on the system and that only one ssh-agent is running; if there are several, it will take the one that pgrep finds last.
My solution, based on pra's, slightly improved to kill the process even on script failure:
eval `ssh-agent`
function cleanup {
    /bin/kill $SSH_AGENT_PID
}
trap cleanup EXIT
ssh-add
svn-stuff
Note that I must call ssh-add on my machine (Scientific Linux 6).
To set up automated processes without automated password/passphrase hacks,
I use a separate IdentityFile that has no passphrase, and restrict the matching authorized_keys entries on the target machines with a from="automated.machine.com" prefix, etc.
I created a public-private keyset for the sending machine without a passphrase:
ssh-keygen -f .ssh/id_localAuto
(Hit return when prompted for a passphrase)
I set up a remoteAuto Host entry in .ssh/config:
Host remoteAuto
    HostName remote.machine.edu
    IdentityFile ~/.ssh/id_localAuto
and the remote.machine.edu:.ssh/authorized_keys with:
...
from="192.168.1.777" ssh-rsa ABCDEFGabcdefg....
...
Then ssh doesn't need the externally authenticated authorization provided by ssh-agent or keychain, so you can use commands like:
scp -p remoteAuto:watchdog ./watchdog_remote
rsync -Ca remoteAuto:stuff/* remote_mirror
svn checkout svn+ssh://remoteAuto/path
svn update
...
Assuming that you have already configured SSH and that the script works fine from a terminal, using keychain is definitely the easiest way to ensure that the script works fine in crontab as well.
Since keychain is not included in most Unix/Linux distributions, here is the step-by-step procedure.
1. Download the appropriate rpm package depending on your OS version from http://pkgs.repoforge.org/keychain/. Example for CentOS 6:
wget http://pkgs.repoforge.org/keychain/keychain-2.7.0-1.el6.rf.noarch.rpm
2. Install the package:
sudo rpm -Uvh keychain-2.7.0-1.el6.rf.noarch.rpm
3. Generate keychain files for your SSH key; they will be located in the ~/.keychain directory. Example for id_rsa:
keychain ~/.ssh/id_rsa
4. Add the following line to your script, anywhere before the first command that uses SSH authentication:
source ~/.keychain/$HOSTNAME-sh
I personally tried to avoid using additional programs for this, but nothing else I tried worked. This worked just fine.
Inspired by some of the other answers here (particularly vpk's) I came up with the following crontab entry, which doesn't require an external script:
PATH=/usr/bin:/bin:/usr/sbin:/sbin
* * * * * SSH_AUTH_SOCK=$(lsof -a -p $(pgrep ssh-agent) -U -F n | sed -n 's/^n//p') ssh hostname remote-command-here
Here is a solution that will work if you can't use keychain and if you can't start an ssh-agent from your script (for example, because your key is passphrase-protected).
Run this once:
nohup ssh-agent > ~/.ssh-agent-file &
. ~/.ssh-agent-file
ssh-add # you'd enter your passphrase here
In the script you are running from cron:
# start of script
. ${HOME}/.ssh-agent-file
# now your key is available
Of course this allows anyone who can read '~/.ssh-agent-file' and the corresponding socket to use your ssh credentials, so use with caution in any multi-user environment.
Your solution works but it will spawn a new agent process every time as already indicated by some other answer.
I faced similar issues and found this blog post useful, as well as the shell script by Wayne Walker on GitHub that is mentioned in the blog.
Good luck!
Not enough reputation to comment on @markshep's answer, just wanted to add a simpler solution. lsof was not listing the socket for me without sudo, but find is enough:
* * * * * SSH_AUTH_SOCK="$(find /tmp/ -type s -path '/tmp/ssh-*/agent.*' -user $(whoami) 2>/dev/null)" ssh-command
The find command searches the /tmp directory for sockets whose full path names match those of ssh-agent socket files and that are owned by the current user. It redirects stderr to /dev/null to ignore the many permission-denied errors that running find over directories it doesn't have access to usually produces.
The solution assumes only one socket will be found for that user.
The target and path match might need modification for other distributions/ssh versions/configurations, but it should be straightforward.

Why ssh fails from crontab but succeeds when executed from a command line?

I have a bash script that does ssh to a remote machine and executes a command there, like:
ssh -nxv user@remotehost echo "hello world"
When I execute the command from a command line it works fine, but it fails when it is executed as part of crontab (error code 255: cannot establish SSH connection). Details:
...
Waiting for server public key.
Received server public key and host key.
Host 'remotehost' is known and matches the XXX host key.
...
Remote: Your host key cannot be verified: unknown or invalid host key.
Server refused our host key.
Trying XXX authentication with key '...'
Server refused our key.
...
When executing locally I'm acting as root; crontab runs as root as well.
Executing 'id' from crontab and from the command line gives exactly the same result:
$ id
> uid=0(root) gid=0(root) groups=0(root),...
I ssh from a local machine to the machine running crond. I have an ssh key and credentials to ssh to the crond machine and to any other machine that the script connects to.
PS. Please do not ask/complain/comment that executing anything as root is bad/wrong/etc - it is not the purpose of this question.
keychain solves this in a painless way. It's in the repos for Debian/Ubuntu:
sudo apt-get install keychain
and perhaps for many other distros (it looks like it originated from Gentoo).
This program will start an ssh-agent if none is running, and provide shell scripts that can be sourced and connect the current shell to this particular ssh-agent.
For bash, with a private key named id_rsa, add the following to your .profile:
keychain --nogui id_rsa
This will start an ssh-agent and add the id_rsa key on the first login after reboot. If the key is passphrase-protected, it will also ask for the passphrase. No need to use unprotected keys anymore! For subsequent logins, it will recognize the agent and not ask for a passphrase again.
Also, add the following as a last line of your .bashrc:
. ~/.keychain/$HOSTNAME-sh
This will let the shell know where to reach the SSH agent managed by keychain. Make sure that .bashrc is sourced from .profile.
However, it seems that cron jobs still don't see this. As a remedy, include the line above in the crontab, just before your actual command:
* * * * * . ~/.keychain/$HOSTNAME-sh; your-actual-command
I am guessing that normally when you ssh from your local machine to the machine running crond, your private key is loaded in ssh-agent and forwarded over the connection. So when you execute the command from the command line, it finds your private key in ssh-agent and uses it to log in to the remote machine.
When crond executes the command, it does not have access to ssh-agent, so cannot use your private key.
You will have to create a new private key for root on the machine running crond, and copy the public part of it to the appropriate authorized_keys file on the remote machine that you want crond to log in to.
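A minimal sketch of that setup (hostnames, user, and key path are placeholders; the key here has no passphrase, which the next answer cautions against):
# on the machine running crond, as root
ssh-keygen -t ed25519 -N "" -f /root/.ssh/id_cron          # dedicated key with no passphrase
ssh-copy-id -i /root/.ssh/id_cron.pub user@remotehost      # append its public half to the remote authorized_keys
# then have the cron job's ssh use that key explicitly:
# ssh -i /root/.ssh/id_cron -o IdentitiesOnly=yes user@remotehost 'some command'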
Don't expose your SSH keys without passphrase. Use ssh-cron instead, which allows you to schedule tasks using SSH agents.
So I had a similar problem. I came here and saw various answers, but with some experimentation, here is how I got it to work with ssh keys with passphrases, ssh-agent, and cron.
First off, my ssh setup uses the following in my bash init script.
# JFD Added this for ssh
SSH_ENV=$HOME/.ssh/environment

# start the ssh-agent
function start_agent {
    echo "Initializing new SSH agent..."
    # spawn ssh-agent
    /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}"
    echo succeeded
    chmod 600 "${SSH_ENV}"
    . "${SSH_ENV}" > /dev/null
    /usr/bin/ssh-add
}

if [ -f "${SSH_ENV}" ]; then
    . "${SSH_ENV}" > /dev/null
    ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || {
        start_agent;
    }
else
    start_agent;
fi
When I login, I enter my passphrase once and then from then on it will use ssh-agent to authenticate me automatically.
The ssh-agent details are kept in .ssh/environment. Here is what that script will look like:
SSH_AUTH_SOCK=/tmp/ssh-v3Tbd2Hjw3n9/agent.2089; export SSH_AUTH_SOCK;
SSH_AGENT_PID=2091; export SSH_AGENT_PID;
#echo Agent pid 2091;
Regarding cron, you can set up a job as a regular user in various ways.
If you run crontab -e as the root user, it will set up a root cron job. If you run crontab -u davis -e, it will add a cron job as userid davis. Likewise, if you run crontab -e as user davis, it will create a cron job that runs as userid davis. This can be verified with the following entry:
30 * * * * /usr/bin/whoami
This will mail the result of whoami to user davis every 30 minutes. (I did crontab -e as user davis.)
If you try to see what keys are used as user davis, do this:
36 * * * * /usr/bin/ssh-add -l
It will fail; the log sent by mail will say:
To: davis@xxxx.net
Subject: Cron <davis@hostyyy> /usr/bin/ssh-add -l
Could not open a connection to your authentication agent.
The solution is to source the env script for ssh-agent above. Here is the resulting cron entry:
55 10 * * * . /home/davis/.ssh/environment; /home/davis/bin/domythingwhichusesgit.sh
This will run the script at 10:55. Notice the leading . in the entry; it sources the ssh-agent environment file, just like the bash init script above does.
Yesterday I had a similar problem...
I have a cron job on one server which starts some action on another server using ssh... The problem was user permissions and keys...
in crontab I had
* * * * * php /path/to/script/doSomeJob.php
And it simply didn't work (it didn't have permissions).
I tried to run the cron job as the specific user that connects to the other server:
* * * * * user php /path/to/script/doSomeJob.php
But with no effect.
Finally, I changed into the script's directory and then executed the PHP file, and it worked:
* * * * * cd /path/to/script/; php doSomeJob.php
