OS X + Crontab: How do you run SCP via cron?

This works fine when I run it by hand:
#!/bin/bash
eval `ssh-agent`
ssh-add
/usr/bin/scp me@server:~/file ./
exit 0
However, when cron runs it, the file is never touched. I know the ssh keys are right: replace that scp with an ssh and it runs fine.

You might also consider using rsync for this instead of scp'ing the file in a cron script.
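As a rough sketch, the rsync equivalent of that scp line might be (the explicit key path is an assumption, not from the original script):
# rsync only transfers the file when it has changed, and -e lets you
# name a key explicitly so no ssh-agent is needed under cron
/usr/bin/rsync -az -e "ssh -i /Users/me/.ssh/id_rsa" me@server:~/file ./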

cron jobs are often run as root; have you tested this script as root, to ensure the ssh keys are in the location root looks for? Or are your ssh keys in your user profile?
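A quick way to check is to run one ssh command as root by hand; if root's keys are not set up, this is where it fails (the test command itself is just an illustration):
# test whether root can reach the server non-interactively
sudo ssh me@server echo ok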

Not sure what version you are running, but in Snow Leopard cron jobs run as the user (check with whoami in your cron'd script) ... at least when the user is currently logged in.
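An easy way to confirm is to log the effective user from inside the cron'd script (the log path is arbitrary):
# record which user cron actually runs the script as
whoami >> /tmp/cron_user.log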

Related

Run a shell script on startup (not login) on Ubuntu 14.04

I have a build server. I'm using the Azure Build Agent script, a shell script that runs continuously while the server is up. The problem is that I cannot seem to get it to run on startup. I've tried /etc/init.d and /etc/rc.local, and the agent is not being run; there is nothing concerning the build agent in the boot logs.
For /etc/init.d I created the script agent.sh which contains:
#!/bin/bash
sh ~/agent/run.sh
Gave it the proper permissions with chmod 755 agent.sh and moved it to /etc/init.d.
and for /etc/rc.local, I just appended the following
sh ~/agent/run.sh &
before exit 0.
What am I doing wrong?
EDIT: added examples.
EDIT 2: Just noticed that the init.d README says that shell scripts need to start with #!/bin/sh and not #!/bin/bash. Also used an absolute path, but no change.
FINAL EDIT: As @ewrammer suggested, I used cron and it worked: crontab -e and then @reboot /home/user/agent/run.sh.
It is hard to see what is wrong when you have not posted what you have done, but why not add it as a cron job with @reboot as the pattern? Then cron will run the script every time the computer starts.
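A concrete entry, using the path from the question and capturing output so failures are visible (the log path is an assumption):
# run once at boot; keep stdout and stderr for debugging
@reboot /home/user/agent/run.sh >> /home/user/agent/boot.log 2>&1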
Just in case, using a process supervisor could be a good idea. In Ubuntu 14 you don't have systemd, but you can choose from others: https://en.wikipedia.org/wiki/Process_supervision.
If using immortal, after installing it, you just need to create a run.yml file in /etc/immortal with something like:
cmd: /path/to/command
log:
    file: /var/log/command.log
This will start your script/command on every boot, and it also ensures your script/app is always up and running.
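Applied to the build agent from the question above, the config might look like this (paths assumed):
cmd: /home/user/agent/run.sh
log:
    file: /var/log/agent.log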

Running Docker commands included in a shell script alongside other Linux commands and switching users

Using the Linux terminal, I run bash scripts (.sh files) containing sequences of commands I want to execute.
The issue is that I am unable to run a Docker command from within my shell script. I can run this Docker command when it's typed directly at the terminal with root privileges but not when I include it in the shell script file.
My script, executed as a general user from the command line, looks like this:
#!/usr/bin/env bash
cd /home/user/docker_backup
# remove /home/user/docker_backup/data
rm -rf data
# Switch to root privileges; my system is set to run Docker only as root
su
# Copy a folder from Docker container to host OS
docker cp <container-name>:/home/user/data /home/user/docker_backup
# More general user commands
cd ..
My code only runs until the su line above. After I enter the root password, nothing happens. If I type exit, I get permission errors, meaning the docker cp command failed.
This is my desired solution:
After thorough research, as I wanted to run my script as a general user and only run certain commands as root when necessary, I came up with a solution that works.
My script now looks like this (run with sh script_name.sh):
#!/usr/bin/env bash
cd /home/user/docker_backup
# remove /home/user/docker_backup/data
rm -rf data
# Switch to root privileges; my system is set to run Docker only as root
su - root -c "docker cp <container-name>:/home/user/data /home/user/docker_backup"
# More general user commands
cd ..
I run the shell script as a general user. For commands that require root privileges, I use su - root -c "<command>". The terminal prompts for the root password, executes the quoted command as root, and the shell then continues as the general user.
Actually posting this as an answer:
You switch your current user to root during the script, but the script was executed by your own user.
So the docker cp command will also be executed as your own user, but you will be logged into the root account.
This results in you not seeing the output of docker cp (which might have given you insight into why it was not working; I suspect insufficient privileges).
A solution to this is to either use sudo before docker cp, start the script as root, or add your user to the "docker" group, which authorizes your user to run docker commands.
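The group route is usually the least intrusive; a sketch of the usual steps (log out and back in afterwards so the group change takes effect):
# add the current user to the docker group
sudo usermod -aG docker $USER
# after re-logging in, this should work without sudo
docker ps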
I had a similar issue where the docker commands ran fine in the terminal, but the same commands would not run when I put them into a bash script. It came down to two things.
The docker commands need to be run with elevated privileges, that is, with sudo (e.g. sudo docker ps works but docker ps won't). Alternatively, you can add the current user to the docker group so that sudo is not needed with each docker command; please visit this link and follow section 2 to do that.
Run the script in the correct way (a combined sketch follows this list):
The script should start with #!/bin/bash. It is a shebang that every script requires.
Save the file without the .sh extension.
Give the script execute permission with chmod 777 script_name.
Run the script with bash script_name.
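Putting those points together, a minimal version of the backup script from the question might look like this (container name and paths as in the original, sudo used per the first point):
#!/bin/bash
# work from the backup directory, bail out if it is missing
cd /home/user/docker_backup || exit 1
# remove the previous copy of the data
rm -rf data
# docker needs elevated privileges on this system
sudo docker cp <container-name>:/home/user/data /home/user/docker_backup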

Issue with running cgminer as a cron job - Ubuntu

I have installed cgminer on my machine and was able to start it without any issues when running ./cgminer in a terminal.
But for a specific feature, I am trying to invoke cgminer from a shell script via a cron job.
1) The cgminer command executes correctly when I run the shell script.
2) But it is not executing when I set the shell script as a cron job.
Below is the content in the shell script.
#!/bin/bash
export DISPLAY=:0.0
/root/test/cgminer/cgminer/cgminer >> /home/balan/temp/script/log.txt;
Please suggest.
Are you running this as a cron job under the root user or the balan user? Since the cgminer binary is under /root, it probably needs to run as root.
If you are running as root, try redirecting the error output and see what errors are being logged:
/root/test/cgminer/cgminer/cgminer >> /home/balan/temp/script/log.txt 2>/home/balan/temp/script/error_log.txt;
Solution:
The TERM variable has to be set as below, and the respective host, username, and password have to be given for cgminer to execute from a cron job.
export TERM=xterm
#Change the below cgminer path - IMPORTANT
/root/test/cgminer/cgminer/cgminer -o $host -u $user -p $password
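Wrapped into the cron'd script, the whole fix might look like this (the pool host, user, and password values are placeholders, not from the original post):
#!/bin/bash
# cgminer draws a curses interface, so it needs a terminal type under cron
export TERM=xterm
# connection details - placeholders, substitute your own
host=stratum+tcp://pool.example.com:3333
user=worker1
password=x
# Change the below cgminer path - IMPORTANT
/root/test/cgminer/cgminer/cgminer -o $host -u $user -p $password >> /home/balan/temp/script/log.txt 2>&1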

Run a bash script on startup, before login as a user

I am trying to have my Ubuntu 12.04 LTS server run a bash script on startup, prior to login, that starts a Minecraft server as the user minecraft. I can have it run as root by placing the following in /etc/rc.local:
bash /path/to/script/script.sh
which runs the script as root. I have tried the following in /etc/rc.local:
su -c `bash /path/to/script/script.sh` minecraft
but to no avail. Can anyone tell me what I am doing wrong or should be doing instead? The first line of my script is
#!/bin/bash
in case it is important. Thanks much!
Try
su minecraft -c '/bin/bash /path/to/script/script.sh &'
The user should be the first argument to su.
You should use quotes, not backticks, for the command argument (-c).
You may want to consider using su -l minecraft to have the script run in an environment similar to the one the user minecraft would get by logging in directly.
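Combining the two, the rc.local line would become:
# -l gives minecraft a login environment; the trailing & backgrounds the
# script so su returns and rc.local can continue
su -l minecraft -c '/bin/bash /path/to/script/script.sh &'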
Give this a shot and let me know if it works.

Why does my svn backup shell script work fine in the terminal but fail in crontab?

I have an svn backup script on a Red Hat Linux machine; let's call it svnbackup.sh.
It works fine when I run it in a terminal.
But when I put it into crontab, it will not bring svnserve back, even though the data is backed up correctly.
What am I doing wrong?
killall svnserve
tar -zcf /svndir /backup/
svnserve -d -r /svndir
Usually, 'environment' is the problem in a cron job that works when run 'at the terminal' but not when it is run by cron. Most probably, your PATH is not set to include the directory where you keep svnserve.
Either use an absolute pathname for svnserve or set PATH appropriately in the script.
You can debug, in part, by adding a line such as:
env > /tmp/cron.job.env
to your script to see exactly how little environment is set when your cron job is run.
If you are trying to back up a live version of a repository, you should probably be using svnadmin hotcopy. That said, here are a few possibilities that come to mind as to what might be wrong:
You've put each of those statements as separate entries in your crontab (can't tell from the Q).
The svnserve command takes a password, which cron, in turn, cannot supply.
The svnserve command blocks or hangs indefinitely and gets killed by cron.
The command svnserve is not in your PATH in cron.
Assuming that svnserve does not take a password, this might fix the problem:
#! /bin/bash
# backup_and_restart_svnserve.sh
export PATH=/bin:/sbin:/usr/bin:/usr/local/bin # set up your path here
# NOTE: the tar line in the question appears to have its arguments reversed;
# this assumes the repository lives in /svndir and the archive goes to /backup
killall svnserve && \
tar -zcf /backup/svndir.tar.gz /svndir && \
svnserve -d -r /svndir >/dev/null 2>&1 &
Now, use "backup_and_restart_svnserve.sh" as the script to execute. Since it runs in the background, it should hopefully continue running even when cron executes the next task.
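A crontab entry for it might then be (schedule and install path are assumptions):
# nightly at 02:00, with output kept for debugging
0 2 * * * /usr/local/bin/backup_and_restart_svnserve.sh >> /var/log/svnbackup.log 2>&1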
