I need to run mongodump on my database every day.
How do I automate this reasonably? Every day I want a new folder created with the timestamp and the dump data inside.
Thanks.
Look at
https://github.com/micahwedemeyer/automongobackup
Otherwise, use standard tools like cron or a shell script to wrap the mongodump call.
I have a super quick handy script. Sometimes I create a cron job for one of my databases.
ssh root@hostname "mongodump --db myDatabaseName --out /tmp/mongo-backup ; zip -r /tmp/mongo-backup$(date "+%Y.%m.%d").zip /tmp/mongo-backup ; rm -rf /tmp/mongo-backup" ;
scp root@hostname:/tmp/mongo-backup$(date "+%Y.%m.%d").zip ./
The above script does two things.
Runs mongodump and builds a ZIP file named like mongo-backup2017.03.02.zip
Downloads that file via SCP to your local machine.
You can use the cron scheduler to run a mongodump shell script every day. Or you can even use iCal by creating an event, editing it, and selecting Run Script.
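For a concrete starting point, a minimal daily-backup sketch could look like the following (the paths and database name are placeholders; adjust them to your setup):
#!/bin/sh
# daily-mongodump.sh -- dump the database into a new timestamped folder
BACKUP_ROOT=/var/backups/mongo          # placeholder destination
STAMP=$(date +%Y-%m-%d)                 # e.g. 2017-03-02
mkdir -p "$BACKUP_ROOT/$STAMP"
mongodump --db myDatabaseName --out "$BACKUP_ROOT/$STAMP"
Then a crontab entry runs it every day, for example at 02:00:
0 2 * * * /path/to/daily-mongodump.sh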
Good morning,
I am working on a link between my Laravel file server and a Synology backup. The command I am using relies on sudo to create and then disconnect the link. I want to know whether I would be able to run this command from the scheduler.
Thanks
You can use, for example, the following crontab entry (to run at midnight, every day):
0 0 * * * /path/to/your/command
This is an entry you can add to the crontab of the user that runs the command. Be aware that cron has a different environment, so you should set all the variables you need.
You may need to create a special shell script that pulls in your environment variables:
. ~/.bash_profile
/path/to/your/command
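The crontab entry then points at that wrapper script rather than at the command directly (the path is a placeholder):
0 0 * * * /path/to/your/wrapper-script.sh
Also note that if the command itself calls sudo, the user the job runs as must be able to use sudo without a password prompt, because cron cannot type one.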
I have a build server. I'm using the Azure Build Agent script. It's a shell script that will run continuously while the server is up. The problem is that I cannot seem to get it to run on startup. I've tried /etc/init.d and /etc/rc.local, and the agent is not being run; there is nothing about the build agent in the boot logs.
For /etc/init.d I created the script agent.sh which contains:
#!/bin/bash
sh ~/agent/run.sh
I gave it the proper permissions with chmod 755 agent.sh and moved it to /etc/init.d.
and for /etc/rc.local, I just appended the following
sh ~/agent/run.sh &
before exit 0.
What am I doing wrong?
EDIT: added examples.
EDIT 2: Just noticed that the init.d README says that shell scripts need to start with #!/bin/sh and not #!/bin/bash. Also used absolute path, but no change.
FINAL EDIT: As @ewrammer suggested, I used cron and it worked: crontab -e and then @reboot /home/user/agent/run.sh.
It is hard to see what is wrong without seeing exactly what you have done, but why not add it as a cron job with @reboot as the schedule? Then cron will run the script every time the computer starts.
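For example, a crontab entry along these lines (the paths are taken from your edit and are assumptions) also captures output so boot-time failures become visible:
@reboot /home/user/agent/run.sh >> /home/user/agent/boot.log 2>&1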
Just in case, using a process supervisor could be a good idea. In Ubuntu 14 you don't have systemd, but you can choose from other supervisors: https://en.wikipedia.org/wiki/Process_supervision.
If using immortal, after installing it, you just need to create a run.yml file in /etc/immortal with something like:
cmd: /path/to/command
log:
    file: /var/log/command.log
This will start your script/command on every start, besides ensuring your script/app is always up and running.
So I have a user, userA, on Ubuntu. When the machine starts, I want a script in /etc/rc0.d called startService to run.
From inside this script it will start several services using three scripts:
startServiceA.sh
startServiceB.sh
startServiceC.sh
I'd like those three scripts to be started from userA, not root. How would I achieve this?
You can use commands like: su, sudo, runuser
Be sure to check the man pages.
This site might also be able to help you:
http://www.cyberciti.biz/open-source/command-line-hacks/linux-run-command-as-different-user/
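For instance, a single command with su (the script path is a placeholder):
su - userA -c '/path/to/startServiceA.sh'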
You can run commands inside your startup script with:
sudo -u <username> ....
Note: you will need to preface every command in the file that you want to run as another user. I'd recommend making a variable at the top of your script like so:
SUDO="sudo -u <username>"
Then just do: $SUDO <command>
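Putting it together, a minimal sketch of the startup script could be (the script locations are assumptions):
#!/bin/sh
# startService -- run the three service scripts as userA instead of root
SUDO="sudo -u userA"
$SUDO /home/userA/startServiceA.sh
$SUDO /home/userA/startServiceB.sh
$SUDO /home/userA/startServiceC.sh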
I have a cron job that I want to execute every 5 minutes:
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /scr_temp/scheduleSpider.sh
In /var/spool/cron/crontabs/root
The cron should execute a shell script:
#!/bin/sh
if [ ! -f "sync.txt" ]; then
touch "sync.txt"
chmod 777 /scr_temp
curl someLink
fi
That works fine from the command line but not from cron. The cron job itself is started, but the script does not run.
I read about the PATH problem, but I don't really understand it. I set up a cron job that writes some environment data to a file. This is the output:
HOME=/root
LOGNAME=root
PATH=/usr/bin:/bin
SHELL=/bin/sh
If I execute the env command in command line I get following output for PATH
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
What path do I have to set in my shell script?
Your $PATH is fine; leave it alone. On Ubuntu, all the commands you're invoking (touch, chmod, curl) are in /bin and/or /usr/bin.
How did you set up the cron job? Did you run crontab some-file as root?
It seems that /etc/crontab is the usual mechanism for running cron commands as root. On my Ubuntu system, sudo crontab -l says no crontab for root. Running crontab as root, as you would for any non-root account, should be ok, but you might consider using /etc/crontab instead. Note that it uses a different syntax than an ordinary crontab, as explained in the comments at the top of /etc/crontab:
$ head -5 /etc/crontab
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
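For example, your job expressed in /etc/crontab syntax would carry the extra username field:
*/5 * * * * root /scr_temp/scheduleSpider.sh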
Run sudo crontab -l. Does it show your command?
Temporarily modify your script so it always produces some visible output. For example, add the following right after the #!/bin/sh:
echo "Running scheduleSpider.sh at \`date\`" >> /tmp/scheduleSpider.sh.log
and see what's in /tmp/scheduleSpider.sh.log after a few minutes. (You can set the command to run every minute so you don't have to wait as long for results.) If that works (it should), you can add more echo commands to your script to see in detail what it's doing.
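For instance, to run it every minute while testing:
* * * * * /scr_temp/scheduleSpider.sh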
It looks like your script is designed to run only once; it creates the sync.txt file to prevent it from running again. That could be the root (ahem) of your problem. Was that your intent? Did you mean to delete sync.txt after running the command, and just forgot to do it?
root's home directory on Ubuntu is /root. The first time your script runs, it should create /root/sync.txt. Does that file exist? If so, how old is it?
Note that curl someLink (assuming someLink is a valid URL) will just dump the content from the specified link to standard output (which will show up as e-mail to root). Was that your intent? Or did you just not show us the entire command?
First: you can substitute the first field with */5 (see man 5 crontab)
Second: have cron mail the output to your email address by entering MAILTO=your@email.address in your crontab. If the script has any output, it'll be mailed. Even without that, you may have a local mailbox in which you can find the cron output (usually $MAIL).
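For example, the top of the crontab could read:
MAILTO=your@email.address
*/5 * * * * /scr_temp/scheduleSpider.sh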
A better syntax for your cron entry is
*/5 * * * * /scr_temp/scheduleSpider.sh
Also, check the permissions on your scheduleSpider.sh file. Cron runs as a different user than the one you likely use to run the program interactively, so it may be that cron does not have permission. Try chmod 777 for now, just to check.
I suggest that you:
check that /scr_temp/scheduleSpider.sh has the executable bit set
set PATH properly inside your script, or use an absolute path to each command (/bin/touch instead of touch)
specify an absolute path to the sync.txt file, or compute it relative to the script (see the sketch below)
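Applied to your script, those three suggestions might look like this sketch (the assumption is that sync.txt is meant to live in /scr_temp):
#!/bin/sh
# scheduleSpider.sh -- explicit PATH and absolute file paths so it also works under cron
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PATH
if [ ! -f /scr_temp/sync.txt ]; then
    touch /scr_temp/sync.txt
    chmod 777 /scr_temp
    curl someLink
fi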
Have you added the command via crontab -e or just by editing the crontab file? You should use crontab -e to get it correctly updated.
Set the working directory in the cron script; it probably doesn't execute things where you think it does.
You should add /bin/sh before the absolute path of your script.
*/5 * * * * /bin/sh /scr_temp/scheduleSpider.sh
I have an SVN backup script on a Red Hat Linux box; let's call it svnbackup.sh.
It works fine when I run it in a terminal.
But when I put it into crontab, it will not bring svnserve back up, even though the data is backed up correctly.
What am I doing wrong?
killall svnserve
tar -zcf /svndir /backup/
svnserve -d -r /svndir
Usually, 'environment' is the problem in a cron job that works when run 'at the terminal' but not when it is run by cron. Most probably, your PATH is not set to include the directory where you keep svnserve.
Either use an absolute pathname for svnserve or set PATH appropriately in the script.
You can debug, in part, by adding a line such as:
env > /tmp/cron.job.env
to your script to see exactly how little environment is set when your cron job is run.
If you are trying to backup a live version of a repository, you probably should be using svnadmin hotcopy. That said, here are a few possibilities that come to mind as to what might be wrong:
You've put each of those statements as a separate entry in your crontab (I can't tell from the question).
The svnserve command takes a password, which cron, in turn, cannot supply.
The svnserve command blocks or hangs indefinitely and gets killed by cron.
The command svnserve is not in your PATH in cron.
Assuming that svnserve does not take a password, this might fix the problem:
#! /bin/bash
# backup_and_restart_svnserve.sh
export PATH=/bin:/sbin:/usr/bin:/usr/local/bin # set up your path here
killall svnserve && \
tar -zcf /svndir /backup/ && \
svnserve -d -r /svndir >/dev/null 2>&1 &
Now, use "backup_and_restart_svnserve.sh" as the script to execute. Since it runs in the background, it should hopefully continue running even when cron executes the next task.