I have a file, run.py currently on a working ec2 instance. I want to run it every hour.
Here is the cronjob I wrote:
0 * * * * python run.py
However, this doesn't work because cron needs the full filepath to run.py, and for the life of me I cannot figure out how to find that filepath. All the tutorials I have read just magically have it at the ready somehow.
Assuming a Linux installation, you could use "find" to find the path:
find / -name "run.py" -print
This will search the whole disk and might take a few minutes.
realpath run.py will print the full path if you're in the directory that run.py resides in. locate run.py will also find it (after a sudo updatedb, unless that has already been run), but may return lots of other entries that contain the string run.py.
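Once you have the path, use it in the crontab entry along with the interpreter's full path; for example, assuming run.py turned out to live at /home/ec2-user/run.py and python at /usr/bin/python (both hypothetical locations):
which python
# prints the interpreter's full path, e.g. /usr/bin/python;
# the crontab entry then becomes:
0 * * * * /usr/bin/python /home/ec2-user/run.py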
I'm relatively new to unix scripting, so apologies for the newbie question.
I need to create a script which will permanently run in the background, and monitor for a file to arrive in an FTP landing directory, then copy it to a different directory, and lastly remove the file from the original directory.
The script is running on a Ubuntu box.
The landing directory is /home/vpntest
The file needs to be copied as /etc/ppp/chap-secrets
So far, I've got this
#/home/vpntest/FTP_file_copy.sh
if [ -f vvpn_azure.txt ]; then
cp vvpn_azure.txt /etc/ppp/chap-secrets
rm vvpn_azure.txt
fi
I can run this as root, and it works, but only as a one-off (I need it to run permanently in the background, and trigger each time a new file is received in the landing zone).
If I don't run as root, I get issues with permissions (even if I run it from within the directory /home/vpntest.)
Any help would be much appreciated.
Updated: crontab correction and extra info
One way to have a check-and-move process running in the background with root permissions is the "polling" approach: run your script from the root user's crontab.
Steps:
Revise your /home/vpntest/FTP_file_copy.sh:
#!/bin/bash
new_file=/home/vpntest/vvpn_azure.txt
if [ -f "$new_file" ]; then
mv "$new_file" /etc/ppp/chap-secrets
fi
Log out. Log in as root user.
Add a cron task to run the script:
crontab -e
If this is a new machine and it's your first time running crontab, you may first be prompted to choose an editor; pick one and continue into the editor.
The format is m h dom mon dow command, so if checking every 5 minutes is sufficiently frequent, do:
*/5 * * * * /home/vpntest/FTP_file_copy.sh
Save and close to apply.
It will now automatically run the script every 5 minutes in the background, moving the file whenever one is found.
Explanation
Root user, because you mentioned it only worked for you as root: we set this in the root user's crontab so it runs with sufficient permissions.
man 5 crontab informs us:
Steps are also permitted after an asterisk, so if you want to say
'every two hours', just use '*/2'.
Thus we write */5 in the first column, which is the minutes column,
to set for "every 5 minutes".
FTP_file_copy.sh:
uses absolute paths, so it can run from anywhere
re-arranged so the one variable new_file can be re-used
quotes the value being checked in the [ ] test, which is good practice
uses mv to overwrite the destination while removing the file from the source directory
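If polling every 5 minutes isn't responsive enough, an event-driven alternative (not part of the answer above) is inotifywait from the inotify-tools package; a minimal sketch, assuming the same paths as above and that the watcher itself is launched as root (e.g. at boot):
#!/bin/bash
# Event-driven alternative to polling: inotifywait (from inotify-tools)
# prints the name of each file closed after writing in the landing directory.
inotifywait -m -e close_write --format '%f' /home/vpntest |
while read -r filename; do
    if [ "$filename" = "vvpn_azure.txt" ]; then
        mv "/home/vpntest/$filename" /etc/ppp/chap-secrets
    fi
done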
In Ubuntu, scripts can be executed with the following commands:
$ chmod +x manage.py
$ manage.py
However, on a Mac you need to use ./ in order to actually run the script, as follows:
$ chmod +x manage.py
$ ./manage.py
I would like to know what exactly ./ is (especially since both systems use bash by default) and whether there is a way to run scripts directly on a Mac.
It's because you (very sensibly) don't have . in your PATH environment variable. If you do, it becomes an attack vector for people to get you to execute their own code instead of real stuff.
For example, let's say your path is:
.:/usr/bin
so that commands will first be searched for in your current directory, then in /usr/bin.
Then another user creates an executable script file ls in their home directory which changes to your home directory and deletes all your files. Then they tell you they've got something interesting in their home directory. You run ls to see what they have, and your files are deleted. All because it ran ls from your current directory first.
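A harmless way to see the shadowing for yourself (the paths here are hypothetical):
mkdir /tmp/demo && cd /tmp/demo
printf '#!/bin/sh\necho "this is NOT the real ls"\n' > ls
chmod +x ls
env PATH=.:/bin:/usr/bin ls    # finds ./ls first and prints the message
env PATH=/bin:/usr/bin ls      # finds the real ls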
This is a particular favorite attack vector against naive system admins.
To be honest, on my home machines, I don't worry too much, since I'm the only user and I'm not prone to downloading stuff I don't trust. So I usually add . to my path for convenience, but usually at the end so it doesn't get in the way of my more regular commands.
When you execute a command, that file (script or binary) needs to be found by the system. That is done by listing the directories to search in the PATH environment variable. So if it works on Ubuntu, it means PATH includes '.' (the current directory). If you want the same behavior on a Mac, put something like export PATH="$PATH:." in your .bashrc (assuming you are using bash).
I'm currently writing a bash script wherein a portion of it needs to be able to look at a bunch of directory hierarchies and spit out two text files each containing a list of the directories and all the files, respectively, in the given directory.
As I understand the following should do the trick:
find $directory -type d >> alldirs.txt
where directory is assigned different directory path names since I'm supposed to check a number of them.
I have a for loop that iterates through my list of directories and uses the above command to complete my task. The command gets to a certain point and then gets stuck. When I investigated, it seemed to reach a directory that's empty and hang, and/or it would start looking for directories that don't exist in the first place and then hang. Any ideas?
Is there something I'm missing? Or did I understand how that works incorrectly? Is there a better alternative?
You haven't said what $directory is set to. If it doesn't name an existing directory, find will complain: "find: $directory: No such file or directory".
For example:
find . -iname $directory -type d >> alldirs.txt
Note: The above will start searching in the current directory, specified by the "."
Change it to whatever directory you wish e.g. /home/mys.celeste
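For the two lists described in the question, a quoted loop along these lines (the directory names are hypothetical) also avoids surprises from empty or space-containing values:
for directory in /home/user/projects /var/www; do
    find "$directory" -type d >> alldirs.txt
    find "$directory" -type f >> allfiles.txt
done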
I had a similar issue: find / -name blahblah got stuck somewhere.
When debugging, I tried searching each of the root-level directories (/tmp, /var, /sbin, /usr and so on) and found that it was stuck on /media.
In /media I had a RHEL repo mounted. After unmounting it, find continued to work normally.
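If find has to start at / but should skip mount points like the /media repo above, GNU find can stay on one filesystem with -xdev, or prune a specific path:
find / -xdev -name blahblah
find / -path /media -prune -o -name blahblah -print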
I want to make a cron job that checks if a folder exists, and it if does to delete all the contents of that folder. For example, I know that the following will delete the contents of my folder in using cron:
0 * * * * cd home/docs/reports/;rm -r *
However, I realized that if the folder is removed (or the wrong file path is given) instead of the contents of that folder being deleted, cd fails and all files are deleted on my operating system. To prevent this from happening (again) I want to check for the existence of the folder first, and then to delete the contents. I want to do something like the following, but I'm not sure how to use a bash script with cron.
if [ -d "home/docs/reports/" ]; then
cd home/docs/reports/;rm -r *
fi
I'm new to bash and cron (in case it is not obvious).
I think cron uses /bin/sh to execute commands. sh is typically a subset of bash, and you're not doing anything bash-specific.
Execute the rm command only if the cd command succeeds:
0 * * * * cd home/docs/reports/ && rm -r *
Yes, I've tried it and it works. (Note that testing whether the directory exists first is less reliable; it's possible that the directory exists but you can't cd into it, or it might cease to exist between the test and the cd command.)
But actually you don't need to use a compound command like that:
0 * * * * rm -r home/docs/reports/*
Still the && trick, and the corresponding || operator to execute a second command only if the first one fails, can be very useful for more complicated operations.
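For instance (the log path is hypothetical), this reports a failure instead of silently proceeding:
cd /home/docs/reports/ || echo "$(date): reports directory missing" >> /tmp/cron_errors.log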
(Did you mean /home/docs rather than home/docs? The latter will be interpreted relative to your home directory.)
Though this worked ok when I tried it, use it at your own risk. Any time you combine rm -r with wildcards, there's a risk. If possible, test in a directory you're sure you don't care about. And you might consider using rm -rf if you want to be as sure as possible that everything is deleted. Finally, keep in mind that the * wildcard doesn't match files or directories whose names start with ..
#include <stddisclaimer.h>
EDIT :
The comments have given me a better understanding of what you're trying to do. These are files that users are going to download shortly after they're created (right?), so you don't want to delete anything less than, say, 5 minutes old.
Assuming you have GNU findutils, you can do something like this:
0 * * * * find /home/docs/reports/* -cmin +5 -delete 2>/dev/null
Using the -delete option to find means you're deleting files and/or directories one at a time, not deleting entire subtrees; the main difference is that an old directory with a new file in it will not be deleted. Applying -delete to a non-empty directory will fail with an error message.
Read the GNU find documentation (info find) for more information on the -cmin and -delete options. Note that -cmin operates on the time of the last status change of the file, not its creation time (Unix doesn't record file creation times). For your situation, it's likely to be the same.
(If you omit the /* on the path, it will delete the reports directory itself.)
Wrap your entire command (including the if logic) into a script.sh.
Specify #!/bin/bash at the top of your script.
Make it executable:
chmod +x script.sh
Then specify the full path of the script in your cron job.
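A minimal sketch of such a wrapper (the script's name and location are hypothetical):
#!/bin/bash
# /home/docs/cleanup_reports.sh: remove the folder's contents only if it exists
if [ -d /home/docs/reports/ ]; then
    cd /home/docs/reports/ && rm -r ./*
fi
and the cron entry:
0 * * * * /home/docs/cleanup_reports.sh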
Easiest thing by far is to do SHELL=/bin/bash at the top of your crontab. Works for me.
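For example, with bash as cron's shell, the if logic from the question can go straight into the crontab entry:
SHELL=/bin/bash
0 * * * * if [ -d /home/docs/reports/ ]; then cd /home/docs/reports/ && rm -r *; fi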
I have two ruby script cron jobs that I'm trying to run under Ubuntu 10.04.2 LTS on an AWS EC2 instance. They are both failing silently - I see them being run in /var/log/syslog, but there's no resulting files, and piping the output into a file creates no result.
The scripts are based on the ruby sql backups here:
http://pauldowman.com/2009/02/08/mysql-s3-backup/
(It's a full backup of the db and an incremental bin-log output. Not sure that matters.)
The script works fine if run from the command line by either root or another user: it runs, and I see the files appearing in the S3 repo.
I've tested cron with a simple "touch ~/foo" type entry and that worked fine.
My cron entry under root is this:
*/5 * * * * /home/ubuntu/mysql_s3_backup/incremental_backup.rb
Appreciate any help or debugging suggestions. My thought is that some of the ruby library dependencies might not be available when cron is running the job. But I don't understand why I can't seem to get any output at all returned to me. Very frustrating. Thanks.
The full_backup.rb script you link to contains this:
cmd = "mysqldump --quick --single-transaction ...
#...
run(cmd)
Notice that there is no full path on mysqldump. Cron jobs generally run with a very limited PATH in their environment and I'd guess that mysqldump isn't in that limited PATH. You can try setting your own PATH in your crontab:
PATH='/bin:/usr/bin:/whatever/else/you/need'
*/5 * * * * /home/ubuntu/mysql_s3_backup/incremental_backup.rb
Or in your Ruby script:
ENV['PATH'] = '/bin:/usr/bin:/whatever/else/you/need'
Or specify the full path to mysqldump (and any other external executables) in your backup script.
I'd go with one of the latter two options (i.e. specify ENV['PATH'] in your script or use full paths to executables), as that reduces your dependence on external factors and also helps avoid issues with users who have their own versions of commands that you need in their PATH.
A bit of error checking and handling on the run call might also be of use.
If any of the necessary Ruby libraries weren't accessible (either due to permissions or path issues) then you'd probably get complaints from the script.
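If the job still fails silently, redirecting both stdout and stderr from the crontab entry itself (the log path here is hypothetical) usually captures whatever complaint is being lost:
*/5 * * * * /home/ubuntu/mysql_s3_backup/incremental_backup.rb >> /tmp/incremental_backup.log 2>&1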