I'm relatively new to unix scripting, so apologies for the newbie question.
I need to create a script which will run permanently in the background and monitor for a file to arrive in an FTP landing directory, then copy it to a different directory and finally remove the file from the original directory.
The script is running on an Ubuntu box.
The landing directory is /home/vpntest
The file needs to be copied as /etc/ppp/chap-secrets
So far, I've got this
#/home/vpntest/FTP_file_copy.sh
if [ -f vvpn_azure.txt ]; then
cp vvpn_azure.txt /etc/ppp/chap-secrets
rm vvpn_azure.txt
fi
I can run this as root and it works, but only as a one-off (I need it to run permanently in the background and trigger each time a new file is received in the landing zone).
If I don't run it as root, I get permission issues (even if I run it from within the directory /home/vpntest).
Any help would be much appreciated.
Updated: crontab correction and extra info
One way to have a check-and-move process running in the background with root permissions is the "polling" approach: run your script from the root user's crontab.
Steps:
Revise your /home/vpntest/FTP_file_copy.sh:
#!/bin/bash
new_file=/home/vpntest/vvpn_azure.txt
if [ -f "$new_file" ]; then
mv "$new_file" /etc/ppp/chap-secrets
fi
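You will probably also need to make the script executable once (skip this if you already have), since the crontab entry below calls it directly:
chmod +x /home/vpntest/FTP_file_copy.sh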
Log out. Log in as the root user.
Add a cron task to run the script:
crontab -e
If this is a new machine and this is your first time running crontab, you may first be prompted to choose an editor; just pick one and continue into the editor.
The format is m h dom mon dow command, so if checking every 5 minutes is sufficiently frequent, do:
*/5 * * * * /home/vpntest/FTP_file_copy.sh
Save and close to apply.
Cron will now automatically run the script every 5 minutes in the background, moving the file whenever one is found.
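If you want to confirm the job is actually firing, a quick check (assuming Ubuntu's default rsyslog setup) is:
crontab -l                    # as root: lists the entry you just added
grep CRON /var/log/syslog     # shows each time cron ran the job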
Explanation
Root user, because you mentioned it only worked for you as root.
So we set this in the root user's crontab to have sufficient permissions.
man 5 crontab informs us:
Steps are also permitted after an asterisk, so if you want to say
'every two hours', just use '*/2'.
Thus we write */5 in the first column, which is the minutes column,
to set for "every 5 minutes".
FTP_file_copy.sh:
uses absolute paths, so it can be run from anywhere
rearranged so that one variable, new_file, can be reused
it is good practice to quote any value being checked inside your [ ] test
uses mv to overwrite the destination while removing the file from the source directory
Related
I have a bash script that I can execute with cd ~/Documents/Code/shhh/ && ./testy if I'm in any directory on my computer, and that successfully pushes to GitHub, which is what I want.
I'm trying to schedule a cron job to do this daily, so I ran crontab -e, which opens a nano editor, and then I put 30 20 * * * cd ~/Documents/Code/shhh/ && ./testy to run daily at 10:30pm and hit Ctrl+O, Enter and Ctrl+X. But still it didn't execute. When I type crontab -l it shows my command, and I have a "You have new mail." message when I open a new window. Still my command doesn't execute, even though it will when I run it from any other directory.
I think my crontab job is at /var/at/tmp, so I ran 30 20 * * * cd ../../../Users/squirrel/Documents/Code/shhh/ && ./testy, but still nothing, even though it does work when I write it out myself from that directory. Side note: I can't enter the tmp folder even after using sudo.
OK, when I type mail I see a lot of messages, and inside I get this error:
---------------Checking Status of 2---------------
[master 0c1fff8] hardyharhar
1 file changed, 1 insertion(+), 1 deletion(-)
fatal: could not read Username for 'https://github.com': Device not configured
When you open a file in nano it opens in insert mode by default (unlike vi, where you have to explicitly enter INSERT mode by pressing the i key). When you press CTRL+O it asks whether you want to save the changes to the opened file, e.g. File Name to Write: Input_file. If you press ENTER it saves and puts you back on the screen (in your Input_file) where you entered the new line. You can then press CTRL+X to come out of Input_file. Maybe you are stuck after saving it and want to come out; try this out once?
crontab -e does not edit the crontab file "live" at all -- changes are saved to the active file only after you save changes and exit the editor.
It also sounds like you may be using incorrect directory paths. The command in a crontab entry will generally be executed starting from the user's home directory. So if your home directory is /Users/squirrel, the command cd ../parent_directory/ will try to move to /Users/parent_directory. I suspect this is not what you want.
Finally, note that cron jobs run with a very minimal environment, without running most of your usual shell setup files (e.g. .bashrc). Most notably, if your script uses any commands that aren't in /bin or /usr/bin, you'll either need to use explicit full paths to them, or change the PATH variable to include the directories they're in.
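For example, a hedged sketch of a crontab that sets PATH explicitly and avoids relative paths (adjust the directories to wherever git and your script actually live):
PATH=/usr/local/bin:/usr/bin:/bin
30 20 * * * cd /Users/squirrel/Documents/Code/shhh && ./testy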
Over the weekend I decided to try out zsh and have a bit of fun with it. Unfortunately I'm an incredible newbie to shell scripting in general.
I have a folder containing one file, whose filename is a hash (4667e85581f80b6936f8811f0a7493c70eae4ee7) without a file extension.
What I would like to do is copy this file to another folder and rename it to "screensaver.png".
I've tried with the following code:
#!/usr/bin/zsh
KUVVA_CACHE="$HOME/Library/Containers/com.kuvva.Kuvva-Wallpapers/Data/Library/Application Support/Kuvva"
DEST_FOLDER="/Library/Desktop Pictures/Kuvva/$USERNAME/screensaver.png"
for wallpaper in ${KUVVA_CACHE}; do
cp -f ${wallpaper} ${DEST_FOLDER}
done
This returns the following error:
cp: /Users/Morten/Library/Containers/com.kuvva.Kuvva-Wallpapers/Data/Library/Application Support/Kuvva is a directory (not copied).
And when I try to echo the $wallpaper variable instead of doing the cp, it just echoes the folder path.
The name of the file changes every 6 hours, which is why I'm using the for loop. I never know what the name of the file will be, but I know that there is always only ONE file in the folder.
Any ideas how I can manage to do this? :)
Thanks a lot!
Morten
It should work with regular filename expansion (globbing).
KUVVA_CACHE="$HOME/Library/Containers/com.kuvva.Kuvva-Wallpapers/Data/Library/Application Support/Kuvva"
And then copy:
cp -f "${KUVVA_CACHE}"/* "${DEST_FOLDER}"
You can add the script to your crontab so it will be run at a certain interval. Edit it using 'crontab -e' and add
30 */3 * * * /location/of/your/script
This will run it every third hour. The first field is minutes; a star means "any". Exit the editor by pressing the Escape key, then Shift+:, type wq and press Enter. These are vi commands.
Don't forget to 'chmod 0755 file-name' the script so it becomes executable.
Here is the script.
#!/bin/zsh
KUVVA_CACHE="$HOME/Library/Containers/com.kuvva.Kuvva-Wallpapers/Data/Library/Application Support/Kuvva"
DEST_FOLDER="/Library/Desktop Pictures/Kuvva/$USERNAME/screensaver.png"
cp "${KUVVA_CACHE}/"* "${DEST_FOLDER}"
I created this simple script to allow the user to remove files created by the web server in his home directory without giving him "su". Both scripts are set with "chmod 4750".
The craziest thing is that they DID work and now they don't. Here are the scripts:
#!/bin/bash
# Ask for directory to delete
echo "Enter the file or directory you would like to delete, the assumed path is /home/user"
read DIRECTORY
rm -rf /home/user/"$DIRECTORY"
echo "Deleting /home/user/$DIRECTORY ..."
exit 0
2:
#!/bin/bash
# Reset permissions
echo "Resetting the ownership of the contents of /home/user to user."
chown -R user /home/user
exit 0
I will make them a little more advanced and have them work for multiple users, but right now I cannot even get the simple version to work. It works when run as root, of course. It used to work when run as user 'user', but now it doesn't. I get this:
user@dev:/home/user$ delete.sh
Enter the file or directory you would like to delete, the assumed path is /home/user/[your input]
test-dir
rm: cannot remove ‘/home/user/test-dir/test-file’: Permission denied
Deleting /home/user/test-dir ...
and
chown: changing ownership of ‘/home/user/test-dir’: Operation not permitted
What can possibly be the problem?
-rwsr-x--- 1 root user 291 Nov 6 05:23 delete.sh
-rwsr-x--- 1 root user 177 Nov 6 05:45 perms.sh
There is a pretty comprehensive answer at https://unix.stackexchange.com/questions/364/allow-setuid-on-shell-scripts
Bottom line is that there are two main points against it:
A race condition between when the kernel opens the file to find out which interpreter it should execute and when the interpreter opens the file to read the script.
Shell scripts which execute many external programs without proper checks can be fooled into executing the wrong program (e.g. via a malicious PATH), or can expand variables in a broken way (e.g. whitespace in variable values), and in general have less control over how well the external programs they execute handle the input.
Historically, there was a famous bug in the original Bourne shell (at least on 4.2BSD, which is where I saw this in action) which allowed anyone to get interactive root shell by creating a symlink called -i to a suid shell script. That's possibly the original trigger for this being prohibited.
EDIT: To answer "how do I fix it": configure sudo to allow users to execute only these scripts as root, and perhaps use a trick like the one in https://stackoverflow.com/a/4598126/164137 to find the original user's name and force the operation onto their own home directory, instead of letting them pass in arbitrary input (i.e. in their current state, nothing in the scripts you include in your question prevents user1 from executing the scripts and passing them user2's directory, or any directory for that matter).
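A minimal sketch of that sudo setup (the file names, script locations, and user name here are assumptions, not taken from your machine):
# /etc/sudoers.d/user-cleanup -- edit with: visudo -f /etc/sudoers.d/user-cleanup
user ALL=(root) NOPASSWD: /usr/local/bin/delete.sh, /usr/local/bin/perms.sh
Inside delete.sh you could then use SUDO_USER (which sudo sets to the invoking user) to confine the deletion to that user's own home directory:
#!/bin/bash
# hedged sketch: only ever deletes under the calling user's home
# (still trusts $DIRECTORY not to contain things like "..")
read -r -p "Enter the file or directory to delete under your home: " DIRECTORY
target="/home/${SUDO_USER:?run this via sudo}"
rm -rf -- "${target:?}/${DIRECTORY}"
The user then runs it as sudo /usr/local/bin/delete.sh, with no setuid bit needed on the script itself.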
I want to make a cron job that checks if a folder exists, and if it does, deletes all the contents of that folder. For example, I know that the following will delete the contents of my folder using cron:
0 * * * * cd home/docs/reports/;rm -r *
However, I realized that if the folder is removed (or the wrong file path is given), then instead of the contents of that folder being deleted, cd fails and files elsewhere on my operating system get deleted. To prevent this from happening (again) I want to check for the existence of the folder first, and then delete its contents. I want to do something like the following, but I'm not sure how to use a bash script with cron.
if [ -d "home/docs/reports/" ]; then
cd home/docs/reports/;rm -r *
fi
I'm new to bash and cron (in case it is not obvious).
I think cron uses /bin/sh to execute commands. sh is typically a subset of bash, and you're not doing anything bash-specific.
Execute the rm command only if the cd command succeeds:
0 * * * * cd home/docs/reports/ && rm -r *
Yes, it works. (Note that testing whether the directory exists is less reliable; it's possible that the directory exists but you can't cd into it, or it might cease to exist between the test and the cd command.)
But actually you don't need to use a compound command like that:
0 * * * * rm -r home/docs/reports/*
Still the && trick, and the corresponding || operator to execute a second command only if the first one fails, can be very useful for more complicated operations.
(Did you mean /home/docs rather than home/docs? The latter will be interpreted relative to your home directory.)
Though this worked OK when I tried it, use it at your own risk. Any time you combine rm -r with wildcards, there's a risk. If possible, test in a directory you're sure you don't care about. And you might consider using rm -rf if you want to be as sure as possible that everything is deleted. Finally, keep in mind that the * wildcard doesn't match files or directories whose names start with a dot (.).
#include <stddisclaimer.h>
EDIT :
The comments have given me a better understanding of what you're trying to do. These are files that users are going to download shortly after they're created (right?), so you don't want to delete anything less than, say, 5 minutes old.
Assuming you have GNU findutils, you can do something like this:
0 * * * * find /home/docs/reports/* -cmin +5 -delete 2>/dev/null
Using the -delete option to find means you're deleting files and/or directories one at a time, not deleting entire subtrees; the main difference is that an old directory with a new file in it will not be deleted. Applying -delete to a non-empty directory will fail with an error message.
Read the GNU find documentation (info find) for more information on the -cmin and -delete options. Note that -cmin operates on the time of the last status change of the file, not its creation time (Unix doesn't record file creation times). For your situation, it's likely to be the same.
(If you omit the /* on the path, it will delete the reports directory itself.)
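Before putting that in cron, a cautious dry run of the same find, printing instead of deleting, shows what would be removed:
find /home/docs/reports/* -cmin +5 -print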
Wrap your entire command (including the if logic) into a script.sh.
Specify #!/bin/bash at the top of your script.
Make it executable:
chmod +x script.sh
Then specify the full path of the script in your cron job.
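For example, a sketch of such a script, reusing the if from your question (the location /home/docs/clean_reports.sh is just an assumption; put it wherever you like):
#!/bin/bash
# delete the contents of the reports directory, but only if it still exists
if [ -d /home/docs/reports/ ]; then
    cd /home/docs/reports/ && rm -r -- *
fi
and then the cron entry becomes:
0 * * * * /home/docs/clean_reports.sh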
Easiest thing by far is to do SHELL=/bin/bash at the top of your crontab. Works for me.
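That is, something along these lines (a sketch that just reuses the command from the question):
SHELL=/bin/bash
0 * * * * if [ -d /home/docs/reports/ ]; then cd /home/docs/reports/ && rm -r *; fi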
We have application logs that are rotated on a size basis, i.e. each time the log reaches 1 MB, the log file changes from abc.log to abc.log.201110329656, and so on. When that happens, abc.log starts again from 0 MB. The log rotation happens roughly every 30 minutes.
We have a cron batch job running in the background against abc.log every 30 minutes to check for NullPointerException.
The problem is, sometimes the log is rotated before the next batch job can run, so a NullPointerException goes undetected because the batch job never got a chance to check that file.
Is there a way to solve this problem? No, I cannot change the behavior of the application logging, size, name or rotation. I cannot change the frequency of the cron interval, which is fixed at 30 minutes. However, I can freely change other things of batch job which is a bash script.
How can this be solved?
find(1) is your friend:
$ find /var/log/myapp -cmin -30 -type f -name 'abc.log*'
This gives you a list of all log files under /var/log/myapp touched in the last 30 minutes. Let your cron job script work on all these files.
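For example, the batch script could scan every recently-touched log for the exception in one pass (a hedged sketch; adjust the path and the exact exception string to whatever actually appears in your logs):
find /var/log/myapp -cmin -30 -type f -name 'abc.log*' -exec grep -l 'NullPointerException' {} +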
You've pretty much stated what the problem is:
You have a log that automatically rolls when the log gets to a certain size.
You have another job that runs against the log file, and the log file only.
You can't adjust the log roll, and you can't adjust when the check of the log happens.
So, if the log file changes, you are searching the wrong file. Can you do a check against all log files that you previously haven't checked with your batch script? Or are you only allowed to check the current log file?
One way to do this is to track when you last checked the log files, and then check all the log files that are newer than the last time you did a check. You can use a file called last.check for this. This file has no contents (the contents are irrelevant); you use the timestamp on this file to figure out when your check last ran. You can then use touch to change the timestamp once you've successfully checked the logs:
last_check="$log_dir/last.check"
if [ ! -e "$last_check" ]
then
echo "Error: $last_check doesn't exist"
exit 2
fi
find "$log_dir" -newer "$last_check" | while read -r file
do
[Whatever you do to check for nullpointerexception]
done
touch "$last_check"
You can create the original $last_check file using the touch command:
$ touch -t 201111301200.00 "$log_dir/last.check"  # touch date is in YYYYMMDDHHMM.SS format
Using a touch file provides a bit more flexibility in case things change; for example, you might decide in the future to run the crontab every hour instead of every 30 minutes.