My ruby file is like this.
`mkdir #{HOST} -p`
It works fine when I run it by hand: ruby mycode.rb
But in a cron job
0 * * * * ruby ~/backup.rb >> backup.log
It creates a folder named -p. Why?
The #1 problem that anybody runs into with cron jobs is that, usually for security reasons, cron jobs run with a minimal $PATH. So your cron job may run with a different path than when you run the script from the shell, which means a different mkdir command may get called within the cron job, one that interprets its arguments differently.
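If you want to see exactly what environment your cron jobs get, one trick is to add a throwaway entry that dumps it to a file and inspect the result (the output path here is just an example; remove the entry afterwards):

* * * * * env > /tmp/cron-env.txt 2>&1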
Usually, the first filename argument stops option processing, and everything that comes after it is treated as a filename. So, since #{HOST} is a filename, everything after it is also treated as a filename, which means the call is interpreted as "make two directories, one named #{HOST} and the other named -p". If you look, for example, at the POSIX specification of mkdir, it is simply illegal to pass an option after the filenames.
Another possibility is that for some reason #{HOST} is empty when running under cron. Then the whole call expands to mkdir -p, which again, depending on your implementation of mkdir, might be interpreted as "create one directory named -p".
It is not quite clear to me why you are passing the options and operands in the wrong order, instead of mkdir -p #{HOST}. It's also not clear to me why you use the shell at all, instead of just FileUtils.mkdir_p(HOST).
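For comparison, here is what the two orderings look like at the shell level; this is a sketch, and the HOST value is a made-up placeholder:

HOST=myhost.example.com
mkdir -p "$HOST"    # option before operand: works everywhere
mkdir "$HOST" -p    # only works where mkdir reorders arguments (e.g. GNU coreutils)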
Another problem I've seen is that the #! script line fails when /usr/bin/env is used. For instance:
#!/usr/bin/env ruby
doesn't find ruby when running under cron. You have to use
#!/usr/local/bin/ruby
or the equivalent on your platform.
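If you're unsure where the interpreter lives on a given machine, look it up once from an interactive shell and hard-code the result (the printed path is just an example):

command -v ruby    # prints e.g. /usr/local/bin/ruby; use that in the #! line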
Related
I noticed that my script was ignoring my positional arguments in old terminal tabs, but working on recently created ones, so I decided to reduce it to the following:
TAG=test
while getopts 't:' c
do
case $c in
t)
TAG=$OPTARG
;;
esac
done
echo $TAG
And running the script I get:
~ source my_script
test
~ source my_script -t "test2"
test2
~ source my_script -t "test2"
test
I thought it could be that c was a special variable used elsewhere, but after changing it to other names I had the exact same problem. I also tried adding a .sh extension to the file to see if that was the problem, but nothing worked.
Am I doing something wrong? And why does it work the first time, but not on subsequent attempts?
I am on macOS and I use zsh.
Thank you very much.
The problem is that you're using source to run the script (the . command does the same thing). This makes it run in your current (interactive) shell (rather than a subprocess, like scripts normally do). This means it uses the same variables as the current shell, which is necessary if you want it to change those variables, but it can also have weird effects if you're not careful.
In this case, the problem is that getopts uses the variable OPTIND to keep track of where it is in the argument list (so it doesn't process the same argument twice). The first time you run the script with -t test2, getopts processes those arguments and leaves OPTIND set to 3 (meaning it has already handled the first two arguments, "-t" and "test2"). The second time you run it with options, it sees that OPTIND is set to 3, thinks it has already processed both arguments, and just exits the loop.
One option is to add unset OPTIND before the while getopts loop, to reset the count and make it start from the beginning each time.
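A minimal sketch of the reduced script with that fix applied:

unset OPTIND    # reset getopts' position; sourcing reuses the current shell's variables
TAG=test
while getopts 't:' c
do
    case $c in
    t)
        TAG=$OPTARG
        ;;
    esac
done
echo $TAG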
But unless there's some reason for this script to run in the current shell, it'd be better to make it a standard shell script and have it run as a subprocess. To do this:
Add a "shebang" line as the first line of the script. To make the script run in bash, that'd be either #!/bin/bash or #!/usr/bin/env bash. For zsh, use #!/bin/zsh or #!/usr/bin/env zsh. Since the script runs in a separate shell process, the you can run bash scripts from zsh or zsh scripts from bash, or whatever.
Add execute permission to the script file with chmod +x my_script (or whatever the file's actual name is).
Run the script with ./my_script (note the lack of a space between . and /), or by giving the full path to the script, or by putting the script in some directory in your PATH (the directories that're automatically searched for commands) and just running my_script. Do NOT run it with the bash, sh, zsh etc commands; these override the shebang and therefore can cause confusion.
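Put together, the steps look roughly like this (my_script stands in for your script's real name):

chmod +x my_script      # one-time: make the file executable
./my_script -t test2    # each run gets its own shell process, so OPTIND starts fresh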
Note: adding ".sh" to the filename is not recommended; it does nothing useful, and makes the script less convenient to run since you have to type in the extension every time you run it.
Also, a couple of recommendations: there are a bunch of all-caps variable names with special meanings (like PATH and OPTIND), so unless you want one of those special meanings, it's best to use lower- or mixed-case variable names (e.g. tag instead of TAG). Also, double-quoting variable references (e.g. echo "$tag" instead of echo $tag) avoids a lot of weird parsing headaches. Run your scripts through shellcheck.net; it's good at spotting common mistakes like this.
I’m having some serious issues with trying to get the proper format for my bash script to be able to run successfully in crontab. The bash script runs successfully when manually prompted from the command line.
Here is the bash script in question (the actual parameters themselves [$1 & $2] have been manually placed in the script):
#!/bin/bash
# Usage: ./s3DeleteByDateVirginia "bucketname" "file type"
past=$(date +"%F" -d "60 days ago")
aws s3api list-objects --bucket $1 --query 'Contents[?LastModified<=`'$past'`][].{Key:Key}' | grep $2 | while read -r line
do
fileName=`echo $line`
aws s3api delete-object --bucket $1 --key "$fileName"
done;
The script is in this bash file: /home/ubuntu/s3DeleteByDateVirginiaSoco1
To set up the script I use: sudo crontab -e
Now I see people online saying you need to give it the proper path, which doesn't make much sense to me, especially since I'm seeing a number of variations of it online. It generally consists of this format: SHELL=/bin/sh PATH=/bin:/sbin:/usr/bin:/usr/sbin, but I don't know where to put it.
According to the syslog, the cron part itself is working, but the script doesn't execute:
In addition to this the script has all of the proper permissions to run.
All in all, I'm more confused than when I started, and I'm not seeing much documentation on how crontab works.
Crontab in question:
Additional Edits based on user's suggestions:
Here's my polished script:
Here's the crontab line:
# m h dom mon dow command
PATH=/usr/local/bin:/usr/bin:/bin:/root/.local/bin/aws
33 20 * * * /home/ubuntu/s3DeleteByDateSoco1
Updated syslog:
Ok, I see several problems here. First, you need to put this in the crontab file for the user you want the script to run as. If you want to run it under your user account, use just crontab -e instead of sudo crontab -e (with sudo, it edits the root user's crontab file).
Second, you need to use the correct path & name for the script; it looks like it's /home/ubuntu/s3DeleteByDateVirginiaSoco1, so that's what should be in the crontab entry. Don't add ".sh" if it's not actually part of the filename. It also looks like you tried adding "root" in front of the path; don't do that either, since crontab will try to execute "root" as a command, and it'll fail. bash -c doesn't hurt, but it doesn't help at all either, so don't use it.
Third, the PATH needs to be set appropriately for the executables you use in the script. By default, cron jobs execute with a PATH of just "/usr/bin:/bin", so when you use a command like aws, it'll look for it as /usr/bin/aws, not find it, look for it as /bin/aws, not find it either, and give the error "aws: command not found" that you see in the last log entry. First, you need to find out where aws (and any other programs your script depends on) are stored; you can use which aws in your regular shell to find this out. Suppose it's /usr/local/bin/aws. Then you can either:
Add a line like PATH=/usr/local/bin:/usr/bin:/bin (with maybe any other directories you think are appropriate) to the crontab file, before the line that says to run your script.
Add a line like PATH=/usr/local/bin:/usr/bin:/bin (with maybe any other directories you think are appropriate) to your script file, before the lines that use aws.
In your script, use an explicit path every time you want to run aws (something like /usr/local/bin/aws s3api list-objects ...)
You can use any (or all) of the above, but you must use at least one or it won't be able to find the aws command (or anything else that isn't in the set of core commands that come with the OS).
Fourth, I don't see where $1 and $2 are supplied. You say they've been manually placed in the script, but I don't know what you mean by that. Since the script expects them as parameters, you need to specify them in the crontab file (i.e. the command in crontab should be something like /home/ubuntu/s3DeleteByDateVirginiaSoco1 bucketname pattern).
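Putting the PATH and parameter points together, the crontab entry might look something like this; the aws directory, bucket name, and pattern are placeholders you'd replace with your real values:

PATH=/usr/local/bin:/usr/bin:/bin
33 20 * * * /home/ubuntu/s3DeleteByDateVirginiaSoco1 bucketname pattern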
Fifth, the script itself doesn't follow good quoting conventions. In general, all variable references should be in double-quotes. For example, use grep "$2" instead of grep $2. Without the double-quotes, variables that contain spaces or certain shell metacharacters can cause weird parsing problems.
Finally, why do you use fileName=`echo $line`? This mostly just copies the value of $line into the variable fileName, but it can have those weird parsing problems I mentioned in the previous point. If you want to copy a variable reliably, just use fileName="$line" (or fileName=$line -- this is one of the few cases where it's safe to leave the double-quotes off).
BTW, shellcheck.net is good at spotting common problems like bad quoting; I recommend running your scripts through it to see what it finds.
I have a C program which uses argv[0] inside the program. I understand that argv[0] is the path of the program being executed. I want to pass a custom string as argv[0] to the program instead of its program name. Is there a way to do this in shell?
I read about the exec command, but I am unsure about its usage. help exec says I have to pass exec -a <string>
Is there any other way of doing this?
Is there any escape method I need to use if I am passing special characters or the path of another file with the exec command?
To clarify the problem:
I am running a program prog1. To enter a particular section of the program I have to send it a SIGALRM. This step itself was difficult, as I had to create a race condition to send the signal right when the program starts.
while true;do ./prog1 2; done & while true; do killall -14 prog1; done
The above while loops help me enter the relevant part of the program, and that part uses argv[0] in a system call of the form system("echo something argv[0]").
Is there a way to modify the above while loop and put ;/bin/myprogram in place of argv[0]?
Bottom line: I need /bin/myprogram to be executed with the privileges of prog1, and I need its output.
exec -a is precisely the way to solve this problem.
There are no restrictions that I know of on the string passed as an argument to exec. Normal shell quoting should be sufficient to pass anything you want (as long as it doesn't contain embedded NUL bytes, of course).
The problem with exec is that it replaces the current shell with the named command. If you just want to run a command, you need to spawn a new shell to be replaced; that is as simple as surrounding the command with parentheses:
$ ( exec -a '; /bin/myprogram' bash -c 'echo "$0"'; )
; /bin/myprogram
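Applied to the loop from your question, that might look roughly like this (a sketch only; note that killall matches on the executable's name rather than argv[0], so the signal loop should still find prog1):

while true; do ( exec -a ';/bin/myprogram' ./prog1 2 ); done &
while true; do killall -14 prog1; done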
The brute-force method would be to create your own symlink and run the command that way.
ln -s /path/to/mycommand /tmp/newname
/tmp/newname arg1
rm /tmp/newname
The main problem with this is finding a secure, race-condition-free way to create the symlink that guarantees you run the command you intend to, which is why bash adds a non-standard -a extension to exec so that you don't need such file-system-based workarounds.
Typically, though, commands restrict their behavioral changes to a small, fixed set of possible names. This means that any such links can be created when the program is first installed, and don't need to be created on the fly. In this scenario, there is no need for exec -a, since all possible "virtual" executables already exist.
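As an illustration of that pattern, here is a sketch of a script that switches behavior on the name it was invoked by; the names are invented for the example:

#!/bin/sh
# Dispatch on the basename of $0, i.e. on argv[0]
case "${0##*/}" in
compress)   echo "acting as the compressor" ;;
decompress) echo "acting as the decompressor" ;;
*)          echo "unrecognized name: $0" >&2; exit 1 ;;
esac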
I am curious about the following:
I have a bash script, which is executed once a month through a cronjob. The following line gives an "unknown command" error when run through the cronjob:
echo $P | chpasswd
When I execute the bash script directly, it is working properly.
Anyone with an idea?
Converting commentary into an answer.
What is the PATH supplied to your cron job? Where is chpasswd stored? Since the directory where chpasswd is stored is not listed in the path provided by cron, it fails to find it. You get a very limited environment with cron; running anything the least out of the ordinary means great care is required.
Either set PATH more fully in the script run by the cron job, or specify the absolute pathname of the commands that are not in /bin or /usr/bin.
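For example, either of these would work, assuming chpasswd lives in /usr/sbin (verify with command -v chpasswd on your machine):

# Option 1: widen PATH near the top of the script
PATH=/usr/sbin:/usr/bin:/bin

# Option 2: use the absolute path at the call site
echo "$P" | /usr/sbin/chpasswd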
Incidentally, how do you set P for echo to echo it? Doesn't it set the same value each month? Is that wise?
There are numerous other questions on Stack Overflow about difficulties running commands from cron jobs. Amongst others, see Bash script not running in cron correctly and Perl script works but not via cron and Is there a special restriction on commands executed by cron?, to name but three.
I'm a newbie to scripting languages trying to learn bash programming.
I have a very basic question. Suppose I want to create three folders: $HOME/folder/ with two child folders, folder1 and folder2.
If I execute a command in the shell like
mkdir -p $HOME/folder/{folder1,folder2}
the folder will be created along with the child folders.
If the same thing is executed through a script, I don't get the expected result. If sample.sh contains
#!/bin/sh
mkdir -p $HOME/folder/{folder1,folder2}
and I execute sh ./sample.sh, the first folder is created, but inside it a single directory literally named {folder1,folder2} appears. The separate child folders are not created.
My query is
How does a command behave differently in a script file compared to the terminal? i.e., why is it not the same?
How to make it work?
bash behaves differently when invoked as sh, to more closely mimic the POSIX standard. One of the things that changes is that brace expansion (which is absent from POSIX) is no longer recognized. You have several options:
Run your script using bash ./sample.sh. This ignores the hashbang and explicitly uses bash to run the script.
Change the hashbang to read #!/bin/bash, which allows you to run the script by itself (assuming you set its execute bit with chmod +x sample.sh).
Note that running it as sh ./sample.sh would still fail, since the hashbang is only used when running the file itself as the executable.
Don't use brace expansion in your script. You could still use a loop as a longer way to avoid duplicating code:
for d in folder1 folder2; do
mkdir -p "$HOME/folder/$d"
done
Brace expansion doesn't happen in sh.
In sh:
$ echo {1,2}
produces
{1,2}
In bash:
$ echo {1,2}
produces
1 2
Execute your script using bash instead of using sh and you should see expected results.
This is probably happening because, while your tags indicate you think you are using Bash, you may not be. This is because of the very first line:
#/bin/sh
That says "use the system default shell." That may not be bash. Try this instead:
#!/usr/bin/env bash
Oh, and note that you were missing the ! after #. I'm not sure if that's just a copy-paste error here, but you need the !.
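With that fixed, sample.sh would look like this and create both child folders:

#!/usr/bin/env bash
mkdir -p "$HOME/folder/"{folder1,folder2}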