Properly formatting a crontab entry to run a bash script - bash

I'm having some serious issues trying to get the proper format for my bash script so that it runs successfully from crontab. The script runs successfully when invoked manually from the command line.
Here is the bash script in question (the actual parameters themselves [$1 & $2] have been manually placed in the script):
#!/bin/bash
# Usage: ./s3DeleteByDateVirginia "bucketname" "file type"
past=$(date +"%F" -d "60 days ago")
aws s3api list-objects --bucket $1 --query 'Contents[?LastModified<=`'$past'`][].{Key:Key}' | grep $2 | while read -r line
do
fileName=`echo $line`
aws s3api delete-object --bucket $1 --key "$fileName"
done;
The script is in this bash file: /home/ubuntu/s3DeleteByDateVirginiaSoco1
To set up the script I use: sudo crontab -e
Now I see people online saying you need to give it the proper path, which doesn't make much sense to me, especially when it comes to putting it in the right location. I'm seeing a number of variations of this online, but it boils down to this format: SHELL=/bin/sh and PATH=/bin:/sbin:/usr/bin:/usr/sbin. I just don't know where to put it.
According to the syslog, the cron part is working, but the script itself doesn't execute:
In addition to this, the script has all of the proper permissions to run.
All in all, I'm more confused than when I started, and I'm not finding much documentation on how crontab works.
Crontab in question:
Additional Edits based on user's suggestions:
Here's my polished script:
Here's the crontab line:
# m h dom mon dow command
PATH=/usr/local/bin:/usr/bin:/bin:/root/.local/bin/aws
33 20 * * * /home/ubuntu/s3DeleteByDateSoco1
Updated syslog:

Ok, I see several problems here. First, you need to put this in the crontab file for the user you want the script to run as. If you want to run it under your user account, use plain crontab -e instead of sudo crontab -e (with sudo, you edit the root user's crontab file).
Second, you need to use the correct path & name for the script; it looks like it's /home/ubuntu/s3DeleteByDateVirginiaSoco1, so that's what should be in the crontab entry. Don't add ".sh" if it's not actually part of the filename. It also looks like you tried adding "root" in front of the path; don't do that either, since crontab will try to execute "root" as a command, and it'll fail. bash -c doesn't hurt, but it doesn't help at all either, so don't use it.
Third, the PATH needs to be set appropriately for the executables you use in the script. By default, cron jobs execute with a PATH of just "/usr/bin:/bin", so when you use a command like aws, it'll look for it as /usr/bin/aws, not find it, look for it as /bin/aws, not find it, and give the error "aws: command not found" that you see in the last log entry. To fix this, first find out where aws (and any other programs your script depends on) are; you can use which aws in your regular shell to find this out. Suppose it's /usr/local/bin/aws. Then you can either:
Add a line like PATH=/usr/local/bin:/usr/bin:/bin (with maybe any other directories you think are appropriate) to the crontab file, before the line that says to run your script.
Add a line like PATH=/usr/local/bin:/usr/bin:/bin (with maybe any other directories you think are appropriate) to the your script file, before the lines that use aws.
In your script, use an explicit path every time you want to run aws (something like /usr/local/bin/aws s3api list-objects ...)
You can use any (or all) of the above, but you must use at least one or it won't be able to find the aws command (or anything else that isn't in the set of core commands that come with the OS).
Fourth, I don't see where $1 and $2 are supplied. You say they've been manually placed in the script, but I don't know what you mean by that. Since the script expects them as parameters, you need to specify them in the crontab file (i.e. the command in crontab should be something like /home/ubuntu/s3DeleteByDateVirginiaSoco1 bucketname pattern).
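Putting the PATH fix and the parameters together, a minimal sketch of the crontab file might look like this (the bucket name, the grep pattern, and the aws location are placeholders; check yours with which aws):
# m h dom mon dow command
PATH=/usr/local/bin:/usr/bin:/bin
33 20 * * * /home/ubuntu/s3DeleteByDateVirginiaSoco1 my-bucket .log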
Fifth, the script itself doesn't follow good quoting conventions. In general, all variable references should be in double-quotes. For example, use grep "$2" instead of grep $2. Without the double-quotes, variables that contain spaces or certain shell metacharacters can cause weird parsing problems.
Finally, why do you do fileName=`echo $line`? This mostly just copies the value of $line into the variable fileName, but it can have those weird parsing problems I mentioned in the last point. If you want to copy a variable reliably, just use fileName="$line" (or fileName=$line -- this is one of the few cases where it's safe to leave the double-quotes off).
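Applying the quoting fixes described above, a cleaned-up sketch of the script (still assuming the bucket and the pattern arrive as $1 and $2) could look like:
#!/bin/bash
# Usage: ./s3DeleteByDateVirginia "bucketname" "file type"
past=$(date +"%F" -d "60 days ago")
aws s3api list-objects --bucket "$1" --query 'Contents[?LastModified<=`'"$past"'`][].{Key:Key}' | grep "$2" | while read -r line
do
    # Copy the variable directly instead of round-tripping it through echo
    fileName="$line"
    aws s3api delete-object --bucket "$1" --key "$fileName"
done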
BTW, shellcheck.net is good at spotting common problems like bad quoting; I recommend running your scripts through it to see what it finds.

Related

Reading lines in a text file with a Bash script

I want to read lines from a text file, for example:
$SSH_PRIVATE_FILE="address"
I would like to read and evaluate each line so as to assign a value to the already defined SSH_PRIVATE_FILE.
The following is a Dockerfile's contents:
ARG SSH_PRIVATE_FILE
COPY build-params build-params
RUN while IFS='' read -r line || [ -n "$line" ]; do\
echo "Text read from file: $line";\
eval `$line`;\
done < "build-params"
RUN echo $SSH_PRIVATE_FILE
UPDATED
But it returns an error: /bin/sh: 1: $SSH_PRIVATE_FILE="~/.ssh/id_rsa": not found
Bourne-type shells have a built-in mechanism to read the contents of a text file and evaluate each line, the . directive. (GNU bash has the same functionality under the name source, but this is not part of the POSIX shell standard and some very-light-weight shells in Docker base images don’t support it.) At a shell level, what you’ve written is equivalent to
. ./build-params
However, each Dockerfile RUN line runs a separate container with a separate shell with a clean shell environment, so this turns out to be a pretty bad way to set environment variables. The Dockerfile ENV directive works better.
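As a minimal illustration of the difference (FOO is a hypothetical variable, not from the question):
# Each RUN line runs in a fresh shell, so FOO is gone by the second line
RUN export FOO=bar
RUN echo "FOO is: $FOO"     # prints "FOO is: "
# ENV persists across layers and into the running container
ENV FOO=bar
RUN echo "FOO is: $FOO"     # prints "FOO is: bar"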
Furthermore, since you’re writing the Dockerfile, you have complete control over the filesystem layout inside the Dockerfile, and you don’t really need the locations of things inside the Docker container to be parametrizable. In the case of things like credentials, you’d use the docker run -v option to inject things into the container. If I needed a setting like this, I might make my Dockerfile say
ENV SSH_PRIVATE_FILE=/ssh/id_rsa
and then actually launch the container as
docker run -v $HOME/.ssh:/ssh ...
and not make this a build-time option at all.
Just a wild guess, but I'd try putting a space before every \ at the end of each line.

chpasswd is an unknown command when called in a cron job?

I am curious about the following:
I have a bash script which is executed once a month through a cron job. The following line gives an "unknown command" error when run through the cron job:
echo $P | chpasswd
When I execute the bash script directly, it is working properly.
Anyone with an idea?
Converting commentary into an answer.
What is the PATH supplied to your cron job? Where is chpasswd stored? Since the directory where chpasswd is stored is not listed in the path provided by cron, it fails to find it. You get a very limited environment with cron; running anything the least out of the ordinary means great care is required.
Either set PATH more fully in the script run by the cron job, or specify the absolute pathname of the commands that are not in /bin or /usr/bin.
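For instance, assuming chpasswd lives in /usr/sbin (typical on Linux, but verify with which chpasswd), either of these sketches should work:
# Option 1: widen PATH near the top of the script
PATH=/usr/sbin:/usr/bin:/bin
export PATH
echo "$P" | chpasswd
# Option 2: use the absolute pathname directly
echo "$P" | /usr/sbin/chpasswd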
Incidentally, how do you set P for echo to echo it? Doesn't it set the same value each month? Is that wise?
There are numerous other questions on Stack Overflow about difficulties running commands from cron jobs. Amongst others, see Bash script not running in cron correctly and Perl script works but not via cron and Is there a special restriction on commands executed by cron?, to name but three.

running a shell script from another script

I have a script on Unix that looks like this:
#!/bin/bash
gcc -osign sign.c
./sign < /usr/share/dict/words | sort | squash > out
Whenever I try to run this script it gives me an error saying that squash is not a valid command. squash is a shell script stored in the same directory as this script and looks like this:
#!/bin/bash
awk -f squash.awk
I have execute permissions set correctly but for some reason it doesn't run. Is there something else I have to do to make it able to run like shown? I am rather new to scripting so any help would be greatly appreciated!
As mentioned in @Biffen's comment, unless . is in your $PATH variable, you need to specify ./squash for the same reason you need to specify ./sign.
When parsing a bare word on the command line, bash checks all the directories listed in $PATH to see if said word is an executable file living inside any of them. Unless . is in $PATH, bash won't find squash.
To avoid this problem, you can tell bash not to go looking for squash by giving bash the complete path to it, namely ./squash.
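With that change, the calling script might look like this (same commands as the original, only the path to squash is made explicit; ./sign already was):
#!/bin/bash
gcc -o sign sign.c
# squash lives in the same directory, so give bash its relative path
./sign < /usr/share/dict/words | sort | ./squash > out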

Cron job not "seeing" a file

I pass the file path, containing variables to be sourced, as an argument to my Bash script.
The file is created on Windows, in case that makes any difference.
The following check is performed:
CONFIG_FILE=$1
if [[ -f ${CONFIG_FILE} ]]; then
    echo "Is a file"
    . ${CONFIG_FILE}
else
    echo "Not a file"
fi
When I run the script manually, from the command line, the check is fine and the variables get sourced.
However, when I set up a Cron job using
*/1 * * * * /full/path/to/script.sh /full/path/to/configfile
I get "Not a file" printed out.
I attempted every single setup I found online to solve this:
setting up environment variables both in crontab and script itself (PATH & SHELL)
sourcing the profile (both . /etc/profile and . /home/user/.bash_profile) both in crontab (before executing the script) and in the script itself.
trying to run crontab with the -u user parameter, but don't have permissions for this (and it doesn't make sense, as I am already logged in as the user who should setup the crontab)
I am setting up the crontab with the proper user under whom the script should be run. The user has access rights to the location of the files (as can be observed through running the script from the command line).
Looking for further advice on what can be attempted next.
Found another attempt and it worked.
I added the -q flag in the cronjob line.
*/1 * * * * /path/script.sh /path/config-file -q
Source: Cron Job error "Could not open input file"
Can someone please explain to me what it does?
I am not so literate in bash.
What you're doing here is (I think) making sure that there is a separate argument behind your /path/config-file. Your original problem seems to be that on Unix your config file was stated as /path/config-file\r (note the trailing \r). By adding the argument -q\r, the config file itself stays "clean" of the carriage return. You could add blabla\r instead of -q\r, for that matter. Your script never interprets that extra argument; but if you put it on the cron line, then your config file argument is "protected", because there's stuff following it, that's all.
What you could also do is make sure that your cron definition is Unix-styled (\n-terminated lines) instead of DOS-styled (\r\n-terminated lines). There's probably a utility dos2unix on your Unix box to accomplish that.
Or you could remove the crontab on Unix using crontab -r and then re-create the crontab using crontab -e. Just don't upload files that were created on MS-DOS (or derived).
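If the carriage returns are indeed the culprit, a minimal cleanup sketch of the dos2unix route might be (mycron.txt is a hypothetical file holding the crontab text):
# Strip DOS line endings from the config file created on Windows
dos2unix /full/path/to/configfile
# Clean and reinstall the crontab itself the same way
dos2unix mycron.txt
crontab mycron.txt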

Using commands in a bash script with 'which'

Looking at bash scripts, I sometimes see a construction like this:
MYSQL=`which mysql`
$MYSQL -uroot -ppass -e "SELECT * FROM whatever"
Where in other scripts the command (mysql in this case) is used directly:
mysql -uroot -ppass -e "SELECT * FROM whatever"
So, why and when should which be used and for which commands – I've never seen echo used with which…
You can just do man which for details:
DESCRIPTION
which returns the pathnames of the files (or links) which would be executed in the current environment, had its arguments been given as commands in a strictly POSIX-conformant shell. It does this by searching the PATH for executable files matching the names of the arguments. It does not follow symbolic links.
So which mysql just returns the current path of the mysql command.
However, the use of which in your examples mainly makes sure that any alias set for mysql in your current environment is ignored.
There is also another clever shortcut to bypass an alias in the shell: call mysql with a leading backslash:
\mysql -uroot -ppass -e "SELECT * FROM whatever"
This is effectively the same as what your two commands are doing.
From OP: The only reason to use which is to avoid possible problems with custom aliases (like alias mysql="mysql -upeter -ppaula"). And since it is pretty unlikely somebody would set an alias for say echo, we don't need this construction with echo. But it is very common to set an alias for mysql (nobody wants to memorize and type the 24 chars long password).
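A quick interactive-shell illustration of the alias problem, reusing the hypothetical alias from the paragraph above:
# At an interactive prompt:
alias mysql='mysql -upeter -ppaula'
mysql -e "SELECT 1"                   # alias expands: actually runs mysql -upeter -ppaula -e "SELECT 1"
\mysql -uroot -ppass -e "SELECT 1"    # the backslash suppresses the alias
MYSQL=`which mysql`
"$MYSQL" -uroot -ppass -e "SELECT 1"  # an absolute path bypasses the alias too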
Largely, they are both the same:
which just returns the absolute path of the binary. In special situations, such as when some third program is executing the script or preparing the environment in which the script will run, the absolute path of the binary comes in handy.
For example, in the case of a scheduler: if you have scheduled a script, you will want to call the binary by its absolute path.
Hence:
mysql=`which mysql`
or
mysql=$(which mysql)
or even
/usr/bin/mysql <flags>
Your script, when run from the scheduler, might have worked using
mysql ....<flags>
but that is not guaranteed, as explained in the previous post; an alias may be one of the reasons.
For the kinds of problems that not using the absolute path can bring, check this link.
