chpasswd is unknown command when called in cronjob? - bash

I am curious about the following:
I have a bash script which is executed once a month through a cronjob. The following line gives an "unknown command" error when run through the cronjob:
echo $P | chpasswd
When I execute the bash script directly, it is working properly.
Anyone with an idea?

Converting commentary into an answer.
What is the PATH supplied to your cron job? Where is chpasswd stored? If the directory where chpasswd is stored (often /usr/sbin) is not listed in the PATH provided by cron, cron fails to find it. You get a very limited environment with cron; running anything the least bit out of the ordinary means great care is required.
Either set PATH more fully in the script run by the cron job, or specify the absolute pathname of the commands that are not in /bin or /usr/bin.
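Both fixes can be sketched in the script itself. The chpasswd location below is an assumption; verify it on your system with command -v chpasswd:

```shell
#!/bin/bash
# Option 1: widen the minimal PATH that cron provides, so sbin
# directories (where admin commands like chpasswd usually live)
# are searched as well.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

echo "$P" | chpasswd

# Option 2: skip the PATH lookup entirely and call the command by
# absolute path (commonly /usr/sbin/chpasswd, but check your system).
echo "$P" | /usr/sbin/chpasswd
```

Either option alone is enough; the point is that the script must not rely on cron's stripped-down PATH.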
Incidentally, how do you set P for echo to echo it? Doesn't it set the same value each month? Is that wise?
There are numerous other questions on Stack Overflow about difficulties running commands from cron jobs. Amongst others, see Bash script not running in cron correctly and Perl script works but not via cron and Is there a special restriction on commands executed by cron?, to name but three.

Related

Properly formatting a crontab executable on a bash script

I’m having some serious issues with trying to get the proper format for my bash script to be able to run successfully in crontab. The bash script runs successfully when manually prompted from the command line.
Here is the bash script in question (the actual parameters themselves [$1 & $2] have been manually placed in the script):
#!/bin/bash
# Usage: ./s3DeleteByDateVirginia "bucketname" "file type"
past=$(date +"%F" -d "60 days ago")
aws s3api list-objects --bucket $1 --query 'Contents[?LastModified<=`'$past'`][].{Key:Key}' | grep $2 | while read -r line
do
fileName=`echo $line`
aws s3api delete-object --bucket $1 --key "$fileName"
done;
The script is in this bash file: /home/ubuntu/s3DeleteByDateVirginiaSoco1
To set up the script I use: sudo crontab -e
Now I see people online saying you need to give it the proper path, which doesn't make any sense to me, especially when it comes to putting it in the right location. I'm seeing a number of variations of this online, but it consists of this format: SHELL=/bin/sh PATH=/bin:/sbin:/usr/bin:/usr/sbin -- and I don't know where to put it.
According to the syslog, the cron part is working, but the script itself doesn't execute:
In addition to this the script has all of the proper permissions to run.
All in all, I'm more confused than when I started, and I'm not seeing much documentation on how crontab works.
Crontab in question:
Additional Edits based on user's suggestions:
Here's my polished script:
Here's the crontab line:
# m h dom mon dow command
PATH=/usr/local/bin:/usr/bin:/bin:/root/.local/bin/aws
33 20 * * * /home/ubuntu/s3DeleteByDateSoco1
Updated syslog:
Ok, I see several problems here. First, you need to put this in the crontab file for the user you want the script to run as. If you want to run it under your user account, use just crontab -e instead of sudo crontab -e (with sudo, it edits the root user's crontab file).
Second, you need to use the correct path & name for the script; it looks like it's /home/ubuntu/s3DeleteByDateVirginiaSoco1, so that's what should be in the crontab entry. Don't add ".sh" if it's not actually part of the filename. It also looks like you tried adding "root" in front of the path; don't do that either, since crontab will try to execute "root" as a command, and it'll fail. bash -c doesn't hurt, but it doesn't help at all either, so don't use it.
Third, the PATH needs to be set appropriately for the executables you use in the script. By default, cron jobs execute with a PATH of just "/usr/bin:/bin", so when you use a command like aws, it'll look for it as /usr/bin/aws, not find it, look for it as /bin/aws, not find it, and give the error "aws: command not found" that you see in the last log entry. First, you need to find out where aws (and any other programs your script depends on) are; you can use which aws in your regular shell to find this out. Suppose it's /usr/local/bin/aws. Then you can either:
Add a line like PATH=/usr/local/bin:/usr/bin:/bin (with maybe any other directories you think are appropriate) to the crontab file, before the line that says to run your script.
Add a line like PATH=/usr/local/bin:/usr/bin:/bin (with maybe any other directories you think are appropriate) to your script file, before the lines that use aws.
In your script, use an explicit path every time you want to run aws (something like /usr/local/bin/aws s3api list-objects ...)
You can use any (or all) of the above, but you must use at least one or it won't be able to find the aws command (or anything else that isn't in the set of core commands that come with the OS).
Fourth, I don't see where $1 and $2 are supplied. You say they've been manually placed in the script, but I don't know what you mean by that. Since the script expects them as parameters, you need to specify them in the crontab file (i.e. the command in crontab should be something like /home/ubuntu/s3DeleteByDateVirginiaSoco1 bucketname pattern).
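Combining the corrections so far, the crontab entry (edited with plain crontab -e) might look like the following sketch; bucketname and pattern are placeholders for the real arguments, and the PATH directories are assumptions to be adjusted from the output of which aws:

```shell
# m h dom mon dow command
PATH=/usr/local/bin:/usr/bin:/bin
33 20 * * * /home/ubuntu/s3DeleteByDateVirginiaSoco1 bucketname pattern
```

Note the PATH line names directories, not the aws binary itself; listing the executable's full path there (as in /root/.local/bin/aws) would not work.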
Fifth, the script itself doesn't follow good quoting conventions. In general, all variable references should be in double-quotes. For example, use grep "$2" instead of grep $2. Without the double-quotes, variables that contain spaces or certain shell metacharacters can cause weird parsing problems.
Finally, why do you do fileName=`echo $line`? This mostly just copies the value of $line into the variable fileName, but can have those weird parsing problems I mentioned in the last point. If you want to copy a variable reliably, just use fileName="$line" (or fileName=$line -- this is one of the few cases where it's safe to leave the double-quotes off).
BTW, shellcheck.net is good at spotting common problems like bad quoting; I recommend running your scripts through it to see what it finds.
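Putting the quoting advice together, here is a sketch of the script with those fixes applied. The PATH line and the aws location are assumptions, and $1 and $2 still come from the crontab entry:

```shell
#!/bin/bash
# Usage: s3DeleteByDateVirginia "bucketname" "file type"
PATH=/usr/local/bin:/usr/bin:/bin   # adjust to wherever `which aws` points

# GNU date: ISO date (YYYY-MM-DD) for 60 days ago
past=$(date +"%F" -d "60 days ago")

aws s3api list-objects --bucket "$1" \
    --query 'Contents[?LastModified<=`'"$past"'`][].{Key:Key}' |
  grep "$2" | while read -r line
do
    # $line already holds the value; no need for fileName=`echo $line`
    aws s3api delete-object --bucket "$1" --key "$line"
done
```

All variable references are double-quoted, so bucket names or patterns containing spaces or glob characters no longer break the commands.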

Cronjob fails to run a bash script giving an error "aws : command not found"

I have a script to invalidate Amazon CloudFront in a .sh file and it works fine when I run it with bash (bash /../filename.sh). I need to invalidate my distribution every thursday, so I wrote a cron job, but it is giving the error "aws: command not found".
This is my cron job
45 10 * * 2 /usr/bin/bash /var/www/cms/file.sh
What am I missing? Why does the cron job fail when bash can run the script?
This is because your system depends on an environment variable that lists the places to search for executable files, and that variable is not set to its usual value in cron sessions. This variable is named PATH. You can see its contents in your current session by just typing echo $PATH in your terminal.
When a cron session is started for a job to be executed, this variable is set to a minimal default (typically just /usr/bin:/bin).
To solve this issue, there are a few ways:
Method 1:
Use full path names in your shell script and do not depend on the PATH variable for finding executables.
Method 2:
Add the following at the beginning in your script (adjust if needed):
PATH=/sbin:/bin:/usr/sbin:/usr/bin
Method 3:
(This method is generally discouraged)
Add the following line in your crontab (adjust if needed):
PATH=/sbin:/bin:/usr/sbin:/usr/bin
You can do echo $PATH in your terminal and copy the output in the above variable in the crontab.
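One way to see the effect without waiting for cron is to re-run things under a stripped-down environment. This is only an approximation; the exact default PATH varies by cron implementation:

```shell
# Approximate cron's environment: empty except for a minimal PATH.
env -i PATH=/usr/bin:/bin /bin/sh -c \
    'echo "$PATH"; command -v aws || echo "aws: command not found"'
```

If the second line reports "command not found" here, the same failure will happen under cron, and one of the three methods above is needed.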
Thank you Sakis, your answer worked perfectly for me.
I was baffled why my scripts would run just fine from the command line but error out when run from cron. The reason was that I was trying to use an executable from within bash that was not a recognized command, due to the lack of a path reference. Now it's clear.
Did exactly this - no root or other sudo required.
From your own shell, echo $PATH. Copy this line and enter in your own cron (affects all jobs in that cron) or at the top of your script.
PATH=/home/myuser/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
And voilà!

Cron job not "seeing" a file

I pass the file path, containing variables to be sourced, as an argument to my Bash script.
The file is created on Windows, in case that makes any difference.
The following check is performed:
CONFIG_FILE=$1
if [[ -f ${CONFIG_FILE} ]]; then
echo "Is a file"
. ${CONFIG_FILE}
else
echo "Not a file"
fi
When I run the script manually, from the command line, the check is fine and the variables get sourced.
However, when I set up a Cron job using
*/1 * * * * /full/path/to/script.sh /full/path/to/configfile
I get "Not a file" printed out.
I attempted every single setup I found online to solve this:
setting up environment variables both in crontab and script itself (PATH & SHELL)
sourcing the profile (both . /etc/profile and . /home/user/.bash_profile) both in crontab (before executing the script) and in the script itself.
trying to run crontab with the -u user parameter, but don't have permissions for this (and it doesn't make sense, as I am already logged in as the user who should setup the crontab)
I am setting up the crontab with the proper user under whom the script should be run. The user has access rights to the location of the files (as can be observed through running the script from the command line).
Looking for further advice on what can be attempted next.
Found another attempt and it worked.
I added the -q flag in the cronjob line.
*/1 * * * * /path/script.sh /path/config-file -q
Source: Cron Job error "Could not open input file"
Can someone please explain to me what it does? I am not so literate in bash.
What you're doing here is (I think) making sure that there is a separate argument behind your /path/config-file. Your original problem seems to be that on Unix your config file was stated as /path/config-file\r (note the trailing \r). By adding an argument -q\r, the config file argument itself is "clean" of the carriage return. You could add blabla\r instead of -q\r, for that matter. Your script never interprets that extra argument; but if you put it on the cron line, then your config file argument is "protected", because there's stuff following it, that's all.
What you also could do is make sure that your cron definition is Unix-styled (\n-terminated lines) instead of DOS-styled (\r\n-terminated lines). There's probably a dos2unix utility on your Unix box to accomplish that.
Or you could remove the crontab on Unix using crontab -r and then re-create it using crontab -e. Just don't upload files that were created on MS-DOS (or derivatives).
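The trailing carriage return is easy to reproduce: a \r glued onto a path makes the -f test fail even though the file exists. A minimal demonstration using only temporary files:

```shell
tmp=$(mktemp -d)
touch "$tmp/configfile"
cr=$(printf '\r')                      # a literal carriage-return character

[ -f "$tmp/configfile" ]    && echo "clean path: is a file"
[ -f "$tmp/configfile$cr" ] || echo "path with CR: not a file"
```

Both messages print: the same path with a trailing carriage return no longer names an existing file, which is exactly what the cron job saw.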

How to setup a "module" command in unix to add software package to $PATH?

I use a lot of computing clusters and these often use a module system for making software packages available. Basically, you use the module command like module load sample_software and the sample_software path is added to $PATH. On a cluster, this command can be invoked during interactive usage and job submission usage.
I have a Linux box with the PBS/Torque queueing system installed so that I can sandbox software for later use on clusters. I need a very similar module system on this box. I started by making a file called modules.sh in my /etc/profile.d/ directory that looks like this:
module()
{
    if [ "$2" = "softwareX" ]; then
        PATH=$PATH:/home/me/dir/softwareX
        export PATH
    fi
}
I then put the following line in my .bash_profile script:
source /etc/profile.d/modules.sh
Now, this works great for the following usages: 1) If I submit a job and my job script uses module load softwareX, no problem, the job runs perfectly. 2) If I am working interactively on the command line and I type module load softwareX, then the path to softwareX is loaded into my $PATH and everything works great.
However, this doesn't work for the following situation: If I make a simple bash script that contains the line module load softwareX, when the bash script executes I get an error. For example, here is my bash script:
#!/usr/bin/env bash
echo $PATH
module load softwareX
echo $PATH
When I execute this I receive the error script.sh: line 3: module: command not found
...and the $PATH never changes. Does anyone know how I can solve this problem to work in all three situations? Thanks for any help!
A bash script won't invoke your startup files. You have to do that explicitly.
See http://www.gnu.org/software/bash/manual/bashref.html#Bash-Startup-Files
Invoked non-interactively
When Bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
but the value of the PATH variable is not used to search for the file name.
As noted above, if a non-interactive shell is invoked with the --login option, Bash attempts to read and execute commands from the login shell startup files.
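So one way to cover the script case is to point BASH_ENV at the file that defines the function; bash sources that file before running any non-interactive script. A sketch, assuming the modules.sh path from the question:

```shell
# e.g. in .bash_profile, after sourcing modules.sh for interactive use:
export BASH_ENV=/etc/profile.d/modules.sh

# Any bash script started from this environment now sources modules.sh
# first, so `module load softwareX` is defined inside it as well.
```

This relies on the scripts actually being run by bash (the #!/usr/bin/env bash shebang in the question is fine); BASH_ENV has no effect on sh or other shells.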
When you create a sub-shell, you create a new environment. When you exit back to your existing shell, you lose that environment.
I suspect this is what is going on with your module function call. If you added echo $PATH to the bottom of your module function, would you see the PATH get changed while inside the function, but change back after you leave it? If so, the problem is a sub-shell issue:
What you SHOULD do is have your module function print out the new path, and then do this:
PATH=$(module load softwareX)
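A sketch of that approach: the function echoes the modified PATH instead of exporting it, and the caller captures the output. The directory and names are the question's own; the else branch is an addition so an unknown module leaves PATH unchanged:

```shell
module()
{
    if [ "$2" = "softwareX" ]; then
        echo "$PATH:/home/me/dir/softwareX"
    else
        echo "$PATH"                  # unknown module: PATH unchanged
    fi
}

PATH=$(module load softwareX)
echo "$PATH"                          # now ends with /home/me/dir/softwareX
```

Because the assignment happens in the calling shell rather than inside a child process, the change survives the function call.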

Problem with running Ruby with Cron

My ruby file is like this.
`mkdir #{HOST} -p`
It works fine by: ruby mycode.rb
But in a cron job
0 * * * * ruby ~/backup.rb >> backup.log
It will create a -p folder. Why?
The #1 problem that anybody runs into with cron jobs is that usually, for security reasons, cron jobs run with a minimal $PATH. So, it could be that your cron job runs with a different path than when you run the script from the shell, which would mean that it is possible that within the cron job a different mkdir command gets called, which interprets its arguments differently.
Usually, the first filename argument stops option processing and everything that comes after that will be treated as a filename. So, since #{HOST} is a filename, everything after that will also be treated as a filename, which means that the call will be interpreted as "make two directories, one named #{HOST} and the other named -p" If you look for example at the specification of mkdir, it is simply illegal to pass an option after the filenames.
Another possibility is that for some reason #{HOST} will be empty when running under cron. Then the whole call expands to mkdir -p, which again, depending on your implementation of mkdir might be interpreted as "create one directory named -p".
It is not quite clear to me why you are passing the options and operands in the wrong order, instead of mkdir -p #{HOST}. It's also not clear to me why you use the shell at all, instead of just FileUtils.mkdir_p(HOST).
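In shell terms, the fix is just to put the option first. A quick check with a throwaway directory standing in for the Ruby #{HOST} value:

```shell
tmp=$(mktemp -d)
host_dir="$tmp/my-host"     # stand-in for the interpolated #{HOST}

mkdir -p "$host_dir"        # option before operand: unambiguous everywhere
[ -d "$host_dir" ] && echo "created"
```

With the operands-first order from the question, a strictly POSIX mkdir would instead treat -p as a second directory name to create.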
Another problem I've seen is the #! script line fails when /usr/bin/env is used. For instance:
#!/usr/bin/env ruby
doesn't find ruby when running under cron. You have to use
#!/usr/local/bin/ruby
or the equivalent on your platform.
