Using commands in a bash script with 'which'

Looking at bash scripts sometimes I see a construction like this:
MYSQL=`which mysql`
$MYSQL -uroot -ppass -e "SELECT * FROM whatever"
Where in other scripts the command (mysql in this case) is used directly:
mysql -uroot -ppass -e "SELECT * FROM whatever"
So, why and when should which be used, and for which commands? I've never seen echo used with which…

You can just do man which for details:
DESCRIPTION
which returns the pathnames of the files (or links) which would be executed in the current environment, had its arguments been given as commands in a strictly POSIX-conformant shell. It does this by searching the PATH for executable files matching the names of the arguments. It does not follow symbolic links.
So which mysql just returns the path of the mysql executable.
However, using which as in your example also makes sure that any alias set for mysql in your current environment is ignored.
There is another shortcut to avoid which in the shell: you can call mysql with a leading backslash:
\mysql -uroot -ppass -e "SELECT * FROM whatever"
This is effectively the same as what your two commands are doing.
From OP: The only reason to use which is to avoid possible problems with custom aliases (like alias mysql="mysql -upeter -ppaula"). And since it is pretty unlikely somebody would set an alias for, say, echo, we don't need this construction with echo. But it is very common to set an alias for mysql (nobody wants to memorize and type a 24-character password).
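For illustration, a minimal hedged sketch combining both approaches from this answer (the credentials and query are just the ones from the question):
#!/bin/bash
# Resolve the binary once, then call it through the variable so an
# interactive alias such as alias mysql="mysql -upeter -ppaula" cannot interfere.
MYSQL=$(which mysql) || { echo "mysql not found in PATH" >&2; exit 1; }
"$MYSQL" -uroot -ppass -e "SELECT * FROM whatever"

# Equivalent one-off form: a leading backslash also bypasses any alias.
\mysql -uroot -ppass -e "SELECT * FROM whatever"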

Largely, both are the same:
which just returns the absolute path of the binary. In some special situations, such as when a third-party program executes your script or prepares the environment it will run in, the full path of the binary comes in handy.
A scheduler is a typical case: if you have scheduled a script, you will want to call the binary by its absolute path.
Hence:
mysql=`which mysql`
or
mysql=$(which mysql)
or even
/usr/bin/mysql <flags>
Your script might have run from the scheduler using
mysql ....<flags>
but that is not guaranteed, as explained in the previous post; an alias may be one of the reasons.
For the kind of problems not using the absolute path can bring, check this link
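For example, a hedged scheduler entry along these lines (the binary path and the schedule are placeholders, not values from the question):
# crontab entry that calls the binary by its absolute path
0 3 * * * /usr/bin/mysql -uroot -ppass -e "SELECT * FROM whatever"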

Related

dash '-' after #!/bin/sh -

I have been working on a few scripts on CentOS 7 and sometimes I see:
#!/bin/sh -
on the first line. Looking at the man page for sh, I see the following under Special Parameters:
- Expands to the current option flags as specified upon invocation, by the set builtin command, or those set by the shell itself (such as the -i option).
What exactly does this mean? When do I need to use this special parameter?
The documentation you are reading has nothing to do with the command line you're looking at: it's referring to special variables. In this case, if you run echo $- you will see "the current option flags as specified upon invocation...".
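For instance, in an interactive bash session you might see something like this (the exact set of flags varies with your shell and its settings):
$ echo $-
himBHs    # e.g. -i for interactive, -m for job control, -H for history expansion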
If you take a look at the OPTIONS part of the bash man page, you will find:
-- A -- signals the end of options and disables further option processing.
Any arguments after the -- are treated as filenames and arguments. An
argument of - is equivalent to --.
In other words, an argument of - simply means "there are no other options after this argument".
You often see this used in situations where you want to avoid filenames starting with - accidentally being treated as command options: for example, if there is a file named -R in your current directory, running ls * will in fact behave as ls -R and produce a recursive listing, while ls -- * will not treat the -R file specially.
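A hedged demonstration of that difference, in a throwaway directory (the directory and file names are made up):
mkdir /tmp/dashdemo && cd /tmp/dashdemo
touch ./-R file1
ls *        # the glob expands to "-R file1", so ls parses -R as an option
ls -- *     # "--" ends option processing, so "-R" is listed as an ordinary file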
The single dash when used in the #! line is meant as a security precaution. You can read more about that here.
/bin/sh is an executable representing the system shell. Actually, it is usually implemented as a symbolic link pointing to the executable for whichever shell is the system shell. The system shell is kind of the default shell that system scripts should use. In Linux distributions, for a long time this was usually a symbolic link to bash, so much so that it has become somewhat of a convention to always link /bin/sh to bash or a bash-compatible shell. However, in the last couple of years Debian (and Ubuntu) decided to switch the system shell from bash to dash - a similar shell - breaking with a long tradition in Linux (well, GNU) of using bash for /bin/sh. Dash is seen as a lighter, and much faster, shell which can be beneficial to boot speed (and other things that require a lot of shell scripts, like package installation scripts).
Dash is fairly compatible with bash, being based on the same POSIX standard. However, it doesn't implement the bash-specific extensions. There are scripts in existence that use #!/bin/sh (the system shell) as their shebang but require bash-specific extensions. Debian and Ubuntu currently consider such scripts buggy and expect them to be fixed, since they require anything using /bin/sh to work when it points to dash.
Even though Ubuntu's system shell is pointing to dash, your login shell as a user continues to be bash at this time. That is, when you log in to a terminal emulator anywhere in Linux, your login shell will be bash. Speed of operation is not so much a problem when the shell is used interactively, and users are familiar with bash (and may have bash-specific customization in their home directory).

Properly formatting a crontab executable on a bash script

I'm having some serious issues trying to get the proper format for my bash script so that it runs successfully in crontab. The bash script runs successfully when run manually from the command line.
Here is the bash script in question (the actual parameters themselves [$1 & $2] have been manually placed in the script):
#!/bin/bash
# Usage: ./s3DeleteByDateVirginia "bucketname" "file type"
past=$(date +"%F" -d "60 days ago")
aws s3api list-objects --bucket $1 --query 'Contents[?LastModified<=`'$past'`][].{Key:Key}' | grep $2 | while read -r line
do
fileName=`echo $line`
aws s3api delete-object --bucket $1 --key "$fileName"
done;
The script is in this bash file: /home/ubuntu/s3DeleteByDateVirginiaSoco1
To set up the script I use: sudo crontab -e
Now I see people online saying you need to give it the proper path, which doesn't make much sense to me, especially when it comes to putting it in the right place. I'm seeing a number of variations of this online, but it consists of this format: SHELL=/bin/sh and PATH=/bin:/sbin:/usr/bin:/usr/sbin, and I don't know where to put it.
According to the syslog, the cron part is working, but the script itself doesn't execute:
In addition to this, the script has all of the proper permissions to run.
All in all, I'm more confused than when I started, and I'm not seeing that much documentation on how crontab works.
Crontab in question:
Additional Edits based on user's suggestions:
Here's my polished script:
Here's the crontab line:
# m h dom mon dow command
PATH=/usr/local/bin:/usr/bin:/bin:/root/.local/bin/aws
33 20 * * * /home/ubuntu/s3DeleteByDateSoco1
Updated syslog:
Ok, I see several problems here. First, you need to put this in the crontab file for the user you want the script to run as. If you want to run it under your user account, use plain crontab -e instead of sudo crontab -e (with sudo, you edit the root user's crontab file).
Second, you need to use the correct path & name for the script; it looks like it's /home/ubuntu/s3DeleteByDateVirginiaSoco1, so that's what should be in the crontab entry. Don't add ".sh" if it's not actually part of the filename. It also looks like you tried adding "root" in front of the path; don't do that either, since crontab will try to execute "root" as a command, and it'll fail. bash -c doesn't hurt, but it doesn't help at all either, so don't use it.
Third, the PATH needs to be set appropriately for the executables you use in the script. By default, cron jobs execute with a PATH of just "/usr/bin:/bin", so when you use a command like aws, cron will look for it as /usr/bin/aws, not find it, look for it as /bin/aws, not find it, and give the error "aws: command not found" that you see in the last log entry. To fix this, first find out where aws (and any other program your script depends on) is installed; you can use which aws in your regular shell to find this out. Suppose it's /usr/local/bin/aws. Then you can either:
Add a line like PATH=/usr/local/bin:/usr/bin:/bin (with maybe any other directories you think are appropriate) to the crontab file, before the line that says to run your script.
Add a line like PATH=/usr/local/bin:/usr/bin:/bin (with maybe any other directories you think are appropriate) to your script file, before the lines that use aws.
In your script, use an explicit path every time you want to run aws (something like /usr/local/bin/aws s3api list-objects ...)
You can use any (or all) of the above, but you must use at least one or it won't be able to find the aws command (or anything else that isn't in the set of core commands that come with the OS).
Fourth, I don't see where $1 and $2 are supplied. You say they've been manually placed in the script, but I don't know what you mean by that. Since the script expects them as parameters, you need to specify them in the crontab file (i.e. the command in crontab should be something like /home/ubuntu/s3DeleteByDateVirginiaSoco1 bucketname pattern).
Fifth, the script itself doesn't follow good quoting conventions. In general, all variable references should be in double-quotes. For example, use grep "$2" instead of grep $2. Without the double-quotes, variables that contain spaces or certain shell metacharacters can cause weird parsing problems.
Finally, why do you do fileName=`echo $line`? This mostly just copies the value of $line into the variable fileName, but it can have those weird parsing problems I mentioned in the last point. If you want to copy a variable reliably, just use fileName="$line" (or fileName=$line -- this is one of the few cases where it's safe to leave the double-quotes off).
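As a hedged, self-contained illustration of the quoting rule (the values are made up; it only demonstrates word splitting, not the aws pipeline itself):
#!/bin/bash
value="two words"
printf '<%s>\n' $value      # unquoted: split into two arguments -> <two> <words>
printf '<%s>\n' "$value"    # quoted: stays one argument -> <two words>

line="some file name.log"
fileName=$line              # plain assignments do not word-split, so this is safe
fileName="$line"            # quoting anyway is the more explicit habit
printf '%s\n' "$fileName"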
BTW, shellcheck.net is good at spotting common problems like bad quoting; I recommend running your scripts through it to see what it finds.
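Putting the earlier points together, a hedged sketch of what the crontab file itself might look like (the PATH entries, bucket name, and file-type pattern are assumptions to adapt):
PATH=/usr/local/bin:/usr/bin:/bin
33 20 * * * /home/ubuntu/s3DeleteByDateVirginiaSoco1 my-bucket-name .log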

What is the `Cd` command?

I was writing some code, navigating my computer (OSX 10.11.6) via the command line, like I always do, and I made a typo! Instead of typing:
cd USB
I typed
Cd USB
Nothing happened, but it didn't register as an invalid command. Perplexed by this, I did some investigating: I checked the man entry. There was no entry. I found the source file (/usr/bin/Cd) using which Cd, and then ran cat on it:
#!/bin/sh
# $FreeBSD: src/usr.bin/alias/generic.sh,v 1.2 2005/10/24 22:32:19 cperciva Exp $
# This file is in the public domain.
builtin `echo ${0##*/} | tr \[:upper:] \[:lower:]` ${1+"$@"}
What is this, and why is it here? How does it relate to FreeBSD?
Any help would be amazing, thanks!
macOS uses a case-insensitive filesystem by default[1], which can be misleading at times:
which Cd is effectively the same as which cd and which CD: all of them return the same file path.
Confusingly, even though all 3 commands refer to the same file, they do so in a case-preserving manner, misleadingly suggesting that the actual case of the filename is whatever you specified.
As a workaround, you can see the true case of the filename if you employ globbing (filename expansion):
$ ls "$(which Cd)"* # could match additional files, but the one of interest is among them
/usr/bin/cd # true case of the filename
Bash (the macOS default shell) is internally case-sensitive.
That is, it recognizes cd as builtin cd (its built-in directory-changing command).
By contrast, it does NOT recognize Cd as that, due to the difference in case.
Given that it doesn't recognize Cd as a builtin, it goes looking for an external utility (in the $PATH), and that is when it finds /usr/bin/cd.
/usr/bin/cd is implemented as a shell script, which is mostly useless, because as an external utility it cannot affect the shell's state, so its attempts to change the directory are simply quietly ignored.
(Keith Thompson points out in a comment that you can use it as a test of whether a given directory can be changed to, because the script's exit code will reflect that.)
Matt's answer provides history behind the inclusion of the script in FreeBSD and OSX (which mostly builds on FreeBSD), but it's worth taking a closer look at the rationale (emphasis mine):
From the POSIX spec:
However, all of the standard utilities, including the regular built-ins in the table, but not the special built-ins described in Special Built-In Utilities, shall be implemented in a manner so that they can be accessed via the exec family of functions as defined in the System Interfaces volume of POSIX.1-2008 and can be invoked directly by those standard utilities that require it (env, find, nice, nohup, time, xargs).
In essence, the above means: regular built-ins must (also) be callable stand-alone, as executables (whether as scripts or binaries), not just as built-ins from within the shell.
The cited regular built-ins table comprises these utilities:
alias bg cd command false fc fg getopts jobs kill newgrp pwd read true umask unalias wait
Note: special built-in utilities are by definition shell-internal only, and their behavior differs from regular built-in utilities.
As such, to be formally POSIX-compliant an OS must indeed provide cd as an external utility.
At the same time, the POSIX spec does acknowledge that at least some of these regular built-ins - notably cd - only make sense as built-ins:
"Since cd affects the current shell execution environment, it is always provided as a shell regular built-in." - http://pubs.opengroup.org/onlinepubs/9699919799/utilities/cd.html
Among the regular built-in utilities listed, some make sense both as a built-in and as an external utility:
For instance kill needs to be a built-in in order to kill jobs (which are a shell-internal concept), but it is also useful as an external utility, so as to kill processes by PID.
However, among the regular built-in utilities listed, the following never make sense as external utilities as far as I can tell (do tell me if you disagree), even though POSIX mandates their presence:
alias bg cd command fc fg getopts jobs read umask unalias
Tip of the hat to Matt for helping to complete the list; he also points out that the hash built-in, even though it's not a POSIX utility, also has a pointless script implementation.
[1] As Dave Newton points out in a comment, it is possible to format HFS+, the macOS filesystem, in a case-sensitive manner (even though most people stick with the case-insensitive default). Based on the answer Dave links to, the following command will tell you whether your macOS filesystem is case-insensitive or not:
diskutil info / | grep -iq '^\s*Name.*case-sensitive*' && echo "case-SENSITIVE" || echo "case-INsensitive"
What is this?
The script itself is a portable way to convert a command name, even with random upper casing, into the equivalent shell builtin, based on the executed path's file name (that is, the part of the string after the final / in the $0 variable). The script then runs that builtin with the same arguments.
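A hedged illustration of that expansion, using a plain variable instead of $0 (the path is just an example):
$ name=/usr/bin/Cd
$ echo "${name##*/}"                               # strip up to and including the last "/"
Cd
$ echo "${name##*/}" | tr '[:upper:]' '[:lower:]'  # lower-case the result
cd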
As OSX filesystems are case insensitive by default, /usr/bin/cd converts running Cd, CD, cD, or any other casing of cd (including an explicit filesystem path such as /usr/bin/cd) back to the shell builtin command cd. This is largely useless in a script, as cd only affects the current shell it is running in, which closes immediately when the script ends.
How does it relate to freeBSD?
A similar file exists in FreeBSD, which Apple adapted to do case conversion. Mac file systems by default are case insensitive (but case preserving).
The $FreeBSD: src/usr.bin/alias/generic.sh,v 1.2 2005/10/24 22:32:19 cperciva Exp $ header is the source information in the file.
Most of the underlying OSX system comes directly from FreeBSD or was based on it. The windowing system on top of this and the Cocoa app layer are where OSX becomes truly Apple. Some of the lower-level Apple bits have even made it back into FreeBSD, such as the Clang and LLVM compilers.
Why is it here?
The earlier FreeBSD svn commits shed a bit of light:
A little bit more thought has resulted in a generic script which can
implement any of the useless POSIX-required ``regular shell builtin''
utilities...
Although most builtins aren't very useful when run in a new shell via a script, this compliance script was used for the commands alias bg cd command fc fg getopts hash jobs read type ulimit umask unalias wait. POSIX compliance is fun!
As I recall, MacOS uses a case-insensitive file system by default. The command you saw as /usr/bin/Cd is actually /usr/bin/cd, but it can be referred to by either name.
You can see this by typing
ls /usr/bin/ | grep -i cd
Normally cd is a builtin command in the shell. As you know, it changes the current directory. An external cd command is nearly useless -- but it still exists.
It can be used to detect whether it's possible to change to a specified directory without actually affecting the working directory of your current process.
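For instance, a hedged sketch on a system that ships the external cd (such as macOS); the directory name is arbitrary:
# Uses only the exit status; the caller's working directory is unaffected either way.
if /usr/bin/cd /var/log 2>/dev/null; then
    echo "can change into /var/log"
else
    echo "cannot change into /var/log"
fi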
Your shell (probably bash) treats command names case-sensitively. The builtin can only be referred to as cd, but when you type Cd the shell searches $PATH, and since the filesystem lets it open the script file as /usr/bin/Cd, it finds and executes it.

Run simple programme using unix shell

I'm new to unix and unix development. In my new.sh script I wrote:
$USERNAME=user
$PASSWORD=sekrit
echo $USERNAME
and ran new.sh using bash new.sh
But I get the following errors
new.sh: line 1: =user: command not found
new.sh: line 2: =sekrit: command not found
How do I run that command and print the username variable in terminal?
USERNAME is the name of the variable. $USERNAME is the replacement (aka contents, aka value). Since USERNAME is empty, you effectively try to run a command named =user, which is what the error message tells you.
Remove the $ from $USERNAME=... and it will work.
As Jens notes in his answer, the problem is that an assignment to a variable is not prefixed with a $, so:
USERNAME=user
PASSWORD=sekrit
is the way to write what you wanted. You got an error because USERNAME was not set, so after expansion, the shell looked at the command as:
=user
=sekrit
and it could not find such commands on the system (not very surprisingly). However, be aware that if you have previously written:
USERNAME=archipelago
PASSWORD=anchovy
then the lines:
$USERNAME=user
$PASSWORD=sekrit
would have been equivalent to writing:
archipelago=user
anchovy=sekrit
You could see that by running set with no arguments; it would show you the values of all the variables set in the shell. You could search for words such as USERNAME and archipelago to see what happened.
Now you've learned that, forget it. The number of times you'll need to use it is very limited (but it is handy on those rare — very rare — occasions when you need it).
For all practical purposes, don't write a $ on the left-hand side of a variable assignment in shell.
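For completeness, a corrected version of the original new.sh (same values as in the question):
#!/bin/bash
USERNAME=user      # no "$" on the left-hand side, no spaces around "="
PASSWORD=sekrit
echo "$USERNAME"   # prints: user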

Problem with running Ruby with Cron

My ruby file is like this.
`mkdir #{HOST} -p`
It works fine by: ruby mycode.rb
But in a cron job
0 * * * * ruby ~/backup.rb >> backup.log
It will create a -p folder. Why?
The #1 problem that anybody runs into with cron jobs is that, usually for security reasons, cron jobs run with a minimal $PATH. So it could be that your cron job runs with a different path than when you run the script from the shell, which means it is possible that within the cron job a different mkdir command gets called, one which interprets its arguments differently.
Usually, the first filename argument stops option processing and everything that comes after it is treated as a filename. So, since #{HOST} is a filename, everything after it will also be treated as a filename, which means the call will be interpreted as "make two directories, one named #{HOST} and the other named -p". If you look, for example, at the specification of mkdir, it is simply illegal to pass an option after the filenames.
Another possibility is that for some reason #{HOST} will be empty when running under cron. Then the whole call expands to mkdir -p, which again, depending on your implementation of mkdir might be interpreted as "create one directory named -p".
It is not quite clear to me why you are passing the options and operands in the wrong order, instead of mkdir -p #{HOST}. It's also not clear to me why you use the shell at all, instead of just FileUtils.mkdir_p(HOST).
Another problem I've seen is that the #! line fails when /usr/bin/env is used. For instance:
#!/usr/bin/env ruby
doesn't find ruby when running under cron. You have to use
#!/usr/local/bin/ruby
or the equivalent on your platform.
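For instance, a hedged crontab entry that sidesteps both issues by using absolute paths for the interpreter, the script, and the log file (all three paths are assumptions):
0 * * * * /usr/local/bin/ruby /home/me/backup.rb >> /home/me/backup.log 2>&1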
