Is there a difference between:
[ -f $FOO ] && do_something
...and:
if [ -f $FOO ]; then
do_something
fi
I thought they were equivalent, as [ is just another name for test, which exits 0 when the condition passes. However, in a script I have written, I'm checking whether some environment variables are set and, if they are not, bailing out. In which case:
[ -z "${MY_ENV1+x}" -a -z "${MY_ENV2+x}" ] && fail "Oh no!"
...always seems to bail out; even when $MY_ENV1 and $MY_ENV2 are set. However, the if version works correctly:
if [ -z "${MY_ENV1+x}" -a -z "${MY_ENV2+x}" ]; then
fail "Oh no!"
fi
...where fail is defined as:
fail() {
>&2 echo "$@"
exit 1
}
I cannot diagnose the source of your problem, but answering the question in your title: this is the only difference between FIRST && SECOND and if FIRST; then SECOND; fi that I know of.
If FIRST evaluates to true, there is no difference.
If FIRST evaluates to false, then the result of the entire expression FIRST && SECOND is false. If your shell has the -e flag set, this non-zero status can make it abort: most shells exempt the failing command inside a && list, but the status still propagates when the list is the last command of the script (or of a Makefile recipe line), which is exactly what make checks. The version using the if statement, on the other hand, exits with status zero when FIRST is false and there is no else branch, so the shell will continue happily even if -e is set.
Therefore, I sometimes write the more verbose if FIRST; then SECOND; fi if I cannot afford a failure status. This can be important, for example, when writing Makefiles or if you make it a general habit to run your scripts with -e.
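To see the difference concretely, compare the exit status of the two forms when the condition is false (a minimal sketch; /no/such/file is just a placeholder path):
$ sh -ec '[ -f /no/such/file ] && echo found'; echo "exit: $?"
exit: 1
$ sh -ec 'if [ -f /no/such/file ]; then echo found; fi'; echo "exit: $?"
exit: 0
It is that trailing non-zero status that make, or a calling script running with -e, treats as a failure.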
Okay, so this is an assignment, so I will not put the exact script here, but I am really desperate at this point because I cannot figure out something as basic as ifs. I am basically checking whether the two arguments written on the command line are appropriate (the user needs to type them correctly); otherwise it should echo a specific error message. However, when I run a command with 100% correct arguments, I ALWAYS get the error message from the first conditional (even if I switch the conditional statements around). It seems that the script just runs the first echo and stops, no matter what. Please help; I understand it might be hard since my code is more of a skeleton.
if [ ... ]; then
    echo "blah"
elif [ ... ]; then
    echo "blah2"
else
    for file; do
        # change file to the 1st argument
    done
fi
I obviously need the last else to happen in order for my script to actually serve its intended purpose. However, my if-fy problem is getting in the way. The if and elif need to return false in order for the script to run for appropriate arguments. The if and elif check to see if the person typed in the command line correctly.
elif means else-if, so it will only be checked if the first statement returns false. If you want both conditions to be checked regardless, use two separate if statements:
if [ ... ]; then
    ...
fi
if [ ... ]; then
    ...
fi
When you care about checking both the first and second command line arguments for a single condition (i.e. they must both meet a set of criteria for the condition to be true), then you will need a compound test construct like:
if [ "$1" = somestring -a "$2" = somethingelse ]; then
do whatever
fi
which can also be written
if [ "$1" = somestring ] && [ "$2" = somethingelse ]; then
...
Note: the [ .... -a .... ] syntax is still supported, but it is recommended to use the [ .... ] && [ .... ] syntax for new development.
You can also vary the way they are tested (either true/false) by using -o for an OR condition, or || in the second form, as shown below. You can further vary your test using different test expressions (e.g. =, !=, -gt, etc.).
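For example, the OR equivalents of the test above would look like this (same placeholder strings as before):
if [ "$1" = somestring -o "$2" = somethingelse ]; then
    do whatever
fi
which, in the preferred form, is
if [ "$1" = somestring ] || [ "$2" = somethingelse ]; then
    do whatever
fi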
I have a bash statement to test a command line argument. If the argument passed to the script is "clean", then the script removes all .o files. Otherwise, it builds a program. However, no matter what is passed (if anything), the script still thinks that the argument "clean" is being passed.
#!/bin/bash
if test "`whoami`" != "root" ; then
echo "You must be logged in as root to build (for loopback mounting)"
echo "Enter 'su' or 'sudo bash' to switch to root"
exit
fi
ARG=$1
if [ $ARG == "clean" ] ; then
echo ">>> cleaning up object files..."
rm -r src/*.o
echo ">>> done. "
echo ">>> Press enter to continue..."
read
else
#Builds program
fi
Answer for first version of question
In bash, spaces are important. Replace:
[ $ARG=="clean" ]
With:
[ "$ARG" = "clean" ]
bash interprets $ARG=="clean" as a single string. If a single string is placed in a test statement, test returns false if the string is empty and true if it is non-empty. $ARG=="clean" will never be empty, so [ $ARG=="clean" ] will always return true.
Second, $ARG should be quoted. Otherwise, if it is empty, the statement reduces to [ == "clean" ], which is an error ("unary operator expected").
Third, it is best practice to use lower or mixed case for your local variables. The system uses upper-case shell variables and you don't want to accidentally overwrite one of them.
Lastly, with [...], the POSIX operator for equal, in the string sense, is =. Bash will accept either = or == but = is more portable.
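Putting all of the above together, a corrected version of the test from the question would be (a sketch, keeping the rest of the script as-is):
arg=$1
if [ "$arg" = clean ]; then
    echo ">>> cleaning up object files..."
    rm -r src/*.o
fi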
First:
Every string must be double-quoted, or you will get an error when an argument is absent.
Second:
For strings use only = or != (not ==), along with the -n and -z operators.
Third:
You may combine conditions with -a and -o, but if you group conditions the parentheses must be escaped as \( and \), or you will get an error. The operators follow the usual precedence: -a binds more tightly than -o. For example
[ -n "$1" -a "$1" = '-h' -o "$1" = '--help' ] && { usage; exit 0; }
will work when the script is passed at least one argument and it is -h or --help. All the spaces are required! Note that $1 is quoted in every test: with the quotes in place the expression stays valid even when no argument was passed at all. If your argument may contain spaces you must double-quote it on the command line as well; otherwise you get an error in the script, or your argument is split into two or more parts.
The == operator is not used by test. For numbers (not strings) use the -eq and -ne operators. See man 1 test for the full descriptions. test EXPRESSION... is equivalent to [ EXPRESSION... ], the latter being merely a shorter form of test.
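To illustrate the difference between string and numeric comparison (a small example you can paste into a shell):
$ [ 5 -eq 05 ] && echo "numerically equal"
numerically equal
$ [ 5 = 05 ] && echo "string equal"
$
-eq compares the values as integers, while = compares the literal strings, so only the first test succeeds.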
I wrote the following shell script, just to see if I understand the syntax to use if statements:
if 0; then
echo yes
fi
This doesn't work. It yields the error
./iffin: line 1: 0: command not found
what am I doing wrong?
use
if true; then
echo yes
fi
if expects the return code from a command. 0 is not a command. true is a command.
The bash manual doesn't say much on the subject, but here it is:
http://www.gnu.org/software/bash/manual/bashref.html#Conditional-Constructs
You may want to look into the test command for more complex conditional logic.
if test foo = foo; then
echo yes
fi
AKA
if [ foo = foo ]; then
echo yes
fi
To test for numbers being non-zero, use the arithmetic expression:
if (( 0 )) ; then
echo Never echoed
else
echo Always echoed
fi
It makes more sense to use variables than literal numbers, though:
count_lines=$( wc -l < input.txt )
if (( count_lines )) ; then
echo File has $count_lines lines.
fi
Well, from the bash man page:
if list; then list; [ elif list; then list; ] ... [ else list; ] fi
The if list is executed. If its exit status is zero, the then list is executed.
Otherwise, each elif list is executed in turn, and if its exit status is zero,
the corresponding then list is executed and the command completes.
Otherwise, the else list is executed, if present.
The exit status is the exit status of the last command executed,
or zero if no condition tested true.
Which means that the argument to if gets executed and its return code checked; so in your example you're trying to execute a command named 0, which does not exist.
What do exist are the commands true, false and test, the last of which is also available under the name [. It lets you write more complex conditions for your ifs. Read man test for more info.
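Since if simply runs a command and branches on its exit status, any command can serve as the condition. For example, grep -q prints nothing and reports only via its exit status:
if grep -q root /etc/passwd; then
    echo "root is present"
fi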
I searched for noop in bash (:), but was not able to find any good information. What is the exact purpose or use case of this operator?
I tried the following, and this is how it behaves for me:
[mandy#root]$ a=10
[mandy#root]$ b=20
[mandy#root]$ c=30
[mandy#root]$ echo $a; : echo $b ; echo $c
10
30
Please let me know of any real-world use case for this operator, or any place where it is mandatory to use it.
It's there more for historical reasons. The colon builtin : is exactly equivalent to true. It's traditional to use true when the return value is important, for example in an infinite loop:
while true; do
echo 'Going on forever'
done
It's traditional to use : when the shell syntax requires a command but you have nothing to do.
while keep_waiting; do
: # busy-wait
done
The : builtin dates all the way back to the Thompson shell; it was present in Unix v6. : was the label indicator for the Thompson shell's goto statement. The label could be any text, so : doubled as a comment indicator: if no goto targets the label, then a line like : comment is effectively a comment. The Bourne shell didn't have goto but kept :.
A common idiom that uses : is : ${var=VALUE}, which sets var to VALUE if it was unset and does nothing if var was already set. This construct only exists in the form of a variable substitution, and this variable substitution needs to be part of a command somehow: a no-op command serves nicely.
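A quick illustration of that idiom (the variable name is arbitrary):
unset greeting
: ${greeting=hello}
echo "$greeting"    # prints: hello
greeting=hi
: ${greeting=hello}
echo "$greeting"    # prints: hi (already set, left unchanged)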
See also What purpose does the colon builtin serve?.
I use it for if statements when I comment out all the code. For example you have a test:
if [ "$foo" != "1" ]
then
echo Success
fi
but you want to temporarily comment out everything contained within:
if [ "$foo" != "1" ]
then
#echo Success
fi
Which causes bash to give a syntax error:
line 4: syntax error near unexpected token `fi'
line 4: `fi'
Bash can't have empty blocks (WTF). So you add a no-op:
if [ "$foo" != "1" ]
then
#echo Success
:
fi
or you can use the no-op to comment out the lines:
if [ "$foo" != "1" ]
then
: echo Success
fi
If you use set -e then || : is a great way to keep the script from exiting when a command fails (it explicitly makes the whole line succeed).
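For example (a sketch; haystack.txt is a placeholder file):
set -e
grep needle haystack.txt || :   # even if grep fails, this line succeeds
echo "still running"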
You would use : to supply a command that succeeds but doesn't do anything. In this example the "verbosity" command is turned off by default, by setting it to :. The 'v' option turns it on.
#!/bin/sh
# example
verbosity=:
while getopts v OPT ; do
case $OPT in
v)
verbosity=/bin/realpath
;;
*)
echo "Cancelled" >&2; exit 1
;;
esac
done
# `$verbosity` always succeeds by default, but does nothing.
for i in * ; do
echo $i $($verbosity $i)
done
$ example
file
$ example -v
file /home/me/file
One use is as multiline comments, or to comment out part of your code for testing purposes by using it in conjunction with a here file.
: << 'EOF'
This part of the script is a commented out
EOF
Don't forget to use quotes around EOF so that any code inside doesn't get evaluated, like $(foo). It also might be worth using an intuitive terminator name like NOTES, SCRATCHPAD, or TODO.
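To see why the quotes matter: with an unquoted delimiter the here-document is still expanded, so any command substitution inside it actually runs (its output is fed to : and discarded, but its side effects are real):
: << EOF
today is $(date)
EOF
Here date really runs, because EOF is unquoted; with : << 'EOF' the body would be taken literally.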
Ignoring alias arguments
Sometimes you want an alias that ignores any arguments passed to it. You can do that by ending the alias with the no-op:
> alias alert_with_args='echo hello there'
> alias alert='echo hello there;:'
> alert_with_args blabla
hello there blabla
> alert blabla
hello there
Two of mine.
Embed POD comments
A quite funky application of : is for embedding POD comments in bash scripts, so that man pages can be quickly generated. Of course, one would eventually rewrite the whole script in Perl ;-)
Run-time function binding
This is a sort of code pattern for binding functions at run-time.
F.i., have a debugging function to do something only if a certain flag is set:
#!/bin/bash
# noop-demo.sh
shopt -s expand_aliases
dbg=${DBG:-''}
function _log_dbg {
echo >&2 "[DBG] $@"
}
log_dbg_hook=':'
[ "$dbg" ] && log_dbg_hook='_log_dbg'
alias log_dbg=$log_dbg_hook
echo "Testing noop alias..."
log_dbg 'foo' 'bar'
You get:
$ ./noop-demo.sh
Testing noop alias...
$ DBG=1 ./noop-demo.sh
Testing noop alias...
[DBG] foo bar
Somewhat related to this answer, I find this no-op rather convenient for hacking polyglot scripts. For example, here is a valid comment both for bash and for vimscript:
":" # this is a comment
":" # in bash, ‘:’ is a no-op and ‘#’ starts a comment line
":" # in vimscript, ‘"’ starts a comment line
Sure, we may have used true just as well, but : being a punctuation sign and not an irrelevant English word makes it clear that it is a syntax token.
As for why would someone do such a tricky thing as writing a polyglot script (besides it being cool): it proves helpful in situations where we would normally write several script files in several different languages, with file X referring to file Y.
In such a situation, combining both scripts in a single, polyglot file avoids any work in X for determining the path to Y (it is simply "$0"). More importantly, it makes it more convenient to move around or distribute the program.
A common example. There is a well-known, long-standing issue with shebangs: most systems (including Linux and Cygwin) allow only one argument to be passed to the interpreter. The following shebang:
#!/usr/bin/env interpreter --load-libA --load-libB
will fire the following command:
/usr/bin/env "interpreter --load-libA --load-libB" "/path/to/script"
and not the intended:
/usr/bin/env interpreter --load-libA --load-libB "/path/to/script"
Thus, you would end up writing a wrapper script, such as:
#!/usr/bin/env sh
/usr/bin/env interpreter --load-libA --load-libB "/path/to/script"
This is where polyglossia enters the stage.
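A minimal sketch of the trick, reusing the hypothetical interpreter and options from the shebang example above (this assumes a target language that, like vimscript, treats a line starting with " as a comment, and that tolerates the shebang line):
#!/bin/sh
":" ; exec /usr/bin/env interpreter --load-libA --load-libB "$0" "$@"
":" # sh runs the no-op above and execs the real interpreter on this very file;
":" # the interpreter in turn sees these lines as comments and runs what follows.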
A more specific example. I once wrote a bash script which, among other things, invoked Vim. I needed to give Vim additional setup, which could be done with the option --cmd "arbitrary vimscript command here". However, that setup was substantial, so that inlining it in a string would have been terrible (if ever possible). Hence, a better solution was to write it in extenso in some configuration file, then make Vim read that file with -S "/path/to/file". Hence I ended up with a polyglot bash/vimscript file.
Suppose you have a command you wish to chain to the success of another:
cmd="some command..."
$cmd
[ $? -eq 0 ] && some-other-command
but now you want to execute the commands conditionally and you want to show the commands that would be executed (dry-run):
cmd="some command..."
[ ! -z "$DEBUG" ] && echo $cmd
[ -z "$NOEXEC" ] && $cmd
[ $? -eq 0 ] && {
cmd="some-other-command"
[ ! -z "$DEBUG" ] && echo $cmd
[ -z "$NOEXEC" ] && $cmd
}
So if you set DEBUG and NOEXEC, the second command never shows up. This is because the first command never executes (since NOEXEC is not empty), but the evaluation of that fact leaves you with a return status of 1, which means the subordinate command never executes either (even though you want it to, because it's a dry run). To fix this, you can reset the exit status left on the stack with a no-op:
[ -z "$NOEXEC" ] && $cmd || :
Sometimes no-op clauses can make your code more readable.
That can be a matter of opinion, but here's an example. Let's suppose you've created a function that works by taking two unix paths. It calculates the 'change path' needed to cd from one path to another. You place a restriction on your function that the paths must both start with a '/' OR both must not.
function chgpath() {
# toC, fromC are the first characters of the argument paths.
if [[ "$toC" == / && "$fromC" == / ]] || [[ "$toC" != / && "$fromC" != / ]]
then
true # continue with function
else
return 1 # Skip function.
fi
Some developers will want to remove the no-op but that would mean negating the conditional:
function chgpath() {
# toC, fromC are the first characters of the argument paths.
if [[ "$toC" != / || "$fromC" != / ]] && [[ "$toC" == / || "$fromC" == / ]]
then
return 1 # Skip function.
fi
Now, in my opinion, it's not so clear from the if-clause under which conditions you'd want to skip the function. To eliminate the no-op and do it clearly, you would want to move the if-clause out of the function:
if [[ "$toC" == / && "$fromC" == / ]] || [[ "$toC" != / && "$fromC" != / ]]
then
    cdPath=$(chgpath pathA pathB) # (we moved the conditional outside)
fi
That looks better, but many times we can't do this; we want the check to be done inside the function.
So how often does this happen? Not very often. Maybe once or twice a year. It happens often enough that you should be aware of it. I don't shy away from using it when I think it improves the readability of my code (regardless of the language).
I've also used it in scripts to define default variables.
: ${VARIABLE1:=my_default_value}
: ${VARIABLE2:=other_default_value}
call-my-script ${VARIABLE1} ${VARIABLE2}
I sometimes use it on Docker files to keep RUN commands aligned, as in:
RUN : \
&& somecommand1 \
&& somecommand2 \
&& somecommand3
For me, it reads better than:
RUN somecommand1 \
&& somecommand2 \
&& somecommand3
But this is just a matter of preference, of course.
The null command : is actually considered a synonym for the shell builtin true. The ":" command is itself a Bash builtin, and its exit status is true (0).
$ :
$ echo $? # 0
while :
do
operation-1
operation-2
...
operation-n
done
# Same as:
while true
do
...
done
Placeholder in if/then test:
if condition
then : # Do nothing and branch ahead
else # Or else ...
take-some-action
fi
$ : ${username=`whoami`}
$ ${username=`whoami`} #Gives an error without the leading :
Source: TLDP
I used the no-op today when I had to create a mock sleep function to use in the bats testing framework. This allowed me to create an empty function with no side effects:
function sleep() {
:
}
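Because shell functions take precedence over programs found via PATH, any later code that calls sleep hits the stub instead of /bin/sleep. A quick sanity check:
sleep() { :; }
time sleep 5    # completes immediately instead of pausing 5 seconds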