I encountered "unary operator expected" in a Bash script - bash

In my Bash script, I have a function that returns 0 or 1 (true or false), to be used later as a condition in the main function.
function1 () {
if [[ "${1}" =~ "^ ...some regexp... $" ]] ; then
return 1
else
return 0
fi
}
Then in my main function:
main () {
for arg in ${#} ; do
if [ function1 ${arg} ] ; then
...
elif [ ... ] ; then
...
fi
done
}
However, when I ran this script it always gave me an error message:
[: function1: unary operator expected
How can I fix this?

You are making the common mistake of assuming that [ is part of the if command's syntax. It is not; the syntax of if is simply
if command; then
... things which should happen if command's result code was 0
else
... things which should happen otherwise
fi
One of the common commands we use is [, which is an alias for the command test. It is a simple command for comparing strings, numbers, and files. It accepts a fairly narrow combination of arguments, and tends to generate confusing and misleading error messages if you don't pass it the arguments it expects. (Or rather, the error messages are adequate and helpful once you get used to them, but they are easily misunderstood if you're not.)
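To see that if really just runs a command and branches on its exit status, you can put any command in that position; the grep call below is only an arbitrary stand-in for "some command whose exit status you care about":
if grep -q root /etc/passwd ; then
    echo "there is a root entry"
fi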
In your main function, the call to [ appears misplaced.  You probably mean
if function "$arg"; then
...
elif ... ; then ...
By the way, for good measure, you should also always quote your strings. Use "$1" not $1, and "$arg" instead of $arg.
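To see why the quoting matters, consider what test receives when a variable happens to be empty (num here is just a throwaway example); the unquoted form is exactly how the "unary operator expected" message usually comes about:
num=""
[ $num -eq 1 ]     # expands to:  [ -eq 1 ]     -> "unary operator expected"
[ "$num" -eq 1 ]   # expands to:  [ "" -eq 1 ]  -> a clearer "integer expression expected"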
The historical design of test as a general kitchen sink for everything the authors didn't want to make part of the syntax of if is one of the less attractive aspects of the original Bourne shell. Bash and zsh offer alternatives which are less unwieldy (like the [[ double brackets in bash, which you use in your function1 definition), and of course, POSIX test is a lot more well-tempered than the original creation from Bell Labs.
As an additional clarification, your function can be simplified to just
function1 () {
! [[ "$1" =~ "^ ...some regexp... $" ]]
}
That is, perform the test with [[ and reverse its result code with !. (The "normal" convention would be to return 0 for success, but perhaps you are deliberately checking that the string does not match?) Note also that in Bash 3.2 and later, quoting the right-hand side of =~ turns it into a literal string comparison, so the regular expression should be left unquoted.
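Putting the pieces together, a minimal sketch of the whole script could look like this. The regular expression and the echo bodies are placeholders, and note that the loop should iterate over "$@" (the arguments themselves), not ${#} (which is only the number of arguments):
function1 () {
    # succeed (return 0) when $1 does NOT match the pattern
    ! [[ "$1" =~ ^somepattern$ ]]    # placeholder regex; leave it unquoted
}

main () {
    for arg in "$@" ; do
        if function1 "$arg" ; then
            echo "no match: $arg"    # placeholder action
        else
            echo "match: $arg"       # placeholder action
        fi
    done
}

main "$@"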

Related

Bash Script with if, elif, else

Okay, so this is an assignment, so I will not put the exact script here, but I am really desperate at this point because I cannot figure out something as basic as ifs. I am basically checking whether the two arguments given on the command line are appropriate (the user needs to type them correctly), or else it will echo a specific error message. However, when I put in a command with 100% correct arguments, I ALWAYS get the error echo message from the first conditional (even if I switch the conditional statements around). It seems that the script just runs the first echo and stops no matter what. Please help, and I understand it might be hard since my code is more of a skeleton.
if [ ... ]; then
echo "blah"
elif [ ... ]; then
echo "blah2"
else for file; do
#change file to the 1st argument
done
fi
I obviously need the last else to happen in order for my script to actually serve its intended purpose. However, my if-fy problem is getting in the way. The if and elif need to return false in order for the script to run for appropriate arguments. The if and elif check to see if the person typed in the command line correctly.
elif means else-if, so it will only be checked if the first statement returns false. If you want both conditions to be checked, use two separate if statements:
if [ ... ]; then
...
fi
if [ ... ]; then
...
fi
When you need a single condition that checks both the first and second command-line arguments (i.e. they must both meet their criteria for the condition to be true), you will need a compound test construct like:
if [ "$1" = somestring -a "$2" = somethingelse ]; then
do whatever
fi
which can also be written
if [ "$1" = somestring ] && [ "$2" = somethingelse ]; then
...
Note: the [ .... -a .... ] syntax is still supported, but it is recommended to use the [ .... ] && [ .... ] syntax for new development.
You can also vary the way they are tested (either true/false) by using -o for an OR condition or || in the second form. You can further vary your test using different test expressions (i.e. =, !=, -gt, etc..)
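As an illustration (the argument values, the file test and the messages are made up for the example), a script that validates its two command-line arguments with compound tests could look like this:
#!/bin/sh
if [ "$1" = start ] && [ -r "$2" ]; then
    echo "starting with config file $2"
elif [ "$1" = stop ] || [ "$1" = restart ]; then
    echo "got control command: $1"
else
    echo "usage: $0 start|stop|restart [config-file]" >&2
    exit 1
fi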

How to have function with parameters inside bash test [

Suppose I have a function foo which takes two parameters and returns true or false. I would like to use it within the test [ command in bash, something like this:
param1=one
param2=two
if [ foo $param1 $param2 ]; then
do something
fi
I know this works without using [], like this:
if foo $param1 $param2; then .....
But I want to use it within [].
If you want to test whether the exit status of your command indicates success or failure, leave off the brackets entirely; using the test command in that use case is wrong, full-stop.
In this case,
if foo "$param1" "$param2"; then ...; fi
is the only correct syntax that isn't needlessly inefficient.
It's certainly possible to emit $? inside your subprocess, and check the output of that using test, like so:
if [ "$(foo "$param1" "$param2" >&2; echo "$?")" -eq 2 ]; then ...; fi
...but see again re: "needlessly inefficient"; this would be better written as:
foo "$param1" "$param2"; retval=$?
if [ $retval -eq 2 ]; then ...; fi
...avoiding all the overhead of an unnecessary subshell.
If you want to test whether the output of your command is non-empty (typically an undesirable / unnecessary operation, as most standard UNIX commands can indicate whether they operated correctly, whether they found what they were searching for, etc. via exit status):
if [ -n "$(foo "$param1" "$param2")" ]; then ...; fi
is correct.
One important thing to keep in mind -- any use of $() moves your function from running inside of the parent shell (and thus able to update or modify that shell's state) into a subshell. So -- if your function is supposed to change variables or state inside the shell, using $() will modify its behavior.
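A small demonstration of that last point (the function and variable names are made up):
counter=0
bump () { counter=$((counter + 1)); echo "$counter"; }

bump                        # runs in the current shell: prints 1, counter is now 1
result=$(bump)              # runs in a subshell: the output "2" is captured in $result...
echo "counter is $counter"  # ...but the parent shell still sees counter=1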
test or [ ] evaluates expressions, not the exit status of a function.
To evaluate the exit status of a function / command, use if in the way you're using it:
if foo "$param1" "$param2"; then .....

Why don't I get the return of my function?

At the shell prompt, I type echo $(max 15 2) but don't get any answer. Why is that?
Why is it so?
Code:
function max {
if [ "$1" -eq "$2" ]
then
return $1
else
if [ "$1" -gt "$2" ]
then
return $1
else
return $2
fi
fi
}
Replace return with echo and your code works fine.
From the comments:
The $(...) syntax is specifically designed to give you the output of a command, even if that command happens to be a function call. return in a function is similar to exit for the script as a whole; it sets its status, which is an integer in the range 0 to 255. (This is quite different from other languages you might be used to, where return is used to return a value from a function.) – Keith Thompson
Bash functions are not like functions in other languages. They behave the same as any other command: they can take command line arguments, read from standard input, write to standard output and standard error, and return with an exit status. They don't--strictly speaking--return a computed value. – chepner
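So a version of max that cooperates with echo $(max 15 2) simply writes its answer to standard output; a minimal sketch (folding the -eq and -gt branches into a single -ge test):
max () {
    if [ "$1" -ge "$2" ]; then
        echo "$1"
    else
        echo "$2"
    fi
}

biggest=$(max 15 2)   # command substitution captures the echoed value
echo "$biggest"       # prints 15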

What is the use case of noop [:] in bash?

I searched for noop in bash (:), but was not able to find any good information. What is the exact purpose or use case of this operator?
I tried the following, and it works like this for me:
[mandy#root]$ a=11
[mandy#root]$ b=20
[mandy#root]$ c=30
[mandy#root]$ echo $a; : echo $b ; echo $c
11
30
Please let me know any real use case of this operator, or any place where it is mandatory to use it.
It's there more for historical reasons. The colon builtin : is exactly equivalent to true. It's traditional to use true when the return value is important, for example in an infinite loop:
while true; do
echo 'Going on forever'
done
It's traditional to use : when the shell syntax requires a command but you have nothing to do.
while keep_waiting; do
: # busy-wait
done
The : builtin dates all the way back to the Thompson shell; it was present in Unix v6. In the Thompson shell, : was a label indicator for the goto statement. The label could be any text, so : doubled as a kind of comment indicator (if nothing ever does goto comment, then the line : comment is effectively a comment). The Bourne shell didn't have goto but kept :.
A common idiom that uses : is : ${var=VALUE}, which sets var to VALUE if it was unset and does nothing if var was already set. This construct only exists in the form of a variable substitution, and this variable substitution needs to be part of a command somehow: a no-op command serves nicely.
See also What purpose does the colon builtin serve?.
I use it for if statements when I comment out all the code. For example you have a test:
if [ "$foo" != "1" ]
then
echo Success
fi
but you want to temporarily comment out everything contained within:
if [ "$foo" != "1" ]
then
#echo Success
fi
Which causes bash to give a syntax error:
line 4: syntax error near unexpected token `fi'
line 4: `fi'
Bash can't have empty blocks (WTF). So you add a no-op:
if [ "$foo" != "1" ]
then
#echo Success
:
fi
or you can use the no-op to comment out the lines:
if [ "$foo" != "1" ]
then
: echo Success
fi
If you use set -e, then || : is a great way to keep the script from exiting when a command fails (it explicitly turns the failure into success).
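A small sketch of that pattern (the grep call and file name are arbitrary examples):
set -e
grep -q "needle" haystack.txt || :   # grep may exit non-zero; || : keeps set -e from aborting
echo "still running"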
You would use : to supply a command that succeeds but doesn't do anything. In this example the "verbosity" command is turned off by default, by setting it to :. The 'v' option turns it on.
#!/bin/sh
# example
verbosity=:
while getopts v OPT ; do
case $OPT in
v)
verbosity=/bin/realpath
;;
*)
exit "Cancelled"
;;
esac
done
# `$verbosity` always succeeds by default, but does nothing.
for i in * ; do
echo $i $($verbosity $i)
done
$ example
file
$ example -v
file /home/me/file
One use is as multiline comments, or to comment out part of your code for testing purposes by using it in conjunction with a here file.
: << 'EOF'
This part of the script is commented out
EOF
Don't forget to use quotes around EOF so that any code inside doesn't get evaluated, like $(foo). It also might be worth using an intuitive terminator name like NOTES, SCRATCHPAD, or TODO.
Ignoring alias arguments
Sometimes you want to have an alias that ignores any arguments passed to it. You can do it using the : builtin:
> alias alert_with_args='echo hello there'
> alias alert='echo hello there;:'
> alert_with_args blabla
hello there blabla
> alert blabla
hello there
Two of mine.
Embed POD comments
A quite funky application of : is for embedding POD comments in bash scripts, so that man pages can be quickly generated. Of course, one would eventually rewrite the whole script in Perl ;-)
Run-time function binding
This is a sort of code pattern for binding functions at run-time.
For instance, have a debugging function do something only if a certain flag is set:
#!/bin/bash
# noop-demo.sh
shopt -s expand_aliases
dbg=${DBG:-''}
function _log_dbg {
echo >&2 "[DBG] $#"
}
log_dbg_hook=':'
[ "$dbg" ] && log_dbg_hook='_log_dbg'
alias log_dbg=$log_dbg_hook
echo "Testing noop alias..."
log_dbg 'foo' 'bar'
You get:
$ ./noop-demo.sh
Testing noop alias...
$ DBG=1 ./noop-demo.sh
Testing noop alias...
[DBG] foo bar
Somewhat related to this answer, I find this no-op rather convenient to hack polyglot scripts. For example, here is a valid comment both for bash and for vimscript:
":" # this is a comment
":" # in bash, ‘:’ is a no-op and ‘#’ starts a comment line
":" # in vimscript, ‘"’ starts a comment line
Sure, we could have used true just as well, but : being a punctuation sign and not an irrelevant English word makes it clear that it is a syntax token.
As for why would someone do such a tricky thing as writing a polyglot script (besides it being cool): it proves helpful in situations where we would normally write several script files in several different languages, with file X referring to file Y.
In such a situation, combining both scripts in a single, polyglot file avoids any work in X for determining the path to Y (it is simply "$0"). More importantly, it makes it more convenient to move around or distribute the program.
A common example. There is a well-known, long-standing issue with shebangs: most systems (including Linux and Cygwin) allow only one argument to be passed to the interpreter. The following shebang:
#!/usr/bin/env interpreter --load-libA --load-libB
will fire the following command:
/usr/bin/env "interpreter --load-libA --load-libB" "/path/to/script"
and not the intended:
/usr/bin/env interpreter --load-libA --load-libB "/path/to/script"
Thus, you would end up writing a wrapper script, such as:
#!/usr/bin/env sh
/usr/bin/env interpreter --load-libA --load-libB "/path/to/script"
This is where polyglossia enters the stage.
A more specific example. I once wrote a bash script which, among other things, invoked Vim. I needed to give Vim additional setup, which could be done with the option --cmd "arbitrary vimscript command here". However, that setup was substantial, so that inlining it in a string would have been terrible (if ever possible). Hence, a better solution was to write it in extenso in some configuration file, then make Vim read that file with -S "/path/to/file". Hence I ended up with a polyglot bash/vimscript file.
Suppose you have a command you wish to chain to the success of another:
cmd="some command..."
$cmd
[ $? -eq 0 ] && some-other-command
But now you want to execute the commands conditionally, and you want to show the commands that would be executed (a dry run):
cmd="some command..."
[ ! -z "$DEBUG" ] && echo $cmd
[ -z "$NOEXEC" ] && $cmd
[ $? -eq 0 ] && {
cmd="some-other-command"
[ ! -z "$DEBUG" ] && echo $cmd
[ -z "$NOEXEC" ] && $cmd
}
So if you set DEBUG and NOEXEC, the second command never shows up. This is because the first command never executes (since NOEXEC is non-empty), but evaluating that fact leaves a return status of 1, which means the dependent command never executes either (and you want it to, because it's a dry run). To fix this, you can reset the exit status with a no-op:
[ -z "$NOEXEC" ] && $cmd || :
Sometimes no-op clauses can make your code more readable.
That can be a matter of opinion, but here's an example. Let's suppose you've created a function that works by taking two unix paths. It calculates the 'change path' needed to cd from one path to another. You place a restriction on your function that the paths must both start with a '/' OR both must not.
function chgpath() {
# toC, fromC are the first characters of the argument paths.
if [[ "$toC" == / && "$fromC" == / ]] || [[ "$toC" != / && "$fromC" != / ]]
then
true # continue with function
else
return 1 # Skip function.
fi
Some developers will want to remove the no-op but that would mean negating the conditional:
function chgpath() {
# toC, fromC are the first characters of the argument paths.
if [[ "$toC" != / || "$fromC" == / ]] && [[ "$toC" == / || "$fromC" != / ]]
then
return 1 # Skip function.
fi
Now, in my opinion, it's not so clear from the if clause under which conditions you'd want to skip doing the function. To eliminate the no-op and do it clearly, you would want to move the if clause out of the function:
if [[ "$toC" == / && "$fromC" == / ]] || [[ "$toC" != / && "$fromC" != / ]]
then
cdPath=$(chgpath pathA pathB) # (we moved the conditional outside)
That looks better, but many times we can't do this; we want the check to be done inside the function.
So how often does this happen? Not very often. Maybe once or twice a year. It happens often enough, that you should be aware of it. I don't shy away from using it when I think it improves the readability of my code (regardless of the language).
I've also used it in scripts to define default variables.
: ${VARIABLE1:=my_default_value}
: ${VARIABLE2:=other_default_value}
call-my-script ${VARIABLE1} ${VARIABLE2}
I sometimes use it in Dockerfiles to keep RUN commands aligned, as in:
RUN : \
&& somecommand1 \
&& somecommand2 \
&& somecommand3
For me, it reads better than:
RUN somecommand1 \
&& somecommand2 \
&& somecommand3
But this is just a matter of preference, of course.
The null command : is actually considered a synonym for the shell builtin true. The : command is itself a Bash builtin, and its exit status is "true" (0).
$ :
$ echo $? # 0
while :
do
operation-1
operation-2
...
operation-n
done
# Same as:
while true
do
...
done
Placeholder in if/then test:
if condition
then : # Do nothing and branch ahead
else # Or else ...
take-some-action
fi
$ : ${username=`whoami`}
$ ${username=`whoami`} #Gives an error without the leading :
Source: TLDP
I used the noop today when I had to create a mock sleep function to use in the bats testing framework. This allowed me to create an empty function with no side effects:
function sleep() {
:
}

Operations on boolean variables

In this question it has been shown how to use neat boolean variables in bash. Is there a way of performing logic operations with such variables? E.g. how to get this:
var1=true
var2=false
# ...do something interesting...
if ! $var1 -a $var2; then <--- doesn't work correctly
echo "do sth"
fi
This does work:
if ! $var1 && $var2; then
echo "do sth"
fi
Maybe somebody can explain why the -a and -o operators don't work here, while &&, || and ! do?
Okay boys and girls, lesson time.
What's happening when you execute this line?
if true ; then echo 1 ; fi
What's happening here is that the if command is being executed. After that everything that happens is part of the if command.
What if does is it executes one or more commands (or rather, pipelines) and, if the return code from the last command executed was successful, it executes the commands after then until fi is reached. If the return code was not successful the then part is skipped and execution continues after fi.
if takes no switches; its behavior is not modifiable in any way.
In the example above the command I told if to execute was true. true is not syntax or a keyword, it's just another command. Try executing it by itself:
true
It will print nothing, but it sets its return code to 0 (aka "true"). You can see more clearly that it is a command by rewriting the above if statement like this:
if /bin/true ; then echo 1 ; fi
Which is entirely equivalent.
Always returning true is not very useful. It is typical to use if in conjunction with the test command. test is sometimes symlinked to, or otherwise known as, [. On your system you probably have a /bin/[ program, but if you're using bash, [ will be a builtin command. test is a more complex command than if, and you can read all about it with:
help [
man [
But for now let us say that test performs some tests according to the options you supply and returns with either a successful return code (0) or an unsuccessful one. This allows us to say
if [ 1 -lt 2 ] ; then echo one is less than two ; fi
But again, this is always true, so it's not very useful. It would be more useful if 1 and 2 were variables
read -p 'Enter first number: ' first
read -p 'Enter second number: ' second
echo first: $first
echo second: $second
if [ $first -lt $second ] ; then
echo $first is less than $second
fi
Now you can see that test is doing its job. Here we are passing test four arguments. The second argument is -lt which is a switch telling test that the first argument and third argument should be tested to see if the first argument is less than the third argument. The fourth argument does nothing but mark the end of the command; when calling test as [ the final argument must always be ].
Before the above if statement is executed the variables are evaluated. Suppose that I had entered 20 for first and 25 for second, after evaluation the script will look like this:
read -p 'Enter first number: ' first
read -p 'Enter second number: ' second
echo first: 20
echo second: 25
if [ 20 -lt 25 ] ; then
echo 20 is less than 25
fi
And now you can see that when test is executed it will be testing whether 20 is less than 25, which is true, so if will execute the then statement.
Bringing it back to the question at hand: What's going on here?
var1=true
var2=false
if ! $var1 -a $var2 ; then
echo $var1 and $var2 are both true
fi
When the if command is evaluated it will become
if ! true -a false ; then
This is instructing if to execute true, passing the arguments -a false to the true command. Now, true doesn't take any switches or arguments, but it also will not produce an error if you supply them anyway. This means that it will execute, return success, and the -a false part will be ignored. The ! will reverse the success into a failure, and the then part will not be executed.
If you were to replace the above with a version calling test it would still not work as desired:
var1=true
var2=false
if ! [ $var1 -a $var2 ] ; then
echo $var1 and $var2 are both true
fi
Because the if line would be evaluated to
if ! [ true -a false ] ; then
And test would see true not as a boolean keyword, and not as a command, but as a string. Since a non-empty string is treated as "true" by test it will always return success to if, even if you had said
if ! [ false -a yourmom ] ; then
Since both are non-empty strings, -a tests both as true and returns success, which is reversed by ! and passed to if, which does not execute the then statement.
If you replace the test version with this version
if ! $var1 && $var2 ; then
Then it will be evaluated in to
if ! true && false ; then
And will be processed like this: if executes true, which returns success; that success is reversed by !; because the result of the first command is now a failure, the && short-circuits and false never gets executed. Because the final result was a failure, failure is passed back to if, which does not execute the then clause.
I hope this is all clear.
It is perhaps worth pointing out that you can use constructs like this:
! false && true && echo 1
Which does not use if but still checks return codes, because that is what && and || are for.
There is kind of a black art to using test without making any mistakes. In general, when using bash, the newer [[ command should be used instead because it is more powerful and does away with lots of gotchas which must, for compatibility reasons, be kept in [.
Since the original poster did not supply a realistic example of what he's trying to accomplish it's hard to give any specific advice as to the best solution. Hopefully this has been sufficiently helpful that he can now figure out the correct thing to do.
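For what it's worth, here is a small sketch of the two approaches that do work with such "boolean" variables: running the values as commands (as in the question's working version), or comparing them as plain strings with [[:
var1=true
var2=false

# 1. Run the values as commands and combine their exit statuses:
if ! $var1 && $var2; then echo "do sth"; fi

# 2. Treat the values purely as strings and compare them with [[ :
if [[ $var1 != true && $var2 == true ]]; then echo "do sth"; fi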
You have mixed two different syntaxes here.
This will work:
if ! [ 1 -a 2 ]; then
echo "do sth"
fi
Note brackets around the expressions.
You need the test command ([ is another name for it) to use these operators (-a, -o and so on).
But test does not run commands itself.
If you want to check the exit codes of commands, you must not use test.

Resources