if statement using "sed" to set variable not working - bash

I'm trying to get an if statement to read the top line of a text file (tmp.txt), which has a 0 on its last line. The "then" commands basically go into a directory and run a series of commands for DNA sequence analysis before coming back up, removing the top line of tmp.txt, and moving on to the next directory listed in tmp.txt. Once it gets to the end of all the listed directories, the final line will just be a "0" (or perhaps "file-end"). The issue is that it's just not working and I can't figure out why. I've swapped the "then" and "else" commands to make testing a bit easier.
#!bin/bash/sh
value=`(sed -n 1p tmp.txt)`
if ($value -eq 0)
then
echo "I wish i could eat cheese again"
else
echo "theres still more barcodes left"
fi

Parentheses like that in bash create subshells to run the enclosed commands in. You don't need them at all for the sed command, and they aren't what you're looking for to test values in the if command. Rather than using backticks, the preferred way of running a command and storing its output nowadays is the $(...) syntax.
For arithmetic tests you can use double parentheses ((...)), or you can use [, a synonym for the test command, or bash's extended version [[...]]. With [ especially, a space after the bracket is essential, since you are actually running a command named [ in that case.
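For instance, all three of these forms test the same condition (a quick sketch; note that ((...)) and [[...]] are bash-specific and not available in plain sh):
value=0
if (( value == 0 )); then echo "arithmetic test"; fi
if [ "$value" -eq 0 ]; then echo "the test aka [ command"; fi
if [[ $value -eq 0 ]]; then echo "bash extended test"; fi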
Putting those things together we can update your snippet like so:
#!/bin/bash
value=$(sed -n 1p tmp.txt)
if [[ $value -eq 0 ]]
then
echo "I wish i could eat cheese again"
else
echo "theres still more barcodes left"
fi
(Also, I fixed the shebang line to point to /bin/bash, instead of an executable apparently named bin/bash/sh, which likely doesn't exist.)
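For completeness, a minimal sketch of the larger loop the question describes (process the directory named on the top line of tmp.txt, then delete that line, stopping at the 0 sentinel). This assumes GNU sed for the in-place delete and one directory name per line:
#!/bin/bash
while :; do
    value=$(sed -n 1p tmp.txt)
    if [[ -z $value || $value == 0 ]]; then
        echo "I wish i could eat cheese again"   # sentinel reached: no barcodes left
        break
    fi
    (
        cd "$value" || exit 1
        # ... run the DNA sequence analysis commands here ...
    )
    sed -i 1d tmp.txt    # remove the processed top line (GNU sed)
done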


What does the shellenv command do? [duplicate]

After reading the Bash man pages and with respect to this post, I am still having trouble understanding what exactly the eval command does and what its typical uses are.
For example, if we do:
$ set -- one two three # Sets $1 $2 $3
$ echo $1
one
$ n=1
$ echo ${$n} ## First attempt to echo $1 using braces fails
bash: ${$n}: bad substitution
$ echo $($n) ## Second attempt to echo $1 using parentheses fails
bash: 1: command not found
$ eval echo \${$n} ## Third attempt to echo $1 using 'eval' succeeds
one
What exactly is happening here and how do the dollar sign and the backslash tie into the problem?
eval takes a string as its argument, and evaluates it as if you'd typed that string on a command line. (If you pass several arguments, they are first joined with spaces between them.)
${$n} is a syntax error in bash. Inside the braces, you can only have a variable name, with some possible prefixes and suffixes, but you can't have arbitrary bash syntax and in particular you can't use variable expansion. There is a way of saying “the value of the variable whose name is in this variable”, though:
echo ${!n}
one
$(…) runs the command specified inside the parentheses in a subshell (i.e. in a separate process that inherits all settings such as variable values from the current shell), and gathers its output. So echo $($n) runs $n as a shell command, and displays its output. Since $n evaluates to 1, $($n) attempts to run the command 1, which does not exist.
eval echo \${$n} runs the parameters passed to eval. After expansion, the parameters are echo and ${1}. So eval echo \${$n} runs the command echo ${1}.
Note that most of the time, you must use double quotes around variable substitutions and command substitutions (i.e. anytime there's a $): "$foo", "$(foo)". Leave the double quotes off only when you know you need to. Without them, the shell performs field splitting (i.e. it splits the value of the variable or the output from the command into separate words) and then treats each word as a wildcard pattern. For example:
$ ls
file1 file2 otherfile
$ set -- 'f* *'
$ echo "$1"
f* *
$ echo $1
file1 file2 file1 file2 otherfile
$ n=1
$ eval echo \${$n}
file1 file2 file1 file2 otherfile
$ eval echo \"\${$n}\"
f* *
$ echo "${!n}"
f* *
eval is not used very often. In some shells, the most common use is to obtain the value of a variable whose name is not known until runtime. In bash, this is not necessary thanks to the ${!VAR} syntax. eval is still useful when you need to construct a longer command containing operators, reserved words, etc.
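For example, here is a small sketch (the file name out.txt is made up) where a pipeline and a redirection stored in a string only take effect when the string is run through eval:
cmd="printf '%s\n' c a b | sort > out.txt"
eval "$cmd"    # the | and > are parsed on eval's second pass
cat out.txt    # prints a, b, c on separate lines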
Simply think of eval as "evaluating your expression one additional time before execution"
eval echo \${$n} becomes echo $1 after the first round of evaluation. Three changes to notice:
The \$ became $ (the backslash is needed; otherwise bash tries to evaluate ${$n}, which means a variable named {$n}, which is not allowed)
$n was evaluated to 1
The eval disappeared
In the second round, it is basically echo $1 which can be directly executed.
So eval <some command> will first evaluate <some command> (by evaluate here I mean substitute variables, replace escaped characters with the correct ones etc.), and then run the resultant expression once again.
eval is used when you want to dynamically create variables, or to read outputs from programs specifically designed to be read like this. See Eval command and security issues for examples. The link also contains some typical ways in which eval is used, and the risks associated with it.
In my experience, a "typical" use of eval is for running commands that generate shell commands to set environment variables.
Perhaps you have a system that uses a collection of environment variables, and you have a script or program that determines which ones should be set and their values. Whenever you run a script or program, it runs in a forked process, so anything it does directly to environment variables is lost when it exits. But that script or program can send the export commands to standard output.
Without eval, you would need to redirect standard output to a temporary file, source the temporary file, and then delete it. With eval, you can just:
eval "$(script-or-program)"
Note the quotes are important. Take this (contrived) example:
# activate.sh
echo 'I got activated!'
# test.py
print("export foo=bar/baz/womp")
print(". activate.sh")
$ eval $(python test.py)
bash: export: `.': not a valid identifier
bash: export: `activate.sh': not a valid identifier
$ eval "$(python test.py)"
I got activated!
The eval statement tells the shell to take eval's arguments as commands and run them through the command line. It is useful in a situation like the one below:
If, in your script, you define a command in a variable and later on you want to use that command, you should use eval:
a="ls | more"
$a
Output:
bash: command not found: ls | more
The above command didn't work, as ls tried to list files named | (pipe) and more, but those files are not there:
eval $a
Output:
file.txt
mailids
remote_cmd.sh
sample.txt
tmp
Update: Some people say one should never use eval. I disagree: I think the risk arises when corrupt input can be passed to eval. However, there are many common situations where that is not a risk, so it is worth knowing how to use eval in any case. This Stack Overflow answer explains the risks of eval and alternatives to it. Ultimately it is up to the user to determine if/when eval is safe and efficient to use.
The bash eval statement allows you to execute lines of code calculated or acquired, by your bash script.
Perhaps the most straightforward example would be a bash program that opens another bash script as a text file, reads each line of text, and uses eval to execute them in order. That's essentially the same behavior as the bash source statement, which is what one would use, unless it was necessary to perform some kind of transformation (e.g. filtering or substitution) on the content of the imported script.
I rarely have needed eval, but I have found it useful to read or write variables whose names were contained in strings assigned to other variables. For example, to perform actions on sets of variables, while keeping the code footprint small and avoiding redundancy.
eval is conceptually simple. However, the strict syntax of the bash language, and the bash interpreter's parsing order can be nuanced and make eval appear cryptic and difficult to use or understand. Here are the essentials:
The argument passed to eval is a string expression that is calculated at runtime. eval will execute the final parsed result of its argument as an actual line of code in your script.
Syntax and parsing order are stringent. If the result isn't an executable line of bash code, in scope of your script, the program will crash on the eval statement as it tries to execute garbage.
When testing you can replace the eval statement with echo and look at what is displayed. If it is legitimate code in the current context, running it through eval will work.
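For example, a quick sketch of that echo test with a trivial assignment:
cmd='myvar=hello'
echo "$cmd"     # displays: myvar=hello  (legitimate code, so eval is safe here)
eval "$cmd"     # executes it
echo "$myvar"   # prints: hello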
The following examples may help clarify how eval works...
Example 1:
eval statement in front of 'normal' code is a NOP
$ eval a=b
$ eval echo $a
b
In the above example, the first eval statement has no purpose and can be eliminated: there is no dynamic aspect to the code, i.e. it already parses into the final line of bash code, so it behaves identically to a normal statement in the script. The second eval is pointless too: although there is a parsing step converting $a to its literal string equivalent, there is no indirection (e.g. no referencing, via a string value, of an actual bash variable), so it behaves identically to the same line of code without the eval prefix.
Example 2:
Perform var assignment using var names passed as string values.
$ key="mykey"
$ val="myval"
$ eval $key=$val
$ echo $mykey
myval
If you were to echo $key=$val, the output would be:
mykey=myval
That, being the final result of string parsing, is what will be executed by eval, hence the result of the echo statement at the end...
Example 3:
Adding more indirection to Example 2
$ keyA="keyB"
$ valA="valB"
$ keyB="that"
$ valB="amazing"
$ eval eval \$$keyA=\$$valA
$ echo $that
amazing
The above is a bit more complicated than the previous example, relying more heavily on the parsing order and peculiarities of bash. The eval line would roughly get parsed internally in the following order (note that the following statements are pseudocode, not real code, and only attempt to show how the statement is broken down into steps internally to arrive at the final result).
eval eval \$$keyA=\$$valA # substitution of $keyA and $valA by interpreter
eval eval \$keyB=\$valB # convert '$' + name-strings to real vars by eval
eval $keyB=$valB # substitution of $keyB and $valB by interpreter
eval that=amazing # execute string literal 'that=amazing' by eval
If the assumed parsing order doesn't sufficiently explain what eval is doing, the fourth example may describe the parsing in more detail to help clarify what is going on.
Example 4:
Discover whether vars, whose names are contained in strings, themselves contain string values.
a="User-provided"
b="Another user-provided optional value"
c=""
myvarname_a="a"
myvarname_b="b"
myvarname_c="c"
for varname in "$myvarname_a" "$myvarname_b" "$myvarname_c"; do
eval varval=\$$varname
if [ -z "$varval" ]; then
read -p "$varname? " $varname
fi
done
In the first iteration, the for-loop resolves "$myvarname_a" to its value, so:
varname="a"
Bash then parses the argument to eval, and what eval literally sees at runtime is:
varval=$a
The following pseudocode attempts to illustrate how bash interprets the line of real code to arrive at the final value executed by eval (the following lines are descriptive, not exact bash code):
1. eval varval="\$" + "$varname" # This substitution resolved in eval statement
2. .................. "$myvarname_a" # $myvarname_a previously resolved by for-loop
3. .................. "a" # ... to this value
4. eval "varval=$a" # This requires one more parsing step
5. eval varval="User-provided" # Final result of parsing (eval executes this)
Once all the parsing is done, the result is what is executed, and its effect is obvious, demonstrating there is nothing particularly mysterious about eval itself, and the complexity is in the parsing of its argument.
varval="User-provided"
The remaining code in the example above simply tests to see if the value assigned to $varval is null, and, if so, prompts the user to provide a value.
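As noted earlier in the thread, bash's ${!var} indirection covers this particular pattern without eval; a sketch of the loop body under that substitution:
varval=${!varname}    # same effect as: eval varval=\$$varname
if [ -z "$varval" ]; then
    read -p "$varname? " "$varname"
fi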
I intentionally never learned how to use eval, because most people recommend staying away from it like the plague. However, I recently discovered a use case that made me facepalm for not recognizing it sooner.
If you have cron jobs that you want to run interactively to test, you might view the contents of the file with cat, and copy and paste the cron job to run it. Unfortunately, this involves touching the mouse, which is a sin in my book.
Let's say you have a cron job at /etc/cron.d/repeatme with the contents:
*/10 * * * * root program arg1 arg2
You can't execute this as a script with all the junk in front of it, but we can use cut to get rid of all the junk, wrap it in a command substitution, and execute the string with eval:
eval $( cut -d ' ' -f 6- /etc/cron.d/repeatme)
The cut command prints fields 6 through the end of the line, delimited by spaces. eval then executes the resulting command.
I used a cron job here as an example, but the concept is to format text from stdout, and then evaluate that text.
The use of eval in this case is not insecure, because we know exactly what we will be evaluating beforehand.
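Tying this to the echo suggestion from an earlier answer, you can inspect the text before evaluating it:
cmd=$(cut -d ' ' -f 6- /etc/cron.d/repeatme)
echo "$cmd"    # verify what is about to run, e.g.: program arg1 arg2
eval "$cmd"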
I've recently had to use eval to force multiple brace expansions to be evaluated in the order I needed. Bash does multiple brace expansions from left to right, so
xargs -I_ cat _/{11..15}/{8..5}.jpg
expands to
xargs -I_ cat _/11/8.jpg _/11/7.jpg _/11/6.jpg _/11/5.jpg _/12/8.jpg _/12/7.jpg _/12/6.jpg _/12/5.jpg _/13/8.jpg _/13/7.jpg _/13/6.jpg _/13/5.jpg _/14/8.jpg _/14/7.jpg _/14/6.jpg _/14/5.jpg _/15/8.jpg _/15/7.jpg _/15/6.jpg _/15/5.jpg
but I needed the second brace expansion done first, yielding
xargs -I_ cat _/11/8.jpg _/12/8.jpg _/13/8.jpg _/14/8.jpg _/15/8.jpg _/11/7.jpg _/12/7.jpg _/13/7.jpg _/14/7.jpg _/15/7.jpg _/11/6.jpg _/12/6.jpg _/13/6.jpg _/14/6.jpg _/15/6.jpg _/11/5.jpg _/12/5.jpg _/13/5.jpg _/14/5.jpg _/15/5.jpg
The best I could come up with to do that was
xargs -I_ cat $(eval echo _/'{11..15}'/{8..5}.jpg)
This works because the single quotes protect the first set of braces from expansion when the outer shell parses the command line, leaving them to be expanded when eval re-parses the string inside the command-substitution subshell.
There may be some cunning scheme involving nested brace expansions that allows this to happen in one step, but if there is I'm too old and stupid to see it.
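For a smaller, self-contained demonstration of the same trick (the _ prefix and the ranges are arbitrary):
$ eval echo _/'{2..3}'/{9..8}.jpg
_/2/9.jpg _/3/9.jpg _/2/8.jpg _/3/8.jpg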
You asked about typical uses.
One common complaint about shell scripting is that you (allegedly) can't pass by reference to get values back out of functions.
But actually, via eval, you can pass by reference: the callee can pass back a list of variable assignments to be evaluated by the caller. It is pass by reference because the caller is allowed to specify the name(s) of the result variable(s) - see the example below. Error results can be passed back in standard names like errno and errstr.
Here is an example of passing by reference in bash:
#!/bin/bash
isint()
{
re='^[-]?[0-9]+$'
[[ $1 =~ $re ]]
}
#args 1: name of result variable, 2: first addend, 3: second addend
iadd()
{
if isint ${2} && isint ${3} ; then
echo "$1=$((${2}+${3}));errno=0"
return 0
else
echo "errstr=\"Error: non-integer argument to iadd $*\" ; errno=329"
return 1
fi
}
var=1
echo "[1] var=$var"
eval $(iadd var A B)
if [[ $errno -ne 0 ]]; then
echo "errstr=$errstr"
echo "errno=$errno"
fi
echo "[2] var=$var (unchanged after error)"
eval $(iadd var $var 1)
if [[ $errno -ne 0 ]]; then
echo "errstr=$errstr"
echo "errno=$errno"
fi
echo "[3] var=$var (successfully changed)"
The output looks like this:
[1] var=1
errstr=Error: non-integer argument to iadd var A B
errno=329
[2] var=1 (unchanged after error)
[3] var=2 (successfully changed)
There is almost unlimited bandwidth in that text output! And there are more possibilities if multiple output lines are used: e.g., the first line could be used for variable assignments, the second for a continuous 'stream of thought'; but that's beyond the scope of this post.
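As an aside (not part of the original answer): if your bash is 4.3 or newer, namerefs offer an eval-free way to write into a caller-named variable. A sketch, where iadd2 is a made-up name:
iadd2()
{
    declare -n result=$1    # result becomes an alias for the caller's variable
    result=$(( $2 + $3 ))
}
iadd2 var 1 1
echo "$var"    # prints 2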
In the question:
who | grep $(tty | sed s:/dev/::)
outputs errors claiming that files a and tty do not exist. I understood this to mean that tty is not being interpreted before execution of grep, but instead that bash passed tty as a parameter to grep, which interpreted it as a file name.
There is also the situation of nested redirection, which you might expect to be handled by matched parentheses specifying a child process; but bash is primarily a word separator, creating parameters to be sent to a program, so parentheses are not matched first, but interpreted as seen.
I got specific with grep and specified the file as a parameter instead of using a pipe. I also simplified the base command, passing the output of a command as a file, so that I/O piping would not be nested:
grep $(tty | sed s:/dev/::) <(who)
works well.
who | grep $(echo pts/3)
is not really desired, but eliminates the nested pipe and also works well.
In conclusion, bash does not seem to like nested piping. It is important to understand that bash is not a new-wave program written in a recursive manner; instead, bash is an old 1,2,3 program that has been appended with features. For purposes of assuring backward compatibility, the initial manner of interpretation has never been modified. If bash were rewritten to first match parentheses, how many bugs would be introduced into how many bash programs? Many programmers love to be cryptic.
As clearlight has said, "(p)erhaps the most straightforward example would be a bash program that opens another bash script as a text file, reads each line of text, and uses eval to execute them in order". I'm no expert, but the textbook I'm currently reading (Shell-Programmierung by Jürgen Wolf) points to one particular use of this that I think would be a valuable addition to the set of potential use cases collected here.
For debugging purposes, you may want to step through your script line by line (pressing Enter for each step). You can use eval to execute every line by trapping the DEBUG signal (which bash sends before each simple command):
trap 'printf "$LINENO :-> " ; read line ; eval $line' DEBUG
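A minimal sketch of that trap in a throwaway script (the variable names are made up):
#!/bin/bash
# Press Enter at each prompt to run the next command,
# or type a line of your own to eval first.
trap 'printf "%s :-> " "$LINENO"; read -r line; eval "$line"' DEBUG
a=1
b=2
echo "a + b = $((a + b))"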
I like the "evaluating your expression one additional time before execution" answer, and would like to clarify with another example.
var="\"par1 par2\""
echo $var # prints nicely "par1 par2"
function cntpars() {
echo " > Count: $#"
echo " > Pars : $*"
echo " > par1 : $1"
echo " > par2 : $2"
if [[ $# = 1 && $1 = "par1 par2" ]]; then
echo " > PASS"
else
echo " > FAIL"
return 1
fi
}
# Option 1: Will Pass
echo "eval \"cntpars \$var\""
eval "cntpars $var"
# Option 2: Will Fail, with curious results
echo "cntpars \$var"
cntpars $var
The curious results in option 2 are that we would have passed two parameters as follows:
First parameter: "par1
Second parameter: par2"
How is that for counterintuitive? The additional eval will fix that.
It was adapted from another answer on How can I reference a file for variables using Bash?
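A common eval-free alternative for the "command or arguments stored in a variable" problem is a bash array, which preserves word boundaries without a second evaluation pass; a brief sketch reusing cntpars from above:
args=("par1 par2")       # a single element containing a space
cntpars "${args[@]}"     # passes exactly one parameter, so it also prints PASS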

why in an 'if' statement 'then' has to be in the next line in bash?

if is followed by then in bash, but I don't understand why then cannot be used on the same line, as in if [...] then; it has to go on the next line. Does that remove some ambiguity from the code, or is bash just designed like that? What is the underlying reason for it?
I tried to write if and then on the same line, but it gave the error below:
./test: line 6: syntax error near unexpected token `fi'
./test: line 6: `fi'
the code is:
#!/bin/bash
if [ $1 -gt 0 ] then
echo "$1 is positive"
fi
It has to be preceded by a separator of some description, though not necessarily placed on the next line (a). In other words, to achieve what you want, you can simply use:
if [[ $1 -gt 0 ]] ; then
echo "$1 is positive"
fi
As an aside, for one-liners like that, I tend to prefer:
[[ $1 -gt 0 ]] && echo "$1 is positive"
But that's simply because I prefer to see as much code on screen as possible. It's really just a style thing which you can freely ignore.
(a) The reason for this can be found in the Bash manpage (my emphasis):
RESERVED WORDS: Reserved words are words that have a special meaning to the shell. The following words are recognized as reserved when unquoted and either the first word of a simple command (see SHELL GRAMMAR below) or the third word of a case or for command:
! case coproc do done elif else esac fi for function if in select then until while { } time [[ ]]
Note that, though that section states it's the "first word of a simple command", the manpage seems to contradict itself in the referenced SHELL GRAMMAR section:
A simple command is a sequence of optional variable assignments followed by blank-separated words and redirections, and terminated by a control operator. The first word specifies the command to be executed, and is passed as argument zero.
So, whether you consider it part of the next command or a separator of some sort is arguable. What is not arguable is that it needs a separator of some sort (newline or semicolon, for example) before the then keyword.
The manpage doesn't go into why it was designed that way but it's probably to make the parsing of commands a little simpler.
Here's another way to explain the need for a line break or semicolon before then: the thing that goes between if and then is a command (or sequence of commands); if the then just came directly after the command without a delimiter, it'd be ambiguous whether it should be treated as a shell keyword or just an argument to the command.
For instance, this is a perfectly valid command:
echo This prints a phrase ending with then
...which prints "This prints a phrase ending with then". Now, consider this one:
if echo This prints a phrase ending with then
should that print "This prints a phrase ending with then" and look for a then keyword later on, or should it just print "This prints a phrase ending with" and treat the then as a keyword?
In order to settle this ambiguity, shell syntax says it should treat "then" as an argument to echo, and in order to get it treated as a keyword you need a command delimiter (line break or semicolon) to mark the end of the command.
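To see both readings side by side, here is a contrived sketch:
# the first then is just an argument to echo;
# the semicolon lets the second one be parsed as a keyword
if echo This prints a phrase ending with then; then
    echo "...and now then worked as a keyword"
fi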
Now, you might think that your if condition, [ $1 -gt 0 ], already has a perfectly good delimiter, namely the ]. But in shell syntax, that's really just an argument to the [ command (yes, that's a command). Try this command:
[ 1 -gt 0 ] then
...and you'll probably get an error like "-bash: [: missing ']'", because the [ command checked its last argument to make sure it was "]", found that it was "then" instead, and panicked.
Perhaps it helps to understand why this is so by way of a few examples. The argument to if is a sequence of commands; so you can say e.g.
if read -r -p "What is your name?" name
[ "$name" -eq "tripleee" ]
then
echo "I kneel before thee"
fi
or even a complex compound like
while read -r -p "Favorite number?" number
case $number in
42) true; break;;
*) false;;
esac
do
echo "Review your preferences, then try again"
done
This extremely powerful but potentially confusing feature of the shell is probably one of its most misunderstood constructs. The ability to pass a sequence of commands to the flow control statements can make for very elegant scripts, but is often missed entirely (see e.g. Why is testing "$?" to see if a command succeeded or not, an anti-pattern?)
If it helps, you can use semi-colons
if [ $1 -gt 0 ]; then
echo "$1 is positive"
fi
# or even
if [ $1 -gt 0 ]; then echo "$1 is positive"; fi
As for why, it helps me to think of if, then, else, and fi as bash commands, and just like all other commands, they need to be at the start of a line (or after a semi-colon).

Checking the success of a command in a bash `if [ .. ]` statement

I am trying to automate our application backup. Part of the process is to check the exit status of egrep in an if statement:
if [ ! -f /opt/apps/SiteScope_backup/sitescope_configuration.zip ] ||
[ egrep -i -q "error|warning|fatal|missing|critical" "$File" ]
then
echo "testing"
fi
I expected it to output testing because the file exists and egrep returns success, but instead I'm getting an error:
-bash: [: too many arguments
I tried all kinds of syntax - additional brackets, quotes, etc. - but the error still persists.
Please help me understand where I am going wrong.
You are making the common mistake of assuming that [ is part of the if statement's syntax. It is not; the syntax of if is simply
if command; then
: # ... things which should happen if command's result code was 0
else
: # ... things which should happen otherwise
fi
One of the common commands we use is [, which is another name for the command test. It is a simple command for comparing strings, numbers, and files. It accepts a fairly narrow combination of arguments, and tends to generate confusing and misleading error messages if you don't pass it the expected arguments. (Or rather, the error messages are adequate and helpful once you get used to them, but they are easily misunderstood if you're not.)
Here, you want to check the result of the command egrep:
if [ ! -f /opt/apps/SiteScope_backup/sitescope_configuration.zip ] ||
egrep -i -q "error|warning|fatal|missing|critical" "$File"
then
echo "testing"
fi
In the general case, command can be a pipeline or a list of commands; then, the exit code from the final command is the status which if will examine, similarly to how the last command in a script decides the exit status from the script.
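For instance (a sketch with a made-up log file name), the tested command can be a whole pipeline, and if looks only at the pipeline's final exit status:
if tail -n 20 app.log | grep -q ERROR
then
    echo "recent errors found"
fi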
These compound commands can be arbitrarily complex, like
if read thing
case $thing in
'' | 'quit') false;;
*) true;;
esac
then ...
but in practice, you rarely see more than a single command in the if statement (though it's not unheard of; your compound statement with || is a good example!)
Just to spell this out,
if [ egrep foo bar ]
is running [ aka test on the arguments egrep foo bar. But [ without options only accepts a single argument, and then checks whether or not that argument is the empty string. (egrep is clearly not an empty string. Quoting here is optional, but would perhaps make it easier to see:
if [ "egrep" ]; then
echo "yes, 'egrep' is not equal to ''"
fi
This is obviously silly in isolation, but should hopefully work as an illustrative example.)
The historical design of test as a general kitchen sink for things the authors didn't want to make part of the syntax of if is one of the less attractive aspects of the original Bourne shell. Bash and zsh offer alternatives which are less unwieldy (like the [[ double brackets in bash), and of course, POSIX test is a lot more well-tempered than the original creation from Bell Labs.

Error when running a script (with a while Loop) in a script created background process

Is it possible to use a while loop in a background job created by a script? It works perfectly fine when I run it manually at the command prompt.
Called Script Content (test.sh)
#!/bin/bash
> switch
echo 'Running' >> swtich
check=$(more switch)
echo $check
while [ $check = 'Running' ]
do
sleep 5s
check=$(more switch)
echo $check
done
Calling Script Content (start.sh)
#!/bin/bash
$PWD/test.sh &
I am getting an error:
-bash: [: too many arguments
when the calling script runs. The called script runs manually without issue. Are while loops not allowed in a script-created background job?
echo 'Running' >> swtich
check=$(more switch)
You write to a file called swtich and then read from a file called switch. That file doesn't exist, so the variable check ends up empty.
Incidentally, why call more? That's only useful for viewing a file page by page. If you're piping the output, cat is equivalent. And if more doesn't have a controlling terminal, which happens when you call test.sh in the background, more prints an extra line containing the file name, so its output is not just the contents of the file. So you really must use cat and not more. (In bash, you can also write check=$(<switch).)
[ $check = 'Running' ]
Since $check is empty, the command [ receives three arguments: =, Running and ]. This is not valid syntax for the [ command.
Always put double quotes around variable substitution and command substitution: "$check", "$(somecommand)". If a $ substitution occurs outside quotes, the result is split into separate words (0 words in your case, since the result was empty) and the words are interpreted as file glob patterns. This is almost never desirable, so always use double quotes unless you really have a list of glob patterns.
check=$(cat switch) is actually safe, because in an assignment, a single word is expected on the right-hand side, so word splitting doesn't happen. However, you might as well write check="$(cat switch)" for clarity. Also, the quotes are required if you write export check="$(cat switch)".
So we have:
#!/bin/bash
echo 'Running' >> switch
check=$(cat switch)
while [ "$check" = 'Running' ]
do
sleep 5s
check=$(cat switch)
done
In bash (but not in sh), you can write [[ $check = Running ]] instead of [ $check = Running ]. That's because [[ … ]] is special syntax, unlike [ which is a built-in command with no special parsing. However, you would need double quotes if the variable was on the right-hand side of the = operator, because the right-hand side of = inside [[ … ]] is a pattern. Rather than learn such complicated rules, just use double quotes all the time and you'll be fine.
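To illustrate that last point about the right-hand side of = being a pattern inside [[ … ]]:
check='Running'
[[ $check = R* ]] && echo "unquoted R* matches as a pattern"
[[ $check = "R*" ]] || echo "quoted R* is compared literally, so no match"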
Try this:
nohup yourprogram >>/dev/null 2>>/dev/null &
The program itself should not contain & or nohup.
The problem is basically that you haven't quoted the argument in the test command. It should be
while [ "$check" = 'Running' ]
with double quotes around the variable. If you do that, you will notice that you have a typo in the first line. Instead of
echo 'Running' >> swtich
it should be
echo 'Running' >> switch
Then it should work as expected
As a side note, it is better not to use the more command to read the contents of the file, since it is intended for humans; instead, just do:
check=$(<switch)
