SoF2 shell script not running - bash

I've got the following code in my shell script:
SERVER=`ps -ef | grep -v grep | grep -c sof2ded`
if ["$SERVER" != "0"]; then
echo "Already Running, exiting"
exit
else
echo "Starting up the server..."
cd /home/sof2/
/home/sof2/crons/start.sh > /dev/null 2>&1
fi
I did chmod a+x status.sh
Now I try to run the script but it's returning this error:
./status.sh: line 5: [1: command not found
Starting up the server...
Any help would be greatly appreciated.

Could you please try changing a few things in your script as follows and let me know if that helps? (I changed the backticks to $(...) and [ to [[ in the code.)
SERVER=$(ps -ef | grep -v grep | grep -c sof2ded)
if [[ "$SERVER" -ne 0 ]]; then
echo "Already Running, exiting"
exit
else
echo "Starting up the server..."
cd /home/sof2/
/home/sof2/crons/start.sh > /dev/null 2>&1
fi
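As an aside, pgrep can replace the whole ps | grep -v grep | grep pipeline. A minimal sketch, using the process name sof2ded from the question (check_server is a made-up helper name, not part of the original script):

```shell
#!/bin/bash
# Hypothetical helper: pgrep -x matches the exact process name, so the
# "grep -v grep" filtering step is no longer needed.
check_server() {
    if pgrep -x sof2ded >/dev/null; then
        echo "Already Running, exiting"
        return 1
    fi
    echo "Starting up the server..."
}
check_server
```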

The problem is with the test command. "But", I hear you say, "I am not using the test command". Yes you are: it is also known as [.
The if statement syntax is if command; the brackets are not part of the if syntax.
Commands have arguments separated (tokenized) by whitespace, so:
[ "$SERVER" != "0" ]
The whitespace is needed because the command is [ and then there are 4 arguments passed to it (the last one must be ]).
A more robust way of comparing numerics is to use double parentheses,
(( SERVER == 0 ))
Notice that you don't need the $ or the quotes around SERVER. Also the spacing is less important, but useful for readability.
[[ is used for comparing text patterns.
As an aside, backticks ` ` are considered deprecated because they are difficult to read; they have been replaced by $( ... ).
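To make the distinction concrete, here is a small sketch (SERVER is hard-coded to 0 purely for illustration) showing [ as an ordinary command next to the arithmetic form:

```shell
#!/bin/bash
# "[" really is a command: bash provides it as a builtin.
type [

SERVER=0

# POSIX test: the command is "[", followed by four separate arguments.
if [ "$SERVER" -eq 0 ]; then
    echo "not running (test command)"
fi

# Arithmetic context: no "$" or quotes needed around SERVER.
if (( SERVER == 0 )); then
    echo "not running (arithmetic)"
fi
```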

Related

Bash script command and getting rid of shellcheck error SC2181

I have the following bash script:
dpkg-query --show --showformat='${Status}\n' "$i" 2> \
/dev/null | grep "install ok installed" &> /dev/null
if [[ $? -eq 0 ]]; then
l_var_is_desktop="true"
fi
and the ShellCheck utility (https://www.shellcheck.net/) is giving me the following output:
$ shellcheck myscript
Line 17:
if [[ $? -eq 0 ]]; then
^-- SC2181: Check exit code directly with e.g. 'if mycmd;', not indirectly with $?.
$
The link to this warning is the following: https://github.com/koalaman/shellcheck/wiki/SC2181
What is the best way to modify this? The command is really too long to put on one line. I would like to avoid using ShellCheck ignore directives.
I've tried creating a local variable and storing the output of the command, but this breaks other rules.
The command doesn't really get much longer by putting it directly in if, you're just adding 3 characters.
if dpkg-query --show --showformat='${Status}\n' "$i" 2> \
/dev/null | grep "install ok installed" &> /dev/null
then
l_var_is_desktop="true"
fi
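If the pipeline feels too long even for that, another option (a sketch; is_installed is a made-up helper name, not from the original script) is to wrap it in a function and test the function directly, which also satisfies SC2181:

```shell
#!/bin/bash
# Hypothetical helper: the long pipeline moves into a function, so the
# if-line stays short while still checking the exit status directly.
is_installed() {
    dpkg-query --show --showformat='${Status}\n' "$1" 2>/dev/null |
        grep -q "install ok installed"
}

if is_installed "coreutils"; then
    l_var_is_desktop="true"
fi
```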

While loop hangs and does not find string

I have a section of code in a bash script that uses a while loop to grep a file until the string I am looking for is there, then exits. Currently, it's just hanging with the following code:
hostname="test-cust-15"
VAR1=$(/bin/grep -wo -m1 "HOST ALERT: $hostname;DOWN" /var/log/logfile)
while [ ! "$VAR1" ]
do
sleep 5
done
echo $VAR1 was found
I know the part of the script responsible for inserting this string into the logfile works, as I can grep it out side of the script and find it.
One thing I have tried is to change up the variables. Like this:
hostname="test-cust-15"
VAR1="HOST ALERT: $hostname;DOWN"
while [ ! /bin/grep "$VAR1" /var/log/logfile ]
do
sleep 5
done
echo $VAR1 was found
But I get a "binary operator expected" message, and once I got a "too many arguments" message when using this:
while [ ! /bin/grep -q -wo "$VAR1" /var/log/logfile ]
What do I need to do to fix this?
while/until can work off of the exit status of a program directly.
until /bin/grep "$VAR1" /var/log/logfile
do
sleep 5
done
echo "$VAR1" was found
You also mentioned that it prints out the match in an above comment. If that's not desirable, use output redirection, or grep's -q option.
until /bin/grep "$VAR1" /var/log/logfile >/dev/null
until /bin/grep -q "$VAR1" /var/log/logfile
No need to bother with command substitution or test operator there. Simply:
while ! grep -wo -m1 "HOST ALERT: $hostname;DOWN" /var/log/logfile; do
sleep 5
done
Don't waste resources, use tail!
#!/bin/bash
while read -r line
do
echo "$line"
break
done < <(tail -f /tmp/logfile | grep --line-buffered "HOST ALERT")
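For completeness, the polling approach can also be given a timeout so the script never hangs forever. A sketch under assumed values (a temporary file stands in for /var/log/logfile, and the 60-iteration limit is arbitrary):

```shell
#!/bin/bash
# A throwaway file stands in for the real logfile in this sketch.
logfile=$(mktemp)
echo "HOST ALERT: test-cust-15;DOWN" > "$logfile"

tries=0
until grep -q "HOST ALERT: test-cust-15;DOWN" "$logfile"; do
    sleep 1
    (( ++tries >= 60 )) && { echo "timed out"; exit 1; }
done
echo "string was found after $tries extra checks"
rm -f "$logfile"
```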

grep command exit code for unmatched patterns

I have written a shell script which runs the crontab -l command.
To make it easier to use, I have also given the user the ability to pass a command-line argument to the script, which acts as a pattern for the grep command so that the user can filter out all the stuff they don't need to see.
Here's the script:
#!/bin/bash
if [[ $1 == "" ]]; then
echo -e "No Argument passed:- Showing default crontab\n"
command=$(crontab -l 2>&1)
echo "$command"
else
rc=$?
command=$(crontab -l | grep -- "$1" 2>&1)
echo "$command"
if [[ $rc != 0 ]] ; then
echo -e "grep command on crontab -l was not successful"
fi
fi
This is how I run it:
$ ./DisplayCrontab.sh
If I don't pass any command-line argument, it shows me the complete crontab.
If I pass a garbage pattern which doesn't exist in the crontab, it shows me the following message:
grep command on crontab -l was not successful
But even if I pass a pattern which does exist in a couple of lines in the crontab, I get this kind of output:
#matching lines
#matching lines
#matching lines
grep command on crontab -l was not successful
Why am I getting "grep command not successful" at the bottom? How can I get rid of it? Is there anything wrong with the script?
You're capturing the exit code before the command runs; it should be:
command=$(crontab -l | grep -- "$1" 2>&1)
rc=$?
To test the exit code, use numeric operators:
[[ $rc -ne 0 ]]
From the grep man page:
Normally, the exit status is 0 if selected lines are found and
1 otherwise. But the exit status is 2 if an error occurred.
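Those three exit statuses are easy to verify directly (a quick sketch using a throwaway file; the status 2 for a missing file is GNU grep behavior):

```shell
#!/bin/bash
# Demonstrate grep's documented exit statuses.
tmp=$(mktemp)
printf 'alpha\nbeta\n' > "$tmp"

grep -q alpha "$tmp";                   echo "match:    $?"   # 0
grep -q gamma "$tmp";                   echo "no match: $?"   # 1
grep -q alpha /nonexistent 2>/dev/null; echo "error:    $?"   # 2 (GNU grep)
rm -f "$tmp"
```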

Continue script if only one instance is running? [duplicate]

This question already has answers here:
Quick-and-dirty way to ensure only one instance of a shell script is running at a time
(43 answers)
Closed 5 years ago.
Now this is embarrassing. I'm writing a quick script and I can't figure out why this statement doesn't work.
if [ $(pidof -x test.sh | wc -w) -eq 1 ]; then echo Passed; fi
I also tried using back-ticks instead of $() but it still wouldn't work.
Can you see what is wrong with it? pidof -x test.sh | wc -w returns 1 if I run it inside the script, so I don't see any reason why basically if [ 1 -eq 1 ] wouldn't pass.
Thanks a lot!
Jefromi is correct; here is the logic I think you want:
#!/bin/bash
# this is "test.sh"
if [ $(pidof -x test.sh| wc -w) -gt 2 ]; then
echo "More than 1"
exit
fi
echo "Only one; doing whatever..."
Ah, the real answer: when you use a pipeline, you force the creation of a subshell. This will always cause you to get an increased number:
#!/bin/bash
echo "subshell:"
np=$(pidof -x foo.bash | wc -w)
echo "$np processes" # two processes
echo "no subshell:"
np=$(pidof -x foo.bash)
np=$(echo $np | wc -w)
echo "$np processes" # one process
I'm honestly not sure what the shortest way is to do what you really want to. You could avoid it all by creating a lockfile - otherwise you probably have to trace back via ppid to all the top-level processes and count them.
You don't have to pass the result of pidof to wc to count how many there are; use the shell:
r=$(pidof -x -o $$ test.sh)
set -- $r
if [ "$#" -eq 1 ]; then
echo "passed"
else
echo "no"
fi
If you use the -o option to omit the PID of the script ($$), then only the PID of the subshell and any other instances of the script (and any subshells they might spawn) will be considered, so the test will pass when there's only one instance:
if [ $(pidof -x -o $$ test.sh | wc -w) -eq 1 ]; then echo Passed; fi
Here's how I would do it:
if [ "$(pgrep -c someprocess)" -gt 1 ]; then
echo "More than one process running"
else
echo "At most one process running"
fi
If you don't want to use a lockfile ... you can try this:
#!/bin/bash
if [[ "$(ps -N -p $$ -o comm,pid)" =~ $'\n'"${0##*/}"[[:space:]] ]]; then
echo "already running!"
exit 1
fi
PS: it might need adjustment for a weird ${0##*/}
Just check for the existence of any one (or more) process identified as test.sh, the return code will be 1 if none are found:
pidof -x test.sh >/dev/null && echo "Passed"
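The lockfile approach mentioned in an earlier answer can be sketched with flock(1) from util-linux (Linux-specific; the lock path and the descriptor number 9 are arbitrary choices, not from the question):

```shell
#!/bin/bash
# Hold an exclusive lock on fd 9 for the lifetime of the script; a second
# instance fails the non-blocking flock and exits immediately.
lockfile=/tmp/test.sh.lock
exec 9>"$lockfile"
if ! flock -n 9; then
    echo "already running, exiting"
    exit 1
fi
echo "got the lock; doing whatever..."
```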

Shell scripting: die on any error

Suppose a shell script (/bin/sh or /bin/bash) contained several commands. How can I cleanly make the script terminate if any of the commands has a failing exit status? Obviously, one can use if blocks and/or callbacks, but is there a cleaner, more concise way? Using && is not really an option either, because the commands can be long, or the script could have non-trivial things like loops and conditionals.
With standard sh and bash, you can
set -e
It will
$ help set
...
-e Exit immediately if a command exits with a non-zero status.
It also works (from what I can gather) with zsh, and it should work for any Bourne shell descendant.
With csh/tcsh, you have to launch your script with #!/bin/csh -e
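A minimal demonstration of the behavior (run in a child bash so the failure doesn't kill the current shell):

```shell
#!/bin/bash
# Under set -e the child script stops at "false", so "after" never prints.
out=$(bash -c 'set -e; echo before; false; echo after' 2>&1)
echo "$out"   # prints: before
```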
Maybe you could use:
$ <any_command> || exit 1
You can check $? to see what the most recent exit code is, e.g.:
#!/bin/sh
# A Tidier approach
check_errs()
{
# Function. Parameter 1 is the return code
# Para. 2 is text to display on failure.
if [ "${1}" -ne "0" ]; then
echo "ERROR # ${1} : ${2}"
# as a bonus, make our script exit with the right error code.
exit ${1}
fi
}
### main script starts here ###
grep "^${1}:" /etc/passwd > /dev/null 2>&1
check_errs $? "User ${1} not found in /etc/passwd"
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
check_errs $? "Cut returned an error"
echo "USERNAME: $USERNAME"
check_errs $? "echo returned an error - very strange!"