I have a bash script that sources contents from another file. The contents of the other file are commands I would like to execute so I can compare their return values. Some of the entries contain multiple commands separated by either a semicolon (;) or by ampersands (&&), and I can't seem to make this work. To work on this, I created some test scripts as shown:
test.conf is the file being sourced by test.sh
Example-1 (this works); the two outputs differ by 2 seconds
test.conf
CMD[1]="date"
test.sh
. test.conf
i=2
echo "$(${CMD[$i]})"
sleep 2
echo "$(${CMD[$i]})"
Example-2 (this does not work)
test.conf (same script as above)
CMD[1]="date;date"
Example-3 (tried this, it does not work either)
test.conf (same script as above)
CMD[1]="date && date"
I don't want my variable, CMD, to be inside tick marks (backticks), because then the commands would be executed at the time the file is sourced, and I see no way of re-evaluating the variable.
This script essentially calls CMD on pass-1 to check something, if on pass-1 I get a false reading, I do some work in the script to correct the false reading and re-execute & re-evaluate the output of CMD; pass-2.
Here is an example. Here I'm checking to see if SSHD is running. If it's not running when I evaluate CMD[1] on pass-1, I will start it and re-evaluate CMD[1] again.
test.conf
CMD[1]=`pgrep -u root -d , sshd 1>/dev/null; echo $?`
So if I modify this for my test script, then test.conf becomes:
NOTE: By tick marks I mean backticks (the key below the ~ on my keyboard); they don't show up well here.
CMD[1]=`date;date` or `date && date`
My script looks like this (to handle the tick marks)
. test.conf
i=2
echo "${CMD[$i]}"
sleep 2
echo "${CMD[$i]}"
I get the same date/time printed twice despite the 2 second delay. As such, CMD is not getting re-evaluated.
First of all, you should never use backticks unless you need to be compatible with an old shell that doesn't support $() - and only then.
Secondly, I don't understand why you're setting CMD[1] but then calling CMD[$i] with i set to 2.
Anyway, this is one way (and it's similar to part of Barry's answer):
CMD[1]='$(date;date)' # no backticks (remember - they carry Lyme disease)
eval echo "${CMD[1]}" # or $i instead of 1
From the couple of lines of your question, I would have expected some approach like this:
#!/bin/bash
while read -r line; do
    # munge $line
    if eval "$line"; then
        : # success
    else
        : # fail
    fi
done
Where you have backticks in the source, you'll have to escape them to avoid evaluating them too early. Also, backticks aren't the only way to evaluate code - there is eval, as shown above. Maybe it's eval that you were looking for?
For example, this line:
CMD[1]=`pgrep -u root -d , sshd 1>/dev/null; echo $?`
Ought probably look more like this:
CMD[1]='`pgrep -u root -d , sshd 1>/dev/null; echo $?`'
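The deferred-evaluation idea can be checked in a few lines. This is a minimal sketch, not the original pgrep check: `test -d /tmp` stands in for it so the snippet runs anywhere, and the single quotes keep `$?` from expanding at assignment time.

```shell
#!/bin/bash
# The command string (including the exit-status echo) is stored as plain
# text and only runs when passed to eval.
CMD[1]='test -d /tmp; echo $?'
pass1=$(eval "${CMD[1]}")    # pass-1: the string is evaluated now
pass2=$(eval "${CMD[1]}")    # pass-2: evaluated again, fresh each time
echo "pass-1=$pass1 pass-2=$pass2"
```

Each eval re-runs the stored string from scratch, which is exactly the pass-1/pass-2 behavior the question asks for.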
Related
I came across a script that is supposed to set up postgis in a docker container, but it references this "${psql[@]}" command in several places:
#!/bin/sh
# Perform all actions as $POSTGRES_USER
export PGUSER="$POSTGRES_USER"
# Create the 'template_postgis' template db
"${psql[#]}" <<- 'EOSQL'
CREATE DATABASE template_postgis;
UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template_postgis';
EOSQL
I'm guessing it's supposed to use the psql command, but the command is always empty so it gives an error. Replacing it with psql makes the script run as expected. Is my guess correct?
Edit: In case it's important, the command is being run in a container based on postgres:11-alpine.
$psql is supposed to be an array containing the psql command and its arguments.
The script is apparently expected to be run from the postgres image's entrypoint script, which does
psql=( psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --no-password )
and later sources the script in this loop:
for f in /docker-entrypoint-initdb.d/*; do
    case "$f" in
        *.sh)
            # https://github.com/docker-library/postgres/issues/450#issuecomment-393167936
            # https://github.com/docker-library/postgres/pull/452
            if [ -x "$f" ]; then
                echo "$0: running $f"
                "$f"
            else
                echo "$0: sourcing $f"
                . "$f"
            fi
            ;;
        *.sql)    echo "$0: running $f"; "${psql[@]}" -f "$f"; echo ;;
        *.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${psql[@]}"; echo ;;
        *)        echo "$0: ignoring $f" ;;
    esac
    echo
done
See Setting an argument with bash for the reason to use an array rather than a string.
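The reason an array beats a string can be shown in a short sketch (the array name and arguments here are illustrative, not from the original script): each element of "${cmd[@]}" expands as one quoted word, so an argument containing a space survives intact.

```shell
#!/bin/bash
# Each array element becomes exactly one word when expanded with "${cmd[@]}",
# so "two words" is passed to printf as a single argument.
cmd=(printf '%s\n' "two words" single)
"${cmd[@]}"
```

With a plain string (`cmd="printf %s\n two words single"`), word splitting would break "two words" into two separate arguments.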
The #!/bin/sh and the [@] are incongruous. This is a bash-ism, where the psql variable is an array. This literal quote-dollar-sign-psql-bracket-at-bracket-quote is expanded into "psql" "array" "values" "each" "listed" "and" "quoted" "separately". It's the safer way, e.g., to accumulate arguments to a command where any of them might have spaces in them.
psql=(/foo/psql arg arg arg) is the best way to define the array you need there.
It might look obscure, but it would work like so...
Let's say we have a bash array wc, which contains a command wc, and an argument -w, and we feed that a here document with some words:
wc=(wc -w)
"${wc[#]}" <<- words
one
two three
four
words
Since there are four words in the here document, the output is:
4
In the quoted code, there needs to be some prior point, (perhaps a calling script), that does something like:
psql=(psql -option1 -option2 arg1 arg2 ... )
As to why the programmer chose to invoke a command with an array, rather than just invoke the command, I can only guess... Maybe it's a crude sort of operator overloading to compensate for different *nix distros, (i.e. BSD vs. Linux), where the local variants of some necessary command might have different names from the same option, or even use different commands. So one might check for BSD or Linux or a given version, and reset psql accordingly.
The answer from @Barmar is correct.
The script was intended to be "sourced" and not "executed".
I faced the same problem and came to the same answer after I read that it had been reported here and fixed by "chmod".
https://github.com/postgis/docker-postgis/issues/119
Therefore, the fix is to change the permissions.
This can be done either in your git repository:
chmod -x initdb-postgis.sh
or add a line to your docker file.
RUN chmod -x /docker-entrypoint-initdb.d/10_postgis.sh
I like to do both so that it is clear to others.
Note: if you are using git on windows then permission can be lost. Therefore, "chmod" in the docker file is needed.
So we have this script that is supposed to change the IP of a linux machine based on user input. This user input has to be validated.
If the script is run inside the directory in which it lays, everything works as expected, but as soon as it's run with an absolute path, it seems to break on some points.
I already tried to use the debug option set -x but the output stays almost the same.
read -p "Please enter the netmask (CIDR format): " netmask
if [ ! $(echo "$netmask" | egrep "^([12]?[0-9]?)$") ];
then
subnetok=0
fi
if [ "$subnetok" == "0" ];
then
echo -e "\033[41m\033[39m Subnetmask is invalid!\033[0m"
sleep 1
return 1
fi
This is the debug output if the script is run inside the directory:
++ echo 24
++ egrep '^([12]?[0-9]?)$'
+ '[' '!' 24 ']'
+ '[' '' == 0 ']'
and this is the debug output if the script is run with an absolute path
+++ echo 24
+++ egrep --color=auto '^([12]?[0-9]?)$'
++ '[' '!' 24 ']'
++ '[' 0 == 0 ']'
++ echo -e 'Subnetmask is invalid'
I expect the output to be the same with the same numbers
When you run the script with this:
. /usr/local/script/script.sh
This uses the . command, which runs the script in the current shell (equivalent to source). That is, it runs it in your interactive shell rather than forking a new shell to run it. See: What is the difference between ./somescript.sh and . ./somescript.sh
This has (at least) two effects:
The current shell is interactive, and apparently has an alias defined for egrep, which makes things a little weird. Not really a problem, just weird.
The current shell apparently already has a definition for the variable subnetok, and it's "0". It's probably left over from a previous time you ran the script this way. This is what's causing the problem.
The primary solution is that the script needs to explicitly initialize subnetok rather than just assuming that it's undefined:
subnetok=1
if ...
Alternately, if you don't need the variable for anything else, you could just skip it and handle the condition immediately:
if [ ! $(echo "$netmask" | egrep "^([12]?[0-9]?)$") ]; # See below for alternatives
then
    echo -e "\033[41m\033[39m Subnetmask is invalid!\033[0m"
    ...
Other recommendations:
Run the script without the .:
/usr/local/script/script.sh
Give the script a proper shebang line (if it doesn't already have one) that specifies the bash shell (i.e. #!/bin/bash or #!/usr/bin/env bash).
Use a better method to check the subnet for validity, like:
if ! echo "$netmask" | grep -Eq "^[12]?[0-9]?$"
or (bash only):
if ! [[ "$netmask" =~ ^[12]?[0-9]?$ ]]
Don't use echo -e, as it's not portable (even between different versions of the same OS, modes of the shell, etc). Use printf instead (and I'd recommend single-quotes for strings that contain backslashes, because in some cases they'll get pre-parsed when in double-quotes):
printf '\033[41m\033[39m Subnetmask is invalid!\033[0m'
Note that printf is not a drop-in replacement for echo -e, it's considerably more complicated when you're using variables and/or multiple arguments. Read the man page.
Comment:
By running with an absolute path I mean . /usr/local/script/script.sh instead of cd into /usr/local/script/ and then ./script.sh
The difference is that in one case you are executing the script and in another case you are sourcing the script. See What is the difference between executing a Bash script vs sourcing it? for more information.
When you are running ./script.sh without a space between the dot and the slash you are executing the script in a new shell. When you are running . /usr/local/script/script.sh you are sourcing the script in the current shell. This can have implications if you have for example an alias set in your current shell that would not be present in a new shell, such as alias egrep='egrep --color=auto'. That's why there is a difference.
From the linked question:
Both sourcing and executing the script will run the commands in the script line by line, as if you typed those commands by hand line by line.
The differences are:
When you execute the script you are opening a new shell, type the commands in the new shell, copy the output back to your current shell, then close the new shell. Any changes to environment will take effect only in the new shell and will be lost once the new shell is closed.
When you source the script you are typing the commands in your current shell. Any changes to the environment will take effect and stay in your current shell.
Use source if you want the script to change the environment in your currently running shell. use execute otherwise.
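The difference is easy to demonstrate. This sketch creates a throwaway script (/tmp/demo.sh, a path chosen just for this example) and runs it both ways:

```shell
#!/bin/bash
# The same script is first executed (child shell) and then sourced
# (current shell); only the sourced run changes our variable.
cat > /tmp/demo.sh <<'EOF'
marker=set-by-script
EOF
marker=unset
bash /tmp/demo.sh            # executed: runs in a child shell, change is lost
echo "after execute: $marker"
. /tmp/demo.sh               # sourced: runs in this shell, change persists
echo "after source: $marker"
```

The first echo still prints "unset" because the executed copy modified only its own environment; the second prints "set-by-script".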
Here's my problem, from console if I type the below,
var=`history 1`
echo $var
I get the desired output. But when I do the same inside a shell script, it is not showing any output. Also, for other commands like pwd, ls etc, the script shows the desired output without any issue.
As the value of the variable contains a space, add quotes around it.
E.g.:
var='history 1'
echo $var
I believe all you need is the following:
1- Ask the user for the number of the history line to print.
2- Run the script, take the input from the user, and get the output:
cat get_history.ksh
echo "Enter the line number of history which you want to get.."
read number
if [[ $# -eq 0 ]]
then
    echo "Usage of script: get_history.ksh number_of_lines"
    exit
else
    history "$number"
fi
Added logic to check the arguments: if the number of arguments passed is 0, the script exits.
By default history is turned off in a script, therefore you need to turn it on:
set -o history
var=$(history 1)
echo "$var"
Note the preferred use of $( ) rather than the deprecated backticks.
However, this will only look at the history of the current process, that is this shell script, so it is fairly useless.
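A runnable version of that answer (the command being recorded is an arbitrary stand-in):

```shell
#!/bin/bash
# History is off in non-interactive shells; enabling it lets the history
# builtin see commands run after this point in the script.
set -o history
date > /dev/null      # any command; it now lands in the history list
hist=$(history)
echo "$hist"
```

The captured history contains the date command (and the history call itself), confirming that recording was active, but only for this shell process.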
While debugging a bash shell script I saw a mistake I have done in some part of my code. This mistake can be a variable name, a variable value or generally line(s) of code.
Is it possible to correct it while running in debugging mode? Or is the only option to exit debugging, correct the mistake(s), and rerun? An option to correct mistakes "on the fly" would be very helpful, especially for long-running scripts, where you would otherwise have to repeat the whole run from the beginning multiple times (once per mistake).
For example:
#!/bin/bash
set -x # debugging
trap read debug
a="1" # wrong value, should be 2
b="5"
sum=$(bc <<< "$a + $b")
set +x
The above script has a trap to execute one line of code at a time and continues to the next line after pressing enter.
During the debugging suppose that I realize that a=1 but it should be something else, let's say a=2. The next command, b=5, has not executed yet because of the trap, so I was thinking of something like inserting a=2 just below a=1 and then pressing enter to continue the debugging.
Something like in the code below:
#!/bin/bash
set -x # debugging
trap read debug
a="1" # wrong value, should be 2
a="2" # <-This is the value that a should have
b="5"
sum=$(bc <<< "$a + $b")
set +x
This approach does not work, I guess because the whole script is read only at the beginning of the run. What would be a good way to handle such an issue in shell scripting?
Thank you
Just flesh out your parser a bit:
#!/bin/bash
function parser {
    IFS= read -r input
    printf "Going to do: >%s\n" "$input"
    eval "$input"
}
set -x # debugging
trap "parser" debug
a="1"
b="5"
sum=$(bc <<< "$a + $b")
set +x
This of course only works with one liners, but should get you started. When you see the debug is on a=1 (you will see it on screen) you can just type a=3 and return. Hit return on all other lines - you'll see the effect at the sum.
Note in this method (overriding after you see the line in debug) you always run AFTER the faulty command ran, since that's when the debugger outputs it. If you want to see the command before running use
echo $BASH_COMMAND
in your parser. Of course, running a=10 before a=1 ran is somewhat counter-productive.
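The DEBUG trap and $BASH_COMMAND can be seen in isolation with a few lines. This is a minimal sketch separate from the parser above; /tmp/debug_trace is a throwaway log file chosen for this example:

```shell
#!/bin/bash
# The DEBUG trap fires before each command; $BASH_COMMAND holds the command
# that is about to run, so it can be logged before execution.
: > /tmp/debug_trace
trap 'echo "about to run: $BASH_COMMAND" >> /tmp/debug_trace' DEBUG
a=1
b=5
trap - DEBUG
cat /tmp/debug_trace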
An improvement for example may be to turn off debugging while parsing, and skipping the echo if no input is received:
function parser {
    set +x
    IFS= read -r input
    if ! [ -z "$input" ]; then
        printf "Going to do: >%s\n" "$input"
        eval "$input"
    fi
    set -x
}
Note this won't modify the script itself. To modify the script itself you would need to add a sed command, or perhaps echo each line to a new file, replacing lines with new input from the debugger when received - this is the safer option. That can be done like so (note the parser runs after the line has already run, so we always output the previous command, overriding it if needed):
set prev_cmd="#!/bin/bash"
function parser {
set +x
IFS= read -r input
if ! [ -z "$input" ]; then
printf "Going to do: >%s\n" $input
eval "$input"
prev_cmd=$input
fi
echo $prev_cmd >> debug_log.bash
prev_cmd=$BASH_COMMAND
set -x
}
Your imagination is the limit here. And the syntax. I would move your parser to a separate file and source it on demand as well.
Answers to questions in comments in order
Note I have echo "$prev_cmd" >> debug_log.bash. prev_cmd will be empty on the first call to the parser if you don't set it beforehand. As the shebang for sure was never debugged and so won't be in your new file, it's a good initial choice for a first line to be dumped to the new file - which needs it anyway. You could of course set it empty or to some comment, whatever you prefer.
When you enter the debugging function debug is turned on (by definition). In order to prevent debugging in the debugger, I shut it off. Finally when leaving the function I need to re-activate it so debugging will continue. That's why the order is 'reversed' to the file - it shuts off your initial activation, and re-activates before continuing.
If you want to stop debugging before sum=..., put set +x before - that's what shuts off debug. Just like I did in the parser.
Acknowledgements: Special thanks to Charles Duffy for making the code safer to use. and just better.
I want that a variable from a script to be incremented every time when I run that script. Something like this:
#!/bin/bash
n=0 #the variable that I want to be incremented
next_n=$[$n+1]
sed -i "2s/.*/n=$next_n/" ${0}
echo $n
will do the job, but it is not so good if I later add lines to the script before the line where the variable is set and forget to update the "2s" address in sed -i "2s/.*/n=$next_n/" ${0}.
Also I prefer to not use another file in which to keep the variable value.
Some other idea?
#!/bin/bash
n=0;#the variable that I want to be incremented
next_n=$[$n+1]
sed -i "/#the variable that I want to be incremented$/s/=.*#/=$next_n;#/" ${0}
echo $n
A script is run in a subshell, which means its variables are forgotten once the script ends and are not propagated to the parent shell which called it. To run a command list in the current shell, you could either source the script, or write a function. In such a script, plain
(( n++ ))
would work - but only when called from the same shell. If the script should work from different shells, or even after switching the machine off and on again, saving the value in a file is the simplest and best option. It might be easier, though, to store the variable value in a different file, not the script itself:
[[ -f saved_value ]] || echo 0 > saved_value
n=$(< saved_value)
echo $(( n + 1 )) > saved_value
Changing the script when it runs might have strange consequences, especially when you change the size of the script (which might happen at 9 → 10).