Building an execution string from a text file - shell

I am creating a shell script that retrieves values from the database and spools them into a text file. Those values contain variable references ($CURR_DATE, $SITE, etc.) as stored in the database. When I try to execute the program with those parameters, it uses the literal strings instead of the values of the variables.
For example:
while read line;
do
    Unix_Array[$counter]=$line;
    let counter=counter+1;
done < parameterfile.txt
echo "Finished putting into array"
while [[ $c -lt ${#Unix_Array[@]} ]]
do
    PARAMS="${PARAMS:-}${PARAMS:+ }${Unix_Array[$c]}"
    ((c=$c+1))
done
echo "Finished creating parameter string"
EXECUTE="$PROGRAM $USERID $PARAMS"
echo $PARAMS
$EXECUTE
I think it is executing something like
Program user/id#DB $CURR_DATE $SITE
instead of substituting the values of the variables that are already set.
How can I build the execution statement so that it uses the values of the declared variables rather than the literal variable names?

Once you've collected the array, use it directly:
typeset -a params
while IFS= read -r line; do
    params[n++]=$line
done < parameterfile.txt
"$program" "$userid" "${params[@]}"
As to the lines containing variables, I'd hesitantly recommend using eval. What does the parameter file look like? Who has permission to write to it?
Get out of the habit of using ALL_CAPS_VARS: one day you'll use PATH or LANG and wonder why things "don't work".
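If the parameter file really does contain literal $CURR_DATE / $SITE strings that must be expanded against variables already set in your script, a hesitant eval-based sketch (only safe when the file is trusted) might look like this, reusing the array approach above:
typeset -a params
while IFS= read -r line; do
    # eval expands $CURR_DATE, $SITE, etc. against the current environment.
    # WARNING: it also executes anything else the line contains, so the
    # parameter file must be trusted and not writable by others.
    eval "expanded=\"$line\""
    params[n++]=$expanded
done < parameterfile.txt
"$program" "$userid" "${params[@]}"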

Related

How do I pass in an un-evaluated variable to a shell script, and have the script evaluate it after sourcing another script?

I have a script that is sourcing a second script and need to do the following:
I want to pass in a variable name to the first script like this: sh firstScript.sh variable=$variableName
The first script will then source the second script, which contains the value of variableName
I'm then going to print the evaluated variable
I know that I can do something like \$variableName to pass in the variable name, but I can't figure out how to get the first script to then evaluate the variable using the exported variables from the second script. What am I missing here?
Here's what I wound up doing:
I'm passing in an entire string, with the variable embedded in the middle of the string like so:
sh firstScript.sh --message="This is a message \${variableName}"
In the first script, I'm doing these steps:
Extract the entire string into an array
Pull out the embedded variables
Evaluate the embedded variables against the sourced script
Do a string replace to put the value in the original string
When I was done, it looked like this:
IFS=';' args=(${@//--/;}); unset IFS
source secondScript.sh
for arg in "${args[@]}"; do
    case ${arg^^} in
        MESSAGE=*)
            message="${arg#*=}"
            # Collect every ${...} token embedded in the message (GNU grep).
            messageVars=$(echo ${message} | grep -o "\${\w*}")
            for messageVar in ${messageVars[*]}; do
                # Strip the ${ and } to get the bare variable name.
                messageVar=${messageVar#*\{}
                messageVar=${messageVar%\}*}
                # Indirect expansion: look up the value set by secondScript.sh.
                messageVarVal=${!messageVar}
                echo "messageVar: ${messageVar}"
                echo "messageVarVal: ${messageVarVal}"
                message=${message//"\${${messageVar}}"/"${messageVarVal}"}
            done
            echo "Message: ${message}"
            ;;
    esac
done
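A quick usage sketch, assuming the snippet above is saved as firstScript.sh with a bash shebang (it relies on ${arg^^} and ${!messageVar}, which are bash features) and that secondScript.sh simply sets variableName="world" for illustration:
$ cat secondScript.sh
variableName="world"
$ bash firstScript.sh --message="This is a message \${variableName}"
messageVar: variableName
messageVarVal: world
Message: This is a message world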
I hope this is what you want:
firstScript.sh
varname="$1"
if [[ -z $varname ]]; then
    echo "usage: $0 varname"
    exit
fi
source ./secondScript.sh
declare -n ref="$varname"
echo "varname=$varname value=$ref"
secondScript.sh
foo=2
Then execute the script with:
./firstScript.sh foo
Output:
varname=foo value=2
The -n option to declare creates a reference (a nameref) to another variable.
You need Bash 4.3 or later for this functionality.
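If you want to fail early on older shells, you could add a small guard using the built-in BASH_VERSINFO array at the top of firstScript.sh, something like:
# Refuse to run if declare -n is unavailable (bash older than 4.3).
if (( BASH_VERSINFO[0] < 4 || (BASH_VERSINFO[0] == 4 && BASH_VERSINFO[1] < 3) )); then
    echo "This script needs bash 4.3 or later for declare -n" >&2
    exit 1
fi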
[Alternative]
You can also write the firstScript.sh as:
firstScript.sh
varname="$1"
if [[ -z $varname ]]; then
    echo "usage: $0 varname"
    exit
fi
source ./secondScript.sh
echo "varname=$varname value=${!varname}"
which will produce the same result.
The ${!varname} notation is called indirect expansion: varname is expanded first, and its value is then used as the name of the variable to expand.
It has been available since Bash 2.
Hope this helps.

How to expand variables using previously defined variables inside a file

I have a property file with multiple key-value pairs. Some of the values use previously defined keys. Here is a sample:
xpath=abc/temp.txt
fullpath=$HOME/$xpath
...
I want to parse this file line by line and print the lines, resolving environment variables like $HOME as well as previously defined variables like $xpath.
The expected output is:
xpath=abc/temp.txt
fullpath=tempuser/abc/temp.txt
...
How do I expand the variables in this way in a bash script?
If you want to parse the file line by line and interpret the variables, you may want to use eval to evaluate each value, like this:
while IFS='=' read -r key value
do
    key=$(echo $key | tr '.' '_')      # dots are not valid in variable names
    if [[ -n $key ]]
    then
        v=$(eval "echo ${value}")      # expand any $VAR references in the value
        eval "${key}='${v}'"           # make this key available to later lines
        echo "${key}=${v}"
    fi
done < "my.properties"
In the above snippet, eval combined with echo interprets any variable references in the value.
Since the assignments are valid bash, source settings.env suffices to evaluate them. However, you also want to print the assignments, so a bit more trickery is required:
PS4="\000" source <(echo 'set -x'; cat settings.env; echo '{ set +x; } 2>/dev/null')
This trick uses the bash debugging facility of set -x to print each assignment as it is performed. In words:
PS4="\000" removes the debugging prompt, which by default is +
<() creates a temporary file-like input (process substitution) fed by the commands contained within the parentheses
set -x enables debugging
cat settings.env inserts your settings
{ set +x; } 2>/dev/null disables debugging without printing the set +x line itself
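For example, with the sample file from the question saved as settings.env, and assuming HOME expands to /home/tempuser, a run would print something like this (the exact trace prefix depends on your bash version):
$ cat settings.env
xpath=abc/temp.txt
fullpath=$HOME/$xpath
$ PS4="\000" source <(echo 'set -x'; cat settings.env; echo '{ set +x; } 2>/dev/null')
xpath=abc/temp.txt
fullpath=/home/tempuser/abc/temp.txt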

Use variables with dot in unix shell script

How do I use variables with a dot in the name in a Unix (Bash) shell script? I have the script below:
#!/bin/bash
set -x
"FILE=system.properties"
FILE=$1
echo $1
if [ -f "$FILE" ]; then
    echo "File $FILE exists"
else
    echo "File $FILE does not exist"
fi
This is basically what I need: x=propertyfile, and propertyfile=$1. Can someone please help me?
You can't use dots in variable names, but you can map such keys with an associative array, which is the more appropriate solution. This requires Bash 4.0 or later.
declare -A FILE ## Declare variable as an associative array.
FILE[system.properties]="somefile" ## Assign a value.
echo "${FILE[system.properties]}" ## Access the value.
Note that the line:
"FILE=system.properties"
tries to execute a command named FILE=system.properties, which most likely doesn't exist. For it to be an assignment, the quote must come after the equals sign:
FILE="system.properties"
It is a bit hard to tell from the question what you are after, but it sounds as if you might be after indirect variable names. Unfortunately, standard editions of bash don't allow dots in variable names.
However, if you used an underscore instead, then:
FILE="system_properties"
system_properties="$1"
echo "${FILE}"
echo "${!FILE}"
will echo:
system_properties
what-was-passed-as-the-first-argument

Read a Bash variable assignment from another file

I have this test script:
#!/bin/bash
echo "Read a variable"
#open file
exec 6<test.txt
read EXAMPLE <&6
#close file again
exec 6<&-
echo $EXAMPLE
The file test.txt has only one line:
EXAMPLE=1
The output is:
bash-3.2$ ./Read_Variables.sh
Read a variable
EXAMPLE=1
I just need the value of $EXAMPLE, in this case 1. How can I avoid getting the EXAMPLE= part in the output?
Thanks
If the file containing your variables is using bash syntax throughout (e.g. X=Y), another option is to use source:
#!/bin/bash
echo "Read a variable"
source test.txt
echo $EXAMPLE
As an alternative to sourcing the entire file, you can try the following:
while read line; do
    [[ $line =~ EXAMPLE= ]] && declare "$line" && break
done < test.txt
which will scan the file until it finds the first line that looks like an assignment to EXAMPLE, then use the declare builtin to perform the assignment. It's probably a little slower, but it's more selective about what is actually executed.
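For instance, if test.txt hypothetically contained an extra assignment you did not want executed, only the EXAMPLE line would be picked up:
$ cat test.txt
OTHER=2
EXAMPLE=1
$ while read line; do
>     [[ $line =~ EXAMPLE= ]] && declare "$line" && break
> done < test.txt
$ echo "EXAMPLE=$EXAMPLE OTHER=${OTHER:-unset}"
EXAMPLE=1 OTHER=unset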
I think the most proper way to do this is by sourcing the file which contains the variable (if it has bash syntax), but if I were to do that, I'd source it in a subshell, so that if other variables are ever declared there, they won't override any important variables in the current shell:
(. test.txt && echo $EXAMPLE)
You could read the line in as an array (notice the -a option) which can then be indexed into:
# ...
IFS='=' read -a EXAMPLE <&6
echo ${EXAMPLE[0]} # EXAMPLE
echo ${EXAMPLE[1]} # 1
# ...
This call to read splits the input line on IFS and puts the resulting parts into an indexed array.
See help read for more information about read options and behaviour.
You could also manipulate the EXAMPLE variable directly:
# ...
read EXAMPLE <&6
echo ${EXAMPLE##*=} # 1
# ...
If all you need is to "import" other Bash declarations from a file you should just use:
source file

Checking if the output of a curl command is null or not in a shell script

I am writing a shell script and the curl command is stored in a variable like this:
str="curl 'http...'"
and I am evaluating this using
eval $str
I want to check whether the output of this is null or not, i.e. whether it actually returns something. How can I check this?
Why can't I store the output in a variable like this:
op=eval $str
Why does writing this give an error?
Thanks in advance
Use command substitution:
str="curl 'http...'"
op=$(eval "$str")
if [[ -n $op ]]; then
    echo "Output is not null."
else
    echo "Output is null."
fi
Actually, a better and safer approach than eval is to use an array:
cmd=("curl" "http://...")
op=$("${cmd[#]}")
if [[ -n $op ]]; then
    echo "Output is not null."
else
    echo "Output is null."
fi
When assigning the array, make each argument a separate element. Use single or double quotes as needed to keep an argument together as one complete word.
By the way, the error you got is simply because that isn't the proper way to call a command and capture its output; as explained, you have to use command substitution. An advanced mechanism called process substitution could also be applicable, but it's not practical for this requirement.
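To make the difference concrete, here is a short sketch (the URL in str is still just a placeholder, as in the question):
str="curl 'http...'"
# What the question tried: bash parses op=eval as a temporary environment
# assignment for the command that $str expands to, so nothing is captured
# (and the literal single quotes inside $str break the curl call).
# op=eval $str
# Command substitution runs the command and captures its standard output:
op=$(eval "$str")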
