Pipeline Server Eval not returning appropriately - bash

I am fairly new to bash and have been asked to develop a pipeline process for use on gitlab. I have simplified my variable notation below. My problem is that a command that runs fine locally will not run correctly on the server.
I have a dynamically generated $command_string that looks something like this:
command_string="cf service-key ${val1} ${val2}"
Locally, I can run
new_var=$(eval ${command_string})
But when this command is submitted to the server and run as a pipeline, it breaks.
I should note that when I run the eval without assigning its output to new_var, it seems to work. The problem is that I need to capture the output so that I can extract information from it.
Additionally, I have tried new_var=$(bash -c ${command_string}), but this returns a value as if I had submitted only the first "cf" portion of $command_string (i.e. it returns the help menu of the cf command rather than the output of the command with the provided arguments).
As I said, bash is a new language for me so I am sure I am missing something fundamental here.

${val1} and ${val2} are expanded at the time of definition, not evaluation, which means that the use of command_string will not be "dynamic" as you intend.
Consider defining a function as follows:
my_command ()
{
cf service-key "$1" "$2"
}
and using the function within your code as
new_var=$(my_command "$val1" "$val2")
The eval is unnecessary since $(...) is a command substitution, which already evaluates the enclosed command.
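As an aside, the reason new_var=$(bash -c ${command_string}) only showed the cf help is that the expansion is unquoted: bash -c takes just the first word, cf, as its script, and the remaining words become $0 and the positional parameters, so cf runs with no arguments. A minimal sketch of the quoted form, for comparison:
# Quoting passes the whole string as the script for -c.
new_var=$(bash -c "$command_string")
echo "$new_var"
Even so, the function shown above is the more robust pattern, since it avoids building command strings entirely.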

Bash command works when I run it myself but fails in the script

My company has a tool that dynamically generates commands to run based on an input json. It works very well when all arguments to the compiled command are single words, but is failing when we attempt multi word args. Here is the minimal example of how it fails.
# Print and execute the command.
print_and_run() { local command=("$#")
if [[ ${command[0]} == "time" ]]; then
echo "Your command: time ${command[#]:1}"
time ${command[#]:1}
fi
}
# How print_and_run is called in the script
print_and_run time docker run our-container:latest $generated_flags
# Output
Your command: time docker run our-container:latest subcommand --arg1=val1 --arg2="val2 val3"
Usage: our-program [OPTIONS] COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]...
Try 'our-program --help' for help.
Error: No such command 'val3"'.
But if I copy the printed command and run it myself it works fine (I've omitted the docker flags). Shelling into the container and running the program directly with these arguments works as well, so the parsing logic there is solid (it's a Python program that uses click to parse the args).
Now, I have a working solution that uses eval, but my entire team jumped down my throat at that suggestion. I've also proposed a solution using delineating characters for multi-word arguments, but that was shot down as well.
No other solutions proposed by other engineers have worked either. So can I ask someone to perhaps explain why val3 is being treated as a separate command, or to help me find a solution to get bash to properly evaluate the dynamically determined command without using eval?
Your command after expanding $generated_flags is:
print_and_run time docker run our-container:latest subcommand --arg1=val1 --arg2="val2 val3"
Your specific problem is that in --arg2="val2 val3" the quotes are literal, not syntactical, because quotes are processed before variables are expanded. This means it gets split into two separate arguments, --arg2="val2 and val3". Then, I assume, docker is trying to interpret val3" as some kind of command because it's not part of any argument, and it's throwing an error because it doesn't know what that means.
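You can watch both effects, the literal quotes and the word splitting, with printf, which here prints each of its arguments on a separate line:
$ generated_flags='subcommand --arg1=val1 --arg2="val2 val3"'
$ printf '[%s]\n' $generated_flags
[subcommand]
[--arg1=val1]
[--arg2="val2]
[val3"]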
Normally you'd fix this via an array to properly maintain the string boundary.
generated_flags=( "subcommand" "--arg1=val1" "--arg2=val2 val3" )
print_and_run time docker run our-container:latest "${generated_flags[@]}"
This will maintain --arg2=val2 val3 as a single argument as it gets passed into print_and_run, then you just have to expand your command array correctly inside the function (make sure to quote the expansion).
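For example, a corrected version of the function might look like this; quoting "${command[@]:1}" is what keeps --arg2=val2 val3 together as one argument:
# Print and execute the command, preserving argument boundaries.
print_and_run() {
    local command=("$@")
    if [[ ${command[0]} == "time" ]]; then
        echo "Your command: time ${command[*]:1}"
        time "${command[@]:1}"
    fi
}
Note that time is a shell keyword rather than an ordinary command, which is why it can prefix the expanded array directly.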
The question is:
why val3 is being treated as a separate command
Unquoted variable expansions undergo word splitting and filename expansion. Word splitting splits the result of the variable expansion on spaces, tabs, and newlines, breaking it into separate "words".
a="something else"
$a # results in two "words"; 'something' and 'else'
It is irrelevant what you put inside the variable value or how many quotes or escape sequences you put inside; every run of consecutive spaces splits it into words. Quotes " ' and escapes \ are parsed when they are part of the input line, not when they are part of the result of an unquoted expansion.
help me find a solution to
Write a parser that will actually parse the command, split it according to the rules you want to use, and then execute the command split into separate words. For example, a very crude such parser is included in xargs:
$ echo " 'quotes quotes' not quotes" | xargs printf "'%s'\n"
'quotes quotes'
'not'
'quotes'
For example, Python has shlex.split, which you can just use, and at the same time you introduce Python, which is far easier to manage than badly written Bash scripts.
tool that dynamically generates commands to run based on an input json
Overall, the proper way forward would be to upgrade the tool to generate a JSON array that represents the words of the command to be executed. Then you can just execute that array of words, which is, again, trivial to do properly in Python with json and subprocess.run, but will require some gymnastics with jq and read and Bash arrays in shell.
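For the shell side, a rough sketch of those gymnastics, assuming the tool can emit the command as a JSON array of strings (the array literal here is illustrative):
cmd_json='["docker","run","our-container:latest","subcommand","--arg1=val1","--arg2=val2 val3"]'
# jq -r '.[]' prints one array element per line; readarray collects them.
# Caveat: this breaks if an element itself contains a newline.
readarray -t cmd < <(jq -r '.[]' <<< "$cmd_json")
"${cmd[@]}"    # execute the array of words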
Check your scripts with shellcheck.

How can I change the environment of my bash function without affecting the environment it was called from?

I am working on a bash script that needs to operate on several directories. In each directory it needs to source a setup script unique to that directory and then run some commands. I need the environment set up when that script is sourced to only persist inside the function, as if it had been called as an external script with no persistent effects on the calling script.
As a simplified example if I have this script, sourced.sh:
export foo=old
And I have this as my driver.sh:
export foo=young
echo $foo
source_and_run(){
    source ./sourced.sh
    echo $foo
}
source_and_run
echo $foo
I want to know how to change the invocation of source_and_run in driver so it will print:
young
old
young
I need to be able to collect the return value from the function as well. I'm pretty sure there's a simple way to accomplish this but I haven't been able to track it down so far.
Creating another script like external.sh:
source ./sourced.sh; echo $foo
and defining source_and_run like
source_and_run(){ ./external.sh; return $?; }
would work but managing that extra layer of scripts seems like it shouldn't be necessary.
You said
Creating another script like external.sh:
source ./sourced.sh; echo $foo
and defining source_and_run like
source_and_run(){ ./external.sh; return $?; }
would work but managing that extra layer of scripts seems like it shouldn't be necessary.
You can get the same behavior by using a subshell. Note the () instead of {}.
But note that return $? is not necessary in your case. By default, a function returns the exit status of its last command. In your case, that command is echo. Therefore, the return value will always be 0.
source_and_run() (
    source ./sourced.sh
    echo "$foo"
)
By the way: A better solution would be to rewrite sourced.sh such that it prints $foo. That way you could call the script directly instead of having to source it and then using echo $foo.
The very purpose of a bash function is that it runs in the same process as the invoker.
Since the environment is accessible (for instance, using the command printenv), you could save the environment at the entry of the function and restore it at the end. However, the easier and more natural approach is to not use a function at all, but to make it a separate shell script, which is executed in its own process and hence has its own environment that no longer affects the environment of the caller.
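Applied to the original per-directory use case, a subshell keeps each sourced setup contained. A minimal sketch, with illustrative directory and command names:
for dir in ./project-a ./project-b; do
    # The parentheses start a subshell: the cd and the sourced
    # environment are discarded when it exits.
    result=$(
        cd "$dir" || exit
        source ./setup.sh
        some_command
    )
    echo "$dir: $result (exit status $?)"
done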

Execute command that results from execution of a script whose name is in a variable

When posting this question originally, I totally misworded it, obtaining another, reasonable but different question, which was correctly answered here.
The following is the correct version of the question I originally wanted to ask.
In one of my Bash scripts, there's a point where I have a variable SCRIPT which contains the /path/to/an/exe which, when executed, outputs a line to be executed.
What my script ultimately needs to do is execute that line. Therefore the last line of the script is
$($SCRIPT)
so that $SCRIPT is expanded to /path/to/an/exe, and $(/path/to/an/exe) executes the executable and gives back the line to be executed, which is then executed.
However, running shellcheck on the script generates this error:
In setscreens.sh line 7:
$($SCRIPT)
^--------^ SC2091: Remove surrounding $() to avoid executing output.
For more information:
https://www.shellcheck.net/wiki/SC2091 -- Remove surrounding $() to avoid e...
Is there a way I can rewrite that $($SCRIPT) in a more appropriate way? eval does not seem to be of much help here.
If the script outputs a shell command line to execute, the correct way to do that is:
eval "$("$SCRIPT")"
$($SCRIPT) would only happen to work if the command can be completely evaluated using nothing but word splitting and pathname expansion, which is generally a rare situation. If the program instead outputs e.g. grep "Hello World" or cmd > file.txt then you will need eval or equivalent.
You can make it simple by setting the words of the command to be executed as the positional arguments of your shell, and executing them from the command line
set -- $("$SCRIPT")
and now run the result of that expansion, split into words, by doing the below on the command line:
"$@"
This works in case your output from SCRIPT contains multiple words, e.g. custom flags that need to be run. Since this is run in your current interactive shell, ensure the command to be run is not vulnerable to code injection. You could take one step of caution and run your command within a sub-shell, so as not to let your parent environment be affected, by doing ( "$@" ; )
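Putting the pieces together, a minimal sketch (assuming, say, the script prints echo hello):
set -- $("$SCRIPT")    # word-split the script's output into $1, $2, ...
( "$@" )               # run it in a subshell to protect the caller's environment
Like the plain $($SCRIPT), this still relies on word splitting, so it cannot represent arguments that contain spaces; for those cases you are back to eval.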
Or use a shellcheck disable=SCnnnn directive to silence the warning, and take the occasion to comment on the explicit intention, rather than evading detection by cloaking the call behind an intermediate variable or argument array.
#!/usr/bin/env bash
# shellcheck disable=SC2091 # Intentional execution of the output
"$("$SCRIPT")"
Disabling shellcheck with a comment clarifies the intent and makes clear that the questionable code is not an error, but an informed implementation design choice.
You can do it in two steps:
command_from_SCRIPT=$($SCRIPT)
$command_from_SCRIPT
and it's clean in shellcheck.

Getting groovy syntax issue in my Jenkins pipeline

I have a simple script that I need for my pipeline:
sh '''podname=$(kubectl get pods -n my-namespace --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep my-pod)
echo "my name is $podname"'''
All I need to invoke through my Jenkins declarative pipeline is the value of that kubectl command so that I can use it later in my script. If I run the same thing directly on the Linux server it works just fine, but somehow the Groovy shell invocation always results in syntax errors about unterminated quotes, illegal dollar literals, etc.
How do i fix this?
The syntax looks fine to me, except for this, in the middle of the command:
"\n"
This will be interpreted as an actual new-line, which may not be what you want? Perhaps try this instead, to let the \n sequence be passed to the command:
"\\n"
If that still doesn't help...
according to the docs:
Runs a Bourne shell script, typically on a Unix node. Multiple lines are accepted.
An interpreter selector may be used, for example: #!/usr/bin/perl
So another suggestion is to try adding the appropriate shebang to your script... as there's no reason the result should be different from when you run this on the shell.

what will bash do with an unset variable

I am confused about how bash treats an unset variable used in a shell command, like below:
rm -rf /$TO_BE_REMOVED
What will happen if I have not defined the variable TO_BE_REMOVED?
If you do that, the command executed will effectively try to remove / which is very, very bad. I mean, it will probably mostly fail (unless you're running as root), but still, it will be very bad.
You can avoid many of these sorts of bugs in Bash automatically with one simple command:
set -eu
If you put that at the top of your Bash script, the interpreter will stop and return an error code if your script ever invokes a command whose failure is not checked (that's the -e part), or if it uses an undefined variable (the -u part). This makes Bash considerably safer.
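A quick way to see the -u guard in action on the command from the question:
#!/usr/bin/env bash
set -eu
# With -u, bash aborts here with "TO_BE_REMOVED: unbound variable"
# instead of expanding to nothing and running rm -rf /.
rm -rf "/${TO_BE_REMOVED}"
If you only want to protect a single expansion, the ${var:?message} form does the same job without changing global options: rm -rf "/${TO_BE_REMOVED:?is not set}" aborts when the variable is unset or empty.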
