How to delay expansion of a variable in bash if the command should be executed on another machine? - bash

I have read several threads on how to delay variable expansion, for example by using the eval command, but none of those approaches work in my example.
I want to run the following command:
run_on_compute_farm "/usr/bin/time $MAKE -j$(grep -c ^processor /proc/cpuinfo) all"
As you can see the command is to be executed on a compute farm and thus on another machine. The problem here is that I want to set the number of jobs,
-j$(grep -c ^processor /proc/cpuinfo)
But this will evaluate to the number of CPU cores on the machine that I am sitting on, not on the target.
Is it even possible in bash to delay the evaluation of the variable or the command? Note that the run_on_compute_farm command will evaluate and run the string command that it receives as an argument.

As Cyrus said, you can replace " with ', which will prevent variable expansion. However, if you require some variable expansion, or multiple levels of expansion, then this will not be the right solution.
If in your example you had wanted to expand $MAKE, but not perform the grep, then the answer, I think, is to use \ to escape the $. So,
run_on_compute_farm "/usr/bin/time $MAKE -j\$(grep -c ^processor /proc/cpuinfo) all"
will allow $MAKE to expand on the local machine, but leave the grep for the remote machine to resolve.
You can even nest this strategy, so for example let's get crazy and try this:
run_on_compute_farm "run_on_compute_farm \"/usr/bin/time \$MAKE -j\\\$(grep -c ^processor /proc/cpuinfo) all\""
Notice that some " have been quoted (with \) and I've also sent a quoted backslash over as \\. So, on the first remote machine the command executed will be:
run_on_compute_farm "/usr/bin/time $MAKE -j\$(grep -c ^processor /proc/cpuinfo) all"
So, $MAKE will be resolved on the first remote, but the grep is still quoted. This then gets sent to a second machine and becomes:
/usr/bin/time $MAKE -j$(grep -c ^processor /proc/cpuinfo) all
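If you don't have a compute farm handy, you can simulate the two levels of expansion locally with bash -c, which (like run_on_compute_farm) evaluates the string it receives. A minimal sketch, assuming MAKE=make and a Linux machine with /proc/cpuinfo:
MAKE=make
# $MAKE expands in the outer shell; the escaped $(grep ...) is left for the inner shell
bash -c "echo /usr/bin/time $MAKE -j\$(grep -c ^processor /proc/cpuinfo) all"
# prints something like: /usr/bin/time make -j8 all
# with the job count taken from the machine the inner shell runs on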

Related

Can't run "compgen -c" from perl script

I want to check whether a command exists on my machine (RedHat) from inside a Perl script.
I'm trying to check whether the output of compgen -c contains the desired command, but running it from inside a script just gives me empty output. Other commands work fine.
example.pl:
my $x = `compgen -c`;
print $x;
# empty output
my $y = `ls -a`;
print $y;
# .
# ..
# example.pl
Are there possible solutions for this? Or is there a better way to check for commands on my machine?
First, Perl runs external commands using /bin/sh, which nowadays is a link to whatever shell serves as a default-of-sorts on your system. Much of the time that is bash, but not always; on RedHat it is.
This compgen is a bash builtin. One way to discover that is to run man compgen (in bash) -- the bash manual pops up. Another way is to use type, as Dave shows.
To use builtins we generally need to run an explicit shell for them, and their behavior varies depending on whether the shell is "interactive" or not.† I can't find a discussion of that for this builtin in the bash documentation, but experimentation reveals that you need
my @completions = qx(bash -c "compgen -c");
The quotes are needed in order to pass a complete command to the shell that will be started.
Note that this way you don't catch any STDERR from those commands; it will come out on the terminal, where it can easily be missed. Alternatively, you can redirect that stream within the command by adding 2>&1 (redirect to STDOUT) at the end of it.
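For example, a one-line sketch that captures both streams:
my @completions = qx(bash -c "compgen -c" 2>&1);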
This is one of the reasons to use one of a number of good libraries for running and managing external commands instead of the builtin "backticks" (the qx I use above is an operator form of it.)
† This can be facilitated with -i:
my @output_lines = qx(bash -i -c "command with arguments");
It's because compgen is a bash built-in command, not an external command. And when you run a command using backticks, you get your system's default shell - which is probably going to be /bin/sh, not bash.
The solution is to explicitly run bash, using the -c command-line option to give it a command to run.
my $x = `bash -c 'compgen -c'`;
(The single quotes are needed so that compgen -c reaches bash as a single command; without them, bash would run just compgen and treat the trailing -c as its $0.)
From a bash prompt, you can use type to see how a command is implemented.
$ type ssh
ssh is /usr/bin/ssh
$ type compgen
compgen is a shell builtin
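Putting this together, here is a minimal sketch of the original goal, checking whether a given command exists; the command name ls is just an example:
my @commands = qx(bash -c 'compgen -c');
chomp @commands;
my %have = map { $_ => 1 } @commands;   # lookup table of available commands
my $wanted = 'ls';                      # hypothetical command to check for
print "$wanted exists\n" if $have{$wanted};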

Run an arbitrary command in a docker container that runs on a remote host after sourcing some environment variables from another command

To show what I am trying to do, this is part of the bash script I have so far:
COMMAND="${@:1}"
CONTAINER_DOCKER_NAME=this-value-is-computed-prior
MY_IP=this-ip-is-computed-prior
ssh user@$MY_IP -t 'bash -c "docker exec -it $( docker ps -a -q -f name='$CONTAINER_DOCKER_NAME' | head -n 1 ) /bin/sh -c "eval $(echo export FOO=$BAR) && $COMMAND""'
So let's break down the long command:
I am ssh-ing into a host, where I run bash, which fetches the correct container with docker ps; then I do docker exec to run a shell in the container that loads some environment variables my $COMMAND needs in order to work. It is important to note that $BAR should be the value of the BAR variable inside the container.
So that's what I'm trying to accomplish in theory. However, when running this, no matter how I arrange the braces, quotes or escape characters, I always run into problems: either the shell syntax is not correct, or it does not run the correct command (especially when the command has multiple arguments), or it loads the value of $BAR from my local desktop or the remote host rather than from the container.
Is this even possible at all with a single shell one-liner?
I think we can simplify your command quite a bit.
First, there's no need to use eval here, and you don't need the &&
operator, either:
/bin/sh -c "eval $(echo export FOO=$BAR) && $COMMAND"
Instead:
/bin/sh -c "FOO=$BAR $COMMAND"
That sets the environment variable FOO for the duration of
$COMMAND.
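You can see this behaviour locally; assuming FOO is not otherwise set:
FOO=hello /bin/sh -c 'echo "$FOO"'   # prints: hello
echo "$FOO"                          # prints an empty line; FOO was only set for the child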
Next, you don't need this complex docker ps expression:
docker ps -a -q -f name="$CONTAINER_DOCKER_NAME"
Docker container names are unique. If you have a container name
stored in $CONTAINER_DOCKER_NAME, you can just run:
docker exec -it $CONTAINER_DOCKER_NAME ...
This simplifies the docker command down to:
docker exec -it $CONTAINER_DOCKER_NAME \
/bin/sh -c "FOO=\$BAR $COMMAND"
Note how we're escaping the $ in $BAR there, because we want that
interpreted inside the container, rather than by our current shell.
Now we just need to arrange to run this via ssh. There are a couple
of solutions to that. We can just make sure to protect everything on
the command line against the extra level of shell expansion, like
this:
ssh user@$MY_IP "docker exec -it $CONTAINER_DOCKER_NAME \
/bin/sh -c \"FOO=\\\$BAR $COMMAND\""
We need to wrap the entire command in double quotes, which means we
need to escape any quotes inside the command (we can't use single
quotes because we actually want to expand the variable
$CONTAINER_DOCKER_NAME locally). We're going to lose one level of
\ expansion, so our \$BAR becomes \\\$BAR.
If your command isn't interactive, you can make this a little less
hairy by piping the script to bash rather than including it on the
command line, like this:
ssh user@$MY_IP docker exec -i $CONTAINER_DOCKER_NAME /bin/sh <<EOF
FOO=\$BAR $COMMAND
EOF
That simplifies the quoting and escaping necessary to get things
passed through to the container shell.
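Putting the pieces together, a minimal sketch of the heredoc variant, reusing the placeholder values from the question:
#!/bin/bash
COMMAND="${@:1}"
CONTAINER_DOCKER_NAME=this-value-is-computed-prior
MY_IP=this-ip-is-computed-prior

# \$BAR survives the local heredoc expansion as $BAR, so the container's
# shell expands it; $COMMAND expands locally before anything is sent
ssh "user@$MY_IP" docker exec -i "$CONTAINER_DOCKER_NAME" /bin/sh <<EOF
FOO=\$BAR $COMMAND
EOF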
Thanks to larsks' great explanation I got it working; my final one-liner is:
ssh -i $ECS_SSH_KEY ec2-user@$EC2_IP -t "bash -c \"docker exec -it \$( docker ps -a -q -f name=$CONTAINER_DOCKER_NAME | head -n 1 ) /bin/sh -c \\\"eval \\\\\\\$(AWS_ENV_PATH=/\\\\\\\$ENVIRONMENT /bin/aws-env) && $COMMAND\\\"\""
So basically you wrap everything in double quotes, and then also use double quotes inside of it, because we need some variables, like $CONTAINER_DOCKER_NAME, from the host. To escape the quotes and the $ sign you use \.
But because we have multiple levels of shells (host, server, container), we also need multiple levels of escaping. The first level is just \$, which ensures that the variable (or the shell command, like docker ps) is run not on the host but on the server.
The next level of escaping is seven \ characters. Every \ escapes the character to its right, so the sequence ends up as \\\$ on the second level (server) and \$ on the third level (container). This ensures that the variable is evaluated in the container, not on the server.
The same principle applies to the double quotes: everything between \" is run on the second level, and everything between \\\" is run on the third level.

Passing unescaped equals sign to GNU parallel in args

I invoked GNU parallel (on OS X Yosemite, installed using MacPorts, shell is bash 3.2.57) like this:
parallel mycommand -o A=5 -o ::: Y=1 Y=2
with the intent that it would run the following commands, in parallel:
mycommand -o A=5 -o Y=1
mycommand -o A=5 -o Y=2
But it actually runs this:
mycommand -o A=5 -o Y\=1
mycommand -o A=5 -o Y\=2
The backslash causes mycommand not to recognize that argument. This is a problem. And even after scanning the man page and reading the section of the tutorial on quoting, I can't figure out any way to get parallel to run the commands without the backslash getting in there. I've tried putting the Y= options in a file, I've tried single and double quotes with various levels of nesting, but the output of parallel --dry-run always shows Y\=. Is there some way I can get the backslash out?
This should do the trick:
parallel eval mycommand -o A=5 -o ::: Y=1 Y=2
Prefixing the command with eval makes the shell re-parse the command line at run time, which strips the backslash that parallel inserted when it quoted the argument.
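You can see the effect of eval on a backslash-escaped argument with echo standing in for mycommand; this is only a rough local illustration of what happens on the remote side:
eval echo -o A=5 -o Y\\=1
# prints: -o A=5 -o Y=1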

Trouble escaping quotes in a variable held string during a Sub-shell execution call [duplicate]

This question already has answers here:
Why does shell ignore quoting characters in arguments passed to it through variables? [duplicate]
(3 answers)
Closed 6 years ago.
I'm trying to write a database call from within a bash script and I'm having problems with a sub-shell stripping my quotes away.
This is the bones of what I am doing.
#---------------------------------------------
#! /bin/bash
export COMMAND='psql ${DB_NAME} -F , -t --no-align -c "${SQL}" -o ${EXPORT_FILE} 2>&1'
PSQL_RETURN=`${COMMAND}`
#---------------------------------------------
If I use an 'echo' to print out the ${COMMAND} variable the output looks fine:
echo ${COMMAND}
screen output:-
#---------------
psql drupal7 -F , -t --no-align -c "SELECT DISTINCT hostname FROM accesslog;" -o /DRUPAL/INTERFACES/EXPORTS/ip_list.dat 2>&1
#---------------
Also if I cut and paste this screen output it executes just fine.
However, when I try to execute the command as a variable within a sub-shell call, it gives an error message.
The error is from the psql client to the effect that the quotes have been removed from around the ${SQL} string.
The error suggests psql is trying to interpret the terms in the sql string as parameters.
So it seems the string and quotes are composed correctly but the quotes around the ${SQL} variable/string are being interpreted by the sub-shell during the execution call from the main script.
I've tried to escape them using various methods: \", \\", \\\", "", \"" '"', \'"\', ... ...
As you can see from my 'try it all' approach I am no expert and it's driving me mad.
Any help would be greatly appreciated.
Charlie101
Instead of storing the command in a string variable, it is better to use a bash array here:
cmd=(psql "${DB_NAME}" -F , -t --no-align -c "${SQL}" -o "${EXPORT_FILE}")
PSQL_RETURN=$( "${cmd[@]}" 2>&1 )
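Filled in with the values from the question's echo output, a sketch of the array approach looks like this:
#!/bin/bash
DB_NAME=drupal7
SQL="SELECT DISTINCT hostname FROM accesslog;"
EXPORT_FILE=/DRUPAL/INTERFACES/EXPORTS/ip_list.dat

cmd=(psql "$DB_NAME" -F , -t --no-align -c "$SQL" -o "$EXPORT_FILE")
PSQL_RETURN=$( "${cmd[@]}" 2>&1 )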
Rather than evaluating the contents of a string, why not use a function?
call_psql() {
    # optional, if variables are already defined in global scope
    DB_NAME="$1"
    SQL="$2"
    EXPORT_FILE="$3"
    psql "$DB_NAME" -F , -t --no-align -c "$SQL" -o "$EXPORT_FILE" 2>&1
}
then you can just call your function like:
PSQL_RETURN=$(call_psql "$DB_NAME" "$SQL" "$EXPORT_FILE")
It's entirely up to you how elaborate you make the function. You might like to check for the correct number of arguments (using something like (( $# == 3 ))) before calling the psql command.
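For instance, a sketch of the function with such a check added:
call_psql() {
    (( $# == 3 )) || { echo "usage: call_psql DB_NAME SQL EXPORT_FILE" >&2; return 1; }
    psql "$1" -F , -t --no-align -c "$2" -o "$3" 2>&1
}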
Alternatively, perhaps you'd prefer just to make it as short as possible:
call_psql() { psql "$1" -F , -t --no-align -c "$2" -o "$3" 2>&1; }
In order to capture the command that is being executed for debugging purposes, you can use set -x in your script. This will print the contents of the function, including the expanded variables, when the function (or any other command) is called. You can switch this behaviour off using set +x, or, if you want it on for the whole duration of the script, you can change the shebang to #!/bin/bash -x. This saves you explicitly echoing throughout your script to find out what commands are being run; you can just turn on set -x for a section.
A very simple example script using the shebang method:
#!/bin/bash -x
ec() {
    echo "$1"
}
var=$(ec 2)
Running this script, either directly after making it executable or calling it with bash -x, gives:
++ ec 2
++ echo 2
+ var=2
Removing the -x from the shebang or the invocation results in the script running silently.

How to pass arguments with spaces to MinGW-MSYS shell scripts from the Windows command line?

I have a sh script, myscript.sh, which takes a directory as an input argument and does some recursive processing of the files in that directory. I want to run this script in Windows command line (I use the MinGW/MSYS distribution).
How do I properly provide a path with spaces as an input argument?
For example, I want to give a path, 'dirA\dir B'. I tried many different combinations, including
sh -c 'myscript.sh "dirA/dir B"'
sh -c 'myscript.sh "dirA/dir\ B"'
sh -c "myscript.sh 'dirA/dir\\ B'"
sh -c "myscript.sh \"dirA/dir B\" "
sh -c "myscript.sh dirA/dir\ B "
But with all of them the script interprets the path as 'dirA/dir'.
cmd doesn't understand single quotes. If sh does, then make the outer quotes double, so that you're double-quoting the argument at the cmd prompt, and passing the single quotes to sh to interpret.
The third option you listed is the right idea, but it's not clear to me what you're doing with the \\. Does sh require you to escape the space within a double-quoted string? If I'm not mistaken, it's either/or, not both. One of these should work (depending on whether your Windows port of sh uses Unix-style forward-slash path separators, Windows-style backslash separators, or both):
sh -c "myscript.sh 'dirA/dir B'"
or
sh -c "myscript.sh 'dirA\dir B'"
I'm not sure why the fourth option doesn't work. That method works fine for passing double-quoted arguments to PowerShell from cmd. For example, this works:
powershell -noexit "sl \"C:\Program Files\""
This leads me to suspect that it's sh that's having a problem with the path arguments 'dirA/dir B' and "dirA/dir B" -- especially if the suggestions above don't work.
Speaking of PowerShell, you might want to give that a try instead. Any of the following should work:
sh -c 'myscript.sh "dirA/dir B"'
sh -c "myscript.sh 'dirA/dir B'"
sh -c 'myscript.sh ''dirA/dir B'''
sh -c "myscript.sh `"dirA/dir B`""
