Variable doesn't expand while passing as a parameter to docker-compose command inside heredoc block - bash

I'm trying to run a docker-compose command over ssh from a bash script. I have an executable shell script, deploy.sh, which contains the snippet below:
ssh -tt -o StrictHostKeyChecking=no root@142.32.45.2 << EOF
DIR=test
echo \${DIR}
docker-compose --project-name \${DIR} up -d
EOF
But the DIR variable doesn't get expanded when it is passed as a parameter to docker-compose; the command executes as shown below, even though echo \${DIR} prints the correct output, i.e. test.
docker-compose --project-name ${DIR} up -d

ssh -tt -o StrictHostKeyChecking=no root@142.32.45.2 <<'EOF'
DIR=test
echo ${DIR}
docker-compose --project-name ${DIR} up -d
EOF
Get rid of the \$; it is preventing your variable expansion. But on second review, I see that's your intention. If you want to prevent all variable expansion until your code gets executed on the remote host, put the heredoc word in quotes, as shown above. That way, the $ gets passed through to the script that ssh runs on the remote side.
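To see the two behaviors side by side, here is a minimal sketch (host, LOCAL and REMOTE_ONLY are placeholders) contrasting an unquoted delimiter, where the local shell expands variables before ssh runs, with a quoted delimiter, where the text reaches the remote shell verbatim:
# Unquoted delimiter: the local shell expands $LOCAL before ssh runs,
# while the escaped \$REMOTE_ONLY survives for the remote shell to expand.
LOCAL=from-local
ssh host <<EOF
echo "$LOCAL"
REMOTE_ONLY=remote-value
echo \$REMOTE_ONLY
EOF
# Quoted delimiter: nothing is expanded locally, so no backslashes are needed;
# the remote shell sees the text exactly as written.
ssh host <<'EOF'
DIR=test
docker-compose --project-name ${DIR} up -d
EOF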
As a second suggestion (as per my comment below), I would consider simply sending a parameterized script to the remote host and then executing it (after changing its permissions).
# Make script
cat >compose.sh <<'EOF'
#!/bin/bash
DIR=$1
docker-compose --project-name "$DIR" up -d
EOF
scp -o StrictHostKeyChecking=no compose.sh root@142.32.45.2:
ssh -o StrictHostKeyChecking=no root@142.32.45.2 chmod +x ./compose.sh \; ./compose.sh test
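If copying the file over is not desirable, a variation on the same idea (a sketch, not tested against your hosts) is to stream the script to a remote bash -s over stdin and pass the project name as a positional parameter, which skips the scp step:
# compose.sh is the local file created above; "test" becomes $1 on the remote side.
ssh -o StrictHostKeyChecking=no root@142.32.45.2 'bash -s -- test' < compose.sh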

How can I use expect to ssh and run a shell command?

I want to mkdir on a remote machine, but I don't know whether the directory already exists. How can I do this?
I use
spawn ssh username@ip bash -c [ -d $dest_file ] && echo ok || mkdir -p $dest_file
and it returns:
while executing
"-d $dest_file "
invoked from within
"[ -d $dest_file ] && echo ok || mkdir -p $dest_file"
(file "mkdir.exp" line 22)
I can't use an ssh key because my IP is dynamic.
First, you probably don't need bash -c because ssh is already executing the command with your remote shell.
Secondly, you're not sufficiently quoting your ssh arguments. You're writing an expect script, which uses the Tcl programming language, and [ is a special character: Tcl attempts to evaluate its contents as a Tcl command and substitute the result. For this to work properly, you need to escape the opening [ so that Tcl interprets it literally:
spawn ssh localhost \[ -d $dest_file ] && echo ok || mkdir -p $dest_file
This seems to work correctly on my system, but as I indicate in a comment it would be much easier to drop all the conditionals and just run:
spawn ssh localhost mkdir -p $dest_file
This accomplishes the same thing and doesn't run afoul of any quoting issues.

Passing variables to SSH [duplicate]

The following code loops through the states in an array and passes each state to a server via ssh:
STATES="NY CO"
arr_states=(${STATES//' /'/ })
for i in "${arr_states[@]}"; do
state=$i
ssh -o SendEnv=state jenkins@server sh -s << 'EOF'
sudo su
cd /home/jenkins/report
psql -d db -c "$(sed 's/state_name/'"$state"'/' county.sql)" -U user
echo $state
EOF
done
The output of echo $state in the above is an empty string, even when I pass it NY.
When I change the 'EOF' to EOF, the output of echo $state is the string I passed (NY), but then it says the file county.sql does not exist.
How do I get it to recognize both the variable I pass and the file on the remote host that I am trying to run?
As an approach that doesn't require you to do any manual escaping of your code (which frequently becomes a maintenance nightmare, since the code then has to change whenever you modify where it's expected to run), consider defining a function and using declare -f to ask the shell to generate the code that defines that function for you.
The same can be done with variables using declare -p. Thus, you can pass both a function containing the remote code and the variables that code needs to operate, like this:
#!/usr/bin/env bash
# This is run on the remote server _as root_ (behind sudo su)
remotePostEscalationFunc() {
    cd /home/jenkins/report || return
    if psql -d db -U user -c "$(sed -e "s/state_name/${state}/" county.sql)"; then
        echo "Success processing $state" >&2
    else
        rc=$?
        echo "Failure processing $state" >&2
        return "$rc"
    fi
}
# This is run on the remote server as the jenkins user (before sudo).
remoteFunc() {
    sudo su -c "$(declare -p state); $(declare -f remotePostEscalationFunc); remotePostEscalationFunc"
}
# Everything below here is run locally.
arr_states=( NY CO )
for state in "${arr_states[@]}"; do
    ssh jenkins@server 'bash -s' <<EOF
$(declare -f remoteFunc remotePostEscalationFunc); $(declare -p state); remoteFunc
EOF
done
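For context, declare -p emits a declaration that recreates the variable and declare -f emits the function's definition, so the heredoc above ships ordinary, correctly quoted bash source to the remote side instead of hand-escaped strings. A quick local illustration:
state=NY
declare -p state    # prints: declare -- state="NY"
# Values containing quotes, spaces or semicolons are emitted correctly escaped too,
# which is why this is safer than splicing $state into a command string by hand.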
You were almost right with the change from 'EOF' to EOF. You are just missing a backslash (\) before $(sed. So the following should work:
arr_states=(${STATES//' /'/ })
for i in "${arr_states[@]}"; do
state=$i
ssh -o SendEnv=state jenkins@server sh -s << EOF
sudo su
cd /home/jenkins/report
psql -d db -c "\$(sed 's/state_name/'"$state"'/' county.sql)" -U user
echo $state
EOF
done
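To see why this version works, consider what the remote shell actually receives. With the unquoted delimiter, the local shell expands $state but leaves the escaped \$( alone, so with state=NY the heredoc body sent over ssh is roughly:
sudo su
cd /home/jenkins/report
psql -d db -c "$(sed 's/state_name/'"NY"'/' county.sql)" -U user
echo NY
The command substitution therefore runs on the remote host, where county.sql exists, while the state name has already been filled in locally.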

Bash Script: How to run command with $myVar as one of the arguments?

I have a bash script that SSHes into 2 machines and runs identical commands.
I'd like to store this in a var, but I'm not sure how to reference the contents of the var when running the command
ssh -o StrictHostKeyChecking=no ubuntu@123.123.123 -i ./travis/id_rsa <<-END
sudo su;
...
echo "Done!";
END
ssh -o StrictHostKeyChecking=no ubuntu@456.456.456 -i ./travis/id_rsa <<-END
sudo su;
...
echo "Done!";
END
I tried something like this but it didn't work:
script=$(cat <<-END
sudo su;
...
echo "Done!";
END
)
ssh -o StrictHostKeyChecking=no ubuntu@123.123.123 -i ./travis/id_rsa $script
ssh -o StrictHostKeyChecking=no ubuntu@456.456.456 -i ./travis/id_rsa $script
If I am at all able to understand what you are asking, you really don't want to put the commands in a variable.
for host in 123.123.123 456.456.456; do
ssh -o StrictHostKeyChecking=no -i ./travis/id_rsa ubuntu@"$host" <<-\____here
sudo -s <<-________there
: your script goes here
________there
echo "Done."
____here
done
If you really wanted to assign a multi-line variable (but trust me, you don't), the syntax for that is simply
script='sudo -s <<\____there
: your commands
____there
echo "Done."'
But there really is no need to do this, and it actually complicates things down the line. You see, passing properly quoted strings as arguments to ssh is extremely tricky: you have the local shell and the remote shell, and both require additional quoting or escaping to pass shell metacharacters through correctly. The usual caveats with eval also apply, since passing executable code as a string for the remote shell to run is effectively a hidden eval.
I believe you want to do something like this:
cmds="sudo bash -c 'command1; command2; command3;'"
ssh ... "$cmds"
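Putting that together with the two hosts from the question, a minimal sketch (command1 through command3 are placeholders for the real commands):
cmds="sudo bash -c 'command1; command2; command3'"
for host in 123.123.123 456.456.456; do
    ssh -o StrictHostKeyChecking=no -i ./travis/id_rsa ubuntu@"$host" "$cmds"
done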

How to include the `-e` argument in the `$@` bash variable?

I'm creating a bash script which will wrap a docker run command, passing in all arguments. The docker cmd being run has a -e parameter like so:
docker run --rm -it --name some_name -v $(pwd):/some_dir some_image some_command -e -r -t
However, the bash script for some reason is not passing in the -e parameter.
For example I have the following test.sh script.
#!/bin/bash
echo "Echoing the command arguments."
echo $@
The -e parameter is not passed through to $@ when in the first position.
$ ./test.sh -e -r -t
Echoing the command arguments.
-r -t
$ ./test.sh -r -e -t
Echoing the command arguments.
-r -e -t
Ultimately I would like to be able to call the docker command as follows, simply passing in all given parameters from the bash script to the docker command.
docker run --rm -it --name some_name -v $(pwd):/some_dir some_image some_command $@
This may be confusing to users if they expect the -e parameter to be passed through and the associated activity does not happen.
Is there a way to allow the -e parameter to pass through?
Change your test to:
#!/bin/bash
echo "Echoing the command arguments:"
printf ' - %q\n' "$@"
...and you'll see -e present.
A POSIX-conforming implementation of echo will print -e on output when given that string as its first argument. bash's implementation does not comply unless both set -o posix and shopt -s xpg_echo runtime options are set.
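A quick way to convince yourself that it is echo, not "$@", that swallows the flag (run in an interactive bash):
$ echo -e -r -t
-r -t
$ printf ' - %q\n' -e -r -t
 - -e
 - -r
 - -t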

Run 'export' command Over SSH

When I run the following from my bash shell:
bash -c '(export abc=123 && echo $abc)'
The output is "123". But when I run it over ssh:
ssh remote-host "bash -c '(export abc=123 && echo $abc)'"
There is no output. Why is this? Is there a way around this? That is, is there a way to set an environment variable for a command I run over ssh?
Note: When I replace echo $abc with something standard like echo $USER, the ssh command prints out the username on the remote machine as expected, since it is already set.
I am running RHEL 5 Linux with OpenSSH 4.3
That is because, when using
ssh remote-host "bash -c '(export abc=123 && echo $abc)'"
the variable gets expanded by the local shell (as is the case with $USER) before ssh executes. Escape the $ by using \$ and it will work fine:
ssh remote-host "bash -c '(export abc=123 && echo \$abc)'"
On a side note:
You don't need to export just for this.
You don't need to wrap it in ().
Like so:
ssh remote-host "bash -c 'abc=123 && echo \$abc'"
Heck, you can even leave out the bash -c ... stuff, as the ssh manpage states:
If command is specified, it is executed on the remote host instead of a login shell.
But these may be specific to your task ;)
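So a trimmed-down version of the original command, assuming the remote login shell is POSIX-like, would be:
# Single quotes keep $abc from being expanded by the local shell.
ssh remote-host 'abc=123 && echo $abc'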

Resources