Nested quoting in bash

I'm connecting to a remote server via SSH:
ssh -i ~/.ssh/pk.pem user@server
and then, on that server, open bash within a Docker container:
docker exec -it $(docker ps | grep ecs-worker-low | cut -d ' ' -f1) bash
This works fine. (Note that I need to get the container ID like this. I'm not able to name the container.)
I would like to combine the two commands, so that I only run one command and get the shell within the container. This can be done with something like this:
ssh -i ~/.ssh/pk.pem user@server -t "bash -c 'docker exec -it $(docker ps | grep ecs-worker-low | cut -d ' ' -f1) bash'"
However this doesn't work because of the nested single quotes. I haven't found any way around this. Can you please help me? Thank you.

You can avoid the use of cut with --filter and --format
ssh -t -i ~/.ssh/pk.pem user@server 'docker exec -it $(docker ps --filter ancestor=ecs-worker-low --format {{.ID}}) bash'
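Note that the ancestor filter matches on the image; if ecs-worker-low is actually part of the container name (which the original grep would also match), a name filter is the closer equivalent (a variant of the same idea, not verified against your setup):
ssh -t -i ~/.ssh/pk.pem user@server 'docker exec -it $(docker ps --filter name=ecs-worker-low --format "{{.ID}}") bash'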

It's probably easiest to use a heredoc:
ssh -i ~/.ssh/pk.pem user@server -t << \EOF
docker exec -it $(docker ps | grep ecs-worker-low | cut -d ' ' -f1) bash
EOF
Make sure you use a non-interpolating heredoc. If you omit the backslash on the initial delimiter, the command substitution will happen on the local host instead of the remote server.
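A quick way to see the difference between the two delimiters (a toy sketch; hostname just shows where the command substitution runs):
# Quoted delimiter: $(hostname) is sent verbatim and runs on the server.
ssh -i ~/.ssh/pk.pem user@server << \EOF
echo "running on $(hostname)"
EOF
# Unquoted delimiter: the local shell expands $(hostname) before ssh runs.
ssh -i ~/.ssh/pk.pem user@server << EOF
echo "running on $(hostname)"
EOF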

Swap the quotes:
ssh -i ~/.ssh/pk.pem user@server -t 'bash -c "docker exec -it $(docker ps | grep ecs-worker-low | cut -d " " -f1) bash"'
All the double quotes are literal characters as far as ssh is concerned, and the command substitution creates a new context so that the first inner quote does not close the first outer quote. That said...
... Simplifying matters, you likely don't need the outer bash; ssh can run docker for you directly:
ssh -i ~/.ssh/pk.pem user@server -t 'docker exec -it $(docker ps | grep ecs-worker-low | cut -d " " -f1) bash'
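The "new context" behaviour is easy to see in isolation, since the double quotes inside the command substitution do not terminate the outer double-quoted string:
$ echo "outer $(echo "inner") outer"
outer inner outer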

Related

Bash Script fails with error: OCI runtime exec failed

I am running the below script and getting an error.
#!/bin/bash
webproxy=$(sudo docker ps -a --format "{{.Names}}"|grep webproxy)
webproxycheck="curl -k -s https://localhost:\${nginx_https_port}/HealthCheckService"
if [ -n "$webproxy" ] ; then
sudo docker exec $webproxy sh -c "$webproxycheck"
fi
Here is my docker ps -a output
$sudo docker ps -a --format "{{.Names}}"|grep webproxy
webproxy-dev-01
webproxy-dev2-01
When I run the command individually it works. For example:
$sudo docker exec webproxy-dev-01 sh -c 'curl -k -s https://localhost:${nginx_https_port}/HealthCheckService'
HEALTHCHECK_OK
$sudo docker exec webproxy-dev2-01 sh -c 'curl -k -s https://localhost:${nginx_https_port}/HealthCheckService'
HEALTHCHECK_OK
Here is the error I get.
$ sh healthcheck.sh
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"webproxy-dev-01\": executable file not found in $PATH": unknown
Could someone please help me with the error. Any help will be greatly appreciated.
Because the variable contains two tokens (on two separate lines), that's what it expands to. You are running
sudo docker exec webproxy-dev-01 webproxy-dev2-01 ...
which of course is an error.
It's not clear what you actually expect to happen, but if you want to loop over those values, that's
for host in $webproxy; do
sudo docker exec "$host" sh -c "$webproxycheck"
done
which will conveniently loop zero times if the variable is empty.
If you just want one value, maybe add head -n 1 to the pipe, or pass a more specific regular expression to grep so it only matches one container. (If you have control over these containers, probably run them with --name so you can unambiguously identify them.)
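For example, anchoring the pattern keeps the second name out of the result (this assumes webproxy-dev-01 is the container you actually want):
webproxy=$(sudo docker ps -a --format "{{.Names}}" | grep '^webproxy-dev-01$')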
Based on your given script, you are trying to "exec" the following
sudo docker exec webproxy-dev2-01
webproxy-dev-01 sh -c "curl -k -s https://localhost:${nginx_https_port}/HealthCheckService"
As you can see, here is your error:
sudo docker exec webproxy-dev2-01
webproxy-dev-01 [...]
The problem is this line:
webproxy=$(sudo docker ps -a --format "{{.Names}}"|grep webproxy)
which results in the following (you also posted this):
webproxy-dev2-01
webproxy-dev-01
Now, the issue is that your docker exec command takes both container names (coming from the variable assignment $webproxy), interpreting the second entry (webproxy-dev-01, separated by \n) as the command to exec. That command is not a valid executable and cannot be found: that's what the error tells you.
A workaround would be the following:
webproxy=$(sudo docker ps -a --format "{{.Names}}"| grep webproxy | head -n 1)
It only grabs the first entry of your output. You can of course adapt this to run in a loop instead.
A small snippet:
#!/bin/bash
webproxy=$(sudo docker ps -a --format "{{.Names}}"| grep webproxy )
echo ${webproxy}
webproxycheck="curl -k -s https://localhost:\${nginx_https_port}/HealthCheckService"
while IFS= read -r line; do
if [ -n "$line" ] ; then
echo "sudo docker exec ${line} sh -c \"${webproxycheck}\""
fi
done <<< "$webproxy"
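The echo above is only a dry run that prints each command. To actually execute the health check, use the same line without the echo (assuming ${nginx_https_port} is defined inside each container, as in the original commands):
sudo docker exec "$line" sh -c "$webproxycheck"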

running a for loop over ssh remotely

Team, I have the below code that does an ssh login and stays connected. I want to run some commands in a for loop but I'm getting some syntax errors. The same exact commands work when I manually log in to the node, sudo bash, and just copy-paste them.
code
read -p "specify just the list of nodes " nodes
for node in $nodes
do
ssh -q -F $HOME/.ssh/ssh_config -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -t $node.team.net \
"for line in `docker ps | grep test | awk '{print $1}'`;
do
POD_ID=$(docker inspect $line --format='{{ index .Config.Labels "io.kubernetes.pod.uid" }}')
POD_NAME=$(docker inspect $line --format='{{ index .Config.Labels "io.kubernetes.pod.name"}}')
POD_VOL="/var/lib/kubelet/pods/$POD_ID/volumes"
POD_DU=$(du -sh $POD_VOL < /dev/null)
HOSTNAME=$(hostname)
AGENT_SHA=$(docker inspect $line --format='{{ index .Config.Image }}' | cut -d ':' -f2)
STARTED_AT=$(docker inspect $line --format='{{ .State.StartedAt }}')
echo $HOSTNAME, $POD_NAME, $POD_DU, $AGENT_SHA, $STARTED_AT
done"
printf "\n"
done;
output
Your new SSH certificate is ready for use!
specify just the list of nodes node1
"docker inspect" requires at least 1 argument.
See 'docker inspect --help'.
Usage: docker inspect [OPTIONS] NAME|ID [NAME|ID...]
Return low-level information on Docker objects
"docker inspect" requires at least 1 argument.
See 'docker inspect --help'.
expected
container1 45GB
..
..
It's better to send a file with the script content and run it. Something like:
Copy script
for a in {server1,server2,serverN};
do
scp your_script.sh root@$a:/path/to/your_script.sh
done
Exec script
for a in {server1,server2,serverN};
do
ssh root@$a "sh /path/to/your_script.sh par1 par2 parn";
done
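If copying the file around is inconvenient, a variation on the same idea is to feed the script to the remote shell over stdin: bash -s reads commands from stdin, and par1 par2 parn become its positional parameters (a sketch, assuming the script does not need an interactive terminal):
for a in {server1,server2,serverN};
do
ssh root@$a 'bash -s par1 par2 parn' < your_script.sh;
done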

Direct group of commands into `docker exec`

I have the following command that works fine and prints foo before returning:
docker exec -i <id> /bin/sh < echo "echo 'foo'"
I want to direct multiple commands into the container with one pipe, for example echo 'foo' and ls /. I have tried the following:
This fails because it runs the commands on the host and pipes the output into the container:
{
echo "foo"
ls /
} | docker exec -i <id> /bin/sh
This fails because it has bad syntax. It also runs on the host:
{
echo "foo"
ls /
} | docker exec -i <id> /bin/sh
This one fails, but I would like to not use an array of strings anyway:
for COMMAND in 'echo "foo"' 'ls /'
do
docker exec -i <id> /bin/sh < echo $COMMAND
done
I've also tried several other methods like piping commands into tee or echo but haven't had any luck. If you would like to know why I want to do this seemingly ridiculous thing, it's because:
This is a short script that I would like to keep all in one place
I would like to use syntax highlighting, so I don't want to store it all in a list of strings
The container has the programs the script should run and the host does not
This is an automatic process that I would like to trigger with crontab on the host
You can run a group of commands in the following fashion:
docker exec -i <id> /bin/sh -c 'echo "foo"; ls -l'
OR
docker exec -i 996eee5d121d /bin/sh -c 'echo 'foo'; ls -l'
OR
docker exec -i 996eee5d121d /bin/sh -c 'echo foo; ls -l'
If you want to run more than two commands, just keep appending ; after each command, like:
docker exec -i 996eee5d121d /bin/sh -c 'echo "foo"; ls -l; ls -a'
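If a later command should only run when the earlier ones succeed, && works in place of ; with the same quoting:
docker exec -i 996eee5d121d /bin/sh -c 'echo "foo" && ls -l && ls -a'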
Use a here document.
docker run -i --rm alpine /bin/sh <<EOF
echo abc
ls /
EOF
Note the difference between a quoted and an unquoted here-document delimiter.
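For example, an unquoted delimiter bakes host-side variables into the script before it reaches the container, while a quoted delimiter sends the text verbatim (a toy sketch; HOST_MSG exists only on the host):
HOST_MSG="hello from the host"
# Unquoted delimiter: $HOST_MSG is expanded locally, so the container prints the message.
docker run -i --rm alpine /bin/sh <<EOF
echo "$HOST_MSG"
EOF
# Quoted delimiter: the literal text $HOST_MSG is sent; the container's shell expands it to an empty string.
docker run -i --rm alpine /bin/sh <<'EOF'
echo "$HOST_MSG"
EOF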
docker exec -i <id> /bin/sh < echo "echo 'foo'"
I think you meant to do:
docker exec -i <id> /bin/sh < <(echo "echo 'foo'")
which is just the same as:
docker exec -i <id> /bin/sh <<<"echo 'foo'"
Edit: there is a cool little trick. The idea is to pipe the script itself, except the first lines, to another subprocess; it's sometimes used by installer scripts:
#!/bin/sh
# output this script except first 4 lines to docker
tail -n+5 "$0" | docker run -i --rm alpine /bin/sh -x
exit # we exit original script
#!/bin/sh
# inside docker now
echo abc
ls /
Execution:
$ bash -x ./script.sh
+ tail -n+5 ./script.sh
+ docker run -i --rm alpine /bin/sh -x
+ echo abc
+ ls /
abc
bin
...
var
+ exit
In a similar fashion you could use sed or another parsing tool to extract only the relevant part between some marks, for example.
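A sketch of that marker idea (the BEGIN/END labels are made up; sed prints only the region between them, and the marker lines themselves are harmless comments for the container's shell):
#!/bin/sh
# run only the marked region of this file inside the container, then stop
sed -n '/^# BEGIN-CONTAINER/,/^# END-CONTAINER/p' "$0" | docker run -i --rm alpine /bin/sh
exit
# BEGIN-CONTAINER
echo abc
ls /
# END-CONTAINER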
I found a gist that explained how to pipe commands into docker exec:
echo "echo foo" | docker exec -i <id> /bin/sh -
Now we need a way to pipe multiple commands. Command groups won't work because they run on the host, and semicolon-separated commands can get messy. I thought of writing a function and getting just its body; it turns out you can do that with a simple declare and sed call.
You can combine all these pieces to pipe a command into the container:
function func {
echo "foo"
ls /
}
declare -f func | sed '1,2d;$d' | docker exec -i <id> /bin/bash -
Syntax highlighting still works in the function and it is easy to read.
If you want to use environment variables that are on the host in the container you have to list them manually in docker exec like so:
... | docker exec -i -e VAR=$VAR <id> /bin/bash -
Edit: I'm leaving this here as a possible solution, but the accepted answer is the proper solution I am using.

Read variables in nested quotes

I want to ssh into a host and start a container and run some commands. So the code will be like this:
ssh $host 'screen -L -d -m bash -c "docker run "\
"--network=host -v ~/data:/data myimage:${TAG_NAME}"\
" /bin/bash -c \" some command.... \""'
The question is simple: since I was using single quotes, I can't read ${TAG_NAME}. Is there any way to write this kind of nested quoting and still pass the variable?
You can stop and start your single quotes to include the environment variable, like so:
echo 'foo'"$HOME"'foo'
For your example, the way to include an env var (from your local system) in the command that runs on $host would be:
ssh $host 'screen -L -d -m bash -c "docker run'\
' --network=host -v ~/data:/data myimage:'"$TAG_NAME"\
' /bin/bash -c \" some command.... \""'
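When the nesting gets hard to follow, a cheap sanity check is to replace ssh $host with echo and inspect the exact string the remote shell will receive:
echo 'screen -L -d -m bash -c "docker run'\
' --network=host -v ~/data:/data myimage:'"$TAG_NAME"\
' /bin/bash -c \" some command.... \""'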

Shell : How to get a container's name containing some string

I have a list of containers whose names are like the following:
container 1: myApp_ihm.dfgdfgdfgdfvdfdfbvdfvdfv
container 2: myApp_back.uirthjhiliszfhjuioomlui
...
container 3: myApp_database.piyrjfhjyukyujfkgft
I have to execute some commands on the container whose name contains ihm (the first one in my example).
In order to exec my commands, I usually do:
docker exec -it ihm bash
so ihm should be replaced by something that resolves to the first container's name:
myApp_ihm.dfgdfgdfgdfvdfdfbvdfvdfv
Suggestions?
docker exec -it $(docker ps | grep myApp_ihm | awk '{print $1}') /bin/bash
docker exec -it $(docker ps --format "{{.Names}}" | grep "ihm") bash
This worked for me; I added it to a bash script and saved myself 30-60 seconds of typing/copy-pasting every time I want to go into my container.
docker exec -it $(docker ps --format "{{.ID}} {{.Command}}" | grep /home/app/ | awk '{print $1}') /bin/bash
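The grep can also be dropped entirely, since docker ps can filter on the container name itself (a variant of the answers above; it assumes exactly one running container matches ihm):
docker exec -it $(docker ps --filter "name=ihm" --format "{{.Names}}") bash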
