How to forward psql shell from kubectl - bash

I'm trying to make my life easier and am writing a bash script. One of its functions lets me exec into a pod with Postgres access, get the credentials I need, and run the interactive psql shell.
However, upon running
kubectl <flags> exec $podname -- bash -c 'get_credentials && psql <psql args>' -i -t
the terminal hangs.
I can't directly connect to the database, and the process to get the credentials is kinda cumbersome. Is there some bash concept I'm not understanding?

kubectl <flags> exec $podname
That exec is missing its -i and -t flags (--stdin=true and --tty=true), which tell Kubernetes that you want your terminal and the remote terminal attached to one another:
kubectl exec -it $podname -- etc etc
If you intended the -i and -t at the end of your cited example to be passed to exec, be aware that the double dashes explicitly switch off argument parsing for kubectl, so there is no way it will see them.
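Putting that together, a corrected form of the original command might look like the following (get_credentials and the psql arguments are placeholders carried over from the question):
kubectl <flags> exec -it "$podname" -- bash -c 'get_credentials && psql <psql args>'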

Related

How to pass ALL environment variables to container with docker exec

It's possible to set one or more environment variables in the container while doing docker exec, for example:
docker exec -ti -e VAR=1 -e HOME container_name command
But I would like to pass all the shell's environment variables without explicitly specifying them individually. Essentially the equivalent of sudo -E, although it's a different thing.
According to the documentation, there is no such option. But one hack would be something like:
env > env_vars && docker exec -ti --env-file ./env_vars container_name command
Which works, but I'm looking for a simple, one-step solution that doesn't involve creating a temporary file. Perhaps a bash trick I don't know or haven't thought of yet. Thanks.
Please note: Passing all environment variables is not recommended and defeats the purpose of container process isolation. This question is for knowledge, not about what should be done. Also, the question is specifically about running a temporary command in an existing container with docker exec, not about docker run.
With Bash, it seems process substitution works:
docker run --rm -ti --env-file <(env) alpine sh
Note, this creates a temporary FIFO behind the scenes anyway.
Note, this will not work properly with variables containing newlines; they are cut off at the newline. You should do something along these lines instead (I tried to keep it short):
readarray -d '' -t args < <(env -0 | sed -z 's/^/--env\x00/')
docker run --rm -ti "${args[@]}" alpine sh
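As a quick, hedged sanity check (the variable FOO and its value are purely illustrative), you can confirm that a host variable reaches the container:
export FOO=bar
readarray -d '' -t args < <(env -0 | sed -z 's/^/--env\x00/')
docker run --rm "${args[@]}" alpine sh -c 'echo "$FOO"'   # prints: bar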

Docker run to execute script in mount without exiting container automatically?

I have a simple bash script 'test.sh' in the root of the mounted folder:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
However, when I try to mount the folder and start the container with docker run as follows:
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container -v mount-folder/:/home/rstudio/ image_name /home/rstudio/test.sh
the above run command starts the container, but it exits automatically.
I am looking for a docker run command that starts the container, mounts the folder, and then executes the bash script in the mounted folder without exiting the container.
(I don't want to use docker exec, as it is not suitable for my use case for other reasons.)
Dockerfile:
FROM rocker/rstudio:4.0.2
# some RUN commands to install necessary R packages
EXPOSE 8787
CMD tail -f /dev/null
Other details:
The image I am using is RStudio Server from rocker, and the container runs on an AWS Ubuntu machine.
Edit:
I have also tried adding CMD tail -f /dev/null at the end of the Dockerfile, as suggested in http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/, but even then the container exits.
A Docker container shuts down as soon as its main process finishes, even when run in detached mode. I think this article proposes a nice solution:
http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/
You could add tail -f /dev/null as the last command in your bash script instead so that the script will never halt unless it is told to do so.
When you do docker run [options] image_name [cmd], the command you specify becomes the command for the container and replaces the command specified in the Dockerfile (that's why adding CMD tail -f /dev/null doesn't do anything). If you ran your container without the /home/rstudio/test.sh at the end, it should stay running.
The solution would be to update your script to add the tail command at the end.
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
exec tail -f /dev/null
If you can't update that script, you could instead add it to the command being passed to the container, with something like:
docker run [options] image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'
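As a quick, hedged check (the container name comes from the --name container flag in the question), you can verify that the container stays up and see the script's output:
docker ps --filter name=container    # the container should show an "Up ..." status
docker logs container                # stdout/stderr from test.sh and controller.R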

Docker run bash --init-file

I'm trying to create an alias to help debug my docker containers.
I discovered bash accepts a --init-file option which ought to let us run some commands before passing over to interactive mode.
So I thought I could do
docker-bash() {
docker run --rm -it "$1" bash --init-file <(echo "ls; pwd")
}
But those commands don't appear to be running:
% docker-bash c7460dfcab50
root@9c6f64a9db8c:/#
Is it an escaping issue or.. what's going on?
bash --init-file <(echo "ls; pwd")
Run on its own in a terminal on my host machine, this works as expected (runs the commands, then starts a new bash instance).
In points:
The <(...) is a bash extension called process substitution.
From the manual above: Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files.
The process substitution works like this:
bash creates a fifo in /tmp or creates a new file descriptor in /dev/fd.
The filename, either /tmp/<something> or /dev/fd/<number>, is substituted for <(...) when the command is executed.
So for example echo <(echo 1) outputs /dev/fd/63.
Docker works by creating a new environment that is separated from the host. That means that:
Processes inside docker do not inherit file descriptors from the host process:
So /dev/fd/* files are not inherited.
Processes inside docker access an isolated filesystem tree.
So processes can't access /tmp/* files from the host.
So, to summarize: docker run -ti --rm alpine cat <(echo 1) will not work, because the filename substituted for <(...) is not available from the docker environment.
An easy workaround would be to just:
docker run -ti --rm alpine sh -c 'ls; pwd; exec sh'
Or use a temporary file:
echo "ls; pwd" > /tmp/tempfile
docker run -ti -v /tmp/tempfile:/tmp/tempfile bash bash --init-file /tmp/tempfile
For my use-case I wanted to set an alias which won't persist if we re-exec the shell. However, aliases can be written to ~/.bashrc which will be reloaded on the subsequent exec. Ergo,
docker-bash() {
docker run --rm -it "$1" bash -c $'set -o xtrace; echo "alias ll=\'ls -lAhtrF --color=always\'" >> ~/.bashrc; exec "$0"'
}
This works. --rm should clean up any files we create anyway, if I understand correctly how docker works.
Or perhaps this is a nicer way to write it:
docker-bash() {
read -r -d '' BASHRC << EOM
alias ll='ls -lAhtrF --color=always'
EOM
docker run --rm -it "$1" bash -c "echo \"$BASHRC\" >> ~/.bashrc; exec \"\$0\""
}
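A hypothetical usage example (the image name is purely illustrative; any image that ships bash will do):
docker-bash ubuntu:22.04
# inside the container, `ll` now expands to `ls -lAhtrF --color=always`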

Source script on interactive shell inside Docker container

I want to open an interactive shell which sources a script to set up the bitbake environment in a repository that I bind mount:
docker run --rm -it \
--mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repository \
my_image /bin/bash -c "cd /mnt/bb_repository/oe-core && source build/conf/set_bb_env.sh"
The problem is that the -it argument does not seem to have any effect, since the shell exits right after executing cd /mnt/bb_repository/oe-core && source build/conf/set_bb_env.sh
I also tried this:
docker run --rm -it \
--mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repository \
my_image /bin/bash -c "cd /mnt/bb_repository/oe-core && source build/conf/set_bb_env.sh && bash"
This spawns an interactive shell, but none of the macros defined in set_bb_env.sh are available in it.
Is there a way to get a tty with the script properly sourced?
The -it flag conflicts with the command you run, in that you're telling docker to create the pseudo-terminal (pty) and then running a command in that terminal (bash -c ...). When that command finishes, the run is done.
What some people have done to work around this is to have only exported variables in their sourced environment and make the last command exec bash. But if you need aliases or other items that aren't inherited that way, then your options are a bit more limited.
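A minimal, hedged sketch of that workaround, assuming set_bb_env.sh does nothing but set and export variables (aliases and functions would still be lost):
docker run --rm -it \
--mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repository \
my_image /bin/bash -c "cd /mnt/bb_repository/oe-core && source build/conf/set_bb_env.sh && exec bash"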
Instead of running the source in a parent shell, you could run it in the target shell. If you modified your .bash_profile to include the following line:
[ -n "$DOCKER_LOAD_EXTRA" -a -r "$DOCKER_LOAD_EXTRA" ] && source "$DOCKER_LOAD_EXTRA”
and then had your command be:
... /bin/bash -c "cd /mnt/bb_repository/oe-core && DOCKER_LOAD_EXTRA=build/conf/set_bb_env.sh exec bash"
that may work. This tells your .bash_profile to load this file when the env variable is already set, but not otherwise. (There can also be the -e flag on the docker command line, but I think that sets it globally for the entire container, which is probably not what you want.)
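Putting the pieces together, the full invocation might look like this (a sketch, assuming the image's .bash_profile contains the hook above; note that exec bash starts a non-login shell, so depending on the image the hook may need to live in ~/.bashrc instead):
docker run --rm -it \
--mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repository \
my_image /bin/bash -c "cd /mnt/bb_repository/oe-core && DOCKER_LOAD_EXTRA=build/conf/set_bb_env.sh exec bash"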

Exec sed command to a docker container

I'm trying to change a config file that is inside a docker container.
docker exec container_name sed -ire '/URL_BASE = /c\api.myapiurl' tmp/config.ini
Executing this sed command locally works just fine, but when I try to execute this in the container I receive the following error message.
sed: cannot rename tmp/config.ini: Operation not permitted
What I need to do is replace the 'URL_BASE =' line in 'config.ini' before deploying the container to my server.
I don't know why the sed command is trying to rename the file when it's not supposed to.
Any ideas?
What I've tried
I tried to execute with the --privileged flag, but it didn't work. I also tried to change the file permissions with chmod, but couldn't, for the same permission reason.
docker exec --privileged container_name sed -ire '/URL_BASE = /c\api.myapiurl' tmp/config.ini
Result: sed: cannot rename tmp/config.ini: Operation not permitted
Chmod
docker exec --privileged container_name chmod 755 tmp/config.ini
Result: chmod: changing permissions of 'tmp/config.ini': Operation not permitted
I also have tried execute with sudo before docker but didn't work either.
Nehal is absolutely right: sed -i works by creating a temporary file and renaming it over the original, so you just need a different approach. A commonly used one on Linux is heredocs.
Taking just the first lines from the documentation, a here document is a special-purpose code block. It uses a form of I/O redirection to feed a command list to an interactive program.
It can help us with docker exec as follows:
docker exec -i container_name bash <<EOF
sed -ire '/URL_BASE = /c\api.myapiurl' /tmp/config.ini
grep URL_BASE /tmp/config.ini
# any other command you like
EOF
Be aware of the -t flag, which is commonly used when running bash: it allocates a pseudo-TTY, and we don't really need that here.
Also, to be safe always use absolute paths like /tmp/config.ini.
docker exec -i <container name> sed -i 's/xxx/${yyy}/g' path/filename.yaml
This is working for me.
