Exec sed command in a docker container - bash

I'm trying to change a config file that is inside a docker container.
docker exec container_name sed -ire '/URL_BASE = /c\api.myapiurl' tmp/config.ini
Executing this sed command locally works just fine, but when I try to execute this in the container I receive the following error message.
sed: cannot rename tmp/config.ini: Operation not permitted
What I need to do is replace the 'URL_BASE =' line in 'config.ini' before deploying the container to my server.
I don't know why the sed command is trying to rename the file when it's not supposed to.
Any ideas?
What I've tried
I tried executing with the --privileged flag, but it didn't work. I also tried to change the file permissions with chmod, but I couldn't, for the same permission reason.
docker exec --privileged container_name sed -ire '/URL_BASE = /c\api.myapiurl' tmp/config.ini
Result: sed: cannot rename tmp/config.ini: Operation not permitted
Chmod
docker exec --privileged container_name chmod 755 tmp/config.ini
Result: chmod: changing permissions of 'tmp/config.ini': Operation not permitted
I have also tried running it with sudo before docker, but that didn't work either.

Nehal is absolutely right: sed -i works by creating a temporary file and renaming it over the original, so you just need a different approach, one that is commonly used on Linux: heredocs.
Taking just the first lines from the documentation, a here document is a special-purpose code block. It uses a form of I/O redirection to feed a command list to an interactive program.
It can help us with docker exec as follows:
docker exec -i container_name bash <<EOF
sed -ire '/URL_BASE = /c\api.myapiurl' /tmp/config.ini
grep URL_BASE /tmp/config.ini
# any other command you like
EOF
Be aware of the -t flag, which is commonly used when running bash: it allocates a pseudo-TTY, and we don't really need that here.
Also, to be safe always use absolute paths like /tmp/config.ini.
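If the file itself is bind-mounted into the container, the rename step of sed -i can still fail even inside a heredoc. A sketch of a workaround that rewrites the file in place without renaming (it assumes /tmp/config.ini is the target and that /tmp is writable in the container):
docker exec -i container_name bash <<'EOF'
# write the edited content to a temporary file first
sed -e '/URL_BASE = /c\api.myapiurl' /tmp/config.ini > /tmp/config.ini.new
# overwrite the original file's contents without replacing the file itself,
# so no rename is needed
cat /tmp/config.ini.new > /tmp/config.ini
rm /tmp/config.ini.new
EOF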

docker exec -i <container name> sed -i 's/xxx/${yyy}/g' path/filename.yaml
This is working for me.
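Note that with single quotes the host shell will not expand ${yyy}; if the replacement value should come from a shell variable, double quotes are needed. A minimal sketch, assuming yyy is set on the host and <container name> is replaced with a real container:
yyy="new-value"   # hypothetical value set on the host shell
docker exec -i <container name> sed -i "s/xxx/${yyy}/g" path/filename.yaml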

Related

How to pass ALL environment variables to container with docker exec

It's possible to set one or more environment variables in the container while doing docker exec, for example:
docker exec -ti -e VAR=1 -e HOME container_name command
But I would like to pass all the shell's environment variables without explicitly specifying them individually. Essentially the equivalent of sudo -E, although it's a different thing.
According to the documentation, there is no such option. But one hack would be something like:
env > env_vars && docker exec -ti --env-file ./env_vars container_name command
That works, but I'm looking for a simple one-step solution that doesn't involve creating a temporary file. Perhaps a bash trick I don't know or haven't thought of yet. Thanks.
Please note: Passing all environment variables is not recommended and defeats the purpose of container process isolation. This question is for knowledge, not about what should be done. Also, the question is specifically about running a temporary command in an existing container with docker exec, not about docker run.
With Bash, it seems using process substitution works:
docker run --rm -ti --env-file <(env) alpine sh
Note, this creates a temporary FIFO file behind the scenes anyway.
Note, this will not work properly with variables containing newlines; they get cut off at the newline. You should do something along these lines instead (I tried to keep it short):
readarray -d '' -t args < <(env -0 | sed -z 's/^/--env\x00/')
docker run --rm -ti "${args[@]}" alpine sh
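The same two approaches should carry over to docker exec, which is what the question actually asks about. A sketch, assuming a running container named container_name and a Docker CLI recent enough to support --env-file on exec:
# process substitution variant
docker exec -ti --env-file <(env) container_name sh

# newline-safe variant using explicit --env flags
readarray -d '' -t args < <(env -0 | sed -z 's/^/--env\x00/')
docker exec -ti "${args[@]}" container_name sh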

Docker run to execute script in mount without exiting container automatically?

I have a simple bash script test.sh in the root of a mounted folder:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
However, when I try to mount the folder and start the container with docker run as follows:
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container -v mount-folder/:/home/rstudio/ image_name /home/rstudio/test.sh
the above run command starts the container, but it exits automatically.
I am looking for a docker run command that starts the container, mounts the folder, and then executes the bash script from the mount folder without exiting the container.
(I don't want to go with docker exec, as it is not suitable for my use case for other reasons.)
Dockerfile:
FROM rocker/rstudio:4.0.2
# some RUN commands to install the necessary R packages
EXPOSE 8787
CMD tail -f /dev/null
Other details:
The image I am using is the RStudio Server image from rocker, and the container runs on an AWS Ubuntu machine.
Edit:
I have also tried adding CMD tail -f /dev/null at the end of the Dockerfile, as suggested in http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/, but the container still exits.
Docker containers shut down automatically when their main process exits. I think this article proposes a nice solution:
http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/
You could instead add tail -f /dev/null as the last command in your bash script, so that the script never halts unless it is told to do so.
When you do docker run [options] image_name [cmd], the command you specify becomes the command for the container and replaces the command specified in the Dockerfile (that's why adding CMD tail -f /dev/null doesn't do anything). If you ran your container without /home/rstudio/test.sh at the end, it should stay running.
The solution would be to update your script to add the tail command at the end.
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
exec tail -f /dev/null
If you can't update that script, you could instead add it to the command being passed to the container, with something like:
docker run [options] image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'
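Putting that together with the original run command, a sketch (the $PWD-based mount path is an assumption about where mount-folder lives on the host):
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container \
  -v "$PWD/mount-folder":/home/rstudio/ image_name \
  bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'

# verify the container stays up and check the script's output
docker ps --filter name=container
docker logs container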

bash: C:/Program: No such file or directory

I am new to Docker, Debezium, Bash, and Kafka. I am attempting to run the Debezium tutorial/example for MSSQL Server on Windows 10 here:
https://github.com/debezium/debezium-examples/blob/master/tutorial/README.md#using-sql-server
I am able to start the topology, per step one. However, when I go to step two and execute the following command:
cat debezium-sqlserver-init/inventory.sql | docker exec -i tutorial_sqlserver_1 bash -c '/opt/mssql-tools/bin/sqlcmd -U sa -P $SA_PASSWORD'
I get the following error:
bash: C:/Program: No such file or directory
I do not have the foggiest idea why it would even drag C:/Program into this. I do not see it in the command, nor do I see it in the *.sql file. Does anyone know why this is happening and what the fix is?
Note 1: I am already in the directory where this command should be runnable, and there are no spaces in the folder/file path.
Note 2: I am running the commands in Git Bash
When using set -x to log how the command is run, there is still no C:/Program anywhere in it, as the following log shows:
$ cat debezium-sqlserver-init/inventory.sql | docker exec -i tutorial_sqlserver_1 bash -c '/opt/mssql-tools/bin/sqlcmd -U sa -P $SA_PASSWORD'
+ cat debezium-sqlserver-init/inventory.sql
+ docker exec -i tutorial_sqlserver_1 bash -c '/opt/mssql-tools/bin/sqlcmd -U sa -P $SA_PASSWORD'
bash: C:/Program: No such file or directory
I had a similar problem yesterday; the solution was adding a backslash before the absolute path, like:
cat debezium-sqlserver-init/inventory.sql | docker exec -i tutorial_sqlserver_1 bash -c '\/opt/mssql-tools/bin/sqlcmd -U sa -P $SA_PASSWORD'
\/opt/mssql-tools/bin/sqlcmd prevents the conversion to a Windows path.
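Another common workaround in Git Bash is to disable MSYS path conversion for just that one command; MSYS_NO_PATHCONV is specific to Git for Windows, so this is a sketch rather than a portable fix:
cat debezium-sqlserver-init/inventory.sql | MSYS_NO_PATHCONV=1 docker exec -i tutorial_sqlserver_1 bash -c '/opt/mssql-tools/bin/sqlcmd -U sa -P $SA_PASSWORD'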

How to forward psql shell from kubectl

I'm trying to make my life easier and am writing some bash scripts. One of them lets me exec into a pod with postgres access, get the credentials I need, and run the interactive psql shell.
However, upon running
kubectl <flags> exec $podname -- bash -c 'get_credentials && psql <psql args> -i -t'
the terminal hangs.
I can't directly connect to the database, and the process to get the credentials is kinda cumbersome. Is there some bash concept I'm not understanding?
kubectl <flags> exec $podname
That exec is missing its -i and -t (for --stdin=true and --tty=true), which tell Kubernetes that you want your terminal and the remote terminal to be associated with one another:
kubectl exec -it $podname -- etc etc
If you intended the -i and -t at the end of your cited example above to be passed to exec, be aware that the double dashes explicitly switch off argument parsing in kubectl, so there is no way it will see them.
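Putting the pieces together, a sketch of the corrected invocation (the credential step and psql arguments are placeholders from the question):
kubectl <flags> exec -i -t "$podname" -- bash -c 'get_credentials && psql <psql args>'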

inotifywait with Docker command and variable

I am trying to create a shell script that will watch for a new file and then cp it to a Docker container. The code I have so far is...
#!/bin/sh
source="/var/www/html/"
dest="dev_ubuntu:/var/www/html/"
inotifywait -m "/var/www/html" -e create -e moved_to |
while read file; do
sudo docker cp /var/www/html/$file dev_ubuntu:/var/www/html
done
But this code gives the following error:
Setting up watches.
Watches established.
"docker cp" requires exactly 2 argument(s).
See 'docker cp --help'.
Usage: docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
Copy files/folders between a container and the local filesystem
What am I doing wrong?
Do you have spaces in your file names? Use double quotes to avoid splitting filenames into separate words:
echo $file
sudo docker cp "$file" dev_ubuntu:"$file"
I've also echoed the file name to see what is happening.
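For completeness, a sketch of the loop that also narrows the inotifywait output down to just the filename with --format '%f', which avoids the extra words (watched directory and event names) ending up in $file; the paths and container name are taken from the question:
#!/bin/sh
# print only the filename of each created or moved-in file
inotifywait -m /var/www/html -e create -e moved_to --format '%f' |
while read -r file; do
    echo "copying $file"
    sudo docker cp "/var/www/html/$file" "dev_ubuntu:/var/www/html/$file"
done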
