I'm trying to execute a shell command in docker-compose.yml. The code is:
command: bash -c mkdir /opt/wa/usr/install_templates
When I do:
sudo docker-compose up serviceA
it gives me:
serviceA | mkdir: missing operand
When you use bash -c, bash takes only the next argument as the command string; anything after it becomes a positional parameter rather than part of the command. See this SO answer for more information. So Docker is reading your command bash -c mkdir /path/ and bash is just running a bare mkdir in a subshell, causing that error.
You don't need to put bash -c before your commands in a docker-compose file; Compose splits the command string into arguments for you, so you can simply write the command you want to run. I'd suggest replacing your command with this:
command: mkdir /opt/wa/usr/install_templates
Alternatively, you could try putting the entire command into a string if you want to force the command to be run in bash:
command: bash -c "mkdir /opt/wa/usr/install_templates"
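For context, a full service definition using that quoted form would look something like this (a sketch; the service name and image are placeholders based on the question):
services:
  serviceA:
    image: your_image   # placeholder
    command: bash -c "mkdir /opt/wa/usr/install_templates"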
I am new to Docker, Debezium, Bash, and Kafka. I am attempting to run the Debezium tutorial/example for MSSQL Server on Windows 10 here:
https://github.com/debezium/debezium-examples/blob/master/tutorial/README.md#using-sql-server
I am able to start the topology, per step one. However, when I go to step two and execute the following command:
cat debezium-sqlserver-init/inventory.sql | docker exec -i tutorial_sqlserver_1 bash -c '/opt/mssql-tools/bin/sqlcmd -U sa -P $SA_PASSWORD'
I get the following error:
bash: C:/Program: No such file or directory
I do not have the foggiest idea why it would even drag C:/Program into this. I do not see it in the command nor do I see it in the *.sql file. Does anyone know why this is happening and what the fix is?
Note 1: I am already in the current directory where this command should be runnable and there are no spaces in the folder/file path
Note 2: I am running the commands in Git Bash
When using set -x to log how the command is run, there's still no C:/Program anywhere in it, as can be seen by the following log:
$ cat debezium-sqlserver-init/inventory.sql | docker exec -i tutorial_sqlserver_1 bash -c '/opt/mssql-tools/bin/sqlcmd -U sa -P $SA_PASSWORD'
+ cat debezium-sqlserver-init/inventory.sql
+ docker exec -i tutorial_sqlserver_1 bash -c '/opt/mssql-tools/bin/sqlcmd -U sa -P $SA_PASSWORD'
bash: C:/Program: No such file or directory
I had a similar problem yesterday; the solution was adding a backslash before the absolute path, like this:
cat debezium-sqlserver-init/inventory.sql | docker exec -i tutorial_sqlserver_1 bash -c '\/opt/mssql-tools/bin/sqlcmd -U sa -P $SA_PASSWORD'
The leading backslash in \/opt/mssql-tools/bin/sqlcmd prevents Git Bash (MSYS) from converting the POSIX path into a Windows path (something under C:/Program Files), which is where the C:/Program error comes from.
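An alternative workaround (not from the answer above, but commonly used with Git for Windows) is to disable MSYS path conversion for just that invocation via the MSYS_NO_PATHCONV environment variable; plain MSYS2 uses MSYS2_ARG_CONV_EXCL instead:
cat debezium-sqlserver-init/inventory.sql | MSYS_NO_PATHCONV=1 docker exec -i tutorial_sqlserver_1 bash -c '/opt/mssql-tools/bin/sqlcmd -U sa -P $SA_PASSWORD'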
If I run the following from the command line.
docker run -t repo:tag ls -l
the command succeeds just fine. However, if I invoke the same from within a bash script I get the following ERROR:
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:348: starting container process caused "exec: \"ls
-l\": executable file not found in $PATH": unknown.
What about the bash script causes this error?
"exec: \"ls -l\": executable file not found in $PATH"
From the error I can tell that when you invoke docker, you are somehow passing ls -l, space included, as a single argument. Something like:
docker run -t repo:tag "ls -l" # wrong
or perhaps
cmd="ls -l"
docker run -t repo:tag "$cmd" # wrong
The shell that parses the docker command must see ls and -l as separate words, so that -l is passed as an argument instead of being treated as part of the executable name.
cmd="ls -l"
docker run -t repo:tag $cmd #works
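If the command really does need to live in a variable, a more robust option (my own suggestion, not part of the answer above) is a bash array, which keeps each word as its own argument even when elements contain spaces:
cmd=(ls -l)
docker run -t repo:tag "${cmd[@]}"   # each array element is passed as a separate argument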
I want to open an interactive shell which sources a script to use the bitbake environment on a repository that I bind mount:
docker run --rm -it \
--mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repoistory \
my_image /bin/bash -c "cd /mnt/bb_repoistory/oe-core && source build/conf/set_bb_env.sh"
The problem is that the -it argument does not seem to have any effect, since the shell exits right after executing cd /mnt/bb_repoistory/oe-core && source build/conf/set_bb_env.sh
I also tried this:
docker run --rm -it \
--mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repoistory \
my_image /bin/bash -c "cd /mnt/bb_repoistory/oe-core && source build/conf/set_bb_env.sh && bash"
Which spawns an interactive shell, but none of the macros defined in set_bb_env.sh are available in it.
Would there be a way to provide a tty with the script properly sourced?
The -it flags conflict with the command you're running, in that you're telling Docker to create a pseudo-terminal (pty) and then running a command (bash -c ...) in that terminal. When that command finishes, the run is done.
What some people have done to work around this is to keep only exported variables in their sourced environment and make the last command exec bash. But if you need aliases or other items that aren't inherited that way, your options are a bit more limited.
Instead of running the source in a parent shell, you could run it in the target shell. If you modified your .bash_profile to include the following line:
[ -n "$DOCKER_LOAD_EXTRA" -a -r "$DOCKER_LOAD_EXTRA" ] && source "$DOCKER_LOAD_EXTRA”
and then had your command be:
... /bin/bash -c "cd /mnt/bb_repoistory/oe-core && DOCKER_LOAD_EXTRA=build/conf/set_bb_env.sh exec bash"
that may work. This tells your .bash_profile to load this file when the env variable is set, but not otherwise. (You could also set the variable with the -e flag on the docker command line, but I think that sets it globally for the entire container, which is probably not what you want.)
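Putting the pieces together with the mount from the question, the full invocation would look roughly like this (a sketch reusing the question's paths; if the interactive shell in your image reads ~/.bashrc rather than ~/.bash_profile, put the line above there instead):
docker run --rm -it \
    --mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repoistory \
    my_image /bin/bash -c "cd /mnt/bb_repoistory/oe-core && DOCKER_LOAD_EXTRA=build/conf/set_bb_env.sh exec bash"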
I'm trying to run MULTIPLE commands like this.
docker run image cd /path/to/somewhere && python a.py
But this gives me "No such file or directory" error because it is interpreted as...
"docker run image cd /path/to/somewhere" && "python a.py"
It seems that some ESCAPE characters like "" or () are needed.
So I also tried
docker run image "cd /path/to/somewhere && python a.py"
docker run image (cd /path/to/somewhere && python a.py)
but these didn't work.
I have searched the Docker Run Reference but have not found any hints about ESCAPE characters.
To run multiple commands in docker, use /bin/bash -c with the commands joined by a semicolon ;
docker run image_name /bin/bash -c "cd /path/to/somewhere; python a.py"
If command2 (python) should be executed only if command1 (cd) returned a zero (no error) exit status, use && instead of ;
docker run image_name /bin/bash -c "cd /path/to/somewhere && python a.py"
You can do this a couple of ways:
Use the -w option to change the working directory:
-w, --workdir="" Working directory inside the container
https://docs.docker.com/engine/reference/commandline/run/#set-working-directory--w
Pass the entire argument to /bin/bash:
docker run image /bin/bash -c "cd /path/to/somewhere; python a.py"
You can also pipe commands inside the Docker container with bash -c "<command1> | <command2>", for example:
docker run img /bin/bash -c "ls -1 | wc -l"
But without invoking the shell inside the container, the pipe is interpreted by your local shell instead, and the container's output gets piped on the local side.
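To illustrate the difference (my own example, not from the answer above):
docker run img /bin/bash -c "ls -1 | wc -l"   # wc runs inside the container
docker run img ls -1 | wc -l                  # wc runs on your local machine, counting the container's output lines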
bash -c works well if the commands you are running are relatively simple. However, if you're trying to run a long series of commands full of control characters, it can get complex.
I successfully got around this by piping my commands into the process from the outside, i.e.
cat script.sh | docker run -i <image> /bin/bash
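For example, script.sh can be an ordinary shell script with loops and other control structures; this particular script is just an invented illustration, and anything the container's bash can read from stdin will work:
set -e
cd /tmp
for i in 1 2 3; do
  echo "step $i"
done
ls -1 | wc -l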
Just to make a proper answer from #Eddy Hernandez's comment, which is correct since Alpine comes with ash, not bash.
The question now refers to Starting a shell in the Docker Alpine container, which implies using sh, ash, /bin/sh, or /bin/ash.
Based on the OP's question:
docker run image sh -c "cd /path/to/somewhere && python a.py"
If you want to store the result in a file outside the container, on your local machine, you can do something like this:
RES_FILE=$(readlink -f /tmp/result.txt)
touch "${RES_FILE}"   # make sure the file exists so Docker bind-mounts a file rather than creating a directory
docker run --rm -v ${RES_FILE}:/result.txt img bash -c "grep root /etc/passwd > /result.txt"
The result of your commands will be available in /tmp/result.txt on your local machine.
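Once the container exits you can read it locally, e.g.:
cat /tmp/result.txt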
For anyone else who came here looking to do the same with docker-compose, you just need to prepend bash -c and enclose the multiple commands in quotes, joined together with &&.
So for the OP's example: docker-compose run image bash -c "cd /path/to/somewhere && python a.py"
If you don't mind the commands running in a subshell, you can put a set of outer parentheses around them, but the parentheses have to reach a shell inside the container (passing them directly, as the question tried, makes your local shell reject them):
docker run image sh -c "(cd /path/to/somewhere && python a.py)"
TL;DR;
$ docker run --entrypoint /bin/sh image_name -c "command1 && command2 && command3"
A concern regarding the accepted answer is below.
Nobody has mentioned that docker run image_name /bin/bash -c just appends a command to the entrypoint. Some popular images are smart enough to process this correctly, but some are not.
Imagine the following Dockerfile:
FROM alpine
ENTRYPOINT ["echo"]
If you build it as an image tagged echo and run:
$ docker run echo /bin/sh -c date
Your command gets appended to the entrypoint, so the container effectively runs echo "/bin/sh -c date" and simply prints that string instead of executing date.
Instead, you need to override the entrypoint:
$ docker run --entrypoint /bin/sh echo -c date
Docker run reference
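To make the difference concrete, this is roughly what the two invocations above produce (my own illustration):
$ docker run echo /bin/sh -c date
/bin/sh -c date
$ docker run --entrypoint /bin/sh echo -c date
<the current date and time, because date actually ran>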
In case it's not obvious, if a.py always needs to run in a particular directory, create a simple wrapper script which does the cd and then runs the script.
In your Dockerfile, replace
CMD [ "python", "a.py" ]
or whatever with
CMD [ "/wrapper" ]
and create a script wrapper in your root directory (or wherever it's convenient for you) with contents like
#!/bin/sh
set -e
cd /path/to/somewhere
python a.py
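You will also need to get the wrapper into the image and make it executable; a minimal way to do that (assuming the script sits next to your Dockerfile) is:
COPY wrapper /wrapper
RUN chmod +x /wrapper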
In many situations, it's also worth considering rewriting a.py so that it doesn't need a wrapper: either have it os.chdir() to where it needs to be, or have it look for its data files in a directory you configure through its environment, or similar.
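A minimal sketch of that second idea (hypothetical; the DATA_DIR variable name and default path are just placeholders):
#!/usr/bin/env python
import os

# Read the working directory from the environment, falling back to a default.
data_dir = os.environ.get("DATA_DIR", "/path/to/somewhere")
os.chdir(data_dir)

# ... rest of a.py ...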