How could I, for example, run a command every time the Docker/Laravel container is started?
Command example:
Log::info('Service is running');
This command is just an example. Currently I have an entrypoint.sh that runs some commands when the Docker container starts (a sketch of such a file is below), but I want to hand the responsibility of running those commands over to Laravel.
How can I make the commands run only once?
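For context, a minimal sketch of the kind of entrypoint.sh described above (the specific commands are placeholders, not taken from the question):
#!/bin/sh
# entrypoint.sh (sketch): run one-off startup commands,
# then hand off to the container's main process
php artisan migrate --force   # placeholder startup command
exec php-fpm                  # placeholder main process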
What I am trying to do is set up a local development database, and to save everyone from having to go through all the steps by hand, I thought it would be useful to create a script.
What I have below stops once it is inside the container's terminal, which looks like:
output
./dbSetup.sh
hash of container 0d1b182aa6f1
/ #
At which point I have to manually enter exit.
script
#!/bin/bash
# grab the "docker ps" line for the container whose name contains "personal"
command=$(docker ps | grep personal)
# split that line into positional parameters; $1 becomes the container ID
set $command
echo "hash of container ${1}"
# open an interactive shell inside that container
docker exec -it ${1} sh
Is there a way I can inject commands into a Docker container's terminal from a script?
In order to execute commands inside a container, you can use something like this:
docker exec -ti my_container sh -c "echo a && echo b"
More information available at: https://docs.docker.com/engine/reference/commandline/exec/
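Applied to the script in the question, a non-interactive version might look like this (the echo commands are placeholders for the actual database setup):
#!/bin/bash
# find the container whose name contains "personal"
container=$(docker ps --filter name=personal --format '{{.ID}}')
echo "hash of container ${container}"
# run the setup commands inside it without opening an interactive shell
docker exec "${container}" sh -c "echo a && echo b"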
Your script finds a running Docker container and opens a shell to it. The "-it" makes it interactive and allocates a tty, which is why it continues to wait for input, e.g. "exit". If the plan is to execute some commands to initialize a local development database, I'd recommend building an image with a Dockerfile instead: once you figure out the commands to run, they become RUN instructions, and the container started by docker run would then expose an already-initialized local development database.
If you really want some commands to run within the shell after it is started, and to maintain the session, then depending on the base image you might be able to mount a bash profile directory that has the required commands, e.g. -v db_profile:/etc/profile.d, where db_profile is a folder with the shell scripts you want to run. To get them to run, you'd exec sh -l so that the login startup scripts run.
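A sketch of that profile-mount idea (the container and image names are placeholders):
# db_profile/10-db-init.sh holds the setup commands to run at login
docker run -d --name personal-db -v "$PWD/db_profile":/etc/profile.d some-db-image
# -l makes sh act as a login shell, so the scripts in /etc/profile.d run
docker exec -it personal-db sh -l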
I need to run a bash script continuously, for an indefinite time, inside a Docker container in Azure via the Azure Container Instances (ACI) service. My bash script has a while loop that keeps it running, and the Azure container has the OnFailure property to restart the container if it fails.
I see that after the container has been running for about 2 days, its status is still Running. However, the bash script that was running in the foreground and sending logs to the Azure container console seems to have died: it is no longer sending logs to the console, and it is not doing what it is supposed to do.
How can I reliably keep this bash script running for an indefinite time in an Azure container?
The bash script, which has an internal while loop, is started like this:
bash my-while-loop-script.sh
To solve this issue, I replaced the while loop inside my-while-loop-script.sh with crond, which executes a Python application as a cron job. Below is the line inside my-while-loop-script.sh that starts the cron daemon; it executes the my-cron.cron contents shown further down:
./busybox crond -f
To achieve that, I used the busybox 1.30.1 tools. To build busybox inside your Docker image:
ADD busybox-1.30.1/ /busybox
WORKDIR /busybox
RUN make defconfig
RUN make
You also need to add the cron settings to the crontabs directory:
RUN mkdir -p /var/spool/cron/crontabs/
# Copy cron settings
ADD my-cron.cron /var/spool/cron/crontabs/root
A sample my-cron.cron looks just like a normal cron file:
* * * * * python my-app.py
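Putting the pieces together, the replaced my-while-loop-script.sh reduces to something like this (a sketch; the paths are taken from the snippets above):
#!/bin/sh
# no while loop any more: crond -f stays in the foreground
# and becomes the container's long-running process
cd /busybox
./busybox crond -f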
I am just wondering whether it is possible to run one script (e.g. a shell script, a Python script, etc.) across different environments.
For example, I want a script that starts in my Linux shell and continues in a Docker container's shell (where the container is created by the script itself). In other words, after entering the container, the script should keep executing its remaining commands inside the container.
run.sh (shell script):
sudo docker exec -it some_containers bash # this command drops me into the docker container's shell
apt-get install curl # I also want this command to execute inside the docker container, after I have entered it
# but this is all one script
Your question is not very clear, but it sounds like a job for two scripts: the first runs in your "Linux shell" and needs to cause the second to be placed into the container (perhaps by way of the Dockerfile), at which point the first script can run it with docker exec.
Please see the answers on this question for more information.
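A sketch of that two-script split (the container name and paths are placeholders):
# run.sh, executed on the host: copy the inner script into the container, then run it there
docker cp inner.sh some_containers:/tmp/inner.sh
docker exec some_containers bash /tmp/inner.sh
where inner.sh contains the commands meant for the container:
#!/bin/bash
apt-get update && apt-get install -y curl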
How can I include my own shell script CMD on container start/restart/attach, without removing the CMD used by an inherited image?
I am using this, which does execute my script fine, but appears to overwrite the PHP CMD:
FROM php
COPY start.sh /usr/local/bin
CMD ["/usr/local/bin/start.sh"]
What should I do differently? I am trying to avoid copying and pasting the ENTRYPOINT or CMD of the parent image, but maybe that's not a good approach.
As mentioned in the comments, there's no built-in solution to this. From the Dockerfile, you can't see the value of the current CMD or ENTRYPOINT. A run-parts solution is nice if you control the upstream base image and include this code there, allowing downstream images to make their changes. But there's one inherent issue in Docker that will cause problems with this: containers should only run a single command, and that command needs to run in the foreground. So if the upstream image kicks off its command first, it would stay running without giving your later steps a chance to run, and you're left with the complexity of ordering the commands to ensure that a single foreground command does eventually run without exiting.
My personal preference is a much simpler and hardcoded option: add my own command or entrypoint, and make the last step of my script exec the upstream command. You will still need to manually identify the script name to call from the upstream Dockerfile. But now in your start.sh, you would have:
#!/bin/sh
# run various pieces of initialization code here
# ...
# kick off the upstream command:
exec /upstream-entrypoint.sh "$@"
By using an exec call, you transfer pid 1 to the upstream entrypoint so that signals get handled correctly. And the trailing "$@" passes through any command line arguments. You can use set (or shift) to adjust the value of "$@" if there are some args you want to process and extract in your own start.sh script.
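For instance, a small illustration of adjusting the arguments before the hand-off (a sketch, not from the original answer):
#!/bin/sh
# extract one argument of our own, then pass the rest upstream
my_option="$1"
shift                           # drop $1; "$@" now holds the remaining args
# set -- "$@" --some-extra-flag # set -- can also rewrite "$@" wholesale
exec /upstream-entrypoint.sh "$@"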
If the base image is not yours, you unfortunately have to call the parent command manually.
If you own the parent image, you can try what the people at camptocamp suggest here.
They basically use a generic script as an entry point that calls run-parts on a directory. What that does is run all scripts in that directory in lexicographic order. So when you extend an image, you just have to put your new scripts in that same folder.
However, that means you'll have to maintain order by prefixing your scripts which could potentially get out of hand. (Imagine the parent image decides to add a new script later...).
Anyway, that could work.
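For reference, a sketch of such a generic run-parts entrypoint (the directory name is an assumption):
#!/bin/sh
# run every script in /docker-entrypoint.d in lexicographic order,
# then hand off to whatever command was passed in
run-parts /docker-entrypoint.d
exec "$@"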
Update #1
There is a long discussion on this docker-compose issue about provisioning after a container starts. One suggestion is to wrap your docker run or docker-compose command in a shell script and then run docker exec for your other commands.
If you'd like to use that approach, you basically keep the parent CMD as the run command and place your own commands in a docker exec after the docker run.
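A hypothetical wrapper script along those lines (image and container names are placeholders):
#!/bin/bash
# start the container; the parent ENTRYPOINT/CMD run untouched
docker run -d --name my_app my_image
# then run your own provisioning commands against the live container
docker exec my_app sh -c "echo provisioning..."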
Using the mysql image as an example:
Do docker inspect mysql/mysql-server:5.7 and see that:
Config.Cmd="mysqld"
Config.Entrypoint="/entrypoint.sh"
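(As an aside, docker inspect's standard --format flag can pull out just those two fields:)
docker inspect --format 'Entrypoint={{.Config.Entrypoint}} Cmd={{.Config.Cmd}}' mysql/mysql-server:5.7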
We reuse both of these in bootstrap.sh (remember to chmod a+x):
#!/bin/bash
echo $HOSTNAME
echo "Start my initialization script..."
# docker inspect results used here
/entrypoint.sh mysqld
Dockerfile is now:
FROM mysql/mysql-server:5.7
# put our script inside the image
ADD bootstrap.sh /etc/bootstrap.sh
# set to run our script
ENTRYPOINT ["/bin/sh","-c"]
CMD ["/etc/bootstrap.sh"]
Build and run our new image:
docker build --rm -t sidazhou/tmp-mysql:5.7 .
docker run -it --rm sidazhou/tmp-mysql:5.7
Outputs:
6f5be7c6d587
Start my initialization script...
[Entrypoint] MySQL Docker Image 5.7.28-1.1.13
[Entrypoint] No password option specified for new database.
...
...
You'll see this has the same output as the original image:
docker run -it --rm mysql/mysql-server:5.7
[Entrypoint] MySQL Docker Image 5.7.28-1.1.13
[Entrypoint] No password option specified for new database.
...
...
I have a container that is running with no issues. I added a bash script to complement a couple of other scripts already in the container. The Docker image copies 2 scripts to /usr/local/bin, and they can be run with docker exec container-name existingscript.
I added my own script to the same directory, and when running the same command I get an error that exec cannot run the script: no file or directory, script not located in $PATH. I checked the path and, sure enough, /usr/local/bin is listed. I checked permissions and the script is 755.
I then open an interactive shell with docker exec -it mycontainer bash and run /usr/local/bin/myscript and it runs with no problem.
Why can I not run the script from outside the container like I can the other two (that were included in the image)? All three have almost the same functions and do not use any special programs: one lists files, one adds files, one reads a file.
The base is Ubuntu.
EDIT: Found where I was running into the issue. Provided the answer in case anyone else happens to make the same mistake.
EDIT-2: So the script that came with the Docker image to perform a couple of common functions runs against the image, not the container, so my adding the scripts to the container had no effect on it, which is why I kept getting the "no file or directory" error.
The line in the script in question was:
docker run --rm -v "$(pwd)/config":/path/to/file -ti image_name:latest mynewscript "$@"
Of course that ran against the image and NOT the container.
Once I noticed that I tried running it with exec instead of run and it ran without error, like so:
docker exec -it container_name mynewscript
The reason is that /usr/local/bin is not in your script's $PATH. You can use /usr/local/bin/myscript explicitly in your script, or export PATH first in the script.
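Both options in a short sketch:
#!/bin/sh
# option 1: call the script by absolute path
/usr/local/bin/myscript
# option 2: extend PATH first, then call it by name
export PATH="$PATH:/usr/local/bin"
myscript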
While I was adding snippets to help explain the issue I found the problem and the solution.
So I access the scripts inside the container from the host with another script that does different things based on a switch case. That wrapper called the scripts against the Docker image and not the container, so the script I added does not actually exist in the image.
I modified the script to call the container instead of the image and it works as expected.
EDIT: I updated the question with the answer but I am adding it here as well:
So the script that came with the Docker image to perform a couple of common functions runs against the image, not the container, so my adding the scripts to the container had no effect on it, which is why I kept getting the "no file or directory" error.
The line in the script in question was:
docker run --rm -v "$(pwd)/config":/path/to/file -ti image_name:latest mynewscript "$@"
Of course that ran against the image and NOT the container.
Once I noticed that I tried running it with exec instead of run and it ran without error, like so:
docker exec -it container_name mynewscript