Is it possible to list the docker-compose files of all running containers with a bash function? Something like docker ps or docker container ls, but showing only the compose files.
I got started with this: lsof | egrep 'REG|DIR' | awk '{print $9}', but it gives me tons of unwanted information as well.
What would be the best approach here?
Thanks in advance.
This bash one-liner shows the working directory of each container's compose file.
for c in $(docker ps -q); do echo "$c"; docker inspect "$c" --format '{{index .Config.Labels "com.docker.compose.project.working_dir"}}'; done
This is an edit of kvelev's command (below): that command was printing just "docker-compose.yaml" for each of my running containers, so I changed the label to show the working directory instead, which works for me.
So I played around with docker inspect and came up with that:
for c in $(docker ps -q); do echo "$c"; docker inspect "$c" --format '{{index .Config.Labels "com.docker.compose.project.config_files"}}'; done
So it is possible ;)
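Wrapped up as the bash function the question asked for (the function name compose_files is just a placeholder):
compose_files() {
  # For each running container, print its ID and the compose file(s)
  # recorded in the label that docker-compose sets at startup.
  local c
  for c in $(docker ps -q); do
    printf '%s\t%s\n' "$c" \
      "$(docker inspect "$c" --format '{{index .Config.Labels "com.docker.compose.project.config_files"}}')"
  done
}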
It is not possible to reliably backtrack and find the docker-compose file that was used for a container deployment. To avoid such issues, a project pipeline is recommended, using tools like Maven, Jenkins, or Gradle along with a repository platform like GitHub. If it's a personal project, you can organize it by wrapping the docker deployment commands and source files in a script and only using that script to create deployments. This way it will be organized to some extent.
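A minimal sketch of such a wrapper, assuming a single compose file kept next to the script (the file name and layout are illustrative):
#!/usr/bin/env bash
# deploy.sh -- illustrative wrapper: all deployments go through this script,
# so the compose file used is always the one kept alongside it.
set -euo pipefail
cd "$(dirname "$0")"
docker-compose -f docker-compose.yml up -d "$@"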
Related
I want to programmatically fetch the id of a running container and stop it. However, I'm a little lost.
Here's the command I use to fetch the id of the running container:
docker ps -q --no-trunc --format="{{.ID}}" --filter "ancestor=<repo-name>"
How do I pass the output of this command to
docker stop <id>
I'm new to Docker, hence the question. TIA
If you are using bash, you can use backticks to evaluate a command and substitute in its output; in your case:
docker stop `docker ps -q --no-trunc --format="{{.ID}}" --filter "ancestor=<repo-name>"`
Please consider reading this related Unix Stack Exchange question; it may be of help as well.
As suggested in the linked question, you can use the $(...) syntax as well:
docker stop $(docker ps -q --no-trunc --format="{{.ID}}" --filter "ancestor=<repo-name>")
Although I haven't tested this specific use case, the $(...) syntax has the further advantage of also working in PowerShell.
I have my containers scattered across multiple docker-compose.yml files (https://docs.docker.com/compose/extends/). They are separated by project, but a few containers are common to all projects. I have a neat shell script which lets me easily start a few projects at a time:
./myscript.sh up project1 project2
This executes:
docker-compose up -d -f shared/docker-compose.yml -f project1/docker-compose.yml -f project2/docker-compose.yml project1 project2
This starts the containers project1, project2 & a few that are defined in the shared compose file, e.g. shared-db, shared-apache.
I now want to add to my shell script the option to kill the containers:
./myscript.sh kill
Should execute:
docker kill project1 project2 shared-db shared-apache
The problem is getting the list of my containers. My current approach is docker ps --format '{{.Names}}', which isn't ideal, as it can also list containers that are not part of these projects.
I've also tried using docker-compose kill, which needs to be executed for each docker-compose.yml file separately. I looped through all the files and it worked for the first one, but threw an error for the second:
ERROR: Service 'project1' depends on service 'shared-db' which is undefined.
The error is thrown because project1/docker-compose.yml has dependencies on services from shared/docker-compose.yml, which are unmet because shared was already killed.
The only way that comes to mind is to somehow go through all the docker-compose.yml files and get a list of all the container names defined there, but I didn't find any proper way to parse YAML files in bash.
services:
  db:
    image: ...
    container_name: shared-db
  apache:
    image: ...
    container_name: shared-apache
From the above YAML, I'd have to get the names shared-db and shared-apache.
As long as you are happy with myscript.sh kill killing any container started with docker-compose, you can use the labels that docker-compose applies to containers to identify targets.
To find all containers started using docker-compose:
docker ps --filter 'label=com.docker.compose.project'
So you could do something as simple as:
docker ps --filter 'label=com.docker.compose.project' -q |
xargs docker kill
See the docker ps documentation on filtering for more information.
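If killing every compose-managed container is too broad, the same label can be matched against a specific value, since docker ps accepts label=key=value filters. Assuming the project names are shared, project1 and project2 (compose defaults the project name to the directory name):
for p in shared project1 project2; do
  # collect the IDs of containers belonging to this compose project
  docker ps -q --filter "label=com.docker.compose.project=$p"
done | xargs docker kill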
Just use docker-compose kill:
docker-compose -f shared/docker-compose.yml -f project1/docker-compose.yml -f project2/docker-compose.yml kill
The -f options should come before the docker-compose command; that way they are parsed as the compose files to include. You can use all docker-compose commands this way.
grep container_name: */docker-compose.yml | awk '{print $3}'
or:
grep container_name: */docker-compose.yml | sed 's/^.*: //'
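Either variant can feed docker kill directly (assuming every service declares an explicit container_name, as in the question):
docker kill $(grep container_name: */docker-compose.yml | awk '{print $3}')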
In my CI build, I'd like to print a formatted string that is built from a nested command, run inside Docker, like:
docker run -t --rm -v /mysrc:/src -w /src node:lts echo "My project uses `npm list aLibrary`"
In bash, the command echo "My project uses `npm list aLibrary`" runs perfectly, but when passed to Docker, neither backticks nor $() are interpolated.
Can anyone help?
I've thought about writing a .sh file to mount into the docker container, but a file would need a place to be stored, and I think this simple CI script shouldn't live in a separate file.
Try:
bash -c 'echo "My project uses `npm list aLibrary`"'
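That is, single-quote the whole command so the host shell passes the backticks through untouched and the container's bash expands them; with the original docker run line, the full invocation would look like:
docker run -t --rm -v /mysrc:/src -w /src node:lts bash -c 'echo "My project uses `npm list aLibrary`"'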
This will work:
echo "My project uses `/usr/local/bin/npm list aLibrary`"
You need to supply the full path.
I have a simple Dockerfile that copies over a template, on which I use sed to replace some variables. Pretty straightforward, and from everything I've seen and read, it should work.
COPY /my-dir/my-textfile.conf /to/my/docker/path.conf
RUN sed -i s:TEXTTOREPLACE:my-new-text:g /to/my/docker/path.conf
I then run docker build ..., then docker run ... bash.
Then I cat my file, and TEXTTOREPLACE is still there.
Running the same sed command inside that bash works with no problem.
Any thoughts? What am I doing wrong or not seeing?
Thanks!
EDIT per request: the base image is debian:7.11; my workstation runs macOS.
Just to recap.
I have the file my-textfile.conf in my working directory. Its content is:
I need to change TEXTTOREPLACE with my-new-text
My test system is Ubuntu Linux 16.04 running Docker version 18.09.0, build 4d60db4.
This is the Dockerfile
FROM debian:7.11
COPY my-textfile.conf /tmp/path.conf
RUN sed -i s:TEXTTOREPLACE:my-new-text:g /tmp/path.conf
I run the following commands:
docker build -t mytestimage .
docker run -ti -d --name mytestcontainer mytestimage
docker exec -ti mytestcontainer /bin/bash
Then, inside the container, I run:
cat /tmp/path.conf
and I get this result:
I need to change my-new-text with my-new-text
So it seems it works as expected.
I am trying to dynamically set the image name and tag for AWS Elastic Beanstalk in my Dockerrun.aws.json file:
"Image": {
"Name": "IMAGETAG",
"Update": "true"
}
with the following sed command as a script in my GitLab CI file:
sed -i.bak "s|IMAGETAG|$CONTAINER_TEST_IMAGE|" Dockerrun.aws.json && rm Dockerrun.aws.json.bak; eb deploy Production
where $CONTAINER_TEST_IMAGE is a verified good environment variable (tested by running echo $CONTAINER_TEST_IMAGE as a script step). Its value has the following structure (where ... is the full id):
gitlab.company.com:123/my-group/my-project:core_7de09851...8f_testing
The problem I am facing is that the sed command does not work during the CI pipeline. I fail to understand why, considering that if I set the environment variable locally and run the same command, it successfully replaces the value of Name with the same kind of URL. That local testing was done on a MacBook.
I know that it is not updating the file because the GitLab CI log reports:
WARN: Failed to pull Docker image IMAGETAG:latest, retrying...
I've tried a few things that did not work:
Running the sed and eb deploy commands as separate scripts (two different lines in the CI file)
Switching the variable that I am seeking to replace in Dockerrun.aws.json to <IMAGE>
While it was at <IMAGE>, running sed -i='' "s|<IMAGE>|$CONTAINER_RELEASE_IMAGE|" Dockerrun.aws.json instead of creating the .bak file and then rm'ing it (I read somewhere that sed's -i handling is inconsistent on OS X)
Does anyone have any thoughts on what the issue might be and how it can be resolved?
There were two aspects of this that were going wrong:
The sed command was not executing correctly on the runner, but was working locally
eb deploy was ignoring the updated file
For part 1, the working sed command is:
sed -ri "s|\"IMAGETAG\"|\"$1\"|" Dockerrun.aws.json
where the line in Dockerrun.aws.json is "Name": "IMAGETAG",. sed still confuses me here, so I can't explain why this one works and the original command does not.
For part 2, apparently eb deploy will always look at the latest commit if it can, rather than the current working directory. Makes sense, I guess. To get around this, run the command as eb deploy --staged. You can read more about this flag on AWS's site.
Also, note that my .gitlab-ci.yml simply calls a script to run all of this rather than doing it there.
- chmod +x ./scripts/ebdeploy.sh
- ./scripts/ebdeploy.sh $CONTAINER_TEST_IMAGE
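For reference, a sketch of what such an ebdeploy.sh could contain, reconstructed from the pieces above (not the author's exact script; the git add line is my assumption, since --staged deploys what is in the git index):
#!/usr/bin/env bash
# ebdeploy.sh -- illustrative sketch; $1 is the image tag to substitute
# into Dockerrun.aws.json.
set -euo pipefail
sed -ri "s|\"IMAGETAG\"|\"$1\"|" Dockerrun.aws.json
git add Dockerrun.aws.json   # assumption: --staged deploys the git index, so the edit must be staged
eb deploy Production --staged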