I've spent some time with Vagrant, CoreOS, and Docker. There's so much to learn...
I work in a development environment and constantly run vagrant up and vagrant destroy, so I don't want to download the Docker images every time: it takes too much time, and the images are heavy.
So I pull the images I use most frequently and save them:
core@core-01 ~ $ docker save ubuntu:latest > /home/core/share/ubuntu.tar
core@core-01 ~ $ docker save mysql > /home/core/share/mysql.tar
core@core-01 ~ $ docker save wordpress:latest > /home/core/share/wordpress.tar
Then I load them again when required:
core@core-03 ~ $ docker load -i /home/core/share/wordpress.tar
core@core-04 ~ $ docker load -i /home/core/share/mysql.tar
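To save everything in one pass, a loop like this might do (a sketch; docker images --format needs Docker 1.8+, and the tr-based file naming is just a convention of mine):
for img in $(docker images --format '{{.Repository}}:{{.Tag}}'); do
  docker save "$img" > "/home/core/share/$(echo "$img" | tr '/:' '__').tar"
done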
So far, everything is OK.
But I'm having problems when I try to build the cluster.
I have two simple services, database and web:
database.1.service
[Unit]
Description=Run database_1
After=docker.service
Requires=docker.service
[Service]
Restart=always
RestartSec=10s
ExecStartPre=-/bin/sh -c 'docker ps -a -q | xargs -r docker rm'
ExecStart=/usr/bin/docker run --rm --name database_1 -e "MYSQL_DATABASE=demo" -e "MYSQL_ROOT_PASSWORD=password" -p 3306:3306 mysql
ExecStartPost=-/bin/sh -c 'docker ps -a -q | xargs -r docker rm'
ExecStop=/usr/bin/docker kill database_1
ExecStopPost=-/bin/sh -c 'docker ps -a -q | xargs -r docker rm'
[Install]
WantedBy=local.target
web.1.service
[Unit]
Description=Run web_1
After=database.1.service
Requires=database.1.service
[Service]
Restart=always
RestartSec=10s
ExecStartPre=-/bin/sh -c 'docker ps -a -q | xargs -r docker rm'
ExecStart=/usr/bin/docker run --rm --name web_1 --link database_1:database_1 -e "DB_USER=root" -e "DB_PASSWORD=password" -p 80:80 wordpress
ExecStartPost=-/bin/sh -c 'docker ps -a -q | xargs -r docker rm'
ExecStop=/usr/bin/docker kill web_1
ExecStopPost=-/bin/sh -c 'docker ps -a -q | xargs -r docker rm'
[Install]
WantedBy=local.target
How do I load the mysql image (/home/core/share/mysql.tar) before the service starts? As it is now, when a service starts, it downloads the images again:
$ fleetctl start database.1.service
$ fleetctl start web.1.service
Can I load the images like this?
ExecStartPre=/usr/bin/docker load -i /home/core/share/mysql.tar
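For example, guarded so the tarball is only loaded when the image is missing (an untested sketch; docker inspect exits non-zero when no image or container with that name exists):
ExecStartPre=-/bin/sh -c 'docker inspect mysql >/dev/null 2>&1 || docker load -i /home/core/share/mysql.tar'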
The question is: how do I create a development environment that works without an internet connection?
I think you might be over-complicating things. You should not have to explicitly ask for an image to be saved and/or reused.
According to the CoreOS documentation:
The overlay filesystem works similar to git: our image now builds off of the ubuntu base and adds another layer with Apache on top. These layers get cached separately so that you won't have to pull down the ubuntu base more than once.
While this still requires an internet connection for the initial image download, subsequent launches of the container should reuse the cached image.
If you require more control, you might want to look into maintaining a private Docker registry within your CoreOS cluster. The best way I've found to do this is using Deis, which comes with a load of goodies, including a cluster-wide fault-tolerant file-system and a private Docker registry as standard.
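As a rough sketch of the self-hosted registry route (using the stock registry image; registry-host is a placeholder, and a plain-HTTP registry must be whitelisted via the daemon's --insecure-registry option):
# Run a private registry on one node
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# Push a cached image into it
docker tag mysql localhost:5000/mysql
docker push localhost:5000/mysql
# Pull it from any node that can reach the registry host
docker pull registry-host:5000/mysql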
Related
I am trying to deploy my container from the GitLab registry to an EC2 instance. I managed to deploy it once, but when I change something and want to deploy again, I have to remove the old container and the old image first. For that, I created this script to remove everything and deploy again.
...
deploy-job:
stage: deploy
only:
- master
script:
- mkdir -p ~/.ssh
- echo -e "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
- chmod 600 ~/.ssh/id_rsa
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
- ssh -i ~/.ssh/id_rsa ec2-user@$DEPLOY_SERVER "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com &&
docker stop $(docker ps -a -q) &&
docker rm $(docker ps -a -q) &&
docker pull registry.gitlab.com/doesntmatter/demo:latest &&
docker image tag registry.gitlab.com/doesntmatter/demo:latest doesntmatter/demo &&
docker run -d -p 80:8080 doesntmatter/demo"
When I run this script, I get this error:
"docker stop" requires at least 1 argument. <<-------------------- error
See 'docker stop --help'.
Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop one or more running containers
Running after script
00:01
Uploading artifacts for failed job
00:01
ERROR: Job failed: exit code 1
If you look closer, I do use $(docker ps -a -q) after the docker stop.
Questions:
I know this is not the ideal way to do deployments (I'm a developer, not an ops person). Can you suggest other ways, using just GitLab and EC2?
Is there any way to avoid this error, whether or not there are containers on my machine?
Probably no containers were running when the job was executed.
To avoid this behavior, you can change your commands a bit:
docker ps -a -q | xargs -r sudo docker stop
docker ps -a -q | xargs -r sudo docker rm
These will not produce errors when no containers exist, because xargs -r skips the command entirely on empty input.
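Applied to the job above, the remote command might look like this (a sketch; the pipe form also keeps the runner's shell from expanding $(...) locally before ssh runs):
- ssh -i ~/.ssh/id_rsa ec2-user@$DEPLOY_SERVER "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com &&
  docker ps -a -q | xargs -r docker stop &&
  docker ps -a -q | xargs -r docker rm &&
  docker pull registry.gitlab.com/doesntmatter/demo:latest &&
  docker image tag registry.gitlab.com/doesntmatter/demo:latest doesntmatter/demo &&
  docker run -d -p 80:8080 doesntmatter/demo"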
As for other ways to deploy a container on AWS, there are services that handle containers very well, such as ECS, EKS, or Fargate. Also think about Terraform to deploy your infrastructure using IaC principles (even for your EC2 instance).
I need to dynamically delete all Docker images on a server, except for the postgres image and container.
I need a dynamic way to get the ID of that Docker image so I know to avoid it, using:
docker rmi $(docker images -q | grep -v $<id_of_postgres_container>)
For the container part, i managed to find this:
docker ps -aqf "name=postgres"
which returns only the ID of the postgres container. Is there any way to do the same with images, without getting too deep into bash scripting?
Or any better suggestions?
docker images --format="{{.Repository}} {{.ID}}" |
grep "^postgres " |
cut -d' ' -f2
This lists images in the format repository<space>id, filters the lines starting with postgres<space>, and leaves only the ID.
To remove everything except postgres, invert the match and feed the IDs to docker rmi:
docker images --format="{{.Repository}} {{.ID}}" |
grep -v "^postgres " |
cut -d' ' -f2 |
xargs -r docker rmi
But if the postgres container is running and using its image, you can simply run:
docker system prune --force --all
since prune never removes an image that a running container is using.
You can just use:
$ docker images -q [image_name]
Where image_name can contain tags (appended after :), registry username with / (if applicable), etc.
The image has to be downloaded for this to work, for example:
$ docker pull hello-world
...
Status: Downloaded newer image for hello-world:latest
docker.io/library/hello-world:latest
$ docker images -q hello-world
d1165f221234
$ docker images -q hello-world:latest
d1165f221234
If the image is not available locally, the only alternative I can think of is to query the registry manually.
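For Docker Hub, that might look roughly like this (assumptions: a public image and jq installed; the endpoints are Docker Hub's token service and the v2 manifest API):
# Fetch an anonymous pull token for the repository
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/hello-world:pull" | jq -r .token)
# HEAD the manifest; the digest is returned in the Docker-Content-Digest header
curl -sI -H "Authorization: Bearer $TOKEN" -H "Accept: application/vnd.docker.distribution.manifest.v2+json" https://registry-1.docker.io/v2/library/hello-world/manifests/latest | grep -i docker-content-digest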
docker rmi will never delete an image that corresponds to a running container. So if you have a container based on postgres running and you want to delete every other image on your system, the age-old incantation will do what you want; I'm too old-school for docker system, but the "get all of the image IDs, then try to delete them all" version I know is:
docker images -q | xargs docker rmi
It will print some errors for images that are still in use, but will delete all of the images it can.
I am trying to create a TensorFlow Serving Docker container, but I am getting an error about /bin/bash when running the docker create command.
I am unable to figure out whether it's a path problem or whether /bin/bash is missing from the image. What can I do to fix this issue? Thanks in advance.
What base image are you using for your container image? I checked busybox and alpine: they ship ash by default, but not bash. Once you have created your image, you can run it as follows:
docker run -it my-image-name sh
This should get you into an interactive shell. Then cd into /bin and check which commands are available using ls.
I got this in alpine:
/ # ls /bin
ash df getopt linux64 mpstat rev sync
base64 dmesg grep ln mv rm tar
bbconfig dnsdomainname gunzip login netstat rmdir touch
busybox dumpkmap gzip ls nice run-parts true
cat echo hostname lzop pidof sed umount
chgrp ed ionice makemime ping setpriv uname
chmod egrep iostat mkdir ping6 setserial usleep
chown false ipcalc mknod pipe_progress sh watch
conspy fatattr kbd_mode mktemp printenv sleep zcat
cp fdflush kill more ps stat
date fgrep link mount pwd stty
dd fsync linux32 mountpoint reformime su
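If whatever you are running really needs bash rather than ash, it can be installed; this assumes an alpine base image with the apk package manager:
# One-off, inside a running container:
docker exec -it <container-name> apk add --no-cache bash
# Or bake it into the image with a Dockerfile line: RUN apk add --no-cache bash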
A container is an instance created from a container image. In your case, the container tf_container_gpu was created from the image you specified. You can give a container a name only at the time you create it; after that, you just start it by that name.
docker start tf_container_gpu should do.
If you want to recreate the container (say, after you rebuild the image), first remove the earlier container instance with docker container rm tf_container_gpu, then run the container again:
docker run --name=tf_container_gpu <image-name>
To just start and stop the container
docker start tf_container_gpu
docker stop tf_container_gpu
I want to get rid of the huge container log files in my Docker environment.
I have trouble finding them when running native Docker on a Mac; I am not using the docker-machine (VirtualBox) setup. My Docker version is 1.13.1.
When I do
docker inspect <container-name>
I see there is
"LogPath": "/var/lib/docker/containers/<container-id>/<container-id>-json.log
But the directory /var/lib/docker does not even exist on my Mac (the host).
I have also looked in
~/Library/Containers/com.docker.docker/
but didn't find any container-specific logs there.
I could use tail, but that is not always convenient.
So the question is: how can I clear the log files of my containers in my native Docker for Mac environment?
The Docker daemon runs in a separate VM, so to clear the logs you need the following steps:
First, you can find the log path inside the VM with:
docker inspect --format='{{.LogPath}}' NAME|ID
Then connect to the VM with screen:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
There you can simply use output redirection to clear the log:
> /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log
Finally, detach from the screen session by hitting Ctrl-a, then d.
I added the following to my .bash_profile. It gets the log path for the given container, opens a screen session to the Docker VM, and deletes the log file:
clearDockerLog(){
    # Extract the LogPath value from the docker inspect output
    dockerLogFile=$(docker inspect $1 | grep -G '\"LogPath\": \"*\"' | sed -e 's/.*\"LogPath\": \"//g' | sed -e 's/\",//g')
    rmCommand="rm $dockerLogFile"
    # Open a detached screen session to the Docker VM's tty
    screen -d -m -S dockerlogdelete ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
    # Type the rm command into the VM session, press enter, then close the session
    screen -S dockerlogdelete -p 0 -X stuff $"$rmCommand"
    screen -S dockerlogdelete -p 0 -X stuff $'\n'
    screen -S dockerlogdelete -X quit
}
Use it as follows:
clearDockerLog <container_name>
This will remove all your Docker container logs on macOS:
echo "rm /var/lib/docker/containers/*/*.log" | nc -U -w 0 ~/Library/Containers/com.docker.docker/Data/debug-shell.sock
This was the only solution that worked for me on macOS 10.14.
docker run -it --rm --privileged --pid=host NAME nsenter -t 1 -m -u -n -i -- sh -c 'truncate -s0 /var/lib/docker/containers/*/*-json.log'
Replace NAME with an image that includes nsenter (ubuntu, for example); the command enters the VM's top-level namespaces and truncates every container's JSON log.
Hope this helps
This worked for me, at least from the command line: screen $(cat ~/Library/Containers/com.docker.docker/Data/vms/0/tty)
If the above doesn't work with the script, this might: screen /dev/ttys000
This question explains how to stop Docker containers started from an image.
But if there are no running containers, I get the error "docker stop" requires a minimum of one argument, which means I can't run this command in a long .sh script without it breaking.
How do I change these commands to work even if no results are found?
docker stop $(docker ps -q --filter ancestor="imagname")
docker rm `docker ps -aq` &&
(I'm looking for a pure Docker answer if possible, not a bash test, as I'm running my script over ssh so I don't think I have access to normal script tests)
Posting this in case it helps others:
To stop containers using specific image:
docker ps -q --filter ancestor="imagename" | xargs -r docker stop
To remove exited containers:
docker rm -v $(docker ps -a -q -f status=exited)
To remove unused images:
docker rmi $(docker images -f "dangling=true" -q)
If you are using Docker > 1.9:
docker volume rm $(docker volume ls -qf dangling=true)
If you are using Docker <= 1.9, use this instead:
docker run -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker --rm martin/docker-cleanup-volumes
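Note that the $(...) forms above fail the same way docker stop did when nothing matches; piping through xargs -r (GNU xargs) turns them into no-ops instead. The same cleanups, sketched:
docker ps -a -q -f status=exited | xargs -r docker rm -v
docker images -f "dangling=true" -q | xargs -r docker rmi
docker volume ls -qf dangling=true | xargs -r docker volume rm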
Docker 1.13 Update:
To remove unused images:
docker image prune
To remove unused containers:
docker container prune
To remove unused volumes:
docker volume prune
To remove unused networks:
docker network prune
To remove all unused components:
docker system prune
IMPORTANT: Make sure you understand these commands and back up important data before running them in production.