Can Travis CI cache docker images?

Is it possible to add a setting anywhere in the Travis configuration to cache my docker image? My docker image is fairly large and it takes a while to download.
Any suggestions?

The simplest solution today (October 2019) is to add the following to .travis.yml:
cache:
  directories:
    - docker_images
before_install:
  - docker load -i docker_images/images.tar || true
before_cache:
  - docker save -o docker_images/images.tar $(docker images -a -q)
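If saving every image makes the cache large and slow to restore, a variant of the same idea is to save only the image you actually need; myorg/myimage below is a placeholder tag, not something from the question:
# before_cache: save just the image we care about (myorg/myimage is a placeholder)
mkdir -p docker_images
docker save -o docker_images/images.tar myorg/myimage
# before_install: restore it, tolerating a missing cache on the first run
docker load -i docker_images/images.tar || true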

See Caching Docker Images on Build #5358
for the answer(s). With Docker 1.12, which is what Travis provides right now, it is recommended to cache the images manually. Once Docker 1.13 lands on Travis, you could use its --cache-from option instead.
Save:
before_cache:
  # Save tagged docker images
  - >
    mkdir -p $HOME/docker && docker images -a --filter='dangling=false' --format '{{.Repository}}:{{.Tag}} {{.ID}}'
    | xargs -n 2 -t sh -c 'test -e $HOME/docker/$1.tar.gz || docker save $0 | gzip -2 > $HOME/docker/$1.tar.gz'
Load:
before_install:
  # Load cached docker images
  - if [[ -d $HOME/docker ]]; then ls $HOME/docker/*.tar.gz | xargs -I {file} sh -c "zcat {file} | docker load"; fi
You also need to declare a cache folder:
cache:
  bundler: true
  directories:
    - $HOME/docker

Note that the Travis documentation recommends against caching Docker images:
https://docs.travis-ci.com/user/caching/#things-not-to-cache

I just found the following approach as discussed in this article.
services:
  - docker

before_script:
  - docker pull myorg/myimage || true

script:
  - docker build --pull --cache-from myorg/myimage --tag myorg/myimage .
  - docker run myorg/myimage

after_script:
  - docker images

before_deploy:
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin

deploy:
  provider: script
  script: docker push myorg/myimage
  on:
    branch: master

This works for me:
Replace <IMAGE_NAME_HERE> with the desired docker image name (3 places).
You can also use the same configuration for multiple images: docker save can handle multiple images, just make sure to pull them all before trying to save them (see the multi-image sketch after the config below).
services:
  - docker

cache:
  directories:
    - docker-cache

before_script:
  - |
    filename=docker-cache/saved_images.tar
    if [[ -f "$filename" ]]; then docker load < "$filename"; fi
    mkdir -p docker-cache
    docker pull <IMAGE_NAME_HERE>
    docker save -o "$filename" <IMAGE_NAME_HERE>

script:
  - docker run <IMAGE_NAME_HERE>...
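For the multi-image case mentioned above, the before_script body could look like the following sketch; postgres:13 and redis:6 are placeholder image names, and docker save writes them all into a single tar:
filename=docker-cache/saved_images.tar
if [[ -f "$filename" ]]; then docker load < "$filename"; fi
mkdir -p docker-cache
# Pull every image you want cached, then save them together
docker pull postgres:13
docker pull redis:6
docker save -o "$filename" postgres:13 redis:6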

Go get "get" unexpected EOF

Thank you for visiting.
First of all, I apologize for my bad English; some of this may be a little off, but I hope you can help me.
I ran into a small problem when deploying a new CI/CD system on a k8s platform (v1.23.5+1) with GitLab Runner (14.9.0) and dind (docker:dind).
When running CI for Golang apps that use private repositories on https://gitlab.domain.com (I did the go env -w GOPRIVATE configuration), I have a problem with the go mod tidy command: specifically, it fails with an unexpected EOF error. I've tried go mod tidy -v, but it doesn't seem to give any more info.
I did a lot of work to narrow down the problem. Specifically, I ran wget and git clone against my private repository and both still succeed. I also tried adding a private repository hosted on https://gitlab.com to go.mod, and it can still be fetched without any errors.
And in fact, without using my new runner, I can still git clone and go mod tidy on another VPS.
All of this leaves me wondering where the error actually comes from: is it my GitLab instance or my k8s GitLab runner?
This is the runner output:
go: downloading gitlab.domain.com/nood/fountain v0.0.12
unexpected EOF
Cleaning up project directory and file based variables
ERROR: Job failed: command terminated with exit code 1
This is my .gitlab-ci.yml
image: docker:latest

stages:
  - build
  - deploy

variables:
  GTV_ECR_REPOSITORY_URL: repo.domain.com
  PROJECT: nood
  APP_NAME: backend-super-system
  APP_NAME_ECR: backend-super-system
  IMAGE_TAG: $GTV_ECR_REPOSITORY_URL/$PROJECT/$APP_NAME_ECR
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

services:
  - name: docker:dind
    entrypoint: ["env", "-u", "DOCKER_HOST"]
    command: ["dockerd-entrypoint.sh", "--tls=false"]

build:
  stage: build
  allow_failure: false
  script:
    - echo "Building image."
    - docker pull $IMAGE_TAG || echo "Building runtime from scratch"
    - >
      docker build
      --cache-from $IMAGE_TAG
      -t $IMAGE_TAG --network host .
    - docker push $IMAGE_TAG
Dockerfile
FROM golang:alpine3.15
LABEL maintainer="NoodExe <nood.pr#gmail.com>"
WORKDIR /app
ENV BIN_DIR=/app/bin
RUN apk add --no-cache gcc build-base git
ADD . .
RUN chmod +x scripts/env.sh scripts/build.sh \
&& ./scripts/env.sh \
&& ./scripts/build.sh
# stage 2
FROM alpine:latest
WORKDIR /app
ENV BIN_DIR=/app/bin
ENV SCRIPTS_DIR=/app/scripts
ENV DATA_DIR=/app/data
# Build Args
ARG LOG_DIR=/var/log/nood
# Create log directory
RUN mkdir -p ${BIN_DIR} \
    && mkdir -p ${SCRIPTS_DIR} \
    && mkdir -p ${DATA_DIR} \
    && mkdir -p ${LOG_DIR} \
&& apk update \
&& addgroup -S nood \
&& adduser -S nood -G nood \
&& chown nood:nood /app \
&& chown nood:nood ${LOG_DIR}
USER nood
COPY --chown=nood:nood --from=0 ${BIN_DIR} /app
COPY --chown=nood:nood --from=0 ${DATA_DIR} ${DATA_DIR}
COPY --chown=nood:nood --from=0 ${SCRIPTS_DIR} ${SCRIPTS_DIR}
RUN chmod +x ${SCRIPTS_DIR}/startup.sh
ENTRYPOINT ["/app/scripts/startup.sh"]
scripts/env.sh
#!/bin/sh
go env -w GOPRIVATE=gitlab.domain.com/*
git config --global --add url."https://nood_deploy:rvbsosecret_Hizt97zQSn@gitlab.domain.com".insteadOf "https://gitlab.domain.com"
scripts/build.sh
#!/bin/sh
grep -v "replace\s.*=>.*" go.mod > tmpfile && mv tmpfile go.mod
go mod tidy
set -e
BIN_DIR=${BIN_DIR:-/app/bin}
mkdir -p "$BIN_DIR"
files=`ls *.go`
echo "****************************************"
echo "******** building applications **********"
echo "****************************************"
for file in $files; do
  echo building $file
  go build -o "$BIN_DIR"/${file%.go} $file
done
Thank you for still being here :3
This is a known issue with installing Go modules from GitLab in nested locations. The issue describes several workarounds/solutions. One solution is described as follows (a sketch of wiring it into a CI job follows below):
1. Create a GitLab Personal Access Token with at least the read_api and read_repository scopes.
2. Create a .netrc file:
   machine gitlab.com
   login yourname@gitlab.com
   password yourpersonalaccesstoken
3. Use go get --insecure to get your module.
4. Do not use the .gitconfig insteadOf workaround.
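A minimal sketch of wiring the .netrc workaround into a CI job (for example in before_script or scripts/env.sh), assuming GITLAB_GO_USER and GITLAB_GO_TOKEN are CI variables holding the login and the personal access token:
# Assumed CI variables: GITLAB_GO_USER, GITLAB_GO_TOKEN (personal access token)
go env -w GOPRIVATE='gitlab.domain.com/*'
cat > ~/.netrc <<EOF
machine gitlab.domain.com
login ${GITLAB_GO_USER}
password ${GITLAB_GO_TOKEN}
EOF
chmod 600 ~/.netrc
go mod tidy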
For self-hosted instances of GitLab, there is also the additional option of using the go proxy, which is what I do to resolve this problem.
For additional context, see this answer to What's the proper way to "go get" a private repository?
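On the self-hosted Go proxy option mentioned above: if your GitLab instance has the Go module proxy feature enabled, the client-side setup is roughly the sketch below. The endpoint format, the PROJECT_ID placeholder, and the GITLAB_GO_TOKEN variable are assumptions; check your instance's documentation for the exact details.
# Assumed: PROJECT_ID of the project hosting the module, GITLAB_GO_TOKEN with read_api scope
go env -w GOPROXY="https://gitlab.domain.com/api/v4/projects/${PROJECT_ID}/packages/go,https://proxy.golang.org,direct"
# GOPRIVATE implies GONOPROXY, which would bypass the proxy for these paths, so override it
go env -w GONOPROXY="none"
# Still skip the public checksum database for the private modules
go env -w GONOSUMDB="gitlab.domain.com/*"
# Credentials for the API endpoint go in ~/.netrc, as in the previous sketch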

Correct way to deploy a container from GitLab to EC2

I am trying to deploy my container from the GitLab registry to an EC2 instance. I managed to get it deployed, but when I change something and want to deploy again, the old container and the old images have to be removed first. For that I created this script to remove everything and deploy again.
...
deploy-job:
  stage: deploy
  only:
    - master
  script:
    - mkdir -p ~/.ssh
    - echo -e "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - ssh -i ~/.ssh/id_rsa ec2-user@$DEPLOY_SERVER "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com &&
      docker stop $(docker ps -a -q) &&
      docker rm $(docker ps -a -q) &&
      docker pull registry.gitlab.com/doesntmatter/demo:latest &&
      docker image tag registry.gitlab.com/doesntmatter/demo:latest doesntmatter/demo &&
      docker run -d -p 80:8080 doesntmatter/demo"
When I run this script, I get this error:
"docker stop" requires at least 1 argument. <<-------------------- error
See 'docker stop --help'.
Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop one or more running containers
Running after script
00:01
Uploading artifacts for failed job
00:01
ERROR: Job failed: exit code 1
If you look closer, I use $(docker ps -a -q) after the docker stop.
Questions:
I know this is not an ideal way to do my deployments (I'm a developer here); can you please suggest other ways, using just GitLab and EC2?
Is there any way to avoid this error, whether or not there are containers on my machine?
Probably no containers were running when the job was executed.
To avoid this behavior, you can change your commands a bit:
docker ps -a -q | xargs -r sudo docker stop
docker ps -a -q | xargs -r sudo docker rm
These will not produce errors if no containers are running.
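Applied to the job above, the remote command could take roughly this shape (a sketch only, with the image name and ports kept from the question and quoting kept simple):
# stop/rm become no-ops on the EC2 host when nothing is running
ssh -i ~/.ssh/id_rsa ec2-user@$DEPLOY_SERVER "
  docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com &&
  docker ps -a -q | xargs -r docker stop &&
  docker ps -a -q | xargs -r docker rm &&
  docker pull registry.gitlab.com/doesntmatter/demo:latest &&
  docker image tag registry.gitlab.com/doesntmatter/demo:latest doesntmatter/demo &&
  docker run -d -p 80:8080 doesntmatter/demo"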
Beyond that, there are indeed other ways to deploy a container on AWS, with services such as ECS, EKS, or Fargate that handle containers very well. Also think about Terraform to deploy your infrastructure using the IaC principle (even for your EC2 instance).

How to delete docker images in Jenkins Job

I want to delete the remains of some Docker-operations from within Jenkins.
But somehow the following line does not work...
The issue seems to be with the parentheses.
Any advice?
if [ docker images -f dangling=true -q|wc -l > 0 ]; then docker rmi --force $(docker images -f dangling=true -q);fi
Newer versions of Docker now have the system prune command.
To remove dangling images:
$ docker system prune
To remove dangling as well as unused images:
$ docker system prune --all
To prune volumes:
$ docker system prune --volumes
To prune the universe:
$ docker system prune --force --all --volumes
docker image prune deletes all dangling images. docker image prune -a deletes unused images too. This thread explains what dangling and unused images are.
In short: a dangling image has no tag; an unused image has no container attached.
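For example, pruning can also be limited by age, so that only images that have been unused for a while are removed (the 24h value is just an illustration):
# Remove dangling images without prompting
docker image prune -f
# Remove all images not used by any container and older than 24 hours
docker image prune -a -f --filter "until=24h"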
Remove dangling images
With xargs you will need --no-run-if-empty (-r) to avoid executing docker rmi with no arguments:
docker images --quiet --filter=dangling=true | xargs --no-run-if-empty docker rmi
Or use a normal bash command to check and delete:
if docker images -f "dangling=true" | grep ago --quiet; then
  docker rmi -f $(docker images -f "dangling=true" -q)
fi
I would store the output of the docker images command and then use it:
images=$(docker images -f dangling=true -q); if [[ ${images} ]]; then docker rmi --force ${images}; fi
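For a complete cleanup step in a Jenkins "Execute shell" build step, a sketch using the xargs -r form sidesteps both the parentheses problem and the empty-argument error:
#!/bin/sh
# Both lines are no-ops when there is nothing to remove
docker ps -a -q -f status=exited | xargs -r docker rm -v
docker images -f dangling=true -q | xargs -r docker rmi --force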

Bash parse docker status to check if local image is up to date

I have a starting docker script here:
#!/usr/bin/env bash
set -e
echo '>>> Get old container id'
CID=$(sudo docker ps --all | grep "web-client" | awk '{print $1}')
echo $CID
echo '>>> Stopping and deleting old container'
if [ "$CID" != "" ];
then
sudo docker stop $CID
sudo docker rm $CID
fi
echo '>>> Starting new container'
sudo docker pull my-example-registry.com:5050/web-client:latest
sudo docker run --name=web-client -p 8080:80 -d my-example-registry.com:5050/web-client:latest
The problem is that this script gives an improper result: it deletes the old container every time the script is run.
The "starting new container" section always pulls the most recent image. Here is an example of the docker pull output when the local image is already up to date:
Status: Image is up to date for
my-example-registry:5050/web-client:latest
Is there any way to improve my script by adding a condition: before anything else, check via docker pull whether the local image is the most recent version available on the registry. Then, only if a newer image was pulled, proceed with stopping and deleting the old container and running the newly pulled image with docker run.
In this script, how do I parse the status to check whether the local image corresponds to the most up-to-date one available on the registry?
Maybe a docker command can do the trick, but I didn't manage to find a useful one.
Check for the string "Image is up to date" to know whether the local image was already current, and exit early in that case:
if sudo docker pull my-example-registry.com:5050/web-client:latest |
   grep -q "Image is up to date"; then
  echo 'Already up to date. Exiting...'
  exit 0
fi
So change your script to:
#!/usr/bin/env bash
set -e
if sudo docker pull my-example-registry.com:5050/web-client:latest |
   grep -q "Image is up to date"; then
  echo 'Already up to date. Exiting...'
  exit 0
fi
echo '>>> Get old container id'
CID=$(sudo docker ps --all | grep "web-client" | awk '{print $1}')
echo $CID
echo '>>> Stopping and deleting old container'
if [ "$CID" != "" ];
then
sudo docker stop $CID
sudo docker rm $CID
fi
echo '>>> Starting new container'
sudo docker run --name=web-client -p 8080:80 -d my-example-registry.com:5050/web-client:latest
Or simply use docker-compose and you can remove all of the above:
docker-compose pull && docker-compose up
This will pull the image, if it exists, and up will only recreate the container if there actually is a newer image; otherwise it will do nothing.
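As a usage example, those two commands can be dropped into a small script run from cron; /srv/www is a placeholder path and -d keeps the cron job from staying attached:
#!/usr/bin/env bash
set -e
cd /srv/www   # placeholder: directory containing docker-compose.yml
docker-compose pull
docker-compose up -d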
If you're using docker compose, here's my solution: I put my latest docker-compose.yml into an image right after I've pushed all of the needed images that are referenced in docker-compose.yml.
The server runs this as a cron job:
#!/usr/bin/env bash
docker login --username username --password password
if (( $? > 0 )) ; then
echo 'Failed to login'
exit 1
fi
# Grab latest config, if the image is different then we have a new update to make
pullContents=$(docker pull my/registry:config-holder)
if (( $? > 0 )) ; then
echo 'Failed to pull image'
exit 1
fi
if echo $pullContents | grep "Image is up to date" ; then
echo 'Image already up to date'
exit 0
fi
cd /srv/www/
# Grab latest docker-compose.yml that we'll be needing
docker run -d --name config-holder my/registry:config-holder
docker cp config-holder:/home/docker-compose.yml docker-compose-new.yml
docker stop config-holder
docker rm config-holder
# Use new yml to pull latest images
docker-compose -f docker-compose-new.yml pull
# Stop server
docker-compose down
# Replace old yml file with our new one, and spin back up
mv docker-compose-new.yml docker-compose.yml
docker-compose up -d
Config holder dockerfile:
FROM bash
# This image exists just to hold the docker-compose.yml. So when remote updating the server can pull this, get the latest docker-compose file, then pull those
COPY docker-compose.yml /home/docker-compose.yml
# Ensures that the image is subtly different every time we deploy. This is required because we want the server to detect that this image has changed, to trigger a new deployment
RUN bash -c "touch random.txt; echo $(echo $RANDOM | md5sum | head -c 20) >> random.txt"
# Wait forever
CMD exec bash -c "trap : TERM INT; sleep infinity & wait"
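For completeness, a sketch of how the config-holder image itself would be rebuilt and pushed after updating docker-compose.yml (registry name as used above):
# Run from the directory containing docker-compose.yml and the config-holder Dockerfile
docker build -t my/registry:config-holder .
docker push my/registry:config-holder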

Stopping docker containers by image name, and don't error if no containers are running

This question explains how to stop Docker containers started from an image.
But if there are no running containers, I get the error "docker stop" requires a minimum of one argument, which means I can't run this command in a long .sh script without it breaking.
How do I change these commands to work even if no results are found?
docker stop $(docker ps -q --filter ancestor="imagname")
docker rm `docker ps -aq` &&
(I'm looking for a pure Docker answer if possible, not a bash test, as I'm running my script over ssh so I don't think I have access to normal script tests)
Putting this here in case it can help others:
To stop containers using specific image:
docker ps -q --filter ancestor="imagename" | xargs -r docker stop
To remove exited containers:
docker rm -v $(docker ps -a -q -f status=exited)
To remove unused images:
docker rmi $(docker images -f "dangling=true" -q)
If you are using a Docker > 1.9:
docker volume rm $(docker volume ls -qf dangling=true)
If you are using Docker <= 1.9, use this instead:
docker run -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker --rm martin/docker-cleanup-volumes
Docker 1.13 Update:
To remove unused images:
docker image prune
To remove unused containers:
docker container prune
To remove unused volumes:
docker volume prune
To remove unused networks:
docker network prune
To remove all unused components:
docker system prune
IMPORTANT: Make sure you understand the commands and backup important data before executing this in production.
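Putting the pieces together, a script-safe version of the two commands from the question could look like this sketch (imagname kept from the question):
#!/bin/sh
# Stop containers started from the image; no-op if none are running
docker ps -q --filter ancestor="imagname" | xargs -r docker stop
# Remove exited containers; no-op if there are none
docker ps -a -q -f status=exited | xargs -r docker rm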
