No such image in gitlab-ci with docker-in-docker - ruby

I use docker-api in my Rails project.
I need to create containers from my custom image.
.gitlab-ci.yml:
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  DB_HOST: postgres
  RAILS_ENV: test

build:
  stage: build
  image: docker:19.03.0
  services:
    - docker:19.03.0-dind
  script:
    - docker build -t my_image docker/my_image/.

rspec:
  image: ruby:2.6.3
  services:
    - postgres:10.1
    - docker:19.03.0-dind
  script:
    - export DOCKER_URL=tcp://docker:2375
    - cp config/database.yml.gitlab_ci config/database.yml
    - gem install bundler
    - bundle install
    - rails db:create
    - rails db:migrate
    - rspec
I get this error:

Failure/Error:
  container = Docker::Container.create('Cmd' => ['tail', '-f', '/dev/null'],
                                       'Image' => 'my_image')

Docker::Error::NotFoundError:
  No such image: my_image:latest

How can I solve this problem?

Each job you specified will use a different instance of dind. You probably need to push the image from the first job and pull it in the second job, or build the image inside the job that uses it:

.gitlab-ci.yml:
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  DB_HOST: postgres
  RAILS_ENV: test

rspec:
  image: ruby:2.6.3
  services:
    - postgres:10.1
    - docker:19.03.0-dind
  script:
    - cp config/database.yml.gitlab_ci config/database.yml
    - gem install bundler
    - bundle install
    - export DOCKER_URL=tcp://docker:2375
    - rails db:create
    - rails db:migrate
    - rake create_image
    - rspec

rubocop:
  image: ruby:2.6.3
  script:
    - gem install bundler
    - bundle install
    - bundle exec rubocop
  artifacts:
    expire_in: 10 min
create_image.rake:

task :create_image do
  image = Docker::Image.build_from_dir('./docker/my_image/.')
  image.tag('repo' => 'my_image', 'tag' => 'latest', force: true)
end
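If you would rather keep the separate build stage, the usual pattern is to push the image to a registry in the build job and pull it in the rspec job. A rough sketch using GitLab's built-in Container Registry (the CI_REGISTRY* variables are predefined by GitLab; the image path here is an assumption, adjust it to your project):

```yaml
build:
  stage: build
  image: docker:19.03.0
  services:
    - docker:19.03.0-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/my_image:latest" docker/my_image/.
    - docker push "$CI_REGISTRY_IMAGE/my_image:latest"
```

The rspec job would then docker pull "$CI_REGISTRY_IMAGE/my_image:latest" and reference that full name in Docker::Container.create, instead of rebuilding the image.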

Related

PHP Fatal error on CI/CD run php artisan test

I use docker-compose with Laravel and PostgreSQL, and everything works fine on my local system. The problem is in the CI/CD.
I have changed the CI/CD YAML file over and over, but I am stuck!
CI/CD
name: CI/CD
on:
  pull_request:
    branches: ['master']
  push:
    branches: ['master']
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: shivammathur/setup-php@v2
        with:
          php-version: '7.4'
      - uses: actions/checkout@v2
      - name: Run Containers
        run: docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
      # - name: Run composer install
      #   run: cd companyname_app_dir && composer install
      # - name: Run composer update
      #   run: cd companyname_app_dir && composer update
      # - name: Setup Project
      #   run: |
      #     cd companyname_app_dir
      #     composer update
      #     composer install
      #     php artisan config:clear
      #     php artisan cache:clear
      - name: Run test
        run: cd companyname_app_dir && php artisan test
        env:
          APP_KEY: base64:x06N/IsV5iJ+R6TKlr6sC6Mr4riGgl8Rg09XHHnRZQw=
          APP_ENV: testing
          DB_CONNECTION: companyname-postgres
          DB_DATABASE: db_test
          DB_USERNAME: root
          DB_PASSWORD: 1234
  deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: secret
          password: secret
      - name: Build and push
        uses: docker/build-push-action@v3
        with:
          push: true
          file: ./companyname_app_dir/Dockerfile
          tags: company_image:latest
          build-args: |
            "NODE_ENV=production"
The commented-out steps are variants I tried, but none of them let me run the tests successfully.
docker-compose.yml:

version: '3'
networks:
  companyname_network:
    driver: bridge
services:
  nginx:
    image: nginx:stable-alpine
    container_name: companyname-nginx
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    restart: always
    depends_on:
      - companyname_app
    networks:
      - companyname_network
  companyname_app:
    restart: 'always'
    image: 'companyname_laravel'
    container_name: companyname-app
    build:
      context: .
      dockerfile: ./Dockerfile
    networks:
      - companyname_network
    depends_on:
      - companyname_db
  companyname_db:
    image: 'companyname_multiple_db'
    container_name: companyname-postgres
    build:
      context: .
      dockerfile: ./DockerfileDB
    restart: 'always'
    volumes:
      - local_pgdata:/docker-entrypoint-initdb.d
    environment:
      - POSTGRES_MULTIPLE_DATABASES=db,db_test
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=1234
    ports:
      - 15432:5432
    networks:
      - companyname_network
  companyname_dbadmin:
    image: adminer
    container_name: companyname-dbadmin
    restart: 'always'
    depends_on:
      - companyname_db
    ports:
      - 5051:8080
    networks:
      - companyname_network
volumes:
  local_pgdata:
docker-compose.dev.yml:

version: '3'
services:
  nginx:
    ports:
      - 9000:80
  companyname_app:
    build:
      args:
        - NODE_ENV=development
    volumes:
      - ./companyname_app_dir:/app
      - /app/vendor
With this file, I get an error:
Run cd companyname_app_dir && php artisan test
PHP Warning: require(/home/runner/work/companyname_app/companyname_app/companyname_app_dir/vendor/autoload.php): failed to open stream: No such file or directory in /home/runner/work/companyname_app/companyname_app/companyname_app_dir/artisan on line 18
PHP Fatal error: require(): Failed opening required '/home/runner/work/companyname_app/companyname_app/companyname_app_dir/vendor/autoload.php' (include_path='.:/usr/share/php') in /home/runner/work/companyname_app/companyname_app/companyname_app_dir/artisan on line 18
Error: Process completed with exit code 255.
If I use:

- name: Run composer install
  run: cd companyname_app_dir && composer install
- name: Run composer update
  run: cd companyname_app_dir && composer update

in the CI/CD YAML and remove the Run Containers part, composer install and composer update succeed, but php artisan test throws this error:
postgresql can not connect
1. You must run composer install, otherwise you will have no vendor folder at all, so you have nothing to run. That is why you get an error when you skip composer install.
2. You should not run composer update, because that updates packages to newer versions; you never do that in production. Just run composer install --no-dev.
3. You are mixing running Docker with running a command OUTSIDE the Docker container.
4. Related to point 3: if you are using docker-compose, you cannot execute:
- name: Run test
  run: cd companyname_app_dir && php artisan test
  env:
    APP_KEY: base64:x06N/IsV5iJ+R6TKlr6sC6Mr4riGgl8Rg09XHHnRZQw=
    APP_ENV: testing
    DB_CONNECTION: companyname-postgres
    DB_DATABASE: db_test
    DB_USERNAME: root
    DB_PASSWORD: 1234
because you are outside Docker. You should execute docker-compose exec companyname_app php artisan test instead, which runs the tests INSIDE the Docker container, where you correctly have everything set up.
So your code (if I am not missing anything) should be:
- name: Run test
  run: docker-compose exec companyname_app php artisan test
  env:
    APP_KEY: base64:x06N/IsV5iJ+R6TKlr6sC6Mr4riGgl8Rg09XHHnRZQw=
    APP_ENV: testing
    DB_CONNECTION: companyname-postgres
    DB_DATABASE: db_test
    DB_USERNAME: root
    DB_PASSWORD: 1234
But I am not certain what you will get back from that execution; I have no idea whether, if the tests fail, the CI/CD system (I am assuming you are using GitHub Actions or Bitbucket Pipelines) will truly identify the failure.
What I usually do is just install everything on the CI/CD machine itself, instead of using a Dockerfile or a docker-compose YAML. But that is my preference (at least for PHP/Laravel).
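That last approach (installing directly on the runner) could be sketched roughly like this for GitHub Actions. This is only a sketch built on assumptions about the project: it swaps the custom companyname_multiple_db image for the stock postgres service container and assumes the Laravel app has a standard pgsql connection configured:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:13
        env:
          POSTGRES_USER: root
          POSTGRES_PASSWORD: 1234
          POSTGRES_DB: db_test
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v2
      - uses: shivammathur/setup-php@v2
        with:
          php-version: '7.4'
      - name: Install dependencies
        run: cd companyname_app_dir && composer install
      - name: Run tests
        run: cd companyname_app_dir && php artisan test
        env:
          APP_ENV: testing
          DB_CONNECTION: pgsql
          DB_HOST: 127.0.0.1
          DB_PORT: 5432
          DB_DATABASE: db_test
          DB_USERNAME: root
          DB_PASSWORD: 1234
```

The service container exposes Postgres on 127.0.0.1:5432 of the runner, so the test step can reach the database without docker-compose at all.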

User in docker can't create folders and files

I have a problem with creating folders and files within a docker container. I have a Ruby and Hanami web app deployed using Dockerfile and docker-compose yaml. Here is the content of both files for reference.
Dockerfile:
FROM ruby:2.7.5-bullseye
RUN apt-get update && apt-get install vim -y
RUN bundle config --global frozen 1
RUN adduser --disabled-login app_owner
USER app_owner
WORKDIR /usr/src/app
COPY --chown=app_owner Gemfile Gemfile.lock ./
COPY --chown=app_owner . ./
RUN gem install bundler:1.17.3
RUN bundle install
ENV HANAMI_HOST=0.0.0.0
ENV HANAMI_ENV=production
EXPOSE 2300
docker-compose.yml:
version: '3'
services:
  postgres:
    image: postgres
    restart: unless-stopped
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    ports:
      - 5432
    volumes:
      - postgres_staging:/var/lib/postgresql/data
  web:
    build: .
    restart: unless-stopped
    command: >
      bash -c "bundle exec hanami db migrate
      && bundle exec rake initial_settings:add_default_language
      && bundle exec rake initial_settings:add_session_validity
      && bundle exec rake import_user:create
      && bundle exec rake super_admin:create
      && bundle exec rake create_parser_rules:start
      && bundle exec hanami assets precompile
      && cp -r apps/myapp/assets/webfonts public/webfonts
      && cp -r apps/myapp/assets/webfonts public/assets/webfonts
      && cp -r apps/myapp/assets/images/sort*.png public/assets
      && cp -r apps/myapp/assets/images/sort*.png public
      && cp -r apps/myapp/assets/images/ui-icons*.png public/assets/wordrocket
      && mkdir public/assets/images
      && cp -r apps/myapp/assets/images/sort*.png public/assets/images
      && bundle exec hanami server"
    volumes:
      - ./hanami_log:/usr/src/app/hanami_log
    links:
      - postgres
    depends_on:
      - postgres
  nginx:
    image: nginx:alpine
    restart: unless-stopped
    tty: true
    ports:
      - "${NGINX_PORT}:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - web
volumes:
  postgres_staging:
  web:
  nginx:
The docker-compose file runs some initial commands, and the ones that create folders and files are failing. For example, bundle exec hanami assets precompile fails with: Permission denied @ dir_s_mkdir - /usr/src/app/public
The command creates that folder and then copies some files into it. I've confirmed this error by omitting the command from docker-compose and then running it manually in the running container; I get the same error.
Is my configuration incorrect?
EDIT: I forgot one important thing: the problem occurs on the client's server running Ubuntu 20.04, while everything works without issues on my dev laptop.
Thank you.
Seba
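One detail worth checking (an assumption on my part, not a confirmed diagnosis): directories auto-created by WORKDIR have historically been owned by root even when USER is set beforehand, which would make mkdir under /usr/src/app fail for app_owner; the exact behavior has varied across Docker versions, which could explain why the laptop works and the server does not. A hypothetical adjustment would be to create and chown the tree explicitly while still root:

```dockerfile
FROM ruby:2.7.5-bullseye
RUN apt-get update && apt-get install vim -y
RUN bundle config --global frozen 1
RUN adduser --disabled-login app_owner
# Assumption: pre-create the app directory (including public/) as root and
# hand ownership to app_owner, so runtime mkdir/cp by that user can succeed.
RUN mkdir -p /usr/src/app/public && chown -R app_owner:app_owner /usr/src/app
USER app_owner
WORKDIR /usr/src/app
```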

Circle-ci: No workflow when I set filters to only tags and ignore all branches

I'm trying to run a workflow only on tags, but I'm getting no workflow.
And when I remove the branches ignore filter, it runs on every branch and tag.
Am I missing something? Or what exactly can I achieve with this use case?
Here is a screenshot; I'm expecting the workflow to run on instable-2.7.31.
Screenshot: no workflow
Thanks.
My .circleci/config.yml
only-deploy-unstable: &only-deploy-unstable
  context: Unstable-context
  filters:
    tags:
      only: /^unstable-.*/
    branches:
      ignore: /.*/
version: 2.1
jobs:
  build_unstable:
    docker:
      - image: docker:20.10.8
    environment:
      DOCKER_IMAGE_BASE_URL: **********
    steps:
      - checkout
      - setup_remote_docker
      - run: apk update
      - run: apk add git
      - run: docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
      - run:
          name: build and push unstable docker image
          no_output_timeout: 15m
          command: |
            export TAG_NAME=$(git describe --tags --abbrev=0)
            echo ${DOCKER_IMAGE_BASE_URL}:$TAG_NAME
            docker build --build-arg STAGING=test --rm -t $DOCKER_IMAGE_BASE_URL:$TAG_NAME -t $DOCKER_IMAGE_BASE_URL:latest .
            docker push $DOCKER_IMAGE_BASE_URL:$TAG_NAME
            docker push $DOCKER_IMAGE_BASE_URL:latest
  deploy_unstable:
    docker:
      - image: docker:20.10.8
    steps:
      - checkout
      - setup_remote_docker
      - run: command -v ssh-agent >/dev/null || ( apk add --update openssh )
      - run: eval $(ssh-agent -s)
      - run: ********************
workflows:
  # build unstable-version
  build_and_push_unstable:
    jobs:
      - build_unstable: *only-deploy-unstable
      - hold:
          <<: *only-deploy-unstable
          type: approval
          requires:
            - build_unstable
      - deploy_unstable:
          <<: *only-deploy-unstable
          requires:
            - hold
It could be the syntax you use when referencing your YAML alias. Have you tried:
- build_unstable:
    <<: *only-deploy-unstable
- hold:
    <<: *only-deploy-unstable
    type: approval
    requires:
      - build_unstable
- deploy_unstable:
    <<: *only-deploy-unstable
    requires:
      - hold

continuous integration with drone and github: build is not triggering on commit

I'm using the open source edition of drone.
docker-compose.yml:
version: '2'
services:
  drone-server:
    image: drone/drone:0.5
    ports:
      - 80:8000
    volumes:
      - ./drone:/var/lib/drone/
    restart: always
    environment:
      - DRONE_OPEN=true
      - DRONE_ADMIN=khataev
      - DRONE_GITHUB_CLIENT=github-client-string
      - DRONE_GITHUB_SECRET=github-secret-string
      - DRONE_SECRET=drone-secret-string
  drone-agent:
    image: drone/drone:0.5
    command: agent
    restart: always
    depends_on: [ drone-server ]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=ws://drone-server:8000/ws/broker
      - DRONE_SECRET=drone-secret-string
The application is registered and authorized on GitHub, and the secret/client strings are provided.
I placed a .drone.yml file into my project repository:
pipeline:
  build:
    image: rails-qna
    commands:
      - bundle exec rake db:drop
      - bundle exec rake db:create
      - bundle exec rake db:migrate
      - bundle exec rspec
Screenshots of project settings and build status:
1) What have I missed? Why is the build not triggered by a commit to the repo?
2) How can I trigger a build manually?
3) What is Timeout in the project settings?
Found the issue: GitHub webhooks could not reach my drone server due to network settings.

Missing Gemfile when using Docker on Heroku

I am trying out the Docker plugin for Heroku, just locally to start with. When I run docker-compose up web I get the following error:
Building web
Step 1 : FROM heroku/ruby
# Executing 7 build triggers...
Step 1 : COPY Gemfile Gemfile.lock /app/user/
ERROR: Service 'web' failed to build: lstat Gemfile: no such file or directory
This is my docker-compose.yml:
web:
  build: .
  command: 'bash -c ''bundle exec puma -C config/puma.rb'''
  working_dir: /app/user
  environment:
    PORT: 8080
    DATABASE_URL: 'postgres://postgres:@herokuPostgresql:5432/postgres'
  ports:
    - '8080:8080'
  links:
    - herokuPostgresql
shell:
  build: .
  command: bash
  working_dir: /app/user
  environment:
    PORT: 8080
    DATABASE_URL: 'postgres://postgres:@herokuPostgresql:5432/postgres'
  ports:
    - '8080:8080'
  links:
    - herokuPostgresql
  volumes:
    - '.:/app/user'
herokuPostgresql:
  image: postgres
Why is the Gemfile missing, and most importantly, what should my Docker setup look like?
