Unable to start MySQL service in docker during gitlab-ci - laravel

I have the following .gitlab-ci.yml, taken from the Laravel Dusk CI example:
stages:
  - build
  - test

# Variables
variables:
  MYSQL_ROOT_PASSWORD: root
  MYSQL_USER: root
  MYSQL_PASSWORD: secret
  MYSQL_DATABASE: test
  DB_HOST: mysql
  DB_CONNECTION: mysql

build:
  stage: build
  services:
    - mysql:5.7
  image: chilio/laravel-dusk-ci:stable
  script:
    - composer install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts
    # - npm install # if you need to install additional modules from your projects package.json
    # - npm run dev # if you need to run dev scripts for example laravel mix
  cache:
    key: ${CI_COMMIT_REF_NAME}
    paths:
      # these are only examples, you should modify them according to your project,
      # or remove cache routines entirely, if they are causing any problems on your next builds..
      # below are 2 safe ones if you use composer install and npm install in your stage script
      - vendor
      - node_modules
      # - /resources/assets/vendors # for example if you put your vendor node-libraries there

test:
  stage: test
  cache:
    key: ${CI_COMMIT_REF_NAME}
    paths:
      - vendor
      - node_modules
    policy: pull
  services:
    - mysql:5.7
  image: chilio/laravel-dusk-ci:stable
  script:
    - cp .env.example .env
    # - cp phpunit.xml.ci phpunit.xml # if you are using custom config for your phpunit tests in CI
    - configure-laravel
    - start-nginx-ci-project
    - ./vendor/phpunit/phpunit/phpunit -v --coverage-text --colors --stderr
    # - phpunit -v --coverage-text --colors --stderr # if you want to use preinstalled phpunit
    - php artisan dusk --colors --debug
  artifacts:
    paths:
      - ./storage/logs # for debugging
      - ./tests/Browser/screenshots
      - ./tests/Browser/console
    expire_in: 7 days
    when: always
However, when the runner executes the job, I keep getting the following warning:
Using Docker executor with image chilio/laravel-dusk-ci:stable ...
Starting service mysql:5.7 ...
Pulling docker image mysql:5.7 ...
Using docker image sha256:66bc0f66b7af6ba3ea96582685d3afcd6dff93c2f8999da0ffadd67b280db548 for mysql:5.7 ...
Waiting for services to be up and running...
*** WARNING: Service runner-237f18d2-project-23-concurrent-0-mysql-0 probably didn't start properly.
Health check error:
ContainerStart: Error response from daemon: Cannot link to a non running container: /runner-237f18d2-project-23-concurrent-0-mysql-0 AS /runner-237f18d2-project-23-concurrent-0-mysql-0-wait-for-service/service
Service container logs:
2018-07-11T19:49:03.214991318Z
2018-07-11T19:49:03.215062485Z ERROR: mysqld failed while attempting to check config
2018-07-11T19:49:03.215067480Z command was: "mysqld --verbose --help"
2018-07-11T19:49:03.215070774Z
2018-07-11T19:49:03.215073778Z mysqld: error while loading shared libraries: libpthread.so.0: cannot stat shared object: Permission denied
I've tried to set the runner to privileged in the config.toml:
privileged = true
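For reference, this is roughly where that setting lives in the runner's config.toml (the runner name, URL, and token below are placeholders, not values from my setup):

[[runners]]
  name = "my-runner"
  url = "https://gitlab.example.com/"
  token = "REDACTED"
  executor = "docker"
  [runners.docker]
    image = "chilio/laravel-dusk-ci:stable"
    privileged = true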

To solve the error:
mysqld: error while loading shared libraries: libpthread.so.0: cannot stat shared object: Permission denied
Step 1: update your packages (and possibly the kernel):
apt-get update && apt-get upgrade
Step 2: install Docker's dependency packages:
(Ubuntu/Debian): apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
(CentOS/RHEL): yum install yum-utils device-mapper-persistent-data lvm2
Step 3: reboot your server and restart Docker:
reboot
systemctl restart docker
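Put together, the steps above look roughly like this on a Debian/Ubuntu host (a sketch of the commands already listed; the systemd unit name docker assumes a standard docker-ce install):

#!/usr/bin/env bash
set -e

# Step 1: update packages (and possibly the kernel)
apt-get update && apt-get upgrade -y

# Step 2: install Docker's dependency packages
apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
# CentOS/RHEL equivalent:
# yum install -y yum-utils device-mapper-persistent-data lvm2

# Step 3: restart Docker (or reboot the server if the kernel was upgraded)
systemctl restart docker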

Related

PHP Fatal error on CI/CD run php artisan test

I use docker-compose with Laravel and PostgreSQL, and everything works fine on my local system. The problem is in the CI/CD.
I have changed the CI/CD yml file over and over, but I am stuck!
CI/CD
name: CI/CD
on:
  pull_request:
    branches: ['master']
  push:
    branches: ['master']
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: shivammathur/setup-php@v2
        with:
          php-version: '7.4'
      - uses: actions/checkout@v2
      - name: Run Containers
        run: docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
      # - name: Run composer install
      #   run: cd companyname_app_dir && composer install
      # - name: Run composer update
      #   run: cd companyname_app_dir && composer update
      # - name: Setup Project
      #   run: |
      #     cd companyname_app_dir
      #     composer update
      #     composer install
      #     php artisan config:clear
      #     php artisan cache:clear
      - name: Run test
        run: cd companyname_app_dir && php artisan test
        env:
          APP_KEY: base64:x06N/IsV5iJ+R6TKlr6sC6Mr4riGgl8Rg09XHHnRZQw=
          APP_ENV: testing
          DB_CONNECTION: companyname-postgres
          DB_DATABASE: db_test
          DB_USERNAME: root
          DB_PASSWORD: 1234
  deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: secret
          password: secret
      - name: Build and push
        uses: docker/build-push-action@v3
        with:
          push: true
          file: ./companyname_app_dir/Dockerfile
          tags: company_image:latest
          build-args: |
            "NODE_ENV=production"
There are commented-out steps in there; I tried using them, but I still couldn't run a test successfully.
docker-compose.yml
version: '3'
networks:
  companyname_network:
    driver: bridge
services:
  nginx:
    image: nginx:stable-alpine
    container_name: companyname-nginx
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    restart: always
    depends_on:
      - companyname_app
    networks:
      - companyname_network
  companyname_app:
    restart: 'always'
    image: 'companyname_laravel'
    container_name: companyname-app
    build:
      context: .
      dockerfile: ./Dockerfile
    networks:
      - companyname_network
    depends_on:
      - companyname_db
  companyname_db:
    image: 'companyname_multiple_db'
    container_name: companyname-postgres
    build:
      context: .
      dockerfile: ./DockerfileDB
    restart: 'always'
    volumes:
      - local_pgdata:/docker-entrypoint-initdb.d
    environment:
      - POSTGRES_MULTIPLE_DATABASES=db,db_test
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=1234
    ports:
      - 15432:5432
    networks:
      - companyname_network
  companyname_dbadmin:
    image: adminer
    container_name: companyname-dbadmin
    restart: 'always'
    depends_on:
      - companyname_db
    ports:
      - 5051:8080
    networks:
      - companyname_network
volumes:
  local_pgdata:
docker-compose.dev.yml
version: '3'
services:
  nginx:
    ports:
      - 9000:80
  companyname_app:
    build:
      args:
        - NODE_ENV=development
    volumes:
      - ./companyname_app_dir:/app
      - /app/vendor
With this file, I get an error:
Run cd companyname_app_dir && php artisan test
PHP Warning: require(/home/runner/work/companyname_app/companyname_app/companyname_app_dir/vendor/autoload.php): failed to open stream: No such file or directory in /home/runner/work/companyname_app/companyname_app/companyname_app_dir/artisan on line 18
PHP Fatal error: require(): Failed opening required '/home/runner/work/companyname_app/companyname_app/companyname_app_dir/vendor/autoload.php' (include_path='.:/usr/share/php') in /home/runner/work/companyname_app/companyname_app/companyname_app_dir/artisan on line 18
Error: Process completed with exit code 255.
If I use:
  - name: Run composer install
    run: cd companyname_app_dir && composer install
  - name: Run composer update
    run: cd companyname_app_dir && composer update
in the CI/CD yml and remove the Run Containers part, composer install and composer update succeed, but php artisan test throws this error:
postgresql can not connect
1. You must run composer install, otherwise you will have no vendor folder at all and therefore nothing to run. That is why you get an error when you don't run composer install.
2. You should not run composer update, because that upgrades packages to new versions; you never do that in production. You just run composer install --no-dev.
3. You are mixing running Docker with a command executed OUTSIDE the Docker container.
Related to point 3: if you are using docker-compose, you cannot execute:
  - name: Run test
    run: cd companyname_app_dir && php artisan test
    env:
      APP_KEY: base64:x06N/IsV5iJ+R6TKlr6sC6Mr4riGgl8Rg09XHHnRZQw=
      APP_ENV: testing
      DB_CONNECTION: companyname-postgres
      DB_DATABASE: db_test
      DB_USERNAME: root
      DB_PASSWORD: 1234
That is because you are outside Docker. You should instead execute docker-compose exec companyname_app php artisan test, which runs the tests INSIDE the Docker container, where you correctly have everything set up.
So your code (if I am not missing anything) should be:
  - name: Run test
    run: docker-compose exec companyname_app php artisan test
    env:
      APP_KEY: base64:x06N/IsV5iJ+R6TKlr6sC6Mr4riGgl8Rg09XHHnRZQw=
      APP_ENV: testing
      DB_CONNECTION: companyname-postgres
      DB_DATABASE: db_test
      DB_USERNAME: root
      DB_PASSWORD: 1234
But I am not certain what you will get back from that execution; I have no idea whether, if a test fails, the CI/CD (I am assuming you are using GitHub Actions or Bitbucket Pipelines) will truly identify that it has failed.
What I usually do is just install everything on the CI/CD machine itself, instead of using a Dockerfile or docker-compose YAML. But that is my preference (at least for PHP/Laravel).
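For illustration, a minimal sketch of that "install everything on the CI machine" approach as a GitHub Actions job, assuming a Postgres service container; the pgsql connection settings, port, and image tag here are assumptions, not values taken from your project:

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:13
        env:
          POSTGRES_USER: root
          POSTGRES_PASSWORD: 1234
          POSTGRES_DB: db_test
        ports:
          - 5432:5432
        # Wait until Postgres is accepting connections before the steps run
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v2
      - uses: shivammathur/setup-php@v2
        with:
          php-version: '7.4'
      - name: Install dependencies
        run: cd companyname_app_dir && composer install --prefer-dist --no-interaction
      - name: Run tests
        run: cd companyname_app_dir && php artisan test
        env:
          APP_KEY: base64:x06N/IsV5iJ+R6TKlr6sC6Mr4riGgl8Rg09XHHnRZQw=
          APP_ENV: testing
          DB_CONNECTION: pgsql
          DB_HOST: 127.0.0.1
          DB_PORT: 5432
          DB_DATABASE: db_test
          DB_USERNAME: root
          DB_PASSWORD: 1234

With a service container like this, the Laravel pgsql connection points at 127.0.0.1:5432 on the runner instead of at a compose container name.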

sh: 1: nest: Permission denied in GitHub Action

For some reason, the build step for my NestJS project in my GitHub Action has been failing for a few days now. I use Turborepo with pnpm in a monorepo and try to run the build with turbo run build. This works flawlessly on my local machine, but in GitHub it fails with sh: 1: nest: Permission denied and ELIFECYCLE Command failed with exit code 126. I'm not sure how this is possible, since I couldn't find any meaningful change I made to the code in the meantime; it just stopped working unexpectedly. I actually think it is an issue with GitHub Actions, since the build also works in my local Docker build.
Has anyone else encountered this issue with NestJS in GH Actions?
This is my action yml:
name: Test, lint and build
on:
  push:
jobs:
  test-lint-build:
    runs-on: ubuntu-latest
    services:
      postgres:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_HOST: localhost
          POSTGRES_USER: test
          POSTGRES_PASSWORD: docker
          POSTGRES_DB: financing-database
        ports:
          # Maps tcp port 5432 on service container to the host
          - 2345:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Install pnpm
        uses: pnpm/action-setup@v2.2.2
        with:
          version: latest
      - name: Install
        run: pnpm i
      - name: Lint
        run: pnpm run lint
      - name: Test
        run: pnpm run test
      - name: Build
        run: pnpm run build
        env:
          VITE_SERVER_ENDPOINT: http://localhost:8000/api
      - name: Test financing-server (e2e)
        run: pnpm --filter @project/financing-server run test:e2e
I found out what was causing the problem. I was using node-linker = hoisted to mitigate some issues that pnpm's way of linking modules was causing with my Jest tests. Removing this from my project suddenly made the action work again.
I still don't know why this only broke the build recently, since I've had this option enabled for some time now.
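For reference, the option in question is a single line in the workspace .npmrc (the file location is pnpm's default and an assumption here); deleting it restores pnpm's default isolated node_modules layout:

# .npmrc at the repository root
node-linker=hoisted   # removing this line made the GitHub Action pass again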

Gitlab pipeline error With CD/CI for AWS ec2 debian instance: This job is stuck because you don't have any active runners online

I want to create a CI/CD pipeline between GitLab and an AWS EC2 deployment.
My repository is a Node.js/Express web server project.
I created this .gitlab-ci.yml:
image: node:latest
cache:
  paths:
    - node_modules/
stages:
  - build
  - test
  - staging
  - openMr
  - production
before_script:
  - apt-get update -qq && apt-get install
Build:
  stage: build
  tags:
    - node
  before_script:
    - yarn config set cache-folder .yarn
    - yarn install
  script:
    - npm run build
Test:
  stage: test
  tags:
    - node
  before_script:
    - yarn config set cache-folder .yarn
    - yarn install --frozen-lockfile
  script:
    - npm run test
Deploy to Production:
  stage: production
  tags:
    - node
  before_script:
    - mkdir -p ~/.ssh
    - echo -e "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - bash ./gitlab-deploy/.gitlab-deploy.prod.sh
  environment:
    name: production
    url: http://ec2-url.compute.amazonaws.com:81
When I push a new commit, the pipeline fails on the build step and I get this warning:
This job is stuck because you don't have any active runners online or available with any of these tags assigned to them: node
I checked my runner in the GitLab Settings > CI/CD page.
After that I checked the server:
admin@ip-111.222.222.111:~$ gitlab-runner status
Runtime platform arch=amd64 os=linux pid=18787 revision=98daeee0 version=14.7.0
FATAL: The --user is not supported for non-root users
You need to remove the tag node from your jobs. Runner tags define which runners may pick up your jobs (https://docs.gitlab.com/ee/ci/runners/configure_runners.html#use-tags-to-control-which-jobs-a-runner-can-run). As there is no runner available that supports the tag node, your job gets stuck.
It doesn't look like your pipeline has any special requirements, so you can simply remove the tag and let the job be picked up by any runner.
The runner visible in your screenshot supports the tag shop_service_runner. So another option would be to change the tag node to shop_service_runner, which would let this runner (and every runner with the same tags) pick up the job, as in the sketch below.
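For example, the Build job from your pipeline would become the following (shop_service_runner is taken from your screenshot; everything else is unchanged) — or you can drop the tags block entirely:

Build:
  stage: build
  tags:
    - shop_service_runner
  before_script:
    - yarn config set cache-folder .yarn
    - yarn install
  script:
    - npm run build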

Can someone look at my yaml file for code deployment using Bitbucket Pipelines?

This is my first attempt at setting up pipelines or even using any CI/CD tool. So, reading the documentation at Bitbucket, I added a bitbucket-pipelines.yml file to the root of my Laravel application for a build. Here is the file.
image: php:7.4-fpm
pipelines:
  default:
    - step:
        name: Build and test
        caches:
          - composer
        script:
          - apt-get update && apt-get install -qy git curl libmcrypt-dev mariadb-client ghostscript
          - yes | pecl install mcrypt-1.0.3
          - docker-php-ext-install pdo_mysql bcmath exif
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - ln -f -s .env.pipelines .env
          - php artisan migrate
          - ./vendor/bin/phpunit
        services:
          - mysql
          - redis
definitions:
  services:
    mysql:
      image: mysql:5.7
      environment:
        MYSQL_DATABASE: "laravel-pipeline"
        MYSQL_RANDOM_ROOT_PASSWORD: "yes"
        MYSQL_USER: "homestead"
        MYSQL_PASSWORD: "secret"
    redis:
      image: redis
The above works fine for building the application, running tests, etc. But when I add the step below to deploy, using the scp pipe, I get a notice saying either that I need to include an image or, at times, that there is a bad indentation of a mapping entry.
- step:
    name: Deploy to test
    deployment: test
    # trigger: manual # Uncomment to make this a manual deployment.
      script:
        - pipe: atlassian/scp-deploy:0.3.13
          variables:
            USER: '${remoteUser}'
            SERVER: '${server}'
            REMOTE_PATH: '${remote}'
            LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'
I don't really know YAML, and this is my first time working with a CI/CD tool, so I am lost. Can someone guide me in what I am doing wrong?
Your indentation for name and deployment is not the same as for the script. Try putting it all at the same indentation, like this.
- step:
    name: Deploy to test
    deployment: test
    script:
      - pipe: atlassian/scp-deploy:0.3.13
        variables:
          USER: '${remoteUser}'
          SERVER: '${server}'
          REMOTE_PATH: '${remote}'
          LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'

docker-compose deployment configuration for Circle CI

I am using CircleCI to deploy a microservice to a DigitalOcean droplet and had a few questions about whether my approach is the right one.
My microservice is built using docker-compose and therefore requires a docker-compose.yml file to pull and start the images that constitute it.
In a nutshell, my deployment approach would be:
1. Merging a branch to master kicks off a CircleCI build.
2. CircleCI runs the unit tests.
3. Upon all tests passing, docker-compose build and docker-compose push to Docker Hub.
4. Stop all running images of that service on the remote server.
5. Remove dangling images and local networks.
6. Download the relevant docker-compose.yml, Dockerfile and docker-compose.env files.
7. Pull the images using docker-compose pull.
8. Start the images using docker-compose up.
I am using this configuration in CircleCI:
version: 2.1
jobs:
  build:
    docker:
      - image: "circleci/node:10.16.0"
    steps:
      - checkout
      - run:
          name: Update to latest npm version
          command: "sudo npm install -g npm@latest"
      - restore_cache:
          key: dependency-cache-{{ checksum "package-lock.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - run:
          name: Install `docker-compose`
          command: |
            curl -L https://github.com/docker/compose/releases/download/1.19.0/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
            chmod +x ~/docker-compose
            sudo mv ~/docker-compose /usr/local/bin/docker-compose
      - setup_remote_docker:
          docker_layer_caching: false
      - run:
          name: Build using `docker-compose`
          command: |
            docker-compose build
      - run:
          name: Login for Docker Hub
          command: |
            echo "$DOCKER_PASSWORD" | docker login --username $DOCKER_USERNAME --password-stdin
      - run:
          name: Push to Docker Hub
          command: |
            docker-compose push
      - run: ssh-keyscan $DIGITALOCEAN_HOST >> ~/.ssh/known_hosts
      - add_ssh_keys:
          fingerprints:
            - fo:of:fe:ef:af
      - run:
          name: Remove currently running containers
          command: |
            ssh root@$DIGITALOCEAN_HOST ./deploy_image.sh
I am planning on creating a bash script to handle steps 4 to 8 from my list above.
Is it a good idea to have a script take care of the Docker steps, or is there a better way to have a more "native" CircleCI configuration?
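For reference, here is a rough sketch of what I have in mind for that deploy_image.sh, run on the droplet and covering steps 4 to 8 (the deployment directory and the URL the compose files are fetched from are placeholders, not real values):

#!/usr/bin/env bash
set -euo pipefail

cd /opt/myservice   # placeholder deployment directory

# Step 4: stop all running containers of this service
docker-compose down || true

# Step 5: remove dangling images and unused local networks
docker image prune -f
docker network prune -f

# Step 6: download the relevant compose and env files (placeholder source)
curl -fsSL -o docker-compose.yml https://example.com/deploy/docker-compose.yml
curl -fsSL -o docker-compose.env https://example.com/deploy/docker-compose.env

# Step 7: pull the freshly pushed images
docker-compose pull

# Step 8: start the images in the background
docker-compose up -d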
