Can someone look at my YAML file for code deployment using Bitbucket Pipelines? (Laravel)

This is my first attempt at setting up pipelines, or even using any CI/CD tool. So, after reading the Bitbucket documentation, I added a bitbucket-pipelines.yml file in the root of my Laravel application for a build. Here is the file:
image: php:7.4-fpm

pipelines:
  default:
    - step:
        name: Build and test
        caches:
          - composer
        script:
          - apt-get update && apt-get install -qy git curl libmcrypt-dev mariadb-client ghostscript
          - yes | pecl install mcrypt-1.0.3
          - docker-php-ext-install pdo_mysql bcmath exif
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - ln -f -s .env.pipelines .env
          - php artisan migrate
          - ./vendor/bin/phpunit
        services:
          - mysql
          - redis

definitions:
  services:
    mysql:
      image: mysql:5.7
      environment:
        MYSQL_DATABASE: "laravel-pipeline"
        MYSQL_RANDOM_ROOT_PASSWORD: "yes"
        MYSQL_USER: "homestead"
        MYSQL_PASSWORD: "secret"
    redis:
      image: redis
The above works fine for building the application, running tests, etc. But when I add the step below to deploy using the scp pipe, I get a notice saying either that I need to include an image, or at times that there is a bad indentation of a mapping entry.
- step:
      name: Deploy to test
      deployment: test
      # trigger: manual  # Uncomment to make this a manual deployment.
    script:
      - pipe: atlassian/scp-deploy:0.3.13
        variables:
          USER: '${remoteUser}'
          SERVER: '${server}'
          REMOTE_PATH: '${remote}'
          LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'
I don't really know yaml, and this is my first time working with a CI/CD tool so I am lost. Can someone guide me in what I am doing wrong?

Your indentation for name and deployment is not the same as for script. Try putting it all at the same indentation, like this:
- step:
    name: Deploy to test
    deployment: test
    script:
      - pipe: atlassian/scp-deploy:0.3.13
        variables:
          USER: '${remoteUser}'
          SERVER: '${server}'
          REMOTE_PATH: '${remote}'
          LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'
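For context, here is how the two steps line up when both live under the same default pipeline; misplacing the deploy step at the wrong nesting level is a common source of both notices quoted in the question. A minimal sketch, reusing the names from the question:

pipelines:
  default:
    - step:
        name: Build and test
        # ... caches, script, and services as in the build step above ...
    - step:
        name: Deploy to test
        deployment: test
        script:
          - pipe: atlassian/scp-deploy:0.3.13
            variables:
              USER: '${remoteUser}'
              SERVER: '${server}'
              REMOTE_PATH: '${remote}'
              LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'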

Related

PHP Fatal error on CI/CD run php artisan test

I use docker-compose with Laravel and PostgreSQL, and everything works fine locally. The problem is in the CI/CD.
I have changed the CI/CD YAML file over and over, but I am stuck!
CI/CD workflow file:
name: CI/CD
on:
  pull_request:
    branches: ['master']
  push:
    branches: ['master']
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: shivammathur/setup-php@v2
        with:
          php-version: '7.4'
      - uses: actions/checkout@v2
      - name: Run Containers
        run: docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
      # - name: Run composer install
      #   run: cd companyname_app_dir && composer install
      # - name: Run composer update
      #   run: cd companyname_app_dir && composer update
      # - name: Setup Project
      #   run: |
      #     cd companyname_app_dir
      #     composer update
      #     composer install
      #     php artisan config:clear
      #     php artisan cache:clear
      - name: Run test
        run: cd companyname_app_dir && php artisan test
        env:
          APP_KEY: base64:x06N/IsV5iJ+R6TKlr6sC6Mr4riGgl8Rg09XHHnRZQw=
          APP_ENV: testing
          DB_CONNECTION: companyname-postgres
          DB_DATABASE: db_test
          DB_USERNAME: root
          DB_PASSWORD: 1234
  deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: secret
          password: secret
      - name: Build and push
        uses: docker/build-push-action@v3
        with:
          push: true
          file: ./companyname_app_dir/Dockerfile
          tags: company_image:latest
          build-args: |
            "NODE_ENV=production"
The commented-out steps show things I tried; none of them let the tests run successfully.
docker-compose.yml:
version: '3'
networks:
  companyname_network:
    driver: bridge
services:
  nginx:
    image: nginx:stable-alpine
    container_name: companyname-nginx
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    restart: always
    depends_on:
      - companyname_app
    networks:
      - companyname_network
  companyname_app:
    restart: 'always'
    image: 'companyname_laravel'
    container_name: companyname-app
    build:
      context: .
      dockerfile: ./Dockerfile
    networks:
      - companyname_network
    depends_on:
      - companyname_db
  companyname_db:
    image: 'companyname_multiple_db'
    container_name: companyname-postgres
    build:
      context: .
      dockerfile: ./DockerfileDB
    restart: 'always'
    volumes:
      - local_pgdata:/docker-entrypoint-initdb.d
    environment:
      - POSTGRES_MULTIPLE_DATABASES=db,db_test
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=1234
    ports:
      - 15432:5432
    networks:
      - companyname_network
  companyname_dbadmin:
    image: adminer
    container_name: companyname-dbadmin
    restart: 'always'
    depends_on:
      - companyname_db
    ports:
      - 5051:8080
    networks:
      - companyname_network
volumes:
  local_pgdata:
docker-compose.dev.yml:
version: '3'
services:
  nginx:
    ports:
      - 9000:80
  companyname_app:
    build:
      args:
        - NODE_ENV=development
    volumes:
      - ./companyname_app_dir:/app
      - /app/vendor
With this file, I get an error:
Run cd companyname_app_dir && php artisan test
PHP Warning: require(/home/runner/work/companyname_app/companyname_app/companyname_app_dir/vendor/autoload.php): failed to open stream: No such file or directory in /home/runner/work/companyname_app/companyname_app/companyname_app_dir/artisan on line 18
PHP Fatal error: require(): Failed opening required '/home/runner/work/companyname_app/companyname_app/companyname_app_dir/vendor/autoload.php' (include_path='.:/usr/share/php') in /home/runner/work/companyname_app/companyname_app/companyname_app_dir/artisan on line 18
Error: Process completed with exit code 255.
If I use:
- name: Run composer install
  run: cd companyname_app_dir && composer install
- name: Run composer update
  run: cd companyname_app_dir && composer update
in the CI/CD YAML and remove the Run Containers part, then composer install and composer update finish successfully, but php artisan test throws this error:
postgresql can not connect
1. You must use composer install, otherwise you will have no vendor folder at all, so you have nothing to run. That is why you get an error if you don't run composer install.
2. You should not run composer update, because that updates packages to new versions; you never do that in production. Just run composer install --no-dev.
3. You are mixing running Docker with a command OUTSIDE the Docker container.
Related to point 3: if you are using docker-compose, you cannot execute:
- name: Run test
  run: cd companyname_app_dir && php artisan test
  env:
    APP_KEY: base64:x06N/IsV5iJ+R6TKlr6sC6Mr4riGgl8Rg09XHHnRZQw=
    APP_ENV: testing
    DB_CONNECTION: companyname-postgres
    DB_DATABASE: db_test
    DB_USERNAME: root
    DB_PASSWORD: 1234
because you are outside Docker. You should execute docker-compose exec companyname_app php artisan test, which will run the tests INSIDE the Docker container, where you correctly have everything set up.
So your code (if I am not missing anything) should be:
- name: Run test
  run: docker-compose exec companyname_app php artisan test
  env:
    APP_KEY: base64:x06N/IsV5iJ+R6TKlr6sC6Mr4riGgl8Rg09XHHnRZQw=
    APP_ENV: testing
    DB_CONNECTION: companyname-postgres
    DB_DATABASE: db_test
    DB_USERNAME: root
    DB_PASSWORD: 1234
But I am not certain what you will get back from that execution; if the tests fail, I have no idea whether the CI/CD (I am assuming you are using GitHub Actions or Bitbucket Pipelines) will truly identify that they have failed or not.
What I usually do is just install everything on the CI/CD machine itself, instead of using a Dockerfile or docker-compose YAML. But that is my preference (at least for PHP/Laravel).
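For reference, a minimal sketch of that on-machine approach as a GitHub Actions job. The Postgres service wiring, the pgsql connection name, and the working directory are assumptions for illustration, not taken from the question:

test:
  runs-on: ubuntu-latest
  services:
    postgres:
      image: postgres:13
      env:
        POSTGRES_DB: db_test
        POSTGRES_USER: root
        POSTGRES_PASSWORD: 1234
      ports:
        - 5432:5432
      # wait until the database is ready before the steps run
      options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
  steps:
    - uses: actions/checkout@v2
    - uses: shivammathur/setup-php@v2
      with:
        php-version: '7.4'
    - run: composer install --no-interaction --prefer-dist
      working-directory: companyname_app_dir
    - run: php artisan test
      working-directory: companyname_app_dir
      env:
        APP_KEY: base64:x06N/IsV5iJ+R6TKlr6sC6Mr4riGgl8Rg09XHHnRZQw=
        APP_ENV: testing
        DB_CONNECTION: pgsql
        DB_HOST: 127.0.0.1
        DB_PORT: 5432
        DB_DATABASE: db_test
        DB_USERNAME: root
        DB_PASSWORD: 1234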

docker container failing to start after running install.sh script [duplicate]

This question already has answers here:
Docker-Compose + Command (2 answers)
Closed 9 months ago.
I am using this docker-compose file:
version: '3.8'

# Services
services:

  # Nginx Service
  nginx:
    image: nginx:1.21
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/php
      - ./.docker/nginx/conf.d:/etc/nginx/conf.d
    depends_on:
      - php

  # PHP Service
  php:
    build: ./.docker/php
    working_dir: /var/www/php
    volumes:
      - ./src:/var/www/php
    command: /bin/bash -c "./install.sh"
    depends_on:
      mysql:
        condition: service_healthy

  # MySQL Service
  mysql:
    image: mysql/mysql-server:8.0
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ROOT_HOST: "%"
      MYSQL_DATABASE: demo
    volumes:
      - ./.docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
      - mysqldata:/var/lib/mysql
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u root --password=$$MYSQL_ROOT_PASSWORD
      interval: 2s
      retries: 10

  # PhpMyAdmin Service
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:5
    ports:
      - 8080:80
    environment:
      PMA_HOST: mysql
    depends_on:
      mysql:
        condition: service_healthy

# Volumes
volumes:
  mysqldata:
I am trying to run a bash script (install.sh) after the container is created, to run apt-get update, install wget, etc., but the php container fails when I try to run it.
My bash script is:
#!/bin/bash
mkdir testdir && apt-get update && apt-get install wget -y
(this file is here: ./src/install.sh)
It creates the folder correctly, and the logs suggest it is trying to install wget (but it never seems to finish), yet the container never starts correctly.
If I remove the command: /bin/bash -c "./install.sh" line, everything works correctly (but wget is not installed).
I have tried moving the command to a Dockerfile as a RUN command, but it never seems to run.
Any ideas why this is happening?
Thanks
As Hans Kilian said in the comments, a docker-compose command: replaces anything set by CMD or ENTRYPOINT. Those commands are what keep the container running, so your container never does anything more than install wget and then exits.
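A common workaround, sketched here under the assumption that the image's normal foreground process is php-fpm, is to chain the script and the main process in the command, so the container still ends up with a long-running process:

# docker-compose.yml (sketch): run the script, then hand off to the
# image's normal foreground process so the container keeps running
php:
  build: ./.docker/php
  working_dir: /var/www/php
  volumes:
    - ./src:/var/www/php
  command: /bin/bash -c "./install.sh && php-fpm"
  depends_on:
    mysql:
      condition: service_healthy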
You appear to be trying to run a file at "./install.sh", which is not an absolute path. Try running the command using the absolute path of the file; in my experience, Dockerfiles do not carry a directory change over from one command to the next, so:
RUN cd /xyz
RUN /bin/bash -c "./install.sh"
does not have the same result as
RUN /bin/bash -c "/xyz/install.sh"
(where /xyz is the directory where install.sh is located)
Additionally, make sure the file is marked as executable with chmod when it is copied into your container.
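If you would rather not repeat absolute paths, WORKDIR (unlike a RUN cd) persists across subsequent instructions. A minimal Dockerfile sketch; the base image tag and paths are assumptions for illustration:

FROM php:8.1-fpm
# WORKDIR applies to every following RUN/CMD, unlike a `RUN cd ...`
WORKDIR /var/www/php
COPY src/install.sh .
RUN chmod +x install.sh && ./install.sh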
However, if all you want to do is create a directory and install wget, I would simply do this in the Dockerfile:
RUN mkdir testdir
RUN apt-get update && apt-get install -y wget

How to run db migration when Laravel container is up?

I have a problem with running the DB migration when the container comes up.
Problems:
- can't set the app key, because gitlab-ci didn't copy the .env file (I get an error in the GitLab CI console), so setting the key needs to happen later
- running the migration with wait-for-it, because the container exits with success code 0 (migrations are up)
I will put code only for my db and web container.
db:
  container_name: db
  image: mysql:5.7.22
  restart: unless-stopped
  environment:
    MYSQL_DATABASE: ${DB_DATABASE}
    MYSQL_USER: ${DB_USERNAME}
    MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
    MYSQL_PASSWORD: ${DB_PASSWORD}
  ports:
    - 3306:3306
  volumes:
    - mysql-data:/var/lib/mysql
  networks:
    - my-network

backend:
  image: registry image
  container_name: "backend"
  build:
    context: ./backend
    dockerfile: Dockerfile
  depends_on:
    - db
  ports:
    - 1020:80
  networks:
    - my-network
.gitlab-ci.yml:
build-backend:
  tags:
    - vps
  variables:
    GIT_CLEAN_FLAGS: none
  stage: dockerize
  image: docker:latest
  services:
    - docker:dind
  dependencies: []
  script:
    - docker build -t backend backend
    - cp .env ./backend/.env
    - cd backend
    - docker build -t $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_BRANCH .
    - docker tag $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_BRANCH $CI_REGISTRY_IMAGE/frontend:$CI_COMMIT_REF_NAME
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker push $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_REF_NAME
  # ...deploying code
I'm using https://github.com/vishnubob/wait-for-it
Dockerfile:
FROM webdevops/php-nginx:7.4-alpine

# Install Laravel framework system requirements (https://laravel.com/docs/8.x/deployment#optimizing-configuration-loading)
RUN apk add oniguruma-dev postgresql-dev libxml2-dev
RUN docker-php-ext-install \
    bcmath \
    ctype \
    fileinfo \
    json \
    mbstring \
    pdo_mysql \
    pdo_pgsql \
    tokenizer \
    xml

# Copy Composer binary from the official Composer Docker image
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer

ENV WEB_DOCUMENT_ROOT /var/www/html/public
ENV APP_ENV production
WORKDIR /var/www/html
COPY . .

RUN composer install --no-interaction --optimize-autoloader
RUN chown -R root:root .
RUN chmod -R ugo+rw storage
RUN chmod 777 wait-for-it.sh
RUN chmod 777 migrate.sh

EXPOSE 80
CMD ["./wait-for-it.sh", "db:3306", "--", "./migrate.sh"]
migrate.sh:
#!/bin/sh
php artisan key:generate
# Optimizing configuration loading
php artisan config:cache
# Optimizing route loading
php artisan route:cache
# Optimizing view loading
php artisan view:cache
echo "finished caches"
php artisan migrate --force &
exec "$@"
So how can I solve the exit code 0 problem? I mean to say, how do I prevent the container from stopping?
Thanks
The solution is to use Supervisor, which keeps all the jobs running in the background and won't close your container while running migrations.
I can't believe how perfectly the configuration from this repo works; it saved me so much time. I spent more than 7 days searching for the best solution, and the person who posted this saved me!
Please refer to this repo: https://github.com/harshalone/laravel-9-production-ready
I didn't have to change a line of code, and I hope you don't have to either. It simply works!
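The idea, roughly: supervisord runs as the container's foreground process and owns both the web server and one-off jobs, so the container doesn't exit when a job finishes. A minimal sketch of such a config; the program names and commands are illustrative assumptions, not taken from the linked repo:

; supervisord.conf (sketch)
[supervisord]
nodaemon=true                ; stay in the foreground so the container keeps running

[program:php-fpm]
command=php-fpm -F           ; the long-running main process
autorestart=true

[program:migrate]
command=php artisan migrate --force
directory=/var/www/html
autorestart=false            ; run once; supervisord stays up regardless
startretries=0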
For anyone struggling with running migrations because the database isn't up before the Laravel application: I just want to mention that I used the wait-for-it script and changed the last line of my Dockerfile like this:
CMD ["/var/www/docker/wait-for-it.sh", "db:3306", "--", "/var/www/docker/run.sh"]
So now my migrations will first wait for the database to be up and running.
Just put wait-for-it.sh inside your docker folder, or use it from GitHub directly.
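run.sh itself isn't shown here; a hypothetical sketch of what such a script typically does (the final exec target depends on your base image and is an assumption):

#!/bin/sh
# run.sh (hypothetical): wait-for-it only hands off to us once db:3306 is reachable
php artisan migrate --force
# hand off to the image's normal foreground process so the container stays up
exec php-fpm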

docker compose and swarm for laravel app in production

Everyone, I am confused. I am new to the DevOps world and I have no idea how to use docker-compose or Swarm in production; I mean, what are the best practices in production for both? I followed this DigitalOcean article: How To Install and Set Up Laravel with Docker Compose on Ubuntu 20.04.
Everything works like a charm in local, test, and dev environments. I tried to take this to the next step, a production environment, and I noticed some things should be changed for production mode, like so:
1. Removing any volume bindings for application code, so that code stays inside the container and can't be changed from outside.
2. Binding to different ports on the host. Check the link for more info: use compose in production.
I don't know how to achieve point #1.
Here's my Dockerfile below, used to build my custom Laravel image, and the docker-compose file for my services:
FROM php:7.4-fpm
# Arguments defined in docker-compose.yml
ARG user
ARG uid
# Install system dependencies
RUN apt-get update && apt-get install -y \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Download php extension installer
ADD https://github.com/mlocati/docker-php-extension-installer/releases/latest/download/install-php-extensions /usr/local/bin/
# Give php extension installer a permission
RUN chmod +x /usr/local/bin/install-php-extensions
# Install php extensions via php extension installer
RUN install-php-extensions zip
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Set working directory
WORKDIR /var/www/html
USER $user
version: "3.7"
services:
app:
build:
args:
user: sammy
uid: 1000
context: ./
dockerfile: Dockerfile
image: app
container_name: app
restart: always
working_dir: /var/www/html
volumes:
- ./:/var/www/html
networks:
- backend
db:
image: mysql:8.0
container_name: db
restart: always
environment:
MYSQL_DATABASE: ROUTE
MYSQL_ROOT_PASSWORD: 2020
MYSQL_PASSWORD: 2020
MYSQL_USER: sqluser
volumes:
- db:/var/lib/mysql
networks:
- backend
nginx:
image: nginx:1.21.6
container_name: nginx
restart: always
ports:
- 8000:80
networks:
- backend
volumes:
- ./:/var/www/html
- ./docker-compose/nginx:/etc/nginx/conf.d/
phpmyadmin:
image: phpmyadmin
container_name: pma
restart: always
ports:
- 8283:80
environment:
PMA_HOSTS: db
PMA_ARBITRARY: 1
PMA_USER: sqluser
PMA_PASSWORD: 2020
networks:
- backend
networks:
backend:
driver: bridge
volumes:
db:
Note:
In the Nginx service, I created two shared volumes. The first one synchronizes the contents of the current directory to /var/www/html inside the container. This way, when you make local changes to the application files, they are quickly reflected in the application being served by Nginx inside the container (which is not good for production). The second volume makes sure my Nginx configuration file, located at docker-compose/nginx/, is copied to the container's Nginx configuration folder.
I tried to remove the first volume but keep the second one, to use my custom configuration, but it did not work at all. Why?
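For what it's worth, point #1 usually comes down to baking the code into the image (a COPY . /var/www/html step in the Dockerfile, which the Dockerfile above doesn't have, so dropping the bind mount leaves the container with no code to serve) and keeping the dev-only bind mounts in docker-compose.override.yml, which Compose merges automatically in development but which you skip in production. A sketch, assuming that file layout:

# docker-compose.override.yml (dev only; merged automatically by `docker-compose up`)
services:
  app:
    volumes:
      - ./:/var/www/html
  nginx:
    volumes:
      - ./:/var/www/html

# In production, run only the base file so no code bind mounts are applied:
#   docker-compose -f docker-compose.yml up -d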

Unable to start MySQL service in docker during gitlab-ci

I have the following .gitlab-ci.yml, taken from the Laravel Dusk CI example:
stages:
  - build
  - test

# Variables
variables:
  MYSQL_ROOT_PASSWORD: root
  MYSQL_USER: root
  MYSQL_PASSWORD: secret
  MYSQL_DATABASE: test
  DB_HOST: mysql
  DB_CONNECTION: mysql

build:
  stage: build
  services:
    - mysql:5.7
  image: chilio/laravel-dusk-ci:stable
  script:
    - composer install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts
    # - npm install # if you need to install additional modules from your project's package.json
    # - npm run dev # if you need to run dev scripts, for example Laravel Mix
  cache:
    key: ${CI_COMMIT_REF_NAME}
    paths:
      # these are only examples; you should modify them according to your project,
      # or remove cache routines entirely if they cause problems on your next builds.
      # below are 2 safe ones if you use composer install and npm install in your stage script
      - vendor
      - node_modules
      # - /resources/assets/vendors # for example, if you put your vendor node-libraries there

test:
  stage: test
  cache:
    key: ${CI_COMMIT_REF_NAME}
    paths:
      - vendor
      - node_modules
    policy: pull
  services:
    - mysql:5.7
  image: chilio/laravel-dusk-ci:stable
  script:
    - cp .env.example .env
    # - cp phpunit.xml.ci phpunit.xml # if you are using a custom config for your phpunit tests in CI
    - configure-laravel
    - start-nginx-ci-project
    - ./vendor/phpunit/phpunit/phpunit -v --coverage-text --colors --stderr
    # - phpunit -v --coverage-text --colors --stderr # if you want to use the preinstalled phpunit
    - php artisan dusk --colors --debug
  artifacts:
    paths:
      - ./storage/logs # for debugging
      - ./tests/Browser/screenshots
      - ./tests/Browser/console
    expire_in: 7 days
    when: always
However, when the runner executes the job, I keep getting the following warning:
Using Docker executor with image chilio/laravel-dusk-ci:stable ...
Starting service mysql:5.7 ...
Pulling docker image mysql:5.7 ...
Using docker image sha256:66bc0f66b7af6ba3ea96582685d3afcd6dff93c2f8999da0ffadd67b280db548 for mysql:5.7 ...
Waiting for services to be up and running...
*** WARNING: Service runner-237f18d2-project-23-concurrent-0-mysql-0 probably didn't start properly.
Health check error:
ContainerStart: Error response from daemon: Cannot link to a non running container: /runner-237f18d2-project-23-concurrent-0-mysql-0 AS /runner-237f18d2-project-23-concurrent-0-mysql-0-wait-for-service/service
Service container logs:
2018-07-11T19:49:03.214991318Z
2018-07-11T19:49:03.215062485Z ERROR: mysqld failed while attempting to check config
2018-07-11T19:49:03.215067480Z command was: "mysqld --verbose --help"
2018-07-11T19:49:03.215070774Z
2018-07-11T19:49:03.215073778Z mysqld: error while loading shared libraries: libpthread.so.0: cannot stat shared object: Permission denied
I've tried to set the runner to privileged in the config.toml:
privileged = true
To solve the question:
mysqld: error while loading shared libraries: libpthread.so.0: cannot stat shared object: Permission denied
Step 1: update your software and kernel (maybe):
apt-get update && apt-get upgrade
Step 2: install the Docker dependency packages:
(Ubuntu/Debian): apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
(CentOS/RedHat): yum install yum-utils device-mapper-persistent-data lvm2
Step 3: reboot your server and restart docker-ce:
reboot
systemctl restart docker-ce
