How to deploy a Laravel app on DigitalOcean after a successful test pass on GitLab CI - laravel

I have managed to successfully run a Laravel test build on GitLab CI using a GitLab Runner on DigitalOcean (with help from the tutorial HOW TO: LARAVEL TESTING ON GITLAB CI WITH DOCKER).
Now I am wondering how I can deploy the app after the tests pass.
This is my deploy process on my staging env:
cd MY_PROJECT_ROOT_DIR
git reset --hard HEAD
git checkout master
git pull
composer install
composer update
php artisan migrate
php artisan db:seed
How can I run this deploy process after the tests are done?
My GitLab Runner configuration is the same as the files in this repo.
This is the content of my .gitlab-ci.yml file:
before_script:
  - bash .gitlab-ci.sh

variables:
  MYSQL_DATABASE: laravel
  MYSQL_ROOT_PASSWORD: secret

phpunit:php-laravel-env:mysql:
  image: woohuiren/php-laravel-env:latest
  services:
    - mysql:latest
  script:
    - php vendor/bin/phpunit --colors
How should I change this file so that the deploy script is executed after the tests pass?

You need to use "stages".
Basically you would update your current test setup to include a stage.
stages:
  - test
  - deploy

phpunit:php-laravel-env:mysql:
  stage: test
  image: woohuiren/php-laravel-env:latest
  services: ...

deploy_my_site:
  stage: deploy
  ...
GitLab runs these stages in sequence and stops if any job fails.
If you are using Forge you could use the deploy stage to trigger a script to curl the forge deploy hook.
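If you are deploying straight to a DigitalOcean droplet instead, the deploy job can simply SSH in and run the same commands listed in the question. Here is a minimal sketch, assuming an SSH_PRIVATE_KEY CI variable has been added in GitLab and that deploy@your-droplet.example.com is a placeholder for your real user and host:
deploy_my_site:
  stage: deploy
  only:
    - master
  before_script:
    # Load the deploy key from the SSH_PRIVATE_KEY CI variable (assumed to be set).
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan your-droplet.example.com >> ~/.ssh/known_hosts
  script:
    # Run the deploy steps from the question on the droplet; --force lets
    # artisan run non-interactively in CI.
    - ssh deploy@your-droplet.example.com "cd MY_PROJECT_ROOT_DIR && git reset --hard HEAD && git checkout master && git pull && composer install && php artisan migrate --force && php artisan db:seed --force"
Because deploy is a later stage, this job only runs once every job in the test stage has passed.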

Related

Laravel Vapor Docker Runtime with GitLab CI does not want to work

I use Laravel Vapor for deploying our microservices based on Laravel. This works very well so far, as long as the app with its dependencies is not too large; when it is, things get a little tricky.
Vapor provides a Docker runtime for this case, where you can deploy apps up to 10 GB in size.
For local development we usually use Laradock.io because it's easy and flexible.
That means when we deploy from our local environment, it's easy to enter the workspace container and run the Vapor deploy commands. After enabling the Docker client for the workspace container, it works properly with the Vapor Docker runtime.
But now we have integrated the deployment process into the GitLab CI pipeline. That works very well for our small services with the Vapor PHP runtime.
But for the Docker runtime, I'm stuck on the CI deployment.
The Docker runtime needs a running Docker instance where Vapor will be invoked. That means in the .gitlab-ci.yml I have to add an image with Docker and PHP installed to invoke the Vapor scripts.
So I created a Docker image based on the Laradock workspace container, but the GitLab runner always exits with the error message that no Docker daemon is available.
This is the relevant part of my .gitlab-ci.yml (the image is only available locally):
testing:
  image:
    name: lexitaldev/vapor-docker-deploy:latest
    pull_policy: never
  securityContext:
    privileged: true
  environment: testing
  stage: deploy
  only:
    - test
  script:
    - composer install
    - php vendor/bin/vapor deploy test
This is the specific output:
Error Output:
================
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the
docker daemon running?
I've tried using the standard 'laravelphp/vapor:php80' image and installing Docker in the before_script section as well:
before_script:
  - apk add docker
  - addgroup root docker
But nothing helped. There seems to be a problem with the docker.sock.
Has anybody managed to add Vapor Docker runtime deployment to their CI scripts?
Best,
Michael
You only need to add the dind service, but after you do that it will throw an error related to the image GitLab creates for your pipelines. So you need to register a runner with volumes, the privileged flag, and tags.
I did it using gitlab-runner on my machine:
sudo gitlab-runner register -n \
  --url {{ your_url }} \
  --registration-token {{your_token}} \
  --executor docker \
  --description "{{ Describe your runner }}" \
  --docker-image "docker:20.10.12-alpine3.15" \
  --docker-privileged \
  --docker-volumes="/certs/client" \
  --docker-volumes="cache" \
  --docker-volumes="/var/run/docker.sock:/var/run/docker.sock" \
  --tag-list {{ a_tag_for_your_pipeline }}
Once you have done that, you need to use a stable Docker version in your .gitlab-ci.yml file. For some reason it didn't work when I tried version 20 or latest:
image: docker:stable

services:
  - name: docker:stable-dind

before_script:
  - echo $CI_JOB_TOKEN | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin

build:
  tags:
    - {{the tag you defined in your runner}}
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  script:
    - echo $IMAGE_TAG
    - docker build -t $IMAGE_TAG -f {{your Dockerfile}} .
    - docker push $IMAGE_TAG
All of these variables are predefined by GitLab, so don't worry, you can copy and paste. I also included the advice GitLab gives in its documentation for registering your Docker image in the GitLab Container Registry.
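Applied back to the original Vapor problem, the same runner and dind setup could drive the Vapor deploy job itself. This is only a sketch: it reuses the custom image and runner tag from above and assumes that image contains PHP, Composer, and the Docker client:
deploy_test:
  tags:
    - {{the tag you defined in your runner}}
  image: lexitaldev/vapor-docker-deploy:latest
  services:
    - name: docker:stable-dind
  stage: deploy
  only:
    - test
  script:
    - composer install
    # Vapor builds the image through the Docker daemon exposed by the
    # dind service / the mounted /var/run/docker.sock.
    - php vendor/bin/vapor deploy test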

Getting a SQLSTATE[HY000] [2002] Connection refused error in a GitHub Action?

This is what my workflow looks like; when it reaches php artisan test I get the error:
name: Continuous Integration
on:
  push
jobs:
  laravel-tests:
    runs-on: ubuntu-20.04
    steps:
      - name: create migration
        env:
          DB_CONNECTION: sqlite
          DB_DATABASE: database/database.sqlite
        run: php artisan migrate --seed
      - name: run tests
        run: php artisan test
According to the documentation on steps:
Each step runs in its own process in the runner environment and has access to the workspace and filesystem. Because steps run in their own process, changes to environment variables are not preserved between steps.
So you'll need to provide the environment variables for the entire job, as described in the documentation for environment variables and the jobs.<job_id>.env keyword:
name: Continuous Integration
on:
  push
jobs:
  laravel-tests:
    env:
      DB_CONNECTION: sqlite
      DB_DATABASE: database/database.sqlite
    runs-on: ubuntu-20.04
    steps:
      ...

heroku.yml not respecting docker build target

Either I'm doing something wrong or Heroku is messing up. Heroku supports targeting a particular stage in a Dockerfile. I have a multistage Dockerfile but Heroku is not respecting the build.docker.release.target in my heroku.yml. For what it's worth, targeting works fine with docker-compose.yml.
I'm trying to keep dev and prod in the same Dockerfile. Essentially dev and prod are forked from base. I could flesh it out more, but the stages are:
FROM python:3.10.0-slim-buster AS venv
...

FROM python:3.10.0-slim-buster AS base
...

FROM base AS dev
...
ENTRYPOINT ["entrypoint.dev.sh"]

FROM base AS prod
...
ENTRYPOINT ["entrypoint.prod.sh"]
My heroku.yml specifically targets the prod stage:
setup:
  addons:
    - plan: heroku-postgresql
      as: DATABASE
build:
  docker:
    release:
      dockerfile: image/app/Dockerfile
      target: prod
    web: image/app/Dockerfile
  config:
    DJANGO_ENV: production
release:
  image: web
  command:
    - ./deployment-tasks.sh
run:
  web: gunicorn server.wsgi:application --bind 0.0.0.0:$PORT --log-level debug --access-logfile - --error-logfile -
However, Heroku builds all the stages; it seems to just run down the Dockerfile to the end. The Heroku build logs show that dev follows base, and then prod follows dev.
I would expect it to jump from base to prod, skipping dev.
Is this an issue on my side or Heroku's?
I haven't tested this with heroku.yml because I've since moved to GitHub Actions, but I believe the error was having prod come after dev. Apparently the --target flag in docker build means the build stops at that stage, so it still runs everything that appears before it in the file.
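If that is right, reordering the stages so that prod directly follows base should keep the dev stage out of the build. A sketch of the reordered Dockerfile from the question, assuming Heroku uses the classic builder (BuildKit would prune unused stages on its own):
FROM python:3.10.0-slim-buster AS venv
...

FROM python:3.10.0-slim-buster AS base
...

# prod now comes before dev, so building with --target prod stops here
# and never reaches the dev stage.
FROM base AS prod
...
ENTRYPOINT ["entrypoint.prod.sh"]

FROM base AS dev
...
ENTRYPOINT ["entrypoint.dev.sh"]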

GitLab runner with Cassandra gitlab-ci.yml configuration

I am trying to use a GitLab runner to run a Maven project's integration tests, which need a Cassandra database. I am not sure how to write the gitlab-ci.yml file. At the moment this is what I have:
stages:
  - test

test_job:
  stage: test
  script: "mvn clean verify -DlocalIntegrationTests=true"
  when: on_success
  except:
    - production
Cassandra doesn't start up. How do I change the file so that Cassandra starts up?
You can run Cassandra as a service and connect to it from your test stage:
services:
  - cassandra
Here you will find how to access the service.
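Put together with the job above, it might look like this; note that -Dcassandra.host is a hypothetical system property, so wire the hostname cassandra (the service alias) into whatever contact-point setting your integration tests actually read:
stages:
  - test

test_job:
  stage: test
  services:
    - cassandra:latest
  script:
    # The Cassandra service is reachable under the hostname "cassandra".
    # -Dcassandra.host is a hypothetical property; point your tests'
    # contact point at it however your configuration expects.
    - mvn clean verify -DlocalIntegrationTests=true -Dcassandra.host=cassandra
  when: on_success
  except:
    - production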

deploy docker to heroku without using heroku docker plugin

Say I'm working on a web project that runs a gitlab-ci shell runner on my own CI server to build Docker images and deploy them to Heroku. I've gone through docs from both GitLab and Heroku, like gitlab-ci: using docker build and Heroku: Build and Deploy with Docker. Can I deploy the Docker project without using the Heroku Docker plugin, which seems not very flexible to me? However I tried, the following approach succeeded in deploying to Heroku, but the app crashes. The Heroku logs say the start script is missing in package.json, but since I'm deploying a Docker project, I couldn't put "start": "docker-compose up" there, could I?
#.gitlab-ci.yml
stages:
  - deploy

before_script:
  - npm install
  - bower install

dev:
  stage: deploy
  script:
    - docker-compose run nginx run-test
    - gem install dpl
    - dpl --provider=heroku --app=xixi-web-dev --api-key=$HEROKU_API_KEY
  only:
    - dev
# docker-compose.yml
app:
  build: .
  volumes:
    - .:/code:ro
  expose:
    - "3000"
  working_dir: /code
  command: pm2 start app.dev.json5

nginx:
  build: ./setup/nginx
  restart: always
  volumes:
    - ./setup/nginx/sites-enabled:/etc/nginx/sites-enabled:ro
    - ./dist:/var/app:ro
  ports:
    - "$PORT:80"
  links:
    - app
I don't want to use the Heroku Docker plugin because it seems less flexible: I can't create an app.json, since I don't want to use an existing Docker image for my app. Instead, I define custom Dockerfiles for the app and nginx services used in docker-compose.yml.
Now it seems that Heroku won't detect my project as a Docker project unless I deploy it using the Heroku Docker plugin, but as I mentioned above, I can't do that. Is there any documentation on Heroku or GitLab that I'm missing which could help me out? Or do you have any ideas that might help? Thanks a lot!
OK, it seems that heroku docker:release is required. I ended up installing the Heroku CLI and the Heroku Docker plugin on my CI server and using heroku docker:release --app app to release my app.
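For reference, the resulting deploy job looked roughly like this. It is only a sketch: it assumes the Heroku CLI and the Docker plugin are already installed on the shell-runner host and that HEROKU_API_KEY (or an equivalent heroku login) is configured there:
dev:
  stage: deploy
  script:
    - docker-compose run nginx run-test
    # Builds the image and releases it to the Heroku app in one step;
    # auth is assumed to already be configured on the runner host.
    - heroku docker:release --app xixi-web-dev
  only:
    - dev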
