Laravel Vapor on AWS CodePipeline - laravel

Laravel Vapor is built entirely on the AWS platform, but it does not use AWS CodePipeline to deploy code. Has anyone tried AWS CodePipeline to deploy Vapor projects?
I can deploy an Ubuntu server, install the required PHP extensions, and run the vapor deploy staging command in AWS CodeDeploy. I am wondering whether there is a better way to deploy Laravel Vapor.

Finally, I made the changes below in the buildspec.yml file and used the Ubuntu build image.
phases:
  install:
    commands:
      # start a Docker daemon inside the build environment so the Vapor image can run
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2 &
      - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
      #- command
      #- echo Logging in to Amazon ECR
  pre_build:
    commands:
      # run the Vapor CLI container against the checked-out source and deploy staging
      - docker run --name myvapor -d -e VAPOR_API_TOKEN=MY_VAPOR_API_TOKEN --volume $(pwd):/~ --workdir /~ teamnovu/laravel-vapor
      - docker exec myvapor vapor deploy staging
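Rather than hardcoding MY_VAPOR_API_TOKEN in the buildspec, CodeBuild can inject it at runtime from Secrets Manager; a minimal sketch, assuming the token is stored under a made-up secret name vapor/deploy:
version: 0.2

env:
  secrets-manager:
    # "secret-id:json-key" - both names here are placeholders for your own Secrets Manager entry
    VAPOR_API_TOKEN: "vapor/deploy:VAPOR_API_TOKEN"

phases:
  install:
    commands:
      # same Docker daemon startup as above
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --storage-driver=overlay2 &
      - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
  pre_build:
    commands:
      # passing -e VAPOR_API_TOKEN without a value forwards it from the build environment
      - docker run --name myvapor -d -e VAPOR_API_TOKEN --volume $(pwd):/~ --workdir /~ teamnovu/laravel-vapor
      - docker exec myvapor vapor deploy staging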

Related

Laravel Vapor Docker runtime with GitLab CI does not work

I use Laravel Vapor to deploy our Laravel-based microservices. This works very well as long as an app and its dependencies are not too large; when they are, it gets a little tricky.
Vapor provides a Docker runtime for this case, which lets you deploy apps up to 10 GB in size.
For local development we usually use Laradock.io because it's easy and flexible.
That means when we deploy from our local environment, it is easy to enter the workspace container and run the vapor deploy commands. After enabling the Docker client for the workspace container, it works properly with the Vapor Docker runtime.
But now we have integrated the deployment process into a GitLab CI pipeline. That works very well for our small services with the Vapor PHP runtime.
But for the Docker runtime I am stuck on the CI deployment.
The Docker runtime needs a running Docker daemon on the machine where Vapor is invoked. That means in the .gitlab-ci.yml I have to use an image with Docker and PHP installed in order to invoke the Vapor scripts.
So I created a Docker image based on the Laradock workspace container, but the GitLab runner always exits with the error message that no Docker daemon is available.
This is the relevant part of my .gitlab-ci.yml (the image is only available locally):
testing:
  image:
    name: lexitaldev/vapor-docker-deploy:latest
    pull_policy: never
  securityContext:
    privileged: true
  environment: testing
  stage: deploy
  only:
    - test
  script:
    - composer install
    - php vendor/bin/vapor deploy test
This is the specific output:
Error Output:
================
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the
docker daemon running?
I've also tried to use the standard 'laravelphp/vapor:php80' image and install Docker via the before_script section.
before_script:
  - apk add docker
  - addgroup root docker
But nothing helped. There seems to be a problem with the docker.sock.
Has anybody managed to add Vapor Docker runtime deployment to their CI scripts?
Best,
Michael
I would like to tell you that you only need to add the service dind, but after you do that it will throw an error related to the image that GitLab creates for your pipelines. So you need to register a runner with volumes, the privileged flag, and tags.
I did it using gitlab-runner on my machine.
sudo gitlab-runner register -n \
  --url {{ your_url }} \
  --registration-token {{ your_token }} \
  --executor docker \
  --description "{{ Describe your runner }}" \
  --docker-image "docker:20.10.12-alpine3.15" \
  --docker-privileged \
  --docker-volumes="/certs/client" \
  --docker-volumes="cache" \
  --docker-volumes="/var/run/docker.sock:/var/run/docker.sock" \
  --tag-list {{ a_tag_for_your_pipeline }}
Once you have done that, use a stable Docker image in your .gitlab-ci.yml file. For some reason it didn't work when I tried version 20 or latest.
image: docker:stable

services:
  - name: docker:stable-dind

before_script:
  - echo $CI_JOB_TOKEN | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin

build:
  tags:
    - {{the tag you defined in your runner}}
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  script:
    - echo $IMAGE_TAG
    - docker build -t $CI_REGISTRY_IMAGE -f {{your Dockerfile}} .
    - docker push $CI_REGISTRY_IMAGE
All the variables are predefined by GitLab, so don't worry, you can copy and paste. I have also added some of the advice GitLab gives in its documentation for pushing your Docker image to the GitLab Container Registry.
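Building on that, the Vapor deployment itself can then run on the same privileged runner; a minimal sketch, assuming an image that ships PHP, Composer and the Docker CLI, and a VAPOR_API_TOKEN CI/CD variable (the job name and image here are illustrative, not from the answer above):
deploy-test:
  tags:
    - {{the tag you defined in your runner}}
  # any image with PHP, Composer and the Docker client works here;
  # the host Docker socket is available through the runner's volume mount
  image: lexitaldev/vapor-docker-deploy:latest
  stage: deploy
  only:
    - test
  script:
    - composer install
    # VAPOR_API_TOKEN must be defined as a CI/CD variable
    - php vendor/bin/vapor deploy test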

docker-compose deployment configuration for Circle CI

I am using CircleCI to deploy a microservice to a DigitalOcean droplet and had a few questions about whether my approach is the right one.
My microservice is built using docker-compose, and therefore requires a docker-compose.yml file to pull and start the images that constitute it.
In a nutshell, my deployment approach would be:
1. Merging a branch to master kicks off a CircleCI build.
2. CircleCI runs the unit tests.
3. Once all tests pass, docker-compose build and docker-compose push to Docker Hub.
4. Stop all running images of that service on the remote server.
5. Remove dangling images and local networks.
6. Download the relevant docker-compose.yml, Dockerfile and docker-compose.env files.
7. Pull using docker-compose pull.
8. Start the images using docker-compose up.
I am using this configuration in CircleCI:
version: 2.1
jobs:
  build:
    docker:
      - image: "circleci/node:10.16.0"
    steps:
      - checkout
      - run:
          name: Update to latest npm version
          command: "sudo npm install -g npm@latest"
      - restore_cache:
          key: dependency-cache-{{ checksum "package-lock.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - run:
          name: Install `docker-compose`
          command: |
            curl -L https://github.com/docker/compose/releases/download/1.19.0/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
            chmod +x ~/docker-compose
            sudo mv ~/docker-compose /usr/local/bin/docker-compose
      - setup_remote_docker:
          docker_layer_caching: false
      - run:
          name: Build using `docker-compose`
          command: |
            docker-compose build
      - run:
          name: Login for Docker Hub
          command: |
            echo "$DOCKER_PASSWORD" | docker login --username $DOCKER_USERNAME --password-stdin
      - run:
          name: Push to Docker Hub
          command: |
            docker-compose push
      - run: ssh-keyscan $DIGITALOCEAN_HOST >> ~/.ssh/known_hosts
      - add_ssh_keys:
          fingerprints:
            - "fo:of:fe:ef:af"
      - run:
          name: Remove currently running containers
          command: |
            ssh root@$DIGITALOCEAN_HOST ./deploy_image.sh
I am planning on creating a bash script to handle steps 4 to 8 from my list above.
Is it a good idea to have a script take care of the Docker steps?
Or is there a better way to have a more "native" CircleCI configuration?
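For reference, a hypothetical deploy_image.sh covering steps 4 to 8 might look roughly like this (the directory, file locations, and download step are assumptions, not taken from the question):
#!/bin/sh
set -e
cd /opt/myservice                       # assumed location of the compose files on the droplet

# step 4: stop all running containers of this service
docker-compose down

# step 5: remove dangling images and unused networks
docker image prune -f
docker network prune -f

# step 6: fetch the current deployment files from wherever they are published
# curl -fsSL https://example.com/deploy/docker-compose.yml -o docker-compose.yml

# step 7: pull the freshly pushed images
docker-compose pull

# step 8: start everything again in the background
docker-compose up -d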

How to auto deploy Docker Image on own server with GitLab?

I have been trying to Google this for a few hours, but can't find it.
I have Java/Spring application (+MySQL if it matters) and I am looking to create CI for that.
I know what to do and how:
I know that I have to move my Git repo to GitLab.
A push to the repo will trigger the CI script.
GitLab will build my Docker image and push it into the GitLab Docker Registry.
Question is:
What do I have to do to force docker compose on my VPS to pull the new image from Gitlab and restart the server?
I know (correct me if I am wrong) that on my VPS I should run docker-compose pull && docker-compose up inside my app folder, but I have literally no idea how to make that happen automatically with GitLab.
What do I have to do to force docker compose on my VPS to pull the new image from Gitlab and restart the server?
@m-uu, you don't need to restart the server at all; just run docker-compose up to pull the new image and restart the service.
I know (correct me if I am wrong) that on my VPS I should run docker-compose pull && docker-compose up inside my app folder, but I have literally no idea how to make it automatically with Gitlab?
Yes, you are on the right track. Look at my GitLab CI configuration file below; I don't think it is difficult to adapt it for a Java project. It just gives you an idea of how to build, push to your registry, and deploy an image to your server. One thing you need to do is generate SSH keys, push the public key to the server (.ssh/authorized_keys), and add the private key as a GitLab pipeline secret variable (https://docs.gitlab.com/ee/ci/variables/#secret-variables).
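Generating and installing such a key pair could look roughly like this (file names are illustrative; the private key goes into the SSH_DEV_PRIVATE_KEY secret variable):
# generate a dedicated deploy key without a passphrase so CI can use it non-interactively
ssh-keygen -t rsa -b 4096 -f deploy_key -N ""
# install the public half on the VPS
ssh-copy-id -i deploy_key.pub root@SERVER_IP
# then paste the contents of deploy_key into the secret variable in the GitLab UI
And here is the configuration file itself: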
cache:
  key: "cache"
  paths:
    - junte-api

stages:
  - build
  - build_image
  - deploy

build:
  image: golang:1.7
  stage: build
  script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" > ~/key && chmod 600 ~/key
    - ssh-add ~/key
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - go get -u github.com/kardianos/govendor
    - mkdir -p $GOPATH/src/github.com/junte/junte-api
    - mv * $GOPATH/src/github.com/junte/junte-api
    - cd $GOPATH/src/github.com/junte/junte-api
    - govendor sync
    - go build -o junte-api
    - cd -
    - cp $GOPATH/src/github.com/junte/junte-api .

build_image:
  image: docker:latest
  stage: build_image
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE .
    - docker push $CI_REGISTRY_IMAGE

deploy-dev:
  stage: deploy
  image: junte/ssh-agent
  variables:
    # should be set up as a GitLab CI env var
    SSH_PRIVATE_KEY: $SSH_DEV_PRIVATE_KEY
  script:
    # copy the docker-compose yml to the server
    - scp docker-compose.dev.yml root@SERVER_IP:/home/junte/junte-api/
    # log in to the GitLab registry
    - ssh root@SERVER_IP docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    # then cd to the folder with docker-compose, run docker-compose pull to update images, and start services with `docker-compose up -d`
    - ssh root@SERVER_IP "cd /home/junte/junte-api/ && docker-compose -f docker-compose.dev.yml pull api-dev && HOME=/home/dev docker-compose -f docker-compose.dev.yml up -d"
  environment:
    name: dev
  only:
    - dev
You also need a GitLab runner with Docker support; see the GitLab docs for how to install it.
About stages:
build - just change it to build what you need
build_image - very simple: just log in to the GitLab registry, build the new image, and push it to the registry. Also look at the cache part; it is needed to cache files between stages and may differ for you.
deploy-dev - this part is closest to what you asked. The first 6 commands just install SSH and create your private key file so you have access to your VPS. Just copy them and add your SSH_PRIVATE_KEY to the secret variables in the GitLab UI. The last 3 SSH commands are the ones most interesting for you.
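For completeness, the docker-compose.dev.yml that gets copied to the server would just reference the image from the registry; a rough sketch (registry path, ports, and env file are assumptions, not from the answer):
version: '2'
services:
  api-dev:
    # must match the image name that build_image pushed ($CI_REGISTRY_IMAGE)
    image: registry.gitlab.example.com/junte/junte-api:latest
    restart: always
    env_file:
      - .env
    ports:
      - "8080:8080"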

GitLab CI/CD: Do not destroy a docker container after building

My use case is the following: I want to deploy a PHP app to Docker after a branch is updated and then leave it running for manual testing.
So I would like to access it under a certain URL.
What I have done so far is to register a GitLab Docker runner:
gitlab-ci-multi-runner register --url "https://git.example.com/ci" --registration-token xxxx \
--description "dockertest" \
--executor docker \
--docker-image "php:7.0-apache" \
--docker-services mariadb:latest
And I have a .gitlab-ci.yml:
stages:
  - deploy

job-deploy-docker:
  only:
    - docker-ci-test
  stage: deploy
  environment: docker
  tags:
    - docker
  image: php:7.0-apache
  services:
    - mariadb
  script:
    - apt-get update && apt-get --assume-yes install mysql-client
    - mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mariadb -e "create database ci"
The deploy job runs, Job succeeded is reported, and then the container is destroyed.
How can I avoid destroying the container? Of course it should be cleaned up or reused when a new push is made to the branch.
How can I make the web app accessible under a certain external URL (branchname.ci.example.com)?
Is that even the right approach or should I do it differently?

Integration testing with drone.io

I am using a CI tool called Drone (drone.io) and I want to run some integration tests with it. What I want is for Drone to start my application container on some port on the Drone host so that I can run integration tests against it. For example, in the .drone.yml file:
build:
  image: python3.5-uwsgi
  pull: true
  auth_config:
    username: some_user
    password: some_password
    email: email
  commands:
    - pip install --user --no-cache-dir -r requirements.txt
    - python manage.py integration_test -h 127.0.0.1:5000
    # this should send various requests to 127.0.0.1:5000
    # to test my application's behaviour

compose:
  my_application:
    # build and run a container based on the Dockerfile in the local repo on port 5000

publish:

deploy:
Drone 0.4 can't start a service from your Dockerfile. If you want to start a Docker container, you have to build it beforehand, outside this build, push it to Docker Hub or your own registry, and reference it in the compose section; see http://readme.drone.io/usage/services/#images:bfc9941b6b6fd7b4ef09dd0ccd08af0c
You can also start your application in the build step with nohup python manage.py server -h 127.0.0.1:5000 & before you run your integration tests. Make sure your application has started and is listening on port 5000 before you run integration_test.
I recommend you use Drone 0.5 with pipelines: you can build the Docker image and push it to a registry beforehand, and then use it as a service inside your build, roughly as sketched below.
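A minimal sketch of how that could look in Drone 0.5 syntax (the image path is an assumption; the service is reachable by its name instead of 127.0.0.1):
# .drone.yml for Drone 0.5 - illustrative only
pipeline:
  tests:
    image: python:3.5
    commands:
      - pip install --user --no-cache-dir -r requirements.txt
      # talk to the service container by its name
      - python manage.py integration_test -h my_application:5000

services:
  my_application:
    # prebuilt image pushed to a registry beforehand
    image: registry.example.com/my_application:latest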
