Azure DevOps CI pipeline for a standard Laravel 9 project

My team and I are familiarizing ourselves with Azure DevOps. I'd like to set up a CI pipeline for the standard Laravel 9 project as a proof of concept, but haven't been successful.
I also haven't been able to find a template.
All that needs to happen in our pipeline is for the new code to be built, tested, and packaged as a Docker container that can then be pushed to a registry for later deployment.
If anyone could help point me in the right direction, I'd greatly appreciate it!
Using the YAML file below, I've continuously run into errors that I do not understand. With the version below, the pipeline fails a unit test that only asserts that true is true.
trigger:
- main

pool:
  vmImage: 'Ubuntu-Latest'

variables:
  PHP_VERSION: '8.0.2'
  PHPUNIT_VERSION: '9.5.10'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '14.x'
- script: |
    sudo apt-get update
    sudo apt-get install -y php${{ variables.PHP_VERSION }} php${{ variables.PHP_VERSION }}-cli php${{ variables.PHP_VERSION }}-mbstring unzip
    curl -sS https://getcomposer.org/installer -o composer-setup.php
    sudo php composer-setup.php --install-dir=/usr/local/bin --filename=composer
    sudo composer global require "phpunit/phpunit:${{ variables.PHPUNIT_VERSION }}"
  displayName: 'Install PHP and Composer'
- script: |
    sudo apt-get install -y git
    git clone https://github.com/RadicalRumin/fluffy-fiesta.git
    cd /fluffy-fiesta
    composer install
  displayName: 'Clone and install dependencies'
- script: |
    phpunit
  displayName: 'Run PHPUnit tests'
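For what it's worth, a few things in the YAML above are likely sources of the failures: Azure Pipelines checks the repository out into the working directory automatically, so the explicit git clone is unnecessary and cd /fluffy-fiesta points at the filesystem root rather than the checkout; the Microsoft-hosted Ubuntu image already ships with PHP and Composer; and a standard Laravel 9 project already has PHPUnit as a dev dependency, so ./vendor/bin/phpunit (or php artisan test) can be used after composer install. A minimal sketch covering the build, test, and containerise/push steps might look like the following; the Dockerfile location, the service connection name my-registry-connection, and the image repository name are placeholders, not values from the original project:

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
# The repository is checked out automatically; no git clone step is needed.
- script: composer install --no-interaction --prefer-dist
  displayName: 'Install Composer dependencies'

- script: |
    cp .env.example .env
    php artisan key:generate
  displayName: 'Prepare application environment'

- script: ./vendor/bin/phpunit
  displayName: 'Run PHPUnit tests'

# Build the image and push it to a registry for later deployment.
- task: Docker@2
  displayName: 'Build and push Docker image'
  inputs:
    command: buildAndPush
    containerRegistry: 'my-registry-connection'  # placeholder service connection
    repository: 'my-team/laravel-poc'            # placeholder image repository
    Dockerfile: '**/Dockerfile'
    tags: |
      $(Build.BuildId)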

Related

Connection refused for local server in GitHub Actions workflow

I'm trying to run a local server in my project's CI/CD pipeline. When I start the server, I get a "Connection refused" error.
My project is a FastAPI application and I'm trying to run integration tests on pull requests to validate the app before merging the code. I tried starting my app directly (gunicorn), building a Docker image and starting it... I tried a lot of things. Then I tried to run a simple server instead of my app and... got the same error!
This is my simple server workflow:
on:
  push:
    branches:
      - "develop"

jobs:
  mylocalserver:
    name: Run server
    runs-on: ubuntu-latest
    steps:
      - name: Setup python
        uses: actions/setup-python@v3
        with:
          python-version: 3.9
      - name: Run server in background
        run: |
          python -V
          python -m http.server 3000 &> /dev/null &
          sudo lsof -i -P -n | grep LISTEN
          curl http://localhost:3000
      - name: Run server with console
        run: |
          python -m http.server 3000
Output:
If I run my app in the console (no daemon mode in gunicorn), the server starts and logs to the console in the workflow successfully.
But this way I cannot run anything after it (and I have to cancel the workflow). Any ideas? Thank you!
Maybe not the best answer, but for now running the job in a container works (just add a container key to the example from the question). Example for my FastAPI app:
on:
  pull_request:
    branches:
      - 'main'
      - 'develop'

jobs:
  run-on-pr:
    runs-on: ubuntu-latest
    container: ubuntu
    services:
      mongodb:
        image: mongo
        ports:
          - 27017:27017
    steps:
      - name: Setup git
        run: |
          apt-get update; apt-get install -y git
      - name: Git checkout
        uses: actions/checkout@v3
        with:
          path: api
      - name: Setup python
        uses: actions/setup-python@v4
        with:
          python-version: 3.9
      - name: Install pip
        run: |
          apt-get update; apt-get install -y python3-pip
      - name: Build and ENV
        run: |
          cd api
          cp .env_example_docker .env
          pip3 install -r requirements.txt
      - name: Run fastAPI
        run: |
          cd api
          gunicorn -D -k uvicorn.workers.UvicornWorker -c ./gunicorn_conf.py app.main:app
        env:
          MONGO_URL: "mongodb://mongodb:27017"
      - name: Install curl
        run: |
          apt-get update; apt-get install -y curl
      - name: Run curl
        run: |
          curl http://localhost:3000
This works, but I have to install everything in the container (git, pip). I will try a solution without using the container key, and if I find anything I will post it here.
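For reference, one possible variant without the container key, sketched here under the assumption that the app listens on port 3000, is to start the server in the background on the runner itself and let curl retry until the port accepts connections, since the first request can easily arrive before the server has bound the socket:

on:
  pull_request:
    branches:
      - 'main'
      - 'develop'
jobs:
  run-on-pr:
    runs-on: ubuntu-latest
    steps:
      - name: Setup python
        uses: actions/setup-python@v4
        with:
          python-version: 3.9
      - name: Run server in background and probe it
        run: |
          python -m http.server 3000 &
          # Retry until the port accepts connections instead of failing on
          # the first "connection refused".
          curl --retry 10 --retry-connrefused --retry-delay 1 http://localhost:3000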

Laravel CI/CD development

Can anyone point me in the right direction for Laravel CI/CD development with AWS CodeCommit?
I have gone through a lot of tutorials but always fail to connect the database to either Elastic Beanstalk or EC2.
Can someone recommend a good tutorial for this?
This is my build command (buildspec):
version: 0.2
phases:
  install:
    runtime-versions:
      php: 7.4
      nodejs: 12.x
    commands:
      - apt-get update -y
      - apt-get install -y libpq-dev libzip-dev
      - apt-get install -y php-pgsql
      - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
  pre_build:
    commands:
      - cp .env.beta .env
      - composer install
      - npm install
  build:
    commands:
      - npm run production
      - php artisan migrate --force
      - php artisan db:seed
artifacts:
  files:
    - '**/*'
  name: $(date +%Y-%m-%dT%H:%M:%S).zip
proxy:
  upload-artifacts: yes
  logs: yes
Now I am getting an nginx 404 error, even after adding "/public" in Beanstalk.
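Not a full answer, but a sketch of how this is often handled on the Elastic Beanstalk PHP platform: point the document root at Laravel's public directory and pass the database credentials to the app as environment properties through an .ebextensions config file. The RDS endpoint and the credential values below are placeholders, not real values:

# .ebextensions/laravel.config
option_settings:
  aws:elasticbeanstalk:container:php:phpini:
    document_root: /public
  aws:elasticbeanstalk:application:environment:
    DB_CONNECTION: pgsql
    DB_HOST: your-db-instance.xxxxxx.rds.amazonaws.com   # placeholder
    DB_PORT: "5432"
    DB_DATABASE: laravel
    DB_USERNAME: laravel
    DB_PASSWORD: change-me                               # placeholder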

Cannot trigger build: shown as "Not Run" in CircleCI

I am planning to migrate my existing CI to CircleCI 2.0 to take advantage of Docker. When I try to trigger a build, it shows something like this:
I also checked the steps indicated in the alert below (https://circleci.com/sunset1-0/):
Service alert: Your project references CircleCI 1.0 or it has no
configuration. CircleCI 1.0 and projects without configuration files
are no longer supported. You must update your project to use CircleCI
2.0 configuration to continue. Learn more.
Is there anything I missed?
Below is my .circleci/config.yml
version: 2 # use CircleCI 2.0
jobs: # a collection of steps
  build: # runs not using Workflows must have a `build` job as entry point
    parallelism: 3 # run three instances of this job in parallel
    docker: # run the steps with Docker
      - image: circleci/ruby:2.4.2-jessie-node # ...with this image as the primary container; this is where all `steps` will run
        environment: # environment variables for primary container
          BUNDLE_JOBS: 3
          BUNDLE_RETRY: 3
          BUNDLE_PATH: vendor/bundle
          RAILS_ENV: test
      - image: mysql:5.7 # database image
        environment: # environment variables for database
          MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    steps: # a collection of executable commands
      - checkout # special step to check out source code to working directory
      - run:
          name: setup
          command: |
            curl -sL https://deb.nodesource.com/setup_10.x | bash \
            && curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
            && echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
      - run:
          name: Dependencies
          command: |
            apt-get update && \
            DEBIAN_FRONTEND=noninteractive apt-get install -y \
            build-essential mysql-client nodejs yarn && \
            apt-get clean && \
            rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
      # Which version of bundler?
      - run:
          name: Which bundler?
          command: bundle -v
      # Restore bundle cache
      # Read about caching dependencies: https://circleci.com/docs/2.0/caching/
      - restore_cache:
          keys:
            - rails-demo-bundle-v2-{{ checksum "Gemfile.lock" }}
            - rails-demo-bundle-v2-
      - run: # Install Ruby dependencies
          name: Bundle Install
          command: bundle check --path vendor/bundle || bundle install --deployment
      # Store bundle cache for Ruby dependencies
      - save_cache:
          key: rails-demo-bundle-v2-{{ checksum "Gemfile.lock" }}
          paths:
            - vendor/bundle
      # Only necessary if app uses webpacker or yarn in some other way
      - restore_cache:
          keys:
            - rails-demo-yarn-{{ checksum "yarn.lock" }}
            - rails-demo-yarn-
      - run:
          name: Yarn Install
          command: yarn install --cache-folder ~/.cache/yarn
      # Store yarn / webpacker cache
      - save_cache:
          key: rails-demo-yarn-{{ checksum "yarn.lock" }}
          paths:
            - ~/.cache/yarn
      - run:
          name: Wait for DB
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      - run:
          name: Database setup
          command: bin/rails db:schema:load --trace
      - run:
          name: Run rspec in parallel
          command: |
            bundle exec rspec --profile 10 \
            --format RspecJunitFormatter \
            --out test_results/rspec.xml \
            --format progress \
            $(circleci tests glob "spec/**/*_spec.rb" | circleci tests split --split-by=timings)
      # Save test results for timing analysis
      - store_test_results: # Upload test results for display in Test Summary: https://circleci.com/docs/2.0/collect-test-data/
          path: test_results
      # See https://circleci.com/docs/2.0/deployment-integrations/ for example deploy configs
In order for the build to work, your config.yml file must live inside a .circleci folder at the root of your project directory.
You posted a very good description. Thanks for the code. The config version looks fine. However, you wrote that your config currently lives at .circle/config.yml (notice the missing "ci" at the end of "circle").
Move the config.yml to .circleci/config.yml and the build should proceed further (right now the build is failing because it has no configuration).
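Once the config is picked up from the right location, one more thing that may bite in the posted config: the "Wait for DB" step waits on port 5432 (the PostgreSQL default) while the secondary container is mysql:5.7, which listens on 3306 by default. Assuming the MySQL default port, that step would presumably need to be:

- run:
    name: Wait for DB
    command: dockerize -wait tcp://localhost:3306 -timeout 1m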

How to install dependencies after automatic deployment from GitLab CI/CD

I'm taking a stab at doing an automatic deployment using GitLab's CI/CD.
My project has a couple of dependencies managed through Composer, and I read somewhere that these dependencies (the vendor directory) should ideally be added to the .gitignore file so that they're not uploaded to the repository, which is what I did.
When I tested the automatic deployment, the modified files were uploaded, but I received errors about missing vendor files, which I expected. So the question is: how do I install these dependencies on the remote server from the GitLab CI/CD environment?
My .gitlab-ci.yml file looks like this:
staging:
  stage: staging
  before_script:
    - apt-get update -qq && apt-get install -y -qq lftp
  script:
    - lftp -c "set ftp:ssl-allow no; open -u $USERNAME,$PASSWORD $HOST; mirror -Rev . /public_html --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"
  environment:
    name: staging
    url: http://staging.example.com
  only:
    - staging
If you look at GitLab's documentation for caching PHP dependencies, you'll notice that it installs Composer as part of the CI job. I think you could leverage this to download the project dependencies before uploading them through lftp:
staging:
  stage: staging
  before_script:
    # Install git since Composer usually requires this if installing from source
    - apt-get update -qq && apt-get install -y -qq git
    # Install lftp to upload files to remote server
    - apt-get update -qq && apt-get install -y -qq lftp
    # Install Composer
    - curl --show-error --silent https://getcomposer.org/installer | php
    # Install project dependencies through Composer (downloads the vendor directory)
    - php composer.phar install
  script:
    # Upload files including the vendor directory
    - lftp -c "set ftp:ssl-allow no; open -u $USERNAME,$PASSWORD $HOST; mirror -Rev . /public_html --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"
  environment:
    name: staging
    url: http://staging.example.com
  only:
    - staging
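As a possible follow-up, the same caching documentation suggests keeping the vendor directory in the GitLab CI cache so Composer doesn't download everything from scratch on every pipeline. A minimal sketch (the cache key is just an example):

cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - vendor/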

Share installed requirements between jobs

I have the following YAML configuration file with three different jobs:
stages:
  - build
  - test
  - analyze

build:
  stage: build
  script:
    - apt-get update -qy
    - apt-get install -y python3-dev python3-pip
    - pip3 install -r requirements.txt

test:
  stage: test
  script:
    - coverage run --source='.' manage.py test
  cache:
    paths:
      - .coverage

analyze:
  stage: analyze
  script:
    - flake8
    - coverage report
In the first job I install the requirements, among them coverage and flake8, but these tools are needed in the following jobs. I have tried using 'dependencies' and 'cache', but it didn't work: only files/directories under the project root directory can be shared, not the binaries under /usr/local/bin.
I have tried pointing pip install at another directory, but the binaries still end up in /usr/local/bin.
The workaround I have found is to install the dependencies in each job, but I think this is the least optimal solution.
I think there must be a better solution for this.
Thanks.
I just found a solution, at least for Python 3 (enough for me):
Python 3 has a built-in tool for managing virtual environments: venv.
Using venv, we can create the virtual environment in the project root directory, cache that directory, and activate the virtual environment in each job:
variables:
  VENV_NAME: "env"

cache:
  paths:
    - $VENV_NAME

first_job:
  script:
    - apt-get update -qy
    - apt-get install -y python3-dev python3-pip python3-venv
    - python3 -m venv $VENV_NAME
    - source $VENV_NAME/bin/activate
    - pip3 install -r requirements.txt

next_jobs:
  script:
    - source $VENV_NAME/bin/activate
    - echo "hello world!"
PS: don't forget to exclude the virtual env directory from coverage and other analysis tools.
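For example, assuming the tools' standard command-line options and the $VENV_NAME variable from above, the exclusion could be done directly in the job scripts:

next_jobs:
  script:
    - source $VENV_NAME/bin/activate
    # Skip the cached virtual env when linting and measuring coverage.
    - flake8 --exclude=$VENV_NAME
    - coverage run --source='.' --omit="$VENV_NAME/*" manage.py test
    - coverage report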
