If I push to master, it works perfectly: all environment variables are available and the deploy to Heroku succeeds.
Problem: if I push to the dev branch, the job can't see the environment variables for the deploy.
$ dpl --provider=heroku --app=$HEROKU_DEV_APP --api-key=$HEROKU_API_KEY
invalid option "--api-key="
ERROR: Job failed: exit code 1
Environment settings:
.gitlab-ci.yml:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: maven:3.6.3-jdk-14
  script:
    - mvn clean package
  tags:
    - docker

test:
  stage: test
  image: maven:3.6.3-jdk-14
  script:
    - mvn test
  tags:
    - docker

deploy_dev:
  stage: deploy
  image: ruby:2.3
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=$HEROKU_DEV_APP --api-key=$HEROKU_API_KEY
  environment:
    name: prod
    url: https://.....herokuapp.com/
  only:
    - dev
  tags:
    - docker

deploy_prod:
  stage: deploy
  image: ruby:2.3
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=$HEROKU_PROD_APP --api-key=$HEROKU_API_KEY
  environment:
    name: prod
    url: https://.....herokuapp.com/
  when: manual
  only:
    - master
  tags:
    - docker
This is because your Heroku API key variable is set as protected.
Protected variables are visible only to protected branches and protected tags. That is why it works for you on master but not on dev.
More information: https://gitlab.com/help/ci/variables/README#protect-a-custom-variable and https://gitlab.com/help/user/project/protected_branches.md
Your options are: either remove the protected flag, or introduce another, less sensitive unprotected variable holding a separate API key for your non-protected branches.
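For the second option, the dev job would simply reference the new variable. A rough sketch, assuming a hypothetical unprotected CI/CD variable called HEROKU_DEV_API_KEY:

deploy_dev:
  stage: deploy
  image: ruby:2.3
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    # HEROKU_DEV_API_KEY is assumed to be an unprotected CI/CD variable,
    # so it is also visible to the non-protected dev branch
    - dpl --provider=heroku --app=$HEROKU_DEV_APP --api-key=$HEROKU_DEV_API_KEY
  only:
    - dev
  tags:
    - docker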
Related
I'm trying to run a local server for my project's CI/CD pipeline. When I start the server I get a "Connection refused" on it.
My project is a FastAPI application and I'm trying to run integration tests on PRs to validate the app before merging the code. I tried to start my app directly (gunicorn), building a Docker image and starting it... I tried a lot of things. Then I tried to run a simple server instead of my app and... got the same error!
This is my simple server workflow:
on:
  push:
    branches:
      - "develop"

jobs:
  mylocalserver:
    name: Run server
    runs-on: ubuntu-latest
    steps:
      - name: Setup python
        uses: actions/setup-python@v3
        with:
          python-version: 3.9
      - name: Run server in background
        run: |
          python -V
          python -m http.server 3000 &> /dev/null &
          sudo lsof -i -P -n | grep LISTEN
          curl http://localhost:3000
      - name: Run server with console
        run: |
          python -m http.server 3000
Output:
If I run my app with the console (no daemon mode in gunicorn), the server starts and logs to the console in the workflow successfully:
But this way I cannot run anything after it (and I have to cancel the workflow). Any ideas? Thank you!
Maybe not the best answer, but for now running the job in a container works (just add a container key to the example from the question). Example for my FastAPI app:
on:
  pull_request:
    branches:
      - 'main'
      - 'develop'

jobs:
  run-on-pr:
    runs-on: ubuntu-latest
    container: ubuntu
    services:
      mongodb:
        image: mongo
        ports:
          - 27017:27017
    steps:
      - name: Setup git
        run: |
          apt-get update; apt-get install -y git
      - name: Git checkout
        uses: actions/checkout@v3
        with:
          path: api
      - name: Setup python
        uses: actions/setup-python@v4
        with:
          python-version: 3.9
      - name: Install pip
        run: |
          apt-get update; apt-get install -y python3-pip
      - name: Build and ENV
        run: |
          cd api
          cp .env_example_docker .env
          pip3 install -r requirements.txt
      - name: Run fastAPI
        run: |
          cd api
          gunicorn -D -k uvicorn.workers.UvicornWorker -c ./gunicorn_conf.py app.main:app
        env:
          MONGO_URL: "mongodb://mongodb:27017"
      - name: Install curl
        run: |
          apt-get update; apt-get install -y curl
      - name: Run curl
        run: |
          curl http://localhost:3000
This works, but I have to install everything in the container (git, pip). I will try a solution without using the container key, and if I find anything I will post it here.
So I've deployed my project to Heroku using GitLab CI/CD with the following file:
image: maven:3.8.6-openjdk-18

services:
  - postgres:latest

variables:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: password
  POSTGRES_HOST_AUTH_METHOD: trust

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - mvn package -Pprod
  artifacts:
    paths:
      - target/cinema.jar

deploy:
  stage: deploy
  image: ruby:latest
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=cinema --api-key=$HEROKU_API_KEY
  only:
    - main
From here I would like to connect to Postgres and grant the provided user all of the needed privileges on all tables.
I can't wrap my head around how to make it work.
Thank you for any help!
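One possible direction, sketched under the assumption that the target is the Postgres service defined above (not a Heroku database): because POSTGRES_HOST_AUTH_METHOD is trust, a job can reach the service at its default hostname postgres without a password and run GRANT statements through psql. The role name app_user below is hypothetical:

grant-privileges:
  stage: build
  image: postgres:latest   # provides the psql client
  script:
    # "postgres" is the default hostname GitLab CI gives the postgres:latest service
    - psql -h postgres -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "CREATE USER app_user WITH PASSWORD 'change-me';"
    - psql -h postgres -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO app_user;"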
I have built a pipeline with four steps: build, test, lint and deploy. However, I have to run npm install in three separate jobs, which I think could be done in a cleaner way. Could someone point me to how I could run npm install just once (or globally) instead?
This is the config.yml file:
version: 2.1

orbs:
  node: circleci/node@4.1.0
  heroku: circleci/heroku@0.0.10
  eslint: arrai/eslint@2.0.0

jobs:
  build:
    executor:
      name: node/default
    steps:
      - checkout
      - run: npm install
  test:
    executor:
      name: node/default
    steps:
      - checkout
      - run: npm install
      - run: npm run test
  lint:
    executor:
      name: node/default
    steps:
      - checkout
      - run: npm install
      - run: npm run lint
  deploy:
    executor:
      name: heroku/default
    steps:
      - checkout
      - heroku/deploy-via-git

workflows:
  main:
    jobs:
      - build
      - test:
          requires:
            - build
      - lint:
          requires:
            - test
      - deploy:
          requires:
            - lint
You should save the installed packages in a cache and then restore them in each job.
Ref:
https://circleci.com/blog/config-best-practices-dependency-caching/
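A minimal sketch of that pattern for the test job, assuming a committed package-lock.json and a made-up cache key prefix npm-deps-v1 (the same restore/save pair would go into the build and lint jobs):

  test:
    executor:
      name: node/default
    steps:
      - checkout
      # restore a previous node_modules if the lockfile has not changed
      - restore_cache:
          keys:
            - npm-deps-v1-{{ checksum "package-lock.json" }}
            - npm-deps-v1-
      - run: npm install
      # save node_modules keyed on the lockfile checksum
      - save_cache:
          key: npm-deps-v1-{{ checksum "package-lock.json" }}
          paths:
            - node_modules
      - run: npm run test

Newer versions of the circleci/node orb also ship a node/install-packages step that wraps this restore/install/save pattern for you.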
I’m a little confused about how Postman (Newman) tests would execute against a build unless that build is running somewhere. Wouldn’t I need to deploy it somewhere and THEN execute Travis CI?
I connected GitHub to Travis and Heroku; I think I need to link them in the .travis.yml file.
.travis.yml
language: node_js
node_js:
  - "12.14.1"

install:
  - npm install newman
  - npm install jest

before_script:
  - node --version
  - npm --version
  - yarn --version
  - node_modules/.bin/newman --version
  - node_modules/.bin/jest --version

deploy:
  provider: heroku
  api_key:
    secure: <HEROKU_API_KEY>
  app: <HEROKU_APP_NAME>
  on:
    repo: <GITHUB_REPOSITORY>

script:
  - node_modules/.bin/newman run <COLLECTION_LINK> --environment <ENV_LINK>
  - yarn test
What should I specify to run tests after the build & deploy? Am I missing a step?
What you are looking for is build stages, see docs https://docs.travis-ci.com/user/build-stages/.
The syntax is pretty straightforward.
jobs:
  include:
    - stage: install
      script: npm install
    - stage: build
      script: npm run build
    - stage: deploy
      deploy:
        provider: heroku
        api_key:
          secure: <HEROKU_API_KEY>
        app: <HEROKU_APP_NAME>
        on:
          repo: <GITHUB_REPOSITORY>
    - stage: test
      script: npm run tests
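Wired into your file, the test stage would simply carry the commands you already have under script:, so they only run once the deploy stage has finished. Roughly, keeping the placeholders from your question:

jobs:
  include:
    # ...install, build and deploy stages as above...
    - stage: test
      script:
        - node_modules/.bin/newman run <COLLECTION_LINK> --environment <ENV_LINK>
        - yarn test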
I have created a CircleCI config which runs my PHPUnit tests against my Laravel application, and that is working 100%. However, I am now trying to add a workflow to then SSH in and deploy my app to an AWS EC2 server, and I am getting the following errors:
Your config file has errors and may not run correctly:
2 schema violations found
required key [jobs] not found
required key [version] not found
However, I cannot see an issue with my CircleCI config file. Have I made a mistake somewhere?
version: 2
jobs:
  build:
    docker:
      - image: circleci/php:7.1-browsers
    working_directory: ~/laravel
    steps:
      - checkout
      - run:
          name: Download NodeJS v6
          command: curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
      - run:
          name: Install SQLite and NodeJS 6
          command: sudo apt-get install -y libsqlite3-dev nodejs
      - run:
          name: Setup Laravel testing environment variables for CircleCI test
          command: cp .env.circleci .env
      - run:
          name: Update composer to latest version
          command: composer self-update
      - restore_cache:
          keys:
            - composer-v1-{{ checksum "composer.json" }}
            - composer-v1-
      - run: composer install -n --prefer-dist --ignore-platform-reqs
      - save_cache:
          key: composer-v1-{{ checksum "composer.json" }}
          paths:
            - vendor
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install NodeJS Packages
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - ./node_modules
      - run:
          name: Create SQLite Database
          command: touch database/database.sqlite
      - run:
          name: Migrate Laravel Database
          command: php artisan migrate --database=sqlite --force
      - run:
          name: Run NPM
          command: npm run production
      # Run Laravel Server for front-end tests
      - run:
          name: Run Laravel Server
          command: php artisan serve
          background: true
      - run:
          name: Run PHPUnit Tests
          command: vendor/bin/phpunit
  deploy:
    machine:
      enabled: true
    steps:
      - run:
          name: Deploy Over SSH
          command: |
            ssh $SSH_USER@$SSH_HOST "cd /var/www/html"
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
Any help is appreciated, thank you!
CircleCI has documentation for AWS deployment. Look here: https://circleci.com/docs/1.0/continuous-deployment-with-aws-codedeploy/
I think your problem is with SSH authorization for AWS. You can try it locally first and make sure that your authorization succeeds, and then do the same thing against your AWS server.
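If the key works locally, make sure CircleCI has it as well: add the private key under the project's SSH key settings and reference its fingerprint in the deploy job with add_ssh_keys. A sketch (only the deploy job shown; the fingerprint value is a placeholder):

jobs:
  deploy:
    machine:
      enabled: true
    steps:
      # makes the uploaded private key available to this job
      - add_ssh_keys:
          fingerprints:
            - "xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"
      - run:
          name: Deploy Over SSH
          command: |
            ssh -o StrictHostKeyChecking=no $SSH_USER@$SSH_HOST "cd /var/www/html"

The -o StrictHostKeyChecking=no flag just avoids the interactive host-key prompt on a fresh build machine; pre-populating known_hosts works too.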