Accessing postgres that is hosted on Heroku - ruby

So I've deployed my project to Heroku via GitLab CI/CD using the following file:
image: maven:3.8.6-openjdk-18

services:
  - postgres:latest

variables:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: password
  POSTGRES_HOST_AUTH_METHOD: trust

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - mvn package -Pprod
  artifacts:
    paths:
      - target/cinema.jar

deploy:
  stage: deploy
  image: ruby:latest
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=cinema --api-key=$HEROKU_API_KEY
  only:
    - main
From this pipeline I would like to connect to the Postgres database and grant the provided user all needed privileges on all tables. I can't wrap my head around how to make it work.
Thank you for any help!
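For the privileges part, a minimal sketch of an extra job, assuming the Heroku CLI install script at https://cli-assets.heroku.com/install.sh is acceptable, that $HEROKU_API_KEY can read the app's config, and that the target role (the hypothetical app_user below) already exists on the Heroku Postgres instance:

grant_privileges:
  stage: deploy
  image: ruby:latest
  script:
    - apt-get update -qy
    - apt-get install -y postgresql-client curl
    # The Heroku CLI authenticates non-interactively via $HEROKU_API_KEY
    - curl https://cli-assets.heroku.com/install.sh | sh
    # Fetch the live connection string instead of hardcoding credentials
    - DATABASE_URL=$(heroku config:get DATABASE_URL --app cinema)
    # app_user is a placeholder; replace it with the real role name
    - psql "$DATABASE_URL" -c 'GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO app_user;'
  only:
    - main

Note that the postgres service in the file above is a throwaway database local to the CI job; the grants have to run against the Heroku-hosted database, which is why the sketch pulls DATABASE_URL from the app's config.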

Related

Connection refused for local server in github actions workflow

I'm trying to run a local server in my project's CI/CD pipeline. When I start the server I get "Connection refused" on it.
My project is a FastAPI application and I'm trying to run integration tests on PRs to validate the app before merging the code. I tried to start my app directly (gunicorn), building a Docker image and starting it... I tried a lot of things. Then I tried to run a simple server instead of my app and... got the same error!
This is my simple server workflow:
on:
  push:
    branches:
      - "develop"

jobs:
  mylocalserver:
    name: Run server
    runs-on: ubuntu-latest
    steps:
      - name: Setup python
        uses: actions/setup-python@v3
        with:
          python-version: 3.9
      - name: Run server in background
        run: |
          python -V
          python -m http.server 3000 &> /dev/null &
          sudo lsof -i -P -n | grep LISTEN
          curl http://localhost:3000
      - name: Run server with console
        run: |
          python -m http.server 3000
Output: the curl step exits with "Connection refused".
If I run my app in the foreground (no daemon mode in gunicorn), the server starts and logs to the console in the workflow successfully, but then nothing after that step can run (and I have to cancel the workflow). Any ideas? Thank you!
Maybe not the best answer, but for now running the job in a container works (just add a container label to the workflow from the question). Example for my fastAPI app:
on:
  pull_request:
    branches:
      - 'main'
      - 'develop'

jobs:
  run-on-pr:
    runs-on: ubuntu-latest
    container: ubuntu
    services:
      mongodb:
        image: mongo
        ports:
          - 27017:27017
    steps:
      - name: Setup git
        run: |
          apt-get update; apt-get install -y git
      - name: Git checkout
        uses: actions/checkout@v3
        with:
          path: api
      - name: Setup python
        uses: actions/setup-python@v4
        with:
          python-version: 3.9
      - name: Install pip
        run: |
          apt-get update; apt-get install -y python3-pip
      - name: Build and ENV
        run: |
          cd api
          cp .env_example_docker .env
          pip3 install -r requirements.txt
      - name: Run fastAPI
        run: |
          cd api
          gunicorn -D -k uvicorn.workers.UvicornWorker -c ./gunicorn_conf.py app.main:app
        env:
          MONGO_URL: "mongodb://mongodb:27017"
      - name: Install curl
        run: |
          apt-get update; apt-get install -y curl
      - name: Run curl
        run: |
          curl http://localhost:3000
This works, but I have to install everything in the container (git, pip). I will try a solution without the container label, and if I find anything I will post it here.
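For reference, a plausible reason the original (container-less) job fails is a race: curl fires before the backgrounded server is listening. A minimal sketch of a readiness poll, assuming the plain http.server setup from the question:

- name: Run server in background and wait
  run: |
    python -m http.server 3000 &> /dev/null &
    # Poll until the port answers (up to ~10 s) before the real request
    for i in $(seq 1 10); do
      curl -sf http://localhost:3000 > /dev/null && break
      sleep 1
    done
    curl http://localhost:3000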

Build and deploy AWS Lambda of type image using SAM in Gitlab runner

I'm trying to set up CI/CD for an AWS Lambda using the SAM cli tool inside a Gitlab runner.
My Lambda function is a Go app shipped as a container image.
I followed this article to set it up properly:
https://aws.amazon.com/blogs/apn/using-gitlab-ci-cd-pipeline-to-deploy-aws-sam-applications/
Unfortunately, the .gitlab-ci.yml used there seems to apply only to functions of PackageType Zip, i.e. it uploads the application code to an S3 bucket:
image: python:3.8

stages:
  - deploy

production:
  stage: deploy
  before_script:
    - pip3 install awscli --upgrade
    - pip3 install aws-sam-cli --upgrade
  script:
    - sam build
    - sam package --output-template-file packaged.yaml --s3-bucket #S3Bucket#
    - sam deploy --template-file packaged.yaml --stack-name gitlab-example --s3-bucket #S3Bucket# --capabilities CAPABILITY_IAM --region us-east-1
  environment: production
I adjusted the script to these lines:

  script:
    - sam build
    - sam deploy
sam build fails at this stage:
Building codeuri: /builds/user/app runtime: None metadata: {'DockerTag': 'go1.x-v1', 'DockerContext': '/builds/user/app/hello-world', 'Dockerfile': 'Dockerfile'} architecture: x86_64 functions: ['HelloWorldFunction']
Building image for HelloWorldFunction function
Build Failed
Error: Building image for HelloWorldFunction requires Docker. is Docker running?
This guide at Gitlab suggests using image: docker with dind enabled, so I tried this config:
image: docker

services:
  - docker:dind

stages:
  - deploy

production:
  stage: deploy
  before_script:
    - pip3 install awscli --upgrade
    - pip3 install aws-sam-cli --upgrade
  script:
    - sam build
    - sam deploy
  environment: production
This in turn fails because the base docker image does not ship Python. How can I combine those two approaches to have Docker and SAM available at the same time?
You can:
- use the docker image and install python in your job, OR
- use the python image and install docker in your job, OR
- build your own image containing all your dependencies (docker, python, awscli, etc.) and use that as your job's image:.
Installing python in the docker image:

production:
  image: docker
  stage: deploy
  before_script:
    - apk add --update python3 py3-pip
    - pip3 install awscli --upgrade
    - pip3 install aws-sam-cli --upgrade
  script:
    - sam build
    - sam deploy
  environment: production
Installing docker in the Python image:

production:
  image: python:3.9-slim
  stage: deploy
  before_script:
    - apt update && apt install -y --no-install-recommends docker.io
    - pip3 install awscli --upgrade
    - pip3 install aws-sam-cli --upgrade
  script:
    - sam build
    - sam deploy
  environment: production
Or build your own image:

FROM python:3.9-slim
RUN apt update && apt install -y --no-install-recommends docker.io

docker build -t myregistry.example.com/myrepo/myimage:latest .
docker push myregistry.example.com/myrepo/myimage:latest

production:
  image: myregistry.example.com/myrepo/myimage:latest
  # ...
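With any of these approaches the job still needs to reach a Docker daemon, so the docker:dind service from the question stays relevant. A minimal sketch of the variables typically needed to point the Docker client at that service (exact values depend on the runner version and whether TLS is enabled):

production:
  image: python:3.9-slim
  services:
    - docker:dind
  variables:
    # Point the Docker CLI inside the job at the dind service container
    DOCKER_HOST: tcp://docker:2375
    # Disable daemon TLS; use port 2376 plus certificates if TLS is required
    DOCKER_TLS_CERTDIR: ""
  # ...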

Gitlab CI/CD env var available only on master

If I push to master, it works perfectly, all environment variables available and I have a successful deploy to Heroku.
Problem: If I push to the dev branch, it can't see the environment variables for the deploy.
$ dpl --provider=heroku --app=$HEROKU_DEV_APP --api-key=$HEROKU_API_KEY
invalid option "--api-key="
ERROR: Job failed: exit code 1
Environment settings: (screenshot of the CI/CD variables omitted)
.gitlab-ci.yml:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: maven:3.6.3-jdk-14
  script:
    - mvn clean package
  tags:
    - docker

test:
  stage: test
  image: maven:3.6.3-jdk-14
  script:
    - mvn test
  tags:
    - docker

deploy_dev:
  stage: deploy
  image: ruby:2.3
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=$HEROKU_DEV_APP --api-key=$HEROKU_API_KEY
  environment:
    name: prod
    url: https://.....herokuapp.com/
  only:
    - dev
  tags:
    - docker

deploy_prod:
  stage: deploy
  image: ruby:2.3
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=$HEROKU_PROD_APP --api-key=$HEROKU_API_KEY
  environment:
    name: prod
    url: https://.....herokuapp.com/
  when: manual
  only:
    - master
  tags:
    - docker
This is because your Heroku api key variable is set as protected.
Protected variables are visible only by protected branches and protected tags. That is why it works for you on master but not on dev.
More information: https://gitlab.com/help/ci/variables/README#protect-a-custom-variable and https://gitlab.com/help/user/project/protected_branches.md
Your options are: either remove the protected flag, or introduce another, unprotected variable with a different, less sensitive API key for your non-protected branches.
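A small guard step in the deploy jobs makes this failure mode obvious instead of surfacing as dpl's cryptic invalid option "--api-key=" (a sketch to drop into the script section):

script:
  # Fail fast with a clear message when the protected variable is withheld
  - test -n "$HEROKU_API_KEY" || { echo "HEROKU_API_KEY is empty; protected variable on an unprotected branch?"; exit 1; }
  - dpl --provider=heroku --app=$HEROKU_DEV_APP --api-key=$HEROKU_API_KEY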

Running tests with Travis CI on Heroku

I’m a little confused about how Postman (Newman) tests would execute against a build unless that build is running somewhere. Wouldn’t I need to deploy it somewhere and THEN execute Travis CI?
I connected GitHub to Travis & Heroku; I think I need to link them in the .travis.yml file.
.travis.yml
language: node_js
node_js:
  - "12.14.1"

install:
  - npm install newman
  - npm install jest

before_script:
  - node --version
  - npm --version
  - yarn --version
  - node_modules/.bin/newman --version
  - node_modules/.bin/jest --version

deploy:
  provider: heroku
  api_key:
    secure: <HEROKU_API_KEY>
  app: <HEROKU_APP_NAME>
  on:
    repo: <GITHUB_REPOSITORY>

script:
  - node_modules/.bin/newman run <COLLECTION_LINK> --environment <ENV_LINK>
  - yarn test
What should I specify to run tests after the build & deploy? Am I missing a step?
What you are looking for is build stages; see the docs: https://docs.travis-ci.com/user/build-stages/.
The syntax is pretty straightforward.
jobs:
  include:
    - stage: install
      script: npm run install
    - stage: build
      script: npm run build
    - stage: deploy
      deploy:
        provider: heroku
        api_key:
          secure: <HEROKU_API_KEY>
        app: <HEROKU_APP_NAME>
        on:
          repo: <GITHUB_REPOSITORY>
    - stage: test
      script: npm run tests
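Stages run sequentially (jobs within a stage run in parallel), so the test stage only starts after the deploy stage has finished; that is what lets Newman hit the freshly deployed app. Wiring in the commands from the question (a sketch; the <...> placeholders are kept as-is):

jobs:
  include:
    # ... install / build / deploy stages as above ...
    - stage: test
      script:
        - node_modules/.bin/newman run <COLLECTION_LINK> --environment <ENV_LINK>
        - yarn test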

CircleCI YAML config fails

I have created a CircleCI config which runs my PHPUnit tests against my Laravel application, and that is working 100%. However, I am now trying to add a workflow to SSH in and deploy my app to an AWS EC2 server, and I am getting the following errors:
Your config file has errors and may not run correctly:
2 schema violations found
required key [jobs] not found
required key [version] not found
However I cannot see an issue with my CircleCI config file, have I made a mistake somewhere?
version: 2
jobs:
  build:
    docker:
      - image: circleci/php:7.1-browsers
    working_directory: ~/laravel
    steps:
      - checkout
      - run:
          name: Download NodeJS v6
          command: curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
      - run:
          name: Install SQLite and NodeJS 6
          command: sudo apt-get install -y libsqlite3-dev nodejs
      - run:
          name: Setup Laravel testing environment variables for CircleCI test
          command: cp .env.circleci .env
      - run:
          name: Update composer to latest version
          command: composer self-update
      - restore_cache:
          keys:
            - composer-v1-{{ checksum "composer.json" }}
            - composer-v1-
      - run: composer install -n --prefer-dist --ignore-platform-reqs
      - save_cache:
          key: composer-v1-{{ checksum "composer.json" }}
          paths:
            - vendor
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install NodeJS Packages
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - ./node_modules
      - run:
          name: Create SQLite Database
          command: touch database/database.sqlite
      - run:
          name: Migrate Laravel Database
          command: php artisan migrate --database=sqlite --force
      - run:
          name: Run NPM
          command: npm run production
      # Run Laravel Server for front-end tests
      - run:
          name: Run Laravel Server
          command: php artisan serve
          background: true
      - run:
          name: Run PHPUnit Tests
          command: vendor/bin/phpunit
  deploy:
    machine:
      enabled: true
    steps:
      - run:
          name: Deploy Over SSH
          command: |
            ssh $SSH_USER@$SSH_HOST "cd /var/www/html"
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
Any help is appreciated, thank you!
CircleCI has documentation for AWS deployment; look here: https://circleci.com/docs/1.0/continuous-deployment-with-aws-codedeploy/
I think your problem is with SSH authorization for AWS. You can try it locally first and make sure that your authorization succeeds, and then do the same thing with your AWS setup.
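If SSH authorization is indeed the culprit, CircleCI 2.0 jobs usually declare the deploy key explicitly with an add_ssh_keys step. A sketch of the deploy job (the fingerprint placeholder is hypothetical; it comes from the project's SSH key settings):

deploy:
  machine:
    enabled: true
  steps:
    # Install the private key registered in the project's SSH key settings
    - add_ssh_keys:
        fingerprints:
          - "<SSH_KEY_FINGERPRINT>"
    - run:
        name: Deploy Over SSH
        command: |
          ssh -o StrictHostKeyChecking=no $SSH_USER@$SSH_HOST "cd /var/www/html"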