passing environment variables from codebuild to codedeploy ec2 instances - laravel

I'm new to SRE. I'm building an AWS CodePipeline using CDK, and I need to pass the RDS instance information from my RDS stack to my CodePipeline (EC2) stack so that my EC2 instances end up with a .env file. Based on my research, it looked like CodeBuild environment variables could do this for me instead of generating a .env file in CodeBuild. I set up a few plain-text environment variables in CodeBuild and tried to pass them through to the EC2 instances deployed by CodeDeploy. I can read the correct values inside buildspec.yml, but when I run echo $DB_HOST on the EC2 instance I get nothing. Here is my setup:
codebuild environment variables:
buildspec.yml
version: 0.2
env:
  exported-variables:
    - DB_HOST
    - DB_PORT
    - DB_DATABASE
    - DB_PASSWORD
    - DB_USERNAME
phases:
  install:
    commands:
      - echo $DB_HOST
      - export DB_HOST=$DB_HOST
  pre_build:
    commands:
      - export DB_HOST=$DB_HOST
artifacts:
  files:
    - '**/*'
  name: myname-$(date +%Y-%m-%d)
my appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/
hooks:
  BeforeInstall:
    - location: script/BeforeInstall.sh
      runas: root
  AfterInstall:
    - location: script/AfterInstall.sh
      runas: root
AfterInstall.sh
#!/bin/bash
# Set permissions to storage and bootstrap cache
sudo chmod -R 0777 /var/www/html/storage
sudo chmod -R 0777 /var/www/html/bootstrap/cache
#
cd /var/www/html
#
# Run composer
composer install --ignore-platform-reqs
How can I pass these environment variables from CodeBuild to the CodeDeploy EC2 instances? Or is there another way to generate the .env file in CodeBuild?

You can't do this the way you expect: CodeBuild's exported variables are only visible to later pipeline actions, not to the instances CodeDeploy deploys to. The proper way is to pass the values through AWS Secrets Manager or SSM Parameter Store.
So in your setup, CodeBuild would populate Secrets Manager or Parameter Store (or you populate them beforehand yourself), and the CodeDeploy lifecycle scripts on the instances would read the values back from those stores.
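For example, a CodeDeploy lifecycle hook like your AfterInstall.sh could pull the values with the AWS CLI and write the .env itself. This is only a minimal sketch, assuming the instance profile allows ssm:GetParameter and that you stored the values under hypothetical names like /myapp/DB_HOST:

#!/bin/bash
# Sketch: read DB settings from SSM Parameter Store and write the Laravel .env.
# The /myapp/... parameter names are placeholders; use whatever names you actually store.
set -euo pipefail

get_param() {
  aws ssm get-parameter --name "$1" --with-decryption \
    --query 'Parameter.Value' --output text
}

cat > /var/www/html/.env <<EOF
DB_HOST=$(get_param /myapp/DB_HOST)
DB_PORT=$(get_param /myapp/DB_PORT)
DB_DATABASE=$(get_param /myapp/DB_DATABASE)
DB_USERNAME=$(get_param /myapp/DB_USERNAME)
DB_PASSWORD=$(get_param /myapp/DB_PASSWORD)
EOF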

I found a workaround. Here is my solution:
Since I can read all the environment variables in the build stage, I build the .env file there. The variables reach the build stage either from Secrets Manager or as plain text.
First, I created a .env.example file in my project root directory:
...
APP_ENV=local
APP_KEY=
...
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=
DB_USERNAME=root
DB_PASSWORD=
MAIL_MAILER=smtp
MAIL_HOST=smtp.sendgrid.net
MAIL_PORT=587
MAIL_USERNAME=apikey
MAIL_PASSWORD=
...
Second, I updated my buildspec.yml and replaced each placeholder value with the environment variable values using sed commands:
version: 0.2
env:
  exported-variables:
    - DB_HOST
    - DB_DATABASE
    - DB_PASSWORD
    - DB_USERNAME
  secrets-manager:
    MAIL_PASSWORD: "email-token:MAIL_PASSWORD"
    AWS_ACCESS_KEY_ID: "aws-token:AWS_ACCESS_KEY_ID"
    AWS_SECRET_ACCESS_KEY: "aws-token:AWS_SECRET_ACCESS_KEY"
    AWS_DEFAULT_REGION: "aws-token:AWS_DEFAULT_REGION"
    AWS_BUCKET: "aws-token:AWS_BUCKET"
    AWS_URL: "aws-token:AWS_URL"
phases:
  build:
    commands:
      - cp .env.example .env
      - sed -i "s/DB_HOST=127.0.0.1/DB_HOST=$DB_HOST/g" .env
      - sed -i "s/DB_DATABASE=/DB_DATABASE=$DB_DATABASE/g" .env
      - sed -i "s/DB_USERNAME=root/DB_USERNAME=$DB_USERNAME/g" .env
      - sed -i "s/DB_PASSWORD=/DB_PASSWORD=$DB_PASSWORD/g" .env
      - sed -i "s/APP_ENV=local/APP_ENV=$APP_ENV/g" .env
      - sed -i "s/MAIL_PASSWORD=/MAIL_PASSWORD=$MAIL_PASSWORD/g" .env
      ...
      - sed -i "s#AWS_URL=#AWS_URL=$AWS_URL#g" .env
artifacts:
  files:
    - '**/*'
  name: myname-$(date +%Y-%m-%d)
This way I am able to ship a .env file to the deploy stage.
One thing to note: if a value contains / (for example a URL), use # instead of / as the sed delimiter, as in the AWS_URL line above.
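A variation on the same idea, if you want to avoid sed delimiter issues entirely, is to write the whole file from the build environment in one command. A minimal sketch, assuming the same variables are already available in the CodeBuild environment; in buildspec.yml it would replace the cp/sed commands under the build phase:

# Sketch: generate .env directly in the build phase instead of patching .env.example.
# The shell expands the variables, so values containing / need no special escaping.
cat > .env <<EOF
APP_ENV=$APP_ENV
DB_CONNECTION=mysql
DB_HOST=$DB_HOST
DB_DATABASE=$DB_DATABASE
DB_USERNAME=$DB_USERNAME
DB_PASSWORD=$DB_PASSWORD
MAIL_PASSWORD=$MAIL_PASSWORD
AWS_URL=$AWS_URL
EOF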

Related

Docker-compose custom .env file unexpected behaviour

Example
Consider this example docker-compose file with custom .env file:
version: '3'
services:
  service_example:
    build:
      dockerfile: Dockerfile
      context: .
      args:
        AAA: ${AAA}
    command: python3 src/service/run.py
    env_file:
      - custom_env.env
custom_env.env:
AAA=qqq
When I run docker-compose config I get the following output:
WARNING: The AAA variable is not set. Defaulting to a blank string.
services:
  service_example:
    build:
      args:
        AAA: '' <----------------------------- ??????
      context: /Users/examples
      dockerfile: Dockerfile
    command: python3 src/service/run.py
    environment:
      AAA: qqq
version: '3'
Question
Why is AAA unset in the build section?
What should I do to set it properly (to the value AAA=qqq provided in the custom file)?
I've also noticed that if I rename the env file to the default (mv custom_env.env .env) and remove the env_file section from docker-compose.yml, everything works fine:
services:
  service_example:
    build:
      args:
        AAA: qqq
      context: /Users/examples
      dockerfile: Dockerfile
    command: python3 src/service/run.py
version: '3'
Quick Answer
docker-compose --env-file custom_env.env config
Answer Explanations
Question 1: Why is AAA unset in the build section?
Because the file specified in the env_file property (custom_env.env) applies to the container only; those variables are passed to the container at run time, not during the image build.
Question 2: What should I do to set it properly (to the value AAA=qqq from the custom file)?
To provide environment variables to the build step from a custom env file, specify the custom file path on the command line:
Syntax: docker-compose --env-file FILE_PATH config
Example: docker-compose --env-file custom_env.env config
Question 3: Why does .env work?
Because that is the default file that docker-compose looks for.
Summary
So, in docker-compose there are two stages at which environment variables can be supplied:
Build stage (image)
Run stage (container)
For the build stage, variables come from the default .env file or from a custom file passed with the --env-file option.
For the run stage, variables are set with the environment: property or loaded from a file with the env_file: property, as in the commands sketched below.
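As a usage sketch (assuming docker-compose 1.25 or newer, where the --env-file flag is available), the custom file has to be passed to every command that performs build-time substitution, while env_file: entries need no extra flag:

# Build-time substitution of ${AAA} in the args: section needs --env-file:
docker-compose --env-file custom_env.env config
docker-compose --env-file custom_env.env build
# Run-time variables declared under env_file: are injected automatically:
docker-compose --env-file custom_env.env up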
References
https://docs.docker.com/compose/env-file/
https://docs.docker.com/compose/environment-variables/
https://docs.docker.com/compose/compose-file/#env_file
https://docs.docker.com/compose/compose-file/#environment

Can someone look at my yaml file for code deployment using Bitbucket Pipelines?

This is my first attempt at setting up pipelines or even using any CI/CD tool. So, reading the documentation at Bitbucket, I added the bitbucket-pipelines.yml file in the root of my Laravel application for a build. Here is the file.
image: php:7.4-fpm
pipelines:
  default:
    - step:
        name: Build and test
        caches:
          - composer
        script:
          - apt-get update && apt-get install -qy git curl libmcrypt-dev mariadb-client ghostscript
          - yes | pecl install mcrypt-1.0.3
          - docker-php-ext-install pdo_mysql bcmath exif
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - ln -f -s .env.pipelines .env
          - php artisan migrate
          - ./vendor/bin/phpunit
        services:
          - mysql
          - redis
definitions:
  services:
    mysql:
      image: mysql:5.7
      environment:
        MYSQL_DATABASE: "laravel-pipeline"
        MYSQL_RANDOM_ROOT_PASSWORD: "yes"
        MYSQL_USER: "homestead"
        MYSQL_PASSWORD: "secret"
    redis:
      image: redis
The above works fine for building the application, running tests, etc. But when I add the step below to deploy using the scp pipe, I get an error saying either that I need to include an image or, at times, that there is a bad indentation of a mapping entry.
- step:
    name: Deploy to test
    deployment: test
    # trigger: manual # Uncomment to make this a manual deployment.
  script:
    - pipe: atlassian/scp-deploy:0.3.13
      variables:
        USER: '${remoteUser}'
        SERVER: '${server}'
        REMOTE_PATH: '${remote}'
        LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'
I don't really know YAML, and this is my first time working with a CI/CD tool, so I am lost. Can someone tell me what I am doing wrong?
Your indentation for name and deployment is not the same as for script. Try putting them all at the same indentation, like this:
- step:
    name: Deploy to test
    deployment: test
    script:
      - pipe: atlassian/scp-deploy:0.3.13
        variables:
          USER: '${remoteUser}'
          SERVER: '${server}'
          REMOTE_PATH: '${remote}'
          LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'

Serverless config credentials not working when serverless.yml file present

We're trying to deploy our lambda using serverless on BitBucket pipelines, but we're running into an issue when running the serverless config credentials command. This issue also happens in docker containers, and locally on our machines.
This is the command we're running:
serverless config credentials --stage staging --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
And it gives us the error:
Error: Profile default does not exist
The profile is defined in our serverless.yml file. If we rename the serverless file before running the command, it works, and then we can then put the serverless.yml file back and successfully deploy.
e.g.
- mv serverless.yml serverless.old
- serverless config credentials --stage beta --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
- mv serverless.old serverless.yml
We've tried adding the --profile default switch on there, but it makes no difference.
It's worth noting that this wasn't an issue until we started to use the SSM Parameter Store within the serverless file; the moment we added that, it started giving us the "Profile default does not exist" error.
serverless.yml (partial)
service: our-service
provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  profile: default
  stage: ${opt:stage, 'dev'}
  iamRoleStatements:
    - Effect: 'Allow'
      Action: 'ssm:GetParameter'
      Resource:
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-dev'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-beta'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-staging'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-live'
    - Effect: 'Allow'
      Action: 'kms:Decrypt'
      Resource:
        - 'arn:aws:kms:eu-west-1:0000000000:key/alias/aws/ssm'
  environment:
    LAUNCH_DARKLY_SDK_KEY: ${self:custom.launchDarklySdkKey.${self:provider.stage}}
custom:
  stages:
    - dev
    - beta
    - staging
    - live
  launchDarklySdkKey:
    dev: ${ssm:/our-service-launchdarkly-key-dev~true}
    beta: ${ssm:/our-service-launchdarkly-key-beta~true}
    staging: ${ssm:/our-service-launchdarkly-key-staging~true}
    live: ${ssm:/our-service-launchdarkly-key-live~true}
plugins:
  - serverless-offline
  - serverless-stage-manager
...
TL;DR: serverless config credentials only works when serverless.yml isn't present; otherwise it complains about the default profile not existing, and this is only an issue when the serverless file uses the SSM Parameter Store.
The profile attribute in your serverless.yaml refers to saved credentials in ~/.aws/credentials. If a [default] entry is not present in that file, serverless will complain. I can think of 2 possible solutions to this:
Try removing profile from your serverless.yaml completely and using environment variables only.
Leave profile: default in your serverless.yaml but set the credentials in ~/.aws/credentials like this:
[default]
aws_access_key_id=***************
aws_secret_access_key=***************
If you go with #2, you don't have to run serverless config credentials anymore.
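In a CI pipeline this second option can be scripted just before the deploy step. A rough sketch, assuming the AWS_ACCESS_KEY and AWS_ACCESS_SECRET variables from the question are available in the environment:

# Sketch: create the [default] profile that serverless.yml expects from CI variables,
# instead of calling `serverless config credentials`.
mkdir -p ~/.aws
cat > ~/.aws/credentials <<EOF
[default]
aws_access_key_id=$AWS_ACCESS_KEY
aws_secret_access_key=$AWS_ACCESS_SECRET
EOF
serverless deploy --stage staging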

Can docker-compose.yml read database connection from laravel .env file?

My folder structure looks like this
- root-dir
-- docker
-- src //contains laravel application
---.env
-- docker-compose.yml
As you might know, you need to specify the DB connection settings in both the Laravel .env and docker-compose.yml:
// .env
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret
// docker-compose.yml
environment:
  - MYSQL_ROOT_PASSWORD=secret
  - MYSQL_DATABASE=homestead
  - MYSQL_USER=homestead
  - MYSQL_PASSWORD=secret
Is there a way to make docker-compose "read" the settings from the Laravel .env file, since that file is not tracked by git? Basically, if I have to change settings I want to do it in only one file, and I don't want the docker-compose.yml credentials tracked in git.
You can do it like this (from the Docker documentation, https://docs.docker.com/compose/environment-variables/#the-env-file):
The “.env” file
You can set default values for any environment variables referenced in the Compose file, or used to configure Compose, in an environment file named .env:
$ cat .env
TAG=v1.5
$ cat docker-compose.yml
version: '3'
services:
  web:
    image: "webapp:${TAG}"
You can also use:
The “env_file” configuration option
You can pass multiple environment variables from an external file through to a service’s containers with the ‘env_file’ option, just like with docker run --env-file=FILE ...:
web:
  env_file:
    - web-variables.env
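Applied to the folder layout in the question, one option (a sketch only, assuming docker-compose 1.25+ and that the hard-coded values in docker-compose.yml are replaced with ${DB_DATABASE}-style references) is to point Compose at the Laravel .env directly, so the credentials live in a single untracked file:

# Run from root-dir: substitute ${DB_DATABASE}, ${DB_USERNAME}, ... in docker-compose.yml
# from src/.env instead of duplicating the values in a second file.
docker-compose --env-file ./src/.env config   # check the substituted values
docker-compose --env-file ./src/.env up -d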

Can't access env variables in RUN script.sh Dockerfile

I have this Dockerfile:
FROM php:5.6-apache
WORKDIR /var/www/html/
# ENV VARIABLES
ENV INI_FOLDER /usr/local/etc/php
ENV WWW_FOLDER /var/www
# ADD THE ENV CONFIGURATOR AND SET PERMISSIONS
ADD env.sh $WWW_FOLDER/
RUN chmod +x $WWW_FOLDER/env.sh
RUN /var/www/env.sh
The problem is that env.sh doesn't have access to the variables set in docker-compose. Is there any workaround for this?
UPDATE: added docker-compose
version: '2.0'
services:
  app:
    env_file:
      - app/mysql.env
      - app/app.env
    volumes:
      - C:\Users\svirl\Documents\workspace\docker\my-app:/var/www/html/:rw
    build: app
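The same build-time versus run-time distinction from the docker-compose answer above applies here: env_file values are only injected into the running container, so a RUN step never sees them. A rough workaround sketch, assuming you add matching ARG declarations (for example ARG MYSQL_HOST, a hypothetical name) to the Dockerfile before they are used:

# Sketch: pass build-time values as build args rather than via env_file.
# Each value needs a corresponding `ARG NAME` line in the Dockerfile.
docker-compose build --build-arg MYSQL_HOST=db app
# or with plain docker:
docker build --build-arg MYSQL_HOST=db -t my-app ./app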
