Deploying a Laravel application on AWS Elastic Beanstalk with CodePipeline and CodeBuild, with the application using RDS - laravel

So I am trying to deploy my Laravel application (v7) to AWS Elastic Beanstalk. I have seen tutorials that suggest uploading a zip file containing a .env file and updating config/database.php to use the global RDS_* environment variables.
This does not work for me because I want to use CodePipeline and CodeBuild to build my application with git hooks. I have tried to set that up, but my CodeBuild build does not succeed, because in my buildspec.yml file I added the usual Laravel setup commands such as installing dependencies and migrating the application's database.
Migrating the database is where I am encountering an issue. Somehow it seems CodeBuild does not get the RDS_* variables for my app's database. I have been stuck here for a while.
This has made me question how CodeBuild handles environment variables. How does it create the .env file it uses to deploy? I even added a Linux command to copy my .env.example into a new .env file, but I am still having the same issue.
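(For illustration, the kind of buildspec.yml build phase described above might look roughly like this; it is a sketch rather than the actual file, and the migrate command is the step that fails:)
version: 0.2
phases:
  build:
    commands:
      - composer install
      - cp .env.example .env
      - php artisan migrate --force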
Any help would be greatly appreciated. Thanks
The error in the logs:
SQLSTATE[HY000] [2002] Connection refused (SQL: select * from information_schema.tables where table_schema = forge and table_name = migrations and table_type = 'BASE TABLE')

CodeBuild runs in a different environment from Elastic Beanstalk, so environment variables created in Elastic Beanstalk cannot be accessed in the container that AWS CodeBuild runs in.
What CodeBuild actually does is build your application and transfer the artifact to an S3 bucket so that, during deployment, your app can be fetched and moved into your VPC, which in my case is an EC2 instance managed by Elastic Beanstalk.
After deployment (i.e. once the app has been moved into the VPC), the EB environment variables can be accessed by the application.
So if you want to run commands that require access to EB environment variables, the CodeBuild buildspec is the wrong place to put them. You should make use of EB extensions. You can read about them here.
For my Laravel application, I added an init.config file in the .ebextensions directory at the root of my application and then added my migration command as a container command. This worked for my use case.
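As a minimal sketch (the file name init.config is the one mentioned above; the exact migrate flags are an assumption based on a standard Laravel setup):
# .ebextensions/init.config
container_commands:
  01_migrate:
    command: "php artisan migrate --force"
    leader_only: true
Container commands run on the instance after the application bundle has been extracted, so the EB environment variables (including the RDS_* values) should be available at that point.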

Related

[Laravel + AWS]: Passing stored ENV variables into an EC2 container via an ECS deployment task results in the .env file being ignored

I'm working with a dockerized Laravel 8 website hosted on an ECS-managed EC2 instance.
Deployments are managed by AWS CodePipeline.
The code is stored in GitHub.
Production images are built by CodeBuild.
An ECS service runs a production release task to push that prod image to alternating EC2 instances.
This works well. I'm having problems, however, altering how I provision environment variables to newly released containers. During early development these were provided in a static .env file, then generated in CodeBuild during the build stage, and are now specified in the deployment task as stored AWS Systems Manager variables.
The goal is to allow the automatic provision of env variables without storing secrets in the codebase or build artifacts, or having to SSH into containers, and that's achieved.
However, I'd still like to run php artisan key:generate in the build stage to create a new app key when the production site is released, rather than storing that statically in AWS.
The problem
Whenever I specify any environment variables in the ECS deployment task, any environment variables I have provided in the site's .env file (specifically, those built in the CodeBuild build stage) are ignored.
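(For illustration, the task-level variables are injected roughly like this in a CloudFormation-style task definition; the variable names, image, and SSM parameter below are placeholders rather than the actual configuration:)
ContainerDefinitions:
  - Name: laravel-app
    Image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/laravel-app:latest
    Environment:
      - Name: APP_ENV
        Value: production
    Secrets:
      - Name: DB_PASSWORD
        ValueFrom: arn:aws:ssm:eu-west-1:123456789012:parameter/prod/DB_PASSWORD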
Here's a snippet of the relevant buildspec.yml section:
build:
  commands:
    - echo Building front-end assets...
    - npm run prod
    - echo Installing composer libraries...
    - composer install
    - echo Creating .env file...
    - touch .env
    - echo Generating app key...
    - php artisan key:generate
On release the site will throw a missing app key error, implying that php artisan key:generate failed - yet CodeBuild logging reports that it has succeeded. If I remove the environment variables from the ECS task then the generated app key is read correctly and the site works.
Illuminate\Encryption\MissingAppKeyException
No application encryption key has been specified.
It appears, basically, that if I want to provide some environment variables via ECS deployment task injection, I have to provide all environment variables that way, because any others will be ignored.
Any insights into why the ECS task environment variables method could result in the .env (or its contents) being ignored?

How can I run AWS Lambda locally and access DynamoDB?

I am trying to run and test an AWS Lambda service written in Golang locally using the SAM CLI. I have two problems:
The Lambda does not work locally if I use .zip files. When I deploy the code to AWS, it works without an issue, but if I try to run locally with .zip files, I get the following error:
A required privilege is not held by the client: 'handler' -> 'C:\Users\user\AppData\Local\Temp\tmpbvrpc0a9\bootstrap'
If I don't use .zip, then it works locally, but I still want to deploy as .zip, and it is not feasible to change template.yml every time I want to test locally.
If I try to access AWS resources, I need to set the following environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
However, if I set these variables in template.yml and then use sam local start-api --env-vars to fill them with the credentials, the local environment works and can access AWS resources, but when I deploy the code to the real AWS it gives an error, since these variable names are reserved. I also tried using different names for these variables, but then the local environment does not work. I also tried omitting them from template.yml and just using the local env-vars file, but environment variables must be present in template.yml and cannot be created with env-vars; it can only fill existing variables with values.
How can I make local env work but still be able to deploy to AWS?
For accessing AWS resources you need to look at IAM permissions rather than programmatic access keys; check this document out for CloudFormation.
To be clear, virtually nothing deployed on AWS needs those keys; it's all about applying permissions to X (Lambda, EC2, etc.). Those keys are only really needed for the AWS CLI and some local environments like Serverless and SAM.
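For example, with SAM you can attach a policy template to the function so it can reach DynamoDB without any access keys; a minimal sketch (the table and function names are placeholders):
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: provided.al2
      Handler: bootstrap
      CodeUri: ./lambda.zip
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref MyTable
  MyTable:
    Type: AWS::Serverless::SimpleTable
When running locally, sam local picks its credentials up from your local AWS profile, so the reserved AWS_* variables never need to appear in template.yml.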
The Serverless Framework now supports Golang; if you're new, I'd say give that a go while you get up to speed with IAM/CloudFormation.

Configuring multiple modules for CLI deployment (eb deploy) for AWS ElasticBeanstalk is not working

I am working on a Laravel project and trying to deploy my application to the Elastic Beanstalk environment. I am configuring the CLI for the deployment. I could configure it for a single environment successfully, but now I am configuring it to support multiple modules for multiple environments, as described here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebcli-compose.html. It is not working. Here is what I have done so far.
I created two folders called beanstalk-env-production and beanstalk-env-testing right inside the project's root folder.
Then I ran the following command.
eb init --modules beanstalk-env-testing beanstalk-env-production
Then a .elasticbeanstalk/config.yml file was created within each folder.
The config.yml files have the following content.
branch-defaults:
  default:
    environment: MyanEat-test-env
environment-defaults:
  MyanEat-test-env:
    branch: null
    repository: null
global:
  application_name: Myan Eat
  default_ec2_keyname: null
  default_platform: arn:aws:elasticbeanstalk:eu-west-1::platform/PHP 7.3 running on 64bit Amazon Linux 2/3.0.3
  default_region: eu-west-1
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  sc: null
  workspace_type: Application
Then I ran the following command to deploy to testing.
eb deploy --modules beanstalk-env-testing
Then I got the following error.
All specified modules require an env.yaml file.
The following modules are missing this file: beanstalk-env-testing
Where and how can I configure the env.yaml file? Why is it not working? How can I fix it?
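(For reference, the compose-environments docs linked above expect an env.yaml environment manifest at the root of each module folder, e.g. beanstalk-env-testing/env.yaml; a minimal sketch with placeholder values might look like this:)
AWSConfigurationTemplateVersion: 1.1.0.0
EnvironmentName: MyanEat-test-env
SolutionStack: 64bit Amazon Linux 2 v3.0.3 running PHP 7.3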

How to deploy Django Fixtures to Amazon AWS

I have my app stored on GitHub. To deploy it to Amazon, I use the eb deploy command, which takes my git repository and sends it up. It then runs the container commands to load my data.
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
The problem is that I don't want the fixtures in my git. Git should not contain this data since it's shared with other users. How can I get my AWS to load the fixtures some other way?
You can use the old-school way: scp to the EC2 instance.
You can go to the EC2 console to see the actual EC2 instance associated with your EB environment (I assume you only have one instance). Write down the public IP, and then connect to the instance as you would with a normal EC2 instance.
For example
scp -i [YOUR_AWS_KEY] [MY_FIXTURE_FILE] ec2-user@[INSTANCE_IP]:[PATH_ON_SERVER]
Note that the username has to be ec2-user.
But I do not recommend this way to deploy the project because you may need to manually execute the commands. This is, however, useful for me to get the fixture from a live server.
To avoid tracking fixtures in git, I use a simple workaround: create a local branch for EB deployment and track the fixtures, along with other environment-specific credentials, on that branch. Such EB branches should never be pushed to the remote git repositories.
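If the fixtures do ship with the deployed branch, a container command along these lines could load them after the migration (a sketch; the fixture name is a placeholder):
container_commands:
  03_loaddata:
    command: "source /opt/python/run/venv/bin/activate && python manage.py loaddata my_fixture.json"
    leader_only: true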

Parse Server S3 Adapter Deprecated

The Parse S3 Adapter's requirement of S3_ACCESS_KEY and S3_SECRET_KEY is now deprecated. It says to use the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY instead. We have set up an AWS user with an access key ID, and we have our secret key as well. We have updated to the latest version of the adapter and removed our old S3_X_Key variables. Unfortunately, as soon as we do this we are unable to access, upload or change files on our S3 bucket. The user does have access to our bucket's properties, and if we change it back to use the explicit S3_ACCESS_KEY and secret, everything works.
We are hosting on Heroku and haven't had any issues until now.
What else needs to be done to set this up?
This deprecation notice is very vague on how to fix this.
(link to notice: https://github.com/parse-server-modules/parse-server-s3-adapter#deprecation-notice----aws-credentials)
I did the following steps and it's working now:
Installed Amazon's CLI
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
Configured the CLI by creating a user and then creating a key ID and secret
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
Set the S3_BUCKET env variable
export S3_BUCKET=
Installed the files adapter using the command
npm install --save @parse/s3-files-adapter
In my parse-server's index.js I added the files adapter:
var S3Adapter = require('@parse/s3-files-adapter');
var s3Adapter = new S3Adapter();
var api = new ParseServer({
  appId: 'my_app',
  masterKey: 'master_key',
  filesAdapter: s3Adapter
});
Arjav Dave's answer below is best if you are using AWS or a hosting solution where you can log in to the server and run the aws configure command on the server, or if you are running everything locally.
However, I was asking about Heroku and this goes for any server environment where you can set ENV variables.
Really it comes down to just a few steps. If you have a previous version set up, you are going to switch your file adapter to just read:
filesAdapter: 'parse-server-s3-adapter',
(or whatever your npm-installed package is called; some are using the @parse/... one)
Take out the require statement and don't create any instance variables of S3Adapter or anything like that in your index.js.
Then on Heroku.com create config vars, or use the CLI: heroku config:set AWS_ACCESS_KEY_ID=abc and heroku config:set AWS_SECRET_ACCESS_KEY=abc
Now run and test your uploading. All should be good.
The new adapter uses the environment variables for access and you just have to tell it what file adapter is installed in the index.js file. It will handle the rest. If this isn't working it'll be worth testing the IAM profile setup and making sure it's all working before coming back to this part. See below:
Still not working? Try running this example (edit sample.js to be your bucket when testing):
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-started-nodejs.html
Completely lost and no idea where to start?
1. Get your AWS credentials:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-your-credentials.html
2. Set up your bucket:
https://transloadit.com/docs/faq/how-to-set-up-an-amazon-s3-bucket/
(follow the part on IAM users as well)
3. Follow IAM best practices:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Then go back to the top of this posting.
Hope that helps anyone else that was confused by this.
