How to add Laravel env file in AWS AutoScaling - laravel

I am working on a Laravel project hosted on Amazon AWS. We use the AWS Auto Scaling service, so new instances can be added or removed on the fly, and AWS CodeDeploy so that whenever a new instance is created it pulls the code from GitHub. Because we do not commit the environment variable file to git, a new instance will not have that file and the application will not be able to run. I also do not want to commit the env file, since that is not recommended. Even if I ignored best practice and added the env file to git, there would still be a problem: I have different branches with different env files, so merging would overwrite the env file as well. What are the best practices or solutions for this case?
FYI: we are not using Elastic Beanstalk. I am aware that the Elastic Beanstalk dashboard has an option to add environment variables and the path where the env file will be created when a new instance is launched, but we are using the Auto Scaling service directly, and from what I have found AWS does not provide such functionality for Auto Scaling.

There are several options to configure the env vars for an application.
Place the env files on S3 and, on boot, have the user data/launch configuration for that environment's Auto Scaling group pull down the config file for that environment (see the sketch after this list). To lock it down, the instance role for each environment should only allow access to that environment's bucket.
Store the env vars in DynamoDB per environment, and on boot look them up in user data and set them as env vars. (Avoid this if they contain secrets/connection strings, et al.; there is no way to store encrypted secrets in DDB.)
Use Consul https://www.consul.io/
KV Store: Applications can make use of Consul's hierarchical key/value store for any number of purposes, including dynamic configuration, feature flagging, coordination, leader election, and more. The simple HTTP API makes it easy to use.
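For the S3 approach (the first option above), a minimal user data sketch could look like the following; the bucket name, environment path and application directory are placeholders I have assumed, not values from the answer:

#!/bin/bash
# Runs at boot from the launch configuration's user data.
# Pull the per-environment .env from a config bucket the instance role is allowed to read.
aws s3 cp s3://my-config-bucket/production/.env /var/www/app/.env
# Keep the file readable only by the web server user; Laravel reads it at runtime.
chown www-data:www-data /var/www/app/.env
chmod 600 /var/www/app/.env

The same commands could instead run from a CodeDeploy lifecycle hook (for example AfterInstall) if you prefer to tie the step to the deployment rather than to instance boot.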

Related

Keep My Env Parameters Secure While Deploying To AWS

I have a project in Laravel 8 with some secret env parameters, and I do not want to ship them to GitHub with my application. I will deploy the application to AWS Beanstalk with GitHub Actions. How do I keep all the secrets secure and put them on the EC2 instance when the application is deployed?
There are multiple ways to do that, and you are right that you should not send your env file to GitHub with your application.
You can use Beanstalk's own parameter store page. However, if you do that, any other developer with access to your AWS account can see all the env parameters. It is a simple key/value store page:
Beanstalk Panel -> (Select Your Environment) -> Configuration -> Software
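If you prefer not to click through the console, the same environment properties can also be set with the AWS CLI; the environment name and key/value below are placeholders:

# Sets an env var on an existing Beanstalk environment via the
# aws:elasticbeanstalk:application:environment namespace.
aws elasticbeanstalk update-environment \
  --environment-name my-laravel-env \
  --option-settings Namespace=aws:elasticbeanstalk:application:environment,OptionName=APP_ENV,Value=production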
Under Systems Manager there is a service called Parameter Store (this is my preferred way).
Here you can add as many parameters as you like, securely. You can add plain string parameters as well as secure strings (for passwords or API keys) and also integers, but string and secure string are my favourite types.
You can split all your parameters by path, like "APP_NAME/DB_NAME" etc.
You should get all the parameters from Parameter Store onto your EC2 instance and put them in a newly created .env file, as in the sketch below.
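A sketch of that step, assuming the parameters are stored under a common path such as /myapp/production (the path and target file are my assumptions):

# Read every parameter under the path, decrypting SecureString values,
# and append NAME=value lines to a fresh .env file.
aws ssm get-parameters-by-path \
  --path /myapp/production --with-decryption \
  --query 'Parameters[*].[Name,Value]' --output text |
while read -r name value; do
  echo "${name##*/}=${value}" >> /var/www/app/.env
done

You can run this from a CodeDeploy hook or from user data; the instance profile only needs ssm:GetParametersByPath on that path (plus kms:Decrypt for secure strings).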
There are GitHub secrets in GitHub Actions, and you can put all your secret parameters on the GitHub secrets page. You can then read those secrets in the workflow, put them into your application, and ship it from GitHub to AWS directly.
You can go to Settings in your repository to find the secrets page.
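Inside the workflow, the secrets are exposed to a step as environment variables (e.g. env: APP_KEY: ${{ secrets.APP_KEY }}), and the step's shell can write them into a .env file before deploying. A rough sketch of that shell step, with example variable names of my own:

# Runs inside a GitHub Actions step; APP_KEY and DB_PASSWORD come from repository secrets.
cat > .env <<EOF
APP_KEY=${APP_KEY}
DB_PASSWORD=${DB_PASSWORD}
EOF
# The .env file is then bundled into the artifact that is deployed to Beanstalk.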

ansible deploy to multi aws accounts using codebuild

The Ansible playbook I'm running via AWS CodeBuild only deploys to the same account. Instead of using a separate build for each account, I'd like to use only one build and manage multi-account deployment via the Ansible inventory. How can I set up the Ansible static inventory to add yml files for every other AWS account or environment it will be deploying to? That is, the inventory classifies those accounts into dev, stg & prod environments.
I know a bit about how this should be structured: create a yml file in the inventory folder named after the account, and create a corresponding file in the group_vars subfolder without the yml extension. But I do not know the details of the file contents. Can you please explain this to me?
On the other side, the CodeBuild environment variables are given a few account names, the environment, and the role that should be assumed in those accounts to deploy. My question is how the inventory structure and file contents should be set up for this to work.
If you want to act on resources in a different account, the general idea in AWS is to "assume" a role in that account and then run API calls as normal. Ansible has a module, sts_assume_role, which helps assume a role. I found the following blog article that may give you some pointers. Whether you run the ansible command on your laptop or in CodeBuild, the idea is the same (a stripped-down sketch follows the link):
http://www.drivenbydevops.io/aws-ansible-and-assumed-roles/
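Stripped down to CLI calls, the general assume-role idea looks roughly like this; the role ARN and inventory path are placeholders, and the same exported credentials work whether Ansible runs locally or in CodeBuild:

# Get temporary credentials for the target account's deploy role.
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/DeployRole \
  --role-session-name ansible-deploy \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
# Export them so Ansible's AWS modules (and the AWS CLI) pick them up.
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)
ansible-playbook -i inventories/dev deploy.yml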

How can I configure application.properties using AWS CodeDeploy and/or CloudFormation?

I have a Spring Web Service deployed on Elastic Beanstalk. I'm using AWS CloudFormation for the infrastructure and I'm using AWS CodePipeline to deploy the web service automatically from merges to the master branch.
Recently I added DynamoDB integration, and I need to configure a couple of things in my application.properties. I attempted to use environment variables to configure application.properties, but I hit a wall when trying to set the environment variables from CodeDeploy.
This is my application.properties:
amazon.dynamodb.endpoint=${DYNAMODB_ENDPOINT:http://localhost:8000}
amazon.dynamodb.region=${AWS_REGION:default-region}
amazon.dynamodb.accesskey=${DYNAMODB_ACCESS_KEY:TestAccessKey}
amazon.dynamodb.secretkey=${DYNAMODB_SECRET_KEY:TestSecretKey}
spring.data.dynamodb.entity2ddl.auto = create-drop
spring.data.dynamodb.entity2ddl.gsiProjectionType = ALL
spring.data.dynamodb.entity2ddl.readCapacity = 10
spring.data.dynamodb.entity2ddl.writeCapacity = 1
The defaults are for when I'm running a local DynamoDB instance, and they work fine. However, I can't figure out how to get CodeDeploy to set environment variables for me. I also considered having CloudFormation set the environment variables, but couldn't find how to do that either. I tried manually setting the environment variables on the EC2 instance, but that didn't work and isn't the solution I'm looking for, since I'm using EB and want this project to use fully automated deployments. Please let me know if this is possible, what the industry standard is for configuring web services, and whether I'm misunderstanding CodeDeploy or CloudFormation.
In general, it is bad practice to include access and secret keys in any sort of file or in your deployment automation.
Your instance that your application is deployed to should have an instance profile (i.e. IAM Role) attached to it which should have the appropriate DynamoDB permissions you need.
If you have that instance profile attached, the SDK should automatically be able to detect the credentials, region and endpoint it needs to communicate with.
You may need to update the way you are creating your DynamoDB client to just use the defaults.
To set up your development machine with these properties in a way that the AWS SDK can retrieve them without explicitly putting them in properties files, you can run the aws configure command of the AWS CLI, which sets up your ~/.aws/ folder with the region and credentials to use on your dev machine.
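For example (the values entered are whatever your IAM user provides):

aws configure                  # prompts for access key, secret key, default region and output format
aws sts get-caller-identity    # quick sanity check that the CLI/SDK can now find credentials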

How to manage production, test and development environments with serverless framework

I am planning to build an enterprise application using AWS Lambda and the Serverless Framework.
I want to separate the dev, test and prod environments, and I am planning to use AWS Parameter Store for it.
I don't want my production environment configuration to be exposed to developers. If a developer runs the command serverless offline -s production start then the production configuration should not be obtained.
It should be obtained only when the serverless function has been successfully deployed to aws lambda.
Here are a few considerations based on your question:
To have different environments with the Serverless Framework you have to set up stages. The stage value can be passed as a parameter when executing sls commands, as in the example after this list.
If you are keeping your code in a repo, the developers will have access to all the configurations. If this is really important, you could keep the production configuration in a different repo where only very specific people have access to it, and then reference it in your serverless.yml. Ex:
custom: ${file(./config/${opt:stage, 'dev'}.json)} and then in your config folder you create the prod.json file, but pointing to the real one in the new repo you created. Note: this would make your project harder to maintain.
Considering you don't want your developers to execute your production environment locally, you can use the global variable of serverless-offline to block the execution. You could also simply ask them not to do so.
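For reference, the stage mentioned above is just a flag on the sls commands, and its value feeds ${opt:stage, 'dev'} in serverless.yml; the stage names here are examples:

serverless deploy --stage dev            # developers deploy/test against dev
serverless offline start --stage dev     # local execution pinned to the dev config
serverless deploy --stage production     # run only by whoever is allowed to reach the prod config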
Here is what should be a good practice and solution based on your problem:
Considering you have a production environment you want to isolate from a given group in your company, you should create VPCs and configure access to their resources accordingly.
Then you create users with different access levels. When your developers try to execute code that accesses a resource (DynamoDB, for example) in a VPC they don't have access to, they will be blocked.
Use aws configure to define which user will execute the sls command.
Your development team will still have access to your configuration file.
Note: In this case the person/group with access to the production VPC will have to do the deploy.
If the answer does not suffice, could you please clarify which type of resource(s) are sensitive across your Serverless project? I am taking for granted it is the DB, as that is the most common scenario.

How do I add local .env files to PaaS for faster deployment?

There is something I have never figured how to do with any PaaS provider.
How can I automatically deploy locally stored environment variables to a PaaS when deploying the application? I know I can go to the Heroku, AWS or Bluemix console and manually add my .env file content as keys, but what I would really want to do is:
Pseudo code!
provider CLI deploy --ENV=.env.dev
Where --ENV is a flag to use the env file stored in the project root.
This would take my API keys from the .env file and populate the provider's environment variables. Preferably, the file would be usable across providers. Is this possible?
If you're using IBM BlueMix (or another Cloud Foundry), you can just list them in the application's manifest.yml file and cf push it with the rest of the application.
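If you want to drive it from a local .env file instead of maintaining manifest.yml, most provider CLIs accept key=value pairs, so a small wrapper gets close to the pseudo command above. This is only a sketch, not a built-in flag; the app name and file are placeholders:

# Heroku: push every non-comment line of the file in one call.
heroku config:set $(grep -v '^#' .env.dev | xargs)

# Cloud Foundry: set variables one by one, then restage so they take effect.
while IFS='=' read -r key value; do
  cf set-env my-app "$key" "$value"
done < .env.dev
cf restage my-app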
