Keep My Env Parameters Secure While Deploying to AWS (Laravel)

I have a Laravel 8 project with some secret env parameters that I do not want to ship to GitHub with my application. I will deploy the application to AWS Elastic Beanstalk with GitHub Actions. How do I keep all the secrets secure and get them onto the EC2 instance once the application is deployed?

There are multiple ways to do this, but you should not push your .env file to GitHub with your application.
You can use Beanstalk's own environment-properties page. However, if you do that, any other developer with access to your AWS account can see all the env parameters; it is a simple key-value page:
Beanstalk Panel -> (Select Your Environment) -> Configuration -> Software
Under AWS Systems Manager there is a service called Parameter Store (this is my preferred way).
Here you can securely add as many parameters as you like. You can add plain string parameters as well as secure strings (for passwords or API keys) and integers, though the string and secure types are my favorites.
You can organize all your parameters by path, like "APP_NAME/DB_NAME" etc.
On deployment, fetch all the parameters from Parameter Store onto your EC2 instance and write them into a newly created .env file.
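As a minimal sketch of that fetch-and-write step, assuming boto3 is available on the instance and that your parameters live under a hypothetical path prefix such as `/myapp/production/` (the prefix, file location, and function names are my assumptions, not part of the original answer):

```python
def render_env(params: dict) -> str:
    """Render a dict of key/value pairs as .env file contents."""
    return "\n".join(f"{key}={value}" for key, value in sorted(params.items())) + "\n"

def fetch_parameters(path: str) -> dict:
    """Fetch every parameter under an SSM path, decrypting SecureStrings."""
    import boto3  # imported here so the pure helper above stays dependency-free
    ssm = boto3.client("ssm")
    params = {}
    paginator = ssm.get_paginator("get_parameters_by_path")
    for page in paginator.paginate(Path=path, Recursive=True, WithDecryption=True):
        for p in page["Parameters"]:
            # e.g. /myapp/production/DB_PASSWORD -> DB_PASSWORD
            params[p["Name"].rsplit("/", 1)[-1]] = p["Value"]
    return params

if __name__ == "__main__":
    with open(".env", "w") as f:
        f.write(render_env(fetch_parameters("/myapp/production/")))
```

Run this once from your deployment hook (e.g. a post-deploy script) so the instance builds its own .env without the secrets ever touching git.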
GitHub Actions also has GitHub Secrets: you can put all your secret parameters on the repository's secrets page, read them in your workflow, inject them into your application, and ship it from GitHub to AWS directly.
You can find this page under Settings in your repository.

Related

How can I configure application.properties using AWS CodeDeploy and/or CloudFormation?

I have a Spring Web Service deployed on Elastic Beanstalk. I'm using AWS CloudFormation for the infrastructure and I'm using AWS CodePipeline to deploy the web service automatically from merges to the master branch.
Recently I added DynamoDB integration, and I need to configure a couple things in my application.properties. I attempted to use environment variables to configure the application.properties but I hit a wall when trying to set the environment variables from CodeDeploy.
This is my application.properties
amazon.dynamodb.endpoint=${DYNAMODB_ENDPOINT:http://localhost:8000}
amazon.dynamodb.region=${AWS_REGION:default-region}
amazon.dynamodb.accesskey=${DYNAMODB_ACCESS_KEY:TestAccessKey}
amazon.dynamodb.secretkey=${DYNAMODB_SECRET_KEY:TestSecretKey}
spring.data.dynamodb.entity2ddl.auto = create-drop
spring.data.dynamodb.entity2ddl.gsiProjectionType = ALL
spring.data.dynamodb.entity2ddl.readCapacity = 10
spring.data.dynamodb.entity2ddl.writeCapacity = 1
The defaults are for when I'm running a local DynamoDB instance, and they work fine. However, I can't figure out how to get CodeDeploy to set environment variables for me. I also considered having CloudFormation set the environment variables, but couldn't find out how to do that either. I tried manually setting the environment variables on the EC2 instance, but that didn't work, and it isn't the solution I'm looking for anyway, since I'm using EB and want this project to use fully automated deployments. Please let me know if this is possible, what the industry standard is for configuring web services, and whether I'm misunderstanding CodeDeploy or CloudFormation.
In general, it is a bad practice to include access and secret keys in any sort of files or in your deployment automation.
Your instance that your application is deployed to should have an instance profile (i.e. IAM Role) attached to it which should have the appropriate DynamoDB permissions you need.
If you have that instance profile attached, the SDK should automatically detect the credentials, region, and endpoint it needs to communicate with DynamoDB.
You may need to update the way you are creating your DynamoDB client to just use the defaults.
To set up your development machine with these properties in a way that the AWS SDK can pick up without explicitly putting them in properties files, run the AWS CLI's aws configure command, which sets up your ~/.aws/ folder with the region and credentials to use on your dev machine.
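The default credential chain described here works the same way in every AWS SDK. As an illustrative sketch (in Python/boto3 rather than the question's Java, and with the local-endpoint parameter being my assumption), a DynamoDB client can be built with no keys in code at all:

```python
from typing import Optional

def dynamodb_client_kwargs(local_endpoint: Optional[str] = None) -> dict:
    """Build client kwargs containing no credentials at all: the SDK's default
    chain resolves them from the instance profile, the environment, or ~/.aws."""
    kwargs = {}
    if local_endpoint:  # only override the endpoint for a local DynamoDB
        kwargs["endpoint_url"] = local_endpoint
    return kwargs

def make_dynamodb_client(local_endpoint: Optional[str] = None):
    import boto3  # imported lazily so the helper above has no dependencies
    # No access key or secret key anywhere: with an instance profile attached,
    # the default credential chain finds them automatically.
    return boto3.client("dynamodb", **dynamodb_client_kwargs(local_endpoint))
```

The Java SDK's DefaultCredentialsProvider follows the same lookup order, so the same "pass nothing" approach applies there.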

Setting environment variable for JSON files in AWS Lambda Chalice

I'm working with some Kaggle project. Using Python library for BigQuery on my laptop, I can successfully download the dataset after passing the authentication credential by environment variable GOOGLE_APPLICATION_CREDENTIALS. As the documentation explains, this environment variable points to the location of a JSON file containing the credential.
Now I want to run this code on Amazon Lambda using Chalice. I know there's an option for environment variable in Chalice, but I don't know how to include a JSON file inside of a Chalice app and pass its location as an environment variable. Moreover, I'm not sure whether it's safe to pass the credential as a JSON file in Chalice.
Does anyone have some experience on how to pass Google Credential as an environment variable for Chalice app?
You could just embed the contents of the JSON file as an environment variable in Chalice and then use the GCP client's from_service_account_info() method to load the credentials from memory instead of from a file, but this is not advised, since your private GCP credentials would then likely end up committed to source control.
Might I suggest approaches to passing your GCP credentials other than environment variables? You could store the JSON object in AWS Systems Manager Parameter Store as a secure parameter. Your AWS Lambda function could then fetch it with the boto3 ssm.get_parameter() method when needed.
You could also consider AWS Secrets Manager as another similar alternative.
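A rough sketch of the Parameter Store approach, assuming boto3 and google-cloud-bigquery are packaged with the Chalice app and that the parameter name `/kaggle/gcp-service-account` is a placeholder of my choosing:

```python
import json

def parse_service_account(raw: str) -> dict:
    """Parse the JSON credential blob fetched from Parameter Store."""
    info = json.loads(raw)
    if "private_key" not in info:
        raise ValueError("not a service-account credential")
    return info

def bigquery_client_from_ssm(param_name: str = "/kaggle/gcp-service-account"):
    import boto3
    from google.cloud import bigquery
    ssm = boto3.client("ssm")
    raw = ssm.get_parameter(Name=param_name, WithDecryption=True)["Parameter"]["Value"]
    # Build credentials from memory: no JSON file ever lands on disk in the Lambda
    return bigquery.Client.from_service_account_info(parse_service_account(raw))
```

The Lambda's IAM role only needs ssm:GetParameter (and KMS decrypt) permission on that one parameter.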

How to secure the AWS access key in serverless

I am writing a serverless application that connects to DynamoDB.
Currently I read the access key ID and secret access key from a JSON file.
I am going to use Jenkins for CI and need a way to secure these keys.
My plan is to set the keys as environment variables and read them in the application, but the problem is that I don't know how to set the environment variables every time a Lambda function starts.
I have read that there's a way to configure this in the serverless.yml file, but I don't know how.
How do I achieve this?
Don't use environment variables. Use the IAM role that is attached to your lambda function. AWS Lambda assumes the role on your behalf and sets the credentials as environment variables when your function runs. You don't even need to read these variables yourself. All of the AWS SDKs will read these environment variables automatically.
There's a good guide on serverless security which, among other topics, covers this one as well; it's similar to the OWASP Top 10.
In general, the best practice is to use AWS Secrets Manager together with the SSM Parameter Store.
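For non-AWS secrets that the IAM role can't replace (third-party API keys, database passwords), a minimal sketch of the Secrets Manager pattern, where the secret name and the JSON shape of the secret are my assumptions:

```python
import json

def parse_secret(secret_string: str) -> dict:
    """Secrets Manager stores key/value secrets as one JSON string."""
    return json.loads(secret_string)

def load_db_credentials(secret_name: str = "myapp/db") -> dict:
    # boto3 imported lazily; the Lambda's IAM role supplies the AWS credentials,
    # so no access key or secret key ever appears in code or serverless.yml.
    import boto3
    sm = boto3.client("secretsmanager")
    resp = sm.get_secret_value(SecretId=secret_name)
    return parse_secret(resp["SecretString"])
```

Call load_db_credentials() once outside the handler so the value is cached for the lifetime of the warm Lambda container.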

How to add Laravel env file in AWS AutoScaling

I am working on a Laravel project hosted on AWS. We use the AWS Auto Scaling service, so new instances can be added or removed on the fly, together with AWS CodeDeploy: whenever a new instance is created, it pulls the code from GitHub. Since we do not commit the environment variable file to git, the new instance has no .env file and the application cannot run. I also do not want to commit the env file, as that goes against best practice. Even if I ignored best practice and added the env file to git, there would still be a problem: I have different branches with different env files, so merging would overwrite the env file as well. So what are the best practices or solutions for this case?
FYI: we are not using Elastic Beanstalk. I know the EB dashboard has an option to add environment variables and the path where the env file is created when a new instance comes up, but we are using the Auto Scaling service directly, and according to my findings AWS does not provide such functionality for Auto Scaling.
There are several options for configuring the env vars for an application.
Place the env files on S3, and on boot have the user data/launch configuration for that environment's Auto Scaling group pull down the config file for that environment. To lock it down, scope the environment's instance role to allow access to that bucket only.
Store the env vars in DynamoDB per environment, look them up in the user data on boot, and set them as env vars (unless they contain secrets, connection strings, et al.; there is no way to store encrypted secrets in DDB).
Use Consul https://www.consul.io/
KV Store: Applications can make use of Consul's hierarchical key/value store for any number of purposes, including dynamic configuration, feature flagging, coordination, leader election, and more. The simple HTTP API makes it easy to use.
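As a sketch of the first (S3) option, runnable from the instance's user data, where the bucket name, key layout, and destination path are all placeholders of my choosing:

```python
def env_object_key(environment: str) -> str:
    """One env file per environment in the config bucket, e.g. production/.env"""
    return f"{environment}/.env"

def pull_env_file(bucket: str, environment: str,
                  dest: str = "/var/www/app/.env") -> None:
    # Runs on boot from user data; the instance role should be scoped so it
    # can read only this bucket, keeping other environments' secrets off-limits.
    import boto3
    s3 = boto3.client("s3")
    s3.download_file(bucket, env_object_key(environment), dest)
```

A one-line `python3 pull_env.py` (or the equivalent `aws s3 cp`) in the launch configuration's user data is enough to make every new instance self-configuring.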

Secure super user account within octopus deploy

I have been tasked with developing a service that takes requests for admin access to a Windows server, receives approval from management, grants access, and then automatically revokes it after an hour.
I am required to do all deployments through Octopus Deploy.
I cannot store the super user password within the service, since all developers have read access to our SVN.
I was planning on storing the password within a secure variable in Octopus Deploy, but then realized that anyone with modification permissions on the project could add a powershell script to send themselves the variable values.
Is there any way to secure a variable within Octopus Deploy that can be used to install a windows service with super user access, but cannot be retrieved by any means?
My setup uses a combination of roles and environments to limit access to sensitive variables such as prod passwords.
You need two roles:
1) a project editor role that allows developers to do everything, but only in the Dev/UAT environments. This lets them get everything ready and tested without access to the prod environment.
2) a production editor role that only a few people have. Production password variables are scoped to the Prod environment, so developers can't access them.