How to secure AWS access keys in a Serverless application (aws-lambda)

I am writing a Serverless application that connects to DynamoDB.
Currently I am reading the access key ID and secret access key from a JSON file.
I am going to use Jenkins for CI and need a way to secure these keys.
My plan is to set the keys as environment variables and read them in the application. The problem is that I don't know how to set the environment variables every time a Lambda function is started.
I have read that there's a way to configure this in the serverless.yml file, but I don't know how.
How can I achieve this?

Don't use environment variables for credentials. Use the IAM role that is attached to your Lambda function. AWS Lambda assumes the role on your behalf and sets the resulting temporary credentials as environment variables when your function runs. You don't even need to read these variables yourself: all of the AWS SDKs read them automatically.
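For example, with the Serverless Framework you grant that role the DynamoDB permissions it needs (e.g. via iamRoleStatements in serverless.yml) and create the client with no keys at all. A minimal Python sketch, assuming a hypothetical table name:

import boto3

# No access keys anywhere: boto3 picks up the temporary credentials that
# AWS Lambda injects from the function's execution role.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("my-table")  # hypothetical table name

def handler(event, context):
    # The execution role must allow dynamodb:GetItem on this table.
    response = table.get_item(Key={"id": event["id"]})
    return response.get("Item")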

There's a good guide on serverless security which, among other topics, covers this one as well. It's structured similarly to the OWASP Top 10.
In general, the best practice is to use AWS Secrets Manager together with the SSM Parameter Store.
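A minimal Python sketch of that approach, assuming hypothetical secret and parameter names and an execution role that allows secretsmanager:GetSecretValue and ssm:GetParameter:

import json
import boto3

secrets = boto3.client("secretsmanager")
ssm = boto3.client("ssm")

def load_config():
    # A secret stored as a JSON blob in Secrets Manager (hypothetical name)
    secret = json.loads(
        secrets.get_secret_value(SecretId="my-app/dynamodb")["SecretString"]
    )
    # A single encrypted value from the SSM Parameter Store (hypothetical name)
    table_name = ssm.get_parameter(
        Name="/my-app/table-name", WithDecryption=True
    )["Parameter"]["Value"]
    return secret, table_name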

Related

Keep My Env Parameters Secure While Deploying To AWS

I have a project in Laravel 8 with some secret env parameters, and I do not want to ship them to GitHub with my application. I will deploy the application to AWS Elastic Beanstalk with GitHub Actions. How do I keep all the secrets secure and get them onto the EC2 instance when the application is deployed?
There are multiple ways to do this, but you should not push your env file to GitHub with your application.
You can use Beanstalk's own environment properties page. However, if you do that, any other developer with access to your AWS account can see all the env parameters. It is a simple key-value store page:
Beanstalk Panel -> (Select Your Environment) -> Configuration -> Software
Under AWS Systems Manager there is a service called Parameter Store (this is my preferred way).
Here you can securely add as many parameters as you like. You can add plain String parameters as well as SecureString parameters (for passwords or API keys) and StringLists, but String and SecureString are the types I use most.
You can organise all your parameters by path, e.g. "/APP_NAME/DB_NAME".
At deploy time you fetch all the parameters from Parameter Store on your EC2 instance and write them into a newly created .env file (see the sketch below).
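A minimal Python/boto3 sketch of that fetch-and-write step (the question is a Laravel app, but the same call can be made with any SDK or the AWS CLI), assuming a hypothetical path prefix and an instance profile that allows ssm:GetParametersByPath plus kms:Decrypt for SecureStrings:

import boto3

# Assumes parameters stored under the hypothetical prefix "/my-laravel-app/",
# e.g. "/my-laravel-app/DB_PASSWORD".
ssm = boto3.client("ssm", region_name="eu-west-1")  # region is illustrative

def write_env_file(path_prefix="/my-laravel-app/", out_file=".env"):
    paginator = ssm.get_paginator("get_parameters_by_path")
    lines = []
    for page in paginator.paginate(
        Path=path_prefix, Recursive=True, WithDecryption=True
    ):
        for param in page["Parameters"]:
            key = param["Name"].replace(path_prefix, "", 1)
            lines.append(f'{key}={param["Value"]}')
    with open(out_file, "w") as f:
        f.write("\n".join(lines) + "\n")

if __name__ == "__main__":
    write_env_file()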
GitHub Actions also has GitHub Secrets: you can put all your secret parameters on the repository's secrets page, read them in your workflow, inject them into your application, and ship from GitHub to AWS directly.
You can find this page under Settings -> Secrets in your repository.

Is there a method for preventing dynamic EC2 key pairs from being written to the tfstate file?

We are starting to use Terraform to build our AWS EC2 infrastructure, but we would like to do this as securely as possible.
Ideally, we would like to create a key pair for each Windows EC2 instance dynamically and store the private key in Vault. This is possible, but I cannot think of a way of implementing it without having the private key written to the tfstate file. Yes, I know I can store the tfstate file in an encrypted S3 bucket, but this does not seem like an optimal security solution.
I am happy to write custom code if need be to have the key pair generated via another mechanism and its name passed as a variable to Terraform, but I don't want to if there are more robust and tested methods out there. I was hoping we could use Vault for this, but from my research it does not look possible.
Has anyone got any experience of doing something similar? Failing that, any suggestions?
The most secure option is to use an arbitrary key pair whose private key you destroy, plus user_data that joins the instances to an AWS Managed Microsoft AD domain. After that you can use conventional AD users and groups to control access to the instances (but not group policy in any depth, regrettably). You'll need a domain member server to administer AD at that level of detail.
If you really need local admin access on these Windows EC2 instances, then you'll need to create the key pair used to decrypt the Windows administrator password once, manually, and then share the private key securely with other admins through a secret or password manager such as Vault or 1Password.
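A minimal Python sketch of that one-off decryption step, assuming a hypothetical instance ID and a local copy of the matching private key; note that ec2:GetPasswordData returns an empty string until EC2 has finished generating the password:

import base64

import boto3
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is illustrative

def get_windows_admin_password(instance_id, private_key_path):
    # PasswordData is empty until the instance has finished provisioning.
    encrypted = ec2.get_password_data(InstanceId=instance_id)["PasswordData"]
    with open(private_key_path, "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    # EC2 encrypts the password with the key pair's public key (PKCS#1 v1.5).
    return key.decrypt(base64.b64decode(encrypted), padding.PKCS1v15()).decode("utf-8")

# Example (hypothetical instance ID and key file):
# print(get_windows_admin_password("i-0123456789abcdef0", "my-key.pem"))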
I don't see any security advantage to creating a keypair per instance, just considerable added complexity. If you're concerned about exposure, change the administrator passwords after obtaining them and store those in your secret or password manager.
I still advise going with AD if you are using Windows. Windows with AD enables world-leading unified endpoint management and Microsoft has held the lead on that for a very long time.

How can I configure application.properties using AWS CodeDeploy and/or CloudFormation?

I have a Spring Web Service deployed on Elastic Beanstalk. I'm using AWS CloudFormation for the infrastructure and I'm using AWS CodePipeline to deploy the web service automatically from merges to the master branch.
Recently I added DynamoDB integration, and I need to configure a couple of things in my application.properties. I attempted to use environment variables to configure application.properties, but I hit a wall when trying to set the environment variables from CodeDeploy.
This is my application.properties:
amazon.dynamodb.endpoint=${DYNAMODB_ENDPOINT:http://localhost:8000}
amazon.dynamodb.region=${AWS_REGION:default-region}
amazon.dynamodb.accesskey=${DYNAMODB_ACCESS_KEY:TestAccessKey}
amazon.dynamodb.secretkey=${DYNAMODB_SECRET_KEY:TestSecretKey}
spring.data.dynamodb.entity2ddl.auto = create-drop
spring.data.dynamodb.entity2ddl.gsiProjectionType = ALL
spring.data.dynamodb.entity2ddl.readCapacity = 10
spring.data.dynamodb.entity2ddl.writeCapacity = 1
The defaults are for when I'm running a local DynamoDB instance, and they work fine. However, I can't figure out how to get CodeDeploy to set environment variables for me. I also considered having CloudFormation set the environment variables, but couldn't find how to do that either. I tried manually setting the environment variables on the EC2 instance, but that didn't work and isn't the solution I'm looking for, since I'm using EB and want this project to use fully automated deployments. Please let me know if this is possible, what the industry standard is for configuring web services, and whether I'm misunderstanding either CodeDeploy or CloudFormation.
In general, it is bad practice to include access and secret keys in any sort of file or in your deployment automation.
Your instance that your application is deployed to should have an instance profile (i.e. IAM Role) attached to it which should have the appropriate DynamoDB permissions you need.
If you have that instance profile attached, the SDK should automatically be able to detect the credentials, region, and endpoint it needs to communicate with.
You may need to update the way you are creating your DynamoDB client to just use the defaults.
To set up your development machine with these properties in a way that the AWS SDK can pick up without explicitly putting them in properties files, you can run the aws configure command of the AWS CLI, which will set up your ~/.aws/ folder with the region and credentials to use on your dev machine.
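The question concerns a Spring (Java) service, but the default credential chain is easy to see with a few lines of Python/boto3; the AWS SDK for Java resolves credentials the same way (environment, profile files written by aws configure, then the instance profile):

import boto3

# No keys in code or in properties files: the SDK's default chain finds them,
# whether that is ~/.aws/credentials on a dev machine (written by `aws configure`)
# or the instance profile attached to the EC2 / Elastic Beanstalk instance.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")  # region is illustrative

for table in dynamodb.tables.all():  # needs dynamodb:ListTables on the role/user
    print(table.name)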

Setting environment variable for JSON files in AWS Lambda Chalice

I'm working with some Kaggle project. Using Python library for BigQuery on my laptop, I can successfully download the dataset after passing the authentication credential by environment variable GOOGLE_APPLICATION_CREDENTIALS. As the documentation explains, this environment variable points to the location of a JSON file containing the credential.
Now I want to run this code on AWS Lambda using Chalice. I know there's an option for environment variables in Chalice, but I don't know how to include a JSON file inside a Chalice app and pass its location as an environment variable. Moreover, I'm not sure whether it's safe to ship the credential as a JSON file in Chalice.
Does anyone have some experience on how to pass Google Credential as an environment variable for Chalice app?
You could just embed the contents of the JSON file as an environment variable in Chalice and then use the GCP Client.from_service_account_info() method to load the credentials from memory instead of from a file. However, this is not advised, since your private GCP credentials would then likely end up committed to source control.
Might I suggest approaches other than environment variables for passing your GCP credentials. You could store this JSON object in the AWS Systems Manager Parameter Store as a secure parameter; your AWS Lambda function can then fetch it with the boto3 ssm.get_parameter() method when needed (see the sketch below).
You could also consider AWS Secrets Manager as another similar alternative.
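A minimal Chalice sketch of the Parameter Store approach, assuming hypothetical app, parameter, and query names, and an execution role that allows ssm:GetParameter (plus kms:Decrypt for the key that encrypts the parameter):

import json

import boto3
from chalice import Chalice
from google.cloud import bigquery
from google.oauth2 import service_account

app = Chalice(app_name="kaggle-bq")  # hypothetical app name

def bigquery_client():
    # The whole service-account JSON file, stored as a SecureString under a
    # hypothetical parameter name.
    ssm = boto3.client("ssm")
    raw = ssm.get_parameter(
        Name="/kaggle/gcp-service-account", WithDecryption=True
    )["Parameter"]["Value"]
    creds = service_account.Credentials.from_service_account_info(json.loads(raw))
    return bigquery.Client(credentials=creds, project=creds.project_id)

@app.route("/rows")
def rows():
    job = bigquery_client().query("SELECT 1 AS x")  # placeholder query
    return [dict(row) for row in job.result()]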

Custom resource for existing Terraform provider?

I've been playing around with writing a custom resource for AWS which combines other resources in a useful way. (It's too complex to achieve effectively with a Terraform module.)
The documentation (starting with the Plugins page) outlines how to create a completely new resource from scratch. However, is it possible to "attach" my custom resource to the AWS provider? This would allow me to:
name my resources e.g. aws_foo instead of awscontrib_foo
presumably, access AWS credentials already defined for that provider
You can use the following provider to do much the same thing as Custom Resources in AWS CloudFormation:
https://github.com/mobfox/terraform-provider-multiverse
You can even back the resource with an AWS Lambda function written in any language you like. The provider also keeps state for your resources, so you can read, update, and delete them too. It creates a real resource, so it is not like an external data source.
Yes, the process is outlined here
https://github.com/hashicorp/terraform#developing-terraform
Your customised resource can live in your own build of the AWS provider plugin.
