How to switch between multiple AWS accounts with Zappa - aws-lambda

I am experimenting with how to deploy Lambdas into different AWS accounts in a continuous delivery environment, and at the moment I am stuck. Can you please give me a clue about this? As an example, with the AWS CLI we can define which profile to use.
Ex: aws s3 ls --profile account2
In the AWS credentials file, we define the profiles as follows.
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
[account2]
aws_access_key_id = XXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Can we use the same approach with Zappa deployments?
Any clue to solve this would be highly appreciated.

There is an option to nominate the profile name; did you try it?
"profile_name": "your-profile-name", // AWS profile credentials to use. Default 'default'. Removing this setting will use the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables instead.
https://github.com/Miserlou/Zappa/blob/b12bc67aac00b1302a7f9b18444a51f21deac46a/README.md

You can define which profile to use on your own using Zappa's setting:
"profile_name": "your-profile-name", // AWS profile credentials to use. Default 'default'. Removing this setting will use the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables instead.
But in your CI you first have to create your AWS config file and populate it with your profile from environment variables that are set in your CI's web interface.
In CircleCI (same would be done for TravisCI) I'm doing it like this for my mislavcimpersak profile:
mkdir -p ~/.aws
echo "[mislavcimpersak]" >> ~/.aws/credentials
echo "aws_access_key_id = $AWS_ACCESS_KEY_ID" >> ~/.aws/credentials
echo "aws_secret_access_key = $AWS_SECRET_ACCESS_KEY" >> ~/.aws/credentials
Complete working CircleCI config file is available in my repo:
https://github.com/mislavcimpersak/xkcd-excuse-generator/blob/master/.circleci/config.yml#L58-L60
And also complete working TravisCI config file:
https://github.com/mislavcimpersak/xkcd-excuse-generator/blob/feature/travis-ci/.travis.yml#L25-L29
Also, as it says in Zappa's docs:
Removing this setting will use the AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY environment variables instead
So you can remove "profile_name": "default" from your zappa_settings.json and set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in your CI's web interface. Zappa should be able to use those.
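For example, a CI job could then deploy without writing any AWS config files at all (a sketch, assuming both variables are defined in the CI web interface and the stage name is a placeholder):
# AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are already exported by the CI runner;
# boto3 reads them from the environment
pip install zappa
zappa update production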

Related

How can I get secrets from Azure Key Vault from Databricks without creating a Secret Scope?

I'm trying to find a way to get secrets from KV without creating a secret scope
OR
Create the secret scope automatically using Databricks CLI (following https://learn.microsoft.com/en-us/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope-using-the-databricks-cli)
For the second option, I'm confused about where to run those commands.
Ideally, could the Databricks CLI be used to retrieve secrets instead of creating the secret scope?
If you want to use dbutils.secrets.get or the Databricks CLI, then you need to have the secret scope created. To create a secret scope using the CLI, you need to run it from a machine that has the Databricks CLI installed, for example your personal computer. Please note that if you're creating a secret scope backed by Key Vault using the CLI, you need to provide an AAD token, not a Databricks PAT. The simplest way to do that is to set environment variables and then use the CLI:
export DATABRICKS_HOST=https://adb-....azuredatabricks.net
export DATABRICKS_TOKEN=$(az account get-access-token -o tsv \
  --query accessToken --resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d)
databricks secrets create-scope ...
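With the legacy Databricks CLI the full command looks roughly like this (the scope name, subscription, resource group, and vault name are placeholders; see the Microsoft doc linked above for the exact flags):
databricks secrets create-scope --scope my-kv-scope \
  --scope-backend-type AZURE_KEYVAULT \
  --resource-id "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.KeyVault/vaults/<vault-name>" \
  --dns-name "https://<vault-name>.vault.azure.net/"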

How can I run AWS Lambda locally and access DynamoDB?

I am trying to run and test an AWS Lambda service written in Golang locally using the SAM CLI. I have two problems:
The Lambda does not work locally if I use .zip files. When I deploy the code to AWS it works without an issue, but if I try to run it locally with .zip files, I get the following error:
A required privilege is not held by the client: 'handler' -> 'C:\Users\user\AppData\Local\Temp\tmpbvrpc0a9\bootstrap'
If I don't use .zip then it works locally, but I still want to deploy as .zip, and it is not feasible to change template.yml every time I want to test locally.
If I try to access AWS resources, I need to set the following environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
However, if I declare these variables in template.yml and then use sam local start-api --env-vars to fill them with the credentials, the local environment works and can access AWS resources, but deploying the code to real AWS fails because these variable names are reserved. I also tried using different names for these variables, but then the local environment does not work. Omitting them from template.yml and relying only on the local env-vars file doesn't work either: environment variables must be declared in template.yml, and --env-vars can only fill existing variables with values, not create new ones.
How can I make local env work but still be able to deploy to AWS?
For accessing AWS resources you need to look at IAM permissions rather than programmatic access keys; check this document out for CloudFormation.
To be clear, virtually nothing deployed on AWS needs those keys. It's all about applying permissions to X (Lambda, EC2, etc.); the keys are only really needed for the AWS CLI and some local environments like Serverless and SAM.
The Serverless Framework now supports Golang; if you're new, I'd say give that a go while you get up to speed with IAM/CloudFormation.
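As a sketch of that IAM-based approach in a SAM template: instead of passing keys, you attach a policy to the function itself. DynamoDBCrudPolicy is one of SAM's built-in policy templates; the function and table names here are made up:
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: bootstrap
      Runtime: provided.al2
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref MyTable
When you run sam local, the policy is not enforced locally; SAM passes your local credentials (for example the default profile) into the container, so the reserved key variables never need to appear in template.yml.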

Serverless offline TypeError: Cannot read property 'accessKeyId' of null

So I can deploy my Lambda to AWS with no problem, but running it locally with
serverless invoke local --function hello
fails with:
TypeError: Cannot read property 'accessKeyId' of null
The config and credentials files look OK.
Edit
~/.aws/config
[default]
region = eu-west-1
output = json
~/.aws/credentials
[default]
aws_access_key_id = A***************
aws_secret_access_key = /p*********************
What aws-sdk version are you using in your function? A quick google showed that there was a problem in the aws-sdk that contained the same error message. Make sure you have the latest version.
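To check which version the function actually bundles and to bump it, something like this (assuming an npm-based project):
npm ls aws-sdk
npm install aws-sdk@latest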
Also, remember that when you run a function locally, aws-sdk will look for credentials on your local system.
Run $ ls ~/.aws on macOS and C:\> dir "%UserProfile%\.aws" on Windows to see if you have your credentials files stored locally. See this guide for more details.

AWS S3 authentication from Windows

I am using Pentaho (8.1) in a Windows environment (remote desktop).
To upload files to S3 I am using config & credentials files.
When I use the default file locations, %USERPROFILE%\.aws\config and %USERPROFILE%\.aws\credentials, it works fine.
I don't want every user to have to manage the credentials file manually, so I would like to use the same location for all users.
I have set environment variables:
AWS_SHARED_CREDENTIALS_FILE = D:\data\.aws\credentials
AWS_CONFIG_FILE = D:\data\.aws\config
But it looks like this location is not picked up correctly.
I am sure that the files in %USERPROFILE% are actually being used. I have also done a full restart after changing the variables, but it doesn't help.
Is there something I am missing from configuration?
If you are willing to set environment variables, then you can simply put the credentials in environment variables for each user:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
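On Windows that can be done once per user from cmd, e.g. (the key values are placeholders):
setx AWS_ACCESS_KEY_ID "AKIAXXXXXXXXXXXXXXXX"
setx AWS_SECRET_ACCESS_KEY "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
setx persists the variables for the current user, so newly started processes (such as Pentaho) will pick them up.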

Not able to run AWS CLI commands through a Windows script (.bat file)

I am trying to create an "aws_configure.bat" file which will run AWS commands. I need to configure the "aws_configure.bat" file as a Windows task. I created my script with the content below.
aws configure set AWS_ACCESS_KEY_ID <mykey>
aws configure set aws_secret_access_key <myskey>
aws configure set region us-west-2
aws dynamodb list-tables
When I try to run this script, it prints the first line in the cmd window and then stops. Can someone please suggest what the problem is here? Why can't my script run the remaining AWS CLI commands? (I have installed the AWS CLI on my system, and when I run these commands directly in a cmd window, everything works fine.)
You should consider creating and configuring your AWS credentials outside of your batch file, then referencing the named profile from the batch file.
Run aws configure --profile myprofile, and provide the information required.
Then from your batch file, call aws dynamodb list-tables --profile myprofile.
To set up the preferred/default profile, set AWS_PROFILE=myprofile in the system environment. With this method, you will not need to reference the profile in the batch file.
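Putting it together, a sketch of the batch file ("myprofile" is a placeholder for the profile created above):
rem aws_configure.bat
rem If "aws" resolves to a batch wrapper (aws.cmd, e.g. from a pip install), it must be
rem invoked with "call"; otherwise control never returns to this script, which would
rem explain the script stopping after the first line.
call aws dynamodb list-tables --profile myprofile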
