How to encrypt Lambda environment variables using CloudFormation - aws-lambda

I have an AWS CloudFormation template that includes a Lambda function with sensitive environment variables. I'd like to set up a KMS key and encrypt them with it.
Adding basic CloudFormation to encrypt the variables is fine; even the default aws/lambda key is OK.
LambdaFunction:
  Type: AWS::Lambda::Function
  DependsOn: LambdaRole
  Properties:
    Environment:
      Variables:
        key: AKIAJ6W7WERITYHYUHJGHN
        secret: PGDzQ8277Fg6+SbuTyqxfrtbskjnaslkchkY1
        dest: !Ref dstBucket
    Code:
      ZipFile: |
        from __future__ import print_function
        import os
        import json
        import boto3
        import time
        import string
        import urllib

        print('Loading function')

        ACCESS_KEY_ID = os.environ['key']
        ACCESS_SECRET_KEY = os.environ['secret']
        #s3_bucket = boto3.resource('s3',aws_access_key_id=ACCESS_KEY_ID,aws_secret_access_key=ACCESS_SECRET_KEY)
        s3 = boto3.client('s3',aws_access_key_id=ACCESS_KEY_ID,aws_secret_access_key=ACCESS_SECRET_KEY)
        #s3 = boto3.client('s3')

        def handler(event, context):
            source_bucket = event['Records'][0]['s3']['bucket']['name']
            key = event['Records'][0]['s3']['object']['key']
            #key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'])
            #target_bucket = "${dstBucket}"
            target_bucket = os.environ['dest']
            copy_source = {'Bucket':source_bucket, 'Key':key}
            try:
                s3.copy_object(Bucket=target_bucket, Key=key, CopySource=copy_source)
            except Exception as e:
                print(e)
                print('Error getting object {} from bucket {}. Make sure they exist '
                      'and your bucket is in the same region as this '
                      'function.'.format(key, source_bucket))
                raise e

You can store the access key and secret key in AWS SSM Parameter Store, encrypted with a KMS key. Go to AWS Systems Manager -> Parameter Store -> Create parameter, choose the SecureString option, and select the KMS key to encrypt with. You can then read the parameter through a boto3 call, for example response = client.get_parameter(Name='AccessKey', WithDecryption=True), and take the access key from the response. Make sure the Lambda function has enough permissions to use that KMS key to decrypt the parameter you stored: attach the necessary kms:Decrypt permissions to the IAM role the Lambda uses. This way you don't need to pass your access key and secret key as environment variables. Hope this will help!
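For example, a minimal sketch of that approach inside the Lambda handler (the parameter names AccessKey and SecretKey are placeholders for whatever you create in Parameter Store):

import boto3

ssm = boto3.client('ssm')

def get_secret(name):
    # WithDecryption=True asks SSM to decrypt the SecureString with its KMS key;
    # the Lambda role needs ssm:GetParameter plus kms:Decrypt on that key.
    response = ssm.get_parameter(Name=name, WithDecryption=True)
    return response['Parameter']['Value']

def handler(event, context):
    access_key = get_secret('AccessKey')    # placeholder parameter name
    secret_key = get_secret('SecretKey')    # placeholder parameter name
    s3 = boto3.client('s3',
                      aws_access_key_id=access_key,
                      aws_secret_access_key=secret_key)
    # ... copy objects as in the original function ...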

You can create a KMS key manually in the AWS KMS console, or
with CloudFormation using the AWS::KMS::Key resource (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html).
The key's ARN (available via GetAtt) can then be used for the KmsKeyArn property of the Lambda function:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html#cfn-lambda-function-kmskeyarn
Hope this helps!
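A minimal sketch of what that could look like in the template (resource names are illustrative; the key policy is omitted for brevity, and the Lambda role still needs permission to use the key):

LambdaKmsKey:
  Type: AWS::KMS::Key
  Properties:
    Description: CMK used to encrypt the Lambda function's environment variables

LambdaFunction:
  Type: AWS::Lambda::Function
  DependsOn: LambdaRole
  Properties:
    KmsKeyArn: !GetAtt LambdaKmsKey.Arn
    Environment:
      Variables:
        dest: !Ref dstBucket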

You can also use the Secrets Manager AWS::SecretsManager::Secret CFN resource to store the secret values.
Use CloudFormation dynamic references to retrieve the secret's values from either SSM Parameter Store or Secrets Manager in the template where you consume them.
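For example, a sketch of the Secrets Manager dynamic-reference syntax (MyLambdaSecret and the JSON keys are placeholders; note the value is resolved at deploy time, so it still ends up as the plain environment variable value on the function):

Environment:
  Variables:
    # Placeholder secret name and JSON keys; adjust to your secret's structure.
    key: '{{resolve:secretsmanager:MyLambdaSecret:SecretString:access_key_id}}'
    secret: '{{resolve:secretsmanager:MyLambdaSecret:SecretString:secret_access_key}}'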

Related

How to generate SAS token using python legacy SDK(2.1) without account_key or connection_string

I am using Python 3.6 and azure-storage-blob (version 1.5.0) and trying to use a user-assigned managed identity to connect to my Azure Storage blob from an Azure VM. The problem I am facing is that I want to generate a SAS token to form a downloadable URL.
I am using blob_service = BlockBlobService(account_name, token_credential) to authenticate, but I am not able to find any method that lets me generate a SAS token without supplying the account key.
I am also not seeing any way of using the user delegation key, as is available in the new azure-storage-blob (versions >= 12.0.0). Is there any workaround, or will I need to upgrade the Azure Storage library in the end?
I tried to reproduce this in my environment and was able to generate a SAS token without an account key or connection string.
Code:
import datetime as dt

from azure.identity import DefaultAzureCredential
from azure.storage.blob import (
    BlobSasPermissions,
    BlobServiceClient,
    generate_blob_sas,
)

credential = DefaultAzureCredential(exclude_shared_token_cache_credential=True)

storage_acct_name = "Accountname"
container_name = "containername"
blob_name = "Filename"
url = f"https://{storage_acct_name}.blob.core.windows.net"

blob_service_client = BlobServiceClient(url, credential=credential)

# Request a user delegation key backed by the managed identity (no account key needed).
udk = blob_service_client.get_user_delegation_key(
    key_start_time=dt.datetime.utcnow() - dt.timedelta(hours=1),
    key_expiry_time=dt.datetime.utcnow() + dt.timedelta(hours=1))

# Sign the blob SAS with the user delegation key instead of the account key.
sas = generate_blob_sas(
    account_name=storage_acct_name,
    container_name=container_name,
    blob_name=blob_name,
    user_delegation_key=udk,
    permission=BlobSasPermissions(read=True),
    start=dt.datetime.utcnow() - dt.timedelta(minutes=15),
    expiry=dt.datetime.utcnow() + dt.timedelta(hours=2),
)

sas_url = (
    f'https://{storage_acct_name}.blob.core.windows.net/'
    f'{container_name}/{blob_name}?{sas}'
)
print(sas_url)
Output: the script prints a SAS URL for the blob, which can be used as a downloadable link.
Make sure the managed identity has been assigned the Storage Blob Data Contributor role, otherwise the user delegation key request will fail with an authorization error.

Update S3 KMS key on an object using server side encryption

I am working on a feature where a customer can update their KMS key in our platform so that they are using their KMS key to encrypt data instead of one generated by us. The way it works is when a customer signs up, we generate a KMS key for them and upload the objects using that key. If the customer wants to provide their own key, I want to be able to update this key without having to pull down the data and re-upload with the new key.
def enc_client
  Aws::S3::Encryption::Client.new(
    kms_client: Aws::KMS::Client.new(region: 'us-east-1'),
    kms_key_id: ENV['MY_PRIVATE_KEY']
  )
end

def s3_client
  enc_client.client
end

bucket = "my_bucket_name"
key = "path/12345abcde/preview.html"
copy_source = "/#{key}"
server_side_encryption = "aws:kms"
server_side_encryption = "aws:kms"
# This returns the object with the key present. If I go in the AWS client and manually add or remove the key, it will update on this call.
resp = s3_client.get_object(bucket: bucket, key: key)
#<struct Aws::S3::Types::GetObjectOutput
body=#<StringIO:0x000000000bb45108>,
delete_marker=nil,
accept_ranges="bytes",
expiration=nil,
restore=nil,
last_modified=2019-04-12 15:40:09 +0000,
content_length=19863445,
etag="\"123123123123123123123123123123-1\"",
missing_meta=nil,
version_id=nil,
cache_control=nil,
content_disposition="inline; filename=\"preview.html\"",
content_encoding=nil,
content_language=nil,
content_range=nil,
content_type="text/html",
expires=nil,
expires_string=nil,
website_redirect_location=nil,
server_side_encryption="aws:kms",
metadata={},
sse_customer_algorithm=nil,
sse_customer_key_md5=nil,
ssekms_key_id="arn:aws:kms:us-east-1:123456789123:key/222b222b-bb22-2222-bb22-222bbb22bb2b",
storage_class=nil,
request_charged=nil,
replication_status=nil,
parts_count=nil,
tag_count=nil>
new_ssekms_key_id = "arn:aws:kms:us-east-1:123456789123:key/111a111a-aa11-1111-aa11-111aaa11aa1a"
resp = s3_client.copy_object(bucket: bucket, key: key, copy_source: copy_source, ssekms_key_id: new_ssekms_key_id)
Aws::S3::Errors::InvalidArgument: Server Side Encryption with AWS KMS managed key requires HTTP header x-amz-server-side-encryption : aws:kms
from /usr/local/bundle/gems/aws-sdk-core-3.6.0/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call'
resp = s3_client.copy_object(bucket: bucket, key: key, copy_source: copy_source, ssekms_key_id: new_ssekms_key_id, server_side_encryption: server_side_encryption)
Aws::S3::Errors::AccessDenied: Access Denied
from /usr/local/bundle/gems/aws-sdk-core-3.6.0/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call'
I would like to be able to update the KMS key ID to a new one on the server side.
copy_source = "/#{key}" is incorrect. The value should be "/#{bucket}/#{key}".
The service is interpreting the first element of your key path as the name of a bucket -- probably someone else's bucket.

glue job times out when calling aws boto3 client api

I am using the Glue console, not a dev endpoint. The Glue job is able to access the Glue catalogue and table using the code below:
datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database="glue-db", table_name="countries")
print "Table Schema:", datasource0.schema()
print "datasource0", datasource0.show()
Now I want to get the metadata for all tables from the Glue database glue-db.
I could not find a function for this in the awsglue.context API, therefore I am using boto3:
client = boto3.client('glue', 'eu-central-1')
responseGetDatabases = client.get_databases()
databaseList = responseGetDatabases['DatabaseList']
for databaseDict in databaseList:
    databaseName = databaseDict['Name']
    print("databaseName:{}".format(databaseName))
    responseGetTables = client.get_tables(DatabaseName=databaseName,
                                          MaxResults=123)
    print("responseGetDatabases{}".format(responseGetTables))
    tableList = responseGetTables['TableList']
    print("response Object{0}".format(responseGetTables))
    for tableDict in tableList:
        tableName = tableDict['Name']
        print("-- tableName:{}".format(tableName))
The code runs in a Lambda function, but fails within the Glue ETL job with the following error:
botocore.vendored.requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='glue.eu-central-1.amazonaws.com', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(, 'Connection to glue.eu-central-1.amazonaws.com timed out. (connect timeout=60)'))
The problem seems to be in the environment configuration. The Glue VPC has two subnets:
private subnet: has an S3 endpoint for Glue and allows inbound traffic from the RDS security group.
public subnet: in the Glue VPC with a NAT gateway; the private subnet routes out through the NAT gateway.
I am not sure what I am missing here.
Try using a proxy while creating the boto3 client:
import boto3
from botocore.config import Config
from pyhocon import ConfigFactory

service_name = 'glue'
region = 'eu-central-1'  # region used in the question

default = ConfigFactory.parse_file('glue-default.conf')
override = ConfigFactory.parse_file('glue-override.conf')
host = override.get('proxy.host', default.get('proxy.host'))
port = override.get('proxy.port', default.get('proxy.port'))

config = Config()
if host and port:
    config.proxies = {'https': '{}:{}'.format(host, port)}

client = boto3.Session(region_name=region).client(service_name=service_name, config=config)
glue-default.conf and glue-override.conf are deployed to the cluster by Glue during spark-submit, into the /tmp directory.
I had a similar issue and did the same thing using the public library from Glue:
s3://aws-glue-assets-eu-central-1/scripts/lib/utils.py
Can you please try creating the boto3 client as below, specifying the region explicitly?
client = boto3.client('glue',region_name='eu-central-1')
I had a similar problem when I was running this command from a Glue Python Shell job.
So I created a VPC endpoint (VPC -> Endpoints) for the Glue service (service name: "com.amazonaws.eu-west-1.glue") and assigned it to the same subnet and security group as the Glue Connection used in the Glue Python Shell job.
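For reference, a minimal sketch of creating such an interface endpoint with boto3 (the VPC, subnet and security group IDs are placeholders; adjust the region in the service name to match your own):

import boto3

ec2 = boto3.client('ec2', region_name='eu-central-1')

# Interface endpoint so the Glue API is reachable from the private subnet
# without relying on the NAT gateway.
response = ec2.create_vpc_endpoint(
    VpcEndpointType='Interface',
    VpcId='vpc-0123456789abcdef0',              # placeholder VPC ID
    ServiceName='com.amazonaws.eu-central-1.glue',
    SubnetIds=['subnet-0123456789abcdef0'],     # placeholder: the Glue job's subnet
    SecurityGroupIds=['sg-0123456789abcdef0'],  # placeholder: the Glue connection's SG
    PrivateDnsEnabled=True,
)
print(response['VpcEndpoint']['VpcEndpointId'])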

How do I use server side encryption when uploading a file to s3 via the ruby sdk?

I'm running into an issue uploading to S3 with version 2 of the sdk.
When running:
Aws.config.update({
  region: 'us-east-1',
  credentials: Aws::Credentials.new(credentials['key'], credentials['secret'],
    s3_server_side_encryption: :aes256)
})
s3 = Aws::S3::Resource.new

bucket = 'VandalayIndustriesAccountingData'
s3_file_path = "folder/filename.tar.gz"
s3_object = s3.bucket(bucket).object(s3_file_path)
s3_object.upload_file(artifact_location)
I get the following error:
Aws::S3::Errors::InvalidToken
-----------------------------
The provided token is malformed or otherwise invalid.
When I remove the s3_server_side_encryption setting it changes to an access denied error.
I've been trying to find documentation around doing this with v2 of the API, but everything online seems to rely on the bucket object having a write method which doesn't seem to exist in v2 of the API.
http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingRubySDK.html
I'm likely just not finding the correct document for the v2 API. I'd like to avoid mixing v1 and v2 of the API, but may fall back to that.
upload_file takes arguments similar to write
Aws.config.update({
  region: 'us-east-1',
  credentials: Aws::Credentials.new(credentials['key'], credentials['secret'])
})
s3 = Aws::S3::Resource.new

bucket = 'VandalayIndustriesAccountingData'
s3_file_path = "folder/filename.tar.gz"
s3_object = s3.bucket(bucket).object(s3_file_path)
s3_object.upload_file(artifact_location, server_side_encryption: "AES256")

AWS SDK v2 for s3

Can anyone point me to good documentation for uploading files to S3 using aws-sdk version 2? I checked out the main doc, and in v1 we used to do it like this:
s3 = AWS::S3.new
obj = s3.buckets['my-bucket']
Now in v2, when I try:
s3 = Aws::S3::Client.new
I am ending up with:
Aws::Errors::MissingRegionError: missing region; use :region option or export region name to ENV['AWS_REGION']
Can anyone help me with this?
As per official documentation:
To use the Ruby SDK, you must configure a region and credentials.
Therefore,
s3 = Aws::S3::Client.new(region:'us-west-2')
Alternatively, a default region can be loaded from one of the following locations:
Aws.config[:region]
ENV['AWS_REGION']
Here's a complete S3 demo on aws v2 gem that worked for me:
Aws.config.update(
  region: 'us-east-1',
  credentials: Aws::Credentials.new(
    Figaro.env.s3_access_key_id,
    Figaro.env.s3_secret_access_key
  )
)

s3 = Aws::S3::Client.new
resp = s3.list_buckets
puts resp.buckets.map(&:name)
The official list of AWS region IDs is in the AWS General Reference documentation.
If you're unsure of the region, the best guess would be US Standard, which has the ID us-east-1 for config purposes, as shown above.
If you were using an aws.yml file for your credentials in Rails, you might want to create a file config/initializers/aws.rb with the following content:
filename = File.expand_path(File.join(Rails.root, "config", "aws.yml"))
config = YAML.load_file(filename)
aws_config = config[Rails.env.to_s].symbolize_keys

Aws.config.update({
  region: aws_config[:region],
  credentials: Aws::Credentials.new(aws_config[:access_key_id], aws_config[:secret_access_key])
})
The config/aws.yml file would need to be adapted to include the region.
development: &development
  region: 'your region'
  access_key_id: 'your access key'
  secret_access_key: 'your secret access key'

production:
  <<: *development
