AWS SDK v2 for S3 - Ruby

Can anyone provide me good documentation for uploading files to S3 using aws-sdk version 2? I checked out the main doc, and in v1 we used to do it like this:
s3 = AWS::S3.new
obj = s3.buckets['my-bucket']
Now in v2, when I try
s3 = Aws::S3::Client.new
I end up with:
Aws::Errors::MissingRegionError: missing region; use :region option or export region name to ENV['AWS_REGION']
Can anyone help me with this?

As per official documentation:
To use the Ruby SDK, you must configure a region and credentials.
Therefore,
s3 = Aws::S3::Client.new(region: 'us-west-2')
Alternatively, a default region can be loaded from one of the following locations:
Aws.config[:region]
ENV['AWS_REGION']

Here's a complete S3 demo on the aws v2 gem that worked for me:
Aws.config.update(
  region: 'us-east-1',
  credentials: Aws::Credentials.new(
    Figaro.env.s3_access_key_id,
    Figaro.env.s3_secret_access_key
  )
)
s3 = Aws::S3::Client.new
resp = s3.list_buckets
puts resp.buckets.map(&:name)
Official list of AWS region IDs here.
If you're unsure of the region, the best guess would be US Standard, which has the ID us-east-1 for config purposes, as shown above.
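Since the original question was about uploading, here is a minimal upload sketch using the v2 resource interface (the bucket name and file paths below are placeholders):
require 'aws-sdk' # v2 gem

s3 = Aws::S3::Resource.new(region: 'us-east-1')
obj = s3.bucket('my-bucket').object('folder/file.txt')

# upload_file streams the local file and switches to multipart upload for large files
obj.upload_file('/local/path/to/file.txt')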

If you were using an aws.yml file for your credentials in Rails, you might want to create a file config/initializers/aws.rb with the following content:
filename = File.expand_path(File.join(Rails.root, "config", "aws.yml"))
config = YAML.load_file(filename)
aws_config = config[Rails.env.to_s].symbolize_keys
Aws.config.update({
  region: aws_config[:region],
  credentials: Aws::Credentials.new(aws_config[:access_key_id], aws_config[:secret_access_key])
})
The config/aws.yml file would need to be adapted to include the region:
development: &development
  region: 'your region'
  access_key_id: 'your access key'
  secret_access_key: 'your secret access key'
production:
  <<: *development
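With that initializer in place, clients pick up the region and credentials from Aws.config, so they can be constructed with no arguments. For example:
s3 = Aws::S3::Client.new
s3.list_buckets.buckets.map(&:name)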

Related

Azure Key Vault Chef Cookbook

I am a newbie with coding but am learning. I was hoping someone could help me look at this Ruby code that I found online, which gets a secret from an Azure Key Vault. I will paste it below. I just need help clarifying what each block of code is referring to.
I'm not sure what the code below refers to. I know they are attributes, but how do they work?
node.default['azurespn']['client_id'] = azurespn[node.environment]['client_id']
node.default['azurespn']['tenant_id'] = azurespn[node.environment]['tenant_id']
node.default['azurespn']['client_secret'] = azurespn[node.environment]['client_secret']
Recipe:
# retrieve the secret stored in azure key vault using this chef recipe
include_recipe 'microsoft_azure'
azurespn = data_bag_item('azurespn', 'azurespnenv')
node.default['azurespn']['client_id'] = azurespn[node.environment]['client_id']
node.default['azurespn']['tenant_id'] = azurespn[node.environment]['tenant_id']
node.default['azurespn']['client_secret'] = azurespn[node.environment]['client_secret']
spn = {
  'tenant_id' => "#{node['azurespn']['tenant_id']}",
  'client_id' => "#{node['azurespn']['client_id']}",
  'secret' => "#{node['azurespn']['client_secret']}"
}
secret = vault_secret("#{node['windowsnode']['vault_name']}", "#{node['windowsnode']['secret']}", spn)
file 'c:/jenkins/secret' do
  action :create
  content "#{secret}"
  rights :full_control, 'Administrators', :one_level_deep => true
end
Chef::Log.info("secret is '#{secret}' ")
Q. Not sure what the below code is referring to. I know they are attributes but how do they work?
As you understood, this block of code is setting some node attributes. The value of these attributes is being read from a data bag (in the line above), i.e. azurespn = data_bag_item('azurespn', 'azurespnenv')
Now the azurespn variable contains the contents of the data bag item azurespnenv. For a better understanding, try knife data bag show azurespn azurespnenv. I created a dummy data bag structure just to illustrate:
dev:
  client_id: win10
  client_secret: topsecret
  tenant_id: testtenant
qa:
  client_id: ubuntu
  client_secret: changeme
  tenant_id: footenant
id: azurespnenv
In this data bag, we have two environments - dev and qa.
Let's take one line as an example:
node.default['azurespn']['client_id'] = azurespn[node.environment]['client_id']
So the azurespn[node.environment]['client_id'] will pick up the appropriate client_id based on the Chef environment of that node. Which translates to:
node.default['azurespn']['client_id'] = azurespn['dev']['client_id']
#=> 'win10'
node.default['azurespn']['client_id'] = azurespn['qa']['client_id']
#=> 'ubuntu'
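For illustration only, the data bag item behaves like a nested hash keyed by environment name; a standalone Ruby sketch of the same lookup, with values copied from the dummy data bag above:
# Stand-in for what data_bag_item('azurespn', 'azurespnenv') returns
azurespn = {
  'dev' => { 'client_id' => 'win10',  'client_secret' => 'topsecret', 'tenant_id' => 'testtenant' },
  'qa'  => { 'client_id' => 'ubuntu', 'client_secret' => 'changeme',  'tenant_id' => 'footenant' }
}

environment = 'dev' # what node.environment returns on a node in the dev environment
puts azurespn[environment]['client_id'] #=> win10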

How to encrypt Lambda variables using CloudFormation

I have an AWS CloudFormation template that includes a Lambda function with sensitive environment variables. I'd like to set up a KMS key and encrypt them with it.
I want to add basic CloudFormation to encrypt the key; even the default aws/lambda encryption is OK.
LambdaFunction:
  Type: AWS::Lambda::Function
  DependsOn: LambdaRole
  Properties:
    Environment:
      Variables:
        key: AKIAJ6W7WERITYHYUHJGHN
        secret: PGDzQ8277Fg6+SbuTyqxfrtbskjnaslkchkY1
        dest: !Ref dstBucket
    Code:
      ZipFile: |
        from __future__ import print_function
        import os
        import json
        import boto3
        import time
        import string
        import urllib

        print('Loading function')
        ACCESS_KEY_ID = os.environ['key']
        ACCESS_SECRET_KEY = os.environ['secret']
        #s3_bucket = boto3.resource('s3',aws_access_key_id=ACCESS_KEY_ID,aws_secret_access_key=ACCESS_SECRET_KEY)
        s3 = boto3.client('s3',aws_access_key_id=ACCESS_KEY_ID,aws_secret_access_key=ACCESS_SECRET_KEY)
        #s3 = boto3.client('s3')

        def handler(event, context):
            source_bucket = event['Records'][0]['s3']['bucket']['name']
            key = event['Records'][0]['s3']['object']['key']
            #key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'])
            #target_bucket = "${dstBucket}"
            target_bucket = os.environ['dest']
            copy_source = {'Bucket': source_bucket, 'Key': key}
            try:
                s3.copy_object(Bucket=target_bucket, Key=key, CopySource=copy_source)
            except Exception as e:
                print(e)
                print('Error getting object {} from bucket {}. Make sure they exist '
                      'and your bucket is in the same region as this '
                      'function.'.format(key, source_bucket))
                raise e
You can store the access key and secret key in AWS SSM Parameter Store, encrypted with a KMS key. Go to AWS Systems Manager -> Parameter Store -> Create parameter; choose the SecureString option and pick the KMS key to encrypt with. You can then read the parameter from your code with a boto3 call, for example response = client.get_parameter(Name='AccessKey', WithDecryption=True), and take the access key from the response. Make sure the Lambda function has enough permissions to use that KMS key to decrypt the parameter you stored; attach the necessary Decrypt permissions to the IAM role the Lambda uses. This way you don't need to pass your access key and secret key as environment variables. Hope this will help!
You can create a KMS key manually in the AWS KMS service, or with CloudFormation (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html).
The key's ARN can then be used for the KmsKeyArn property of the Lambda function resource:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html#cfn-lambda-function-kmskeyarn
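A rough template sketch of that wiring; the LambdaKmsKey resource name here is a placeholder:
LambdaKmsKey:
  Type: AWS::KMS::Key
  Properties:
    Description: Key for encrypting Lambda environment variables
    # A KeyPolicy can be added here; if omitted, KMS attaches the default key policy

LambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    KmsKeyArn: !GetAtt LambdaKmsKey.Arn
    # ... Code, Role, Handler, Runtime, Environment as in the template above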
Hope this helps !!
You can also use the Secrets Manager AWS::SecretsManager::Secret CloudFormation resource to store the secret values.
Use CloudFormation dynamic references to retrieve the secret values from either SSM Parameter Store or Secrets Manager in the template where you consume them.
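For example, a Secrets Manager dynamic reference might look roughly like this (MyLambdaSecret and the JSON key are placeholders; note that whatever the reference resolves to still ends up as a plain environment variable on the function):
Environment:
  Variables:
    secret: '{{resolve:secretsmanager:MyLambdaSecret:SecretString:secret}}'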

How do I use server-side encryption when uploading a file to S3 via the Ruby SDK?

I'm running into an issue uploading to S3 with version 2 of the sdk.
When running:
Aws.config.update({
  region: 'us-east-1',
  credentials: Aws::Credentials.new(credentials['key'], credentials['secret'],
    s3_server_side_encryption: :aes256)
})
s3 = Aws::S3::Resource.new
bucket = 'VandalayIndustriesAccountingData'
s3_file_path = "folder/filename.tar.gz"
s3_object = s3.bucket(bucket).object(s3_file_path)
s3_object.upload_file(artifact_location)
I get the following error:
Aws::S3::Errors::InvalidToken
-----------------------------
The provided token is malformed or otherwise invalid.
When I remove the s3_server_side_encryption setting it changes to an access denied error.
I've been trying to find documentation around doing this with v2 of the API, but everything online seems to rely on the bucket object having a write method which doesn't seem to exist in v2 of the API.
http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingRubySDK.html
I'm likely just not finding the correct document in the v2 API. I'd like to avoid mixing v1 and v2 of the API, but may fall back to that.
upload_file takes arguments similar to write:
Aws.config.update({
  region: 'us-east-1',
  credentials: Aws::Credentials.new(credentials['key'], credentials['secret'])
})
s3 = Aws::S3::Resource.new
bucket = 'VandalayIndustriesAccountingData'
s3_file_path = "folder/filename.tar.gz"
s3_object = s3.bucket(bucket).object(s3_file_path)
s3_object.upload_file(artifact_location, server_side_encryption: "AES256")
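To confirm the object was stored encrypted, a quick check with head_object (a sketch reusing the bucket and key above):
client = Aws::S3::Client.new
resp = client.head_object(bucket: bucket, key: s3_file_path)
puts resp.server_side_encryption #=> AES256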

Bucketeer - Heroku's add-on s3 bucket configuration on Django

I am currently using S3 to serve static files on Heroku. The S3 bucket was created and is managed by me, and my settings.py file is the following.
import os
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = '<MY BUCKET NAME>'
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
STATIC_URL = 'http://' + AWS_STORAGE_BUCKET_NAME + '.s3.amazonaws.com/'
ADMIN_MEDIA_PREFIX = STATIC_URL + 'admin/'
Which is the same as this answer and it works perfectly fine: Django + Heroku + S3
However, I wanted to switch to Bucketeer, a Heroku add-on that creates and manages an S3 bucket for you. But Bucketeer provides different parameters, the static URL looks different, and I can't make it work. The URL has the following pattern: "bucketeer-heroku-shared.s3.amazonaws.com/UNIQUE_BUCKETEER_BUCKET_PREFIX/public/". So my updated code is the following.
#Bucketeer
AWS_ACCESS_KEY_ID = os.environ.get('BUCKETEER_AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('BUCKETEER_AWS_SECRET_ACCESS_KEY')
BUCKETEER_BUCKET_PREFIX = os.environ.get('BUCKETEER_BUCKET_PREFIX')
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
#Bucketeer Config
STATIC_URL = 'http://bucketeer-heroku-shared.s3.amazonaws.com/' + BUCKETEER_BUCKET_PREFIX + '/public/'
#I also tried
#STATIC_URL = 'http://bucketeer-heroku-shared.s3.amazonaws.com/' + BUCKETEER_BUCKET_PREFIX + '/'
And this is the error I got.
Preparing static assets
Collectstatic configuration error. To debug, run:
$ heroku run python manage.py collectstatic --noinput
Needless to say no static files were present on the app, so when I ran the suggested command I got:
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
Which means I'm not authorized to access said bucket. Could somebody shed some light on what is going on here and how to fix it?

Unable to delete file from Amazon S3 using Ruby script

I am using aws-sdk-ruby to delete a file saved in a bucket in my Amazon S3 account, but I can't figure out why I am unable to delete the desired file from the S3 bucket using the following code.
This is my code:
require 'aws-sdk-v1'
require 'aws-sdk'
ENV['AWS_ACCESS_KEY_ID'] = "XXXXXXX"
ENV["AWS_SECRET_ACCESS_KEY"] = '/ZZZZZZZZ'
ENV['AWS_REGION'] = 'us-east-1'
s3 = Aws::S3::Resource.new
bucket = s3.bucket('some-bucket')
obj = bucket.object('https://s3.amazonaws.com/some-bucket/38ac8226-fa72-4aee-8c3d-a34a1db77b91/some_image.jpg')
obj.delete
The documentation says it should look like this:
s3 = AWS::S3.new
bucket = s3.buckets['some-bucket']
object = bucket.objects['38ac8226-fa72-4aee-8c3d-a34a1db77b91/some_image.jpg']
object.delete
Please note:
the square brackets,
that the object's key doesn't include the domain, and
that instead of creating an instance of Aws::S3::Resource, you create an instance of AWS::S3.
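For the v2 resource interface the asker is already using, the equivalent fix is simply to drop the domain from the key; a minimal sketch:
s3 = Aws::S3::Resource.new(region: 'us-east-1')
obj = s3.bucket('some-bucket').object('38ac8226-fa72-4aee-8c3d-a34a1db77b91/some_image.jpg')
obj.delete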
If you use API version 3 (aws-sdk-s3 (1.81.1)) you should do something like below:
s3 = Aws::S3::Client.new
s3.delete_object(bucket: 'bucket_name', key: 'bucket_folder/file.txt')
It should be (note the key prefix, without the domain):
objs = bucket.objects(prefix: '38ac8226-fa72-4aee-8c3d-a34a1db77b91/')
objs.each { |obj| obj.delete }
With the aws-sdk v2, I had to do this (see Doc):
$s3.bucket("my-bucket").objects(prefix: 'my_folder/').batch_delete!
(delete is deprecated in favor of batch_delete)
Useful post: https://ruby.awsblog.com/post/Tx1H87IVGVUMIB5/Using-Resources
