Update S3 KMS key on an object using server-side encryption - Ruby

I am working on a feature where a customer can update their KMS key in our platform so that they are using their KMS key to encrypt data instead of one generated by us. The way it works is when a customer signs up, we generate a KMS key for them and upload the objects using that key. If the customer wants to provide their own key, I want to be able to update this key without having to pull down the data and re-upload with the new key.
def enc_client
  Aws::S3::Encryption::Client.new(
    kms_client: Aws::KMS::Client.new(region: 'us-east-1'),
    kms_key_id: ENV['MY_PRIVATE_KEY']
  )
end

def s3_client
  enc_client.client
end
bucket = "my_bucket_name"
key = "path/12345abcde/preview.html"
copy_source = "/#{key}"
server_side_encryption = "aws:kms"
# This returns the object with the KMS key ID present. If I go into the AWS console and manually add or remove the key, it is reflected in this call.
resp = s3_client.get_object(bucket: bucket, key: key)
#<struct Aws::S3::Types::GetObjectOutput
body=#<StringIO:0x000000000bb45108>,
delete_marker=nil,
accept_ranges="bytes",
expiration=nil,
restore=nil,
last_modified=2019-04-12 15:40:09 +0000,
content_length=19863445,
etag="\"123123123123123123123123123123-1\"",
missing_meta=nil,
version_id=nil,
cache_control=nil,
content_disposition="inline; filename=\"preview.html\"",
content_encoding=nil,
content_language=nil,
content_range=nil,
content_type="text/html",
expires=nil,
expires_string=nil,
website_redirect_location=nil,
server_side_encryption="aws:kms",
metadata={},
sse_customer_algorithm=nil,
sse_customer_key_md5=nil,
ssekms_key_id="arn:aws:kms:us-east-1:123456789123:key/222b222b-bb22-2222-bb22-222bbb22bb2b",
storage_class=nil,
request_charged=nil,
replication_status=nil,
parts_count=nil,
tag_count=nil>
new_ssekms_key_id = "arn:aws:kms:us-east-1:123456789123:key/111a111a-aa11-1111-aa11-111aaa11aa1a"
resp = s3_client.copy_object(bucket: bucket, key: key, copy_source: copy_source, ssekms_key_id: new_ssekms_key_id)
Aws::S3::Errors::InvalidArgument: Server Side Encryption with AWS KMS managed key requires HTTP header x-amz-server-side-encryption : aws:kms
from /usr/local/bundle/gems/aws-sdk-core-3.6.0/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call'
resp = s3_client.copy_object(bucket: bucket, key: key, copy_source: copy_source, ssekms_key_id: new_ssekms_key_id, server_side_encryption: server_side_encryption)
Aws::S3::Errors::AccessDenied: Access Denied
from /usr/local/bundle/gems/aws-sdk-core-3.6.0/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call'
I would like to be able to update the KMS key ID to a new one on the server side.

copy_source = "/#{key}" is incorrect. The value should be "/#{bucket}/#{key}".
The service is interpreting the first element of your key path as the name of a bucket -- probably someone else's bucket.
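For example, a minimal sketch of the corrected call, reusing the bucket, key, and new key ARN from the question (an in-place copy that re-encrypts the object under the new KMS key):

copy_source = "/#{bucket}/#{key}"

resp = s3_client.copy_object(
  bucket: bucket,
  key: key,
  copy_source: copy_source,
  server_side_encryption: "aws:kms",  # required header when specifying an SSE-KMS key
  ssekms_key_id: new_ssekms_key_id    # the customer's key ARN
)

Note that the AccessDenied on the second attempt can also come from KMS permissions: the caller needs to be able to decrypt with the old key and generate data keys with the new one, so check both key policies as well.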

Related

Increasing redis key expiry on fetching that key data from redis cache

We all know that Redis supports TTL timeouts on cached keys. I would like to know whether Redis provides a way to increase a key's TTL whenever its data is fetched. That is, if the data for a key is fetched from Redis, its TTL should automatically be extended. Please point me to any information on this.
You can achieve that with Lua scripting: get the value and the TTL (in milliseconds), increment the TTL, and set the new TTL:
local key = KEYS[1]
local pttl_incr = ARGV[1]
local val = redis.call("get", key)
if not val then return nil end
local pttl = redis.call("pttl", key)
pttl = pttl + pttl_incr
-- pttl is in milliseconds, so use pexpire rather than expire
redis.call("pexpire", key, pttl)
return val
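If you are calling this from Ruby, here is a rough sketch using the redis gem's eval (the key name and the 60-second bump are just example values):

require 'redis'

redis = Redis.new

# Lua script from above: returns the value and extends the TTL atomically
FETCH_AND_EXTEND = <<~LUA
  local key = KEYS[1]
  local pttl_incr = ARGV[1]
  local val = redis.call("get", key)
  if not val then return nil end
  local pttl = redis.call("pttl", key)
  pttl = pttl + pttl_incr
  redis.call("pexpire", key, pttl)
  return val
LUA

# Fetch "session:123" and extend its TTL by 60 seconds (60_000 ms) in one round trip
value = redis.eval(FETCH_AND_EXTEND, keys: ["session:123"], argv: [60_000])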

How to encrypt Lambda variables using CloudFormation

I have an AWS CloudFormation template that includes a Lambda function with sensitive environment variables. I'd like to set up a KMS key and encrypt them with it.
Adding basic CloudFormation to encrypt the values is fine, even if it just uses the default aws/lambda key.
LambdaFunction:
  Type: AWS::Lambda::Function
  DependsOn: LambdaRole
  Properties:
    Environment:
      Variables:
        key: AKIAJ6W7WERITYHYUHJGHN
        secret: PGDzQ8277Fg6+SbuTyqxfrtbskjnaslkchkY1
        dest: !Ref dstBucket
    Code:
      ZipFile: |
        from __future__ import print_function
        import os
        import json
        import boto3
        import time
        import string
        import urllib

        print('Loading function')
        ACCESS_KEY_ID = os.environ['key']
        ACCESS_SECRET_KEY = os.environ['secret']
        #s3_bucket = boto3.resource('s3', aws_access_key_id=ACCESS_KEY_ID, aws_secret_access_key=ACCESS_SECRET_KEY)
        s3 = boto3.client('s3', aws_access_key_id=ACCESS_KEY_ID, aws_secret_access_key=ACCESS_SECRET_KEY)
        #s3 = boto3.client('s3')

        def handler(event, context):
            source_bucket = event['Records'][0]['s3']['bucket']['name']
            key = event['Records'][0]['s3']['object']['key']
            #key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'])
            #target_bucket = "${dstBucket}"
            target_bucket = os.environ['dest']
            copy_source = {'Bucket': source_bucket, 'Key': key}
            try:
                s3.copy_object(Bucket=target_bucket, Key=key, CopySource=copy_source)
            except Exception as e:
                print(e)
                print('Error getting object {} from bucket {}. Make sure they exist '
                      'and your bucket is in the same region as this '
                      'function.'.format(key, source_bucket))
                raise e
You can store the access key and secret key in AWS SSM Parameter Store, encrypted with a KMS key. Go to AWS Systems Manager -> Parameter Store -> Create parameter, choose the SecureString option, and pick the KMS key to encrypt with. You can then read that parameter through a boto3 call, for example response = client.get_parameter(Name='AccessKey', WithDecryption=True), and use the response to obtain the access key. Make sure the Lambda function has enough permissions to use that KMS key to decrypt the parameter you stored; attach all the necessary decrypt permissions to the IAM role the Lambda uses. This way you don't need to pass your access key and secret key as plaintext environment variables. Hope this helps!
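Since the rest of this page is Ruby, a rough equivalent with the aws-sdk-ssm gem would be (the parameter name 'AccessKey' is just the example from the answer above):

require 'aws-sdk-ssm'

ssm = Aws::SSM::Client.new(region: 'us-east-1')

# Read the SecureString parameter and let SSM/KMS decrypt it for us
resp = ssm.get_parameter(name: 'AccessKey', with_decryption: true)
access_key = resp.parameter.value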
You can use the AWS KMS service to create a KMS key manually, or create one with CloudFormation (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html).
The resulting key has an ARN, which can be used for the KmsKeyArn property of the Lambda function resource:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html#cfn-lambda-function-kmskeyarn
Hope this helps!
You can also use the Secrets Manager AWS::SecretsManager::Secret CloudFormation resource to store the secret values.
Use CloudFormation dynamic references to retrieve the secret's value from either SSM Parameter Store or Secrets Manager in the template where you consume it.

Generate expiring activator token or a key hash in rails manually

I'm trying to verify a link that will expire in a week. I have an activator_token stored in the database, which will be used to generate the link in this format: http://www.example.com/activator_token. (These are not activation tokens generated by Devise or Authlogic.)
Is there a way to make this activator token expire (in a week or so) without comparing it against updated_at or some other date? Something like an encoded token that returns nil when decoded after a week. Can any existing Ruby modules do this? I don't want to store the generation date in the database or in an external store like Redis and compare it with Time.now. I want it to be very simple, and wanted to know whether something like this already exists before writing the logic again.
What you want to use is: https://github.com/jwt/ruby-jwt .
Here is some boilerplate code so you can try it out yourself.
require 'jwt'
require 'openssl'

# Generate your keys when deploying your app.
# Doing so using a rake task might be a good idea.
# How to persist and load the keys is up to you!
rsa_private = OpenSSL::PKey::RSA.generate 2048
rsa_public = rsa_private.public_key

# Do this when you are about to send the email.
exp = Time.now.to_i + 4 * 3600 # e.g. 4 hours; use 7 * 24 * 3600 for a week
payload = { exp: exp, discount: '9.99', email: 'user@example.com' }

# When generating an invite email, this is the token you want to incorporate
# in your link as a parameter.
token = JWT.encode payload, rsa_private, 'RS256'
puts token
puts token.length

# This goes into your controller.
begin
  # token = params[:token]
  decoded_token = JWT.decode token, rsa_public, true, { algorithm: 'RS256' }
  puts decoded_token.first
  # continue with your business logic
rescue JWT::ExpiredSignature
  # Handle the expired token:
  # inform the user that their invite link has expired!
  puts "Token expired"
end

How to sign JWT?

I'm trying to secure a Sinatra API.
I'm using ruby-jwt to create the JWT, but I don't know exactly what to sign it with.
I'm trying to use the user's BCrypt password_digest, but every time password_digest is called it changes, making the signature invalid when I go to verify it.
Use any kind of application secret key, not a user's bcrypt password digest.
For example, use the dotenv gem and a .env file, with an entry such as:
JWT_KEY=YOURSIGNINGKEYGOESHERE
I personally generate a key by using a simple random hex string:
SecureRandom.hex(64)
The hex string contains just 0-9 and a-f, so the string is URL safe.
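For example, a minimal sketch of signing and verifying with such an application secret (HS256), assuming the key is loaded from ENV['JWT_KEY'] as above:

require 'jwt'

hmac_secret = ENV.fetch('JWT_KEY')  # e.g. the SecureRandom.hex(64) value from your .env

payload = { user_id: 42, exp: Time.now.to_i + 3600 }  # example payload with a 1-hour expiry
token = JWT.encode(payload, hmac_secret, 'HS256')

decoded_payload, _header = JWT.decode(token, hmac_secret, true, { algorithm: 'HS256' })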
For the RS256 public/private key strategy you can use the Ruby OpenSSL lib:
Generating keys:
key = OpenSSL::PKey::RSA.new 2048
open 'private_key.pem', 'w' do |io| io.write key.to_pem end
open 'public_key.pem', 'w' do |io| io.write key.public_key.to_pem end
Load key from .pem file to sign token:
priv_key = OpenSSL::PKey::RSA.new File.read 'private_key.pem'
token = JWT.encode payload, priv_key, 'RS256'
Load the key from the .pem file to verify a token (create a middleware for this):
begin
  # env.fetch gets the HTTP header
  bearer = env.fetch('HTTP_AUTHORIZATION').slice(7..-1)
  pub_key = OpenSSL::PKey::RSA.new File.read 'public_key.pem'
  payload = JWT.decode bearer, pub_key, true, { algorithm: 'RS256' }
  # access your payload here
  @app.call env
rescue JWT::ExpiredSignature
  [403, { 'Content-Type' => 'text/plain' }, ['The token has expired.']]
rescue JWT::DecodeError
  [401, { 'Content-Type' => 'text/plain' }, ['A token must be passed.']]
rescue JWT::InvalidIssuerError
  [403, { 'Content-Type' => 'text/plain' }, ['The token does not have a valid issuer.']]
rescue JWT::InvalidIatError
  [403, { 'Content-Type' => 'text/plain' }, ['The token does not have a valid "issued at" time.']]
end
To use an RSA key from your .env instead of loading a file, you will need the 'dotenv' gem and to store the key as a single-line variable using '\n' for the newlines (check this question for how to do it). Example:
PUBLIC_KEY="-----BEGIN PUBLIC KEY-----\nmineminemineminemine\nmineminemineminemine\nmineminemine...\n-----END PUBLIC KEY-----\n"
With that PUBLIC_KEY variable in your .env, loading the key changes to:
key = OpenSSL::PKey::RSA.new ENV['PUBLIC_KEY']
According to Wikipedia, a secret key used in cryptography is basically just that: a key to open the lock. The key should be consistent and reliable, but not easy to duplicate, just like a key you would use on your home.
As stated in this answer, secret keys should be randomly generated. However, you still want the key to be retained for use across the application. By using the password digest from bcrypt, you are actually using a hash derived from a base secret (the password). Because bcrypt salts each digest randomly, the value changes every time it is generated, so it is not a reliable secret key to use, as you found.
The previous answer using SecureRandom.hex(64) is a great way to create an initial application key. However, in a production system, you should take this in as a configuration variable and store it for consistent use across multiple runs of your application (for example, a server reboot should not invalidate all of your users' JWTs) and across multiple distributed servers. This article gives an example of pulling in the secret key from an environment variable for Rails.

Amazon S3 - Generating an expiring link using Ruby 1.9.3

I'm trying to create an expiring link to allow access to a private file on S3 using Ruby (1.9.3).
I've been following the instructions here: http://docs.aws.amazon.com/AmazonS3/2006-03-01/dev/RESTAuthentication.html#RESTAuthenticationQueryStringAuth. However, the final values I'm getting are wrong. The example only gives the final result, not the values from each step, so I'm not sure where this is going wrong. The intermediate values should be the same regardless of implementation.
The Ruby code that I'm using (including the same key and expires from the above link):
require "cgi"
require "base64"
require "openssl"
require "digest/sha1"
key_id = 'AKIAIOSFODNN7EXAMPLE' # Example Amazon key id and secret key
key = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
expires = 1141889120
string_to_sign = "GET\n\n\n#{expires.to_s}\n/johnsmith/photos/puppy.jpg"
digest = OpenSSL::HMAC.digest(OpenSSL::Digest::Digest.new("sha1"), key, string_to_sign)
base64 = Base64.encode64(digest).strip
signature = CGI::escape(base64)
puts "Digest: #{digest}"
puts "Base64: #{base64}"
puts "Signature: #{signature}"
Outputs:
Digest: }'\n\x18p\x83#CX\xE4N\xC2b\x9FUs\xC5J1\xB6
Base64: fScKGHCDI0NY5E7CYp9Vc8VKMbY=
Signature: fScKGHCDI0NY5E7CYp9Vc8VKMbY%3D
However the signature on the Amazon page is: NpgCjnDzrM%2BWFzoENXmpNDUsSn8%3D
Any ideas on where this is going wrong?
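As an aside, if you are not required to build the Signature Version 2 URL by hand, the current aws-sdk-s3 gem can generate the expiring link for you. A rough sketch, using the bucket and object key from the example above:

require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new(region: 'us-east-1')
obj = s3.bucket('johnsmith').object('photos/puppy.jpg')

# Presigned GET URL that expires in one hour
url = obj.presigned_url(:get, expires_in: 3600)
puts url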
