Ruby gem to invalidate CloudFront Distribution? - ruby

I have tried all the gems I can find on Google and Stack Overflow, and they all seem to be outdated and unmaintained. What is the simplest way to invalidate a CloudFront distribution from Ruby?

Here's the little script we ended up using to invalidate the entire cache:
require 'aws-sdk-cloudfront'
require 'date' # DateTime needs this outside Rails

cf = Aws::CloudFront::Client.new(
  access_key_id: ENV['FOG_AWS_ACCESS_KEY_ID'],
  secret_access_key: ENV['FOG_AWS_SECRET_ACCESS_KEY'],
  region: ENV['FOG_REGION']
)
resp = cf.create_invalidation({
  distribution_id: ENV['FOG_DISTRIBUTION_ID'], # required
  invalidation_batch: { # required
    paths: { # required
      quantity: 1, # required
      items: ["/*"],
    },
    caller_reference: DateTime.now.to_s, # required; must be unique per request
  },
})
if resp.successful? # checking is_a?(Seahorse::Client::Response) is always true; errors raise instead
  puts "Invalidation #{resp.invalidation.id} has been created. Please wait about 60 seconds for it to finish."
else
  puts "ERROR"
end

https://rubygems.org/gems/aws-sdk
Specifically, the CloudFront module:
https://docs.aws.amazon.com/sdkforruby/api/Aws/CloudFront.html
This gives you full programmatic control of your CloudFront resources, provided you have the correct IAM roles etc. set up.


Rails 5.2 Shrine and Tus server: Cannot create a custom folder structure to save files

I am using Rails 5.2, Shrine 2.19, and tus server 2.3 for resumable file uploads.
routes.rb
mount Tus::Server => '/files'
model, file_resource.rb
class FileResource < ApplicationRecord
  # adds a `file` virtual attribute
  include ResumableFileUploader::Attachment.new(:file)
end
controllers/files_controller.rb
def create
  file = FileResource.new(permitted_params)
  ...
  file.save
end
config/initializers/shrine.rb
s3_options = {
  bucket: ENV['S3_MEDIA_BUCKET_NAME'],
  access_key_id: ENV['S3_ACCESS_KEY'],
  secret_access_key: ENV['S3_SECRET_ACCESS_KEY'],
  region: ENV['S3_REGION']
}
Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: 'file_library/shrine_cache', **s3_options),
  store: Shrine::Storage::S3.new(**s3_options), # public: true,
  tus: Shrine::Storage::Tus.new
}
Shrine.plugin :activerecord
Shrine.plugin :cached_attachment_data
config/initializers/tus.rb
Tus::Server.opts[:storage] = Tus::Storage::S3.new(
  prefix: 'file_library',
  bucket: ENV['S3_MEDIA_BUCKET_NAME'],
  access_key_id: ENV['S3_ACCESS_KEY'],
  secret_access_key: ENV['S3_SECRET_ACCESS_KEY'],
  region: ENV['S3_REGION'],
  retry_limit: 3
)
Tus::Server.opts[:redirect_download] = true
My issue is that I cannot override the generate_location method of my Shrine uploader class to store the files in a different folder structure in AWS S3.
All the files are created inside s3://bucket/file_library/ (the prefix provided in tus.rb). I want something like an s3://bucket/file_library/:user_id/:parent_id/ folder structure.
I found that the Tus configuration overrides all of my ResumableFileUploader class's custom options, so they have no effect on uploading.
resumable_file_uploader.rb
class ResumableFileUploader < Shrine
  plugin :validation_helpers # DOES NOT WORK
  plugin :pretty_location # DOES NOT WORK

  def generate_location(io, context = {}) # DOES NOT WORK
    f = context[:record]
    name = super # the default unique identifier
    puts "<<<<<<<<<<<<<<<<<<<<<<<<<<<<" * 10
    ['users', f.author_id, f.parent_id, name].compact.join('/')
  end

  Attacher.validate do # DOES NOT WORK
    validate_max_size 15 * 1024 * 1024, message: 'is too large (max is 15 MB)'
  end
end
So how can I create a custom folder structure in S3 using the tus options (since the Shrine options have no effect)?
A tus server upload doesn't touch Shrine at all, so #generate_location won't be called; instead, the tus-ruby-server decides the location.
Note that the tus server should only act as temporary storage: you should still use Shrine to copy the file to permanent storage (aka "promote"), just like with regular direct uploads. On promotion, #generate_location will be called, so the file will be copied to the desired location; this all happens automatically with the default Shrine setup.
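To make the promotion-time path logic concrete: the body of `generate_location` above is just string joining, which can be sanity-checked in plain Ruby. `build_location` below is a hypothetical stand-in for illustration, not a Shrine API:

```ruby
# Hypothetical helper mirroring the generate_location body above:
# join the record's attributes into an S3 key, skipping nil parts.
def build_location(author_id, parent_id, name)
  ['users', author_id, parent_id, name].compact.join('/')
end

puts build_location(7, 42, 'abc123.mp4')  # => "users/7/42/abc123.mp4"
puts build_location(7, nil, 'abc123.mp4') # => "users/7/abc123.mp4"
```

Because of the `compact`, records without a parent still get a valid key rather than a path with an empty segment.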

Triggering Lambda on s3 video upload?

I am testing adding a watermark to a video once it is uploaded. I am running into an issue where Lambda wants me to specify which file to change on upload, but I want it to trigger when any file (really, any file that ends in .mov, .mp4, etc.) is uploaded.
To clarify, this all works when I create the pipeline and job manually.
Here's my code:
require 'json'
require 'aws-sdk-elastictranscoder'

def lambda_handler(event:, context:)
  client = Aws::ElasticTranscoder::Client.new(region: 'us-east-1')
  resp = client.create_job({
    pipeline_id: "15521341241243938210-qevnz1", # required
    input: {
      key: File, # this is where my issue is
    },
    output: {
      key: "CBtTw1XLWA6VSGV8nb62gkzY",
      # thumbnail_pattern: "ThumbnailPattern",
      # thumbnail_encryption: {
      #   mode: "EncryptionMode",
      #   key: "Base64EncodedString",
      #   key_md_5: "Base64EncodedString",
      #   initialization_vector: "ZeroTo255String",
      # },
      # rotate: "Rotate",
      preset_id: "1351620000001-000001",
      # segment_duration: "FloatString",
      watermarks: [
        {
          preset_watermark_id: "TopRight",
          input_key: "uploads/2354n.jpg",
          # encryption: {
          #   mode: "EncryptionMode",
          #   key: "zk89kg4qpFgypV2fr9rH61Ng",
          #   key_md_5: "Base64EncodedString",
          #   initialization_vector: "ZeroTo255String",
          # },
        },
      ],
    },
  })
end
How do I specify that any uploaded file, or any file of a specific format, should be used for the input: key:?
Now, my issue is that I am using Active Storage, so the key doesn't end in .jpg or .mov, etc.; it's just a randomly generated string (they have reasons for doing this). I am trying to find a reason to use Active Storage, and this is my final step in making it work like the alternatives before it.
The suffix (extension) filter on the S3 event trigger is optional. If you don't specify one, the Lambda will be triggered no matter what file is uploaded; the uploaded object's key is included in the event payload. You can then check whether it's the type of file you want and proceed.
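A sketch of that check-in-code approach inside the handler: pull the object keys out of the S3 notification event and filter them yourself. The event hash follows the documented S3 notification shape; the helper names and the extension allow-list are just examples.

```ruby
require 'json'

# Extract all uploaded object keys from a Lambda S3 notification event.
def uploaded_keys(event)
  event['Records'].map { |r| r.dig('s3', 'object', 'key') }
end

# Example allow-list of video extensions.
VIDEO_EXTENSIONS = %w[.mov .mp4].freeze

def video?(key)
  VIDEO_EXTENSIONS.include?(File.extname(key).downcase)
end

# Sample event in the documented S3 notification shape:
event = { 'Records' => [
  { 's3' => { 'bucket' => { 'name' => 'my-bucket' },
              'object' => { 'key' => 'uploads/clip.mp4' } } }
] }

uploaded_keys(event).each do |key|
  puts(video?(key) ? "process #{key}" : "skip #{key}")
end
```

For Active Storage's extension-less keys an extension check won't help; inspecting the object's Content-Type (e.g. via a head_object call) is one alternative.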

Making each account have a separate S3 bucket for attachments using Shrine

In our Ruby app I would like each account to have a separate S3 bucket for its attachments. I would also like the bucket names to be derived from the account's attributes:
Account(id: 1, username: "johnny") # uses the "1-johnny" bucket
Account(id: 2, username: "peter") # uses the "2-peter" bucket
# ...
Is something like this possible to do in Shrine?
Yes. First you use the default_storage plugin to dynamically assign storage names:
Shrine.plugin :default_storage, store: ->(record, name) do
  "store_#{record.id}_#{record.username}"
end
# store_1_johnny
# store_2_peter
Next you use the dynamic_storage plugin to dynamically instantiate S3 storages based on the identifier:
Shrine.plugin :dynamic_storage
Shrine.storage /store_(\d+)_(\w+)/ do |match|
  bucket_name = "#{match[1]}-#{match[2]}" # dash, to produce "1-johnny" etc.
  Shrine::Storage::S3.new(bucket: bucket_name, **s3_options)
end
# 1-johnny
# 2-peter
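The name-to-bucket mapping inside that block can be sanity-checked in plain Ruby (`bucket_for` is a hypothetical helper written only to make the regex explicit):

```ruby
# Extract the id and username back out of the storage name and build
# the bucket name the question asked for ("1-johnny", "2-peter").
def bucket_for(storage_name)
  match = storage_name.match(/store_(\d+)_(\w+)/)
  "#{match[1]}-#{match[2]}"
end

puts bucket_for('store_1_johnny') # => "1-johnny"
puts bucket_for('store_2_peter')  # => "2-peter"
```

Keep in mind that S3 bucket names are globally unique and lowercase, so usernames may need normalizing before they become bucket names.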

How can I roll back to a previous version of a deleted file on S3 via the Ruby aws-sdk?

Is there an example of how to roll back to a previous version of a deleted file on S3 via the Ruby aws-sdk?
It looks like the aws-sdk gem does not show deleted files in the list of objects:
s3 = Aws::S3::Resource.new
bucket = s3.bucket('aws-sdk')
bucket.objects.each do |obj|
  if obj.key.start_with?("images/file_name.jpg")
    puts obj.to_yaml
  end
end
You can list previous versions like this:
aws_versions = s3.client.list_object_versions(
  bucket: 'bucket_name',
  prefix: 'images/12345/50x50.jpg'
).versions
https://docs.aws.amazon.com/sdk-for-ruby/v2/api/Aws/S3/Client.html#list_object_versions-instance_method
Download the version you need like this:
cache = s3.client.get_object(
  bucket: 'bucket_name',
  key: 'images/12345/50x50.jpg',
  version_id: 'your_version_id',
  response_target: Rails.root.join('tmp/images/12345/50x50.jpg')
)
https://docs.aws.amazon.com/sdk-for-ruby/v2/api/Aws/S3/Client.html#get_object-instance_method
And finally, save it to your model:
model.attachment = Rails.root.join('tmp/images/12345/50x50.jpg').open
model.save
PS: It's good to have paper_trail installed, so you can find the previous version's prefix (the images/12345/50x50.jpg part) in the model's change history.

Unable to upload images to AWS from a Rails app on Heroku using CarrierWave and Fog

I am getting the following error in my Heroku logs:
Excon::Errors::Forbidden (Expected(200) <=> Actual(403 Forbidden))
2013-10-02T16:25:51.131316+00:00 app[web.1]: response => #<Excon::Response body="<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><RequestId>5CA6A058BCE5D28A</RequestId><HostId>Q6grl4LPNO+F9YVtJZA7YIASYUFw4IpggAVlMJEzsdAhdwSWOTIB8K+VolEwyGYL</HostId></Error>", headers={"x-amz-request-id"=>"5CA6A058BCE5D28A", "x-amz-id-2"=>"Q6grl4LPNO+F9YVtJZA7YIASYUFw4IpggAVlMJEzsdAhdwSWOTIB8K+VolEwyGYL", "Content-Type"=>"application/xml", "Transfer-Encoding"=>"chunked", "Date"=>"Wed, 02 Oct 2013 16:25:50 GMT", "Connection"=>"close", "Server"=>"AmazonS3"}, status=403, remote_ip="176.32.100.200">
I have checked the AWS key at least a dozen times.
I have set up the Heroku variables using the following:
heroku config:add S3_KEY=XXXXXXXXXXXXXXX S3_SECRET=XXXXXXXXXXXXXXXXXXXXXX
But I still get the error above.
Looks like your AWS access key is invalid. A couple things to double check:
Do your access key, secret key and bucket all match what's in the AWS dashboard?
Are you setting those variables correctly in your CarrierWave initializer? You should be able to check from heroku run rails console: CarrierWave.configure { |config| puts config.fog_credentials; puts config.fog_directory }.
If you double and triple check those and there really isn't anything wrong, then you may have a weird problem with your S3 account (can you access your S3 account with another S3 utility using the same credentials?), or there's something loony happening in your code.
Good luck!
I was able to figure it out using Taavo's suggestion. I used the figaro gem and put my AWS credentials into config/application.yml.
In addition, I also changed my carrierwave.rb file from:
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider: "AWS",
    aws_access_key_id: "S3_KEY",
    aws_secret_access_key: "S3_SECRET",
  }
  config.cache_dir = "#{Rails.root}/tmp/uploads"
  config.fog_directory = "S3_BUCKET_NAME"
  config.fog_public = false
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' }
end
to
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider: "AWS",
    aws_access_key_id: ENV["S3_KEY"],
    aws_secret_access_key: ENV["S3_SECRET"],
    # region: 'Northern California'
  }
  config.cache_dir = "#{Rails.root}/tmp/uploads"
  config.fog_directory = ENV["S3_BUCKET_NAME"]
  config.fog_public = false
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' }
end
Then added following to Heroku:
$ heroku config:set S3_BUCKET_NAME=your_bucket_name
$ heroku config:set S3_KEY=your_access_key_id
$ heroku config:set S3_SECRET=your_secret_access_key
That did the trick. Thanks, Taavo, for the suggestions.
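A small safeguard against this class of bug, where the literal string "S3_KEY" is sent to AWS as a credential: fail fast at boot when an expected variable is blank. A minimal sketch; `check_env!` is a hypothetical helper, and the variable names are the ones used above:

```ruby
# Raise at boot if any required credential variable is unset or blank,
# instead of letting a placeholder string reach AWS as a credential.
REQUIRED_VARS = %w[S3_KEY S3_SECRET S3_BUCKET_NAME].freeze

def check_env!(env = ENV)
  missing = REQUIRED_VARS.select { |v| env[v].to_s.strip.empty? }
  raise "Missing ENV vars: #{missing.join(', ')}" unless missing.empty?
  true
end

# Example with a fake environment hash instead of the real ENV:
puts check_env!('S3_KEY' => 'k', 'S3_SECRET' => 's', 'S3_BUCKET_NAME' => 'b')
```

Dropped into an initializer, this turns a cryptic 403 from S3 at upload time into an obvious error at deploy time.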
