Triggering Lambda on S3 video upload? - ruby

I am testing adding a watermark to a video once uploaded. I am running into an issue where Lambda wants me to specify which file to change on upload, but I want it to trigger when any file (really, any file that ends in .mov, .mp4, etc.) is uploaded.
To clarify, this all works manually in creating a pipeline and job.
Here's my code:
require 'json'
require 'aws-sdk-elastictranscoder'

def lambda_handler(event:, context:)
  client = Aws::ElasticTranscoder::Client.new(region: 'us-east-1')
  resp = client.create_job({
    pipeline_id: "15521341241243938210-qevnz1", # required
    input: {
      key: File, # this is where my issue is
    },
    output: {
      key: "CBtTw1XLWA6VSGV8nb62gkzY",
      # thumbnail_pattern: "ThumbnailPattern",
      # thumbnail_encryption: {
      #   mode: "EncryptionMode",
      #   key: "Base64EncodedString",
      #   key_md_5: "Base64EncodedString",
      #   initialization_vector: "ZeroTo255String",
      # },
      # rotate: "Rotate",
      preset_id: "1351620000001-000001",
      # segment_duration: "FloatString",
      watermarks: [
        {
          preset_watermark_id: "TopRight",
          input_key: "uploads/2354n.jpg",
          # encryption: {
          #   mode: "EncryptionMode",
          #   key: "zk89kg4qpFgypV2fr9rH61Ng",
          #   key_md_5: "Base64EncodedString",
          #   initialization_vector: "ZeroTo255String",
          # },
        },
      ],
    },
  })
end
How do I specify that any uploaded file, or any file of a specific format, should be used for the input: key:?
Now, my issue is that I am using Active Storage, so the key doesn't end in .jpg or .mov, etc.; it is just a randomly generated string (they have reasons for doing this). I am trying to find a reason to use Active Storage, and this is my final step to making it work like the alternatives before it.

The extension (suffix) field in the S3 trigger configuration is optional. If you don't specify anything in it, the Lambda will be triggered no matter what file is uploaded. You can then check whether it's the type of file you want and proceed.
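Since the trigger then fires for every object, the handler should derive the key from the event rather than hard-code it. A minimal sketch (the suffix allow-list and the early-return convention are my own, not from the SDK):

```ruby
require 'cgi'

VIDEO_SUFFIXES = %w[.mov .mp4 .m4v .avi] # hypothetical allow-list

def lambda_handler(event:, context:)
  # S3 URL-encodes object keys in event payloads (spaces arrive as "+")
  key = CGI.unescape(event['Records'].first['s3']['object']['key'])
  return "skipped #{key}" unless VIDEO_SUFFIXES.include?(File.extname(key).downcase)
  key # use this as input: { key: key } in create_job
end
```

With Active Storage's extensionless keys the suffix check would have to be dropped (or replaced with a Content-Type check), but the event-derived key still tells the job which object to transcode.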

Related

Rails 5.2 Shrine and Tus server: Cannot create a custom folder structure to save files

I am using Rails 5.2, Shrine 2.19 and tus server 2.3 for resumable file uploads.
routes.rb
mount Tus::Server => '/files'
model, file_resource.rb
class FileResource < ApplicationRecord
  # adds a `file` virtual attribute
  include ResumableFileUploader::Attachment.new(:file)
end
controllers/files_controller.rb
def create
  file = FileResource.new(permitted_params)
  ...
  file.save
end
config/initializers/shrine.rb
s3_options = {
  bucket: ENV['S3_MEDIA_BUCKET_NAME'],
  access_key_id: ENV['S3_ACCESS_KEY'],
  secret_access_key: ENV['S3_SECRET_ACCESS_KEY'],
  region: ENV['S3_REGION']
}

Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: 'file_library/shrine_cache', **s3_options),
  store: Shrine::Storage::S3.new(**s3_options), # public: true,
  tus: Shrine::Storage::Tus.new
}

Shrine.plugin :activerecord
Shrine.plugin :cached_attachment_data
config/initializers/tus.rb
Tus::Server.opts[:storage] = Tus::Storage::S3.new(
  prefix: 'file_library',
  bucket: ENV['S3_MEDIA_BUCKET_NAME'],
  access_key_id: ENV['S3_ACCESS_KEY'],
  secret_access_key: ENV['S3_SECRET_ACCESS_KEY'],
  region: ENV['S3_REGION'],
  retry_limit: 3
)
Tus::Server.opts[:redirect_download] = true
My issue is that I cannot override Shrine's generate_location method to store the files in a different folder structure in AWS S3.
All the files are created inside s3://bucket/file_library/ (the prefix provided in tus.rb). I want something like an s3://bucket/file_library/:user_id/:parent_id/ folder structure.
I found that the Tus configuration overrides all of my ResumableFileUploader class's custom options, so they have no effect on uploading.
resumable_file_uploader.rb
class ResumableFileUploader < Shrine
  plugin :validation_helpers # DOES NOT WORK
  plugin :pretty_location    # DOES NOT WORK

  def generate_location(io, context = {}) # DOES NOT WORK
    f = context[:record]
    name = super # the default unique identifier
    puts "<<<<<<<<<<<<<<<<<<<<<<<<<<<<" * 10
    ['users', f.author_id, f.parent_id, name].compact.join('/')
  end

  Attacher.validate do # DOES NOT WORK
    validate_max_size 15 * 1024 * 1024, message: 'is too large (max is 15 MB)'
  end
end
So how can I create a custom folder structure in S3 using tus options (since the Shrine options don't work)?
A tus server upload doesn't touch Shrine at all, so #generate_location won't be called; instead the tus-ruby-server decides the location.
Note that the tus server should only act as temporary storage: you should still use Shrine to copy the file to permanent storage (aka "promote" it), just like with regular direct uploads. On promotion the #generate_location method will be called, so the file will be copied to the desired location; this all happens automatically with the default Shrine setup.
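Once promotion is in place, the path-building part of the question's #generate_location runs as intended. Here is just that logic as standalone Ruby (the Struct and the sample ids are stand-ins for the real record):

```ruby
# Stand-in for the record Shrine passes in context[:record] (values are made up)
FileRecord = Struct.new(:author_id, :parent_id)

def location_for(record, name)
  # nil segments (e.g. a record with no parent) drop out via compact
  ['users', record.author_id, record.parent_id, name].compact.join('/')
end

puts location_for(FileRecord.new(7, 3), 'abc123')   # users/7/3/abc123
puts location_for(FileRecord.new(7, nil), 'abc123') # users/7/abc123
```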

Why does puppet think my custom fact is a string?

I am trying to create a custom fact I can use as the value for a class parameter in a hiera yaml file.
I am using the openstack/puppet-keystone module and I want to use fernet-keys.
According to the comments in the module I can use this parameter.
# [*fernet_keys*]
#   (Optional) Hash of Keystone fernet keys
#   If you enable this parameter, make sure enable_fernet_setup is set to True.
#   Example of valid value:
#     fernet_keys:
#       /etc/keystone/fernet-keys/0:
#         content: c_aJfy6At9y-toNS9SF1NQMTSkSzQ-OBYeYulTqKsWU=
#       /etc/keystone/fernet-keys/1:
#         content: zx0hNG7CStxFz5KXZRsf7sE4lju0dLYvXdGDIKGcd7k=
#   Puppet will create a file per key in $fernet_key_repository.
#   Note: defaults to false so keystone-manage fernet_setup will be executed.
#   Otherwise Puppet will manage keys with File resource.
#   Defaults to false
So I wrote this custom fact ...
[root@puppetmaster modules]# cat keystone_fernet/lib/facter/fernet_keys.rb
Facter.add(:fernet_keys) do
  setcode do
    fernet_keys = {}
    puts('Debug keyrepo is /etc/keystone/fernet-keys')
    Dir.glob('/etc/keystone/fernet-keys/*').each do |fernet_file|
      data = File.read(fernet_file)
      if data
        puts("Debug Key file #{fernet_file} contains #{data}")
        fernet_keys[fernet_file] = { 'content' => data }
      end
    end
    fernet_keys
  end
end
Then in my keystone.yaml file I have this line:
keystone::fernet_keys: '%{::fernet_keys}'
But when I run puppet agent -t on my node I get this error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Function Call, "{\"/etc/keystone/fernet-keys/1\"=>{\"content\"=>\"xxxxxxxxxxxxxxxxxxxx=\"}, \"/etc/keystone/fernet-keys/0\"=>{\"content\"=>\"xxxxxxxxxxxxxxxxxxxx=\"}}" is not a Hash. It looks to be a String at /etc/puppetlabs/code/environments/production/modules/keystone/manifests/init.pp:1144:7 on node mgmt-01
I had assumed that I had formatted the hash correctly because facter -p fernet_keys output this on the agent:
{
  /etc/keystone/fernet-keys/1 => {
    content => "xxxxxxxxxxxxxxxxxxxx="
  },
  /etc/keystone/fernet-keys/0 => {
    content => "xxxxxxxxxxxxxxxxxxxx="
  }
}
The code in the keystone module looks like this (with line numbers)
1142
1143   if $fernet_keys {
1144     validate_hash($fernet_keys)
1145     create_resources('file', $fernet_keys, {
1146       'owner'     => $keystone_user,
1147       'group'     => $keystone_group,
1148       'subscribe' => 'Anchor[keystone::install::end]',
1149     }
1150     )
1151   } else {
Puppet does not necessarily think your fact value is a string -- it might do, if the client is set to stringify facts, but that's actually beside the point. The bottom line is that Hiera interpolation tokens don't work the way you think. Specifically:
Hiera can interpolate values of any of Puppet’s data types, but the
value will be converted to a string.
(Emphasis added.)
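The quoted value in the error message is in fact a Hash rendered through #to_s: the interpolation hands Puppet the string rendering, not the Hash itself. A plain-Ruby illustration of the difference:

```ruby
fernet_keys = {
  '/etc/keystone/fernet-keys/0' => { 'content' => 'xxxxxxxxxxxxxxxxxxxx=' },
  '/etc/keystone/fernet-keys/1' => { 'content' => 'xxxxxxxxxxxxxxxxxxxx=' }
}

# What the Hiera interpolation effectively passes to the class parameter:
stringified = fernet_keys.to_s

puts fernet_keys.is_a?(Hash) # true  -- this is what validate_hash() wants
puts stringified.is_a?(Hash) # false -- this is what validate_hash() rejects
```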

Ruby gem to invalidate CloudFront Distribution?

I have tried all the gems I can find on Google and Stack Overflow, but they all seem to be outdated and unmaintained, so what is the simplest way to invalidate a CloudFront distribution from Ruby?
Here's the little script we ended up using to invalidate the entire cache:
require 'aws-sdk-cloudfront'
require 'date'

cf = Aws::CloudFront::Client.new(
  access_key_id: ENV['FOG_AWS_ACCESS_KEY_ID'],
  secret_access_key: ENV['FOG_AWS_SECRET_ACCESS_KEY'],
  region: ENV['FOG_REGION']
)

resp = cf.create_invalidation({
  distribution_id: ENV['FOG_DISTRIBUTION_ID'], # required
  invalidation_batch: { # required
    paths: { # required
      quantity: 1, # required
      items: ["/*"],
    },
    caller_reference: DateTime.now.to_s, # required
  },
})

# resp is always a Seahorse::Client::Response, so test success rather than class
if resp.successful?
  puts "Invalidation #{resp.invalidation.id} has been created. Please wait about 60 seconds for it to finish."
else
  puts "ERROR"
end
https://rubygems.org/gems/aws-sdk
Specifically the cloudfront module:
https://docs.aws.amazon.com/sdkforruby/api/Aws/CloudFront.html
This should give you full control of your CloudFront resources, provided you have the correct IAM roles etc. set up.
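One caveat with the script above: caller_reference must uniquely identify an invalidation request, and DateTime.now.to_s only has one-second resolution, so two invalidations fired in the same second collide. A sketch of a safer reference (the helper name and format are my own):

```ruby
require 'securerandom'
require 'time'

def invalidation_reference
  # human-readable timestamp plus random hex so rapid repeat calls stay unique
  "invalidate-#{Time.now.utc.iso8601}-#{SecureRandom.hex(4)}"
end

puts invalidation_reference # e.g. invalidate-2024-05-01T12:00:00Z-9f2c1a0b
```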

Making each account have a separate S3 bucket for attachments using Shrine

In our Ruby app, I would like each account to have a separate S3 bucket for its attachments. I would also like bucket names to be derivable from the account's attributes:
Account(id: 1, username: "johnny") # uses the "1-johnny" bucket
Account(id: 2, username: "peter") # uses the "2-peter" bucket
# ...
Is something like this possible to do in Shrine?
Yes. First you use the default_storage plugin to dynamically assign storage names:
Shrine.plugin :default_storage, store: ->(record, name) do
  "store_#{record.id}_#{record.username}"
end
# store_1_johnny
# store_2_peter
Next you use the dynamic_storage plugin to dynamically instantiate S3 storages based on the identifier:
Shrine.plugin :dynamic_storage

Shrine.storage /store_(\d+)_(\w+)/ do |match|
  bucket_name = "#{match[1]}-#{match[2]}" # hyphen, to match the "1-johnny" bucket naming
  Shrine::Storage::S3.new(bucket: bucket_name, **s3_options)
end
# 1-johnny
# 2-peter
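To see how the two plugins connect, here is the name-to-bucket round trip as standalone Ruby (using a hyphen in the bucket name to match the "1-johnny" naming from the question):

```ruby
storage_name = "store_1_johnny" # what the default_storage block produces

# The same regex dynamic_storage matches against, applied by hand
if (match = storage_name.match(/store_(\d+)_(\w+)/))
  bucket_name = "#{match[1]}-#{match[2]}"
  puts bucket_name # 1-johnny
end
```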

windows django AttributeError: 'tuple' object has no attribute 'split'

I am using the following command on WinXP and getting an error, but it works fine on macOS and Linux. Thank you very much for any help.
C:\Documents and Settings\Administrator\Sites\team_track>manage.py syncdb --settings=local_settings
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_user_permissions
Creating table auth_user_groups
Creating table auth_user
Creating table auth_message
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table django_admin_log
Creating table app_player
Creating table app_team_players
Creating table app_team
Creating table app_game
Creating table app_gameparticipant
You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes
Username (Leave blank to use 'administrator'):
E-mail address: kam@kam.com
Password:
Password (again):
Superuser created successfully.
Installing custom SQL ...
Traceback (most recent call last):
  File "C:\Documents and Settings\Administrator\Sites\team_track\manage.py", line 19, in <module>
    execute_manager(team_tracker.settings)
  File "c:\Python27\lib\site-packages\django\core\management\__init__.py", line 438, in execute_manager
    utility.execute()
  File "c:\Python27\lib\site-packages\django\core\management\__init__.py", line 379, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "c:\Python27\lib\site-packages\django\core\management\base.py", line 191, in run_from_argv
    self.execute(*args, **options.__dict__)
  File "c:\Python27\lib\site-packages\django\core\management\base.py", line 220, in execute
    output = self.handle(*args, **options)
  File "c:\Python27\lib\site-packages\django\core\management\base.py", line 351, in handle
    return self.handle_noargs(**options)
  File "c:\Python27\lib\site-packages\django\core\management\commands\syncdb.py", line 121, in handle_noargs
    custom_sql = custom_sql_for_model(model, self.style, connection)
  File "c:\Python27\lib\site-packages\django\core\management\sql.py", line 166, in custom_sql_for_model
    backend_name = connection.settings_dict['ENGINE'].split('.')[-1]
AttributeError: 'tuple' object has no attribute 'split'
Here is what my manage.py looks like:
#!/usr/bin/env python
import sys
import os.path
from django.core.management import execute_manager

try:
    import team_tracker.settings # Assumed to be in the same directory.
except ImportError:
    import sys
    sys.stderr.write("Error: Can't find the file 'settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file settings.py does indeed exist, it's causing an ImportError somehow.)\n" % __file__)
    sys.exit(1)

if __name__ == "__main__":
    execute_manager(team_tracker.settings)
And my local_settings.py resides in the root dir:
from team_tracker.settings import *
DEBUG = True
#DATABASE_ENGINE = 'sqlite3'
#DATABASE_NAME = 'caktus_website.db'
DATABASE_ENGINE = 'sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
DATABASE_NAME = 'team_track.db' # Or path to database file if using sqlite3.
And finally my team_tracker/settings.py is here:
# Django settings for team_tracker project.
import os.path
DEBUG = True
TEMPLATE_DEBUG = DEBUG
ADMINS = (
    # ('Your Name', 'your_email@example.com'),
)

SITE_ROOT = os.path.realpath(os.path.dirname(__file__))

MANAGERS = ADMINS

#DATABASES = {
#    'default': {
#        'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
#        'NAME': 'team_track.db', # Or path to database file if using sqlite3.
#        'USER': '', # Not used with sqlite3.
#        'PASSWORD': '', # Not used with sqlite3.
#        'HOST': '', # Set to empty string for localhost. Not used with sqlite3.
#        'PORT': '', # Set to empty string for default. Not used with sqlite3.
#    }
#}
# Local time zone for this installation. Choices can be found here:
# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
# although not all choices may be available on all operating systems.
# On Unix systems, a value of None will cause Django to use the same
# timezone as the operating system.
# If running in a Windows environment this must be set to the same as your
# system time zone.
TIME_ZONE = 'America/Chicago'
# Language code for this installation. All choices can be found here:
# http://www.i18nguy.com/unicode/language-identifiers.html
LANGUAGE_CODE = 'en-us'
SITE_ID = 1
# If you set this to False, Django will make some optimizations so as not
# to load the internationalization machinery.
USE_I18N = True
# If you set this to False, Django will not format dates, numbers and
# calendars according to the current locale
USE_L10N = True
# Absolute filesystem path to the directory that will hold user-uploaded files.
# Example: "/home/media/media.lawrence.com/media/"
MEDIA_ROOT = '/Users/kamilski81/Sites/team_tracker/media/'#os.path.join(SITE_ROOT, 'appmedia')
# URL that handles the media served from MEDIA_ROOT. Make sure to use a
# trailing slash.
# Examples: "http://media.lawrence.com/media/", "http://example.com/media/"
MEDIA_URL = '/media/'
# Absolute path to the directory static files should be collected to.
# Don't put anything in this directory yourself; store your static files
# in apps' "static/" subdirectories and in STATICFILES_DIRS.
# Example: "/home/media/media.lawrence.com/static/"
STATIC_ROOT = '/Users/kamilski81/Sites/team_tracker/static/'
# URL prefix for static files.
# Example: "http://media.lawrence.com/static/"
STATIC_URL = '/static/'
# URL prefix for admin static files -- CSS, JavaScript and images.
# Make sure to use a trailing slash.
# Examples: "http://foo.com/static/admin/", "/static/admin/".
ADMIN_MEDIA_PREFIX = '/static/admin/'
# Additional locations of static files
STATICFILES_DIRS = (
    # Put strings here, like "/home/html/static" or "C:/www/django/static".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
)

# List of finder classes that know how to find static files in
# various locations.
STATICFILES_FINDERS = (
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    # 'django.contrib.staticfiles.finders.DefaultStorageFinder',
)
# Make this unique, and don't share it with anybody.
SECRET_KEY = 'v8#s)7gw-^#zp&6**g7rz$uj!#3v4a36so_uw!_#0pa$h4)b-s'
# List of callables that know how to import templates from various sources.
TEMPLATE_LOADERS = (
    'django.template.loaders.filesystem.Loader',
    'django.template.loaders.app_directories.Loader',
    # 'django.template.loaders.eggs.Loader',
)

MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    # #kamtodo: find out how to truly use this and the best way if we have many forms
    # 'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
)

ROOT_URLCONF = 'urls'

TEMPLATE_DIRS = (
    # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
    os.path.join(SITE_ROOT, 'templates').replace('\\','/'),
)

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    # Uncomment the next line to enable the admin:
    'django.contrib.admin',
    # Uncomment the next line to enable admin documentation:
    'django.contrib.admindocs',
    'app',
)
# A sample logging configuration. The only tangible logging
# performed by this configuration is to send an email to
# the site admins on every HTTP 500 error.
# See http://docs.djangoproject.com/en/dev/topics/logging for
# more details on how to customize your logging configuration.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler'
        }
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
    }
}
In local_settings.py:
DATABASE_ENGINE = 'sqlite3',
The comma here makes DATABASE_ENGINE a tuple with one element instead of a string. Remove it and it should work.
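The underlying Python rule is that the trailing comma, not the parentheses, is what creates a tuple, which is exactly what broke the DATABASE_ENGINE setting:

```python
engine = 'sqlite3',   # trailing comma: a one-element tuple, not a string
print(type(engine))   # <class 'tuple'>

engine = 'sqlite3'    # comma removed: a plain string again
print(type(engine))   # <class 'str'>
print(engine.split('.')[-1])  # sqlite3 -- the call Django's sql.py makes
```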
