Heroku, Shrine and Amazon S3: blog post images disappear after some time

I have a blog page I developed using Rails 5.1. Everything works just fine, except that after I create a post in production and attach an image, the image stops showing after a while (say 30 minutes). I scouted around the internet looking for solutions and saw this, which suggests the problem has to do with Heroku wiping the filesystem on every app restart. One suggested solution is to host your images on a service like Amazon S3.
I have, however, set up S3, and the images are being sent to the bucket shown below:
[screenshot of the S3 bucket contents]
But the blog post images still disappear. I need help, as I cannot figure out what I am missing. Here is the relevant code:
shrine.rb:
require "shrine"
require "shrine/storage/s3"
s3_options = {
access_key_id: ENV['S3_KEY'],
secret_access_key: ENV['S3_SECRET'],
region: ENV['S3_REGION'],
bucket: ENV['S3_BUCKET'],
}
if Rails.env.development?
require "shrine/storage/file_system"
Shrine.storages = {
cache: Shrine::Storage::FileSystem.new("public", prefix: "uploads/cache"), # temporary
store: Shrine::Storage::FileSystem.new("public", prefix: "uploads/store") # permanent
}
elsif Rails.env.test?
require 'shrine/storage/memory'
Shrine.storages = {
cache: Shrine::Storage::Memory.new,
store: Shrine::Storage::Memory.new
}
else
require "shrine/storage/s3"
Shrine.storages = {
cache: Shrine::Storage::S3.new(prefix: "cache", **s3_options),
store: Shrine::Storage::S3.new(prefix: "store", **s3_options)
}
end
Shrine.plugin :activerecord # or :activerecord
Shrine.plugin :cached_attachment_data # for retaining the cached file across form redisplays
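For reference, a typical Shrine 2.x attachment setup for the post image looks roughly like the sketch below (ImageUploader and Post are placeholder names for illustration, not necessarily the exact classes in this app):

# app/uploaders/image_uploader.rb  (hypothetical uploader name)
class ImageUploader < Shrine
  # image-specific plugins/processing would go here
end

# app/models/post.rb  (hypothetical model name)
class Post < ApplicationRecord
  # Shrine 2.x attachment module: adds an `image` virtual attribute
  # persisted in an `image_data` text column.
  include ImageUploader::Attachment.new(:image)
end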
Gemfile:
# ...
# A rich text editor for everyday writing
gem 'trix', '~> 0.11.1'
# a toolkit for file attachments in Ruby applications
gem 'shrine', '~> 2.11'
# Tag a single model on several contexts, such as skills, interests, and awards
gem 'acts-as-taggable-on', '~> 6.0'
# frameworks for multiple-provider authentication.
gem 'omniauth-facebook'
gem 'omniauth-github'
# Simple Rails app key configuration
gem "figaro"
# ...
I use the Figaro gem to keep the ENV values out of the repository. They are set correctly, since the S3 bucket responds, plus I already have OmniAuth up and running on the blog.
Here is the error Chrome's console shows for the image:
[screenshot of the console error]
I really need help to get this blog up and running. Thank you for your time.

Shrine generates expiring S3 URLs by default, so it's possible that the generated URLs are somehow getting cached, and then the images become unavailable once the URL has expired.
As a workaround, you can make S3 uploads public and generate public URLs instead. To do that, tell the S3 storage to upload objects with a public-read ACL and to generate public URLs by default, by updating the initializer. Note that this only affects new uploads; existing uploads will remain private, so you'd have to make them public in another way (one approach is sketched after the snippet below):
# ...
require "shrine/storage/s3"

Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: "cache", upload_options: { acl: "public-read" }, **s3_options),
  store: Shrine::Storage::S3.new(prefix: "store", upload_options: { acl: "public-read" }, **s3_options)
}
# ...
Shrine.plugin :default_url_options, cache: { public: true }, store: { public: true }
# ...
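Since the public ACL only applies to new uploads, here is a minimal sketch (not part of the original answer) of one way to flip the objects already under the permanent "store" prefix to public-read, assuming the same ENV variables as the initializer and the aws-sdk-s3 gem:

require "aws-sdk-s3"

client = Aws::S3::Client.new(
  access_key_id:     ENV['S3_KEY'],
  secret_access_key: ENV['S3_SECRET'],
  region:            ENV['S3_REGION']
)

# List everything under the permanent "store" prefix and make each object public.
# list_objects_v2 returns at most 1000 keys per call; paginate for larger buckets.
client.list_objects_v2(bucket: ENV['S3_BUCKET'], prefix: "store/").contents.each do |object|
  client.put_object_acl(bucket: ENV['S3_BUCKET'], key: object.key, acl: "public-read")
end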

Related

Unable to delete an image from the media-library in Strapi V4

I have a Strapi V4 dashboard deployed on Heroku. Everything works fine, except that some images cannot be deleted; the request fails with a status code 500 error.
plugins.js file below:
[screenshot of the error in Strapi]
upload: {
  config: {
    provider: "cloudinary",
    providerOptions: {
      cloud_name: env("CLOUDINARY_NAME"),
      api_key: env("CLOUDINARY_KEY"),
      api_secret: env("CLOUDINARY_SECRET"),
    },
    actionOptions: {
      upload: {},
      delete: {},
    },
  },
},
The fix was to comment out the Cloudinary config in plugins.js.
That is not really solving the issue: after deactivating the Cloudinary plugin, the images in the Strapi Media Library can be deleted, but the same issue comes back as soon as the Cloudinary plugin is activated again.

404 on i18n json files

I'm trying to enable i18n JSON files with SSR from the assets folder, following these docs:
https://sap.github.io/spartacus-docs/i18n/
But when it's enabled, all the files in the 'pt' folder return 404 errors.
Here's my provideConfig in my spartacus-configuration.module.ts file:
[screenshot of the configuration]
and my assets folder:
[screenshot of the assets folder]
Thanks for your time, have a nice day!
It looks like it's trying to load a bunch of JSON files that aren't in your directories.
What I did on mine was to provide the original Spartacus translations first and then add my own below that:
provideConfig(<I18nConfig>{
  i18n: {
    resources: translations,
    chunks: translationChunksConfig,
    fallbackLang: 'en'
  },
}),
provideConfig(<I18nConfig>{
  i18n: {
    backend: {
      loadPath: 'assets/i18n-assets/{{lng}}/{{ns}}.json',
      chunks: {
        footer: ['footer']
      }
    }
  },
})
Otherwise, you can try adding the files it's complaining about (orderApproval.json, savedCart.json, etc.) to your 'pt' folder (not sure what language that is, but perhaps Spartacus doesn't come with translations for it).

Fog/aws gem for IBM Cloud Object Storage is not working

Since SoftLayer / IBM Cloud has moved from the Swift-based Object Storage to the S3-based Cloud Object Storage, I am using fog/aws instead of fog/softlayer.
Here is the code:
require 'fog/aws'

fog_properties = {
  provider: 'AWS',
  aws_access_key_id: username,
  aws_secret_access_key: api_key
}

client = Fog::Storage.new(fog_properties)
client.directories
But it failed even with valid key and id.
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>####</AWSAccessKeyId><RequestId>####</RequestId><HostId>##</HostId></Error>
The endpoint IBM COS uses is "https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints".
When I tried to use fog alone (require 'fog'), it threw the error below:
Unable to activate google-api-client-0.23.9, because mime-types-2.99.3 conflicts with mime-types (~> 3.0) (Gem::ConflictError)
Please suggest how to resolve these issues.
https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints
This is not an endpoint but a list of endpoints in JSON.
Choose the endpoint for your bucket location.
For example, if your bucket is in us-south, the public endpoint is
https://s3.us-south.cloud-object-storage.appdomain.cloud
The following code worked for IBM Cloud Object Storage:
require 'aws-sdk-s3'

properties = {
  region: region,
  endpoint: URI('https://s3.us-south.cloud-object-storage.appdomain.cloud'),
  credentials: Aws::Credentials.new(access_key_id, secret_access_key)
}

Aws.config.update(properties)
client = Aws::S3::Client.new
Properties for the config can also be set as ENV variables.
Below are a few basic operations performed on COS (a complete end-to-end sketch follows the list).
List all the bucket names:
client.list_buckets.buckets.map(&:name)
Create a bucket:
client.create_bucket(bucket: )
Upload a file:
client.put_object(bucket: , key: , body: )
Download a file:
client.get_object(bucket: , key: )
Delete a file:
client.delete_object(bucket: , key: )
Delete a bucket:
client.delete_bucket(bucket: )
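Putting those calls together, a minimal end-to-end sketch (not from the original answer; the bucket name, region, endpoint, and ENV variable names are placeholders to substitute with your own):

require 'aws-sdk-s3'

# Placeholder credentials/region/endpoint -- substitute your own COS values.
client = Aws::S3::Client.new(
  region: 'us-south',
  endpoint: 'https://s3.us-south.cloud-object-storage.appdomain.cloud',
  credentials: Aws::Credentials.new(ENV['COS_ACCESS_KEY_ID'], ENV['COS_SECRET_ACCESS_KEY'])
)

bucket = 'my-example-bucket' # hypothetical bucket name

client.create_bucket(bucket: bucket)
client.put_object(bucket: bucket, key: 'hello.txt', body: 'Hello, COS!')

puts client.get_object(bucket: bucket, key: 'hello.txt').body.read # => "Hello, COS!"
puts client.list_buckets.buckets.map(&:name)

client.delete_object(bucket: bucket, key: 'hello.txt')
client.delete_bucket(bucket: bucket)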

How do you create an AWS Cloudfront Distribution that points to an S3 (static hosted) Website Endpoint using the SDK?

I have an S3 bucket configured as a website endpoint to host a static web page.
I want to put Cloudfront in front of it.
I copied the "Endpoint" from the S3 Bucket's "Properties" :: "Static Website Hosting."
It is of the form: "example.com.s3-website-us-east-1.amazonaws.com"
When I try to create_distribution using the Aws SDK CloudFront Client I get this Error:
Aws::CloudFront::Errors::InvalidArgument
The parameter Origin DomainName does not refer to a valid S3 bucket.
Example Ruby Code is as follows:
cloudfront = Aws::CloudFront::Client.new()

cloudfront.create_distribution({
  distribution_config: {
    ...
    origins: {
      quantity: 1,
      items: [{
        id: "Custom-example.com.s3-website-us-east-1.amazonaws.com",
        domain_name: "example.com.s3-website-us-east-1.amazonaws.com",
        s3_origin_config: {
          origin_access_identity: ""
        },
        origin_path: ""
      }]
    },
    ...
  }
})
I am able to create a distribution with the same "Origin Domain Name" through the GUI as well as through the CLI
aws cloudfront create-distribution \
--origin-domain-name example.com.s3-website-us-east-1.amazonaws.com \
--default-root-object index.html
Website endpoints statically hosted on an S3 bucket need to be configured with an origin type of custom origin (custom_origin_config), NOT an S3 origin (s3_origin_config). You can see that this is the case under the "Origins" tab for the distribution in the GUI.
Sample Ruby Code:
distribution_config: {
  ...
  origins: {
    quantity: 1,
    items: [{
      id: "Custom-example.com.s3-website-us-east-1.amazonaws.com",
      domain_name: "example.com.s3-website-us-east-1.amazonaws.com",
      custom_origin_config: {
        http_port: 80,                        # required
        https_port: 443,                      # required
        origin_protocol_policy: "http-only",  # required; accepts http-only, match-viewer, https-only
      },
    }]
  },
  ...
}
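For reference, here is a fuller, runnable sketch of the same call. Everything outside the origins block (caller_reference, the default cache behavior values, and so on) is an illustrative assumption rather than part of the original answer, and required fields can vary slightly between SDK versions:

require "aws-sdk-cloudfront"

origin_id = "Custom-example.com.s3-website-us-east-1.amazonaws.com"

cloudfront = Aws::CloudFront::Client.new(region: "us-east-1")

cloudfront.create_distribution({
  distribution_config: {
    caller_reference: "static-site-#{Time.now.to_i}", # any unique string
    comment: "Static site served from an S3 website endpoint",
    enabled: true,
    default_root_object: "index.html",
    origins: {
      quantity: 1,
      items: [{
        id: origin_id,
        domain_name: "example.com.s3-website-us-east-1.amazonaws.com",
        custom_origin_config: {
          http_port: 80,
          https_port: 443,
          origin_protocol_policy: "http-only" # website endpoints only speak plain HTTP
        }
      }]
    },
    default_cache_behavior: {
      target_origin_id: origin_id,
      viewer_protocol_policy: "redirect-to-https",
      min_ttl: 0,
      forwarded_values: { query_string: false, cookies: { forward: "none" } },
      trusted_signers: { enabled: false, quantity: 0 }
    }
  }
})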

How can I require a script from the data folder?

I want to load a JS file from a page and also require it in the background page.
I tried keeping two copies, one in lib and one in the data folder, but that causes problems with review.
I can load it from the lib folder in a page, but that is inconvenient for other browsers.
I can load it via the loader:
mono = require('toolkit/loader').main(require('toolkit/loader').Loader({
  paths: {
    'sdk/': 'resource://gre/modules/commonjs/sdk/',
    'data/': self.data.url('js/'),
    '': 'resource:///modules/'
  },
  name: self.name,
  prefixURI: 'resource://' + self.id.slice(1, -1) + '/'
}), "data/mono");
But I have a problem with:
require('net/xhr').XMLHttpRequest
I tried using this for the options, but I have the same problems:
require('#loader/options')
For now I pass all the required objects via arguments.
Any ideas?
Update:
Now I use this code; it allows requiring modules and, as far as I can tell, doesn't keep them in memory. But all modules need to be declared up front.
mono = require('toolkit/loader').main(require('toolkit/loader').Loader({
  paths: {
    'data/': self.data.url('js/')
  },
  name: self.name,
  prefixURI: 'resource://' + self.id.slice(1, -1) + '/',
  globals: {
    console: console,
    _require: function(path) {
      switch (path) {
        case 'sdk/timers':
          return require('sdk/timers');
        case 'sdk/simple-storage':
          return require('sdk/simple-storage');
        case 'sdk/window/utils':
          return require('sdk/window/utils');
        case 'sdk/self':
          return require('sdk/self');
        default:
          console.log('Module not found!', path);
      }
    }
  }
}), "data/mono");
I think this blog post from erikvold addresses the problem you are facing: http://work.erikvold.com/jetpack/2014/09/23/jp-pro-tip-reusing-js.html
