Storing Phoenix static assets on Amazon S3 - heroku

I need to provide some context before asking the question:
CONTEXT
I have a Phoenix application that is deployed to Heroku. By default, Brunch is used to compile the static assets (.js, .css, and images).
Those assets are stored on ./assets (as of Phoenix 1.3).
Those assets are compiled to ./priv/static/.
The compilation process generates a cache_manifest.json, after the assets are digested using MD5 fingerprinting.
It may be important to note that I'm using CloudFlare's free tier as a CDN.
I'm not concerned about user-uploaded assets here; I'm talking about the app's own assets.
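For intuition, the digesting mentioned above amounts to embedding each file's MD5 hash in its served filename for cache busting. A rough shell illustration of that idea (the file name and contents are made up, not taken from the app):

```shell
# Sketch of what MD5 fingerprinting does to an asset's filename.
printf 'body { color: red }\n' > app.css
hash=$(md5sum app.css | cut -c1-32)   # the 32-char MD5 digest of the contents
cp app.css "app-${hash}.css"          # digested copy, as phx.digest would emit
ls app-*.css                          # e.g. app-<32-hex-digest>.css
```

Because the digest changes whenever the contents change, digested files can be cached forever by the CDN.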
Relevant part of the app's config/prod.exs:
config :bespoke_work, BespokeWork.Web.Endpoint,
on_init: {BespokeWork.Web.Endpoint, :load_from_system_env, []},
http: [port: {:system, "PORT"}],
url: [scheme: "https", host: System.get_env("HEROKU_HOST"), port: System.get_env("HEROKU_PORT")],
static_url: [scheme: "https", host: System.get_env("STATIC_ASSETS"), port: 443],
force_ssl: [rewrite_on: [:x_forwarded_proto]],
cache_static_manifest: "priv/static/cache_manifest.json",
secret_key_base: System.get_env("SECRET_KEY_BASE")
QUESTION
How can I prevent Heroku from building the assets and, instead, during deploy, automatically upload the digested assets to an Amazon S3 Bucket?
Will that make Heroku's slug smaller?
POSSIBLE SOLUTION
Reducing Heroku's Slug Size:
• In the Procfile, redirect mix phx.digest so the digested output goes to /dev/null.
or
• Redefine mix deps.compile for prod so that it does not generate the assets.
Generate the assets locally.
Either upload them manually or use a shell script to upload them to S3 (in my case, probably using Codeship's Amazon S3 deployment).
Use static_url to generate paths "pointing" to the S3 Bucket.
Is there any simpler way to accomplish this?

This is not really a good idea... it might even be considered an anti-pattern.
But if, for some reason, it is absolutely necessary, this is one way to do it:
Reducing the Slug Size on Heroku:
Heroku allows the use of a .slugignore file for basic use cases (such as excluding .pdf or .psd files)
and / or
The slug cleaner buildpack cleans any non-required static assets after they've been processed.
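For reference, a .slugignore sketch, kept at the repo root. The entries below are examples only; excluding the assets source tree makes sense here only because the goal is to stop Heroku from compiling it:

```
*.psd
*.pdf
/test
/assets
```

Heroku applies .slugignore after you push but before the buildpack runs, so excluded files never reach the slug.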
Automating Amazon S3 Uploads
• This should be done through a shell script.
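A sketch of such a script, suitable for a CI step like Codeship's. The bucket name is hypothetical, AWS credentials are assumed to be in the environment, and the build steps are guarded so the script degrades gracefully where the toolchain is absent:

```shell
#!/usr/bin/env sh
# Build digested assets (if the toolchain is present) and sync them to S3.
set -e
BUCKET="${BUCKET:-bespoke-work-assets}"    # hypothetical bucket name
CACHE_CONTROL="public, max-age=31536000"   # digested files are immutable, so far-future caching is safe

if command -v mix >/dev/null 2>&1; then
  (cd assets && npm run deploy)   # Phoenix 1.3 default: brunch build --production
  MIX_ENV=prod mix phx.digest     # fingerprint into priv/static
fi

if command -v aws >/dev/null 2>&1; then
  aws s3 sync priv/static "s3://${BUCKET}/" \
    --acl public-read \
    --cache-control "$CACHE_CONTROL"
else
  echo "aws CLI not found; would sync priv/static to s3://${BUCKET}/"
fi
```

With static_url pointing at the bucket (or a CDN in front of it), the app then renders asset paths against that host instead of the dyno.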

Related

Cache busting is not working in Production

I am trying to solve a caching issue that really has me scratching my head.
We have a Laravel app built in Docker that is deployed to Staging and Production.
Assets are built with Laravel Mix, which uses Webpack. All of our files are versioned.
Staging is in debug mode while Production is not.
Staging is a single instance behind a load balancer, Production is
two instances behind a load balancer.
Both are using the same Nginx configs.
Both sites use CloudFront with the same caching behaviors: origin cache headers are respected, cookies are not forwarded, and the query string is whitelisted on id.
When we deploy to Staging, everything works as expected.
When we deploy to Production, the CSS and Javascript are cached with the previous version.
I checked the asset hashes and they are the same; both get hits from CloudFront.
When I download the CSS files and diff production against staging, they are not the same.
What really boggles my mind is that when I change the cache key on production, I see a miss from CloudFront, then a hit from CloudFront on reload. If I then download and diff staging and production, they are the same.
I have no idea what is going on or where to look. Any ideas would be much appreciated!

Where are my assets files stored?

From what I have read, Heroku recommends pointing your CDN directly at your dynos rather than using asset_sync, so I did this:
# config/environments/production.rb
config.action_controller.asset_host = "<MY DISTRIBUTION SUBDOMAIN>.cloudfront.net"
The origin of my distribution is dcaclab.com
The CNAMEs of my distribution is assets.dcaclab.com
So, after I successfully pushed my Rails 4 app to Heroku, I found that all my assets were served from my CloudFront distribution, like:
http://<MY DISTRIBUTION SUBDOMAIN>.cloudfront.net/assets/features/Multiple%20Circuits-edbfca60b3a74e6e57943dc72745a98c.jpg
What I don't understand is how my asset files got uploaded to my CloudFront distribution. Also, where can I find them?
I thought they would be uploaded to my S3 bucket "assets", but it was just empty.
Can some enlighten me please?
If your files are being served from your CloudFront distribution and are not being pulled from your S3 bucket "assets" as you previously thought, then you have probably set up your CloudFront distribution with a custom origin, such as your web server (S3 buckets are the normal/standard origins).
If you have set up your CloudFront distribution to use your web server as the origin, CloudFront pulls the files from your web server before caching them at its edge locations. The pulling and caching happen automatically when a user accesses resources through the "distributionname.cloudfront.net" domain.
As a side note, I ran "dig assets.dcaclab.com", which resolves to "assets.dcaclab.com.s3.amazonaws.com".
If I read the intro docs correctly, you don't necessarily see them on CloudFront, at least not outside of the management console. They're cached on edge nodes and requested from your origin if they're not found or have expired. They're "uploaded" on demand: the edge requests the file from the origin if it doesn't have it.
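That pull-through behavior can be confirmed from the x-cache response header CloudFront adds. A simulated check follows; with a real distribution you would run `curl -sI` against the asset URL, and the headers below are illustrative, not captured from this site:

```shell
# Simulated response headers; in practice:
#   curl -sI https://<subdomain>.cloudfront.net/assets/logo.png
headers='HTTP/1.1 200 OK
x-cache: Miss from cloudfront'
echo "$headers" | grep -i '^x-cache'
# A first request misses (the edge pulls from the origin); on a repeat
# request the header reads "Hit from cloudfront".
```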

Asset_pipeline in heroku using the wrong asset hash for precompiled javascript

I'm trying to set up my app to serve assets over the Amazon S3/CloudFront CDN. It's a Rails app, and I use the asset_sync gem to achieve this, as per this Heroku document.
I push my project up to Heroku and then run heroku run rake assets:precompile. This gives me output that looks like this:
I, [2013-09-20T21:19:06.506796 #2] INFO -- : Writing /app/public/assets/application-cb6347d3ce9380e02c37364b541fd8ae.js
I, [2013-09-20T21:19:19.979570 #2] INFO -- : Writing /app/public/assets/application-9dc3068c1bf9290c7eb0493fd36b3587.css
[WARNING] fog: followed redirect to abc123.s3-us-west-1.amazonaws.com, connecting to the matching region will be more performant
[WARNING] fog: followed redirect to abc123.s3-us-west-1.amazonaws.com, connecting to the matching region will be more performant
Note that the hash it writes for the JS file, cb6347d3ce9380e02c37364b541fd8ae.js, is correct (I also verified this in staging on my localhost).
The problem, though, is that when I hit my app on Heroku and inspect the source, the JS it includes, 50460076f4c6eb614a44b6b17323efa7.js, is different from the one that was compiled earlier...
Why isn't Heroku picking up the right precompiled asset? When I deployed locally and did all the same steps, my local server picked up the correct JS with no problem.
Thanks for your help!
After some time, I realized that this was because I had previously compiled assets locally and pushed them up. Because of this, Heroku didn't try to precompile in production and just used the old manifest.json that was previously checked in.
You can either recompile locally and push again, or run rake assets:clobber to delete all precompiled assets, then commit and push; Heroku will then realize that it needs to precompile. Afterwards, it should use the correct manifest file, and assets should show up as normal.
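A small sketch of what clobbering amounts to: Rails trusts the committed manifest, so deleting the precompiled output (as rake assets:clobber does) forces Heroku to precompile and write a fresh manifest. The paths and manifest contents below are illustrative:

```shell
# A stale manifest accidentally committed with the repo:
mkdir -p public/assets
echo '{"application.js":"application-cb6347d3ce9380e02c37364b541fd8ae.js"}' \
  > public/assets/manifest.json
# rake assets:clobber amounts to removing the precompiled output:
rm -rf public/assets
[ -d public/assets ] || echo "clobbered; next deploy precompiles fresh assets"
```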
I found this blog post really useful in understanding the situation: http://www.rubycoloredglasses.com/2013/08/precompiling-rails4-assets-when-deploying-to-heroku/

how to migrate heroku file storage to S3

I am an idiot, and very new to Heroku. I used the Heroku file system to store Paperclip-attached files for my models.
Have I lost these files? And can I offload them to S3 somehow for better access?
It's a low-traffic site, but storing files locally on the server is causing problems, as it should.
You can assume you've lost the files: if the app has been restarted, scaled, or deployed since, they'll be gone.
You'll want to set it up to save files to S3 in the future.

How to upload images on heroku server using attachment_fu plugin

I have an app on Heroku and need simple file storage for uploaded images; for this I used send_data with the attachment_fu plugin.
I then wrote the files to the tmp/ directory and tried to display them in the browser, but they are not displayed.
How can I display these images on browser?
What is the alternate solution to store and retrieve images?
Thanks!
You cannot persistently store uploaded files on Heroku; the dyno filesystem is ephemeral.
You must use an alternative strategy. A couple of options:
Store uploaded files in your database on Heroku. Because database systems are not specifically optimized for storing files, test how well your application performs under load before committing to this strategy.
Store uploaded files in an external storage service. Amazon S3 is a popular cloud storage service that anyone can use, and it has pay-as-you-go pricing just like Heroku. (In fact, Heroku itself runs on top of another of Amazon's cloud services, EC2.)
