I have what I imagine is a fairly common setup.
My Rails 3 app is hosted on Heroku, and I use Paperclip to manage file uploads of videos and images, with all files saved on Amazon S3. The model that the files are attached to is Entry, and the attachments themselves are called 'media'. So I have Paperclip set up like this:
class Entry < ActiveRecord::Base
  has_attached_file :media, { :storage => :s3,
                              :bucket => "mybucketname",
                              :s3_credentials => <credentials hash> }
end
This is all working fine. But I now want to add download links to the files, so that users can download the videos, for editing for example. I've done this as follows:
Download link on the page:
<p><%= link_to "Download", download_entry_path(entry) %></p>
This just calls a download action in EntriesController which looks like this:
def download
  @entry = Entry.find(params[:id])
  if @entry.media.file?
    send_file @entry.media.to_file, :type => @entry.media_content_type,
              :disposition => 'attachment',
              :filename => @entry.media_file_name,
              :x_sendfile => true
  else
    flash[:notice] = "Sorry, there was a problem downloading this file"
    redirect_to report_path(@entry.report) and return
  end
end
Since some of the downloads will be very large, I'd like to hand the download off to the web server to avoid tying up a dyno. That's why I'm using the :x_sendfile option. However, I don't think it's set up properly: in the Heroku log I can see this:
2011-06-30T11:57:33+00:00 app[web.1]: X-Accel-Mapping header missing
2011-06-30T11:57:33+00:00 app[web.1]:
2011-06-30T11:57:33+00:00 app[web.1]: Started GET "/entries/7/download" for 77.89.149.137 at 2011-06-30 04:57:33 -0700
2011-06-30T11:57:33+00:00 app[web.1]: ### params = {"action"=>"download", "controller"=>"entries", "id"=>"7"}
2011-06-30T11:57:33+00:00 heroku[router]: GET <my-app>/entries/7/download dyno=web.1 queue=0 wait=0ms service=438ms status=200 bytes=94741
The "X-Accel-Mapping header missing" message suggests that something's not right, but i don't know what. Basically i don't know if heroku's nginx server takes on file downloading automatically, and if not then how to tell it to, and i can't find anything in heroku's documentation about it (i might be looking for the wrong thing).
Can anyone set me straight? Grateful for any advice - max
I'm not sure why you're sending files via the server. If they're stored on S3, why not just link right to them?
<%= link_to "Download", entry.media.url %>
That way the downloads bypass your Heroku server altogether.
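One caveat on the original approach: the "X-Accel-Mapping header missing" warning comes from Rack::Sendfile, and Heroku has no fronting web server configured to honor X-Sendfile/X-Accel-Redirect headers, so :x_sendfile => true won't offload anything there. If you link straight to S3 but want the file to download rather than display inline, Paperclip's S3 storage can set a Content-Disposition header on the stored object. A minimal sketch, assuming a credentials file at config/s3.yml (a hypothetical path, not from the question):

class Entry < ActiveRecord::Base
  has_attached_file :media,
    :storage        => :s3,
    :bucket         => "mybucketname",
    :s3_credentials => "#{Rails.root}/config/s3.yml", # hypothetical credentials file
    # Ask S3 to serve the file as a download instead of rendering it inline:
    :s3_headers     => { "Content-Disposition" => "attachment" }
end

For a private bucket, entry.media.expiring_url(60) returns a time-limited signed URL (if your Paperclip version supports it) that can be dropped into the same link_to call.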
I have a web app working on Heroku right now. I configured the app to store the favicon and company logo in the asset pipeline, and I use them only on the login page.
The problem is that I'm trying to use Active Storage and AWS S3 to start uploading images of my employees on Heroku.
I followed all the documentation on using Active Storage and all the docs on configuring Heroku and AWS S3.
Running my app locally, Active Storage and S3 work: I can upload images to my S3 bucket and everything looks great. The problem is when I deploy this version to Heroku: the push ("git push heroku master") doesn't report any error, but when I try to access my app, it doesn't work.
My Heroku logs show:
2020-03-27T16:38:47.835694+00:00 app[web.1]: from bin/rails:9:in `<main>'
2020-03-27T16:38:47.889395+00:00 app[web.1]: => Booting Puma
2020-03-27T16:38:47.889418+00:00 app[web.1]: => Rails 5.2.4.1 application starting in production
2020-03-27T16:38:47.889419+00:00 app[web.1]: => Run `rails server -h` for more startup options
2020-03-27T16:38:47.889419+00:00 app[web.1]: Exiting
2020-03-27T16:38:57.236728+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/admin/client" host=admin.ttpn.com.mx request_id=6568febe-d894-4751-bf2c-c6d8d1539146 fwd="189.237.90.141" dyno= connect= service= status=503 bytes= protocol=https
My Employee model has the fields used with rails_admin and the code for Active Storage:
class Employee < ApplicationRecord
  has_one_attached :avatar
  attr_accessor :remove_avatar
  after_save { avatar.purge if remove_avatar == '1' }
The rails_admin configuration to use images is:
rails_admin do
  create do
    field :avatar, :active_storage
    field ...
  end
  edit do
    field :avatar, :active_storage do
      delete_method :remove_avatar
    end
    field ...
  end
end
end # closes the Employee class
My storage.yml code is:
local:
  service: S3
  access_key_id: <%= Rails.application.credentials.amazon[:access_key_id] %>
  secret_access_key: <%= Rails.application.credentials.amazon[:secret_access_key] %>
  region: <%= Rails.application.credentials.test[:region] %>
  bucket: <%= Rails.application.credentials.test[:bucket] %>
amazon:
  service: S3
  access_key_id: <%= ENV['AWS_ACCESS_KEY_ID'] %>
  secret_access_key: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>
  region: us-east-2
  bucket: <%= ENV['BUCKET_NAME'] %>
All the ENV[...] variables are already configured in Heroku.
Can someone help me find out why my app doesn't work on Heroku?
Thanks
@antonio-castellanos-loya Considering you stated that the application works fine in your local environment, is uploading images to S3 in development, and is deploying without failing, I'm going to assume that this is an issue with your Heroku instance.
Did you try running the migrations on the Heroku instance?
heroku run rails db:migrate
This is the first thing I'd check, especially since your app is exiting immediately upon booting as per your logs.
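If the migrations are in place and the dyno still exits right after "Booting Puma", the actual exception should appear in the boot log. One more thing worth ruling out (an assumption on my part, not confirmed by the question): your storage.yml reads from Rails.application.credentials, and on Rails 5.2 those lookups blow up at boot if the RAILS_MASTER_KEY config var isn't set on Heroku, since config/master.key is gitignored by default.

# Tail the full boot output to see the exception behind the H10:
heroku logs --tail

# If credentials turn out to be the culprit, set the master key
# (the value comes from your local config/master.key):
heroku config:set RAILS_MASTER_KEY=<your-master-key>

Also worth confirming that config/environments/production.rb points Active Storage at the amazon service from storage.yml (config.active_storage.service = :amazon); the question doesn't show that file.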
First-time poster, I tried to follow the rules but please tell me if anything is missing and/or needs to be edited in this question.
TL;DR: using ActiveStorage, pictures associated with items display fine on localhost but not on Heroku.
Initial setup
In an educational context, my team and I are building a simple Ruby on Rails application. We have to use Ruby 2.5.1 and Rails 5.2.4 with a PostgreSQL 9.5 database in development, and we have to use Heroku for production.
The source code is hosted on a GitLab instance named Framagit. We use the GitLab pipeline to deploy the code to Heroku every time a merge is completed into the staging branch.
The Procfile is as follows:
release: rails db:migrate && rails db:seed
web: rails server
On this app, the main model is Compost. Each instance has one picture. These pictures are stored under:
app/assets/images/compost_pictures/compost_{001-007}.jpg
ActiveStorage setup
ActiveStorage was installed by running $ rails active_storage:install followed by $ rails db:migrate, which created the active_storage_attachments and active_storage_blobs tables in db/schema.rb.
app/models/compost.rb was updated with:
# Active Storage picture association
has_one_attached :picture
app/views/composts/show.html.erb now includes:
<div class="col-8">
  <% if @compost.picture.attached? %>
    <%= image_tag @compost.picture, style: "height: auto; max-width: 100%" %>
  <% else %>
    <h3>Hey, why don't you provide a nice compost picture?</h3>
  <% end %>
</div>
(I also tried image_tag url_for(@compost.picture), with no success on Heroku, although it works on localhost.)
(@compost is set in the controller and works fine for displaying the other compost attributes.)
and db/seeds.rb was updated with:
def compost_seed(user)
  new_compost = user.owned_composts.create!( all the attributes... )
  picture_file_name = 'compost_' + (format '%03d', rand(1..7)) + '.jpg'
  picture_path = Rails.root.join("app", "assets", "images", "compost_pictures", picture_file_name)
  new_compost.picture.attach(
    io: File.open(picture_path),
    filename: picture_file_name,
    content_type: "image/jpg"
  )
end
Running the server
Locally
On localhost with the Rails server set to development, everything works fine, pictures are displayed.
I've set up a simulated production environment like this:
$ RAILS_ENV=production rails db:create db:migrate
# completed with no error
$ RAILS_ENV=production rails assets:precompile
# completed with no error
$ RAILS_ENV=production rails db:seed
# completed with no error
$ RAILS_ENV=production SERVE_STATIC_ASSETS=true rails server
# server runs smoothly
Everything works fine, pictures are displayed.
On Heroku
none of the pictures is displayed: they are replaced by a broken-image symbol
in the Heroku Rails console, > Compost.all.sample.picture.attached? returns true
in the browser, the URL of the image is https://ok-compost-staging.herokuapp.com/rails/active_storage/disk/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaDdDRG9JYTJWNVNTSWRaV2huZW1SRGVESkRTRE5sWWxoSGJYSlJSakpTVG5OVUJqb0dSVlE2RUdScGMzQnZjMmwwYVc5dVNTSkphVzVzYVc1bE95Qm1hV3hsYm1GdFpUMGlZMjl0Y0c5emRGOHdNRFl1YW5Cbklqc2dabWxzWlc1aGJXVXFQVlZVUmkwNEp5ZGpiMjF3YjNOMFh6QXdOaTVxY0djR093WlVPaEZqYjI1MFpXNTBYM1I1Y0dWSklnOXBiV0ZuWlM5cWNHVm5CanNHVkE9PSIsImV4cCI6IjIwMTktMTItMDdUMDA6NDM6NTUuNTYzWiIsInB1ciI6ImJsb2Jfa2V5In19--ee616a99d031d361f2b63aeb869ca5d787ef44a8/compost_006.jpg?content_type=image%2Fjpeg&disposition=inline%3B+filename%3D%22compost_006.jpg%22%3B+filename%2A%3DUTF-8%27%27compost_006.jpg and trying to visit it ends up with a 'We're sorry, but something went wrong'.
The logs are:
2019-12-07T00:44:44.433822+00:00 heroku[router]: at=info method=GET path="/rails/active_storage/disk/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaDdDRG9JYTJWNVNTSWRVMVI1ZERoQ1ZXUlFibUpTTVZGNFRWbEVUamhaVldsMkJqb0dSVlE2RUdScGMzQnZjMmwwYVc5dVNTSkphVzVzYVc1bE95Qm1hV3hsYm1GdFpUMGlZMjl0Y0c5emRGOHdNREV1YW5Cbklqc2dabWxzWlc1aGJXVXFQVlZVUmkwNEp5ZGpiMjF3YjNOMFh6QXdNUzVxY0djR093WlVPaEZqYjI1MFpXNTBYM1I1Y0dWSklnOXBiV0ZuWlM5cWNHVm5CanNHVkE9PSIsImV4cCI6IjIwMTktMTItMDdUMDA6NDk6MzcuMzI2WiIsInB1ciI6ImJsb2Jfa2V5In19--cae4ff5349cfa4a394c081da5827f0c35ce96c08/compost_001.jpg?content_type=image%2Fjpeg&disposition=inline%3B+filename%3D%22compost_001.jpg%22%3B+filename%2A%3DUTF-8%27%27compost_001.jpg" host=ok-compost-staging.herokuapp.com request_id=e8ca9a03-07c4-459d-ab20-4e096da30e87 fwd="78.201.127.110" dyno=web.1 connect=0ms service=4ms status=500 bytes=1827 protocol=https
2019-12-07T00:44:47.277080+00:00 app[web.1]: I, [2019-12-07T00:44:47.276974 #4] INFO -- : [f8400596-5b61-4569-ad22-b514afe4b32a] Started GET "/rails/active_storage/disk/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaDdDRG9JYTJWNVNTSWRVMVI1ZERoQ1ZXUlFibUpTTVZGNFRWbEVUamhaVldsMkJqb0dSVlE2RUdScGMzQnZjMmwwYVc5dVNTSkphVzVzYVc1bE95Qm1hV3hsYm1GdFpUMGlZMjl0Y0c5emRGOHdNREV1YW5Cbklqc2dabWxzWlc1aGJXVXFQVlZVUmkwNEp5ZGpiMjF3YjNOMFh6QXdNUzVxY0djR093WlVPaEZqYjI1MFpXNTBYM1I1Y0dWSklnOXBiV0ZuWlM5cWNHVm5CanNHVkE9PSIsImV4cCI6IjIwMTktMTItMDdUMDA6NDk6MzcuMzI2WiIsInB1ciI6ImJsb2Jfa2V5In19--cae4ff5349cfa4a394c081da5827f0c35ce96c08/compost_001.jpg?content_type=image%2Fjpeg&disposition=inline%3B+filename%3D%22compost_001.jpg%22%3B+filename%2A%3DUTF-8%27%27compost_001.jpg" for 78.201.127.110 at 2019-12-07 00:44:47 +0000
2019-12-07T00:44:47.277980+00:00 app[web.1]: I, [2019-12-07T00:44:47.277897 #4] INFO -- : [f8400596-5b61-4569-ad22-b514afe4b32a] Processing by ActiveStorage::DiskController#show as JPEG
2019-12-07T00:44:47.278103+00:00 app[web.1]: I, [2019-12-07T00:44:47.278010 #4] INFO -- : [f8400596-5b61-4569-ad22-b514afe4b32a] Parameters: {"content_type"=>"image/jpeg", "disposition"=>"inline; filename=\"compost_001.jpg\"; filename*=UTF-8''compost_001.jpg", "encoded_key"=>"eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaDdDRG9JYTJWNVNTSWRVMVI1ZERoQ1ZXUlFibUpTTVZGNFRWbEVUamhaVldsMkJqb0dSVlE2RUdScGMzQnZjMmwwYVc5dVNTSkphVzVzYVc1bE95Qm1hV3hsYm1GdFpUMGlZMjl0Y0c5emRGOHdNREV1YW5Cbklqc2dabWxzWlc1aGJXVXFQVlZVUmkwNEp5ZGpiMjF3YjNOMFh6QXdNUzVxY0djR093WlVPaEZqYjI1MFpXNTBYM1I1Y0dWSklnOXBiV0ZuWlM5cWNHVm5CanNHVkE9PSIsImV4cCI6IjIwMTktMTItMDdUMDA6NDk6MzcuMzI2WiIsInB1ciI6ImJsb2Jfa2V5In19--cae4ff5349cfa4a394c081da5827f0c35ce96c08", "filename"=>"compost_001"}
2019-12-07T00:44:47.279105+00:00 app[web.1]: I, [2019-12-07T00:44:47.279040 #4] INFO -- : [f8400596-5b61-4569-ad22-b514afe4b32a] Completed 500 Internal Server Error in 1ms (ActiveRecord: 0.0ms)
2019-12-07T00:44:47.279772+00:00 app[web.1]: F, [2019-12-07T00:44:47.279702 #4] FATAL -- : [f8400596-5b61-4569-ad22-b514afe4b32a]
2019-12-07T00:44:47.279842+00:00 app[web.1]: F, [2019-12-07T00:44:47.279773 #4] FATAL -- : [f8400596-5b61-4569-ad22-b514afe4b32a] Errno::ENOENT (No such file or directory @ rb_file_s_mtime - /app/storage/ST/yt/STyt8BUdPnbR1QxMYDN8YUiv)
The images were properly displayed when we fetched them via their URL in the assets pipeline.
So, what now?
According to the logs, we guess this is a path issue. What are we missing? At what point are we mistaken?
We'll update the post with any required code snippet or test result; this is an educational project and nothing about it is private or sensitive.
Any insight or pointer appreciated :)
Workaround : use cloud storage instead of local disk in production
Since all the other images (background, logos, etc.) were displayed properly, the problem had to lie with our Active Storage use.
According to the Heroku doc on Active Storage, Heroku uses ephemeral disks for Active Storage, and files are persisted neither across dynos nor over time. So our guess was that our attachments were not stored on the dyno that served our application.
Connecting the production storage to a free-tier AWS S3 bucket, as suggested in the Heroku doc, worked for us, and images are now displayed properly.
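For anyone landing here, a minimal sketch of what that wiring typically looks like (hypothetical env var names; the region is just an example):

# Gemfile
gem "aws-sdk-s3", require: false

# config/storage.yml
amazon:
  service: S3
  access_key_id: <%= ENV["AWS_ACCESS_KEY_ID"] %>
  secret_access_key: <%= ENV["AWS_SECRET_ACCESS_KEY"] %>
  region: eu-west-3
  bucket: <%= ENV["S3_BUCKET"] %>

# config/environments/production.rb
config.active_storage.service = :amazon

With that in place, the seeded attachments live in the bucket instead of on the dyno's ephemeral disk, so they survive restarts and deploys.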
So I have set up parse-server (specifically parse-server-example) on Heroku with an mLab DB and the S3 file adapter for storage.
I've been scouring GitHub and Stack Overflow trying to find a solution to the maxUploadSize limitation.
By default this is set to '20mb', and sources say that you can modify this option to be anything you like. Within my parse-server-example there are four files that contain this variable:
1. ParseServer.js (/parse-server-example/node_modules/parse-server/lib)
2. FilesRouter.js (/parse-server-example/node_modules/parse-server/lib/Routers)
3. parse-server.js (/parse-server-example/node_modules/parse-server/lib/cli/definitions)
4. defaults.js (/parse-server-example/node_modules/parse-server/lib)
There is also my own index.js at the root of /parse-server-example.
I have replaced all instances of '20mb' with '100mb' in these four files and added the maxUploadSize option to my index.js. I think the only place where this actually matters is index.js: that is where I changed it originally, and it did allow larger files. I then changed it in the other locations out of desperation, with no change in the result.
index.js:
var api = new ParseServer({
  //**** General Settings ****//
  databaseURI: databaseUri || 'mongodb://localhost:27017/dev',
  cloud: process.env.CLOUD_CODE_MAIN || __dirname + '/cloud/main.js',
  serverURL: process.env.SERVER_URL || 'http://localhost:1337/parse',
  maxUploadSize: '100mb',
Now I can upload files up to 40mb quite easily, but when I try to upload anything larger, it shows as uploading in the dashboard for quite a while and then just stops.
I have also read that setting client_max_body_size in .ebextensions may play a part, but I think that is for Elastic Beanstalk on AWS, so I don't think it's relevant for me. I even tried adding the variable in a files.config file inside the .ebextensions folder in my /parse-server-example/ folder, though.
I am uploading files through the Parse dashboard, so they are then transferred to the S3 bucket I have set up, with the unique PFFile name.
This is what my Heroku logs look like after trying a 50mb upload:
2017-02-06T06:58:14.188932+00:00 app[web.1]: } method=POST, url=/parse/classes/Part, host=parse-server-example.herokuapp.com, connection=close, content-type=text/plain, origin=http://0.0.0.0:4040, accept-encoding=gzip, deflate, accept=*/*, user-agent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/602.3.12 (KHTML, like Gecko) Version/10.0.2 Safari/602.3.12, referer=http://0.0.0.0:4040/apps/parse-server-example/browser/Part, accept-language=en-au, x-request-id=df8ecd66-76f5-4592-9c82-44a86f6c4165, x-forwarded-for=101.165.231.37, x-forwarded-proto=https, x-forwarded-port=443, via=1.1 vegur, connect-time=1, x-request-start=1486364292657, total-route-time=0, content-length=235, __op=Delete
2017-02-06T06:58:14.210449+00:00 app[web.1]: [36mverbose[39m: RESPONSE from [POST] /parse/classes/Part: {
2017-02-06T06:58:14.210451+00:00 app[web.1]: "status": 201,
2017-02-06T06:58:14.210452+00:00 app[web.1]: "response": {
2017-02-06T06:58:14.210453+00:00 app[web.1]: "objectId": "vrIuYJCOfG",
2017-02-06T06:58:14.210455+00:00 app[web.1]: "createdAt": "2017-02-06T06:58:14.190Z"
2017-02-06T06:58:14.210456+00:00 app[web.1]: },
2017-02-06T06:58:14.210457+00:00 app[web.1]: "location": "https://parse-server-example.herokuapp.com/parse/classes/Part/vrIuYJCOfG"
2017-02-06T06:58:14.210459+00:00 app[web.1]: } status=201, objectId=vrIuYJCOfG, createdAt=2017-02-06T06:58:14.190Z, location=https://parse-server-example.herokuapp.com/parse/classes/Part/vrIuYJCOfG
2017-02-06T06:58:29.061410+00:00 heroku[router]: at=info method=OPTIONS path="/parse/files/South%20Park%20s09e02%20-%20Die%20Hippie,%20Die%20_%20480p%20UNCENSORED%20x264%20NIT158.mp4" host=parse-server-example.herokuapp.com request_id=0e76d848-3c8a-44e5-8efe-4d586ced8e8a fwd="111.111.111.11" dyno=web.1 connect=0ms service=1ms status=200 bytes=477
It seems to show everything as bytes=477 regardless of the actual size?
There is another version of parse-server on GitHub that has a different FilesRouter.js file (among others), but it looks to be the development master.
I am wondering whether it might be advisable to overwrite my existing setup with that one; if so, how do I do it without messing up what I've done so far? And would it even fix the problem?
Is it something to do with the dashboard? I have not found another way to upload files and have them linked to the s3 bucket.
I should also add that I don't want users to upload files; this is just me attempting to populate the app with some media files that can be downloaded.
Thanks for any assistance here; this one has been a struggle for me to figure out despite looking for a long, long time.
It seems my IP's abysmal upload speed was responsible for this.
I've gone back through and cleaned up my changes. index.js is the only place where you need to set the maxUploadSize variable.
How do I configure my models to avoid their assets being deleted upon asset recompilation when I push new code to OpenShift?
At the moment my model looks like this:
class Slide < ActiveRecord::Base
  attr_accessible :caption, :position, :visible, :photo
  has_attached_file :photo, :styles => { :thumb => "190x90>" }
  ...
I have noticed that uploaded photos are deleted from the /public directory when OpenShift recompiles my assets on pushing new code.
I have found some old code that looks like this:
has_attached_file :attachment, :removable => true,
  :url => "/attachments/:id/:style/:basename.:extension",
  :path => ":rails_root/tmp/attachments/:id/:style/:basename.:extension"
Am I supposed to try something like the above code snippet, or is there an option for this in the OpenShift configuration?
The $OPENSHIFT_REPO_DIR structure gets replaced by your local git repo on every git push.
Best practice would be to use $OPENSHIFT_DATA_DIR instead of $OPENSHIFT_REPO_DIR for handling uploads in your application.
For more details, please review: https://openshift.redhat.com/community/kb/kb-e1065-what-is-application-crud-and-how-should-i-handle-it-in-openshift
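As an illustration of that advice, here is a hedged Paperclip sketch for the Slide model from the question (the symlink step is my assumption about how to make the files web-servable, not something the answer specifies):

class Slide < ActiveRecord::Base
  has_attached_file :photo,
    :styles => { :thumb => "190x90>" },
    # Store uploads outside the git-managed repo dir so pushes don't wipe them:
    :path => "#{ENV['OPENSHIFT_DATA_DIR']}/uploads/photos/:id/:style/:basename.:extension",
    :url  => "/uploads/photos/:id/:style/:basename.:extension"
end

For the :url to resolve, something still has to expose $OPENSHIFT_DATA_DIR/uploads as public/uploads, typically a symlink created in a deploy action hook such as .openshift/action_hooks/deploy.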
I am working in Rails 4 and I am trying to authenticate using GitHub. In my GitHub application I have this:
URL: http://localhost:4000
Callback URL: http://localhost:4000/auth/github/callback
The callback URL is the URL that GitHub will try to reach when the authentication is done, right?
So why do I get a GitHub 404 page when I click on my link:
<%= link_to 'Sign in with Github', '/auth/github' %>
I am working in a localhost development environment, so that might be the problem?
Also, when I type http://localhost:4000/auth/github/callback in my browser, I get an OmniAuth::Strategies::OAuth2::CallbackError.
Why? I have this in my routes.rb:
post 'auth/:provider/callback' => 'home#index'
Are Rails 4 and OmniAuth bugged?
I have GitHub authentication working with the omniauth-github gem
and a file config/initializers/omniauth.rb containing:
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :github, ENV['GITHUB_KEY'], ENV['GITHUB_SECRET']
end
However, when I enter http://localhost:3000/auth/github/callback in my browser, I also get OmniAuth::Strategies::OAuth2::CallbackError, so this shouldn't be the problem (hitting the callback URL directly, without the code and state parameters GitHub sends, raises that error by design).
My config/environment.rb looks like:
# Load the rails application
require File.expand_path('../application', __FILE__)
# Load the app's custom environment variables here, so that they are loaded before environments/*.rb
app_environment_variables = File.join(Rails.root, 'config', 'app_environment_variables.rb')
load(app_environment_variables) if File.exists?(app_environment_variables)
...
and my config/app_environment_variables.rb looks like:
# OAuth keys and secrets
if Rails.env.production?
  ENV['GITHUB_KEY'] = 'd1234a3a123a1a3a123c'
  ENV['GITHUB_SECRET'] = '1234azer123azer1231209jeunsghezkndaz1234'
else
  ENV['GITHUB_KEY'] = 'qsflkjkj685bg554456b'
  ENV['GITHUB_SECRET'] = 'qslkfj7757kqfmlsdh675hlfsd587kjfdh687jsd'
end
See Is it possible to set ENV variables for rails development environment in my code? for more details on that.
I have two applications registered on GitHub: app_name-dev, with key qsflk..., URL http://localhost:3000 and callback URL http://localhost:3000/auth/github/callback; and app_name, with key d1234a....
Check that you have done that correctly. Maybe also try changing localhost to 127.0.0.1.
For me it was GitHub's new stricter URI matching that was producing a 404 when trying to redirect to http://localhost:3000/auth/github/callback. I solved it by passing the redirect URI as a parameter through OmniAuth.
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :github, ENV['GITHUB_KEY'], ENV['GITHUB_SECRET'],
    :scope => 'user,public_repo',
    :redirect_uri => ENV['GITHUB_REDIRECT']
end
If you're on Linux/Mac, you can add environment variables from the command line:
$ export GITHUB_REDIRECT=http://localhost:3000/auth/github/callback
Alternatively, you could use something like Foreman, which lets you add a .env file in which to store your variables; for example:
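A minimal .env sketch for Foreman, reusing the development values from earlier in this thread:

# .env -- Foreman loads these into the environment at startup
GITHUB_KEY=qsflkjkj685bg554456b
GITHUB_SECRET=qslkfj7757kqfmlsdh675hlfsd587kjfdh687jsd
GITHUB_REDIRECT=http://localhost:3000/auth/github/callback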
Just remember to add the appropriate redirect URI to your production environment's variables, and you're good to go.