I have a really basic Sinatra site working locally. I am using the "rackup" thing, where you define a config.ru like this:
require './web'
use Rack::ShowExceptions
run App.new
And then in the terminal you can run 'rackup' and a web server is fired up and all is well.
However, when I deploy this to Heroku I don't get any error messages, but when I visit the site I get the standard "Sinatra doesn't know this ditty" error page.
Here is a snippet of my web.rb in case it helps:
require 'sinatra'
require 'maruku'
require 'mustache/sinatra'
require 'nokogiri'
class App < Sinatra::Base
  register Mustache::Sinatra
  require './views/layout'

  set :mustache, {
    :views     => './views/',
    :templates => './templates/'
  }

  get '/' do
    "FUUUUUUUUUUUUU"
  end
end
Edit
Looking at the Heroku logs, it appears that Sinatra starts and then stops; it doesn't keep running. Then when someone makes a request, the server naturally returns a 404:
2012-01-20T12:39:23+00:00 app[web.1]: == Sinatra/1.1.0 has taken the stage on 16662 for development with backup from Thin
2012-01-20T12:39:23+00:00 app[web.1]: >> Thin web server (v1.2.7 codename No Hup)
2012-01-20T12:39:23+00:00 app[web.1]: >> Maximum connections set to 1024
2012-01-20T12:39:23+00:00 app[web.1]: >> Listening on 0.0.0.0:16662, CTRL+C to stop
2012-01-20T12:39:23+00:00 app[web.1]: == Sinatra has ended his set (crowd applauds)
2012-01-20T12:39:23+00:00 app[web.1]:
2012-01-20T12:39:23+00:00 app[web.1]: >> Stopping ...
2012-01-20T12:39:23+00:00 heroku[web.1]: Process exited
2012-01-20T12:39:24+00:00 heroku[router]: GET young-river-2245.herokuapp.com/ dyno=web.1 queue=0 wait=0ms service=48ms status=404 bytes=409
Whenever you inherit from Sinatra::Base you should require 'sinatra/base' rather than require 'sinatra' at the top of your web.rb file; requiring 'sinatra' also loads the classic-style runner, which is what you see taking the stage and then immediately exiting in the log above.
I just ran a simple test using your snippets and was able to replicate and then fix the error this way.
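For reference, the fix is a one-line change at the top of web.rb; the class body and config.ru stay exactly as they are:

# web.rb
require 'sinatra/base'   # was: require 'sinatra'
require 'maruku'
require 'mustache/sinatra'
require 'nokogiri'

class App < Sinatra::Base
  # ... unchanged from the snippet above ...
end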
First-time poster: I tried to follow the rules, but please tell me if anything is missing and/or needs to be edited in this question.
TL;DR: using ActiveStorage, pictures associated with items display fine on localhost but not on Heroku.
Initial setup
In an educational context, my team and I are building a simple Ruby on Rails application. We have to use Ruby 2.5.1 and Rails 5.2.4 with a PostgreSQL 9.5 database in development, and we have to use Heroku for production.
The source code is hosted on a GitLab instance named Framagit. We are using the GitLab pipeline to deploy the code to Heroku every time a merge is completed into the staging branch.
The Procfile is as follows:
release: rails db:migrate && rails db:seed
web: rails server
In this app, the main model is Compost. Each instance has one picture. These pictures are stored under:
app/assets/images/compost_pictures/compost_{001-007}.jpg
ActiveStorage setup
ActiveStorage was installed by running $ rails active_storage:install followed by $ rails db:migrate, which created the active_storage_attachments and active_storage_blobs tables in db/schema.rb.
app/models/compost.rb was updated with:
# Active Storage picture association
has_one_attached :picture
app/views/composts/show.html.erb now includes:
<div class="col-8">
  <% if @compost.picture.attached? %>
    <%= image_tag @compost.picture, style: "height: auto; max-width: 100%" %>
  <% else %>
    <h3>Hey, why don't you provide a nice compost picture?</h3>
  <% end %>
</div>
(I also tried image_tag url_for(@compost.picture), with no success on Heroku, although it works on localhost)
(@compost is set in the controller and works fine for displaying the compost's other attributes)
and db/seeds.rb was updated with:
def compost_seed(user)
  new_compost = user.owned_composts.create!( all the attributes... )
  picture_file_name = 'compost_' + (format '%03d', rand(1..7)) + '.jpg'
  picture_path = Rails.root.join("app", "assets", "images", "compost_pictures", picture_file_name)
  new_compost.picture.attach(
    io: File.open(picture_path),
    filename: picture_file_name,
    content_type: "image/jpg"
  )
end
Running the server
Locally
On localhost with the Rails server set to development, everything works fine and pictures are displayed.
I've set up a simulated production environment like this :
$ RAILS_ENV=production rails db:create db:migrate
# completed with no error
$ RAILS_ENV=production rails assets:precompile
# completed with no error
$ RAILS_ENV=production rails db:seed
# completed with no error
$ RAILS_ENV=production SERVE_STATIC_ASSETS=true rails server
# server runs smoothly
Everything works fine and pictures are displayed.
On Heroku
- none of the pictures is displayed: they are replaced by a broken-image symbol
- in the Heroku rails console, Compost.all.sample.picture.attached? returns true
- in the browser, the URL of the image is https://ok-compost-staging.herokuapp.com/rails/active_storage/disk/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaDdDRG9JYTJWNVNTSWRaV2huZW1SRGVESkRTRE5sWWxoSGJYSlJSakpTVG5OVUJqb0dSVlE2RUdScGMzQnZjMmwwYVc5dVNTSkphVzVzYVc1bE95Qm1hV3hsYm1GdFpUMGlZMjl0Y0c5emRGOHdNRFl1YW5Cbklqc2dabWxzWlc1aGJXVXFQVlZVUmkwNEp5ZGpiMjF3YjNOMFh6QXdOaTVxY0djR093WlVPaEZqYjI1MFpXNTBYM1I1Y0dWSklnOXBiV0ZuWlM5cWNHVm5CanNHVkE9PSIsImV4cCI6IjIwMTktMTItMDdUMDA6NDM6NTUuNTYzWiIsInB1ciI6ImJsb2Jfa2V5In19--ee616a99d031d361f2b63aeb869ca5d787ef44a8/compost_006.jpg?content_type=image%2Fjpeg&disposition=inline%3B+filename%3D%22compost_006.jpg%22%3B+filename%2A%3DUTF-8%27%27compost_006.jpg and trying to visit it directly ends up with a 'We're sorry, but something went wrong' page
The logs are :
2019-12-07T00:44:44.433822+00:00 heroku[router]: at=info method=GET path="/rails/active_storage/disk/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaDdDRG9JYTJWNVNTSWRVMVI1ZERoQ1ZXUlFibUpTTVZGNFRWbEVUamhaVldsMkJqb0dSVlE2RUdScGMzQnZjMmwwYVc5dVNTSkphVzVzYVc1bE95Qm1hV3hsYm1GdFpUMGlZMjl0Y0c5emRGOHdNREV1YW5Cbklqc2dabWxzWlc1aGJXVXFQVlZVUmkwNEp5ZGpiMjF3YjNOMFh6QXdNUzVxY0djR093WlVPaEZqYjI1MFpXNTBYM1I1Y0dWSklnOXBiV0ZuWlM5cWNHVm5CanNHVkE9PSIsImV4cCI6IjIwMTktMTItMDdUMDA6NDk6MzcuMzI2WiIsInB1ciI6ImJsb2Jfa2V5In19--cae4ff5349cfa4a394c081da5827f0c35ce96c08/compost_001.jpg?content_type=image%2Fjpeg&disposition=inline%3B+filename%3D%22compost_001.jpg%22%3B+filename%2A%3DUTF-8%27%27compost_001.jpg" host=ok-compost-staging.herokuapp.com request_id=e8ca9a03-07c4-459d-ab20-4e096da30e87 fwd="78.201.127.110" dyno=web.1 connect=0ms service=4ms status=500 bytes=1827 protocol=https
2019-12-07T00:44:47.277080+00:00 app[web.1]: I, [2019-12-07T00:44:47.276974 #4] INFO -- : [f8400596-5b61-4569-ad22-b514afe4b32a] Started GET "/rails/active_storage/disk/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaDdDRG9JYTJWNVNTSWRVMVI1ZERoQ1ZXUlFibUpTTVZGNFRWbEVUamhaVldsMkJqb0dSVlE2RUdScGMzQnZjMmwwYVc5dVNTSkphVzVzYVc1bE95Qm1hV3hsYm1GdFpUMGlZMjl0Y0c5emRGOHdNREV1YW5Cbklqc2dabWxzWlc1aGJXVXFQVlZVUmkwNEp5ZGpiMjF3YjNOMFh6QXdNUzVxY0djR093WlVPaEZqYjI1MFpXNTBYM1I1Y0dWSklnOXBiV0ZuWlM5cWNHVm5CanNHVkE9PSIsImV4cCI6IjIwMTktMTItMDdUMDA6NDk6MzcuMzI2WiIsInB1ciI6ImJsb2Jfa2V5In19--cae4ff5349cfa4a394c081da5827f0c35ce96c08/compost_001.jpg?content_type=image%2Fjpeg&disposition=inline%3B+filename%3D%22compost_001.jpg%22%3B+filename%2A%3DUTF-8%27%27compost_001.jpg" for 78.201.127.110 at 2019-12-07 00:44:47 +0000
2019-12-07T00:44:47.277980+00:00 app[web.1]: I, [2019-12-07T00:44:47.277897 #4] INFO -- : [f8400596-5b61-4569-ad22-b514afe4b32a] Processing by ActiveStorage::DiskController#show as JPEG
2019-12-07T00:44:47.278103+00:00 app[web.1]: I, [2019-12-07T00:44:47.278010 #4] INFO -- : [f8400596-5b61-4569-ad22-b514afe4b32a] Parameters: {"content_type"=>"image/jpeg", "disposition"=>"inline; filename=\"compost_001.jpg\"; filename*=UTF-8''compost_001.jpg", "encoded_key"=>"eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaDdDRG9JYTJWNVNTSWRVMVI1ZERoQ1ZXUlFibUpTTVZGNFRWbEVUamhaVldsMkJqb0dSVlE2RUdScGMzQnZjMmwwYVc5dVNTSkphVzVzYVc1bE95Qm1hV3hsYm1GdFpUMGlZMjl0Y0c5emRGOHdNREV1YW5Cbklqc2dabWxzWlc1aGJXVXFQVlZVUmkwNEp5ZGpiMjF3YjNOMFh6QXdNUzVxY0djR093WlVPaEZqYjI1MFpXNTBYM1I1Y0dWSklnOXBiV0ZuWlM5cWNHVm5CanNHVkE9PSIsImV4cCI6IjIwMTktMTItMDdUMDA6NDk6MzcuMzI2WiIsInB1ciI6ImJsb2Jfa2V5In19--cae4ff5349cfa4a394c081da5827f0c35ce96c08", "filename"=>"compost_001"}
2019-12-07T00:44:47.279105+00:00 app[web.1]: I, [2019-12-07T00:44:47.279040 #4] INFO -- : [f8400596-5b61-4569-ad22-b514afe4b32a] Completed 500 Internal Server Error in 1ms (ActiveRecord: 0.0ms)
2019-12-07T00:44:47.279772+00:00 app[web.1]: F, [2019-12-07T00:44:47.279702 #4] FATAL -- : [f8400596-5b61-4569-ad22-b514afe4b32a]
2019-12-07T00:44:47.279842+00:00 app[web.1]: F, [2019-12-07T00:44:47.279773 #4] FATAL -- : [f8400596-5b61-4569-ad22-b514afe4b32a] Errno::ENOENT (No such file or directory @ rb_file_s_mtime - /app/storage/ST/yt/STyt8BUdPnbR1QxMYDN8YUiv)
The images were displayed properly when we fetched them via their URL in the asset pipeline.
So, what now?
According to the logs, we guess this is a path issue. What are we missing? Where did we go wrong?
We'll update the post with any required code snippet or test result; this is an educational project and nothing about it is private or sensitive.
Any insight or pointer appreciated :)
Workaround: use cloud storage instead of local disk in production
Since all the other images (background, logos, etc.) were displayed properly, the problem had to lie with our Active Storage use.
According to the Heroku docs on Active Storage, Heroku uses ephemeral disks, so files are persisted neither across dynos nor over time. Our guess was therefore that our attachments were not stored on the dyno that served our application.
Connecting the production storage to a free-tier AWS S3 bucket, as suggested in the Heroku docs, worked for us, and images are now displayed properly.
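For the record, the wiring looks roughly like this; the bucket, region, and environment variable names below are placeholders, while the :amazon service key and the aws-sdk-s3 gem follow the standard Rails Active Storage conventions:

# Gemfile
gem "aws-sdk-s3", require: false

# config/storage.yml
amazon:
  service: S3
  access_key_id: <%= ENV["AWS_ACCESS_KEY_ID"] %>
  secret_access_key: <%= ENV["AWS_SECRET_ACCESS_KEY"] %>
  region: eu-west-3        # placeholder
  bucket: your-bucket-name # placeholder

# config/environments/production.rb
config.active_storage.service = :amazon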
I am running a small Docker image that starts a Ruby Sinatra API. The point of the API is to receive an image, preprocess it using a script, then use Tesseract OCR to return the text from the image.
The issue I'm having is that I'm getting a 503 error and then a 200 success with the text, but the text is of no use, as my iOS app has already received the 503 error and continued on its way.
require 'sinatra'
require 'json'
require 'fileutils'
require 'tempfile'
require 'base64'
require 'puma_worker_killer'

PumaWorkerKiller.enable_rolling_restart

set :protection, except: [:json_csrf]

port = ENV['PORT'] || 8080
set :port, port
set :bind, '0.0.0.0'
post '/extractText' do
  begin
    base64_image = Base64.decode64(params[:image])
    image_file = Tempfile.new(['image', '.png'])
    image_file.write(base64_image)
    image_file.close
    `textcleaner #{image_file.path} #{image_file.path}`
    # `textdeskew #{image_file.path} #{image_file.path}`
    output = `tesseract #{image_file.path} --psm 6 --oem 2 stdout`
    p output
  rescue
    status 402
    return "Error reading image"
  end
  status 200
  return output
end
Here's the Heroku readout I get:
2017-10-23T20:22:00.548029+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=POST path="/extractText" host=tesseractimageserver.herokuapp.com request_id=7b179829-ef1f-4e18-844b-42b90a5c5c69 fwd="82.32.79.208" dyno=web.1 connect=1ms service=30945ms status=503 bytes=0 protocol=https
2017-10-23T20:22:42.864098+00:00 app[web.1]: "Returned text from reading image using tesseract"
2017-10-23T20:22:42.872633+00:00 app[web.1]: 82.32.79.208 - - [23/Oct/2017:20:22:42 +0000] "POST /extractText HTTP/1.1" 200 467 72.2921
Is there a way around this?
Here is how it works: Heroku has a 30-second timeout for any web request. When a request is routed to your application dyno and there is no response within this limit, Heroku drops the connection and reports an H12 error. But your dyno is still processing the request, even though nobody will ever use its response; the client is already gone. Because of this behavior it is recommended to put some kind of timeout mechanism on long-running actions. If you send many requests to such an endpoint, you can easily bring your whole application to a halt.
The workaround for this problem is to process your images in a background job. The first request registers a new job and returns its id, or some other kind of identifier. Your iOS app can then periodically poll the server with this id to check whether the job has finished.
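A minimal sketch of that pattern in plain Sinatra, reusing the shell commands from the question. This is illustration only: JOBS is an in-process hash, so it works on a single dyno and loses state on restart; in practice you would use a real queue (e.g. Sidekiq backed by Redis):

require 'sinatra'
require 'securerandom'
require 'tempfile'
require 'base64'
require 'json'

JOBS = {}               # in-memory job store, for illustration only
JOBS_MUTEX = Mutex.new

# Same preprocessing and OCR steps as the original endpoint.
def run_ocr(base64_image)
  file = Tempfile.new(['image', '.png'])
  file.write(Base64.decode64(base64_image))
  file.close
  `textcleaner #{file.path} #{file.path}`
  `tesseract #{file.path} --psm 6 --oem 2 stdout`
end

post '/extractText' do
  job_id = SecureRandom.uuid
  image = params[:image]
  JOBS_MUTEX.synchronize { JOBS[job_id] = { status: 'pending' } }

  # Do the slow work off the request thread and respond immediately,
  # well inside Heroku's 30-second window.
  Thread.new do
    begin
      text = run_ocr(image)
      JOBS_MUTEX.synchronize { JOBS[job_id] = { status: 'done', text: text } }
    rescue => e
      JOBS_MUTEX.synchronize { JOBS[job_id] = { status: 'error', error: e.message } }
    end
  end

  content_type :json
  { job_id: job_id }.to_json
end

# The iOS app polls this endpoint with the id it received.
get '/extractText/:job_id' do
  job = JOBS_MUTEX.synchronize { JOBS[params[:job_id]] }
  halt 404, 'unknown job' unless job
  content_type :json
  job.to_json
end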
I am working on an app and am looking to deploy it to Heroku. Full source here.
The primary error that I am seeing is:
heroku[router]: at=info method=GET path="/" host=cheesyparts.herokuapp.com request_id=25d2dbb5-e13a-4146-bb3a-9386f997c44c fwd="54.234.191.55" dyno=web.1 connect=2 service=3 status=404 bytes=417
Locally, when I attempt to launch via foreman, the same issue occurs. However, I am able to launch and run the server if I use ruby parts_server_control.rb run. Any tip is appreciated.
The config.ru looks like this:
require './parts_server'
run Sinatra::Application
and the control script parts_server_control.rb looks like:
require "bundler/setup"
require "daemons"
require "pathological"
require "thin"
Daemons.run_proc("parts_server", :monitor => true) do
  require "parts_server"
  Thin::Server.start("0.0.0.0", PORT, CheesyParts::Server)
end
The control script runs the app class CheesyParts::Server, but your config.ru (used by foreman and Heroku) assumes the app is written in the classic style and therefore uses the class Sinatra::Application. See the Sinatra docs on modular vs. classic application styles. Since nothing is added to Sinatra::Application, it is an "empty" app, and so you get 404 errors for any route.
The fix is to change the line
run Sinatra::Application
in your config.ru to
run CheesyParts::Server
so that this class is used as the main app.
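With that change the whole config.ru reads:

require './parts_server'
run CheesyParts::Server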
I have a modular Sinatra web app running using Thin, with EventMachine running additional tasks.
It works, but there's something a bit odd about the webserver: Any requests, whether successful or 404s, don't appear in the log output from Thin/Sinatra. And when I cancel the process, the server ends twice.
Here's the rough, basic structure of the app:
Procfile:
web: ruby app.rb
app.rb:
require 'thin'
require 'eventmachine'
require 'app/frontend'
EM.run do
  # Start some background tasks here...
  EM.add_periodic_timer(1200) do
    # Do a repeating task here...
  end

  App::Frontend.run!
end
app/frontend.rb:
require 'sinatra/base'
module App
  class Frontend < Sinatra::Base
    get '/' do
      # Display page
    end

    # etc
  end
end
When I do foreman start then I get:
16:50:00 web.1 | started with pid 76423
16:50:01 web.1 | [messages about EventMachine background tasks starting]
16:50:01 web.1 | == Sinatra/1.4.3 has taken the stage on 5000 for development with backup from Thin
16:50:01 web.1 | >> Thin web server (v1.5.1 codename Straight Razor)
16:50:01 web.1 | >> Maximum connections set to 1024
16:50:01 web.1 | >> Listening on 0.0.0.0:5000, CTRL+C to stop
Nothing more is output when I request existing web pages (which load OK) or non-existent ones. When I cancel the process I get:
^CSIGINT received
16:50:08 system | sending SIGTERM to all processes
SIGTERM received
16:50:08 web.1 | >> Stopping ...
16:50:08 web.1 | == Sinatra has ended his set (crowd applauds)
16:50:08 web.1 | >> Stopping ...
16:50:08 web.1 | == Sinatra has ended his set (crowd applauds)
16:50:08 web.1 | exited with code 0
That Sinatra finishes twice makes me think I'm somehow running it twice, and the one that's serving the web pages isn't being logged... but I don't know how I'm managing this!
The Sinatra docs on modular vs. classic style mention that some of the default settings differ between the two, and one of these is logging, which is turned off by default in modular apps.
Adding settings.logging = true to the top of class Frontend < Sinatra::Base gives me a log in my terminal window for the localhost:5000 requests.
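In other words, something like this in app/frontend.rb (set :logging, true and enable :logging are equivalent spellings):

require 'sinatra/base'

module App
  class Frontend < Sinatra::Base
    settings.logging = true   # or: set :logging, true

    get '/' do
      # Display page
    end
  end
end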
I don't think the second issue is that it's creating two processes, but rather that it's killing and re-starting the process right before closing the server. This can be solved by following the Sinatra recipe for using EventMachine with Sinatra, which is a little more complicated than what you've done. Here's their code, modified to fit your app:
The new app.rb:
require 'thin'
require 'eventmachine'
require 'sinatra/base'
require 'app/frontend'

EM.run do
  server = 'thin'
  host = '0.0.0.0'
  port = ENV['PORT'] || '8181'
  web_app = App::Frontend.new

  # Start some background tasks here...
  EM.add_periodic_timer(1200) do
    # Do a repeating task here...
  end

  dispatch = Rack::Builder.app do
    map '/' do
      run web_app
    end
  end

  Rack::Server.start({
    app: dispatch,
    server: server,
    Host: host,
    Port: port
  })
end
(original source, from Sinatra Recipes)
The use of ENV['PORT'] in the new app.rb lets you run multiple instances under foreman (e.g., foreman start -p 4000 -c web=2, which will run services on ports 4000 and 4001). And both of them appear in the log!
Hope that helps.
I have a very basic test app. When I execute this command, the server ignores the port I specify and runs Thin on port 4567. Why is the port I specify ignored?
$ ruby xxx.rb start -p 8000
== Sinatra/1.3.3 has taken the stage on 4567 for production with backup from Thin
>> Thin web server (v1.4.1 codename Chromeo)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:4567, CTRL+C to stop
xxx.rb file:
require 'thin'

rackup_file = "config.ru"
argv = ARGV
argv << ["-R", rackup_file] unless ARGV.include?("-R")
argv << ["-e", "production"] unless ARGV.include?("-e")
puts argv.flatten
Thin::Runner.new(argv.flatten).run!
config.ru file:
require 'sinatra'
require 'sinatra/base'

class SingingRain < Sinatra::Base
  get '/' do
    return 'hello'
  end
end

SingingRain.run!
Put this rackup options line at the very top of config.ru:
#\ -p 8000
Your problem is with the line:
SingingRain.run!
This is Sinatra’s run method, which tells Sinatra to start its own web server which runs on port 4567 by default. This is in your config.ru file, but config.ru is just Ruby, so this line is run as if it was in any other .rb file. This is why you see Sinatra start up on that port.
When you stop this server with CTRL-C, Thin will try to continue loading the config.ru file to determine what app to run. You don’t actually specify an app in your config.ru, so you’ll see something like:
^C>> Stopping ...
== Sinatra has ended his set (crowd applauds)
/Users/matt/.rvm/gems/ruby-1.9.3-p194/gems/rack-1.4.1/lib/rack/builder.rb:129:in `to_app': missing run or map statement (RuntimeError)
from config.ru:1:in `<main>'
...
This error is simply telling you that you didn’t actually specify an app to run in your config file.
Instead of SingingRain.run!, use:
run SingingRain
run is a Rack method that specifies which app to run. You could also do run SingingRain.new – Sinatra takes steps to enable you to use just the class itself here, or an instance.
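Putting it together, and using require 'sinatra/base' as for any modular-style app, the corrected config.ru would be:

require 'sinatra/base'

class SingingRain < Sinatra::Base
  get '/' do
    return 'hello'
  end
end

run SingingRain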
The output to this should now just be:
>> Thin web server (v1.4.1 codename Chromeo)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:8000, CTRL+C to stop
You don't get the == Sinatra/1.3.3 has taken the stage on 4567 for production with backup from Thin message because Sinatra isn't running its built-in server; it's just your Thin server as you configured it.
In your config.ru, add:
set :port, 8000
Also, I would highly suggest using Sinatra with something like Passenger + nginx, which makes deploying to production a breeze. But you need not worry about this if you are going to deploy to Heroku.