I updated one of my apps to Rails 5 and upgraded the Ruby version to 2.3.1 as well. The app already used Puma prior to the Rails 5 upgrade and is deployed on a DigitalOcean droplet.
When I start rails server locally, I get the normal output in my Rails log, which I've copied below.
=> Booting Puma
=> Rails 5.0.0 application starting in development on http://localhost:3000
=> Run `rails server -h` for more startup options
[14669] Puma starting in cluster mode...
[14669] * Version 3.4.0 (ruby 2.3.1-p112), codename: Owl Bowl Brawl
[14669] * Min threads: 5, max threads: 5
[14669] * Environment: development
[14669] * Process workers: 2
[14669] * Preloading application
[14669] * Listening on tcp://localhost:3000
[14669] Use Ctrl-C to stop
[14669] - Worker 1 (pid: 14684) booted, phase: 0
[14669] - Worker 0 (pid: 14683) booted, phase: 0
Everything looks normal to me. When I visit localhost:3000, the browser's request stays pending indefinitely, and there is no further activity in the Rails log acknowledging that any request was received.
Has anyone encountered this type of issue, or does anyone know of potential causes?
I resolved this issue (and #marvindanig, who was experiencing the same problem, confirmed the fix): the 'tmp' folder needed to be cleared. There is a rake task in Rails to do so:
rake tmp:clear
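As an aside (general Rails knowledge, not part of the original answer, and the exact behaviour varies a little between Rails versions): tmp:clear has more targeted siblings if you only want to wipe part of tmp/. You can list what your version provides with:
rake -T tmp             # list the available tmp:* tasks
rake tmp:cache:clear    # clear only tmp/cache
rake tmp:sockets:clear  # clear only tmp/sockets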
I'm trying to implement a small WEBrick server that shuts itself down when there have been no requests for x seconds. However, I'm getting nowhere: my very first attempt at simply exiting after 2 seconds fails. Here's the simple code that doesn't work.
server = WEBrick::HTTPServer.new(:Port => 8000)
WEBrick::Utils::TimeoutHandler.register(2, Timeout::Error)
server.start
I thought that would simply exit the process after 2 seconds. Here's what actually happens:
[2020-01-19 15:41:10] INFO WEBrick 1.4.2
[2020-01-19 15:41:10] INFO ruby 2.5.1 (2018-03-29) [x86_64-linux-gnu]
[2020-01-19 15:41:10] INFO WEBrick::HTTPServer#start: pid=16622 port=8000
[2020-01-19 15:41:12] ERROR Timeout::Error: execution timeout
/usr/lib/ruby/2.5.0/webrick/server.rb:170:in `select'
And then the process keeps running. I have to ctrl-c to end it.
What's the correct way to shut down a server and end the process after a timeout?
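For what it's worth, the Timeout::Error raised by TimeoutHandler is rescued and logged inside WEBrick's accept loop, which is why the server keeps running. One approach that should work is to call the server's own shutdown method from a watchdog thread; a minimal sketch (my own, not from the original thread):
require 'webrick'

server = WEBrick::HTTPServer.new(:Port => 8000)

# Watchdog thread: after the delay, ask WEBrick to shut down.
# server.start returns once shutdown is called, so the process then exits.
Thread.new do
  sleep 2
  server.shutdown
end

server.start
For the "no requests for x seconds" behaviour, the watchdog would instead loop and compare Time.now against a last-request timestamp that each request handler updates.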
I deployed a Jekyll site to Heroku. The logs indicate that the app's state has changed from "starting" to "up" (shown below).
Starting process with command `bundle exec puma -t 8:32 -w 3 -p 3641`
[4] Puma starting in cluster mode...
[4] * Version 3.6.0 (ruby 2.3.1-p112), codename: Sleepy Sunday Serenity
[4] * Min threads: 8, max threads: 32
[4] * Environment: production
[4] * Process workers: 3
[4] * Phased restart available
[4] * Listening on tcp://0.0.0.0:3641
[4] Use Ctrl-C to stop
Configuration file: /app/_config.yml
Configuration file: /app/_config.yml
Generating site: /app -> /app/_site
[4] - Worker 0 (pid: 6) booted, phase: 0
Generating site: /app -> /app/_site
[4] - Worker 2 (pid: 14) booted, phase: 0
Configuration file: /app/_config.yml
Generating site: /app -> /app/_site
[4] - Worker 1 (pid: 10) booted, phase: 0
heroku[web.1]: State changed from starting to up
But when I hit my URL it gives me "Jekyll is currently rendering the site. Please try again shortly." No matter how long I wait, it says the same thing. I've repeated the deployment several times, but it still gives the same message.
Please advise.
I had this problem and fixed it by adding an assets:precompile rake task to my Rakefile. Originally, my Rakefile looked like this:
task :build do
system('bundle exec jekyll build')
end
My build task alone wasn't hooking into Heroku's build process (the Ruby buildpack runs rake assets:precompile during deployment, but nothing ever invoked my build task), so rack-jekyll kept serving its wait page indefinitely.
Here's the Rakefile that worked for me:
task :build do
system('bundle exec jekyll build')
end
namespace :assets do
task precompile: :build
end
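For context, a typical rack-jekyll config.ru (the piece that serves the generated _site and shows the wait page while the site is still building) looks roughly like this; a sketch based on the gem's README, not taken from the poster's app:
require 'rack/jekyll'
run Rack::Jekyll.new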
I'm experiencing an unresponsive socket with my Puma setup after a random amount of time. Up to this point I don't have a clue what's causing the issue. I was hoping somebody here could help me with some answers or point me in the right direction. My setup is the following:
I'm using the official Docker ruby-2.2.3-slim image together with the latest Puma release, 2.15.3, and I've also installed Nginx as a reverse proxy. But I'm already fairly sure Nginx isn't the problem here, because I verified whether the socket itself was working using this script. The socket wasn't working; I got a timeout there as well, so I could rule out Nginx.
This is a testing environment, so the server isn't under any extreme load. I've also checked memory consumption; it still has several GB free, so that shouldn't be the issue either.
What prompted me to look at the Puma socket was the error message in my Nginx error log:
upstream timed out (110: Connection timed out) while reading response header from upstream
I also couldn't find anything in Puma's logs indicating what is going wrong. Here is my Puma config:
threads 0, 16
app_dir = ENV.fetch('APP_HOME')
environment ENV['RAILS_ENV']
daemonize
bind "unix://#{app_dir}/sockets/puma.sock"
stdout_redirect "#{app_dir}/log/puma.stdout.log", "#{app_dir}/log/puma.stderr.log", true
pidfile "#{app_dir}/pids/puma.pid"
state_path "#{app_dir}/pids/puma.state"
activate_control_app
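# Re-establish the ActiveRecord connection in every forked worker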
on_worker_boot do
require 'active_record'
ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[ENV['RAILS_ENV']])
end
And this is the content of my Puma state file:
---
pid: 43
config: !ruby/object:Puma::Configuration
cli_options:
conf:
options:
:min_threads: 0
:max_threads: 16
:quiet: false
:debug: false
:binds:
- unix:///APP/sockets/puma.sock
:workers: 1
:daemon: true
:mode: :http
:before_fork: []
:worker_timeout: 60
:worker_boot_timeout: 60
:worker_shutdown_timeout: 30
:environment: staging
:redirect_stdout: "/APP/log/puma.stdout.log"
:redirect_stderr: "/APP/log/puma.stderr.log"
:redirect_append: true
:pidfile: "/APP/pids/puma.pid"
:state: "/APP/pids/puma.state"
:control_url: unix:///tmp/puma-status-1449260516541-37
:config_file: config/puma.rb
:control_url_temp: "/tmp/puma-status-1449260516541-37"
:control_auth_token: cda8879717be7a645ea323d931b88d4b
:tag: APP
The application itself is a Rails app on the latest version, 4.2.5; it's deployed on GCE (Google Container Engine).
If somebody could give me some pointers on how to debug this further, it would be very much appreciated, because right now I don't see any output anywhere that could help me.
EDIT
I replaced the Unix socket with a TCP connection to Puma, with the same result: it still hangs after some time.
I'd start with:
How many requests get processed successfully per instance of puma?
Make sure you log the beginning and end of each request along with the id of the thread executing it (see the middleware sketch after this answer); what do you see?
Not knowing more about your application, I'd say it's likely the threads get stuck doing some long/blocking calls without timeouts or spinning on some computation until the whole thread pool gets depleted.
We'll see.
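To illustrate the second suggestion, here is a minimal Rack middleware sketch (my own illustration; the class name and log format are made up) that logs the start and end of every request together with the id of the thread serving it, which makes stuck threads easy to spot:
require 'logger'

class RequestThreadLogger
  def initialize(app, logger = Logger.new($stdout))
    @app = app
    @logger = logger
  end

  def call(env)
    tid = Thread.current.object_id
    started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    @logger.info("start tid=#{tid} #{env['REQUEST_METHOD']} #{env['PATH_INFO']}")
    @app.call(env)
  ensure
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
    @logger.info("end   tid=#{tid} #{env['PATH_INFO']} #{(elapsed * 1000).round}ms")
  end
end

# In a Rails app this could be registered in config/application.rb:
#   config.middleware.insert_before 0, RequestThreadLogger
If you see requests starting but never ending, the thread pool is being drained by blocked calls.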
I finally found out why my application was behaving the way it was.
After trying a TCP connection and even switching to Unicorn, I started looking into other possible sources.
That's when I thought my connection to Google Cloud SQL might be the problem. The Cloud SQL FAQ mentions that you have to tweak your Compute Engine instances to keep your DB connections open: idle TCP connections are otherwise dropped after a few minutes, so a shorter keepalive prevents the database connection from going stale. I performed the steps they recommend and that solved the problem for me; I've added them below just in case:
# Display the current tcp_keepalive_time value.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
# Set tcp_keepalive_time to 60 seconds and make it permanent across reboots.
$ echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
# Apply the change.
$ sudo /sbin/sysctl --load=/etc/sysctl.conf
# Display the tcp_keepalive_time value to verify the change was applied.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
How do I get Redmine to start fast on Linux (CentOS)?
I upgraded everything last week: latest Redmine, Ruby, Passenger, etc.
I've tried just about everything I could find in the Redmine forum and other posts to make it start up faster. The issue: when the Redmine site is requested after a few hours of being idle, it starts slowly, but after that it's blazing fast.
I am using the Apache web server with Passenger. Below is my current Apache config; please advise, as I am out of ideas:
LoadModule passenger_module /usr/local/rvm/gems/ruby-1.9.3-p448/gems/passenger-4.0.19/buildout/apache2/mod_passenger.so
PassengerRoot /usr/local/rvm/gems/ruby-1.9.3-p448/gems/passenger-4.0.19
PassengerDefaultRuby /usr/local/rvm/wrappers/ruby-1.9.3-p448/ruby
# Refs:
# http://stackoverflow.com/questions/8235309/redmine-perfomance-inconsistency
# http://www.redmine.org/boards/2/topics/31783
# This option should be 0, but has an issue: https://code.google.com/p/phusion-passenger/issues/detail?id=904
PassengerPoolIdleTime 999999
PassengerMinInstances 2
PassengerHighPerformance on
PassengerPreStart https://myhost/redmine
PassengerMaxPoolSize 5
PassengerMaxInstancesPerApp 4
PassengerStatThrottleRate 10
RailsAppSpawnerIdleTime 0
PassengerMaxPreloaderIdleTime 0
RailsBaseURI /redmine
RailsEnv production
I solved this by setting up a cron job that requests the Redmine homepage every 15 minutes:
*/15 * * * * /usr/bin/curl http://redmine_server/ --stderr - > /dev/null
I can't get Akka's schedule method to work properly on Heroku. It works fine locally and prints "Heartbeat" to the log.
Here is the file in question: https://github.com/magnusart/actor-test/blob/master/app/Global.scala and snippet below.
override def onStart(app: Application) {
Logger.debug("Starting application")
Akka.system(app).scheduler.schedule(2 seconds, 10 seconds) {
Logger.debug("Heartbeat")
}
}
The full application is here (isolated for this purpose, also on actor-test.herokuapp.com).
https://github.com/magnusart/actor-test
What does happen after startup is that I see "Starting application" in the logs, and then nothing further:
2012-05-26T16:29:40+00:00 heroku[web.1]: Starting process with command `target/start -Dhttp.port=43943 -Xmx384m -Xss512k -XX:+UseCompressedOops`
2012-05-26T16:29:41+00:00 app[web.1]: Play server process ID is 3
2012-05-26T16:29:42+00:00 app[web.1]: [debug] application - Starting application
2012-05-26T16:29:42+00:00 app[web.1]: [info] play - Starting application default Akka system.
2012-05-26T16:29:42+00:00 app[web.1]: [info] play - Application started (Prod)
2012-05-26T16:29:42+00:00 app[web.1]: [info] play - Listening for HTTP on port 43943...
So the scheduled actor doesn't actually seem to start (which it of course does locally). I'm on Heroku Cedar. I'd be grateful for any hints as to why this isn't working; what am I missing?
BR Magnus Andersson
Update
From what I've found, this seems to be a bug in Play 2 (I'm running version 2.0.1) and not related to Heroku. I have updated a Play 2 Lighthouse ticket with the relevant information. The ticket can be found here: https://play.lighthouseapp.com/projects/82401-play-20/tickets/448-play-dist-ignores-loggerxml#ticket-448-5
The problem seems to come from your logger settings: your heartbeat prints a message at the "debug" level.
AFAIK, Heroku runs your Play app in "production" mode (i.e. "play start"), so the log level is set to "info" and debug messages are never printed on Heroku.
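If that's the cause, two simple remedies come to mind (a sketch assuming Play's default logging configuration, which the poster hasn't confirmed): log the heartbeat at a level that production emits, e.g.
// Log at info level so the message survives the production threshold
Akka.system(app).scheduler.schedule(2 seconds, 10 seconds) {
  Logger.info("Heartbeat")
}
or lower the threshold for the application logger by setting logger.application=DEBUG in conf/application.conf.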