Jekyll Heroku deployment issue

I deployed a Jekyll site to Heroku. The logs indicate that the app's status has changed from "starting" to "up" (shown below).
Starting process with command `bundle exec puma -t 8:32 -w 3 -p 3641`
[4] Puma starting in cluster mode...
[4] * Version 3.6.0 (ruby 2.3.1-p112), codename: Sleepy Sunday Serenity
[4] * Min threads: 8, max threads: 32
[4] * Environment: production
[4] * Process workers: 3
[4] * Phased restart available
[4] * Listening on tcp://0.0.0.0:3641
[4] Use Ctrl-C to stop
Configuration file: /app/_config.yml
Configuration file: /app/_config.yml
Generating site: /app -> /app/_site
[4] - Worker 0 (pid: 6) booted, phase: 0
Generating site: /app -> /app/_site
[4] - Worker 2 (pid: 14) booted, phase: 0
Configuration file: /app/_config.yml
Generating site: /app -> /app/_site
[4] - Worker 1 (pid: 10) booted, phase: 0
heroku[web.1]: State changed from starting to up
But when I hit my URL it gives me "Jekyll is currently rendering the site. Please try again shortly." No matter how long I wait, it says the same thing. I repeated the deployment several times, but it still gives the same message.
Please advise.

I had this problem and fixed it by adding an assets:precompile rake task to my Rakefile. Originally, my Rakefile looked like this:
task :build do
  system('bundle exec jekyll build')
end
My build task alone wasn't hooking into Heroku's build process, so rack-jekyll kept serving its wait page indefinitely.
Here's the Rakefile that worked for me:
task :build do
  system('bundle exec jekyll build')
end

namespace :assets do
  task precompile: :build
end
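For context, the Rack side stays unchanged; a minimal config.ru for rack-jekyll typically looks like the sketch below (this is the gem's standard setup, not code taken from the question). With the assets:precompile hook above in place, the site is generated during Heroku's build, so Rack::Jekyll finds a finished _site directory at boot instead of showing its wait page.
# config.ru -- a minimal rack-jekyll setup (sketch)
require 'rack/jekyll'
run Rack::Jekyll.new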

Related

After upgrade to Rails 5, app no longer receiving requests

I updated one of my apps to Rails 5 and also upgraded the Ruby version to 2.3.1. The app already used Puma before the Rails 5 upgrade and was deployed on a DigitalOcean droplet.
When I start rails server locally, I get the normal output in my Rails log, which I've copied below.
=> Booting Puma
=> Rails 5.0.0 application starting in development on http://localhost:3000
=> Run `rails server -h` for more startup options
[14669] Puma starting in cluster mode...
[14669] * Version 3.4.0 (ruby 2.3.1-p112), codename: Owl Bowl Brawl
[14669] * Min threads: 5, max threads: 5
[14669] * Environment: development
[14669] * Process workers: 2
[14669] * Preloading application
[14669] * Listening on tcp://localhost:3000
[14669] Use Ctrl-C to stop
[14669] - Worker 1 (pid: 14684) booted, phase: 0
[14669] - Worker 0 (pid: 14683) booted, phase: 0
Everything looks normal to me. When I visit localhost:3000, the browser's request hangs indefinitely. There is no further activity in the Rails log acknowledging that any request is being received.
Has anyone encountered this type of issue, or does anyone know of any potential causes?
I resolved this issue, and @marvindanig, who was experiencing the same problem, confirmed that the 'tmp' folder needed to be cleared. There is a rake task in Rails to do so:
rake tmp:clear
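If the app is deployed on a server (as in the DigitalOcean setup above) rather than running locally, the same task just needs to run in the deployed app's directory under the production environment; a sketch, with an illustrative path:
cd /var/www/myapp/current && RAILS_ENV=production bundle exec rake tmp:clear
Then restart Puma so the workers come back up with a clean tmp directory.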

Erlang/Webmachine doesn't start on Heroku

I've been trying to set up a Webmachine app on Heroku, using the recommended buildpack. My Procfile is
# Procfile
web: sh ./rel/app_name/bin/app_name console
Unfortunately this doesn't start the dyno correctly; it fails with
2015-12-08T16:34:55.349362+00:00 heroku[web.1]: Starting process with command `sh ./rel/app_name/bin/app_name console`
2015-12-08T16:34:57.387620+00:00 app[web.1]: Exec: /app/rel/app_name/erts-7.0/bin/erlexec -boot /app/rel/app_name/releases/1/app_name -mode embedded -config /app/rel/app_name/releases/1/sys.config -args_file /app/rel/app_name/releases/1/vm.args -- console
2015-12-08T16:34:57.387630+00:00 app[web.1]: Root: /app/rel/app_name
2015-12-08T16:35:05.396922+00:00 app[web.1]: 16:35:05.396 [info] Application app_name started on node 'app_name@127.0.0.1'
2015-12-08T16:35:05.388846+00:00 app[web.1]: 16:35:05.387 [info] Application lager started on node 'app_name@127.0.0.1'
2015-12-08T16:35:05.399281+00:00 app[web.1]: Eshell V7.0 (abort with ^G)
2015-12-08T16:35:05.399283+00:00 app[web.1]: (app_name@127.0.0.1)1> *** Terminating erlang ('app_name@127.0.0.1')
2015-12-08T16:35:06.448742+00:00 heroku[web.1]: Process exited with status 0
2015-12-08T16:35:06.441993+00:00 heroku[web.1]: State changed from starting to crashed
But when I run the same command via the Heroku Toolbelt, it starts up with the console.
$ heroku run "./rel/app_name/bin/app_name console"
Running ./rel/app_name/bin/app_name console on tp-api... up, run.4201
Exec: /app/rel/app_name/erts-7.0/bin/erlexec -boot /app/rel/app_name/releases/1/app_name -mode embedded -config /app/rel/app_name/releases/1/sys.config -args_file /app/rel/app_name/releases/1/vm.args -- console
Root: /app/rel/app_name
Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false]
16:38:43.194 [info] Application lager started on node 'app_name@127.0.0.1'
16:38:43.196 [info] Application app_name started on node 'app_name@127.0.0.1'
Eshell V7.0 (abort with ^G)
(app_name@127.0.0.1)1>
Is there a way to start the node, maybe as a daemon, on the dyno(s)?
Note: I've tried using start instead of console, but that did not yield any success.
So after much tinkering and trial and error, I figured out what was wrong. Heroku does not like the interactive shell to be there, which is why starting the Erlang app through console crashes.
I've adjusted my Procfile to the following:
# Procfile
web: erl -pa $PWD/ebin $PWD/deps/*/ebin -noshell -boot start_sasl -s reloader -s app_name -config ./rel/app_name/releases/1/sys
This boots up the application app_name using the release's sys.config configuration file. What was crucial here is the -noshell option in the command, which allows Heroku to run the process the way it expects.

Akka Scheduling in Heroku with Play 2 Framework

I can't get Akka's schedule method to work properly on Heroku. It works fine locally and prints "Heartbeat" to the log.
Here is the file in question: https://github.com/magnusart/actor-test/blob/master/app/Global.scala and snippet below.
override def onStart(app: Application) {
  Logger.debug("Starting application")
  Akka.system(app).scheduler.schedule(2 seconds, 10 seconds) {
    Logger.debug("Heartbeat")
  }
}
The full application is here (isolated for this purpose, also on actor-test.herokuapp.com).
https://github.com/magnusart/actor-test
What does happen after startup is that I see Starting application in the logs, and then nothing further after that:
2012-05-26T16:29:40+00:00 heroku[web.1]: Starting process with command `target/start -Dhttp.port=43943 -Xmx384m -Xss512k -XX:+UseCompressedOops`
2012-05-26T16:29:41+00:00 app[web.1]: Play server process ID is 3
2012-05-26T16:29:42+00:00 app[web.1]: [debug] application - Starting application
2012-05-26T16:29:42+00:00 app[web.1]: [info] play - Starting application default Akka system.
2012-05-26T16:29:42+00:00 app[web.1]: [info] play - Application started (Prod)
2012-05-26T16:29:42+00:00 app[web.1]: [info] play - Listening for HTTP on port 43943...
So the scheduled actor doesn't actually seem to start (which it of course does locally). I'm on Heroku Cedar. I grateful for any hints as to why this isn't working, what am I missing?
BR Magnus Andersson
Update
From what I've found, this seems to be a bug in Play 2 (I'm running version 2.0.1) and not be related to Heroku. I have updated a Play 2 Lighthouse ticket with relevant information. The ticket can be found here: https://play.lighthouseapp.com/projects/82401-play-20/tickets/448-play-dist-ignores-loggerxml#ticket-448-5
The problem seems to come from your logger settings, because in your heartbeat you print a message at the "debug" level.
AFAIK, Heroku runs your Play app in "production" mode (i.e. "play start"), where the log level is set to "info", so debug messages are never printed on Heroku. Logging the heartbeat at the "info" level, or configuring the production logger to allow "debug", should make it show up.

Deploying Play 2.0 app on Heroku

So I am kind of new to setting up servers, and I have been struggling with various SQL issues all night. I think the only thing standing between me and a successfully running Play app is this error:
Starting process with command `target/start -Dhttp.port=80 `
2012-04-04T05:58:52+00:00 app[web.1]: Play server process ID is 1
2012-04-04T05:58:53+00:00 app[web.1]: [info] play - database [default] connected at jdbc:mysql://us-cdbr-east.cleardb.com/heroku_cd914b667dae168
2012-04-04T05:58:56+00:00 app[web.1]: [info] play - Application started (Prod)
2012-04-04T05:58:56+00:00 app[web.1]: Oops, cannot start the server.
2012-04-04T05:58:56+00:00 app[web.1]: org.jboss.netty.channel.ChannelException: Failed to bind to: /0.0.0.0:80
... more errors
Does anyone spot any problems? Do I need to pass any Java options?
I tried specifying a port myself in the Procfile and got a different error message:
2012-04-04T07:01:36+00:00 heroku[web.1]: Starting process with command `target/start -Dhttp.port=2000`
2012-04-04T07:01:37+00:00 app[web.1]: Play server process ID is 1
2012-04-04T07:01:40+00:00 app[web.1]: [info] play - database [default] connected at jdbc:mysql://us-cdbr-east.cleardb.com/heroku_cd914b667dae168
2012-04-04T07:01:45+00:00 app[web.1]: [info] play - Application started (Prod)
2012-04-04T07:01:45+00:00 app[web.1]: [info] play - Listening for HTTP on port 2000...
2012-04-04T07:01:46+00:00 heroku[web.1]: Error R11 (Bad bind) -> Process bound to port 2000, should be 47248 (see environment variable PORT)
2012-04-04T07:01:46+00:00 heroku[web.1]: Stopping process with SIGKILL
2012-04-04T07:01:47+00:00 heroku[web.1]: Process exited with status 137
2012-04-04T07:01:47+00:00 heroku[web.1]: State changed from starting to crashed
I have no idea what is happening. How do I change this environment variable? This Heroku process model is very confusing to me.
I think the problem is that you're not allowing Heroku to specify the port. Googling your error turns up: https://devcenter.heroku.com/articles/error-codes#r11__bad_bind
So instead of doing this:
web: target/start -Dhttp.port=80
Do this
web: target/start -Dhttp.port=$PORT
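To sanity-check the binding locally before pushing, Foreman (bundled with the Heroku Toolbelt at the time) can run the same Procfile and assign a PORT to the process; something like:
foreman start
Then hit http://localhost:5000 (Foreman's default base port) to confirm the app is reading the PORT it was given.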
James has a nice writeup on getting a more advanced Play 2.0 app deployed here.

Unicorn unable to write pid file

I am deploying a Ruby on Rails app to a Linode VPS using Capistrano. I am using Unicorn as the application server and Nginx as the proxy. My problem is that I am not able to start Unicorn because of an apparent permissions issue, but I'm having a hard time tracking it down.
Unicorn is started using this Capistrano task:
task :start, :roles => :app, :except => { :no_release => true } do
  run <<-CMD
    cd #{current_path} && #{unicorn_bin} -c #{unicorn_config} -E #{rails_env} -D
  CMD
end
I get back an ArgumentError indicating that the directory for the pid file is not writable.
cap unicorn:start master [d4447d3] modified
* executing `unicorn:start'
* executing "cd /home/deploy/apps/gogy/current && /home/deploy/apps/gogy/current/bin/unicorn -c /home/deploy/apps/gogy/shared/config/unicorn.rb -E production -D"
servers: ["66.228.52.4"]
[66.228.52.4] executing command
** [out :: 66.228.52.4] /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/lib/unicorn/configurator.rb:88:in `reload':
** [out :: 66.228.52.4] directory for pid=/home/deploy/apps/shared/pids/unicorn.pid not writable (ArgumentError)
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/lib/unicorn/configurator.rb:84:in `each'
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/lib/unicorn/configurator.rb:84:in `reload'
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/lib/unicorn/configurator.rb:65:in `initialize'
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/lib/unicorn/http_server.rb:102:in `new'
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/lib/unicorn/http_server.rb:102:in `initialize'
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/bin/unicorn:121:in `new'
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/bin/unicorn:121
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/current/bin/unicorn:16:in `load'
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/current/bin/unicorn:16
** [out :: 66.228.52.4] master failed to start, check stderr log for details
command finished in 1032ms
failed: "rvm_path=/usr/local/rvm /usr/local/rvm/bin/rvm-shell 'default' -c 'cd /home/deploy/apps/gogy/current && /home/deploy/apps/gogy/current/bin/unicorn -c /home/deploy/apps/gogy/shared/config/unicorn.rb -E production -D'" on 66.228.52.4
Finally, here are the relevant sections of my Unicorn configuration file (unicorn.rb):
# Ensure that we're running in the production environment
rails_env = ENV['RAILS_ENV'] || 'production'
# User to run under
user 'deploy', 'deploy'
# We will spawn off two worker processes and one master process
worker_processes 2
# set the default working directory
working_directory "/home/deploy/apps/gogy/current"
# This loads the application in the master process before forking
# worker processes
# Read more about it here:
# http://unicorn.bogomips.org/Unicorn/Configurator.html
preload_app true
timeout 30
# This is where we specify the socket.
# We will point the upstream Nginx module to this socket later on
listen "/home/deploy/apps/shared/sockets/unicorn.sock", :backlog => 64
pid "/home/deploy/apps/shared/pids/unicorn.pid"
# Set the path of the log files
stderr_path "/home/deploy/apps/gogy/current/log/unicorn.stderr.log"
stdout_path "/home/deploy/apps/gogy/current/log/unicorn.stdout.log"
I'm deploying with Capistrano under the 'deploy' user and group and that's what Unicorn should be run under also.
Does anyone have any idea why Unicorn can't write out the pid file? Any help would be greatly appreciated!
Mike
Actually, the error message has already told you why:
directory for pid=/home/deploy/apps/shared/pids/unicorn.pid not writable
So, does the directory /home/deploy/apps/shared/pids exist? If not, you should call mkdir to create it.
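For example, a small Capistrano task in the same style as the unicorn:start task above could create the missing directories before Unicorn is started; the paths are taken from unicorn.rb, and the task name is just illustrative:
# create the shared pid and socket directories Unicorn expects (sketch)
task :setup_shared_dirs, :roles => :app do
  run "mkdir -p /home/deploy/apps/shared/pids /home/deploy/apps/shared/sockets"
end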
The Unicorn process is running in the background (started with -D). Type
ps aux | grep unicorn
kill the running Unicorn process, and then start it once again.
In Capistrano 3, if we change roles to :all, then during deployment Capistrano says:
WARN [SKIPPING] No Matching Host for .....
and after deployment none of the symlinks work anymore. And if the tmp/pids folder is in the symlink array, then Unicorn can't find the tmp/pids folder and says unicorn.pid is not writable.
So we must use roles: %w{web app db} instead of roles: :all.
A sample server line in production.rb:
server 'YOUR_SERVER_IP', user: 'YOUR_DEPLOY_USER', roles: %w{web app db}, ssh_options: { forward_agent: true }
