I am deploying a Ruby on Rails app to a Linode VPS using Capistrano. I am using Unicorn as the application server and Nginx as the proxy. My problem is that I cannot start Unicorn because of an apparent permissions issue, but I'm having a hard time tracking it down.
Unicorn is started using this Capistrano task:
task :start, :roles => :app, :except => { :no_release => true } do
  run <<-CMD
    cd #{current_path} && #{unicorn_bin} -c #{unicorn_config} -E #{rails_env} -D
  CMD
end
I get back an ArgumentError indicating that the directory for the pid file is not writable.
cap unicorn:start master [d4447d3] modified
* executing `unicorn:start'
* executing "cd /home/deploy/apps/gogy/current && /home/deploy/apps/gogy/current/bin/unicorn -c /home/deploy/apps/gogy/shared/config/unicorn.rb -E production -D"
servers: ["66.228.52.4"]
[66.228.52.4] executing command
** [out :: 66.228.52.4] /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/lib/unicorn/configurator.rb:88:in `reload':
** [out :: 66.228.52.4] directory for pid=/home/deploy/apps/shared/pids/unicorn.pid not writable (ArgumentError)
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/lib/unicorn/configurator.rb:84:in `each'
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/lib/unicorn/configurator.rb:84:in `reload'
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/lib/unicorn/configurator.rb:65:in `initialize'
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/lib/unicorn/http_server.rb:102:in `new'
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/lib/unicorn/http_server.rb:102:in `initialize'
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/bin/unicorn:121:in `new'
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/shared/bundle/ruby/1.8/gems/unicorn-4.1.1/bin/unicorn:121
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/current/bin/unicorn:16:in `load'
** [out :: 66.228.52.4] from /home/deploy/apps/gogy/current/bin/unicorn:16
** [out :: 66.228.52.4] master failed to start, check stderr log for details
command finished in 1032ms
failed: "rvm_path=/usr/local/rvm /usr/local/rvm/bin/rvm-shell 'default' -c 'cd /home/deploy/apps/gogy/current && /home/deploy/apps/gogy/current/bin/unicorn -c /home/deploy/apps/gogy/shared/config/unicorn.rb -E production -D'" on 66.228.52.4
Finally, here are the relevant sections of my Unicorn configuration file (unicorn.rb):
# Ensure that we're running in the production environment
rails_env = ENV['RAILS_ENV'] || 'production'
# User to run under
user 'deploy', 'deploy'
# We will spawn off two worker processes and one master process
worker_processes 2
# set the default working directory
working_directory "/home/deploy/apps/gogy/current"
# This loads the application in the master process before forking
# worker processes
# Read more about it here:
# http://unicorn.bogomips.org/Unicorn/Configurator.html
preload_app true
timeout 30
# This is where we specify the socket.
# We will point the upstream Nginx module to this socket later on
listen "/home/deploy/apps/shared/sockets/unicorn.sock", :backlog => 64
pid "/home/deploy/apps/shared/pids/unicorn.pid"
# Set the path of the log files
stderr_path "/home/deploy/apps/gogy/current/log/unicorn.stderr.log"
stdout_path "/home/deploy/apps/gogy/current/log/unicorn.stdout.log"
I'm deploying with Capistrano under the 'deploy' user and group and that's what Unicorn should be run under also.
Does anyone have any idea why Unicorn can't write out the pid file? Any help would be greatly appreciated!
Mike
Actually, the error message has already told you why:
directory for pid=/home/deploy/apps/shared/pids/unicorn.pid not writable
So, does the directory /home/deploy/apps/shared/pids exist? If not, you should call mkdir to create it.
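If you want Capistrano to take care of this, here is a minimal sketch of a one-off task (the task name is arbitrary; it assumes Capistrano 2, where shared_path resolves to your app's shared directory, and that the paths match whatever pid and listen point at in unicorn.rb):
task :setup_dirs, :roles => :app, :except => { :no_release => true } do
  # create the directories Unicorn needs for its pid file and socket
  run "mkdir -p #{shared_path}/pids #{shared_path}/sockets"
end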
The Unicorn process is running in the background (-D). Type
ps aux | grep unicorn
then kill the running Unicorn process and start it once again.
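If you would rather drive that from Capistrano as well, a stop task along these lines is one option (just a sketch; it assumes the pid file actually lives where the pid directive in your unicorn.rb says it does):
task :stop, :roles => :app, :except => { :no_release => true } do
  # QUIT asks the Unicorn master to shut down gracefully; use TERM for an immediate stop
  run "kill -QUIT `cat #{shared_path}/pids/unicorn.pid`"
end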
In Capistrano 3, if we change roles to :all, then during deployment Capistrano says:
WARN [SKIPPING] No Matching Host for .....
and after deployment none of the symlinks work anymore. And if the tmp/pids folder is in the symlink array, Unicorn can't find the tmp/pids folder and says unicorn.pid is not writable.
So we must use roles: %w{web app db} instead of roles: :all.
Sample server line in production.rb:
server 'YOUR_SERVER_IP', user: 'YOUR_DEPLOY_USER', roles: %w{web app db}, ssh_options: { forward_agent: true }
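For reference, the symlink array mentioned above is the linked_dirs setting in deploy.rb; a typical line looks like the sketch below (the directory names are only examples, keep whichever ones your app actually shares). Once the host is matched by the right roles, tmp/pids gets linked into shared and Unicorn can write its pid file again.
set :linked_dirs, fetch(:linked_dirs, []).push('log', 'tmp/pids', 'tmp/sockets')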
I deployed a Jekyll site to Heroku. Logs indicate that the app status has changed from "starting" to "up" (shown below).
Starting process with command `bundle exec puma -t 8:32 -w 3 -p 3641`
[4] Puma starting in cluster mode...
[4] * Version 3.6.0 (ruby 2.3.1-p112), codename: Sleepy Sunday Serenity
[4] * Min threads: 8, max threads: 32
[4] * Environment: production
[4] * Process workers: 3
[4] * Phased restart available
[4] * Listening on tcp://0.0.0.0:3641
[4] Use Ctrl-C to stop
Configuration file: /app/_config.yml
Configuration file: /app/_config.yml
Generating site: /app -> /app/_site
[4] - Worker 0 (pid: 6) booted, phase: 0
Generating site: /app -> /app/_site
[4] - Worker 2 (pid: 14) booted, phase: 0
Configuration file: /app/_config.yml
Generating site: /app -> /app/_site
[4] - Worker 1 (pid: 10) booted, phase: 0
heroku[web.1]: State changed from starting to up
But when I hit my URL it gives me "Jekyll is currently rendering the site. Please try again shortly."
No matter how long I wait, it says the same thing. I repeated the deployment several times, but it still gives the same message.
Please advise.
I had this problem and fixed it by adding an assets:precompile rake task to my Rakefile. Originally, my Rakefile looked like this:
task :build do
  system('bundle exec jekyll build')
end
My build task alone wasn't hooking into Heroku's build process, causing rack-jekyll to serve its wait page infinitely.
Here's the Rakefile that worked for me:
task :build do
  system('bundle exec jekyll build')
end

namespace :assets do
  task precompile: :build
end
I am using the Chef community Java cookbook to install Java on CentOS 7.2. I have an LWRP recipe that is not working.
I build up my install parameters via the java_ark resource:
op_sys = node['os']
# Used to get the required java update from the environment file
java_ver_update = node['java_ver']

# Logic for each OS
if op_sys == 'linux'
  # Java_ark, which is used to define the correct install attributes for each OS type (win/linux)
  install_dir = node['install_dir']

  java_ark "jdk" do
    url 'http://sv-dc01.sv.local/install_artifacts/java/oracle/JRE/jre-' + "#{java_ver_update}" + '-linux-x64.tar.gz'
    app_home install_dir
    owner 'root'
    group 'wheel'
    app_home_mode 774
    action :install
  end

  # Set the folder permissions
  execute "chown-dir" do
    command "chmod -R 774 #{install_dir}"
    action :run
  end
end
Here is my environment file, where I have set some node attributes to be used in the main recipe:
name 'env_workstation_dubbo'
description "Environment Workstation Dubbo"
cookbook_versions({
"ohai" => "> 0.0.1",
"java" => "> 0.1.0",
"install_java" => "> 0.0.1"
})
$environment = Hash.new{|h,k| h[k]=Hash.new(&h.default_proc) }
$override = Hash.new{|h,k| h[k]=Hash.new(&h.default_proc) }
$override['java']['jdk_version'] = '8'
$override['java']['install_flavor'] = 'oracle'
$override['java']['oracle']['accept_oracle_download_terms'] = true
$override['java']['set_default'] = false
# Custom attributes/variables to be placed here
$override['java_ver'] = '8u77'
$override['install_dir'] = '/applications/'
default_attributes(Chef::Mixin::DeepMerge.merge($_default_environment, $environment))
override_attributes($override)
And here is what happens during the sudo chef-client run on the CentOS machine I am using for testing:
Starting Chef Client, version 12.16.42
resolving cookbooks for run list: ["install_java"]
Synchronizing Cookbooks:
- install_java (0.2.0)
- java (1.43.0)
- compat_resource (12.16.2)
- ohai (4.2.2)
- seven_zip (2.0.2)
- homebrew (2.1.2)
- apt (5.0.0)
- build-essential (7.0.2)
- windows (2.1.1)
- mingw (1.2.4)
- ark (2.1.0)
Installing Cookbook Gems:
Compiling Cookbooks...
[2016-12-12T11:04:24+13:00] WARN: Chef::Provider::AptRepository already exists! Cannot create deprecation class for LWRP provider apt_repository from cookbook apt
[2016-12-12T11:04:24+13:00] WARN: AptRepository already exists! Deprecation class overwrites Custom resource apt_repository from cookbook apt
Converging 8 resources
Recipe: install_java::default
* java_ark[jdk] action install
================================================================================
Error executing action `install` on resource 'java_ark[jdk]'
================================================================================
Errno::ENOENT
-------------
No such file or directory @ dir_s_mkdir -
Cookbook Trace:
---------------
/var/chef/cache/cookbooks/java/providers/ark.rb:116:in `block (2 levels) in class_from_file'
/var/chef/cache/cookbooks/java/providers/ark.rb:115:in `block in class_from_file'
Resource Declaration:
---------------------
# In /var/chef/cache/cookbooks/install_java/recipes/default.rb
18: java_ark "jdk" do
19: url 'http://sv-dc01.sv.local/install_artifacts/java/oracle/JRE/jre-'+"#{java_ver_update}"+'-linux-x64.tar.gz'
20: app_home install_dir
21: owner 'root'
22: group 'wheel'
23: app_home_mode 774
24: action :install
25: end
26:
Compiled Resource:
------------------
# Declared in /var/chef/cache/cookbooks/install_java/recipes/default.rb:18:in `from_file'
java_ark("jdk") do
action [:install]
supports {:report=>true, :exception=>true}
retries 0
retry_delay 2
default_guard_interpreter :default
declared_type :java_ark
cookbook_name "install_java"
recipe_name "default"
url "http://sv-dc01.sv.local/install_artifacts/java/oracle/JRE/jre-8u77-linux-x64.tar.gz"
app_home "/applications/"
owner "root"
group "wheel"
app_home_mode 774
end
Platform:
---------
x86_64-linux
Running handlers:
[2016-12-12T11:04:25+13:00] ERROR: Running exception handlers
Running handlers complete
[2016-12-12T11:04:25+13:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 17 seconds
[2016-12-12T11:04:25+13:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2016-12-12T11:04:25+13:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2016-12-12T11:04:25+13:00] ERROR: java_ark[jdk] (install_java::default line 18) had an error: Errno::ENOENT: No such file or directory @ dir_s_mkdir -
[2016-12-12T11:04:25+13:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
I can't for the life of me figure out why it can't find the file or directory, or how to get a complete install of Java working.
Maybe your environment is not applied correctly. I mean, if your machine is not using your 'env_workstation_dubbo' environment, the node['install_dir'] attribute will not be correctly set. You can read how to set the environment for a node here.
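For example, one way to pin the node to that environment is in its client.rb (a sketch, assuming the usual /etc/chef/client.rb location; passing -E env_workstation_dubbo to chef-client works too):
# /etc/chef/client.rb
environment 'env_workstation_dubbo'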
Another possibility is that you are using a modified version of the java cookbook that uses mkdir instead of mkdir_p. I say that because I have not been able to find your 2.0.0 java cookbook version in the supermarket. Where are you getting that cookbook from?
Update after downgrading to java cookbook version v1.43.0:
The problem is that the install_dir must be at least two directory levels deep, following the app_root/app_name format, for example "/applications/default".
If you use "/applications" as install_dir, app_name will be "applications" and app_root will be empty, and the latter will cause the mkdir error when trying to create the application root directory.
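So, as a sketch of the change against the files posted above (the 'default' directory name is only an example; any two-level path works):
# environment file: make install_dir two levels deep
# (app_root becomes "/applications", app_name becomes "default")
$override['install_dir'] = '/applications/default'
The java_ark "jdk" resource itself can stay as it is; app_home will now resolve to "/applications/default".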
I've been trying to set up a Webmachine app on Heroku, using the recommended buildpack. My Procfile is:
# Procfile
web: sh ./rel/app_name/bin/app_name console
Unfortunately this doesn't start the dyno correctly; it fails with:
2015-12-08T16:34:55.349362+00:00 heroku[web.1]: Starting process with command `sh ./rel/app_name/bin/app_name console`
2015-12-08T16:34:57.387620+00:00 app[web.1]: Exec: /app/rel/app_name/erts-7.0/bin/erlexec -boot /app/rel/app_name/releases/1/app_name -mode embedded -config /app/rel/app_name/releases/1/sys.config -args_file /app/rel/app_name/releases/1/vm.args -- console
2015-12-08T16:34:57.387630+00:00 app[web.1]: Root: /app/rel/app_name
2015-12-08T16:35:05.396922+00:00 app[web.1]: 16:35:05.396 [info] Application app_name started on node 'app_name@127.0.0.1'
2015-12-08T16:35:05.388846+00:00 app[web.1]: 16:35:05.387 [info] Application lager started on node 'app_name@127.0.0.1'
2015-12-08T16:35:05.399281+00:00 app[web.1]: Eshell V7.0 (abort with ^G)
2015-12-08T16:35:05.399283+00:00 app[web.1]: (app_name@127.0.0.1)1> *** Terminating erlang ('app_name@127.0.0.1')
2015-12-08T16:35:06.448742+00:00 heroku[web.1]: Process exited with status 0
2015-12-08T16:35:06.441993+00:00 heroku[web.1]: State changed from starting to crashed
But when I run the same command via the Heroku Toolbelt, it starts up with the console.
$ heroku run "./rel/app_name/bin/app_name console"
Running ./rel/app_name/bin/app_name console on tp-api... up, run.4201
Exec: /app/rel/app_name/erts-7.0/bin/erlexec -boot /app/rel/app_name/releases/1/app_name -mode embedded -config /app/rel/app_name/releases/1/sys.config -args_file /app/rel/app_name/releases/1/vm.args -- console
Root: /app/rel/app_name
Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false]
16:38:43.194 [info] Application lager started on node 'app_name@127.0.0.1'
16:38:43.196 [info] Application app_name started on node 'app_name@127.0.0.1'
Eshell V7.0 (abort with ^G)
(app_name@127.0.0.1)1>
Is there a way to start the node, maybe as a daemon, on the dyno(s)?
Note: I've tried using start instead of console, but that did not yield any success.
So after much tinkering and trial and error, I figured out what was wrong. Heroku does not like the interactive shell being there, which is why starting the Erlang app through console crashes.
I've adjusted my Procfile to the following:
# Procfile
web: erl -pa $PWD/ebin $PWD/deps/*/ebin -noshell -boot start_sasl -s reloader -s app_name -config ./rel/app_name/releases/1/sys
This boots up the application app_name using the release's sys.config configuration file. What was crucial here is the -noshell option in the command, which allows Heroku to run the process the way it expects.
I am having issues getting a Unicorn server up and running. I try to run Unicorn using:
bundle exec unicorn -c /var/www/docninja/unicorn.rb -E development -D -p 8080
I get the following error:
I, [2015-04-14T23:54:52.117609 #123] INFO -- : listening on addr=/var/www/docninja/tmp/sockets/unicorn.docninja.sock fd=10
I, [2015-04-14T23:54:52.118624 #123] INFO -- : listening on addr=0.0.0.0:8080 fd=11
I, [2015-04-14T23:54:52.119553 #123] INFO -- : worker=0 spawning...
I, [2015-04-14T23:54:52.127642 #123] INFO -- : master process ready
I, [2015-04-14T23:54:52.129109 #126] INFO -- : worker=0 spawned pid=126
I, [2015-04-14T23:54:52.129559 #126] INFO -- : Refreshing Gem list
F, [2015-04-14T23:59:07.536943 #130] FATAL -- : error adding listener addr=/var/www/docninja/tmp/sockets/unicorn.docninja.sock
/usr/local/rvm/gems/ruby-2.1.2/gems/unicorn-4.8.3/lib/unicorn/socket_helper.rb:152:in `bind_listen': socket=/var/www/docninja/tmp/sockets/unicorn.docninja.sock specified but it is not a socket! (Argument$
from /usr/local/rvm/gems/ruby-2.1.2/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:242:in `listen'
from /usr/local/rvm/gems/ruby-2.1.2/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:809:in `block in bind_new_listeners!'
from /usr/local/rvm/gems/ruby-2.1.2/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:809:in `each'
from /usr/local/rvm/gems/ruby-2.1.2/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:809:in `bind_new_listeners!'
from /usr/local/rvm/gems/ruby-2.1.2/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:138:in `start'
from /usr/local/rvm/gems/ruby-2.1.2/gems/unicorn-4.8.3/bin/unicorn:126:in `<top (required)>'
from /usr/local/rvm/gems/ruby-2.1.2/bin/unicorn:23:in `load'
from /usr/local/rvm/gems/ruby-2.1.2/bin/unicorn:23:in `<main>'
from /usr/local/rvm/gems/ruby-2.1.2/bin/ruby_executable_hooks:15:in `eval'
from /usr/local/rvm/gems/ruby-2.1.2/bin/ruby_executable_hooks:15:in `<main>'
Here is my unicorn.rb file located in my main app directory (/var/www/docninja):
app_dir = "/var/www/docninja"
# Set the working application directory
working_directory app_dir
# Unicorn PID file location
pid "#{app_dir}/tmp/pids/unicorn.pid"
# Path to logs
stderr_path "#{app_dir}/log/unicorn.stderr.log"
stdout_path "#{app_dir}/log/unicorn.stdout.log"
# Path to socket file for nginx
listen "#{app_dir}/tmp/sockets/unicorn.docninja.sock", :backlog => 64
worker_processes 1
timeout 30
All the paths here seem to be correct, because it finds the sockets directory and the logs are where they should be; however, it just isn't able to add a listener on that socket.
I'm not sure if this helps, but I am also setting this up in a Docker container using a Dockerfile.
Would really appreciate any help!
I am using PostgreSQL, Rails 3.1.3 and Ruby 1.9.3. I am struggling to use db:migrate as outlined here.
This is what I am seeing in the terminal:
funkdified#funkdified-laptop:~/railsprojects/hartl$ bundle exec rake db:migrate --trace
** Invoke db:migrate (first_time)
** Invoke environment (first_time)
** Execute environment
** Invoke db:load_config (first_time)
** Invoke rails_env (first_time)
** Execute rails_env
** Execute db:load_config
** Execute db:migrate
== AddEmailUniquenessIndex: migrating ========================================
-- add_index(:users, :email, {:unique=>true})
and then the code hangs at this point. Any ideas why?
From: development.log
  (0.1ms)  SHOW search_path
  (0.5ms)  SELECT "schema_migrations"."version" FROM "schema_migrations"
Migrating to CreateUsers (20120124022843)
Migrating to AddEmailUniquenessIndex (20120124093922)
  (0.1ms)  BEGIN
  (3.6ms)  SELECT distinct i.relname, d.indisunique, d.indkey, t.oid
FROM pg_class t
INNER JOIN pg_index d ON t.oid = d.indrelid
INNER JOIN pg_class i ON d.indexrelid = i.oid
WHERE i.relkind = 'i'
AND d.indisprimary = 'f'
AND t.relname = 'users'
AND i.relnamespace IN (SELECT oid FROM pg_namespace WHERE nspname = ANY (current_schemas(false)) )
ORDER BY i.relname
I just had a similar problem, where a very simple migration was stalling for no apparent reason. I believe the problem has to do with not being able to get a database connection. I exited a rails console session that I had open in another terminal and then the migration immediately finished with no problems.
I had the same problem. I found out that there was an idle transaction which blocked further queries on this table.
Run:
heroku pg:ps --app=...
to view database processes. You will have to kill the idle process:
heroku pg:kill 913 --force --app=...
(913 is the ID of the idle process; change it to suit your needs)
I just made two migrations. The first one created a new table, the second one removed fields from an existing table. The second migration was hanging, and the reason turned out to be a rails console session (rails console --sandbox) running in another terminal window.
I had the same issue, and after trying a bunch of solutions, what worked was shutting down all of my terminal sessions, restarting the computer, and trying again. Sometimes it's just power-cycle magic.