I am trying to start up Sidekiq using:
bundle exec sidekiq
from the directory where my script and Gemfile are located.
This is what I am getting:
2015-11-17T19:20:48.801Z 78733 TID-owl77getk INFO: ==================================================================
2015-11-17T19:20:48.801Z 78733 TID-owl77getk INFO: Please point sidekiq to a Rails 3/4 application or a Ruby file
2015-11-17T19:20:48.801Z 78733 TID-owl77getk INFO: to load your worker classes with -r [DIR|FILE].
2015-11-17T19:20:48.801Z 78733 TID-owl77getk INFO: ==================================================================
2015-11-17T19:20:48.801Z 78733 TID-owl77getk INFO: sidekiq [options]
-c, --concurrency INT processor threads to use
-d, --daemon Daemonize process
-e, --environment ENV Application environment
-g, --tag TAG Process tag for procline
-i, --index INT unique process index on this machine
-q, --queue QUEUE[,WEIGHT] Queues to process with optional weights
-r, --require [PATH|DIR] Location of Rails application with workers or file to require
-t, --timeout NUM Shutdown timeout
-v, --verbose Print more verbose output
-C, --config PATH path to YAML config file
-L, --logfile PATH path to writable logfile
-P, --pidfile PATH path to pidfile
-V, --version Print version and exit
-h, --help Show help
The current directory must be a Rails app OR you need to use -r to load your Ruby script so it can configure Sidekiq properly.
bundle exec sidekiq -r ./script.rb
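If script.rb doesn't already configure Sidekiq, a minimal file for -r to load might look like the sketch below (HardWorker and the Redis URL are illustrative placeholders, not from the original post):

require 'sidekiq'

# assumed Redis location; adjust to your setup
Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://localhost:6379/0' }
end

# a placeholder worker class so Sidekiq has something to process
class HardWorker
  include Sidekiq::Worker

  def perform(name)
    puts "Working hard for #{name}"
  end
end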
I am attempting to migrate from Heroku to AWS, but my Sidekiq jobs keep failing with the following error:
Errno::EPIPE: Broken pipe # io_write - <STDOUT>
I can successfully run jobs from the console using perform_now, and everything works just fine in Heroku, so I am presuming the issue lies somewhere in my AWS setup. I have seen references to improper daemonization around Stack Overflow and GitHub, but I'm not sure how to solve the problem.
Right now I am launching my processes with the following command:
foreman start -f Procfile -p 3000 -e $VAR_FILES &
and I have tried the command both with and without the & at the end.
My Procfile looks like this:
web: bundle exec puma -t 1:2 -p ${PORT:-3000} -e ${RACK_ENV:-production}
worker: bundle exec sidekiq -C config/sidekiq.yml
log: tail -f log/production.log
and I have also tried it like this, following the instructions here (https://github.com/mperham/sidekiq/wiki/Logging#syslog):
worker: bundle exec sidekiq -C config/sidekiq.yml 2>&1 | logger -t sidekiq
My sidekiq.yml has logfile set to ./log/sidekiq.log, which I believe is supposed to redirect logs away from STDOUT anyway.
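For reference, a sidekiq.yml with that setting looks roughly like this (the concurrency and queue entries are illustrative placeholders, not from my actual file):

:logfile: ./log/sidekiq.log
:concurrency: 5
:queues:
  - default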
I have seen the discussion here (https://github.com/mperham/sidekiq/issues/3188) and can verify that the rails12factor gem is not in my Gemfile.
But still the error persists... Can anyone lend a hand?
UPDATE: I finally got a stack trace and can see it is coming from a puts statement inside the Neo4j.rb gem:
2017-04-07T15:46:53.553Z 697 TID-12a6r4 WARN: Errno::EPIPE: Broken pipe # io_write - <STDOUT>
2017-04-07T15:46:53.553Z 697 TID-12a6r4 WARN: /var/lib/gems/2.3.0/bundler/gems/neo4j-c804cb33bef8/lib/neo4j/session_manager.rb:60:in `write'
/var/lib/gems/2.3.0/bundler/gems/neo4j-c804cb33bef8/lib/neo4j/session_manager.rb:60:in `puts'
/var/lib/gems/2.3.0/bundler/gems/neo4j-c804cb33bef8/lib/neo4j/session_manager.rb:60:in `puts'
But I'm still not sure how I can mitigate the issue. I have tried with RAILS_LOG_TO_STDOUT=enabled both set and unset.
I spoke to the gem maintainers and they removed the puts statements in v8.0.13. That fixed the problem for me!
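So if you see the same trace, bumping the gem past that release should be enough; a minimal Gemfile change might look like this (the version constraint style is up to you):

# require at least the release that removed the bare puts calls
gem 'neo4j', '>= 8.0.13'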
I am installing the Laravel Installer as part of a Docker container using Composer. Laravel is installed globally, meaning it goes to ~/.composer/vendor and adds an executable under ~/.composer/vendor/bin.
I am adding the directory ~/.composer/vendor/bin to the $PATH in a Dockerfile as follows:
ENV PATH="~/.composer/vendor/bin:${PATH}"
If I run the command docker exec -it php-fpm bash and, from inside the container, run echo $PATH, I get the following:
# echo $PATH
/opt/remi/php71/root/usr/bin:/opt/remi/php71/root/usr/sbin:~/.composer/vendor/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
If I run the command laravel inside the container, I get the following:
# laravel
Laravel Installer 1.3.3
Usage:
command [options] [arguments]
Options:
-h, --help Display this help message
-q, --quiet Do not output any message
-V, --version Display this application version
--ansi Force ANSI output
--no-ansi Disable ANSI output
-n, --no-interaction Do not ask any interactive question
-v|vv|vvv, --verbose Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug
Available commands:
help Displays help for a command
list Lists commands
new Create a new Laravel application.
So everything seems to be working fine. But if I run the following command from outside the container, meaning from the host:
$ docker exec -it php-fpm laravel
I get the following error:
rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused \"exec: \\\"laravel\\\": executable file not found in $PATH\"\n"
What am I missing here? Can the command laravel be run from the host?
The ~ is the problem here: it's not a valid path character for some shells, exec doesn't process it as you'd hope, and it isn't expanded for you by the Dockerfile. Be explicit with your path instead:
ENV PATH=/root/.composer/vendor/bin:${PATH}
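After rebuilding the image with the absolute path, you can verify the fix from the host (assuming the same container name as above); it should print the same version banner you saw inside the container:

$ docker exec -it php-fpm laravel --version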
I want to start Sidekiq with Capistrano. Below is the code for that:
namespace :sidekiq do
  task :start do
    run "cd #{current_path} && bundle exec sidekiq -c 10 -e production -L log/sidekiq.log &"
    p capture("ps aux | grep sidekiq | awk '{print $2}' | sed -n 1p").strip!
  end
end
It executes successfully, but Sidekiq is still not started on the server.
output:
$ cap sidekiq:start
triggering load callbacks
* 2014-06-03 15:03:01 executing `sidekiq:start'
* executing "cd /home/project/current && bundle exec sidekiq -c 10 -e production -L log/sidekiq.log &"
servers: ["x.x.x.x"]
[x.x.x.x] executing command
command finished in 1229ms
* executing "ps aux | grep sidekiq | awk '{print $2}' | sed -n 1p"
servers: ["x.x.x.x"]
[x.x.x.x] executing command
command finished in 1229ms
"19291"
Your problem lies here:
cd /home/project/current && bundle exec sidekiq -c 10 -e production -L log/sidekiq.log &
When you add & at the end, the command is executed in a separate process, but this process is still a child of the current process and is terminated when the current process (the SSH session Capistrano opened) stops. Instead you need to run Sidekiq as a daemon:
bundle exec sidekiq -c 10 -e production -L log/sidekiq.log -d
Note the extra -d option.
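Applied to your task, a minimal corrected version might look like this (a sketch with the same paths and options as in your question):

namespace :sidekiq do
  task :start do
    # -d daemonizes sidekiq so it survives the end of the SSH session
    run "cd #{current_path} && bundle exec sidekiq -c 10 -e production -L log/sidekiq.log -d"
  end
end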
Without using any extra gem, here is my solution, working perfectly with Capistrano 3.4.0:
namespace :sidekiq do
  task :restart do
    invoke 'sidekiq:stop'
    invoke 'sidekiq:start'
  end

  before 'deploy:finished', 'sidekiq:restart'

  task :stop do
    on roles(:app) do
      within current_path do
        # [s]idekiq keeps the grep process itself out of the match;
        # strip removes the trailing newline from the captured pid
        pid = p capture("ps aux | grep [s]idekiq | awk '{print $2}' | sed -n 1p").strip
        execute("kill -9 #{pid}")
      end
    end
  end

  task :start do
    on roles(:app) do
      within current_path do
        execute :bundle, "exec sidekiq -e #{fetch(:stage)} -C config/sidekiq.yml -d"
      end
    end
  end
end
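One caveat on the stop task: kill -9 skips Sidekiq's graceful shutdown. A gentler variant, assuming you also pass -P tmp/pids/sidekiq.pid when starting so Sidekiq writes a pidfile (that path is an assumption), could look like this sketch:

task :stop do
  on roles(:app) do
    within current_path do
      # TERM asks sidekiq to finish in-flight jobs and exit cleanly
      execute "kill -TERM $(cat tmp/pids/sidekiq.pid)"
    end
  end
end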
Just in case you ever need to start/restart/stop an environment with Capistrano:
bundle exec cap production sidekiq:start
bundle exec cap production sidekiq:stop
bundle exec cap production sidekiq:restart
#staging
bundle exec cap staging sidekiq:start
bundle exec cap staging sidekiq:stop
bundle exec cap staging sidekiq:restart
#same with other dependencies
bundle exec cap production puma:restart
bundle exec cap staging puma:stop
Brief explanation
(in case you are hitting a repo online, like GitHub, remember to run your ssh agent so you can connect to the repo via SSH and pull the latest version of the code/branch)
set up your own GitHub SSH key locally
run the ssh agent with the key: eval $(ssh-agent) && ssh-add ~/.ssh/id_rsa
check the agent with ssh -T git@github.com
After that I always use this to deploy:
run Capistrano targeting the environment: bundle exec cap staging deploy
These are really handy when you are already in prod and have issues, but especially for staging: you can run individual tasks depending on your Capfile (for instance, most of the time I use Puma as the Rack server and Sidekiq for scheduled jobs).
Capfile
require "capistrano/setup"
# Include default deployment tasks
require "capistrano/deploy"
# Load the SCM plugin appropriate to your project:
#
# require "capistrano/scm/hg"
# install_plugin Capistrano::SCM::Hg
# or
# require "capistrano/scm/svn"
# install_plugin Capistrano::SCM::Svn
# or
require "capistrano/scm/git"
install_plugin Capistrano::SCM::Git
# Include tasks from other gems included in your Gemfile
#
# For documentation on these, see for example:
#
# https://github.com/capistrano/rvm
# https://github.com/capistrano/rbenv
# https://github.com/capistrano/chruby
# https://github.com/capistrano/bundler
# https://github.com/capistrano/rails
# https://github.com/capistrano/passenger
#
require "capistrano/rvm"
# require "capistrano/rbenv"
# require "capistrano/chruby"
require "capistrano/bundler"
require "capistrano/rails/assets"
require "capistrano/rails/migrations"
require "capistrano/yarn"
require "capistrano/puma"
install_plugin Capistrano::Puma # Default puma tasks
require 'capistrano/sidekiq'
require 'slackistrano/capistrano'
require_relative 'lib/capistrano/slack_deployment_message'
# Load custom tasks from `lib/capistrano/tasks` if you have any defined
Dir.glob("lib/capistrano/tasks/*.rake").each { |r| import r }
So in the end, with these features enabled/installed/configured through Capistrano, I can execute start|stop|restart as needed.
For example, I can always restart Sidekiq and Puma in production:
bundle exec cap production sidekiq:restart
bundle exec cap production puma:restart
As well as in staging:
bundle exec cap staging sidekiq:restart
bundle exec cap staging puma:restart
Hope this helps!
:D
How does one restart the Ruby clockwork gem?
After reading the Wiki, it seems you can only start it, not stop or restart it.
I don't want to manually kill the process and run it again.
The modern syntax to restart clockworkd is:
bin/clockworkd -c periodic-jobs.rb reload
If you've bundled your gems, as you should:
bundle exec bin/clockworkd -c periodic-jobs.rb reload
…where periodic-jobs.rb is your clockwork jobs config file.
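If you don't have a jobs file yet, a minimal one looks roughly like this (pattern from the clockwork README; the interval and job name are placeholders):

require 'clockwork'

module Clockwork
  handler do |job|
    puts "Running #{job}"
  end

  # run the named job every 10 seconds
  every(10.seconds, 'frequent.job')
end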
Full options:
bin/clockworkd help
Usage: clockworkd -c FILE [options] start|stop|restart|run
--pid-dir=DIR Alternate directory in which to store the process ids. Default is /Users/jm3/Code/soakcity/tmp.
-i, --identifier=STR An identifier for the process. Default is clock file name.
-l, --log Redirect both STDOUT and STDERR to a logfile named clockworkd[.<identifier>].output in the pid-file directory.
--log-dir=DIR A specific directory to put the log files into (default location is pid directory).
-m, --monitor Start monitor process.
-c, --clock=FILE Clock .rb file. Default is /Users/jm3/Code/soakcity/clock.rb.
-d, --dir=DIR Directory to change to once the process starts
-h, --help Show this message
Learn more in the Daemonization section of the clockwork source on GitHub. Hope that helps!
Assuming you started it as a daemon, then 'clockworkd -c YOUR_CLOCK.rb stop' should do the trick.
I have a Procfile setup that is running a number of processes successfully:
# /Procfile
redis: bundle exec redis-server
sidekiq: bundle exec sidekiq -v -C ./config.yml
forward: forward 4567 mock-api
I need to add one more process: a Sinatra app that lives in a different directory on my machine. If I cd to that directory, I can start it from the Terminal with:
$ rackup -p 4567
And I can start it from a different directory using the Terminal with:
$ sh -c 'cd /Path/to/project/ && exec rackup -p 4567'
But how should I do this using foreman? I have tried adding the following, but it fails silently:
mock-api: sh -c 'cd /Path/to/project/ && exec rackup -p 4567'
Is this even possible? And if so, how?
Of all the stupid things ...
It was failing because of the hyphen in the process name.
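For anyone else who hits this: any process name without a hyphen works, e.g. (the name itself is arbitrary):

mockapi: sh -c 'cd /Path/to/project/ && exec rackup -p 4567'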