How do I create a resque worker automatically at bootup? - ruby

OK, I'm deploying my first Ruby app. Who knew moving everything over to 'production' would be so complicated? So far I've struggled my way through configuring Passenger, getting it to run on startup, then getting Redis to run on startup.
My last task is to start one worker on boot. Right now I have to SSH in and run my rake command, rake workers:start. Obviously that's no good once I close the SSH session, so I don't really know what the next step is.
I tried copying Resque's default config into config.ru and it just blows up Passenger with errors. I also looked into resque-pool, which some people mentioned, but that's over my head.
All I need is to add one worker on bootup. This isn't that serious of an app, so simpler is better at this point.

I don't use the god gem because (1) I've seen a project get badly tangled up in the setup complexity it introduced, and (2) I'm personally comfortable with the standard Linux (Ubuntu) tools that handle this kind of thing.
To start the Resque workers on bootup
I have this code in my /etc/rc.local file. I have a deploy user on the system:
# Start Resque
su -l deploy -c "/home/deploy/start-resque-workers"
su -l deploy -c "/home/deploy/start-resque-webui"
Then, in those scripts I set up the ruby environment and run the rake task:
#!/bin/bash
# /home/deploy/start-resque-workers

# Load RVM into a shell session *as a function*
if [[ -s "$HOME/.rvm/scripts/rvm" ]] ; then
  # First try to load from a user install
  source "$HOME/.rvm/scripts/rvm"
elif [[ -s "/usr/local/rvm/scripts/rvm" ]] ; then
  # Then try to load from a root install
  source "/usr/local/rvm/scripts/rvm"
else
  printf "ERROR: An RVM installation was not found.\n"
fi

# Use rvm to switch to the default ruby.
rvm use default

# Now launch the worker (quote the queue glob so the shell doesn't expand it against filenames)
cd /home/deploy/app-name-here/current
nohup rake environment resque:work QUEUE='*' RAILS_ENV=production &
I've been using this kind of setup for years, and it's solid. The servers don't crash, and I don't yet need the overhead of installing another system (like the god gem) to watch over these processes.
Additionally, I use a Capistrano recipe to restart the workers on each deploy, along the lines of the sketch below.
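A hand-rolled Capistrano (v2-style) task for that could look roughly like this; it's a sketch rather than the author's actual recipe, and the pid/log paths, task name, and QUIT signal are assumptions:
# config/deploy.rb -- sketch only; adjust paths and RAILS_ENV to your setup
namespace :resque do
  desc "Restart the Resque worker on deploy"
  task :restart, :roles => :app do
    # Gracefully stop the old worker if one is running...
    run "test -f #{shared_path}/pids/resque.pid && kill -QUIT `cat #{shared_path}/pids/resque.pid` || true"
    # ...then start a fresh one against the newly deployed code.
    run "cd #{current_path} && nohup bundle exec rake environment resque:work QUEUE='*' RAILS_ENV=production PIDFILE=#{shared_path}/pids/resque.pid >> #{shared_path}/log/resque.log 2>&1 &", :pty => false
  end
end
after "deploy:restart", "resque:restart"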

In production you should be using god to watch your processes. Even if this project is a small one, I strongly recommend investing the time to set it up.
Another must-have is Capistrano.
So, if you were using god, a config file along the lines of the sketch below would get you started.
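For instance, something like this (a minimal sketch, not the config the author had in mind; the watch name, app path, and RAILS_ENV are placeholders):
# resque.god -- minimal sketch; adjust the path, queue and RAILS_ENV to taste
rails_root = "/home/deploy/app-name-here/current"

God.watch do |w|
  w.name  = "resque-worker-1"
  w.dir   = rails_root
  w.start = "bundle exec rake environment resque:work QUEUE='*' RAILS_ENV=production"
  w.log   = "#{rails_root}/log/resque-worker-1.log"
  w.keepalive   # restart the worker whenever it dies
end
You would then point god at it with god -c resque.god.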
You could also try scheduling rake resque:work at system startup, using a proper script in /etc/init.d/ or /etc/init/ or wherever your init system looks (something like the Upstart sketch below). I tried this some time ago and gave up (I don't remember why).
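For reference, a minimal Upstart job in /etc/init/ might look like this on Ubuntu (a sketch only; the user, app path, and queue are placeholders, and bundle must be on that user's default PATH):
# /etc/init/resque-worker.conf -- Upstart sketch, placeholders throughout
description "Resque worker"
start on runlevel [2345]
stop on runlevel [016]
respawn
setuid deploy
chdir /home/deploy/app-name-here/current
exec bundle exec rake environment resque:work QUEUE='*' RAILS_ENV=production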
I understand that this answer isn't exactly what you're looking for right now. But imagine this: once everything is set up, deploying the next version is as easy as running rake deploy on your development machine, and it will take care of pulling your code from the repository, running migrations, restarting workers and web servers, and so on.

Related

How to find the right version of Ruby

I'm currently working on a build pipeline that uses Jenkins and GitLab to trigger builds for the project. Basically, the build is triggered when someone pushes to the repository. Some Ruby scripts are executed as part of the build process; they run checks on the project and perform fixes, like synchronizing an Xcode project with files that were added to or deleted from the source directory when the two get out of sync.
I'm using several tools to configure the pipeline. The builds run on a separate physical machine (the build slave), while Jenkins itself is deployed on an AWS machine. For this reason I used pritunl to connect the two over a virtual network; I can use local IPs to communicate between the machines and SSH works fine both ways.
When I push to the remote, the build starts correctly on the slave, but it fails to complete. However, if I run it manually over SSH from a terminal, the build works fine. This is the output I get from Jenkins:
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- xcodeproj (LoadError)
from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /Users/jenkins/workspace/Core/platform/ios/scripts/pbxsync.rb:58:in `<main>'
As you can see, it fails to require Xcodeproj, causing the build to fail. Still, this only happens if the build is triggered by Jenkins, not manually.
This makes me think that Jenkins is using some different installation of Ruby, or at least a different environment. Basically what I need is to install gems for the same Ruby environment that Jenkins is using, but I don't know which one that is. Any ideas?
Jenkins has a script console that runs Groovy scripts on the remote slave. I've been playing with it a bit, but I haven't drawn many conclusions so far. Maybe that helps.
This may be important; this is the shebang I'm using for the Ruby scripts: #!/usr/bin/env ruby
In the terminal I'm using the same user Jenkins uses to access the slave machine; it's called "jenkins".
One thing I forgot to mention is that the output points at the right version: /Users/jenkins/.rvm/rubies/ruby-2.4.0. At least, that's the path it indicates it's trying to load the gem from. So I tried the following:
$ /Users/jenkins/.rvm/rubies/ruby-2.4.0/bin/ruby
require 'xcodeproj'
Then I press ctrl+D and get no output - that installation of ruby is finding the gem properly.
If you are using the Jenkins slave plugin to communicate between the Jenkins master and the slave, every command you specify will be run in a non-interactive shell. That means Jenkins will only have access to the system Ruby in your case.
So if you want to install the gem, you have to install it into the system Ruby. You are using RVM, so run rvm use system and then you can install the gem into the system Ruby.
If you want to use a Ruby version other than the system Ruby, you need to make RVM available to non-interactive shells (i.e. add it to $PATH there). Here is the basic setup that should help: https://rvm.io/rvm/basics
I finally managed it. As @Cosaquee indicated in another answer, it's important to distinguish between interactive and non-interactive shells. The main reason is that how you invoke SSH makes a difference. As the man page indicates:
If command is specified, it is executed on the remote host instead of
a login shell.
This matters, because the launch command I have set for the Jenkins node is this one:
ssh jenkins@x.x.x.x java -jar ~/bin/slave.jar
Meanwhile, I was logging in with a plain ssh jenkins@x.x.x.x from the terminal, which starts a login shell. It makes sense that I was getting different results, because the two shells load different startup scripts. Basically, if you use ssh jenkins@x.x.x.x to log into the machine, ~/.bash_profile is loaded, while if you specify a command, such as ssh jenkins@x.x.x.x whatever, then ~/.bashrc is loaded instead. So I added this line to ~/.bashrc:
[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm"
Without it I got:
RVM is not a function, selecting rubies with 'rvm use ...' will not
work.
The advantage was that I could now use RVM from the same environment Jenkins was using. The rest is easy:
ssh jenkins@x.x.x.x rvm --default use 2.3
And:
ssh jenkins@x.x.x.x
rvm --default use 2.3
And both are now using the same version of Ruby.
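A quick sanity check that both paths now hit the same interpreter (x.x.x.x being the same placeholder host as above):
ssh jenkins@x.x.x.x 'which ruby && ruby -v'   # non-interactive shell, i.e. what the Jenkins launch command sees
Then run which ruby && ruby -v again from an interactive login (plain ssh jenkins@x.x.x.x) and compare the output.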

How to auto-restart Ruby scripts on Heroku

On my dev box on Nitrous, I am able to run god -c scripts.god -D to restart the two .rb files if they die.
I just run that and the processes for the most part stay alive.
But I cannot do the same on Heroku. When I run the god command there, the .god file does not load and Heroku reports an error.
Question:
How can I run god to restart failed processes on Heroku as I do in my Nitrous development environment?
Or is there a recommended alternative way to watch Heroku processes and restart them automatically when they fail?
On Heroku you shouldn't need to use a process supervisor like god. If all you need is to ensure your process is restarted if it crashes, Heroku can manage that fine.
It should be as simple as adding an entry for each process to your Procfile. https://devcenter.heroku.com/articles/background-jobs-queueing
worker: bundle exec sidekiq
clock: bundle exec clockwork lib/clock.rb
slack_listener: bundle exec ruby lib/slack_bot.rb
You could run into issues if your processes are crashing very often; see Heroku's dyno crash restart policy.
Your web process starts automatically when you deploy; the other process types in your Procfile need to be scaled up once, as shown below, and Heroku will then keep them running and restart them if they crash.
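For example, to bring up one dyno for each of the Procfile entries above (using the same placeholder app name as below):
heroku ps:scale worker=1 clock=1 slack_listener=1 --app yourappname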
However, Heroku does provide commands to manage your processes; check out https://devcenter.heroku.com/articles/dynos for the complete list. For example, to restart all processes, use the Toolbelt command:
heroku ps:restart --app yourappname

Rake task has the wrong environment within a cron job on OpenShift

I'm trying to set up a cron job on OpenShift to import emails into a Redmine application. I prepared a minutely script like this:
#!/bin/bash
rake RAILS_ENV=production -f ${OPENSHIFT_REPO_DIR}/Rakefile redmine:email:receive_imap host=imap.googlemail.com port=993 ssl=1 username=xxx@artistii.com password=yyy ...
It runs without problems when launched by hand over an SSH connection. When run by cron, however, rake cannot be found.
Doing some debugging, I found that the PATH is not the same as in the login shell. Even if I use a full path for rake, the Ruby that is found is version 1.8 (not 1.9, as provided by the cartridge), and when I set the very same PATH as in the shell, libruby-1.9 is not found.
Following some other advice I tried to add the following line in place of setting a custom PATH:
source /usr/bin/rhcsh
but rake is still not found. I also tried using bundle exec.
What is the right way to set up the environment for cron on OpenShift so that it runs like a login shell?
You may need to cd to the directory where your bundle is installed first (where your Gemfile is); something like this, maybe?
cd $OPENSHIFT_REPO_DIR && bundle exec rake .....
This is a bug in the cron cartridge. You can refer to this question on SO; it is actually about the Python cartridge and the cron cartridge, but it is the cron cartridge that affects everyone. There is also an OpenShift bug report mentioned within.
The bug is as you have observed: the cron cartridge uses Ruby 1.8 instead of Ruby 1.9, so the gems installed under Ruby 1.9 are not available to the cron cartridge running Ruby 1.8.
There is already a fix for this bug (see the OpenShift bug report), but I'm not sure whether it has been rolled out yet.
Meanwhile, there is a temporary workaround: export PATH and LD_LIBRARY_PATH in the cron script, as sketched below. You can refer to the OpenShift bug report for details.
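A sketch of that workaround (the exported values are placeholders; copy the real ones from echo $PATH and echo $LD_LIBRARY_PATH in an SSH session on your gear, and the script location assumes the cron cartridge's minutely directory):
#!/bin/bash
# .openshift/cron/minutely/import_mail -- workaround sketch, not the official fix.
# Hard-code the PATH and LD_LIBRARY_PATH reported by a login shell so cron
# picks up the cartridge's Ruby 1.9 instead of the system Ruby 1.8.
export PATH="<login-shell PATH here>"
export LD_LIBRARY_PATH="<login-shell LD_LIBRARY_PATH here>"

cd ${OPENSHIFT_REPO_DIR} && bundle exec rake RAILS_ENV=production redmine:email:receive_imap ...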
Hope this helps.
If you are using RVM, OpenShift may have trouble switching to your default RVM Ruby. You can also try something like the following: it selects the RVM gemset before running Bundler, and writes a cron log so you can see the exact status of your cron job:
https://rvm.io/rvm/install
Use bundle exec to avoid conflicts between multiple installed versions of rake:
cd $OPENSHIFT_REPO_DIR && rvm gemset use "yourgemsetname" && RAILS_ENV=production bundle exec rake cron_job:cron_job --silent >> log/cron_log

How to start multiple Rails app servers with a Ruby script

I'm writing a Ruby script to start more than one Rails server, but I'm running into a few problems:
When I programmatically cd into different projects, their respective .rvmrc files aren't triggered. My projects all use different versions of Ruby and have unique gemsets, so I need my script to recognize which environment it's in for everything to work correctly. I tried changing gemsets programmatically, but received this error from RVM:
RVM is not a function, selecting rubies with 'rvm use ...' will not work.
I'm using foreman to start each app, which is great for distilling more than one startup command into a nice and simple foreman start -p $PORT, but I would also like each app's logs to be displayed in its own terminal window or, even better, its own tab. I've seen others achieve things like this with AppleScript, but is there a better way?
Thank you all for your help. I ended up using consular, which handles scripting in a way that respects different .rvmrc files. Please see this post for more information on my specific solution.
It should be as simple as:
rvm . do foreman start -p $PORT
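Building on that, a small launcher script can loop over your projects and let rvm . do pick up each project's .rvmrc (a sketch; the project paths, ports, and log locations are placeholders):
#!/usr/bin/env ruby
# start_servers.rb -- hypothetical example; adjust paths and ports.
APPS = {
  '/home/me/app_one' => 3000,
  '/home/me/app_two' => 3001,
}

APPS.each do |dir, port|
  # `rvm . do` runs the command under the ruby/gemset named in the project's
  # .rvmrc, sidestepping the "RVM is not a function" problem.
  pid = spawn("rvm . do foreman start -p #{port}",
              chdir: dir,
              out: File.join(dir, 'log', 'foreman.log'),  # log dir must exist
              err: [:child, :out])
  Process.detach(pid)
end
It won't give you one terminal tab per app, but tailing each log file gets you close.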

System commands don't work when running under Passenger

I have a Sinatra app with a page that shows some information about the application, some of which is generated by running shell commands on page load. Everything works fine on my MacBook under Unicorn, and everything works fine on the production server under Unicorn, but swap to Apache/Passenger and suddenly the commands start returning nil.
For example, to get a list of committers I use:
comitters = `cd /path/to/app && git shortlog -s -n`
This works perfectly until it's run under the Apache/Passenger setup.
Is there some option within passenger that disables system commands?
The problem lies in your $PATH environment variable, which the system uses to look for commands. You run Unicorn from the shell, don't you? So Unicorn inherits $PATH from your shell. But app processes started by Phusion Passenger are started from Apache/Nginx, which in turn are usually started by some system init service with completely different environment variables than your shell has. Read http://blog.phusion.nl/2008/12/16/passing-environment-variables-to-ruby-from-phusion-passenger/ for a solution.
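Two common workarounds in the app itself (sketches; substitute whatever which git reports on your server for the hard-coded paths):
# 1. Call the binary by its absolute path so $PATH doesn't matter:
committers = `cd /path/to/app && /usr/bin/git shortlog -s -n`

# 2. Or prepend the directories you need to PATH before shelling out,
#    e.g. in config.ru or an initializer:
ENV['PATH'] = "/usr/local/bin:/usr/bin:#{ENV['PATH']}"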
