How do I create a monit loop for multiple processes to monitor? - resque

This example shows how to monitor a single resque queue:
check process resque_worker_QUEUE
  with pidfile /data/APP_NAME/current/tmp/pids/resque_worker_QUEUE.pid
  start program = "/usr/bin/env HOME=/home/user RACK_ENV=production PATH=/usr/local/bin:/usr/local/ruby/bin:/usr/bin:/bin:$PATH /bin/sh -l -c 'cd /data/APP_NAME/current; nohup bundle exec rake environment resque:work RAILS_ENV=production QUEUE=queue_name VERBOSE=1 PIDFILE=tmp/pids/resque_worker_QUEUE.pid >> log/resque_worker_QUEUE.log 2>&1'" as uid deploy and gid deploy
  stop program = "/bin/sh -c 'cd /data/APP_NAME/current && kill -9 $(cat tmp/pids/resque_worker_QUEUE.pid) && rm -f tmp/pids/resque_worker_QUEUE.pid; exit 0;'"
  if totalmem is greater than 300 MB for 10 cycles then restart # eating up memory?
  group resque_workers
where QUEUE is typically the index of the queue. Does monit itself have the ability to loop, so that QUEUE can act as the index or iterator and, with 6 workers to create, I still only need a single block of configuration code? Or must I create a monit configuration builder that does the iterating and produces a hardcoded set of worker monitors as its output?
So instead of:
check process resque_worker_0
  with pidfile /data/APP_NAME/current/tmp/pids/resque_worker_0.pid
  start program = "/usr/bin/env HOME=/home/user RACK_ENV=production PATH=/usr/local/bin:/usr/local/ruby/bin:/usr/bin:/bin:$PATH /bin/sh -l -c 'cd /data/APP_NAME/current; nohup bundle exec rake environment resque:work RAILS_ENV=production QUEUE=queue_name VERBOSE=1 PIDFILE=tmp/pids/resque_worker_0.pid >> log/resque_worker_0.log 2>&1'" as uid deploy and gid deploy
  stop program = "/bin/sh -c 'cd /data/APP_NAME/current && kill -9 $(cat tmp/pids/resque_worker_0.pid) && rm -f tmp/pids/resque_worker_0.pid; exit 0;'"
  if totalmem is greater than 300 MB for 10 cycles then restart # eating up memory?
  group resque_workers
check process resque_worker_1
  with pidfile /data/APP_NAME/current/tmp/pids/resque_worker_1.pid
  start program = "/usr/bin/env HOME=/home/user RACK_ENV=production PATH=/usr/local/bin:/usr/local/ruby/bin:/usr/bin:/bin:$PATH /bin/sh -l -c 'cd /data/APP_NAME/current; nohup bundle exec rake environment resque:work RAILS_ENV=production QUEUE=queue_name VERBOSE=1 PIDFILE=tmp/pids/resque_worker_1.pid >> log/resque_worker_1.log 2>&1'" as uid deploy and gid deploy
  stop program = "/bin/sh -c 'cd /data/APP_NAME/current && kill -9 $(cat tmp/pids/resque_worker_1.pid) && rm -f tmp/pids/resque_worker_1.pid; exit 0;'"
  if totalmem is greater than 300 MB for 10 cycles then restart # eating up memory?
  group resque_workers
I could do something like this (pseudo-code for the loop, I know):
(0..1).each do |QUEUE|
  check process resque_worker_QUEUE
    with pidfile /data/APP_NAME/current/tmp/pids/resque_worker_QUEUE.pid
    start program = "/usr/bin/env HOME=/home/user RACK_ENV=production PATH=/usr/local/bin:/usr/local/ruby/bin:/usr/bin:/bin:$PATH /bin/sh -l -c 'cd /data/APP_NAME/current; nohup bundle exec rake environment resque:work RAILS_ENV=production QUEUE=queue_name VERBOSE=1 PIDFILE=tmp/pids/resque_worker_QUEUE.pid >> log/resque_worker_QUEUE.log 2>&1'" as uid deploy and gid deploy
    stop program = "/bin/sh -c 'cd /data/APP_NAME/current && kill -9 $(cat tmp/pids/resque_worker_QUEUE.pid) && rm -f tmp/pids/resque_worker_QUEUE.pid; exit 0;'"
    if totalmem is greater than 300 MB for 10 cycles then restart # eating up memory?
    group resque_workers
end

I couldn't find any evidence that monit can do this on its own, so I wrote a Ruby monit resque config file builder and inserted it into the Capistrano deployment tasks.
In config/deploy/production.rb:
  set :resque_worker_count, 6
In lib/capistrano/tasks/monit.rake:
def build_entry(process_name, worker_pid_file, worker_config_file, start_command, stop_command)
  <<-END_OF_ENTRY
check process #{process_name}
  with pidfile #{worker_pid_file}
  start program = "#{start_command}" with timeout 90 seconds
  stop program = "#{stop_command}" with timeout 90 seconds
  if totalmem is greater than 500 MB for 4 cycles then restart # eating up memory?
  group resque
  END_OF_ENTRY
end
namespace :monit do
  desc "Build monit configuration file for monitoring resque workers"
  task :build_resque_configuration_file do
    on roles(:app) do |host|
      # Set up the reusable variables shared by all worker entries
      rails_env = fetch(:rails_env)
      app_name = fetch(:application)
      monit_resque_config_file_path = "#{shared_path}/config/monit/resque"
      resque_control_script = "#{shared_path}/bin/resque-control"
      monit_wrapper_script = "/usr/local/sbin/monit-wrapper"
      config_file_content = []
      (0..(fetch(:resque_worker_count).to_i - 1)).each do |worker|
        # Set up the variables for this worker's entry
        process_name = "resque_#{worker}"
        worker_config_file = "resque_#{worker}.conf"
        worker_pid_file = "/var/run/resque/#{app_name}/resque_#{worker}.pid"
        start_command = "#{monit_wrapper_script} #{resque_control_script} #{app_name} start #{rails_env} #{worker_config_file}"
        stop_command = "#{monit_wrapper_script} #{resque_control_script} #{app_name} stop #{rails_env} #{worker_config_file}"
        # Build the config file entry for the worker
        config_file_content << build_entry(process_name, worker_pid_file, worker_config_file, start_command, stop_command)
      end
      # Save the file locally for inspection (debugging)
      temp_file = "/tmp/#{app_name}_#{rails_env}_resque"
      File.delete(temp_file) if File.exist?(temp_file)
      File.open(temp_file, 'w+') { |f| f.write config_file_content.join("\n") }
      # Upload the result to the server
      upload! temp_file, monit_resque_config_file_path
    end
  end
end
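Once the builder has run, monit still has to pick up the generated file. A minimal sketch of the remaining steps, assuming the server's monitrc already includes the shared config directory (that include line is an assumption, not shown above):
# From the workstation, run the task for the production stage
bundle exec cap production monit:build_resque_configuration_file
# On the server, assuming /etc/monit/monitrc contains a line like
#   include /data/APP_NAME/shared/config/monit/*
sudo monit reload
sudo monit summary   # verify the resque_N entries are listed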

Related

Chain `top` output redirection and other commands

I'm trying to record system metrics using top while other processes are running. I'm hoping to chain things together, like so:
#!/bin/bash
# Redirect `top` output
top -U $USER > file.txt &&
# Then run a process that just sleeps for 4 seconds
python3 -c 'import time;time.sleep(4)' &&
# Then run another process that does the same
python3 -c 'import time;time.sleep(4)'
When I run this, however, the latter two (Python) processes never complete. My goal is to start recording from top before any of the other processes start, then once those processes complete, stop recording from top.
#!/bin/bash
# Run the first command in the background; top needs -b (batch mode)
# when its output is redirected to a file
top -b -U "$USER" > file.txt &
top_pid=$!
# Then run a process that just sleeps for 4 seconds
python3 -c 'import time;time.sleep(4)' &&
# Then run another process that does the same
python3 -c 'import time;time.sleep(4)'
# Kill the command running in the background
kill "$top_pid"
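If either of the middle commands can fail, an unconditional kill at the end can still be skipped when the script aborts early. A trap makes the cleanup run no matter how the script exits; a small variation on the above, assuming bash:
#!/bin/bash
top -b -U "$USER" > file.txt &
top_pid=$!
# Stop the recorder when the script exits, however it exits
trap 'kill "$top_pid" 2>/dev/null' EXIT
python3 -c 'import time;time.sleep(4)' &&
python3 -c 'import time;time.sleep(4)'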

Upstart service in start/killed state

I have a python application that needs to run as a service on Ubuntu 14.04. This application needs to have the following capabilities:
When the service is started, a cron entry is created in the crontab, which will periodically run the application.
When the service is stopped, the crontab entry is removed.
When the system/server is rebooted, the service needs to be started.
I have the following upstart script to run my service:
start on runlevel [2345]
stop on runlevel [!2345]
script
    LOGDIR=/usr/local/etc/myservice/logs/
    CFGFILE=/usr/local/etc/myservice/myservice.conf
    echo $$ > /var/run/myservice.pid
    # If there is no cronjob by the name myservice, then add a cronjob to the crontab
    set -x
    exec bash -c '
        if (( $(crontab -l | grep -c myservice) == 0 )); then
            (crontab -l ; echo "1 * * * * myservice") | crontab -
        fi'
end script
pre-start script
    set -x
    echo "[`date`] Starting myservice Service" >> /var/log/myservice.log
    # Testing to see if myservice has been installed, else exit
    [ -x /usr/local/bin/myservice ] || exit 0
    mkdir -p /usr/local/etc/myservice/logs/
end script
pre-stop script
    set -x
    echo "[`date`] Stopping myservice Service" >> /var/log/myservice.log
end script
post-stop script
    set -x
    rm /var/run/myservice.pid
    # If there is at least 1 cronjob by the name myservice, remove all such entries from crontab
    bash -c '
        if (( $(crontab -l | grep -c myservice) > 0 )); then
            (crontab -l | grep -v myservice) | crontab -
        fi'
    pkill -f myservice
end script
However, when I try to start the service, it hangs and I have to hit Ctrl+C to get the command line back. The same happens when stopping the service. Am I missing something here? Any help will be appreciated!
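For what it's worth, since this job has no long-running process of its own, one alternative (a sketch, untested, not from the original question) is an upstart "state" job: a job with no exec or script stanza stays in the running state once started, so the crontab bookkeeping can live entirely in pre-start and post-stop:
# /etc/init/myservice.conf
description "manage the myservice cron entry"
start on runlevel [2345]
stop on runlevel [!2345]
pre-start script
    if ! crontab -l | grep -q myservice; then
        (crontab -l; echo "1 * * * * myservice") | crontab -
    fi
end script
post-stop script
    crontab -l | grep -v myservice | crontab -
end script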

Grep process using initctl

Below is my initctl script. With it I am trying to bring up a process when the system comes up.
The problem is that initctl gives me a "job failed" status even though my process is up and running.
If I do ps -aef | grep process_name, will it work?
If so, how can I run ps after executing DAEMON?
console output
respawn
env DAEMON="./BRINGUP"
env PKILL="pkill BRINGUP"
pre-start script
    su -s /bin/sh -c "$DAEMON" 12345
end script
pre-stop script
    /bin/sh -c "$PKILL"
end script
script
    sleepWhileAppIsUp() {
        while pidof $1 >/dev/null; do
            sleep 1
        done
    }
    sleepWhileAppIsUp $DAEMON
end script
Thanks in advance:)
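As for the ps check the question asks about (an aside, not from the original thread): grepping ps output will match the grep process itself unless you guard against it, and note that pidof expects the bare program name, not ./BRINGUP. A small sketch:
# The [B] bracket trick keeps grep from matching its own command line
if ps -aef | grep -q '[B]RINGUP'; then
    echo "BRINGUP is running"
fi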

Run shell scripts in order

I have 3 commands that I am trying to run as a cron job when the system starts.
# Sleep at startup
sleep 2m
#command num 1:
./trace.out
sleep 5
#Command num 2:
java -jar file.jar
sleep 5
#Command num 3:
sh ./script.sh
Is there any way to make this script more efficient using a loop, and some way to make sure each script has run before executing the next one?
I would use && between the commands, since it executes each command only if the previous one succeeded. For example:
# Sleep at startup
sleep 2m
./trace.out && java -jar file.jar && sh ./script.sh
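Since the question asks for a loop, here is a bash sketch that runs the same commands in order and stops at the first failure (the command list is copied from the question; the error message is illustrative):
#!/bin/bash
# Sleep at startup
sleep 2m
# Run each command in order; abort if one fails.
# Unquoted $cmd relies on word splitting, which is safe here
# because none of the arguments contain spaces.
for cmd in './trace.out' 'java -jar file.jar' 'sh ./script.sh'; do
    $cmd || { echo "$cmd failed" >&2; exit 1; }
    sleep 5
done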

Can't start service with sudo since root user has no access to Ruby

tl;dr
Trying to run a service which needs Ruby to run. But Ruby is installed with RVM, which the root user can't seem to access, producing the error /usr/bin/env: ruby: No such file or directory. rvmsudo doesn't work.
Background
I have an init.d script which is supposed to start a unicorn server. I keep the script in the config directory of my rails application and symlink to it from /etc/init.d/busables_unicorn.
$ ls -l /etc/init.d/busables_unicorn
-> lrwxrwxrwx 1 root root 62 2012-01-12 15:02 busables_unicorn -> /home/dtuite/dev/rails/busables/current/config/unicorn_init.sh
This script (which is appended to the bottom) essentially just runs the following command:
$APP_ROOT/bin/unicorn -D -c $APP_ROOT/config/unicorn.rb -E production
where $APP_ROOT is the path to the root of my rails application. Every time that command is executed in that init.d script, it is supposed to run as the dtuite (my deploy) user. To accomplish that, I call su -c "$CMD" - dtuite rather than just $CMD.
bin/unicorn is a "binscript" which was generated by Bundler, and config/unicorn.rb contains some configuration options which are passed to it.
The unicorn binscript looks like this:
#!/usr/bin/env ruby
#
# This file was generated by Bundler.
#
# The application 'unicorn' is installed as part of a gem, and
# this file is here to facilitate running it.
#
require 'pathname'
ENV['BUNDLE_GEMFILE'] ||= File.expand_path("../../Gemfile",
Pathname.new(__FILE__).realpath)
require 'rubygems'
require 'bundler/setup'
load Gem.bin_path('unicorn', 'unicorn')
Now, I'm trying to start my unicorn service by running:
sudo service busables_unicorn start
That however produces the error:
/usr/bin/env: ruby: No such file or directory
I believe that this is happening because I'm running the service as the root user but RVM has installed ruby under the dtuite user's home directory and the root user has no access to it.
dtuite@localhost:$ which ruby
-> /home/dtuite/.rvm/rubies/ruby-1.9.3-p0/bin/ruby
dtuite@localhost:$ su
Password:
root@localhost:$ which ruby
root@localhost:$
Question
What do I need to do to make this work?
My Setup
- ubuntu 11.10
- ruby 1.9.3p0 (2011-10-30 revision 33570) [i686-linux]
- nginx: nginx version: nginx/1.0.5
What I've tried
rvmsudo
$ rvmsudo service busables_unicorn start
/usr/bin/env: ruby: No such file or directory
rvm-auto-ruby
$ sudo service cakes_unicorn start
-> [sudo] password for dtuite:
-> -su: /home/dtuite/dev/rails/cakes/current/bin/unicorn: rvm-auto-ruby: bad interpreter: No such file or directory
This other question may help but to be honest I don't really understand it.
Appendix
The busables_unicorn script in its entirety:
#!/bin/sh
# INFO: This file is based on the example found at
# https://github.com/defunkt/unicorn/blob/master/examples/init.sh
# Modifications are courtesy of Ryan Bates's Unicorn Railscast
# Install instructions:
#   sudo ln -s full-path-to-script /etc/init.d/APP_NAME_unicorn
# Once installed, an app's unicorn can be reloaded by running
#   sudo service APP_NAME_unicorn restart
set -e
# Example init script, this can be used with nginx, too,
# since nginx and unicorn accept the same signals
# Feel free to change any of the following variables for your app:
TIMEOUT=${TIMEOUT-60}
APP_ROOT=/home/dtuite/dev/rails/busables/current
PID=$APP_ROOT/tmp/pids/unicorn.pid
# In order to access this, we need to first run
# 'bundle install --binstubs'. This will fill our
# app/bin directory with loads of stubs for executables.
# This is the command that is run when we run this script:
CMD="$APP_ROOT/bin/unicorn -D -c $APP_ROOT/config/unicorn.rb -E production"
# we don't need an init config because this file does its job
action="$1"
set -u
old_pid="$PID.oldbin"
cd $APP_ROOT || exit 1
sig () {
    test -s "$PID" && kill -$1 `cat $PID`
}
oldsig () {
    test -s $old_pid && kill -$1 `cat $old_pid`
}
case $action in
start)
    sig 0 && echo >&2 "Already running" && exit 0
    # NOTE: We have to change all these lines.
    # Otherwise, the app will run as the root user
    su -c "$CMD" - dtuite
    ;;
stop)
    sig QUIT && exit 0
    echo >&2 "Not running"
    ;;
force-stop)
    sig TERM && exit 0
    echo >&2 "Not running"
    ;;
restart|reload)
    sig HUP && echo reloaded OK && exit 0
    echo >&2 "Couldn't reload, starting '$CMD' instead"
    su -c "$CMD" - dtuite
    ;;
upgrade)
    if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
    then
        n=$TIMEOUT
        while test -s $old_pid && test $n -ge 0
        do
            printf '.' && sleep 1 && n=$(( $n - 1 ))
        done
        echo
        if test $n -lt 0 && test -s $old_pid
        then
            echo >&2 "$old_pid still exists after $TIMEOUT seconds"
            exit 1
        fi
        exit 0
    fi
    echo >&2 "Couldn't upgrade, starting '$CMD' instead"
    su -c "$CMD" - dtuite
    ;;
reopen-logs)
    sig USR1
    ;;
*)
    echo >&2 "Usage: $0 <start|stop|restart|upgrade|force-stop|reopen-logs>"
    exit 1
    ;;
esac
It sounds like su isn't spawning a shell that reads the profile files that normally set up the RVM environment.
I'd try changing the command you run to:
source "/home/dtuite/.rvm/scripts/rvm" && $APP_ROOT/bin/unicorn...
Try adding your ruby path somewhere at the beginning of the start script, in an export statement like this:
export PATH=/home/dtuite/.rvm/rubies/ruby-1.9.3-p0/bin:$PATH
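Folding either suggestion back into the init script's start branch might look like this (a sketch; the RVM path is copied from the question, and su is given an explicit bash so that source works):
# instead of: su -c "$CMD" - dtuite
su -s /bin/bash -c "source /home/dtuite/.rvm/scripts/rvm && $CMD" - dtuite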
