I have a smoke test that I run against my servers before making them live. At the moment it runs serially and takes around 60s per server. I can run these in parallel, and I've done it with Thread.new, which runs them a lot faster, but I lose track of whether each test actually passed or not.
I'm trying to improve this by using Process.spawn to manage my processes.
pids = []
uris.each do |uri|
  command = get_http_tests_command("Smoke")
  update_http_tests_config( uri )
  pid = Process.spawn( system( command ) )
  pids.push pid
  Process.detach pid
end
# make sure all pids return a passing status code
# results = Process.waitall
I'd like to kick off all my tests but then afterwards make sure that all the tests return a passing status code.
I tried using Process.waitall but I believe that to be incorrect and used for forks, not spawns.
After all the processes have completed, I'd like to return a status of true if they all pass, or false if any one of them fails.
Try:
statuses = pids.map { |pid| Process.wait(pid, 0); $? }
This waits for each of the process IDs to finish and checks the result status set in $? for each process.
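Putting it together, a minimal sketch of the whole flow (assuming get_http_tests_command and update_http_tests_config behave as in the question; the nested system call and the Process.detach are dropped so the parent can still wait on its children):

pids = uris.map do |uri|
  update_http_tests_config( uri )
  command = get_http_tests_command("Smoke")
  Process.spawn(command)   # spawn takes the command string itself
end

statuses = pids.map do |pid|
  Process.wait(pid)        # block until this child exits
  $?                       # Process::Status of the child just reaped
end

all_passed = statuses.all?(&:success?)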
I have this worker process in Heroku, which does some cleaning. It runs every two hours and listens to Heroku terminate signals. It works fine, but I'm seeing 100% dyno load all the time.
My question is: How to run this kind of worker process in Heroku without 100% dyno load? The loop causes the dyno load, but what to use instead of the infinite loop?
# Scheduler here
cleanup = Rufus::Scheduler.new
cleanup.cron '0 */2 * * *' do # every two hours
  do_some_cleaning
end

# Signal trapping
terminate = false
Signal.trap("TERM") {
  terminate = true
  shut_down
  exit 0
}

# Infinite loop
while terminate == false
end
It's because you're running an infinite loop with no sleep in it. That tells the CPU to re-evaluate the loop condition on every single cycle, as fast as it can.
This will quickly use up your CPU.
Instead, try throwing a sleep statement into your infinite loop -- this will pause execution and bring your usage down to 0% =)
while terminate == false
  sleep 1
end
I should have thought about it sooner. You can actually simply join rufus-scheduler's loop:
cleanup_scheduler = Rufus::Scheduler.new
cleanup_scheduler.cron '0 */2 * * *' do # every two hours
  do_some_cleaning
end

Signal.trap('TERM') do
  shut_down
  exit 0
end
cleanup_scheduler.join
That joins rufus-scheduler's scheduling thread and is roughly equivalent to:
while !terminated
  sleep 0.3
  trigger_schedules_if_any
end
I'm working on a production app that has multiple Rails servers behind an nginx load balancer. We are monitoring Sidekiq processes with monit, and it works just fine: when a Sidekiq process dies, monit starts it right back up.
However, we recently encountered a situation where one of these processes was running and visible to monit, but for some reason not visible to Sidekiq. That resulted in many failed jobs, and it took us some time to notice that we were missing one process in the Sidekiq Web UI, since monit was telling us everything was fine and all processes were running. A simple restart fixed the problem.
And that brings me to my question: how do you monitor your Sidekiq processes? I know I can use something like Rollbar to notify me when jobs fail, but I'd like to know if there is a way to monitor the process count and, preferably, send mail when one dies. Any suggestions?
Something that would ping sidekiq/stats and verify the response.
My super simple solution to a similar problem looks like this:
# sidekiq_check.rb
namespace :sidekiq_check do
  task rerun: :environment do
    if Sidekiq::ProcessSet.new.size == 0
      exec 'bundle exec sidekiq -d -L log/sidekiq.log -C config/sidekiq.yml -e production'
    end
  end
end
and then using cron/whenever
# schedule.rb
every 5.minutes do
  rake 'sidekiq_check:rerun'
end
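If you also want an email when the process count drops (rather than only restarting), the same check can send a notification first. A sketch, where AdminMailer.sidekiq_down is a hypothetical mailer you would define yourself:

namespace :sidekiq_check do
  task alert: :environment do
    if Sidekiq::ProcessSet.new.size == 0
      AdminMailer.sidekiq_down.deliver_now  # hypothetical mailer -- swap in your own notifier
      exec 'bundle exec sidekiq -d -L log/sidekiq.log -C config/sidekiq.yml -e production'
    end
  end
end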
We ran into this problem where our sidekiq processes had stopped working off jobs overnight and we had no idea. It took us about 30 minutes to integrate http://deadmanssnitch.com by following these instructions.
It's not the prettiest or cheapest option but it gets the job done (integrates nicely with Pagerduty) and has saved our butt twice in the last few months.
One of our complaints with the service is that the shortest grace interval we can set is 15 minutes, which is too long for us. So we're evaluating similar services like Healthchecks, etc.
My approach is the following:
create a background job that does something
call the job regularly
check that the thing is being done!
So, using a cron script (or something like whenever), every 5 minutes I run:
CheckinJob.perform_later
It's now up to Sidekiq (or delayed_job, or whatever Active Job backend you're using) to actually run the job.
The job just has to do something which you can check.
I used to have the job update a record in my Status table (essentially a list of key/value records). Then I'd have a /status page which returns a 500 status code if the record hasn't been updated in the last 6 minutes.
(obviously your timing may vary)
Then I use a monitoring service to monitor the status page! (something like StatusCake)
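A rough sketch of that older setup (the Status model, the heartbeat key and the route are assumptions here, not any particular library):

# app/jobs/checkin_job.rb
class CheckinJob < ApplicationJob
  def perform
    # record that background processing is alive
    Status.find_or_create_by(key: 'heartbeat').update(value: Time.current.iso8601)
  end
end

# app/controllers/status_controller.rb
class StatusController < ApplicationController
  def show
    beat = Status.find_by(key: 'heartbeat')
    if beat && Time.parse(beat.value) > 6.minutes.ago
      head :ok
    else
      head :internal_server_error   # the monitoring service alerts on the 500
    end
  end
end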
Nowadays I have a simpler approach; I just get the background job to check in with a cron monitoring service like:
IsItWorking
Dead Man's Snitch
Healthchecks
The monitoring service expects your task to check in every X minutes. If your task doesn't check in, the monitoring service will let you know.
Integration is dead simple for all the services. For Is It Working it would be:
IsItWorkingInfo::Checkin.ping(key:"CHECKIN_IDENTIFIER")
Full disclosure: I wrote IsItWorking!
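Putting the pieces together, the checkin job itself can be as small as this (a sketch; the CHECKIN_IDENTIFIER key is whatever your monitoring service issues you):

class CheckinJob < ApplicationJob
  queue_as :default

  def perform
    # if Sidekiq stops running jobs, this ping stops too and the service alerts
    IsItWorkingInfo::Checkin.ping(key: ENV.fetch('CHECKIN_IDENTIFIER'))
  end
end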
I use the god gem to monitor my Sidekiq processes. God makes sure that your process is always running and can also notify you about the process status on various channels.
ROOT = File.dirname(File.dirname(__FILE__))

God.pid_file_directory = File.join(ROOT, "tmp/pids")

God.watch do |w|
  w.env = {'RAILS_ENV' => ENV['RAILS_ENV'] || 'development'}
  w.name = 'sidekiq'
  w.start = "bundle exec sidekiq -d -L log/sidekiq.log -C config/sidekiq.yml -e #{ENV['RAILS_ENV']}"
  w.log = "#{ROOT}/log/sidekiq_god.log"
  w.behavior(:clean_pid_file)
  w.dir = ROOT
  w.keepalive

  w.restart_if do |restart|
    restart.condition(:memory_usage) do |c|
      c.interval = 120.seconds
      c.above = 100.megabytes
      c.times = [3, 5] # 3 out of 5 intervals
    end

    restart.condition(:cpu_usage) do |c|
      c.interval = 120.seconds
      c.above = 80.percent
      c.times = 5
    end
  end

  w.lifecycle do |on|
    on.condition(:flapping) do |c|
      c.to_state = [:start, :restart]
      c.times = 5
      c.within = 5.minute
      c.transition = :unmonitored
      c.retry_in = 10.minutes
      c.retry_times = 5
      c.retry_within = 1.hours
    end
  end
end
I sometimes get this warning when using Parallel::ForkManager, but only on Windows, not on a Unix-based system. What does it mean, and should I worry about it?
child process '-17108' disappeared. A call to waitpid outside of
Parallel::ForkManager might have reaped it.
Here is the sample code from the docs that my code is based on:
use LWP::Simple;
use Parallel::ForkManager;

my @links = (
  ["http://www.foo.bar/rulez.data", "rulez_data.txt"],
  ["http://new.host/more_data.doc", "more_data.doc"],
);

# Max 30 processes for parallel download
my $pm = Parallel::ForkManager->new(30);

LINKS:
foreach my $linkarray (@links) {
  $pm->start and next LINKS; # do the fork

  my ($link, $fn) = @$linkarray;
  warn "Cannot get $fn from $link"
    if getstore($link, $fn) != RC_OK;

  $pm->finish; # do the exit in the child process
}
$pm->wait_all_children;
I had a similar issue, and placing a sleep 1 before $pm->start and next LINKS; fixed it. I guess it's due to continuous forking, where Perl loses track of the forked processes. I may be wrong!
I have the following Ruby scripts.
rubyScript.rb:
require "rScript"
t1 = Thread.new{LongRunningOperation(); puts "doneLong"}
sleep 1
shortOperation()
puts "doneShort"
t1.join
rScript.rb:
def LongRunningOperation()
  puts "In LongRunningOperation method"
  for i in 0..100000
  end
  return 0
end

def shortOperation()
  puts "IN shortOperation method"
  return 0
end
The output of the above script (ruby rubyScript.rb):
1) With the sleep call:
In LongRunningOperation method
doneLong
IN shortOperation method
doneShort
2) Without the sleep call, i.e. after removing it (ruby rubyScript.rb):
In LongRunningOperation method
IN shortOperation method
doneShort
doneLong
Why is there a difference in the output? What does sleep do in the above case? Thanks in advance.
The sleep lets the main thread sleep for 1 second.
Your long-running function runs longer than your short-running function, but it still takes less than one second.
If you remove the sleep, your long-running function starts in a new thread and the main thread continues without any wait. It then starts the short-running function, which finishes almost immediately, while the long-running function is still running.
With the sleep in place, it goes as follows:
Your long-running function starts in a new thread and the main thread continues. The main thread then encounters the sleep command and waits for 1 second. In that time the long-running function in the other thread is still running, and it finishes. The main thread continues after its sleep and starts the short-running function.
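If the goal is to guarantee that the long operation finishes before the short one starts, joining the thread is more reliable than an arbitrary sleep. A minimal sketch based on the scripts above:

t1 = Thread.new { LongRunningOperation(); puts "doneLong" }
t1.join          # wait for the long operation instead of sleeping
shortOperation()
puts "doneShort"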
sleep 1 makes the current thread sleep (i.e. do nothing) for one second. So LongRunningOperation (which, despite the name, still takes less than a second) has enough time to finish before shortOperation even starts.
sleep 1
makes the main thread wait for 1 second, which allows t1 to finish before shortOperation is executed.
How can I check from Ruby whether a process with a given pid is running? If there is more than one way, please list them. I only know of one, but I'm wondering if there is a cleaner, in-Ruby way.
The difference between the Process.getpgid and Process::kill approaches seems to be what happens when the pid exists but is owned by another user: Process.getpgid will return an answer, while Process::kill will raise an exception (Errno::EPERM).
Based on that, I recommend Process.getpgid, if just for the reason that it saves you from having to catch two different exceptions.
Here's the code I use:
begin
  Process.getpgid( pid )
  true
rescue Errno::ESRCH
  false
end
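For comparison, a kill(0)-based check has to treat Errno::EPERM as "the process exists but isn't yours", which is the extra exception mentioned above. A sketch:

begin
  Process.kill(0, pid)
  true
rescue Errno::EPERM
  true    # process exists, but we aren't allowed to signal it
rescue Errno::ESRCH
  false   # no such process
end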
If it's a process you expect to "own" (e.g. you're using this to validate a pid for a process you control), you can just send signal 0 to it.
>> Process.kill 0, 370
=> 1
>> Process.kill 0, 2
Errno::ESRCH: No such process
from (irb):5:in `kill'
from (irb):5
>>
@John T, @Dustin: Actually, guys, I perused the Process rdocs, and it looks like
Process.getpgid( pid )
is a less violent means of applying the same technique.
For child processes, other solutions like sending a signal won't behave as expected: they will indicate that the process is still running when it has actually exited, because the zombie entry sticks around until the parent reaps it.
You can use Process.waitpid if you want to check on a process that you spawned yourself. The call won't block if you pass the Process::WNOHANG flag, and nil is returned as long as the child process hasn't exited.
Example:
pid = Process.spawn('sleep 5')
Process.waitpid(pid, Process::WNOHANG) # => nil
sleep 5
Process.waitpid(pid, Process::WNOHANG) # => pid
If the pid doesn't belong to a child process, an exception will be thrown (Errno::ECHILD: No child processes).
The same applies to Process.waitpid2.
This is how I've been doing it:
def alive?(pid)
  !!Process.kill(0, pid) rescue false
end
You can try using
Process::kill 0, pid
where pid is the process id; if the process is running, it should return 1.
Under Linux you can obtain a lot of attributes of a running program using the proc filesystem:
File.read("/proc/#{pid}/cmdline")
File.read("/proc/#{pid}/comm")
A *nix-only approach would be to shell out to ps and check whether a \n (newline) delimiter exists in the returned string.
Example IRB Output
1.9.3p448 :067 > `ps -p 56718`
" PID TTY TIME CMD\n56718 ttys007 0:03.38 zeus slave: default_bundle \n"
Packaged as a Method
def process?(pid)
  !!`ps -p #{pid.to_i}`["\n"]
end
I've dealt with this problem before and yesterday I compiled it into the "process_exists" gem.
It sends the null signal (0) to the process with the given pid to check if it exists. It works even if the current user does not have permissions to send the signal to the receiving process.
Usage:
require 'process_exists'
pid = 12
pid_exists = Process.exists?(pid)