Why does my Heroku worker have 100% dyno load?

I have this worker process on Heroku that does some cleanup. It runs every two hours and listens for Heroku's terminate signals. It works fine, but I'm seeing 100% dyno load all the time.
My question is: how do I run this kind of worker process on Heroku without 100% dyno load? The loop is what causes the load, but what should I use instead of the infinite loop?
require 'rufus-scheduler'

# Scheduler here
cleanup = Rufus::Scheduler.new
cleanup.cron '* */2 * * *' do
  do_some_cleaning
end

# Signal trapping
terminate = false
Signal.trap("TERM") {
  terminate = true
  shut_down
  exit 0
}

# Infinite loop
while terminate == false
end

It's because you're running an infinite loop with no sleep in it. You're effectively telling the CPU to re-evaluate the loop condition on every single cycle, with nothing in between.
This will quickly use up your CPU.
Instead, try throwing a sleep statement into your infinite loop -- this will pause execution between iterations and bring your usage down to roughly 0% =)
while terminate == false
  sleep 1
end

I should have thought of it sooner. You can actually simply join rufus-scheduler's scheduling thread:
cleanup_scheduler = Rufus::Scheduler.new

cleanup_scheduler.cron '* */2 * * *' do
  do_some_cleaning
end

Signal.trap('TERM') do
  shut_down
  exit 0
end

cleanup_scheduler.join
That joins rufus-scheduler's scheduling thread and is roughly equivalent to:
while !terminated
  sleep 0.3
  trigger_schedules_if_any
end

Related

How do you monitor sidekiq processes?

I'm working on a production app that has multiple Rails servers behind an nginx load balancer. We monitor the sidekiq processes with monit, and it works just fine - when a sidekiq process dies, monit starts it right back up.
However, we recently ran into a situation where one of these processes was running and visible to monit, but for some reason not visible to sidekiq. That resulted in many failed jobs, and it took us some time to notice that we were missing one process in the sidekiq Web UI, since monit was telling us everything was fine and all processes were running. A simple restart fixed the problem.
And that brings me to my question: how do you monitor your sidekiq processes? I know I can use something like Rollbar to notify me when jobs fail, but I'd like to know if there is a way to monitor the process count and preferably send mail when one dies. Any suggestions?
Something that would ping sidekiq/stats and verify the response.
My super simple solution to a similar problem looks like this:
# sidekiq_check.rb
namespace :sidekiq_check do
  task rerun: :environment do
    if Sidekiq::ProcessSet.new.size == 0
      exec 'bundle exec sidekiq -d -L log/sidekiq.log -C config/sidekiq.yml -e production'
    end
  end
end
and then using cron/whenever
# schedule.rb
every 5.minutes do
  rake 'sidekiq_check:rerun'
end
We ran into this problem where our sidekiq processes had stopped working off jobs overnight and we had no idea. It took us about 30 minutes to integrate http://deadmanssnitch.com by following these instructions.
It's not the prettiest or cheapest option but it gets the job done (integrates nicely with Pagerduty) and has saved our butt twice in the last few months.
One of our complaints with the service is that the shortest grace interval we can set is 15 minutes, which is too long for us. So we're evaluating similar services like Healthchecks, etc.
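For reference, these snitch-style services generally work by having your job make an HTTP request to a unique check-in URL whenever it finishes; if the check-ins stop arriving within the grace interval, you get alerted. A rough sketch of what that can look like in a Sidekiq job (the job class, do_the_actual_work and the check-in URL are placeholders, not something from this answer):
require 'net/http'
require 'sidekiq'

class NightlyCleanupJob
  include Sidekiq::Worker

  def perform
    do_the_actual_work

    # Check in only after the work succeeded; if this job stops running,
    # the check-in never arrives and the monitoring service alerts you.
    Net::HTTP.get(URI('https://example-snitch-service/YOUR_CHECKIN_TOKEN'))
  end
end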
My approach is the following:
create a background job that does something
call the job regularly
check that the thing is being done!
So, using a cron script (or something like whenever), every 5 minutes I run:
CheckinJob.perform_later
It's now up to sidekiq (or delayed_job, or whatever Active Job backend you're using) to actually run the job.
The job just has to do something which you can check.
I used to get the job to update a record in my Status table (essentially a list of key/value records). Then I'd have a /status page which returns a 500 status code if the record hasn't been updated in the last 6 minutes.
(obviously your timing may vary)
Then I use a monitoring service to monitor the status page! (something like StatusCake)
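A minimal sketch of what that /status endpoint could look like, assuming a Status model with key and updated_at columns (the model, key name and threshold here are illustrative, not from the answer above):
# app/controllers/status_controller.rb
class StatusController < ApplicationController
  def show
    heartbeat = Status.find_by(key: 'background_job_heartbeat')

    # Healthy only if the background job has touched the record recently.
    if heartbeat && heartbeat.updated_at > 6.minutes.ago
      head :ok
    else
      head :internal_server_error
    end
  end
end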
Nowadays I have a simpler approach; I just get the background job to check in with a cron monitoring service like
IsItWorking
Dead Mans Snitch
Health Checks
The monitoring service expects your task to check in every X minutes. If your task doesn't check in, the monitoring service will let you know.
Integration is dead simple for all the services. For Is It Working it would be:
IsItWorkingInfo::Checkin.ping(key:"CHECKIN_IDENTIFIER")
Full disclosure: I wrote IsItWorking!
I use the god gem to monitor my sidekiq processes. God makes sure your process is always running and can also report the process status over various channels.
ROOT = File.dirname(File.dirname(__FILE__))

God.pid_file_directory = File.join(ROOT, "tmp/pids")

God.watch do |w|
  w.env = { 'RAILS_ENV' => ENV['RAILS_ENV'] || 'development' }
  w.name = 'sidekiq'
  w.start = "bundle exec sidekiq -d -L log/sidekiq.log -C config/sidekiq.yml -e #{ENV['RAILS_ENV']}"
  w.log = "#{ROOT}/log/sidekiq_god.log"
  w.behavior(:clean_pid_file)
  w.dir = ROOT
  w.keepalive

  w.restart_if do |restart|
    restart.condition(:memory_usage) do |c|
      c.interval = 120.seconds
      c.above = 100.megabytes
      c.times = [3, 5] # 3 out of 5 intervals
    end

    restart.condition(:cpu_usage) do |c|
      c.interval = 120.seconds
      c.above = 80.percent
      c.times = 5
    end
  end

  w.lifecycle do |on|
    on.condition(:flapping) do |c|
      c.to_state = [:start, :restart]
      c.times = 5
      c.within = 5.minute
      c.transition = :unmonitored
      c.retry_in = 10.minutes
      c.retry_times = 5
      c.retry_within = 1.hours
    end
  end
end

Ruby: running external processes in parallel and keeping track of exit codes

I have a smoke test that I run against my servers before making them live. At the moment it runs serially and takes around 60s per server. I can run these in parallel, and I've done it with Thread.new, which is great since it runs them a lot faster, but I lose track of whether the tests actually passed or not.
I'm trying to improve this by using Process.spawn to manage my processes.
pids = []

uris.each do |uri|
  command = get_http_tests_command("Smoke")
  update_http_tests_config(uri)

  # spawn the command itself; `system(command)` would run it synchronously
  # and hand a boolean to spawn
  pid = Process.spawn(command)
  pids.push pid
  Process.detach pid
end

# make sure all pids return a passing status code
# results = Process.waitall
I'd like to kick off all my tests and then, afterwards, make sure that every test returned a passing status code.
I tried using Process.waitall, but I believe that to be incorrect and intended for forks, not spawns.
After all the processes have completed, I'd like to return true if they all pass, or false if any one of them fails.
Documentation here
Try:
statuses = pids.map { |pid| Process.wait(pid, 0); $? }
This waits for each of the process IDs to finish and checks the result status set in $? for each process.
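To reduce that to the single true/false the question asks for, Process::Status responds to success?, so something like this sketch works (note: if you keep the Process.detach call from the question, the detached pid will already be reaped and Process.wait will raise Errno::ECHILD, so drop the detach if you plan to wait yourself):
# Collect one Process::Status per spawned pid, then reduce to a boolean.
statuses = pids.map do |pid|
  Process.wait(pid, 0)
  $?
end

all_passed = statuses.all?(&:success?) # true only if every exit status was 0
puts all_passed ? "all smoke tests passed" : "at least one smoke test failed"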

rlimit_cpu not working on spawn call

I am trying to create a function that will execute a process and will kill it after 2 seconds:
def execute(command, input_file, output_file)
  pid = Kernel.spawn(command, {
    STDIN  => input_file,
    STDOUT => output_file,
    :rlimit_cpu   => [2, 2], # 2 secs
    :rlimit_nproc => 0,
    :rlimit_as    => 16 * 1024 * 1024
  })
  Process.wait(pid)
  puts "exit status = " + $?.exitstatus.to_s
  return File.read(output_file)
end
I tested this function with short-running processes and long-running processes (with a "sleep" call). The command always completes. I need the spawn call to kill the command after 2 seconds using rlimit_cpu. How can I do that?
EDIT: it seems rlimit_cpu does not work the way I thought. According to this question:
The CPU limit is a limit on CPU seconds rather than elapsed time
Also:
When you do the fib call, you hammer the CPU so that elapsed and CPU time are close (most of the process time is spent using the CPU). That's not the case when printing since most time there is spent in I/O.
I will use another approach, since I need to kill the process regardless of whether it is CPU-bound or IO-bound.
To kill the process after two seconds, run a timer for two seconds, and then try to call Process.kill on the child.
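A minimal sketch of that approach, reusing the same redirections as the question but handling the wall-clock timeout in the parent instead of via rlimit_cpu (the method name and the hard KILL signal are choices made here for illustration):
def execute_with_timeout(command, input_file, output_file, timeout = 2)
  pid = Kernel.spawn(command, STDIN => input_file, STDOUT => output_file)

  # Watchdog thread: if the child is still alive after `timeout` seconds,
  # kill it. Errno::ESRCH just means it has already exited.
  watchdog = Thread.new do
    sleep timeout
    begin
      Process.kill("KILL", pid)
    rescue Errno::ESRCH
    end
  end

  Process.wait(pid)
  watchdog.kill

  puts "exit status = #{$?.exitstatus.inspect}" # nil if the child was killed by a signal
  File.read(output_file)
end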

Stop very long running delayed jobs

I have some jobs which take a very, very long time to finish -- usually 1-3 hours.
I use the gem "daemons" to daemonize the delayed workers.
If I run "script/delayed_job stop" to stop them, they won't stop until the workers finish...
Is there a nice and simple solution to stop the jobs?
I have managed to put one together, but any other suggestions are also welcome.
The idea is to:
trap the "TERM" signal (because daemons sends "TERM" to its spawned processes) and save the old handler
when the "TERM" signal comes, finish the current job (but don't quit yet)
after finishing the job, set back the old handler for "TERM"
send "TERM" to the process I am in (the old handler will be called)
Here is the code:
# `old_trap` and `should_terminate` are class-level accessors on the job class
def self.prepare_to_be_stopped_by_daemons
  self.old_trap = Signal.trap("TERM") do
    self.should_terminate = true
  end
end

def check_the_stop_signal_by_daemons
  if self.class.should_terminate
    Delayed::Worker.logger.debug "Asked to be TERMINATED. Calling the old trap." if Delayed::Worker.logger
    self.class.restore_old_trap
    Process.kill("TERM", Process.pid)
    raise StopCrawlingException.new
  end
end

def self.restore_old_trap
  Signal.trap("TERM", old_trap)
end
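For context, here is a rough sketch of how those methods could be wired into a long-running job; the job class, items_to_crawl and crawl_one are made up for illustration, only the three methods above come from the actual code:
class CrawlingJob
  # ... prepare_to_be_stopped_by_daemons, check_the_stop_signal_by_daemons
  # and restore_old_trap from above live here ...

  def perform
    self.class.prepare_to_be_stopped_by_daemons

    items_to_crawl.each do |item|
      crawl_one(item)
      # Between items, see whether daemons asked us to stop; this re-sends
      # TERM to the old handler and raises StopCrawlingException.
      check_the_stop_signal_by_daemons
    end
  rescue StopCrawlingException
    # stopped cleanly between two items
  end
end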

Why is there a difference in the output of the ruby script?

I have the following ruby scripts.
rubyScript.rb:
require "rScript"
t1 = Thread.new{LongRunningOperation(); puts "doneLong"}
sleep 1
shortOperation()
puts "doneShort"
t1.join
rScript.rb:
def LongRunningOperation()
  puts "In LongRunningOperation method"
  for i in 0..100000
  end
  return 0
end

def shortOperation()
  puts "IN shortOperation method"
  return 0
end
The output of the above script (i.e. ruby rubyScript.rb):
1) With the sleep call:
In LongRunningOperation method
doneLong
IN shortOperation method
doneShort
2) Without the sleep call (i.e. removing it):
In LongRunningOperation method
IN shortOperation method
doneShort
doneLong
Why is there a difference in the output? What does sleep do in the above case? Thanks in advance.
The sleep lets the main thread sleep for 1 second.
Your long-running function runs longer than your short-running function, but it still takes less than one second.
If you remove the sleep, your long-running function starts in a new thread and the main thread continues without any wait. It then starts the short-running function, which finishes almost immediately, while the long-running function is still running.
With the sleep in place, it goes as follows:
Your long-running function starts in a new thread and the main thread continues. The main thread then hits the sleep and waits for 1 second. During that second the long-running function in the other thread finishes. After the sleep, the main thread continues and starts the short-running function.
sleep 1 makes the current thread sleep (i.e. do nothing) for one second. So LongRunningOperation (which, despite being a long-running operation, still takes less than a second) has enough time to finish before shortOperation even starts.
sleep 1
This makes the main thread wait for 1 second, which allows t1 to finish before shortOperation is executed.
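As a side note, if the goal is to guarantee that the long operation finishes before the short one starts, joining the thread is more reliable than guessing a sleep duration. A small sketch using the same two methods:
require_relative "rScript"

t1 = Thread.new { LongRunningOperation(); puts "doneLong" }

# Wait for the long-running operation to finish instead of sleeping for a
# guessed amount of time, then run the short operation.
t1.join

shortOperation()
puts "doneShort"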
