setrlimit in Ruby

I am trying to limit the execution time of a ruby process using the following code:
trap("XCPU") do
abort "Max Time exceeded"
end
Process.setrlimit(:CPU, 5)
loop do
end
The process does end, but the trap code does not run (I just get 'killed' on the command line). However, when I set the hard limit to a value greater than 5, the trap code runs:
trap("XCPU") do
abort "Max Time exceeded"
end
Process.setrlimit(:CPU, 5, 6)
loop do
end
Why is the first code not working?

The XCPU signal (SIGXCPU) is only sent when the soft limit is exceeded. When the hard limit is reached, a KILL signal (SIGKILL) is sent instead. SIGKILL causes the program to terminate immediately and cannot be caught.
Taken from here:
The XCPU signal is sent to a process when it has used up the CPU for a
duration that exceeds a certain predetermined user-settable value. The
arrival of an XCPU signal provides the receiving process a chance to
quickly save any intermediate results and to exit gracefully, before
it is terminated by the operating system using the SIGKILL signal.
The KILL signal is sent to a process to cause it to terminate
immediately. In contrast to SIGTERM and SIGINT, this signal cannot be
caught or ignored, and the receiving process cannot perform any
clean-up upon receiving this signal.
When Process.setrlimit is called without an explicit hard limit (the third argument), the hard limit defaults to the same value as the soft limit. As such, at least on your operating system, it seems the SIGKILL is being sent before your trap block gets a chance to handle the SIGXCPU.
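A quick way to see this (a small sketch, not from the original question) is to inspect the limits with Process.getrlimit, which returns the [soft, hard] pair. Note that an unprivileged process cannot raise the hard limit again once it has been lowered, so run each snippet in a fresh process:
# Fresh process: the hard limit defaults to the soft limit
Process.setrlimit(:CPU, 5)
p Process.getrlimit(:CPU)   # => [5, 5]

# Another fresh process: soft and hard limits set separately
Process.setrlimit(:CPU, 5, 6)
p Process.getrlimit(:CPU)   # => [5, 6]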
Here is a quick demo to show why the second approach always works:
t = Time.now
trap("XCPU") do
  abort "Max Time exceeded. Total running time: #{(Time.now - t).round} seconds"
end
Process.setrlimit(:CPU, 2, 5)
loop do
end
# => "Max Time exceeded. Total running time: 2 seconds"
The hard limit is not reached before the trap block executes, so the code runs as you expect.
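If you only care about getting the SIGXCPU warning, another option (a sketch, assuming the process's default hard CPU limit is unlimited, which it usually is) is to set only the soft limit and leave the hard limit at infinity:
trap("XCPU") do
  abort "Max Time exceeded"
end
# A soft limit of 5 seconds triggers SIGXCPU; the hard limit stays at
# RLIM_INFINITY, so the kernel never follows up with SIGKILL.
Process.setrlimit(:CPU, 5, Process::RLIM_INFINITY)
loop do
end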

Related

How to make gevent sleep precise?

I'm developing a load testing tool with gevent.
I create a test script like the following:
while True:
    # send http request
    response = client.sendAndRecv()
    gevent.sleep(0.001)
The send/receive action completes very quickly, in about 0.1 ms, so the expected rate should be close to 1000 requests per second.
But in practice I get only about 500 per second, on both Ubuntu and Windows.
Most likely the gevent sleep is not accurate.
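A quick way to confirm this is to time the sleeps directly (a small sketch, not part of the original script; it assumes nothing beyond gevent itself):
import time
import gevent

# Time 1000 nominal 1 ms sleeps and report the average actual duration.
N = 1000
start = time.time()
for _ in range(N):
    gevent.sleep(0.001)
elapsed = time.time() - start
print("average gevent.sleep(0.001) took %.3f ms" % (elapsed / N * 1000))
If the average comes out near 2 ms, that matches the observed drop from 1000 to roughly 500 requests per second.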
Gevent uses libuv or libev for its internal loop, and I found the following description of how libuv computes the poll timeout here:
If the loop was run with the UV_RUN_NOWAIT flag, the timeout is 0.
If the loop is going to be stopped (uv_stop() was called), the timeout is 0.
If there are no active handles or requests, the timeout is 0.
If there are any idle handles active, the timeout is 0.
If there are any handles pending to be closed, the timeout is 0.
If none of the above cases matches, the timeout of the closest timer is taken, or if there are no active timers, infinity.
It seems that gevent.sleep actually sets up a timer, and the libuv loop uses the timeout of the closest timer.
I strongly suspect that is the root cause: the OS select/poll timeout is not precise!
I noticed the libuv loop can run in UV_RUN_NOWAIT mode, which makes the loop timeout 0, i.e. no sleeping when there is no I/O event.
It may drive one CPU core to 100% load, but that is acceptable to me.
So I modified the run call in gevent's hub.py as follows:
loop.run(nowait=True)
But when I ran the tool, I got the complaint 'This operation would block forever':
gevent.sleep(0.001)
File "C:\Python37\lib\site-packages\gevent\hub.py", line 159, in sleep
hub.wait(t)
File "src\gevent\_hub_primitives.py", line 46, in gevent.__hub_primitives.WaitOperationsGreenlet.wait
File "src\gevent\_hub_primitives.py", line 55, in gevent.__hub_primitives.WaitOperationsGreenlet.wait
File "src\gevent\_waiter.py", line 151, in gevent.__waiter.Waiter.get
File "src\gevent\_greenlet_primitives.py", line 60, in gevent.__greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src\gevent\_greenlet_primitives.py", line 60, in gevent.__greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src\gevent\_greenlet_primitives.py", line 64, in gevent.__greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src\gevent\__greenlet_primitives.pxd", line 35, in gevent.__greenlet_primitives._greenlet_switch
gevent.exceptions.LoopExit: This operation would block forever
So what should I do?
Yes, I finally found the trick.
If the libuv loop's run mode is not UV_RUN_DEFAULT, gevent does some extra checking, and if the loop is in 'nowait' mode it complains "This operation would block forever".
That's weird; it will not actually block forever.
Anyway, I just modified line 473 of the file libuv/loop.py as follows:
if mode == libuv.UV_RUN_DEFAULT:
    while self._ptr and self._ptr.data:
        self._run_callbacks()
        self._prepare_ran_callbacks = False
        # here, change from UV_RUN_ONCE to UV_RUN_NOWAIT
        ran_status = libuv.uv_run(self._ptr, libuv.UV_RUN_NOWAIT)
After that, I ran the load tool and, wow, it behaved exactly as I expected: the TPS is very close to what I set, but one core is at 100% load.
That is totally acceptable, because it is a load testing tool.
So if we had a real-time OS kernel, we would not need to bother with this.

subprocess32.Popen crashes (cpu 100%)

I have been trying to use subprocess32.Popen but this causes my system to crash (CPU 100%). So, I have the following code:
import subprocess32 as subprocess
for i in some_iterable:
    output = subprocess.Popen(['/path/to/sh/file/script.sh',i[0],i[1],i[2],i[3],i[4],i[5]],shell=False,stdin=None,stdout=None,stderr=None,close_fds=True)
Before this, I had the following:
import subprocess32 as subprocess
for i in some_iterable:
    output = subprocess.check_output(['/path/to/sh/file/script.sh',i[0],i[1],i[2],i[3],i[4],i[5]])
.. and I had no problems with this - except that it was dead slow.
With Popen I see that it is fast - but my CPU goes to 100% within a couple of seconds and the system crashes, forcing a hard reboot.
I am wondering what I am doing that makes Popen crash?
This is on Linux, Python 2.7, if that helps at all.
Thanks.
The problem is that you are trying to start 2 million processes at once, which is overwhelming your system.
A solution would be to use a pool to limit the maximum number of processes that can run at a time and wait for each one to finish. For cases like this, where you are starting subprocesses and waiting for them (IO bound), a thread pool from the multiprocessing.dummy module will do:
import multiprocessing.dummy as mp
import subprocess32 as subprocess

def run_script(args):
    args = ['/path/to/sh/file/script.sh'] + args
    process = subprocess.Popen(args, close_fds=True)
    # wait for exit and return the exit code
    # (or use check_output() instead of Popen() if you need to process the output)
    return process.wait()

# use a pool of 10 to allow at most 10 processes to be alive at a time
threadpool = mp.Pool(10)
# pool.imap or pool.imap_unordered should be used to avoid creating a list
# of all 2M return values in memory
results = threadpool.imap_unordered(run_script, some_iterable)
for result in results:
    ...  # process result if needed
I've left out most of the arguments to Popen because you are using the default values anyway. The size of the pool should probably be in the range of your available CPU cores if your script is doing computational work; if it is doing mostly IO (network access, writing files, ...), then it can probably be larger.
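One small follow-up (not in the original answer): once all the results have been consumed, it is good practice to close and join the pool so its worker threads shut down cleanly:
threadpool.close()   # no more tasks will be submitted
threadpool.join()    # wait for the worker threads to finish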

How can I use sleep for cursor method to avoid rate limiting?

I'm trying to get all of a user's follower IDs (75K+) without hitting the rate limit. I figured you could put a sleep call on the cursor to avoid making more than 15 calls per 15 minutes. Any idea how to do that? Thanks in advance. :)
I guess you are using the twitter gem to interact with the Twitter API. Your exact scenario is described in one of their wikis:
follower_ids = client.follower_ids('justinbieber')
begin
  follower_ids.to_a
rescue Twitter::Error::TooManyRequests => error
  # NOTE: Your process could go to sleep for up to 15 minutes but if you
  # retry any sooner, it will almost certainly fail with the same exception.
  sleep error.rate_limit.reset_in + 1
  retry
end
The idea is to simply sleep an amount of time if the rate limit has been reached, then retry the API call.
If you would like to avoid the rate limiting altogether, you can take limit - 1 elements from the returned cursor every x seconds. In your case, take 15 elements, then sleep for 15 minutes. Here's an example:
follower_ids = client.follower_ids('justinbieber')
loop do
  follower_ids.take(15)
  break if follower_ids.last?
  sleep 15 * 60 # 15 minutes
end

Getting Thread not to run until join in ruby

I am getting into Ruby and have been using threads for a little while now without fully understanding them. I notice that when I add a thread to an array and the thread's first statement is a sleep() call, the thread does not run until I do a join, which is mostly what I want. So I have two questions.
1. Is that supposed to happen?
2. Is there a better way to do it than the way I'm doing it? Here is some sample code showing what I'm talking about.
job = Array.new
10.times do |n|
  job << Thread.new do
    sleep 0.001
    puts "done #{n}"
  end
end
#job.each do |t|
#  t.join
#end
puts "End of script"
Output is
End of script
If I remove the comments, the output is
done 1
done 0
done 7
done 6
done 5
done 4
done 3
done 2
done 9
done 8
End of script
So I use this now, but I don't understand why it does that. Sometimes I notice that even running something like `echo hi` instead of sleep does the trick.
Thanks in advance.
Thread timing isn't defined behavior. Once you put threads to sleep, they are placed in a queue to be run later; you can never rely on them running one way or another.
Your main program doesn't take very long to run, so it is likely to finish before your other threads get picked back up to run again. Really, when you think about it, 0.001 seconds is quite a long time to a computer, so spinning off 10 threads in that window is entirely plausible. But even if the main thread takes longer, there is no guarantee a sleeping thread will resume immediately after 0.001 seconds. (There is generally no guarantee it won't start before 0.001 seconds either, but sleep calls usually don't end early.)
When you add the join calls, you are introducing additional time into your main thread which allows the other threads time to run, so this behavior is expected.
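A minimal way to see this for yourself (a sketch; the exact interleaving varies from run to run):
t = Thread.new do
  sleep 0.001
  puts "thread finished"
end
# Without this join the main thread usually reaches the end of the script
# first, the process exits, and the child thread is killed before it prints.
t.join
puts "End of script"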

Watir ... difference between sleep and wait

Is there any notable difference between
sleep 10
and
wait_until(10)
They both seem to do the same thing: wait 10 seconds, then proceed to the next step.
sleep just does nothing for the specified time. wait_until takes a block: it waits until the block evaluates to true, or until it times out. If no block is given, they act the same.
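For example (a sketch; the exact wait_until signature and element syntax depend on your Watir version):
# sleep always pauses for the full 10 seconds, whatever the page is doing.
sleep 10

# wait_until polls its block and returns as soon as the block is true,
# raising a timeout error only if 10 seconds pass without success.
wait_until(10) { browser.div(:id, "results").present? }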
