Sub-millisecond time measurement in Ruby

I'm running Ruby 2.2 in RubyMine 8.0.3.
My machine is running Windows 7 Pro with an Intel Core i7-4710MQ.
I've been able to achieve ~411 ns precision with C++, Java, Python and JS on this machine, but can't seem to find a way to attain this performance in Ruby, as the built-in Time library appears to be good for millisecond precision only.
I can program my tests to tolerate this reduced precision, but is it possible to incorporate the Windows QPC API for improved evaluation of execution time?
My test code for determining clock tick precision is below:
numTimes = 10000
times = Array.new(numTimes)
(0...numTimes).each do |i|
  times[i] = Time.new
end

durations = []
(0...(numTimes - 1)).each do |i|
  durations[i] = times[i + 1] - times[i]
end

# Output a duration only if the clock ticked over
durations.each do |duration|
  if duration != 0
    p duration.to_s + ','
  end
end
The code below incorporates the QPC API as found here:
require "Win32API"
QueryPerformanceCounter = Win32API.new("kernel32",
"QueryPerformanceCounter", 'P', 'I')
QueryPerformanceFrequency = Win32API.new("kernel32",
"QueryPerformanceFrequency", 'P', 'I')
def get_ticks
tick = ' ' * 8
get_ticks = QueryPerformanceCounter.call(tick)
tick.unpack('q')[0]
end
def get_freq
freq = ' ' * 8
get_freq = QueryPerformanceFrequency.call(freq)
freq.unpack('q')[0]
end
def get_time_diff(a, b)
# This function takes two QPC ticks
(b - a).abs.to_f / (get_freq)
end
numTimes = 10000
times = Array.new(numTimes)
(0...(numTimes)).each do |i|
times[i] = get_ticks
end
durations = []
(0...(numTimes - 1)).each do |i|
durations[i] = get_time_diff(times[i+1], times[i])
end
durations.each do |duration|
p (duration * 1000000000).to_s + ','
end
This code returns durations between ticks of ~22-75 microseconds on my machine

You can get higher precision by using Process::clock_gettime:
Returns a time returned by POSIX clock_gettime() function.
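In its simplest form it returns a monotonic timestamp in seconds as a Float (a minimal sketch; the printed value is illustrative):
Process.clock_gettime(Process::CLOCK_MONOTONIC)
#=> 1234.567890123 (seconds since an arbitrary, monotonically increasing starting point)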
Here's an example with Time.now
times = Array.new(1000) { Time.now }
durations = times.each_cons(2).map { |a, b| b - a }
durations.sort.group_by(&:itself).each do |time, elements|
  printf("%5d ns x %d\n", time * 1_000_000_000, elements.count)
end
Output:
0 ns x 686
1000 ns x 296
2000 ns x 12
3000 ns x 2
12000 ns x 2
18000 ns x 1
And here's the same example with Process.clock_gettime:
times = Array.new(1000) { Process.clock_gettime(Process::CLOCK_MONOTONIC) }
Output:
163 ns x 1
164 ns x 1
164 ns x 9
165 ns x 6
165 ns x 22
166 ns x 39
166 ns x 174
167 ns x 13
167 ns x 129
168 ns x 95
168 ns x 32
169 ns x 203
169 ns x 141
170 ns x 23
170 ns x 37
171 ns x 30
171 ns x 3
172 ns x 24
172 ns x 10
174 ns x 1
175 ns x 2
180 ns x 1
194 ns x 1
273 ns x 1
2565 ns x 1
And here's a quick side-by-side comparison:
array = Array.new(12) { [Time.now, Process.clock_gettime(Process::CLOCK_MONOTONIC)] }
array.shift(2) # first elements are always inaccurate
base_t, base_p = array.first # baseline
printf("%-11.11s %-11.11s\n", 'Time.now', 'Process.clock_gettime')
array.each do |t, p|
  printf("%.9f %.9f\n", t - base_t, p - base_p)
end
Output:
Time.now Process.clo
0.000000000 0.000000000
0.000000000 0.000000495
0.000001000 0.000000985
0.000001000 0.000001472
0.000002000 0.000001960
0.000002000 0.000002448
0.000003000 0.000002937
0.000003000 0.000003425
0.000004000 0.000003914
0.000004000 0.000004403
This is Ruby 2.3 on OS X running on an Intel Core i7, not sure about Windows.
To avoid precision loss due to floating point conversion, you can specify another unit, e.g.:
Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond)
#=> 191519383463873
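For timing a block of code, the integer unit avoids float rounding entirely (a minimal sketch; the work being timed and the printed value are illustrative):
start = Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond)
1_000.times { Math.sqrt(rand) }   # arbitrary work to measure
finish = Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond)
puts "elapsed: #{finish - start} ns"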

Another option is Time#nsec:
numTimes = 10000
times = Array.new(numTimes)
(0...numTimes).each do |i|
  #             nsec ⇓⇓⇓⇓
  times[i] = Time.new.nsec
end

durations = (0...(numTimes - 1)).inject([]) do |memo, i|
  memo << times[i + 1] - times[i]
end

puts durations.reject(&:zero?).join $/

Ruby Time objects store the number of nanoseconds since the epoch.
Since Ruby 1.9.2, Time implementation uses a signed 63 bit integer, Bignum or Rational. The integer is a number of nanoseconds since the Epoch which can represent 1823-11-12 to 2116-02-20.
You can access the nanosecond part most accurately with Time#nsec.
$ ruby -e 't1 = Time.now; puts t1.to_f; puts t1.nsec'
1457079791.351686
351686000
As you can see, on my OS X machine it's only precise down to the microsecond. This could be because OS X lacks clock_gettime().
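If you need a single full-resolution integer timestamp, you can combine the two parts yourself (a small illustrative sketch):
t = Time.now
ns_since_epoch = t.to_i * 1_000_000_000 + t.nsec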

Related

Inverse of cumsum in Julia

The matrix Y is defined as
Y = cumsum(cumsum(X,dims=1), dims=2)
For example,
julia> X = [1 4 2 3; 2 4 5 2; 4 3 4 1; 2 5 4 2];
julia> Y = cumsum(cumsum(X,dims=1), dims=2)
4×4 Matrix{Int64}:
 1   5   7  10
 3  11  18  23
 7  18  29  35
 9  25  40  48
I want to reproduce the matrix X from Y. It seems that the function diff is helpful. However, as you can see below, we cannot reproduce the first row and first column of X.
julia> diff(diff(Y, dims=1), dims=2)
3×3 Matrix{Int64}:
 4  5  2
 3  4  1
 5  4  2
So, I concatenate zeros. Then, it works.
julia> Y00 = vcat(zeros(Int, 5)', hcat(zeros(Int, 4), Y))
5×5 Matrix{Int64}:
 0  0   0   0   0
 0  1   5   7  10
 0  3  11  18  23
 0  7  18  29  35
 0  9  25  40  48
julia> diff(diff(Y00, dims=1), dims=2)
4×4 Matrix{Int64}:
 1   5   7  10
 3  11  18  23
 7  18  29  35
 9  25  40  48
But I think concatenating takes time and memory.
Is there any better idea to reproduce X from Y?
Context
I want to expand the above matrices X and Y to arrays of any dimension. For example, I want to reconstruct a three-dimensional array X from a given three-dimensional array
Y = cumsum( cumsum( cumsum(X, dims=1), dims=2), dims=3)
When both speed and succinctness are required, it's hard to beat powerful Julia packages like Tullio.jl. Here is a one-liner that's about 4X faster than the fastest solution by @DanGetz.
using Tullio
cumdiff(Y) = @tullio X[i,j] = Y[i,j] - Y[i,j-1] - Y[i-1,j] + Y[i-1,j-1]
Benchmarking with a 100-by-100 matrix gives:
X = rand(0:100,100,100)
Y = cumsum(cumsum(X,dims=1), dims=2)
@btime cumdiff($Y)
@btime decumsum3($Y)
4.957 μs (17 allocations: 464 bytes)
21.300 μs (2 allocations: 78.17 KiB)
Fix: The code above was using the predefined X instead of creating a new one. This is fixed below, and the speedup is more like 3.5X and not 4X.
function cumdiff(Y)
    X = similar(Y)
    X[1] = Y[1]
    for i = 2:size(Y,1) X[i,1] = Y[i,1] - Y[i-1,1] end
    for j = 2:size(Y,2) X[1,j] = Y[1,j] - Y[1,j-1] end
    @tullio X[i,j] = Y[i,j] - Y[i,j-1] - Y[i-1,j] + Y[i-1,j-1]
end

@btime cumdiff($Y)
@btime decumsum3($Y)
6.000 μs (4 allocations: 78.23 KiB)
21.300 μs (2 allocations: 78.17 KiB)
See EDIT section below.
Some options so far:
decumsum1(X) = begin
    Z = copy(X)
    Z[2:end,:] .-= Z[1:end-1,:]
    Z[:,2:end] .-= Z[:,1:end-1]
    return Z
end

decumsum2(X) = begin   # This is the approach from the question
    r, c = size(X)
    Z = vcat(zeros(eltype(X), c+1)',
             hcat(zeros(eltype(X), r), X))
    return diff(diff(Z, dims=1), dims=2)
end

decumsum3(Y) = [Y[I] - (I[2]==1 ? 0 : Y[I[1],I[2]-1]) -
                       (I[1]==1 ? 0 : Y[I[1]-1,I[2]]) +
                       ((I[1]==1 || I[2]==1) ? 0 : Y[I[1]-1,I[2]-1])
                for I in CartesianIndices(Y)]

function decumsum5(Y)
    R = similar(Y)
    h, w = size(Y)
    R[1,1] = Y[1,1]
    @inbounds for i=2:h R[i,1] = Y[i,1]-Y[i-1,1] ; end
    @inbounds for j=2:w R[1,j] = Y[1,j]-Y[1,j-1] ; end
    @inbounds for i=2:h, j=2:w R[i,j] = Y[i,j]-Y[i-1,j]-Y[i,j-1]+Y[i-1,j-1] ; end
    return R
end
Giving the following benchmarks:
julia> using BenchmarkTools
julia> decumsum1(Y) == decumsum2(Y) == decumsum3(Y) == X
true
julia> @btime decumsum1($Y);
  352.571 ns (5 allocations: 832 bytes)
julia> @btime decumsum2($Y);
  475.438 ns (9 allocations: 1.14 KiB)
julia> @btime decumsum3($Y);
  96.875 ns (1 allocation: 192 bytes)
julia> @btime decumsum5($Y);
  60.805 ns (1 allocation: 192 bytes)
EDIT: Perhaps the prettier solution is:
decumsum(Y; dims) = [Y[I] - (
        I[dims]==1 ? 0 : Y[(ifelse(k == dims, I[k]-1, I[k])
                            for k in 1:ndims(Y))...]
    ) for I in CartesianIndices(Y)]
and with it, the cumsum can be walked back:
julia> decumsum(decumsum(Y, dims=1), dims=2)
4×4 Matrix{Int64}:
1 4 2 3
2 4 5 2
4 3 4 1
2 5 4 2
julia> decumsum(decumsum(Y, dims=1), dims=2) == X
true
julia> @btime decumsum(decumsum($Y, dims=1), dims=2);
  165.656 ns (2 allocations: 384 bytes)
with nice performance and also generalized to any Array dimension.
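For example, under the same definition the three-dimensional case from the question can be walked back by applying decumsum once per dimension (an illustrative check):
X3 = rand(0:9, 3, 3, 3)
Y3 = cumsum(cumsum(cumsum(X3, dims=1), dims=2), dims=3)
decumsum(decumsum(decumsum(Y3, dims=1), dims=2), dims=3) == X3   # expected: true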
Update: another version decumsum5 added. Still faster.

How to optimize a function and minimize allocations

The following function generates primes up to N. For large N this becomes quite slow; my Julia implementation is 5X faster for N = 10**7. I guess the creation of a large integer array and using pack to collect the result is the slowest part. I tried counting the .true. values first, then allocating res(:) and populating it in a loop, but the speedup was negligible (4%), as I iterate the prims array twice in that case. In Julia I used findall, which does exactly what I did: iterate the array twice, first counting the trues and allocating the result, then populating it. Any ideas? Thank you.
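For reference, the two-pass pattern described above looks roughly like this in Julia (an illustrative sketch of the count-then-fill idea, not the actual Base implementation of findall):
function collect_true_indices(prims)
    n_found = count(prims)             # first pass: count the hits
    res = Vector{Int}(undef, n_found)  # allocate the result exactly once
    j = 0
    for i in eachindex(prims)          # second pass: fill in the indices
        if prims[i]
            j += 1
            res[j] = i
        end
    end
    return res
end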
Compiler:
Intel(R) Visual Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 18.0.3.210 Build 20180410 (on Windows 10)
Options: ifort -warn /O3 -heap-arrays:8000000
program main
   implicit none
   integer, allocatable :: primes(:)
   integer :: t0, t1, count_rate, count_max
   call system_clock(t0, count_rate, count_max)
   primes = do_primes(10**7)
   call system_clock(t1)
   print '(a,f7.5,a)', 'Elapsed time: ', real(t1-t0)/count_rate, ' seconds'
   print *, primes(1:10)
contains
   function do_primes(N) result (res)
      integer, allocatable :: res(:), array(:)
      logical, allocatable :: prims(:)
      integer :: N, i, j
      allocate (prims(N))
      prims = .true.
      i = 3
      do while (i * i < N)
         j = i
         do while (j * i < N)
            prims(j*i) = .false.
            j = j + 2
         end do
         i = i + 2
      end do
      prims(1) = .false.
      prims(2) = .true.
      do i = 4, N, 2
         prims(i) = .false.
      end do
      allocate (array(N))
      do i = 1, N
         array(i) = i
      end do
      res = pack(array, prims)
   end
end
Timing (147 runs):
Elapsed time: 0.14723 seconds
Edit:
I converted the do whiles to straight do loops as per @IanBush's comment, like this; still no speedup:
do i = 3, sqrt(dble(N)), 2
   do j = i, N/i, 2
      prims(j*i) = .false.
   end do
end do
The Julia implementation:
function do_primes(N)
prims = trues(N)
i = 3
while i * i < N
j = i
while j * i < N
prims[j*i] = false
j = j + 2
end
i = i + 2
end
prims[1] = false
prims[2] = true
prims[4:2:N] .= false
return findall(prims)
end
Timing:
using BenchmarkTools
@benchmark do_primes(10^7)
BenchmarkTools.Trial:
memory estimate: 6.26 MiB
allocs estimate: 5
--------------
minimum time: 32.227 ms (0.00% GC)
median time: 32.793 ms (0.00% GC)
mean time: 34.098 ms (3.92% GC)
maximum time: 94.479 ms (65.46% GC)
--------------
samples: 147
evals/sample: 1

AMPL: Syntax for sets?

I'm spinning up on a high-level language for mixed integer linear programs (MILPs). The language is AMPL (A Modeling Language for Mathematical Programming).
Chapter 4, page 65, Figure 4-7 shows the following syntax:
set PROD := bands coils plate ;
However, Chapter 5, page 74, shows the following syntax:
set PROD = {"bands", "coils", "plate"};
Can anyone please explain this difference in syntax?
I put the latter into a *.dat file, and AMPL complains "expected ; ( : or symbol" where the { is. Wondering if it is just a mistake in the manual.
Thanks.
The syntax in Chapter 4 --
set PROD := bands coils plate;
-- is used in data files, while the syntax in Chapter 5 --
set PROD = {"bands", "coils", "plate"};
-- is used in model files. It's a little weird (IMO) that the syntax for sets is different in model and data files, but it is. For another example of this difference, see this question and answer.
Complete working example code, modified from the AMPL manual
Added by the original poster of the question.
dietu.mod:
# dietu.mod
#----------
# set MINREQ; # nutrients with minimum requirements
# set MAXREQ; # nutrients with maximum requirements
set MINREQ = {"A", "B1", "B2", "C", "CAL"};
set MAXREQ = {"A", "NA", "CAL"};
set NUTR = MINREQ union MAXREQ; # nutrients
set FOOD; # foods
param cost {FOOD} > 0;
param f_min {FOOD} >= 0;
param f_max {j in FOOD} >= f_min[j];
param n_min {MINREQ} >= 0;
param n_max {MAXREQ} >= 0;
param amt {NUTR,FOOD} >= 0;
var Buy {j in FOOD} >= f_min[j], <= f_max[j];
minimize Total_Cost: sum {j in FOOD} cost[j] * Buy[j];
subject to Diet_Min {i in MINREQ}:
   sum {j in FOOD} amt[i,j] * Buy[j] >= n_min[i];
subject to Diet_Max {i in MAXREQ}:
   sum {j in FOOD} amt[i,j] * Buy[j] <= n_max[i];
The explicit definitions of the sets MINREQ and MAXREQ and their members are taken from the *.dat file below (where their definitions have been commented out). MATLAB users, observe above and beware that you need commas between members of a set in the model file.
dietu.dat:
# dietu.dat
#----------
data;
# set MINREQ := A B1 B2 C CAL ;
# set MAXREQ := A NA CAL ;
set FOOD := BEEF CHK FISH HAM MCH MTL SPG TUR ;
param: cost f_min f_max :=
BEEF 3.19 2 10
CHK 2.59 2 10
FISH 2.29 2 10
HAM 2.89 2 10
MCH 1.89 2 10
MTL 1.99 2 10
SPG 1.99 2 10
TUR 2.49 2 10 ;
param: n_min n_max :=
A 700 20000
C 700 .
B1 0 .
B2 0 .
NA . 50000
CAL 16000 24000 ;
param amt (tr): A C B1 B2 NA CAL :=
BEEF 60 20 10 15 938 295
CHK 8 0 20 20 2180 770
FISH 8 10 15 10 945 440
HAM 40 40 35 10 278 430
MCH 15 35 15 15 1182 315
MTL 70 30 15 15 896 400
SPG 25 50 25 15 1329 370
TUR 60 20 15 10 1397 450 ;
Solve the model using the following at the AMPL prompt:
reset data;
reset;
model dietu.mod;
data dietu.dat;
solve;

Julia: why doesn't shared memory multi-threading give me a speedup?

I want to use shared memory multi-threading in Julia. As done by the Threads.@threads macro, I can use ccall(:jl_threading_run ...) to do this. And whilst my code now runs in parallel, I don't get the speedup I expected.
The following code is intended as a minimal example of the approach I'm taking and the performance problem I'm having: [EDIT: see later for an even more minimal example]
nthreads = Threads.nthreads()
test_size = 1000000
println("STARTED with ", nthreads, " thread(s) and test size of ", test_size, ".")

# Something to be processed:
objects = rand(test_size)

# Somewhere for our results
results = zeros(nthreads)
counts = zeros(nthreads)

# A function to do some work.
function worker_fn()
    work_idx = Threads.threadid()   # each thread starts at its own index and strides by nthreads
    my_result = results[Threads.threadid()]
    while work_idx > 0
        my_result += objects[work_idx]
        work_idx += nthreads
        if work_idx > test_size
            break
        end
        counts[Threads.threadid()] += 1
    end
end

# Call our worker function using jl_threading_run
@time ccall(:jl_threading_run, Ref{Cvoid}, (Any,), worker_fn)

# Verify that we made as many calls as we think we did.
println("\nCOUNTS:")
println("\tPer thread:\t", counts)
println("\tSum:\t\t", sum(counts))
On an i7-7700, a typical single threaded result is:
STARTED with 1 thread(s) and test size of 1000000.
0.134606 seconds (5.00 M allocations: 76.563 MiB, 1.79% gc time)
COUNTS:
Per thread: [999999.0]
Sum: 999999.0
And with 4 threads:
STARTED with 4 thread(s) and test size of 1000000.
0.140378 seconds (1.81 M allocations: 25.661 MiB)
COUNTS:
Per thread: [249999.0, 249999.0, 249999.0, 249999.0]
Sum: 999996.0
Multi-threading slows things down! Why?
EDIT: A better minimal example can be created using the @threads macro itself.
test_size = 1000000
a = zeros(Threads.nthreads())
b = rand(test_size)
calls = zeros(Threads.nthreads())
@time Threads.@threads for i = 1 : test_size
    a[Threads.threadid()] += b[i]
    calls[Threads.threadid()] += 1
end
I falsely assumed that the @threads macro's inclusion in Julia would mean that there was a benefit to be had.
The problem you have is most probably false sharing.
You can solve it by separating the areas you write to far enough like this (here is a "quick and dirty" implementation to show the essence of the change):
julia> function f(spacing)
           test_size = 1000000
           a = zeros(Threads.nthreads()*spacing)
           b = rand(test_size)
           calls = zeros(Threads.nthreads()*spacing)
           Threads.@threads for i = 1 : test_size
               @inbounds begin
                   a[Threads.threadid()*spacing] += b[i]
                   calls[Threads.threadid()*spacing] += 1
               end
           end
           a, calls
       end
f (generic function with 1 method)
julia> @btime f(1);
  41.525 ms (35 allocations: 7.63 MiB)
julia> @btime f(8);
  2.189 ms (35 allocations: 7.63 MiB)
With spacing=8 each thread's slot is 8 Float64 values, i.e. 64 bytes, away from its neighbour, which (assuming the usual 64-byte cache line) puts each slot on its own cache line and removes the false sharing. Alternatively, do per-thread accumulation on a local variable like this (this is the preferred approach, as it should be uniformly faster):
function getrange(n)
    tid = Threads.threadid()
    nt = Threads.nthreads()
    d, r = divrem(n, nt)
    from = (tid - 1) * d + min(r, tid - 1) + 1
    to = from + d - 1 + (tid ≤ r ? 1 : 0)
    from:to
end

function f()
    test_size = 10^8
    a = zeros(Threads.nthreads())
    b = rand(test_size)
    calls = zeros(Threads.nthreads())
    Threads.@threads for k = 1 : Threads.nthreads()
        local_a = 0.0
        local_c = 0.0
        for i in getrange(test_size)
            for j in 1:10
                local_a += b[i]
                local_c += 1
            end
        end
        a[Threads.threadid()] = local_a
        calls[Threads.threadid()] = local_c
    end
    a, calls
end
Also note that you are probably using 4 threads on a machine with 2 physical cores (and only 4 virtual cores), so the gains from threading will not be linear.
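To see how getrange partitions the work, here is a small illustrative sketch; the helper block below is hypothetical, introduced only to show the same arithmetic with the thread id and thread count passed in explicitly:
function block(n, nt, tid)
    d, r = divrem(n, nt)
    from = (tid - 1) * d + min(r, tid - 1) + 1
    to = from + d - 1 + (tid <= r ? 1 : 0)
    from:to
end

[block(10, 4, tid) for tid in 1:4]   # => [1:3, 4:6, 7:8, 9:10]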

Infinite loop in algorithm to match clocks running at different speeds

I'm trying to solve this problem:
Two clocks, which show the time in hours and minutes using the 24 hour clock, are running at different speeds. Each clock is an exact number of minutes per hour fast. Both clocks start showing the same time (00:00) and are checked regularly every hour (starting after one hour) according to an accurate timekeeper. What time will the two clocks show on the first occasion when they are checked and show the same time?
NB: For this question we only care about the clocks matching when they are checked.
For example, suppose the first clock runs 1 minute fast (per hour) and the second clock runs 31 minutes fast (per hour).
• When the clocks are first checked after one hour, the first clock will show 01:01 and the second clock will show 01:31;
• When the clocks are checked after two hours, they will show 02:02 and 03:02;
• After 48 hours the clocks will both show 00:48.
Here is my code:
def add_delay(min, hash)
  hash[:minutes] = (hash[:minutes] + min)
  if hash[:minutes] > 59
    hash[:minutes] %= 60
    if min < 60
      add_hour(hash)
    end
  end
  hash[:hour] += (min / 60)
  hash
end

def add_hour(hash)
  hash[:hour] += 1
  if hash[:hour] > 23
    hash[:hour] %= 24
  end
  hash
end

def compare(hash1, hash2)
  (hash1[:hour] == hash2[:hour]) && (hash1[:minutes] == hash2[:minutes])
end

#-------------------------------------------------------------------

first_clock = Integer(gets) rescue nil
second_clock = Integer(gets) rescue nil

#hash1 = if first_clock < 60 then {:hour => 1,:minutes => first_clock} else {:hour => 1 + (first_clock/60),:minutes => (first_clock%60)} end
#hash2 = if second_clock < 60 then {:hour => 1,:minutes => second_clock} else {:hour => 1 + (second_clock/60),:minutes => (second_clock%60)} end

hash1 = {:hour => 0, :minutes => 0}
hash2 = {:hour => 0, :minutes => 0}

begin
  hash1 = add_hour(hash1)
  hash1 = add_delay(first_clock, hash1)
  hash2 = add_hour(hash2)
  p hash2.to_s
  hash2 = add_delay(second_clock, hash2)
  p hash2.to_s
end while !compare(hash1, hash2)

# making sure print is good
if hash1[:hour] > 9
  if hash1[:minutes] > 9
    puts hash1[:hour].to_s + ":" + hash1[:minutes].to_s
  else
    puts hash1[:hour].to_s + ":0" + hash1[:minutes].to_s
  end
else
  if hash1[:minutes] > 9
    puts "0" + hash1[:hour].to_s + ":" + hash1[:minutes].to_s
  else
    puts "0" + hash1[:hour].to_s + ":0" + hash1[:minutes].to_s
  end
end
#-------------------------------------------------------------------
#-------------------------------------------------------------------
For 1 and 31 the code runs as expected. For anything bigger, such as 5 and 100, it seems to get into an infinite loop and I don't see where the bug is. What is going wrong?
The logic in your add_delay function is flawed.
def add_delay(min, hash)
  hash[:minutes] = (hash[:minutes] + min)
  if hash[:minutes] > 59
    hash[:minutes] %= 60
    if min < 60
      add_hour(hash)
    end
  end
  hash[:hour] += (min / 60)
  hash
end
If hash[:minutes] ends up greater than 59, you should increment the hour no matter what. Observe that an increment of less than 60 minutes can still cause the minutes to overflow.
Also, you may have to increment the hour more than once if the increment exceeds 60 minutes.
Finally, it is wrong to do hash[:hour] += (min / 60) because min is not necessarily over 60 and because you have already done add_hour(hash).
Here is a corrected version of the function:
def add_delay(minutes, time)
  time[:minutes] += minutes
  while time[:minutes] > 59   # If the minutes overflow,
    time[:minutes] -= 60      # subtract 60 minutes and
    add_hour(time)            # increment the hour.
  end                         # Repeat as necessary.
  time
end
You can plug this function into your existing code. I have merely taken the liberty of renaming min to minutes and hash to time inside the function.
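For example (an illustrative check, using the add_hour method from the question):
time = { :hour => 0, :minutes => 59 }
add_delay(31, time)   #=> {:hour=>1, :minutes=>30}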
Your code
Let's look at your code and at the same time make some small improvements.
add_delay adds a given number of minutes to the hash, converting the total minutes to hours and minutes and then wrapping the hours into a day. One problem is that if a clock gains more than 59 minutes per hour, you may have to increment the hour by more than one. Try writing it and add_hours like this:
def add_delay(min_to_add, hash)
  mins = hash[:minutes] + min_to_add
  hrs, mins = mins.divmod 60
  hash[:minutes] = mins
  add_hours(hash, hrs)
end

def add_hours(hash, hours=1)
  hash[:hours] = (hash[:hours] + hours) % 24
end
We do not necessarily care what either of these methods returns, as they modify the argument hash.
This uses the very handy method Fixnum#divmod to convert minutes to hours and minutes.
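For example:
125.divmod(60)   #=> [2, 5], i.e. 2 hours and 5 minutes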
(Aside: some Rubiests don't use hash as the name of a variable because it is also the name of a Ruby method.)
Next, compare determines if two hashes with keys :hour and :minutes are equal. Rather than checking if both the hours and minutes match, you can just see if the hashes are equal:
def compare(hash1, hash2)
  hash1 == hash2
end
Get the minutes per hour by which the clocks are fast:
first_clock = Integer(gets) rescue nil
second_clock = Integer(gets) rescue nil
and now initialize the hashes and step by hour until a match is found, then return either hash:
def find_matching_time(first_clock, second_clock)
  hash1 = {:hours => 0, :minutes => 0}
  hash2 = {:hours => 0, :minutes => 0}
  begin
    add_delay(first_clock, hash1)
    add_hours(hash1)
    add_delay(second_clock, hash2)
    add_hours(hash2)
  end until compare(hash1, hash2)
  hash1
end
Let's try it:
find_matching_time(1, 31)
# => {:hours=>0, :minutes=>48}
find_matching_time(5, 100)
#=> {:hours=>0, :minutes=>0}
find_matching_time(5, 5)
#=> {:hours=>1, :minutes=>5}
find_matching_time(0, 59)
#=> {:hours=>0, :minutes=>0}
These results match those I obtained below with an alternative method. You do not return the number of hours from the present until the times are the same, but you may not need that.
I have not identified why you were getting the infinite loop, but perhaps with this analysis you will be able to find it.
There are two other small changes I would suggest: 1) incorporating add_hours in add_delay and renaming the latter, and 2) getting rid of compare because it is so simple and only used in one place:
def add_hour_and_delay(min_to_add, hash)
  mins = hash[:minutes] + min_to_add
  hrs, mins = mins.divmod 60
  hash[:minutes] = mins
  hash[:hours] = (hash[:hours] + 1 + hrs) % 24
end

def find_matching_time(first_clock, second_clock)
  hash1 = {:hours => 0, :minutes => 0}
  hash2 = {:hours => 0, :minutes => 0}
  begin
    add_hour_and_delay(first_clock, hash1)
    add_hour_and_delay(second_clock, hash2)
  end until hash1 == hash2
  hash1
end
Alternative method
Here's another way to write the method. Let:
f0: minutes per hour the first clock is fast
f1: minutes per hour the second clock is fast
Then we can compute the next time they will show the same time as follows.
Code
MINS_PER_DAY = (24*60)
def find_matching_time(f0, f1)
  elapsed_hours = (1..Float::INFINITY).find { |i|
    (i*(60+f0)) % MINS_PER_DAY == (i*(60+f1)) % MINS_PER_DAY }
  [elapsed_hours, "%d:%02d" % ((elapsed_hours*(60+f0)) % MINS_PER_DAY).divmod(60)]
end
Examples
find_matching_time(1, 31)
#=> [48, "0:48"]
After 48 hours both clocks will show a time of "0:48".
find_matching_time(5, 100)
#=> [288, "0:00"]
find_matching_time(5, 5)
#=> [1, "1:05"]
find_matching_time(0, 59)
#=> [1440, "0:00"]
Explanation
After i hours have elapsed, the two clocks will respectively display a time that is the following number of minutes within a day:
(i*(60+f0)) % MINS_PER_DAY # clock 0
(i*(60+f1)) % MINS_PER_DAY # clock 1
Enumerable#find is then used to determine the first number of elapsed hours i at which these two values are equal. We don't know in advance how long that may take, so I've enumerated over all positive integers beginning with 1. (In fact it can be no more than 1440 hours, since each displayed time repeats with a period that divides one day, so I could have written (1..1440).find...) The value returned by find is assigned to the variable elapsed_hours.
Both clocks will display the same time after elapsed_hours, so we can compute the time either clock will show. I've chosen to do that for clock 0. For the first example (f0=1, f1=31)
elapsed_hours #=> 48
so
mins_clock0_advances = elapsed_hours*(60+1)
#=> 2928
mins_clock_advances_within_day = mins_clock0_advances % MINS_PER_DAY
#=> 48
We then convert this to hours and minutes:
mins_clock_advances_within_day.divmod(60)
#=> [0, 48]
which we can then format appropriately with the method String#%:
"%d:%02d" % mins_clock_advances_within_day.divmod(60)
#=> "0:48"
See Kernel#sprintf for information on formatting when using %. In "%02d", d is for "decimal", 2 is the field width and 0 means pad left with zeroes.
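For instance:
"%d:%02d" % [2, 5]   #=> "2:05"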
