Problem k-subvector using dynamic programming - algorithm

Given a vector V of n integers and an integer k, k <= n, you want a subvector (a sequence of consecutive elements of the vector) of maximum length containing at most k distinct elements.
The technique I am using to solve the problem is dynamic programming.
The complexity of this algorithm must be O(n*k).
The main problem is how to count the distinct elements of the vector. How would you solve it?
How do I write the recurrence equation?
Thank you!

I don't know why you would insist on O(n*k); this can be solved in O(n) with a 'sliding window' approach.
1. Maintain a current 'window' [left..right].
2. At each step, if we can increase right by 1 (without violating the 'at most k distinct elements' requirement), do it.
3. Otherwise, increase left by 1.
4. Check whether the current window is the longest so far and go back to #2.
Checking whether we can increase right in #2 is a little tricky. We can use a hashtable storing, for each element inside the window, how many times it occurs there.
So, the condition to allow a right increase would look like
hash.size < k || hash.contains(V[right + 1])
And each time left or right is increased, we'll need to update the hash (decrease or increase the number of occurrences of the given element).
I'm pretty sure any DP solution here would be longer and more complicated.
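For reference, here is a minimal Python sketch of the sliding-window idea described above. The function name and the "always advance right, then shrink from the left" formulation are mine; it is equivalent to the step-by-step version above.

    from collections import defaultdict

    def longest_subvector_k_distinct(V, k):
        # Returns (start, length) of a longest window of V with at most k distinct values.
        counts = defaultdict(int)      # value -> occurrences inside the current window
        best_start, best_len = 0, 0
        left = 0
        for right, value in enumerate(V):
            counts[value] += 1
            while len(counts) > k:     # shrink from the left until the window is valid again
                counts[V[left]] -= 1
                if counts[V[left]] == 0:
                    del counts[V[left]]
                left += 1
            if right - left + 1 > best_len:
                best_start, best_len = left, right - left + 1
        return best_start, best_len

This runs in O(n), since left and right each advance at most n times.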

"The main problem is how to count distinct elements of the vector; how would you solve it?"
If you are allowed to use hashing, you could do the following:

    init Hashtable h
    distinct_count := 0
    for each element v of the vector V
        if h does not contain v (O(1) time on average)
            insert v into h (O(1) time on average)
            distinct_count := distinct_count + 1
    return distinct_count

This is O(n) time on average.
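In Python, for example, the same hash-based count is just a set insertion loop; this small sketch (the helper name is mine) mirrors the pseudocode above:

    def distinct_count(V):
        seen = set()          # hash set of the values encountered so far
        for v in V:
            seen.add(v)       # O(1) on average
        return len(seen)      # equivalent one-liner: len(set(V))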
If not, here is an O(n log n) solution (worst case this time):
sort V (O(n log n) comparisons)
Then it should be easy to determine the number of distinct elements in O(n) time ;-)
I could also tell you an algorithm to sort V in O(n*b), where b is the bit count of the integers, if that helps.
Here is the algorithm:
    sort(vector, begin_index, end_index, currentBit)
        reorder vector[begin_index..end_index] so that the elements that have a 1 at bit currentBit come after those that have a 0 there (O(end_index - begin_index) time)
        let c be the count of elements that have a 0 at bit currentBit (O(end_index - begin_index) time; this can be obtained from the previous step)
        if (currentBit is not 0)
            call sort(vector, begin_index, begin_index + c - 1, currentBit - 1)
            call sort(vector, begin_index + c, end_index, currentBit - 1)
Call it with
    vector = V
    begin_index = 0
    end_index = n - 1
    currentBit = (bit count of the integers (=: b)) - 1
This even uses dynamic programming, as requested.
As you can easily verify, this is O(n*b) time with a recursion depth of b.
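A direct Python translation of that recursive bit-wise sort might look like the sketch below (in-place, most-significant bit first; the function name and the stable partition via temporary lists are my choices, and non-negative integers are assumed):

    def bitwise_sort(vector, begin, end, current_bit):
        # Sort vector[begin:end+1] in place, recursing from current_bit down to bit 0.
        if begin >= end or current_bit < 0:
            return
        zeros = [x for x in vector[begin:end + 1] if not (x >> current_bit) & 1]
        ones  = [x for x in vector[begin:end + 1] if (x >> current_bit) & 1]
        vector[begin:end + 1] = zeros + ones      # elements with a 0 at this bit come first
        c = len(zeros)
        bitwise_sort(vector, begin, begin + c - 1, current_bit - 1)
        bitwise_sort(vector, begin + c, end, current_bit - 1)

    # Usage for 32-bit non-negative integers:
    # V = [5, 3, 5, 7, 1]; bitwise_sort(V, 0, len(V) - 1, 31)

Each level of recursion touches every element once, and there are b levels, which gives the O(n*b) bound mentioned above.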


Fixing this faulty Bingo Sort implementation

While studying Selection Sort, I came across a variation known as Bingo Sort. According to this dictionary entry here, Bingo Sort is:
A variant of selection sort that orders items by first finding the least value, then repeatedly moving all items with that value to their final location and find the least value for the next pass.
Based on the definition above, I came up with the following implementation in Python:
def bingo_sort(array, ascending=True):
    from operator import lt, gt

    def comp(x, y, func):
        return func(x, y)

    i = 0
    while i < len(array):
        min_value = array[i]
        j = i + 1
        for k in range(i + 1, len(array), 1):
            if comp(array[k], min_value, (lt if ascending else gt)):
                min_value = array[k]
                array[i], array[k] = array[k], array[i]
            elif array[k] == min_value:
                array[j], array[k] = array[k], array[j]
                j += 1
        i = j
    return array
I know that this implementation is problematic. When I run the algorithm on an extremely small array, I get a correctly sorted array. However, running the algorithm with a larger array results in an array that is mostly sorted with incorrect placements here and there. To replicate the issue in Python, the algorithm can be run on the following input:
from random import randint, uniform

test_data = [[randint(0, 101) for i in range(0, 101)],
             [uniform(0, 101) for i in range(0, 101)],
             ["a", "aa", "aaaaaa", "aa", "aaa"],
             [5, 5.6],
             [3, 2, 4, 1, 5, 6, 7, 8, 9]]
for dataset in test_data:
    print(dataset)
    print(bingo_sort(dataset, ascending=True))
    print("\n")
I cannot for the life of me figure out where the fault is, since I've been looking at this algorithm for too long and I am not really proficient at these things. I could not find an implementation of Bingo Sort online except an undergraduate graduation project written in 2020. Any help that can point me in the right direction would be greatly appreciated.
I think your main problem is that you're trying to set min_value in your first conditional statement and then to swap based on that same min_value you've just set in your second conditional statement. These processes are supposed to be staggered: the way bingo sort should work is you find the min_value in one iteration, and in the next iteration you swap all instances of that min_value to the front while also finding the next min_value for the following iteration. In this way, min_value should only get changed at the end of every iteration, not during it. When you change the value you're swapping to the front over the course of a given iteration, you can end up unintentionally shuffling things a bit.
I have an implementation of this below if you want to refer to something, with a few notes: since you're allowing a custom comparator, I renamed min_value to swap_value as we're not always grabbing the min, and I modified how the comparator is defined/passed into the function to make the algorithm more flexible. Also, you don't really need three indexes (I think there were even a couple bugs here), so I collapsed i and j into swap_idx, and renamed k to cur_idx. Finally, because of how swapping a given swap_val and finding the next_swap_val is to be staggered, you need to find the initial swap_val up front. I'm using a reduce statement for that, but you could just use another loop over the whole array there; they're equivalent. Here's the code:
from operator import lt, gt
from functools import reduce

def bingo_sort(array, comp=lt):
    if len(array) <= 1:
        return array
    # get the initial swap value as determined by comp
    swap_val = reduce(lambda val, cur: cur if comp(cur, val) else val, array)
    swap_idx = 0  # set the initial swap_idx to 0
    while swap_idx < len(array):
        cur_idx = swap_idx
        next_swap_val = array[cur_idx]
        while cur_idx < len(array):
            if comp(array[cur_idx], next_swap_val):  # find next swap value
                next_swap_val = array[cur_idx]
            if array[cur_idx] == swap_val:  # swap swap_vals to front of the array
                array[swap_idx], array[cur_idx] = array[cur_idx], array[swap_idx]
                swap_idx += 1
            cur_idx += 1
        swap_val = next_swap_val
    return array
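For reference, a quick usage check of this version (the sample data and expected outputs are mine, not from the original answer):

    from operator import gt

    data = [37, 5, 5, 12, 5, 98, 12]
    print(bingo_sort(list(data)))           # ascending by default: [5, 5, 5, 12, 12, 37, 98]
    print(bingo_sort(list(data), comp=gt))  # descending: [98, 37, 12, 12, 5, 5, 5]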
In general, the complexity of this algorithm depends on how many duplicate values get processed, and when they get processed. This is because every time k duplicate values get processed during a given iteration, the length of the inner loop is decreased by k for all subsequent iterations. Performance is therefore optimized when large clusters of duplicate values are processed early on (as when the smallest values of the array contain many duplicates). From this, there are basically two ways you could analyze the complexity of the algorithm: You could analyze it in terms of where the duplicate values tend to appear in the final sorted array (Type 1), or you could assume the clusters of duplicate values are randomly distributed through the sorted array and analyze complexity in terms of the average size of duplicate clusters (that is, in terms of the magnitude of m relative to n: Type 2).
The definition you linked uses the first type of analysis (based on where duplicates tend to appear) to derive best = Theta(n + m^2), average = Theta(nm), worst = Theta(nm). The second type of analysis produces best = Theta(n), average = Theta(nm), worst = Theta(n^2) as m varies from Theta(1) up to Theta(n).
In the best Type 1 case, all duplicates will be among the smallest elements of the array, such that the run-time of the inner loop quickly decreases to O(m), and the final iterations of the algorithm proceed as an O(m^2) selection sort. However, there is still the up-front O(n) pass to select the initial swap value, so the overall complexity is O(n + m^2).
In the worst Type 1 case, all duplicates will be among the largest elements of the array. The length of the inner loop isn't substantially shortened until the last iterations of the algorithm, such that we achieve a run-time looking something like n + n-1 + n-2 .... + n-m. This is a sum of m O(n) values, giving us O(nm) total run-time.
In the average Type 1 case (and for all Type 2 cases), we don't assume that the clusters of duplicate values are biased towards the front or back of the sorted array. We take it that the m clusters of duplicate values are randomly distributed through the array in terms of their position and their size. Under this analysis, we expect that after the initial O(n) pass to find the first swap value, each of the m iterations of the outer loop reduces the length of the inner loop by approximately n/m. This leads to an overall run-time for unknown m and randomly distributed data of roughly
n + (n - n/m) + (n - 2n/m) + ... + (n - (m-1)n/m) ≈ n + nm/2 = Theta(nm)
We can use this expression for the average case run-time with randomly distributed data and unknown m, Theta(nm), as the average Type 2 run-time, and it also directly gives us the best and worst case run-times based on how we might vary the magnitude of m.
In the best Type 2 case, m might just be some constant value independent of n. If we have m = Theta(1) randomly distributed duplicate clusters, the best case run time is then Theta(n * Theta(1)) = Theta(n). For example, you would see O(2n) = O(n) performance from bingo sort with just one unique value (one pass to find the first swap value, one pass to swap every single value to the front), and this O(n) asymptotic complexity still holds if m is bounded by any constant.
However, in the worst Type 2 case we could have m = Theta(n), and bingo sort essentially devolves into O(n^2) selection sort. This is clearly the case for m = n, but even if the expected per-iteration decrease in the inner loop's run-time, n/m, is only some constant value (which is the case for any m in Theta(n)), we still see O(n^2) complexity.

Uniform sampling of k integers from [0:n)

My goal is to sample k integers from 0, ..., n-1 without duplication. The order of the sampled integers doesn't matter. At each call (which occurs very often), n and k will vary slightly, but not by much (n is about 250,000 and k is about 2,000). I've come up with the following amortized O(k) algorithm:
1. Prepare an array A with items 0, 1, 2, ..., n-1. This takes O(n), but since n is relatively stable, the cost can be made amortized constant.
2. Sample a random number r from [0:i], where i = n - 1 initially. Here the cost is in fact related to n, but as n is not VERY BIG, this dependency is not critical.
3. Swap the r-th item and the i-th item in the array A.
4. Decrease i by 1.
5. Repeat steps 2~4 k times; now we have a random permutation of length k at the tail of A. Copy it.
6. We should roll back A to its initial state (0, ..., n-1) to keep the cost of step 1 constant. This can be done by pushing r onto a stack of length k at each pass of step 2. Preparing the stack requires amortized constant cost.
I think uniform sampling of permutation/combination should be an exhaustively studied problem, so either (1) there is a much better solution, or at least (2) my solution is a (minor modification of) a well-known solution. Thus,
In case (1), I want to know that better solution.
In case (2), I want to find a reference.
Please help me. Thanks.
If k is much less than n -- say, less than half of n -- then the most efficient solution is to keep the numbers generated in a hash table (actually, a hash set, since there is no value associated with a key). If the random number happens to already be in the hash table, reject it and generate another one in its place. With the actual values of k and n suggested (k ∼ 2000; n ∼ 250,000) the expected number of rejections to generate k unique samples is less than 10, so it will hardly be noticeable. The size of the hash table is O(k), and it can simply be deleted at the end of the sample generation.
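A minimal Python sketch of this rejection approach (the function name is mine; Python's random.sample(range(n), k) does essentially the same job):

    import random

    def sample_k_distinct(n, k):
        # Keep generated numbers in a hash set; regenerate on collision.
        chosen = set()
        while len(chosen) < k:
            chosen.add(random.randrange(n))   # duplicates are silently rejected
        return list(chosen)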
It is also possible to simulate the FYK shuffle algorithm using a hash table instead of a vector of n values, thereby avoiding having to reject generated random numbers. If you were using a vector A, you would start by initializing A[i] to i, for every 0 ≤ i < n. With the hash table H, you start with an empty hash table, and use the convention that H[i] is considered to be i if the key i is not in the hash table. Step 3 in your algorithm -- "swap A[r] with A[i]" -- becomes "add H[r] as the next element of the sample and set H[r] to H[i]". Note that it is unnecessary to set H[i] because that element will never be referred to again: all subsequent random numbers r are generated from a range which does not include i.
Because the hash table in this case contains both keys and values, it is larger than the hash set used in alternative 1, above, and the increased size (and consequent increase in memory cache misses) is likely to cause more overhead than is saved by eliminating rejections. However, it has the advantage of working even if k is occasionally close to n.
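Here is a hedged Python sketch of that hash-simulated Fisher-Yates variant (a dict stands in for H, with the convention that a missing key i means H[i] = i; the function name is mine):

    import random

    def sample_k_distinct_fyk(n, k):
        H = {}          # H[i] is taken to be i whenever key i is absent
        sample = []
        for i in range(n - 1, n - k - 1, -1):
            r = random.randrange(i + 1)     # r in [0, i]
            sample.append(H.get(r, r))      # this is "A[r]", the next sample element
            H[r] = H.get(i, i)              # "set H[r] to H[i]"; slot i is never touched again
        return sample

The dict never holds more than k entries, so the extra memory stays O(k).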
Finally, in your proposed algorithm, it is actually quite easy to restore A in O(k) time. A value A[j] will have been modified by the algorithm only if:
a. n − k ≤ j < n, or
b. there is some i such that n − k ≤ i < n and A[i] = j.
Consequently, you can restore the vector A by looking at each A[i] for n − k ≤ i < n: first, if A[i] < n−k, set A[A[i]] to A[i]; then, unconditionally set A[i] to i.
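And, for completeness, a sketch of that O(k) rollback for the array-based version (assuming A was initialized with A[j] = j and the partial shuffle ran over the last k positions):

    def restore(A, n, k):
        for i in range(n - k, n):
            if A[i] < n - k:       # a value that was displaced from the front of the array
                A[A[i]] = A[i]     # send it back to its home position
            A[i] = i               # tail positions always return to their own index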

Kth largest element in a max-heap

I'm trying to come up with something to solve the following:
Given a max-heap represented as an array, return the kth largest element without modifying the heap. I was asked to do it in linear time, but was told it can be done in log time.
I thought of a solution:
Use a second max-heap and fill it with k or k+1 values (via a breadth-first traversal of the original one), then pop k elements and get the desired one. I suppose this should be O(N + log N) = O(N).
Is there a better solution, perhaps in O(logN) time?
A max-heap can take many shapes. In the best case it is a completely sorted array; in the opposite extreme it can have a totally asymmetric structure.
In the first case, the k-th largest element is at the k-th position, so with an array representation of the heap you can find it in O(1).
But in general, you'll need to examine between k and 2k elements and sort them (or partially sort them with another heap). As far as I know, that is O(k·log(k)).
And the algorithm:
Input:
    Integer kth <- 8
    Heap heap <- {19,18,10,17,14,9,4,16,15,13,12}
BEGIN
    Heap positionHeap <- Heap with comparator: ((n0,n1) -> compare(heap[n1], heap[n0]))
    Integer childPosition
    Integer candidatePosition <- 0
    Integer count <- 0
    positionHeap.push(candidatePosition)
    WHILE (count < kth) DO
        candidatePosition <- positionHeap.pop()
        childPosition <- candidatePosition * 2 + 1
        IF (childPosition < size(heap)) THEN
            positionHeap.push(childPosition)
            childPosition <- childPosition + 1
            IF (childPosition < size(heap)) THEN
                positionHeap.push(childPosition)
            END-IF
        END-IF
        count <- count + 1
    END-WHILE
    print heap[candidatePosition]
END
EDITED
I found "Optimal Algorithm of Selection in a min-heap" by Frederickson here:
ftp://paranoidbits.com/ebooks/An%20Optimal%20Algorithm%20for%20Selection%20in%20a%20Min-Heap.pdf
No, there's no O(log n)-time algorithm, by a simple cell probe lower bound. Suppose that k is a power of two (without loss of generality) and that the heap looks like (min-heap incoming because it's easier to label, but there's no real difference)
1
2 3
4 5 6 7
.............
permutation of [k, 2k).
In the worst case, we have to read the entire permutation, because there are no order relations imposed by the heap, and as long as the value k has not been found, it could be in any location not yet examined. This takes time Omega(k), matching the (complicated!) algorithm posted by templatetypedef.
To the best of my knowledge, there's no easy algorithm for solving this problem. The best algorithm I know of is due to Frederickson and it isn't easy. You can check out the paper here, but it might be behind a paywall. It runs in time O(k) and this is claimed to be the best possible time, so I suspect that a log-time solution doesn't exist.
If I find a better algorithm than this, I'll be sure to let you know.
Hope this helps!
Max-heap in an array: element at i is larger than elements at 2*i+1 and 2*i+2 (i is 0-based)
You'll need another max heap (insert, pop, empty) with element pairs (value, index) sorted by value. Pseudocode (without boundary checks):
input: k
1. insert (at(0), 0)
2. (v, i) <- pop and k <- k - 1
3. if k == 0 return v
4. insert (at(2*i+1), 2*i+1) and insert (at(2*i+2), 2*i+2)
5. goto 2
Runtime evaluation
array access at(i): O(1)
insertion into heap: O(log n)
insert at 4. takes at most log(k) since the size of heap of pairs is at most k + 1
statement 3. is reached at most k times
total runtime: O(k log k)
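A hedged Python sketch of that O(k log k) procedure, using heapq (a min-heap, so values are negated to simulate a max-heap of (value, index) pairs):

    import heapq

    def kth_largest_in_max_heap(heap, k):
        # heap is the array encoding of a max-heap; it is never modified.
        candidates = [(-heap[0], 0)]             # (negated value, index) pairs
        for _ in range(k - 1):
            _, i = heapq.heappop(candidates)     # discard the current largest candidate
            for child in (2 * i + 1, 2 * i + 2):
                if child < len(heap):
                    heapq.heappush(candidates, (-heap[child], child))
        return -candidates[0][0]

    # With the example from the earlier pseudocode:
    # kth_largest_in_max_heap([19, 18, 10, 17, 14, 9, 4, 16, 15, 13, 12], 8) returns 12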

How do you find multiple ki smallest elements in array?

I am struggling with my homework and need a little push. The question is to design an algorithm that will, in O(n log m) time, find the elements of m given ranks k_1 < k_2 < ... < k_m in an array of n elements. I know that a simple selection algorithm takes O(n) time to find the k-th element, but how do you bring m into the recurrence? I thought of doing both k_1 and k_m in each run, but that only takes out 2, not m/2.
Would appreciate some directions.
Thanks
If I understand the question correctly, you have a vector K containing m indices, and you want to find the k'th ranked element of A for each k in K. If K contains the smallest m indices (i.e. k=1,2,...,m) then this can be done easily in linear time T=O(n) by using quickselect to find the element k_m (since all the smaller elements will be on the left at the end of quickselect). So I'm assuming that K can contain any set of m indices.
One way to accomplish this is by running quickselect on all of K at the same time. Here is the algorithm
QuickselectMulti(A, K)
    If K is empty, return an empty result set
    Pick a pivot p from A at random
    Partition A into sets A0 < p and A1 > p
    i = A0.size + 1
    if K contains i, then remove i from K and add (i => p) to the result set
    Partition K into sets K0 < i and K1 > i
    add QuickselectMulti(A0, K0) to the result set
    subtract i from each k in K1
    call QuickselectMulti(A1, K1), add i to each index of the output, and add this to the result set
    return the result set
If K contains just one element, this is the same as randomized quickselect. To see why the running time is O(n log m) on average, first consider what happens when each pivot exactly splits both A and K in half. In this case, you get two recursive calls, so you have
T = n + 2T(n/2,m/2)
= n + n + 4T(n/4,m/4)
= n + n + n + 8T(n/8,m/8)
Since m drops in half each time, then n will show up log m times in this summation. To actually derive the expected running time requires a little more work, because you can't assume that the pivot will split both arrays in half, but if you work through the calculations, you will see that the running time is in fact O(n log m) on average.
Edit: The analysis of this algorithm can be made simpler by choosing the pivot as p = Quickselect(A, k_i), where k_i is the middle element of K, rather than choosing p at random. This guarantees that K gets split in half each time, so the number of recursive calls will be exactly log m, and since quickselect runs in linear time, the result will still be O(n log m).
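For concreteness, here is a hedged Python sketch of QuickselectMulti (ranks are 1-based; I partition three ways so that duplicate pivot values are handled, a detail the pseudocode above glosses over):

    import random

    def quickselect_multi(A, K):
        # Returns {k: k-th smallest element of A} for every 1-based rank k in K.
        if not K:
            return {}
        p = random.choice(A)
        A0 = [x for x in A if x < p]
        A1 = [x for x in A if x > p]
        lo = len(A0)                          # ranks 1..lo live in A0
        hi = len(A) - len(A1)                 # ranks lo+1..hi are all equal to the pivot
        result = {k: p for k in K if lo < k <= hi}
        result.update(quickselect_multi(A0, [k for k in K if k <= lo]))
        right = quickselect_multi(A1, [k - hi for k in K if k > hi])
        result.update({k + hi: v for k, v in right.items()})
        return result

    # e.g. quickselect_multi([7, 2, 9, 4, 4, 1], [1, 3, 5]) -> {1: 1, 3: 4, 5: 7}

Each level of recursion does O(n) partitioning work, and K is split across the recursive calls, which is where the expected O(n log m) bound comes from.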

Why increase pointer by two while finding loop in linked list, why not 3,4,5?

I had a look at questions already asked here which talk about algorithms to find a loop in a linked list. I have read about Floyd's cycle-finding algorithm, mentioned in a lot of places, where we have to take two pointers. One pointer (slower/tortoise) is increased by one and the other pointer (faster/hare) is increased by 2. When they are equal we have found the loop, and if the faster pointer reaches null there is no loop in the linked list.
Now my question is: why do we increase the faster pointer by 2? Why not something else? Is increasing by 2 necessary, or could we increase it by some x and still get the result? Is it guaranteed that we will find a loop if we increment the faster pointer by 2, or could there be a case where we need to increment by 3 or 5 or x?
From a correctness perspective, there is no reason that you need to use the number two. Any choice of step size will work (except for one, of course). However, choosing a step of size two maximizes efficiency.
To see this, let's take a look at why Floyd's algorithm works in the first place. The idea is to think about the sequence x_0, x_1, x_2, ..., x_n, ... of the elements of the linked list that you'll visit if you start at the beginning of the list and then keep on walking down it until you reach the end. If the list does not contain a cycle, then all these values are distinct. If it does contain a cycle, though, then this sequence will repeat endlessly.
Here's the theorem that makes Floyd's algorithm work:
The linked list contains a cycle if and only if there is a positive integer j such that for any positive integer k, x_j = x_{jk}.
Let's go prove this; it's not that hard. For the "if" case, if such a j exists, pick k = 2. Then we have that for some positive j, x_j = x_{2j} and j ≠ 2j, and so the list contains a cycle.
For the other direction, assume that the list contains a cycle of length l starting at position s. Let j be the smallest multiple of l greater than s. Then for any k, if we consider x_j and x_{jk}, since j is a multiple of the loop length, we can think of x_{jk} as the element formed by starting at position j in the list and then taking j steps k-1 times. But each time you take j steps, you end up right back where you started in the list, because j is a multiple of the loop length. Consequently, x_j = x_{jk}.
This proof guarantees you that if you take any constant number of steps on each iteration, you will indeed hit the slow pointer. More precisely, if you're taking k steps on each iteration, then you will eventually find the points x_j and x_{kj} and will detect the cycle. Intuitively, people tend to pick k = 2 to minimize the runtime, since you take the fewest number of steps on each iteration.
We can analyze the runtime more formally as follows. If the list does not contain a cycle, then the fast pointer will hit the end of the list after n steps for O(n) time, where n is the number of elements in the list. Otherwise, the two pointers will meet after the slow pointer has taken j steps. Remember that j is the smallest multiple of l greater than s. If s ≤ l, then j = l; otherwise if s > l, then j will be at most 2s, and so the value of j is O(s + l). Since l and s can be no greater than the number of elements in the list, this means that j = O(n). However, after the slow pointer has taken j steps, the fast pointer will have taken k steps for each of the j steps taken by the slow pointer, so it will have taken O(kj) steps. Since j = O(n), the net runtime is at most O(nk). Notice that this says that the more steps we take with the fast pointer, the longer the algorithm takes to finish (though only proportionally so). Picking k = 2 thus minimizes the overall runtime of the algorithm.
Hope this helps!
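To make the trade-off concrete, here is a hedged Python sketch of cycle detection with a configurable fast-pointer step k; k = 2 is the classic tortoise-and-hare, and larger k only makes each round more expensive (the Node class is an assumption for the sketch):

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def has_cycle(head, k=2):
        # Slow advances 1 node per round, fast advances k nodes per round (k must be >= 2).
        slow = fast = head
        while True:
            slow = slow.next if slow else None
            for _ in range(k):
                fast = fast.next if fast else None
            if slow is None or fast is None:
                return False          # fell off the end of the list: no cycle
            if slow is fast:
                return True           # the pointers met inside the cycle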
Let us suppose the length of the list which does not contain the loop be s, length of the loop be t and the ratio of fast_pointer_speed to slow_pointer_speed be k.
Let the two pointers meet at a distance j from the start of the loop.
So, the distance slow pointer travels = s + j. Distance the fast pointer travels = s + j + m * t (where m is the number of times the fast pointer has completed the loop). But, the fast pointer would also have traveled a distance k * (s + j) (k times the distance of the slow pointer).
Therefore, we get k * (s + j) = s + j + m * t.
So s + j = (m / (k - 1)) * t.
Hence, from the above equation, the distance the slow pointer travels is an integer multiple of the loop length.
For greatest efficiency, (m / (k - 1)) = 1 (the slow pointer shouldn't have traveled the loop more than once).
Therefore, m = k - 1 => k = m + 1.
Since m is the number of times the fast pointer has completed the loop, m >= 1.
For greatest efficiency, m = 1.
Therefore k = 2.
If we take a value of k > 2, the two pointers would have to travel a greater distance.
Hope the above explanation helps.
Consider a cycle of size L, meaning the k-th element is where the loop begins: x_k -> x_{k+1} -> ... -> x_{k+L-1} -> x_k. Suppose one pointer runs at rate r1 = 1 and the other at rate r2. When the first pointer reaches x_k, the second pointer will already be in the loop at some element x_{k+s}, where 0 <= s < L. After m further pointer increments, the first pointer is at x_{k+(m mod L)} and the second pointer is at x_{k+((m*r2+s) mod L)}. Therefore the condition that the two pointers collide can be phrased as the existence of an m satisfying the congruence
m = m*r2 + s (mod L)
This can be simplified with the following steps
m(1-r2) = s (mod L)
m(L+1-r2) = s (mod L)
This is of the form of a linear congruence. It has a solution m if s is divisible by gcd(L+1-r2,L). This will certainly be the case if gcd(L+1-r2,L)=1. If r2=2 then gcd(L+1-r2,L)=gcd(L-1,L)=1 and a solution m always exists.
Thus r2=2 has the good property that for any cycle size L, it satisfies gcd(L+1-r2,L)=1 and thus guarantees that the pointers will eventually collide even if the two pointers start at different locations. Other values of r2 do not have this property.
If the fast pointer moves 3 steps and slow pointer at 1 step, it is not guaranteed for both pointers to meet in cycles containing even number of nodes. If the slow pointer moved at 2 steps, however, the meeting would be guaranteed.
In general, if the hare moves at H steps, and tortoise moves at T steps, you are guaranteed to meet in a cycle iff H = T + 1.
Consider the hare moving relative to the tortoise.
The hare's speed relative to the tortoise is H - T nodes per iteration.
Given a cycle of length N = (H - T) * k, where k is any positive integer, the hare would skip every H - T - 1 nodes (again, relative to the tortoise), and it would be impossible for them to meet if the tortoise were in any of those nodes.
The only possibility where a meeting is guaranteed is when H - T - 1 = 0.
Hence, increasing the fast pointer by x is allowed, as long as the slow pointer is increased by x - 1.
Here is an intuitive, non-mathematical way to understand this:
If the fast pointer runs off the end of the list obviously there is no cycle.
Ignore the initial part where the pointers are in the initial non-cycle part of the list, we just need to get them into the cycle. It doesn't matter where in the cycle the fast pointer is when the slow pointer finally reaches the cycle.
Once they are both in the cycle, they are circling the cycle but at different points. Imagine if they were both moving by one each time: they would be circling the cycle but staying the same distance apart. In other words, making the same loop but out of phase. Now, by moving the fast pointer by two each step, they are changing their phase with each other, decreasing their distance apart by one each step. The fast pointer will catch up to the slow pointer and we can detect the loop.
To prove this is true, that they will meet each other and the fast pointer will not somehow overtake and skip over the slow pointer just hand simulate what happens when the fast pointer is three steps behind the slow, then simulate what happens when the fast pointer is two steps behind the slow, then when the fast pointer is just one step behind the slow pointer. In every case they meet at the same node. Any larger distance will eventually become a distance of three, two or one.
If there is a loop (of n nodes), then once a pointer has entered the loop it will remain there forever; so we can move forward in time until both pointers are in the loop. From here on the pointers can be represented by integers modulo n with initial values a and b. The condition for them to meet after t steps is then
a + t ≡ b + 2t (mod n)
which has solution t ≡ a − b (mod n).
This will work so long as the difference between the speeds shares no prime factors with n.
Reference
https://math.stackexchange.com/questions/412876/proof-of-the-2-pointer-method-for-finding-a-linked-list-loop
The single restriction on speeds is that their difference should be co-prime with the loop's length.
Theoretically, consider the cycle (loop) as a park (circular, rectangular, whatever). A first person X is moving slowly and a second person Y is moving faster than X. Now, it doesn't matter if person Y is moving at 2 times the speed of X or 3, 4, 5... times: there will always be a case when they meet at one point.
Say we use two references Rp and Rq which take p and q steps in each iteration; p > q. In the Floyd's algorithm, p = 2, q = 1.
We know that after certain iterations, both Rp and Rq will be at some elements of the loop. Then, say Rp is ahead of Rq by x steps. That is, starting at the element of Rq, we can take x steps to reach the element of Rp.
Say, the loop has n elements. After t further iterations, Rp will be ahead of Rq by (x + (p-q)*t) steps. So, they can meet after t iterations only if:
n divides (x + (p-q)*t)
Which can be written as:
(p−q)*t ≡ (−x) (mod n)
Due to modular arithmetic, this is possible only if: GCD(p−q, n) | x.
But we do not know x. Though, if the GCD is 1, it will divide any x. To make the GCD as 1:
if n is not known, choose any p and q such that (p-q) = 1. Floyd's algorithm does have p-q = 2-1 = 1.
if n is known, choose any p and q such that (p-q) is coprime with n.
Update: On some further analysis later, I realized that any unequal positive integers p and q will make the two references meet after some iterations. However, the values 1 and 2 seem to require the smallest total number of steps.
The reason why 2 is chosen is because, let's say,
the slow pointer moves at 1
the fast pointer moves at 2
and the loop has 5 elements.
Now, for the slow and fast pointers to meet,
the lowest common multiple (LCM) of 1, 2 and 5 must exist, and that's where they meet. In this case it's 10.
If you simulate the slow and fast pointers, you will see that they meet at 2 * (elements in loop). When you do 2 loops, you meet at exactly the same point as the starting point.
In the case of no loop, it becomes the LCM of 1, 2 and infinity, so they never meet.
If the linked list has a loop, then a fast pointer with an increment of 2 will work better than, say, an increment of 3 or 4 or more, because it ensures that once we are inside the loop the pointers will surely collide and there will be no overtaking.
For example, if we take an increment of 3 and inside the loop let's assume
fast pointer --> i
slow --> i+1
then on the next iteration
fast pointer --> i+3
slow --> i+2
whereas such a case can never happen with an increment of 2.
Also, if you are really unlucky then you may end up in a situation where the loop length is L and you are incrementing the fast pointer by L+1. Then you will be stuck infinitely, since the fast pointer's movement relative to the slow pointer is always exactly L per step, i.e. a full loop, so the gap between them never changes.
I hope I made myself clear.
