A friend gave me this problem as a challenge, and I've tried to find a problem like this on LeetCode, but sadly could not.
Question
Given a line of people numbered from 1 to N, and a list of M enemy pairs, find the total number of contiguous sublines that contain no two people who are enemies.
Example: N = 5, enemies = [[3,4], [3,5]]
Answer: 9
Explanation: These continuous subintervals are:
[1,1], [2,2], [3,3], [1,2], [2,3], [1,3], [4,4], [4,5], [5,5]
My approach
We define a non-conflicting interval as a contiguous interval [a,b] (endpoints included) in which no two people are enemies.
Working backwards, if I know there is a non-conflicting interval [1,3], like in the example given above, I know the number of contiguous intervals within it is n(n+1)/2, where n is the length of the interval. In this case the interval length is 3, and so there are 6 intervals within (and including) [1,3] that count.
Extending this logic, if I have a list of all non-conflicting intervals, then the answer is simply the sum of (n_i*(n_i+1))/2 for every interval length n_i.
Then all I need to do is find these intervals. This is where I'm stuck.
I can't really think of a similar programming problem. This seems similar to, but the opposite of, the Merge Intervals problem on LeetCode. In that problem we're essentially given the good intervals and asked to combine them. Here we're given the bad ones.
Any guidance?
EDIT: Best I could come up with:
Does this work?
So let's define max_enemy[i] as the largest enemy that is less than a particular person i, where i ranges over the usual [1, N]. We can generate this value in O(M) time using the following loop:
max_enemy = [-1] * (N + 1)  # -1 means it doesn't exist
for e1, e2 in enemies:
    e1, e2 = min(e1, e2), max(e1, e2)
    max_enemy[e2] = max(max_enemy[e2], e1)
Then we go through the array of people, keeping a sliding window that starts at s. The window ends as soon as we find a person i with s <= max_enemy[i]: this way we know that including this person would break our contiguous interval. So we now know our interval is [s, i-1] and we can do our math. We reset s = i and continue.
Here is a visualization of how this works. We draw a path between any two enemies:
N=5, enemies = [[3,4], [3,5]]
1 2 3 4 5
    |_|
    |___|
EDIT2: I know this doesn't work for N=5, enemies=[[1,4],[3,5]]; currently working on a fix, still stuck.
You can solve this in O(M log M) time and O(M) space.
Let ENDINGAT(i) be the number of enemy-free intervals ending at position/person i. This is also the size of the largest enemy-free interval ending at i.
The answer you seek is the sum of all ENDINGAT(i) for every person i.
Let NEAREST(i) be the nearest enemy of person i that precedes person i. Let it be -1 if i has no preceding enemies.
Now we can write a simple formula to calculate all the ENDINGAT values:
ENDINGAT(1) = 1, since there is only one interval ending at 1. For larger values:
ENDINGAT(i) = MIN( ENDINGAT(i-1)+1, i-NEAREST(i) )
So, it is very easy to calculate all the ENDINGAT(i) in order, as long as we can have all the NEAREST(i) in order. To get that, all you need to do is sort the enemy pairs by the highest member. Then for each i you can walk over all the pairs ending at i to find the closest one.
That's it -- it turns out to be pretty easy. The time is dominated by the O(M log M) time required to sort the enemy pairs, unless N is much bigger than M. In that case, you can skip runs of ENDINGAT for people with no preceding enemies, calculating their effect on the sum mathematically.
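For concreteness, here is a minimal Python sketch of this answer (function and variable names are mine; instead of sorting the pairs I store NEAREST in an array indexed by person, using 0 for "no preceding enemy" so that i - nearest[i] is the full prefix length):

def count_enemy_free_intervals(N, enemies):
    nearest = [0] * (N + 1)  # NEAREST(i), people numbered 1..N
    for a, b in enemies:
        lo, hi = min(a, b), max(a, b)
        nearest[hi] = max(nearest[hi], lo)
    total = 0
    ending_at = 0  # ENDINGAT(0) = 0
    for i in range(1, N + 1):
        # ENDINGAT(i) = MIN(ENDINGAT(i-1) + 1, i - NEAREST(i))
        ending_at = min(ending_at + 1, i - nearest[i])
        total += ending_at
    return total

print(count_enemy_free_intervals(5, [[3, 4], [3, 5]]))  # 9
print(count_enemy_free_intervals(5, [[1, 4], [3, 5]]))  # 11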
There's a cool visual way to see this!
Instead of focusing on the line, let's look at the matrix of pairs of players. If i and j are enemies, then the effect of this enemiship is precisely to eliminate from consideration (1) this interval, and (2) any interval strictly larger than it. Because enemiship is symmetric, we may as well just look at the upper-right half of the matrix, and the diagonal; we'll use the characters
"X" to denote that a pair is enemies,
"*" to indicate that a pair has been obscured by a pair of enemies, and
"%" in the lower half to mark it as not part of the upper-half matrix.
For the two examples in your code, observe their corresponding matrices:
# intervals: 9         # intervals: 10

0 1 2 3 4              0 1 2 3 4
-------------          -------------
      * * | 0                * * | 0
%     * * | 1          %     X * | 1
% %   X X | 2          % %     X | 2
% % %     | 3          % % %     | 3
% % % %   | 4          % % % %   | 4
The naive solution, provided below, solves the problem in O(N^2 M) time and O(N^2) space.
def matrix(enemies, N):
    m = [[' ' for j in range(N)] for i in range(N)]
    for (i, j) in enemies:
        m[i][j] = 'X'  # mark enemiship
        # Now mark larger intervals as dead.
        for q in range(0, i + 1):
            for r in range(j, N):
                if m[q][r] == ' ':
                    m[q][r] = '*'
    num_int = 0
    for i in range(N):
        for j in range(N):
            if j < i:
                m[i][j] = '%'  # lower half: not part of the upper-half matrix
            elif m[i][j] == ' ':
                num_int += 1
    print("# intervals: ", num_int)
    return m
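For example, calling it with the 0-indexed enemy pairs from the two matrices above (the second argument N is a parameter I added so the snippet is self-contained):

matrix([(2, 3), (2, 4)], 5)  # prints: # intervals: 9
matrix([(1, 3), (2, 4)], 5)  # prints: # intervals: 10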
To convince yourself further, here are the matrices for two more scenarios (shown below):
1. Player 2 is enemies with himself, so that there is a barrier, and there are two smaller versions of the puzzle on the intervals [0,1] and [3,4], each of which admits 3 sub-intervals.
2. Every player is enemies with the person two to their left, so that only intervals of length 0 or 1 are allowed (of which there are 4 + 5 = 9).
# intervals: 6         # intervals: 9

0 1 2 3 4              0 1 2 3 4
-------------          -------------
    * * * | 0              X * * | 0
%   * * * | 1          %     X * | 1
% % X * * | 2          % %     X | 2
% % %     | 3          % % %     | 3
% % % %   | 4          % % % %   | 4
Complexity: mathematically the same as sorting a list, or validating that it is sorted. That is, O(M log M) time in the worst case with O(M) space to sort, and still at least O(M) time in the best case to recognize whether the list is sorted.
Bonus: This is also an excellent example of the power of looking at the identity of a problem rather than its solution. Such a view of the problem will also inform smarter solutions. We can clearly do much better than the code I gave above...
We would clearly be done, for instance, if we could count the number of un-shaded points: that count is determined by the area of the smallest convex polygon covering the enemiships, together with the two boundary points. (Finding the two additional points can be done in O(M) time.) Now, this is probably not a problem you can solve in your sleep, but fortunately the problem of finding a convex hull is so natural that the algorithms used to do it are well known.
In particular, a Graham scan can do it in O(M) time, so long as we happen to be given the pairs of enemies with one of their coordinates sorted. Better still, once we have the set of points in the convex hull, the area can be calculated by dividing it into at most M axis-aligned rectangles. Therefore, if the enemy pairs are sorted, the entire problem could be solved in O(M) time. Keep in mind that M could be dramatically smaller than N, and we don't even need to store N numbers in an array! This corresponds to the arithmetic proposed to skip runs of people in the other answer to this question.
If they are not sorted, other convex hull algorithms yield an O(M log M) running time with O(M) space, matching @Matt Timmermans's solution. In fact, this is the general lower bound! It can be shown with a more complicated geometric reduction: if you can solve this problem, then you can compute the sum of the heights of each number (multiplied by its distance to "the new zero") for agents satisfying j + i = N. That sum can be used to compute distances to the diagonal line, which is enough to sort a list of M numbers, a problem which cannot be solved in under O(M log M) time for adversarial inputs.
Ah, so why is it the case that we can get an O(N + M) solution by manually performing this integration, as done explicitly in the other solution? It is because we can sort the M numbers if we know that they fall into N bins, by Bucket Sort.
Thanks for sharing the puzzle!
We are given N elements in the form of an array A. Now we have to choose K indexes from the N given indexes such that, over any two chosen indexes i and j, the minimum value of |A[i] - A[j]| is as large as possible. We need to report this maximum value.
Let's take an example: let N=5 and K=2 and the array be [1,5,3,7,11]; then the answer is 10, as we can simply choose the first and last positions, and the difference is 11 - 1 = 10.
Example 2: let N=7 and K=3 and array A be [3, 9, 6, 11, 15, 20, 23]; then the answer is 8, as we can select [3,11,23] or [3,15,23].
Now, given N, K and array A, we need to find this maximum difference.
We are given that 1 ≤ N ≤ 10^5 and 1 ≤ A[i] ≤ 10^7.
Let's sort the array.
Now we can do a binary search over the answer.
For a fixed candidate x, we can just pick the elements greedily (iterating over the sorted array and taking each element if it is at least x away from the last one taken). If the number of elements we have picked is not less than K, x is feasible. Otherwise, it is not.
The time complexity is O(N * log N + N * log (MAX_ELEMENT - MIN_ELEMENT))
Pseudocode:
bool isFeasible(int x):
    cnt = 1
    last = a[0]
    for i <- 1 ... n - 1:
        if a[i] - last >= x:
            last = a[i]
            cnt++
    return cnt >= k

sort(a)
low = 0
high = a[n - 1] - a[0] + 1
while high - low > 1:
    mid = low + (high - low) / 2
    if isFeasible(mid):
        low = mid
    else:
        high = mid
print(low)
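Here is the same idea as runnable Python (a sketch; names are mine):

def max_min_distance(a, k):
    a = sorted(a)

    def is_feasible(x):
        # Greedily take an element whenever it is at least x away
        # from the last element taken.
        cnt, last = 1, a[0]
        for v in a[1:]:
            if v - last >= x:
                last = v
                cnt += 1
        return cnt >= k

    low, high = 0, a[-1] - a[0] + 1  # invariant: low feasible, high infeasible
    while high - low > 1:
        mid = (low + high) // 2
        if is_feasible(mid):
            low = mid
        else:
            high = mid
    return low

print(max_min_distance([1, 5, 3, 7, 11], 2))           # 10
print(max_min_distance([3, 9, 6, 11, 15, 20, 23], 3))  # 8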
I think this can be dealt with as a dynamic programming problem. Start off by sorting A, and then the problem is to mark K elements in A such that the minimum difference between adjacent marked items is as large as possible. As a starter, you can always mark the first and last elements.
Moving from left to right, at each position i = 1..N, work out the largest minimum difference you can achieve by marking some number of elements in the sub-array terminating at this position. You can work out the largest minimum difference for k items terminating at this position by considering the largest minimum difference for k-1 items terminating at each position to its left. The obvious thing to do is to consider every possible earlier position as ending a stretch of k-1 items, but you may be able to do a binary search here to speed things up.
Once you have worked all the way to the right hand end you know the maximum possible value for the original problem. If you need to know where to put the K elements, you can take notes as you go along so that you can backtrack to find out the elements chosen that lead to this solution, working from right to left.
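A rough Python sketch of that DP (the O(N^2 * K) variant without the binary-search speedup; the names and details are my own reading of the description, and it assumes 2 <= K <= N):

def max_min_distance_dp(a, k):
    a = sorted(a)
    n = len(a)
    # dp[t][i]: largest minimum gap achievable by marking t elements
    # of a[0..i] with the last mark at index i.
    dp = [[float('-inf')] * n for _ in range(k + 1)]
    for i in range(n):
        dp[1][i] = float('inf')  # a single mark imposes no gap yet
    for t in range(2, k + 1):
        for i in range(n):
            for j in range(i):
                dp[t][i] = max(dp[t][i], min(dp[t - 1][j], a[i] - a[j]))
    return max(dp[k])

print(max_min_distance_dp([1, 5, 3, 7, 11], 2))           # 10
print(max_min_distance_dp([3, 9, 6, 11, 15, 20, 23], 3))  # 8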
My knowledge of big-O is limited, and when log terms show up in the equation it throws me off even more.
Can someone maybe explain to me in simple terms what a O(log n) algorithm is? Where does the logarithm come from?
This specifically came up when I was trying to solve this midterm practice question:
Let X(1..n) and Y(1..n) contain two lists of integers, each sorted in nondecreasing order. Give an O(log n)-time algorithm to find the median (or the nth smallest integer) of all 2n combined elements. For ex, X = (4, 5, 7, 8, 9) and Y = (3, 5, 8, 9, 10), then 7 is the median of the combined list (3, 4, 5, 5, 7, 8, 8, 9, 9, 10). [Hint: use concepts of binary search]
I have to agree that it's pretty weird the first time you see an O(log n) algorithm... where on earth does that logarithm come from? However, it turns out that there are several different ways you can get a log term to show up in big-O notation. Here are a few:
Repeatedly dividing by a constant
Take any number n; say, 16. How many times can you divide n by two before you get a number less than or equal to one? For 16, we have that
16 / 2 = 8
8 / 2 = 4
4 / 2 = 2
2 / 2 = 1
Notice that this ends up taking four steps to complete. Interestingly, we also have that log₂ 16 = 4. Hmmm... what about 128?
128 / 2 = 64
64 / 2 = 32
32 / 2 = 16
16 / 2 = 8
8 / 2 = 4
4 / 2 = 2
2 / 2 = 1
This took seven steps, and log₂ 128 = 7. Is this a coincidence? Nope! There's a good reason for this. Suppose that we divide a number n by 2 a total of i times. Then we get the number n / 2^i. If we want to solve for the value of i where this value is at most 1, we get

n / 2^i ≤ 1
n ≤ 2^i
log₂ n ≤ i

In other words, if we pick an integer i such that i ≥ log₂ n, then after dividing n in half i times we'll have a value that is at most 1. The smallest i for which this is guaranteed is roughly log₂ n, so if we have an algorithm that divides by 2 until the number gets sufficiently small, then we can say that it terminates in O(log n) steps.
An important detail is that it doesn't matter what constant you're dividing n by (as long as it's greater than one); if you divide by the constant k, it will take log_k n steps to reach 1. Thus any algorithm that repeatedly divides the input size by some fraction will need O(log n) iterations to terminate. Those iterations might take a lot of time and so the net runtime needn't be O(log n), but the number of steps will be logarithmic.
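You can watch this happen directly with a tiny Python function:

def halving_steps(n):
    # Count how many times n can be halved before it is at most 1.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(16))   # 4
print(halving_steps(128))  # 7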
So where does this come up? One classic example is binary search, a fast algorithm for searching a sorted array for a value. The algorithm works like this:
If the array is empty, return that the element isn't present in the array.
Otherwise:
Look at the middle element of the array.
If it's equal to the element we're looking for, return success.
If it's greater than the element we're looking for:
Throw away the second half of the array.
Repeat
If it's less than the element we're looking for:
Throw away the first half of the array.
Repeat
For example, to search for 5 in the array
1 3 5 7 9 11 13
We'd first look at the middle element:
1 3 5 7 9 11 13
^
Since 7 > 5, and since the array is sorted, we know for a fact that the number 5 can't be in the back half of the array, so we can just discard it. This leaves
1 3 5
So now we look at the middle element here:
1 3 5
^
Since 3 < 5, we know that 5 can't appear in the first half of the array, so we can throw away the first half, leaving
5
Again we look at the middle of this array:
5
^
Since this is exactly the number we're looking for, we can report that 5 is indeed in the array.
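The same procedure in Python (an iterative sketch):

def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return True       # found it
        elif arr[mid] > target:
            hi = mid - 1      # throw away the second half
        else:
            lo = mid + 1      # throw away the first half
    return False              # ran out of elements

print(binary_search([1, 3, 5, 7, 9, 11, 13], 5))  # True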
So how efficient is this? Well, on each iteration we're throwing away at least half of the remaining array elements. The algorithm stops as soon as the array is empty or we find the value we want. In the worst case, the element isn't there, so we keep halving the size of the array until we run out of elements. How long does this take? Well, since we keep cutting the array in half over and over again, we will be done in at most O(log n) iterations, since we can't cut the array in half more than O(log n) times before we run out of array elements.
Algorithms following the general technique of divide-and-conquer (cutting the problem into pieces, solving those pieces, then putting the problem back together) tend to have logarithmic terms in them for this same reason - you can't keep cutting some object in half more than O(log n) times. You might want to look at merge sort as a great example of this.
Processing values one digit at a time
How many digits are in the base-10 number n? Well, if the highest digit of n sits in the 10^k place, then n has k + 1 digits. The largest such number is 999...9 (that's k + 1 nines), which is equal to 10^(k+1) - 1. Consequently, if we know that n has k + 1 digits, then we know that the value of n is at most 10^(k+1) - 1. If we want to solve for k in terms of n, we get

n ≤ 10^(k+1) - 1
n + 1 ≤ 10^(k+1)
log₁₀ (n + 1) ≤ k + 1
(log₁₀ (n + 1)) - 1 ≤ k
From which we get that k is approximately the base-10 logarithm of n. In other words, the number of digits in n is O(log n).
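A quick sanity check in Python (beware floating-point edge cases near exact powers of 10):

import math

def digit_count(n):
    # Number of base-10 digits of a positive integer n.
    return math.floor(math.log10(n)) + 1

print(digit_count(1337), len(str(1337)))  # 4 4
print(digit_count(2065), len(str(2065)))  # 4 4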
For example, let's think about the complexity of adding two large numbers that are too big to fit into a machine word. Suppose that we have those numbers represented in base 10, and we'll call the numbers m and n. One way to add them is through the grade-school method - write the numbers out one digit at a time, then work from the right to the left. For example, to add 1337 and 2065, we'd start by writing the numbers out as
    1 3 3 7
  + 2 0 6 5
  ===========
We add the last digit and carry the 1:
          1
    1 3 3 7
  + 2 0 6 5
  ===========
          2
Then we add the second-to-last ("penultimate") digit and carry the 1:
        1 1
    1 3 3 7
  + 2 0 6 5
  ===========
        0 2
Next, we add the third-to-last ("antepenultimate") digit:
        1 1
    1 3 3 7
  + 2 0 6 5
  ===========
      4 0 2
Finally, we add the fourth-to-last ("preantepenultimate"... I love English) digit:
        1 1
    1 3 3 7
  + 2 0 6 5
  ===========
    3 4 0 2
Now, how much work did we do? We do a total of O(1) work per digit (that is, a constant amount of work), and there are O(max{log n, log m}) total digits that need to be processed. This gives a total of O(max{log n, log m}) complexity, because we need to visit each digit in the two numbers.
Many algorithms get an O(log n) term in them from working one digit at a time in some base. A classic example is radix sort, which sorts integers one digit at a time. There are many flavors of radix sort, but they usually run in time O(n log U), where U is the largest possible integer that's being sorted. The reason for this is that each pass of the sort takes O(n) time, and there are a total of O(log U) iterations required to process each of the O(log U) digits of the largest number being sorted. Many advanced algorithms, such as Gabow's shortest-paths algorithm or the scaling version of the Ford-Fulkerson max-flow algorithm, have a log term in their complexity because they work one digit at a time.
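For reference, here is a minimal LSD radix sort sketch in Python; each pass does O(n) work, and the number of passes is the number of digits of the largest value, i.e. O(log U):

def radix_sort(nums, base=10):
    if not nums:
        return nums
    exp = 1
    while max(nums) // exp > 0:
        # One stable bucket pass per digit.
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // exp) % base].append(x)
        nums = [x for b in buckets for x in b]
        exp *= base
    return nums

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]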
As to your second question about how you solve that problem, you may want to look at this related question which explores a more advanced application. Given the general structure of problems that are described here, you now can have a better sense of how to think about problems when you know there's a log term in the result, so I would advise against looking at the answer until you've given it some thought.
When we talk about big-Oh descriptions, we are usually talking about the time it takes to solve problems of a given size. And usually, for simple problems, that size is just characterized by the number of input elements, and that's usually called n, or N. (Obviously that's not always true-- problems with graphs are often characterized in numbers of vertices, V, and number of edges, E; but for now, we'll talk about lists of objects, with N objects in the lists.)
We say that a problem "is big-Oh of (some function of N)" if and only if:
there is some constant c and some threshold N_0 such that, for all N > N_0, the runtime of the algorithm is less than c times (some function of N).
In other words, don't think about small problems where the "constant overhead" of setting up the problem matters, think about big problems. And when thinking about big problems, big-Oh of (some function of N) means that the run-time is still always less than some constant times that function. Always.
In short, that function is an upper bound, up to a constant factor.
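Written out formally (in LaTeX notation), that definition reads:

T(N) = O(f(N)) \iff \exists\, c > 0,\ \exists\, N_0\ \text{such that}\ \forall N > N_0:\ T(N) \le c \cdot f(N)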
So, "big-Oh of log(n)" means the same thing that I said above, except "some function of N" is replaced with "log(n)."
So, your problem tells you to think about binary search, so let's think about that. Let's assume you have, say, a list of N elements that are sorted in increasing order. You want to find out if some given number exists in that list. One way to do that which is not a binary search is to just scan each element of the list and see if it's your target number. You might get lucky and find it on the first try. But in the worst case, you'll check N different times. This is not binary search, and it is not big-Oh of log(N) because there's no way to force it into the criteria we sketched out above.
You can pick that arbitrary constant to be c=10, and if your list has N=32 elements, you're fine: 10*log(32) = 50, which is greater than the runtime of 32. But if N=64, 10*log(64) = 60, which is less than the runtime of 64. You can pick c=100, or 1000, or a gazillion, and you'll still be able to find some N that violates that requirement. In other words, there is no N_0.
If we do a binary search, though, we pick the middle element, and make a comparison. Then we throw out half the numbers, and do it again, and again, and so on. If your N=32, you can only do that about 5 times, which is log(32). If your N=64, you can only do this about 6 times, etc. Now you can pick that arbitrary constant c, in such a way that the requirement is always met for large values of N.
With all that background, what O(log(N)) usually means is that you have some way to do a simple thing, which cuts your problem size in half. Just like the binary search is doing above. Once you cut the problem in half, you can cut it in half again, and again, and again. But, critically, what you can't do is some preprocessing step that would take longer than that O(log(N)) time. So for instance, you can't shuffle your two lists into one big list, unless you can find a way to do that in O(log(N)) time, too.
(NOTE: Nearly always, Log(N) means log-base-two, which is what I assume above.)
In the following solution, every line containing a recursive call operates on half of the given sizes of the sub-arrays of X and Y; all other lines take constant time. The recurrence is therefore T(2n) = T(2n/2) + c = T(n) + c, which solves to O(lg(2n)) = O(lg n).
You start with MEDIAN(X, 1, n, Y, 1, n).
MEDIAN(X, p, r, Y, i, k)
    if X[r] < Y[i]
        return X[r]
    if Y[k] < X[p]
        return Y[k]
    q = floor((p+r)/2)
    j = floor((i+k)/2)
    if r-p+1 is even
        if X[q+1] > Y[j] and Y[j+1] > X[q]
            if X[q] > Y[j]
                return X[q]
            else
                return Y[j]
        if X[q+1] < Y[j-1]
            return MEDIAN(X, q+1, r, Y, i, j)
        else
            return MEDIAN(X, p, q, Y, j+1, k)
    else
        if X[q] > Y[j] and Y[j+1] > X[q-1]
            return Y[j]
        if Y[j] > X[q] and X[q+1] > Y[j-1]
            return X[q]
        if X[q+1] < Y[j-1]
            return MEDIAN(X, q, r, Y, i, j)
        else
            return MEDIAN(X, p, q, Y, j, k)
The Log term pops up very often in algorithm complexity analysis. Here are some explanations:
1. How do you represent a number?
Let's take the number X = 245436. This notation of “245436” has implicit information in it. Making that information explicit:
X = 2 * 10 ^ 5 + 4 * 10 ^ 4 + 5 * 10 ^ 3 + 4 * 10 ^ 2 + 3 * 10 ^ 1 + 6 * 10 ^ 0
Which is the decimal expansion of the number. So, the minimum amount of information we need to represent this number is 6 digits. This is no coincidence, as any number less than 10^d can be represented in d digits.
So how many digits are required to represent X? That's equal to the largest exponent of 10 in X, plus 1.
==> 10 ^ d > X
==> log (10 ^ d) > log(X)
==> d* log(10) > log(X)
==> d > log(X) // And log appears again...
==> d = floor(log(X)) + 1
Also note that this is the most concise way to denote the number in this range. Any reduction will lead to information loss, as a missing digit can be mapped to 10 other numbers. For example: 12* can be mapped to 120, 121, 122, …, 129.
2. How do you search for a number in (0, N - 1)?
Taking N = 10^d, we use our most important observation:
The minimum amount of information to uniquely identify a value in a range between 0 to N - 1 = log(N) digits.
This implies that, when asked to search for a number on the integer line, ranging from 0 to N - 1, we need at least log(N) tries to find it. Why? Any search algorithm will need to choose one digit after another in its search for the number.
The minimum number of digits it needs to choose is log(N). Hence the minimum number of operations taken to search for a number in a space of size N is log(N).
Can you guess the order complexities of binary search, ternary search or deca search? It's O(log(N))!
3. How do you sort a set of numbers?
When asked to sort a set of numbers A into an array B, here's what it looks like: we permute the elements.
Every element in the original array has to be mapped to its corresponding index in the sorted array. So, for the first element, we have n positions. To correctly find the corresponding index in this range from 0 to n - 1, we need... log(n) operations.
The next element needs log(n-1) operations, the next log(n-2) and so on. The total comes to be:
==> log(n) + log(n - 1) + log(n - 2) + … + log(1)
Using log(a) + log(b) = log(a * b),
==> log(n!)
This can be approximated to n*log(n) - n (Stirling's approximation), which is O(n*log(n))!
Hence we conclude that no comparison-based sorting algorithm can do better than O(n*log(n)). And some algorithms having this complexity are the popular Merge Sort and Heap Sort!
These are some of the reasons why we see log(n) pop up so often in the complexity analysis of algorithms. The same can be extended to binary numbers. I made a video on that here.
Cheers!
We call the time complexity O(log n), when the solution is based on iterations over n, where the work done in each iteration is a fraction of the previous iteration, as the algorithm works towards the solution.
Can't comment yet... necro it is!
Avi Cohen's answer is incorrect, try:
X = 1 3 4 5 8
Y = 2 5 6 7 9
None of the conditions are true, so MEDIAN(X, p, q, Y, j, k) will cut both the fives. These are nondecreasing sequences, not all values are distinct.
Also try this even-length example with distinct values:
X = 1 3 4 7
Y = 2 5 6 8
Now MEDIAN(X, p, q, Y, j+1, k) will cut the four.
Instead I offer this algorithm, call it with MEDIAN(1,n,1,n):
MEDIAN(startx, endx, starty, endy){
    if (startx == endx)
        return min(X[startx], Y[starty])
    odd = (startx + endx) % 2    // 0 if even, 1 if odd
    m = (startx + endx - odd) / 2
    n = (starty + endy - odd) / 2
    x = X[m]
    y = Y[n]
    if (x == y)
        // then there are n-2{+1} total elements smaller than or equal
        // to both x and y, so this value is the nth smallest:
        // we have found the median.
        return x
    if (x < y)
        // If we remove some numbers smaller than the median, and remove
        // the same amount of numbers bigger than the median, the median
        // will not change. We know the elements before x are smaller than
        // the median, and the elements after y are bigger than the median,
        // so we discard these and continue the search:
        return MEDIAN(m, endx, starty, n + 1 - odd)
    else // (x > y)
        return MEDIAN(startx, m + 1 - odd, n, endy)
}
This was an interview question, which seems related to Project Euler Problem 14
Collatz conjecture says that if you do the following
If n is even, replace n by n/2.
If n is odd, replace n by 3n+1.
You ultimately end up with 1.
For instance, 5 -> 16 -> 8 -> 4 -> 2 -> 1
Assuming the conjecture is true, each number has a chain length: The number of steps required to get to 1. (The chain length of 1 is 0).
Now, the problem is given natural numbers n, m and a natural number k, give an algorithm to find all numbers between 1 and n, such that the chain length of those numbers is <= k. There is also the restriction that the chain of any of those numbers must only include numbers between 1 and m (i.e. you cannot go over m).
An easy way is to brute-force it out, and combine that with memoization.
The interviewer said there was an O(S) time algorithm, where S is the number of numbers we need to output.
Does anyone know what it could be?
I think you can solve this in O(S) by running the process backwards. If you know what k is, then you can build up all of the numbers that halt in at most k steps using the following logic:
1 has a chain of length 0.
If a number z has a chain of length n, then 2z has a chain of length n + 1.
If a number z has a chain of length n, z - 1 is a multiple of three (other than 0 or 3), and (z - 1)/3 is odd, then (z - 1) / 3 has a chain of length n + 1.
From this, you can start building up the numbers in the sequence starting at 1:
       1
       |
       2
       |
       4
       |
       8
       |
      16
     /  \
   32    5
    |    |
   64   10
   / \   / \
128  21 20  3
We could do this algorithm using a work queue storing the numbers we need to visit and the lengths of their chains. We populate the queue with the pair (1, 0), then continuously dequeue an element (z, n) and enqueue (2z, n + 1) and (possibly) ((z - 1) / 3, n + 1). This is essentially doing a breadth-first search in the Collatz graph starting from one. When we dequeue the first element at depth greater than k, we can stop the search.
Assuming that the Collatz conjecture holds true, we'll never get any duplicates this way. Moreover, we'll have found all numbers reachable in at most k steps this way (you can check this with a quick inductive proof). And finally, this takes O(S) time. To see this, note that the work done per number generated is a constant (dequeue and insert up to two new numbers into the queue).
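Here is a sketch of that BFS in Python (names are mine; it also enforces the problem's restriction that chains stay within [1, m]):

from collections import deque

def chain_length_at_most_k(k, m):
    # Backwards BFS from 1 over the Collatz graph, as described above.
    # Returns every number whose chain length is <= k and whose chain
    # stays within [1, m]; filter the result to [1, n] if needed.
    result = []
    queue = deque([(1, 0)])  # (number z, chain length of z)
    while queue:
        z, length = queue.popleft()
        result.append(z)
        if length == k:
            continue  # successors would have chain length k + 1
        if 2 * z <= m:
            queue.append((2 * z, length + 1))
        w, r = divmod(z - 1, 3)
        # w must be odd, greater than 1 (avoiding the 1 -> 4 -> 2 -> 1
        # cycle), and within bounds.
        if r == 0 and w % 2 == 1 and w > 1 and w <= m:
            queue.append((w, length + 1))
    return result

print(sorted(chain_length_at_most_k(5, 100)))  # [1, 2, 4, 5, 8, 16, 32]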
Hope this helps!