What is this obscure sorting algorithm called? - algorithm

I came up with an obscure sorting algorithm and given that it's so simple, it must have been invented and named before, so I was wondering what it's called.
It has a very rare constraint: It only works for inputs that have keys from 0 to n-1 (or equivalent). That's a very strong constraint that makes it useless in practice, but maybe one can construct some artificial settings in which it's useful. The algorithm basically swaps the element at a particular position with its final position until the array is sorted. Pseudocode:
def obscure_sort(array):
    sorted_until = 1
    while true:
        if key(array[0]) != 0:
            # Swap the element at position 0 to its final position.
            swap(array, 0, key(array[0]))
        else:
            # Find the next element that isn't in its final position.
            while key(array[sorted_until]) == sorted_until:
                sorted_until++
                # If we happen to reach the end, we're done.
                if sorted_until == array.length:
                    return
            # Swap the newfound first unsorted element to position 0.
            swap(array, 0, sorted_until)
The algorithm actually runs in O(n). It's not completely trivial to see that and I'll leave out the analysis unless someone is really interested.
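In case it helps, here is a direct, runnable Python translation of the pseudocode (my own sketch, assuming key(x) is simply x, i.e. the array is a permutation of 0..n-1):

def obscure_sort(array):
    # Sketch: assumes array is a permutation of 0..n-1 (so key(x) == x) and n >= 2.
    if len(array) < 2:
        return array
    sorted_until = 1
    while True:
        if array[0] != 0:
            # Swap the element at position 0 into its final position.
            target = array[0]
            array[0], array[target] = array[target], array[0]
        else:
            # Find the next element that isn't in its final position.
            while array[sorted_until] == sorted_until:
                sorted_until += 1
                # If we happen to reach the end, we're done.
                if sorted_until == len(array):
                    return array
            # Swap the newfound first unsorted element to position 0.
            array[0], array[sorted_until] = array[sorted_until], array[0]

print(obscure_sort([1, 3, 0, 2]))  # [0, 1, 2, 3]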
Does anyone know if this has a name?

This is a slight variation of a restricted cycle sort, probably closest to the algorithm from section 3 of this paper.
Normally with cycle sort on the keys A = [0, 1,...(A.length-1)], you would loop through the array testing indices 0 to A.length-1 as a 'cycle start', looking for cycles to rotate. One 'rotation' is done by always holding a temporary variable 'temp' (initially our cycle start), and doing a swap(temp, A[temp]) until we are back at the start of the cycle (i.e., when temp == A[temp]).
Here, in contrast, we add 0 at the back of the cycle, and 'A[0]' takes the place of 'temp'. We use the operation swap(A[0], A[A[0]]), so that in general, an element x that's moved takes a journey of A[old] -> A[0] -> A[x] rather than A[temp] -> temp -> A[x].
In the linear time algorithm described in the paper above, upon starting loop iteration i, all of the elements 0, 1, ..., i-1 are in place and never moved again. This algorithm is similar, except that if it were written with the same loop style, 0, 1, ..., i-1 are also in place at the start of iteration i but element 0 is not fixed, being moved constantly during an iteration.
As a small example:
Traditional Cycle Sort
Initially, A = [1, 3, 0, 2]
Step 1: A = [1, 3, 0, 2], temp = 1, with cycle_start = 0
Step 2: A = [1, 1, 0, 2], temp = 3
Step 3: A = [1, 1, 0, 3], temp = 2
Step 4: A = [1, 1, 2, 3], temp = 0
Step 5: A = [0, 1, 2, 3], temp = 1; stop since temp == A[temp]
Custom Cycle-like Sort
A = [1, 3, 0, 2]
Step 1: A = [1, 3, 0, 2]
Step 2: A = [3, 1, 0, 2]
Step 3: A = [2, 1, 0, 3]
Step 4: A = [0, 1, 2, 3]
Note that this new sort can take more steps than the normal cycle sort, since 'adding 0 at the back of the cycle' can add an additional swap operation per cycle. The total number of array swaps, though, is linear (and at most twice the array length).
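For comparison, here is a minimal runnable sketch (mine, in Python) of the traditional restricted cycle sort described above, again assuming the array is a permutation of 0..n-1:

def restricted_cycle_sort(a):
    # Assumes a is a permutation of 0..len(a)-1.
    for cycle_start in range(len(a)):
        temp = a[cycle_start]
        # Rotate the cycle: keep swapping temp with the element sitting in
        # temp's final slot until the cycle closes (temp == a[temp]).
        while temp != a[temp]:
            a[temp], temp = temp, a[temp]
    return a

print(restricted_cycle_sort([1, 3, 0, 2]))  # [0, 1, 2, 3]

Each swap writes one value into its final slot, where it is never touched again, so the total number of swaps is at most n, matching the linear bound discussed above.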

Related

Problems with the Iterator/Conditional Section of my Data Structure Manipulation Code - Ruby

I am stuck on the iterator/conditional section of my code; it does not complete the iteration as expected.
The routine accepts a random non-negative integer and must return a rearranged integer where the leftmost digit is the greatest digit of the input integer and the rightmost digit is the smallest.
The code is as follows:
def descending_order(n)
  new = n.digits
  result=[]
  for i in new
    if i=new.max() then new.delete(i) && result.push(i)
    end
  end
  return result.join().to_i
end
The sample input is as follows:
descending_order(6022563)
A sample of the erroneous result I get is as follows:
descending_order(6022563) should equal 6653220 - Expected: 6653220, instead got: 653
This doesn’t address WHY you’re having a problem with your code, but here’s an alternative solution:
def descending_order(n)
  n.digits.sort.reverse.join.to_i
end
descending_order(6022563)
#=> 6653220
If you want to figure out what's going on in your code, one of the best ways to do that is to simply execute it step-by-step with pen and paper, keeping track of the current value of all variables in your code. So, let's do that.
However, before we start stepping through the code, let's first see if we can simplify it without changing how it works.
Firstly, the return is redundant. In Ruby, the return value of a block of code is the value of the last expression that was evaluated in that block. So, we can just remove the return.
Secondly, for … in is equivalent (not exactly, but the differences don't matter in this particular case) to #each, so your code could also be written like this:
def descending_order(n)
  new = n.digits
  result = []
  new.each do |i|
    if i = new.max
      new.delete(i) && result.push(i)
    end
  end
  result.join.to_i
end
Next, let's look at this bit:
new.delete(i) && result.push(i)
This will delete all occurrences of i from new, and also test whether the return value of new.delete(i) is trueish. If and only if the value is trueish, result.push(i) will be executed.
However, new.delete(i) will only return a falseish value (namely nil) if the value i was not found in new. But we just assigned i to be the maximum value in new, so we know it always exists, and therefore new.delete(i) will always return i and never nil. This means result.push(i) will always be executed, the conditional is not doing anything, and we can just remove it:
new.delete(i)
result.push(i)
Now, let's look at the conditional:
if i = new.max
This assigns the maximum value of the new array to i and also tests whether that value is trueish. In other words, it is equivalent to
i = new.max
if i
In Ruby, the only two values that are falseish are false and nil, so the only way this conditional can fail is if the maximum value of the new array is either false or nil. Since we know that we created the array from the digits of a number, we know that it only contains integers, not nil nor false. Enumerable#max also returns nil if the array is empty, so, the only way this can fail is if the array is empty. In other words, it is equivalent to
i = new.max
if !new.empty?
or
i = new.max
unless new.empty?
However, we also know that we are in the middle of iterating over new, so it cannot possibly be empty! If it were empty, the iteration would be executed 0 times, i.e. we would not be hitting this conditional at all.
Therefore, the conditional will always be true and can just be removed:
def descending_order(n)
  new = n.digits
  result = []
  new.each do |i|
    i = new.max
    new.delete(i)
    result.push(i)
  end
  result.join.to_i
end
Lastly, let's look at the iteration variable i. It will get assigned the current element of the iteration, but we immediately re-assign it on the first line of the block without ever looking at its value. So, it is actually not used as an iteration variable at all, it is simply a local variable and we can remove it from the block:
def descending_order(n)
  new = n.digits
  result = []
  new.each do
    i = new.max
    new.delete(i)
    result.push(i)
  end
  result.join.to_i
end
With this simplified code in place, we are now going to look at each step of each iteration of the loop separately.
We start off with the following (let's imagine we are just before the start of the first iteration):
Variable | Value
new      | [3, 6, 5, 2, 2, 0, 6]
result   | []
i        |
But, there is actually another, hidden, variable as well: the pointer to the current index. Remember, we are in the middle of iterating over new and each must internally somehow keep track of where we are. So, even though we don't know how each works internally, we can assume that it needs to somehow, somewhere, remember where we are. So, we have another piece of state to keep track of: the current index of the iteration.
Variable | Value
new      | [3, 6, 5, 2, 2, 0, 6]
result   | []
i        |
index    |
Alright, let's look at the first step of the first iteration, which is
i = new.max
Variable | Value
new      | [3, 6, 5, 2, 2, 0, 6]
result   | []
i        | 6
index    | 0: [→ 3 ←, 6, 5, 2, 2, 0, 6]
The next step is
new.delete(i)
which deletes all occurrences of i from new, i.e. it deletes all occurrences of 6 from [3, 6, 5, 2, 2, 0, 6]. This leaves us with [3, 5, 2, 2, 0], but what is also important is that each doesn't know what we are doing, that we are mutating the array. Therefore, the pointer will still stay at position 0, it will not move.
Variable | Value
new      | [3, 5, 2, 2, 0]
result   | []
i        | 6
index    | 0: [→ 3 ←, 5, 2, 2, 0]
The next step is
result.push(i)
which appends i to the end of the result array:
Variable | Value
new      | [3, 5, 2, 2, 0]
result   | [6]
i        | 6
index    | 0: [→ 3 ←, 5, 2, 2, 0]
And that's it for the first iteration!
Let's look now at the second iteration. The first thing that is going to happen is that each internally increments its counter or moves its pointer, or however each is implemented internally. Again, we don't know how each is implemented internally, but logic dictates that it must somehow keep track of where we are in the iteration, and we can also assume that before the next iteration, it will somehow need to move this.
So, at the beginning of the iteration, the situation now looks like this:
Variable | Value
new      | [3, 5, 2, 2, 0]
result   | [6]
i        |
index    | 1: [3, → 5 ←, 2, 2, 0]
Alright, let's look at the first step of the second iteration:
i = new.max
We again assign i to the maximum value of new, which is now 5:
Variable | Value
new      | [3, 5, 2, 2, 0]
result   | [6]
i        | 5
index    | 1: [3, → 5 ←, 2, 2, 0]
Next step is
new.delete(i)
which deletes all occurrences of 5 from [3, 5, 2, 2, 0], which leaves us with [3, 2, 2, 0]. But remember what we also said above: each doesn't know what we are doing, that we are mutating the array. Therefore, the pointer will still stay at position 1, it will not move. But the item that is at position 1 (which is the number 5) will be deleted! That means, all elements after the 5, i.e. all elements after index 1 now get shifted one index to the left, and that means the index pointer is still pointing at the same index but there is now a different element at that index:
Variable | Value
new      | [3, 2, 2, 0]
result   | [6]
i        | 5
index    | 1: [3, → 2 ←, 2, 0]
The root cause of this problem is that we are mutating new at the same time that we are iterating over it. This is a big no-no. You should never mutate a data structure while you are processing it. Or, at the very least, you need to be very careful that the mutations you do perform do not cause you to incorrectly process or skip parts of it.
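The same pitfall exists in most languages. A toy illustration (in Python, purely because it is short to run; this is a made-up example, not your code):

digits = [3, 6, 5, 2, 2, 0, 6]
visited = []
for d in digits:
    visited.append(d)
    if d == max(digits):
        digits.remove(d)  # later elements shift left, but the loop's hidden
                          # index doesn't know, so the next element is skipped

print(visited)  # [3, 6, 2, 2, 0, 6] -- the 5 was never visited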
Next step is
result.push(i)
which means we push 5 to the end of result:
Variable | Value
new      | [3, 2, 2, 0]
result   | [6, 5]
i        | 5
index    | 1: [3, → 2 ←, 2, 0]
Alright, let's get to the next iteration: the pointer is again pushed forward to the next element, which leaves us with this picture:
Variable | Value
new      | [3, 2, 2, 0]
result   | [6, 5]
i        |
index    | 2: [3, 2, → 2 ←, 0]
First step:
i = new.max
Variable | Value
new      | [3, 2, 2, 0]
result   | [6, 5]
i        | 3
index    | 2: [3, 2, → 2 ←, 0]
Next step:
new.delete(i)
Variable | Value
new      | [2, 2, 0]
result   | [6, 5]
i        | 3
index    | 2: [2, 2, → 0 ←]
Next step:
result.push(i)
Variable | Value
new      | [2, 2, 0]
result   | [6, 5, 3]
i        | 3
index    | 2: [2, 2, → 0 ←]
As you can see, the iteration pointer is now at the end of the array, so there will be no next iteration. Therefore, the final value of the result array is [6, 5, 3], which we now join into a string and convert to an integer, so the final result is 653.
Fundamentally, there are four problems with your code:
Presumably, the if i = new.max combined assignment / conditional which always clobbers the value of i is a typo, and you meant to write if i == new.max.
You are mutating the new array while at the same time you are iterating over it.
You are deleting every occurrence of i from new, but you are only adding one occurrence to the result.
Your logic is wrong: if the current element is not the maximum element, you skip over it and ignore it completely. But if the current element is not the maximum element, that only means that it should appear in the output later, not that it should not appear at all.
If you only fix #1, i.e. you replace i = new.max with i == new.max and change nothing else about your code, the result will be 6. If you only fix #2, i.e. you replace for i in new with for i in new.dup (thus duplicating the new array and iterating over the copy so that mutating the new array itself does not influence the iteration), the result will be 65320. If you fix both #1 and #2, the result will be 65.
I encourage you to take pen and paper and trace all those three variants the same way we did above and fully understand what is going on.

Finding minimum element to the right of an index in an array for all indices

Given an array, I wish to find the minimum element to the right of the current element at i, where 0 <= i < n, and store the index of the corresponding minimum element in another array.
For example, I have an array A = {1, 3, 6, 7, 8}.
The result array would contain R = {1, 2, 3, 4} (the R array stores the indices of the minimum elements).
I could only think of an O(N^2) approach, where for each element in A I would traverse the remaining elements to its right and find the minimum.
Is it possible to do this in O(N)? I want to use the solution to solve another problem.
You should be able to do this in O(n) by filling the array from the right hand side and maintaining the index of the current minimum, as per the following pseudo-code:
def genNewArray (oldArray):
    newArray = new array[oldArray.size]
    saveIndex = -1
    for i = newArray.size - 1 down to 0:
        newArray[i] = saveIndex
        if saveIndex == -1 or oldArray[i] < oldArray[saveIndex]:
            saveIndex = i
    return newArray
This passes through the array once, giving you the O(n) time complexity. It can do this because, once you've found a minimum beyond element N, it will only change for element N-1 if element N is less than the current minimum.
The following Python code shows this in action:
def genNewArray (oldArray):
    newArray = []
    saveIndex = -1
    for i in range (len (oldArray) - 1, -1, -1):
        newArray.insert (0, saveIndex)
        if saveIndex == -1 or oldArray[i] < oldArray[saveIndex]:
            saveIndex = i
    return newArray

oldList = [1,3,6,7,8,2,7,4]
x = genNewArray (oldList)
print "idx", [0,1,2,3,4,5,6,7]
print "old", oldList
print "new", x
The output of this is:
idx [0, 1, 2, 3, 4, 5, 6, 7]
old [1, 3, 6, 7, 8, 2, 7, 4]
new [5, 5, 5, 5, 5, 7, 7, -1]
and you can see that the indexes at each element of the new array (the second one) correctly point to the minimum value to the right of each element in the original (first one).
Note that I've taken one specific definition of "to the right of", meaning it doesn't include the current element. If your definition of "to the right of" includes the current element, just change the order of the insert and if statement within the loop so that the index is updated first:
idx [0, 1, 2, 3, 4, 5, 6, 7]
old [1, 3, 6, 7, 8, 2, 7, 4]
new [0, 5, 5, 5, 5, 5, 7, 7]
The code for that removes the check on saveIndex since you know that the minimum index for the last element can be found at the last element:
def genNewArray (oldArray):
    newArray = []
    saveIndex = len (oldArray) - 1
    for i in range (len (oldArray) - 1, -1, -1):
        if oldArray[i] < oldArray[saveIndex]:
            saveIndex = i
        newArray.insert (0, saveIndex)
    return newArray
Looks like HW. Let f(i) denote the index of the minimum element to the right of the element at i. Now consider walking backwards (filling in f(n-1), then f(n-2), f(n-3), ..., f(3), f(2), f(1)) and think about how information of f(i) can give you information of f(i-1).

Loop through different sets of unique permutations

I'm having a hard time getting started laying out code for this problem.
I have a fixed amount of random numbers, in this case 8 numbers.
R[] = { 1, 2, 3, 4, 5, 6, 7, 8 };
These are going to be placed into 3 sets of numbers, with the only constraints that each set contains at least one value and each value can only be used once. Edit: all 8 numbers should be used.
For example:
R1[] = { 1, 4 }
R2[] = { 2, 8, 5, 6 }
R3[] = { 7, 3 }
I need to loop through all possible combinations of a set R1, R2, R3. Order is not important, so if the above example happened, I don't need
R1[] = { 4, 1 }
R2[] = { 2, 8, 5, 6 }
R3[] = { 7, 3 }
NOR
R1[] = { 2, 8, 5, 6 }
R2[] = { 7, 3 }
R3[] = { 1, 4 }
What is a good method?
I have in front of me Knuth Volume 4, Fascicle 3, Generating all Combinations and Partitions, section 7.2.1.5 Generating all set partitions (page 61 in fascicle).
First he details Algorithm H, Restricted growth strings in lexicographic order due to George Hutchinson. It looks simple, but I'm not going to dive into it just now.
On the next page under an elaboration Gray codes for set partitions he ponders:
Suppose, however, that we aren't interested in all of the partitions; we might want only the ones that have m blocks. Can we run this through the smaller collection of restricted growth strings, still changing one digit at a time?
Then he details a solution due to Frank Ruskey.
The simple solution (and certain to be correct) is to code Algorithm H filtering on partitions where m==3 and none of the partitions are the empty set (according to your stated constraints). I suspect Algorithm H runs blazingly fast, so the filtering cost will not be large.
If you're implementing this on an 8051, you might start with the Ruskey algorithm and then only filter on partitions containing the empty set.
If you're implementing this on something smaller than an 8051 and milliseconds matter, you can seed each of the three partitions with a unique element (a simple nested loop of three levels), and then augment by partitioning on the remaining five elements for m==3 using the Ruskey algorithm. You won't have to filter anything, but you do have to keep track of which five elements remain to partition.
The nice thing about filtering down from the general algorithm is that you don't have to verify any cleverness of your own, and you can change your mind later about your constraints without having to revise your cleverness.
I might even work a solution later, but that's all for now.
P.S. for the Java guppies: I discovered searching on "George Hutchison restricted growth strings" a certain package ca.ubc.cs.kisynski.bell with documentation for method growthStrings() which implements the Hutchison algorithm.
Appears to be available at http://www.cs.ubc.ca/~kisynski/code/bell/
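To make the filtering idea concrete, here is a small Python sketch of my own (not Knuth's Algorithm H or Ruskey's algorithm verbatim): generate restricted growth strings and keep only those that use exactly m block labels.

def partitions_into_m_blocks(items, m):
    # Sketch: enumerate restricted growth strings over `items`, keeping only
    # those that use exactly m labels, i.e. partitions into m non-empty blocks.
    n = len(items)

    def extend(rgs, used):                 # used = number of block labels so far
        if len(rgs) == n:
            if used == m:
                blocks = [[] for _ in range(m)]
                for item, label in zip(items, rgs):
                    blocks[label].append(item)
                yield blocks
            return
        # Restricted growth: the next digit may be 0..used (capped at m-1 so we
        # never build strings that would need more than m blocks).
        for label in range(min(used, m - 1) + 1):
            yield from extend(rgs + [label], max(used, label + 1))

    if n and m:
        yield from extend([0], 1)          # the first element always starts block 0

# Partitions of {1,...,8} into 3 unordered non-empty blocks: S(8,3) = 966 of them.
print(sum(1 for _ in partitions_into_m_blocks(list(range(1, 9)), 3)))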
Probably not the best approach but it should work.
Determine number of combinations of three numbers which sum to 8:
1,1,6
1,2,5
1,3,4
2,2,4
2,3,3
To find the above I started with:
6,1,1 then subtracted 1 from six and added it to the next column...
5,2,1 then subtracted 1 from second column and added to next column...
5,1,2 then started again at first column...
4,2,2 carry again from second to third
4,1,3 again from first...
3,2,3 second -> third
3,1,4
Knowing that less than half is 2, all combinations must have been found... but since the list isn't long, we might as well go to the end.
Now sort each list of 3 from greatest to least (or vice versa).
Now sort each list of 3 relative to each other.
Copy each unique list into a list of unique lists.
We now have all the combinations which add to 8 (five lists, I think).
Now consider a list in the above set
6,1,1: all the possible combinations are found by:
8 pick 6 (since we picked six, there are only 2 left to pick from), then 2 pick 1, then 1 pick 1,
which works out to 28 * 2 * 1 = 56. It is worth knowing how many possibilities there are so you can test.
n choose r (pick r elements from n total options)
n C r = n! / [(n-r)! r!]
So now you have the total number of iterations for each component of the list; for the first one it is 28...
Well, picking 6 items from 8 is the same as creating a list of 8 minus 2 elements, but which two elements?
Well, if we remove 1,2 that leaves us with 3,4,5,6,7,8. Let's consider all groups of 2... Starting with 1,2, the next would be 1,3... so the following is read column by column.
12
13 23
14 24 34
15 25 35 45
16 26 36 46 56
17 27 37 47 57 67
18 28 38 48 58 68 78
Summing the columns above gives us 28. (This only covered the first digit in the list (6,1,1).) Repeat the procedure for the second digit (a one), which is "2 choose 1": of the two leftover digits from the list above we pick one, and then for the last digit we pick the remaining one.
I know this is not a detailed algorithm but I hope you'll be able to get started.
Turn the problem on its head and you'll find a straightforward solution. You've got 8 numbers that each need to be assigned to exactly one group; the "solution" is only a solution if at least one number got assigned to each group.
The trivial implementation would involve 8 for loops and a few IF's (pseudocode):
for num1 in [1,2,3]
  for num2 in [1,2,3]
    for num3 in [1,2,3]
      ...
        if ((num1==1) or (num2==1) or (num3 == 1) ... (num8 == 1)) and ((num1 == 2) or ... or (num8 == 2)) and ((num1 == 3) or ... or (num8 == 3))
          Print Solution!
It may also be implemented recursively, using two arrays and a couple of functions. Much nicer and easier to debug/follow (pseudocode):
numbers = [1, 2, 3, 4, 5, 6, 7, 8]
positions = [0, 0, 0, 0, 0, 0, 0, 0]

function HandleNumber(i) {
  for position in [1,2,3] {
    positions[i] = position;
    if (i == LastPosition) {
      // Check if valid solution (it's valid if we got numbers in all groups)
      // and print solution!
    }
    else HandleNumber(i+1)
  }
}
The third implementation would use no recursion and a little bit of backtracking. Pseudocode, again:
numbers = [1,2,3,4,5,6,7,8]
groups = [0,0,0,0,0,0,0,0]
c_pos = 0  // Current position in the numbers array; we're done when we reach -1

while (c_pos != -1) {
  if (groups[c_pos] == 3) {
    // Back-track
    groups[c_pos] = 0;
    c_pos = c_pos - 1
  }
  else {
    // Try the next group
    groups[c_pos] = groups[c_pos] + 1
    // Advance to next position OR print solution
    if (c_pos == LastPosition) {
      // Check for valid solution (all groups are used) and print solution!
    }
    else
      c_pos = c_pos + 1
  }
}
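A runnable Python version of the recursive idea above (my sketch; like the pseudocode, it treats the three groups as labeled, so the 3! rearrangements of any given grouping are all generated):

def assignments(numbers, num_groups=3):
    # Assign each number to one of num_groups labeled groups; yield only
    # assignments in which every group received at least one number.
    groups = [[] for _ in range(num_groups)]

    def recurse(i):
        if i == len(numbers):
            if all(groups):  # every group is non-empty
                yield [list(g) for g in groups]
            return
        for g in groups:
            g.append(numbers[i])
            yield from recurse(i + 1)
            g.pop()

    yield from recurse(0)

# 3^8 - 3*2^8 + 3 = 5796 labeled assignments of 1..8 into 3 non-empty groups.
print(sum(1 for _ in assignments(list(range(1, 9)))))

To get the unordered groupings the question asks for, either divide the count by 3! (5796 / 6 = 966) or canonicalize each solution, for example by sorting the groups by their smallest element, and deduplicate.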
Generate all combinations of subsets recursively in the classic way. When you reach the point where the number of remaining elements equals the number of empty subsets, then restrict yourself to the empty subsets only.
Here's a Python implementation:
def combinations(source, n):
    def combinations_helper(source, subsets, p=0, nonempty=0):
        if p == len(source):
            yield subsets[:]
        elif len(source) - p == len(subsets) - nonempty:
            empty = [subset for subset in subsets if not subset]
            for subset in empty:
                subset.append(source[p])
                for combination in combinations_helper(source, subsets, p+1, nonempty+1):
                    yield combination
                subset.pop()
        else:
            for subset in subsets:
                newfilled = not subset
                subset.append(source[p])
                for combination in combinations_helper(source, subsets, p+1, nonempty+newfilled):
                    yield combination
                subset.pop()

    assert len(source) >= n, "Not enough items"
    subsets = [[] for _ in xrange(n)]
    for combination in combinations_helper(source, subsets):
        yield combination
And a test:
>>> for combination in combinations(range(1, 5), 2):
... print ', '.join(map(str, combination))
...
[1, 2, 3], [4]
[1, 2, 4], [3]
[1, 2], [3, 4]
[1, 3, 4], [2]
[1, 3], [2, 4]
[1, 4], [2, 3]
[1], [2, 3, 4]
[2, 3, 4], [1]
[2, 3], [1, 4]
[2, 4], [1, 3]
[2], [1, 3, 4]
[3, 4], [1, 2]
[3], [1, 2, 4]
[4], [1, 2, 3]
>>> len(list(combinations(range(1, 9), 3)))
5796

Find the middle element in merged arrays in O(logn)

We have two sorted arrays of the same size n. Let's call the arrays a and b.
How do we find the middle element of the sorted array obtained by merging a and b?
Example:
n = 4
a = [1, 2, 3, 4]
b = [3, 4, 5, 6]
merged = [1, 2, 3, 3, 4, 4, 5, 6]
mid_element = merged[(0 + merged.length - 1) / 2] = merged[3] = 3
More complicated cases:
Case 1:
a = [1, 2, 3, 4]
b = [3, 4, 5, 6]
Case 2:
a = [1, 2, 3, 4, 8]
b = [3, 4, 5, 6, 7]
Case 3:
a = [1, 2, 3, 4, 8]
b = [0, 4, 5, 6, 7]
Case 4:
a = [1, 3, 5, 7]
b = [2, 4, 6, 8]
Time required: O(log n). Any ideas?
Look at the middle of both the arrays. Let's say one value is smaller and the other is bigger.
Discard the lower half of the array with the smaller value. Discard the upper half of the array with the higher value. Now we are left with half of what we started with.
Rinse and repeat until only one element is left in each array. Return the smaller of those two.
If the two middle values are the same, then pick arbitrarily.
Credits: Bill Li's blog
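For reference, a runnable Python sketch of one standard way to reach O(log n). It binary-searches the split point directly rather than literally discarding halves as described above, and it returns merged[n-1], the question's convention:

def lower_median(a, b):
    # Sketch: a and b are sorted and of equal length n >= 1.
    # Binary-search how many elements i are taken from a so that the first n
    # elements of the merge consist of a[:i] and b[:n-i].
    n = len(a)
    lo, hi = 0, n
    while True:
        i = (lo + hi) // 2                          # elements taken from a
        j = n - i                                   # elements taken from b
        a_left  = a[i - 1] if i > 0 else float('-inf')
        a_right = a[i]     if i < n else float('inf')
        b_left  = b[j - 1] if j > 0 else float('-inf')
        b_right = b[j]     if j < n else float('inf')
        if a_left <= b_right and b_left <= a_right:
            return max(a_left, b_left)              # largest of the first n elements
        elif a_left > b_right:
            hi = i - 1                              # took too many from a
        else:
            lo = i + 1                              # took too few from a

print(lower_median([1, 2, 3, 4], [3, 4, 5, 6]))  # 3, matching merged[3] above
print(lower_median([1, 3, 5, 7], [2, 4, 6, 8]))  # 4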
Quite an interesting task. I'm not sure about O(log n), but an O((log n)^2) solution is obvious to me.
If you know the position of some element in the first array, then you can find how many elements in both arrays are smaller than that value (you already know how many smaller elements are in the first array, and you can count the smaller elements in the second array with a binary search, then add the two numbers). So if the number of smaller elements in both arrays is less than N, you should look into the upper half of the first array; otherwise, move to the lower half. That gives a binary search with an inner binary search, for an overall complexity of O((log n)^2).
Note: if you do not find the median in the first array, start the search over in the second array. This does not affect the complexity.
So, having
n = 4 and a = [1, 2, 3, 4] and b = [3, 4, 5, 6]
You know k, the position of the element you need in the merged array, in advance: it is equal to n.
The n-th element of the result could be in the first array or the second.
Let's first assume the element is in the first array. Then:
Do a binary search, taking the middle element of the range [l, r]; at the beginning l = 0, r = 3.
Taking the middle element, you know how many elements in the same array are smaller than it: middle - 1.
Knowing that middle - 1 elements of the first array are smaller, and that you need the n-th element overall, look at the [n - (middle - 1)]-th element of the second array. If it is greater than a[middle] and the previous element of the second array is smaller, then a[middle] is what you need; if it and the previous element are both greater, set l = middle; if it is smaller, set r = middle.
Then do the same for the second array in case you did not find the solution in the first.
In total: log(n) + log(n).
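A runnable sketch (mine, in Python) of this nested-binary-search idea, using bisect to count how many elements are smaller than or equal to a candidate; it finds the k-th smallest of the merge, here with k = n:

from bisect import bisect_left, bisect_right

def kth_of_two_sorted(a, b, k):
    # Sketch: k-th smallest (1-based) of the merge of two sorted lists, by
    # binary-searching candidates in one array and counting, via binary
    # searches in the other, how many elements fall below each candidate.
    def search(x, y):
        lo, hi = 0, len(x) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            less = bisect_left(x, x[mid]) + bisect_left(y, x[mid])
            less_or_equal = bisect_right(x, x[mid]) + bisect_right(y, x[mid])
            if less_or_equal < k:
                lo = mid + 1          # candidate ranks too low
            elif less > k - 1:
                hi = mid - 1          # candidate ranks too high
            else:
                return x[mid]
        return None

    result = search(a, b)             # try the first array...
    return result if result is not None else search(b, a)   # ...then the second

a, b = [1, 2, 3, 4], [3, 4, 5, 6]
print(kth_of_two_sorted(a, b, len(a)))  # 3, matching merged[3] in the question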

Complexity with Array.min

I have an array:
[0, 0, 0, 0, 0, 0, 0, 1, 2, 3]
I need to figure out the index of the minimal element which is not zero. How do I do that?
For ruby 1.8.7+:
>> [0,0,2,0,1,3].each_with_index.reject {|(e, i)| e == 0}
=> [[2, 2], [1, 4], [3, 5]]
>> [0,0,2,0,1,3].each_with_index.reject {|(e, i)| e == 0}.min
=> [1, 4]
>> [0,0,2,0,1,3].each_with_index.reject {|(e, i)| e == 0}.min[1]
=> 4
For ruby 1.8.6:
a.zip((0...a.size).to_a).reject {|(e, i)| e == 0}.min[1]
(solution by chuck)
a=[0, 0, 0, 0, 0, 0, 0, 1, 2, 3]
i=a.index a.reject{|x|x==0}.min
(i=7)
Simplest way:
Check each element of the array, keeping a variable that holds the minimum. Set it equal to the first number you come across (unless it is 0; then discard it and use the next number). Any time you come across a number smaller than your minimum, make that your new minimum. And, of course, discard any zero rather than taking it as the minimum.
More efficient:
It appears we have a sorted array. If we can use that to our advantage, we can use a better search mechanism, such as binary search. I will describe binary search as it is easy to understand.
Our array is in ascending order.
Check the middle element and take it as your minimum (unless it is 0). Split the array in half at this element. Since the array is ascending, check the midpoint of the left half (unless the element was 0, in which case check the right half). Continue until only one element is left when you split; that is your minimum.
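Since the example array is sorted ascending and non-negative, the zeros form a prefix, so the binary search reduces to finding the first element greater than zero. A quick sketch (in Python, just for illustration):

from bisect import bisect_right

a = [0, 0, 0, 0, 0, 0, 0, 1, 2, 3]
i = bisect_right(a, 0)  # index of the first element greater than 0
print(i, a[i])          # 7 1 -> index 7 holds the minimal non-zero element
                        # (assumes at least one non-zero element exists)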
I don't know Ruby, so I can't offer code, but the process that immediately springs to my mind is:
Create a copy of the array (if needed)
Remove all the 0 entries
Check the minimum value in the new array
