Rank of string solution - algorithm

I was going through a question that asks you to find the rank of a string among its permutations when sorted lexicographically.
The O(N^2) solution is pretty clear.
Some websites also have an O(n) solution. The part that is optimized is pre-populating a count array such that
count[i] contains the count of characters which are present in str and are smaller than i.
I understand that this reduces the complexity, but I can't get my head around how this array is calculated. This is the function that does it (taken from the link):
// Construct a count array where value at every index
// contains count of smaller characters in whole string
void populateAndIncreaseCount(int *count, char *str)
{
    int i;
    for (i = 0; str[i]; ++i)
        ++count[str[i]];
    for (i = 1; i < 256; ++i)
        count[i] += count[i - 1];
}
Can someone please provide an intuitive explanation of this function?

That solution is doing a Bucket Sort and then sorting the output.
A bucket sort is O(items + number_of_possible_distinct_inputs), which for a fixed alphabet can be advertised as O(n).
However, in practice UTF makes for a pretty large alphabet. I would therefore suggest a quicksort instead, because a quicksort that divides into the three buckets <, > and = is efficient for a large character set, but still takes advantage of a small one.
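For reference, here is a minimal sketch (my code, not the answerer's) of the three-way partition scheme being described, in the Dutch-national-flag style:

// Three-way quicksort sketch: partition into < pivot, == pivot, > pivot,
// then recurse only into the < and > buckets.
void quicksort3(char *a, int lo, int hi) {
    if (lo >= hi) return;
    char pivot = a[lo];
    int lt = lo, gt = hi, i = lo;
    while (i <= gt) {
        if (a[i] < pivot) {
            char t = a[i]; a[i] = a[lt]; a[lt] = t;
            i++; lt++;
        } else if (a[i] > pivot) {
            char t = a[i]; a[i] = a[gt]; a[gt] = t;
            gt--;
        } else {
            i++;
        }
    }
    quicksort3(a, lo, lt - 1);  // the == bucket is already in place
    quicksort3(a, gt + 1, hi);
}

With many equal characters (a small effective alphabet), the == bucket swallows most of the input in one pass, which is why this variant stays fast there.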

Understood after going through it again. I got confused by the unfamiliar C syntax. It's actually doing a pretty simple thing. (Here's the Java version:)
void populateAndIncreaseCount(int[] count, String str) {
    // count is initialized to zero for all indices
    for (int i = 0; i < str.length(); ++i) {
        count[str.charAt(i)]++;
    }
    for (int i = 1; i < 256; ++i)
        count[i] += count[i - 1];
}
After the first step, the indices whose characters are present in the string are non-zero. Then, each index in the count array becomes the sum of all the counts up to index - 1, since the array represents the lexicographically sorted characters. And after each character is processed, we update the count array as well:
// Removes a character ch from count[] array
// constructed by populateAndIncreaseCount()
void updatecount(int *count, char ch)
{
    int i;
    for (i = ch; i < MAX_CHAR; ++i)
        --count[i];
}
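To complete the picture, here is a sketch of how such a count array is typically used to compute the rank. This mirrors the linked approach but is my reconstruction: fact() is an assumed factorial helper, MAX_CHAR is 256, <string.h> is needed for strlen, and duplicate characters are not handled.

// rank = 1 + sum over positions i of:
//   (number of remaining characters smaller than str[i]) * (len-i-1)!
int findRank(char *str) {
    int len = strlen(str);
    int count[MAX_CHAR] = {0};
    populateAndIncreaseCount(count, str);
    int rank = 1;
    for (int i = 0; i < len - 1; ++i) {
        // count[str[i] - 1] = number of characters still available
        // that are strictly smaller than str[i]
        rank += count[str[i] - 1] * fact(len - i - 1);
        updatecount(count, str[i]);
    }
    return rank;
}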

Related

Counting sort - Efficiency

I was thinking about counting sort and how we implement it, and how the algorithm actually works. I am stuck on one part. The algorithm is really straightforward and easy to understand, but one part of it doesn't seem necessary. I thought people might be mistaken, but it seems like everyone uses the same method, so I must be mistaken somewhere. Can you please explain?
Here is code for counting sort from geeksforgeeks
// C program for counting sort
#include <stdio.h>
#include <string.h>
#define RANGE 255

// The main function that sorts the given string arr[] in
// alphabetical order
void countSort(char arr[])
{
    // The output character array that will have sorted arr
    char output[strlen(arr)];

    // Create a count array to store the count of individual
    // characters and initialize the count array to 0
    int count[RANGE + 1], i;
    memset(count, 0, sizeof(count));

    // Store the count of each character
    for (i = 0; arr[i]; ++i)
        ++count[arr[i]];

    // Change count[i] so that count[i] now contains the actual
    // position of this character in the output array
    for (i = 1; i <= RANGE; ++i)
        count[i] += count[i-1];

    // Build the output character array
    for (i = 0; arr[i]; ++i)
    {
        output[count[arr[i]]-1] = arr[i];
        --count[arr[i]];
    }

    // Copy the output array to arr, so that arr now
    // contains sorted characters
    for (i = 0; arr[i]; ++i)
        arr[i] = output[i];
}

// Driver program to test the above function
int main()
{
    char arr[] = "geeksforgeeks"; // "applepp";
    countSort(arr);
    printf("Sorted character array is %s\n", arr);
    return 0;
}
Cool, but what about this part:
// Build the output character array
for (i = 0; arr[i]; ++i)
{
    output[count[arr[i]]-1] = arr[i];
    --count[arr[i]];
}
Why do I need this? OK, I counted my numbers:
Let's say I had this array -> [1, 3, 6, 3, 2, 4]
INDEXES 0 1 2 3 4 5 6
I created this -> [0, 1, 1, 2, 1, 0, 1]
Then this part does this:
[0, 1+0, 1+1, 2+2, 1+4, 0+5, 1+5]
[0, 1, 2, 4, 5, 5, 6]
BUT WHY??
Can't I just use my array like the one before? Here is my idea and my code; please explain why it's wrong, or why the other way is more useful.
void countingSort (int *arr) {
    int countingArray[MAX_NUM] = {0};
    for (i = 0 ; i < ARRAY_SIZE ; i++)
        countingArray[arr[i]]++;
    int output_Index = 0;
    for (i = 0 ; i < MAX_NUM ; i++)
        while ( countingArray[i]-- )
            arr[output_Index++] = i;
}
For the simple case where you are sorting an array of integers, your code is simpler and better.
However, counting sort is a general sorting algorithm that can sort based on a sort key derived from the items to be sorted, which is used to compare them, rather than comparing the items themselves directly. In the case of an array of integers, the items and the sort keys can be one and the same; you just compare them directly.
It looks to me as though the geeksforgeeks code has been adapted from a more generic example that allows the use of sorting keys, something like this:
// Store the count of each item
for (i = 0; arr[i]; ++i)
    ++count[key(arr[i])];

// Change count[i] so that count[i] now contains the actual
// position of this item in the output array
for (i = 1; i <= RANGE; ++i)
    count[i] += count[i-1];

// Build the output array
for (i = 0; arr[i]; ++i)
{
    output[count[key(arr[i])]-1] = arr[i];
    --count[key(arr[i])];
}
Where key is a function that computes a sort key based on an item (for an integer type you could just return the integer itself). In this case MAX_NUM would have to be replaced with MAX_KEY.
This approach uses the extra output array because the final result is generated by copying the items from arr rather than simply from the information in count (which only contains the count of items with each key). However, an in-place counting sort is possible.
The algorithm also guarantees a stable sort (items with the same sort key have their relative order preserved by sorting) - this is meaningless when sorting integers.
However, since they have removed the ability to sort based on key, there's no reason for the extra complexity and your way is better.
It's also possible that they copied the code from a language like C++, where the int cast (which is invoked when an item is used to index an array) could be overloaded to return the sort key, but mistakenly converted it to C.
I think your version is a better approach. I suspect that the person who wrote this code sample had probably written similar code samples for other sorting algorithms — there are many sorting algorithms where you do need separate "scratch space" — and didn't put enough thought into this one.
Alternatively, (s)he may have felt that the algorithm is easier to explain if we separate "generating the result" from "moving the result into place". I don't agree, if so, but the detailed comments make it clear that (s)he had pedagogy in mind.
That said, there are a few minor issues with your version:
You forgot to declare i.
You should take the array-length as a parameter, rather than using a hardcoded ARRAY_SIZE. (In the code sample, this issue is avoided by using a string, so they could iterate until the terminating null byte.)
This may be subjective, but rather than while ( countingArray[i]-- ), I think it's clearer to write for (int j = 0; j < countingArray[i]; ++j).
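Putting those three fixes together, the asker's version might look like this (a sketch; MAX_NUM is assumed from the original code, with all values in [0, MAX_NUM)):

#include <stddef.h>

void countingSort(int *arr, size_t len) {
    int countingArray[MAX_NUM] = {0};
    for (size_t i = 0; i < len; i++)
        countingArray[arr[i]]++;
    size_t out = 0;
    for (int i = 0; i < MAX_NUM; i++)
        for (int j = 0; j < countingArray[i]; j++)  // emit each value count[i] times
            arr[out++] = i;
}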

Find the largest subset of it which forms a sequence

I came across this problem on an interview forum.
Given an int array which might contain duplicates, find the largest subset of it which forms a sequence.
E.g. {1,6,10,4,7,9,5},
then the answer is 4,5,6,7.
Sorting is an obvious solution. Can this be done in O(n) time?
My take on the problem is that this cannot be done in O(n) time, and the reason is that if we could do this in O(n) time, we could also sort in O(n) time (without knowing the upper bound),
since a random array can contain all its elements in sequence but in random order.
Does this sound like a plausible explanation? Your thoughts?
I believe it can be solved in O(n) if you assume you have enough memory to allocate an uninitialized array of a size equal to the largest value, and that allocation can be done in constant time. The trick is to use a lazy array, which gives you the ability to create a set of items in linear time with a membership test in constant time.
Phase 1: Go through each item and add it to the lazy array.
Phase 2: Go through each undeleted item, and delete all contiguous items.
In phase 2, you determine the range and remember it if it is the largest so far. Items can be deleted in constant time using a doubly-linked list.
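The membership test behind the lazy array is the classic sparse-set trick; here is a minimal sketch of just that test (my naming, mirroring the code below):

// v is "present" only if lazy[v] points at a valid slot of index[]
// that stores v back; random garbage in the uninitialized lazy[]
// cannot satisfy both conditions.
int is_member(int v, const int *lazy, const int *index, int index_size) {
    int l = lazy[v];
    return l >= 0 && l < index_size && index[l] == v;
}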
Here is some incredibly kludgy code that demonstrates the idea:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    static const int n = 8;
    int values[n] = {1,6,10,4,7,9,5,5};
    int index[n];
    int lists[n];
    int prev[n];
    int next_existing[n];
    int prev_existing[n];
    int index_size = 0;
    int n_lists = 0;

    // Find largest value
    int max_value = 0;
    for (int i=0; i!=n; ++i) {
        int v = values[i];
        if (v>max_value) max_value = v;
    }

    // Allocate a lazy array
    int *lazy = (int *)malloc((max_value+1)*sizeof(int));

    // Set items in the lazy array and build the lists of indices for
    // items with a particular value.
    for (int i=0; i!=n; ++i) {
        next_existing[i] = i+1;
        prev_existing[i] = i-1;
        int v = values[i];
        int l = lazy[v];
        if (l>=0 && l<index_size && index[l]==v) {
            // already there, add it to the list
            prev[n_lists] = lists[l];
            lists[l] = n_lists++;
        }
        else {
            // not there -- create a new list
            l = index_size;
            lazy[v] = l;
            index[l] = v;
            ++index_size;
            prev[n_lists] = -1;
            lists[l] = n_lists++;
        }
    }

    // Go through each contiguous range of values and delete them, determining
    // what the range is.
    int max_count = 0;
    int max_begin = -1;
    int max_end = -1;
    int i = 0;
    while (i<n) {
        // Start by searching backwards for a value that isn't in the lazy array
        int dir = -1;
        int v_mid = values[i];
        int v = v_mid;
        int begin = -1;
        for (;;) {
            int l = lazy[v];
            if (l<0 || l>=index_size || index[l]!=v) {
                // Value not in the lazy array
                if (dir==1) {
                    // Hit the end
                    if (v-begin>max_count) {
                        max_count = v-begin;
                        max_begin = begin;
                        max_end = v;
                    }
                    break;
                }
                // Hit the beginning
                begin = v+1;
                dir = 1;
                v = v_mid+1;
            }
            else {
                // Remove all the items with value v
                int k = lists[l];
                while (k>=0) {
                    if (k!=i) {
                        next_existing[prev_existing[k]] = next_existing[k];
                        prev_existing[next_existing[k]] = prev_existing[k];
                    }
                    k = prev[k];
                }
                v += dir;
            }
        }
        // Go to the next existing item
        i = next_existing[i];
    }

    // Print the largest range
    for (int i=max_begin; i!=max_end; ++i) {
        if (i!=max_begin) fprintf(stderr,",");
        fprintf(stderr,"%d",i);
    }
    fprintf(stderr,"\n");

    free(lazy);
}
I would say there are ways to do it. The algorithm is the one you already describe, but using an O(n) sorting algorithm. Since such algorithms exist for certain inputs (bucket sort, radix sort), this works (and it also goes hand in hand with your argument for why it should not work in general).
Vaughn Cato's suggested implementation works like this (it works like a bucket sort, with the lazy array acting as buckets-on-demand).
As shown by M. Ben-Or in "Lower bounds for algebraic computation trees", Proc. 15th ACM Sympos. Theory Comput., pp. 80-86, 1983, cited by J. Erickson in the PDF "Finding Longest Arithmetic Progressions", this problem cannot be solved in less than O(n log n) time (even if the input is already sorted into order) when using an algebraic decision tree model of computation.
Earlier, I posted the following example in a comment to illustrate that sorting the numbers does not provide an easy answer to the question: suppose the array is given already sorted in ascending order, for example (20 30 35 40 47 60 70 80 85 95 100). The longest sequence found in any subsequence of the input is 20,40,60,80,100 rather than 30,35,40 or 60,70,80.
Regarding whether an O(n) algebraic decision tree solution to this problem would provide an O(n) algebraic decision tree sorting method: As others have pointed out, a solution to this subsequence problem for a given multiset does not provide a solution to a sorting problem for that multiset. As an example, consider set {2,4,6,x,y,z}. The subsequence solver will give you the result (2,4,6) whenever x,y,z are large numbers not in arithmetic sequence, and it will tell you nothing about the order of x,y,z.
What about this? Populate a hash table so each value stores the start of the range seen so far for that number, except that the head element stores the end of the range. O(n) time, O(n) space. A tentative Python implementation (you could do it with one traversal keeping some state variables, but this way seems clearer):
def longest_subset(xs):
    table = {}
    for x in xs:
        start = table.get(x-1, x)
        end = table.get(x+1, x)
        if x+1 in table:
            table[end] = start
        if x-1 in table:
            table[start] = end
        table[x] = (start if x-1 in table else end)
    start, end = max(table.items(), key=lambda pair: pair[1]-pair[0])
    return list(range(start, end+1))

print(longest_subset([1, 6, 10, 4, 7, 9, 5]))
# [4, 5, 6, 7]
Here is an unoptimized O(n) implementation; maybe you will find it useful:

hash_tb = {}
A = [1, 6, 10, 4, 7, 9, 5]
for x in A:
    hash_tb[x] = x

max_sq = []
cur_seq = []
# scan one past max(A) so the final run is also compared
for i in range(0, max(A) + 2):
    if i in hash_tb:
        cur_seq.append(i)
    else:
        if len(cur_seq) > len(max_sq):
            max_sq = cur_seq
        cur_seq = []
print(max_sq)

How can we find a repeated number in an array in O(n) time and O(1) space complexity

How can we find a repeated number in an array in O(n) time and O(1) space complexity?
e.g.
array 2,1,4,3,3,10
output is 3
EDIT:
I tried the following approach.
I found that if a number repeats an odd number of times, we can find it by XORing all the elements. So I thought of flipping the parities: turning the oddly-repeating elements into evenly-repeating ones and the evenly-repeating element into an oddly-repeating one. But for that I need to build the array of distinct elements from the input array in O(n), and I couldn't find a way to do that.
Assuming that there is an upper bound on the values of the numbers in the array (which is the case with all built-in integer types in all programming languages I've ever used; for example, let's say they are 32-bit integers), there is a solution that uses constant space:
Create an array of N elements, where N is the upper bound for the integer values in the input array, and initialize all elements to 0 or false or some equivalent. I'll call this the lookup array.
Loop over the input array, and use each number to index into the lookup array. If the value you find is 1 or true (etc), the current number in the input array is a duplicate.
Otherwise, set the corresponding value in the lookup array to 1 or true to remember that we have seen this particular input number.
Technically, this is O(n) time and O(1) space, and it does not destroy the input array. Practically, you would need things to be going your way for such a program to actually run (e.g. it's out of the question when talking about 64-bit integers in the input).
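A minimal sketch of this lookup-array approach; the small bound N is mine for the demo (it would be 2^32 in the "real" version the answer describes):

#include <stdio.h>
#include <string.h>

#define N 16  // assumed upper bound on values

int find_duplicate(const int *a, int len) {
    char seen[N];
    memset(seen, 0, sizeof seen);
    for (int i = 0; i < len; i++) {
        if (seen[a[i]])
            return a[i];   // second time we've met this value
        seen[a[i]] = 1;
    }
    return -1;             // no duplicate
}

int main(void) {
    int a[] = {2, 1, 4, 3, 3, 10};
    printf("%d\n", find_duplicate(a, 6)); // prints 3
    return 0;
}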
Without knowing more about the possible values in the array, you can't.
With an O(1) space requirement, the fastest way is to sort the array, so it's going to be at least O(n*log(n)).
Use bit manipulation: traverse the list in one loop.
Check whether the mask bit is 1 by shifting the mask right by the value i.
If so, print the repeated value i.
If the bit is unset, set it.
*If you only want to show each repeated value once, add another integer show and set its bits the same way, as in the example below.
**This is in Java. I'm not sure we will reach it, but you might also want to add a range check using Integer.MAX_VALUE; note that a single int mask can only distinguish the values 0-31.
public static void repeated( int[] vals ) {
    int mask = 0;
    int show = 0;
    for( int i : vals ) {
        // get bit in mask
        if( (( mask >> i ) & 1) == 1 &&
            (( show >> i ) & 1) == 0 )
        {
            System.out.println( "\n\tfound: " + i );
            show = show | (1 << i);
        }
        // set mask if not found
        else
        {
            mask = mask | (1 << i);
            System.out.println( "new: " + i );
        }
        System.out.println( "mask: " + mask );
    }
}
This is impossible without knowing some restricting rules about the input array: either the memory complexity must have some dependency on the input size, or the time complexity must be higher.
The two answers above are in fact the best answers for getting close to what you asked; one trades off time and the other trades off memory, but you can't have O(n) time and O(1) space for an arbitrary, unknown input array.
I met this problem too, and my solution uses a hash map. The Python version is the following:
def findRepeatNumber(lists):
    hashMap = {}
    for i in range(len(lists)):
        if lists[i] in hashMap:
            return lists[i]
        else:
            hashMap[lists[i]] = i + 1
    return None
It is possible only if you have specific data, e.g. all numbers are within a small range. Then you can store the repeat info in the source array without affecting the whole scanning and analyzing process.
Simplified example: you know that all the numbers are smaller than 100; then you can mark the repeat count for a number using extra hundreds, like putting 900 instead of 9 when 9 occurs twice.
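One concrete reading of this idea (my sketch, assuming every value lies in [0, n) so a value can double as an index): stack counts on top of the stored numbers by adding multiples of n.

#include <stdio.h>

int first_repeat(int a[], int n) {
    for (int i = 0; i < n; i++) {
        int v = a[i] % n;   // recover the original value under any marks
        if (a[v] >= n)      // v was already marked once: it's a repeat
            return v;
        a[v] += n;          // mark v as seen
    }
    return -1;              // no repeat found
}

int main(void) {
    int a[] = {2, 1, 4, 3, 3, 5};
    printf("%d\n", first_repeat(a, 6)); // prints 3
    return 0;
}

Note that this runs in O(n) time and O(1) extra space, but it scribbles marks into the input array, so the input is not preserved.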
It is easy when NumMax-NumMin is small:
http://www.geeksforgeeks.org/find-the-maximum-repeating-number-in-ok-time/
public static string RepeatedNumber()
{
    int[] input = {66, 23, 34, 0, 5, 4};
    int[] indexer = {0, 0, 0, 0, 0, 0};
    var found = 0;
    for (int i = 0; i < input.Length; i++)
    {
        var toFind = input[i];
        for (int j = 0; j < input.Length; j++)
        {
            if (input[j] == toFind && (indexer[j] == 1))
            {
                found = input[j];
            }
            else if (input[j] == toFind)
            {
                indexer[j] = 1;
            }
        }
    }
    return $"most repeated item in the array is {found}";
}
You can do this
#include <iostream>
using namespace std;

int main()
{
    int array[5], rep = 0;
    for (int i = 0; i < 5; i++)
    {
        cout << "enter elements" << endl;
        cin >> array[i];
    }
    // note: this only catches repeats that sit next to each other
    for (int i = 0; i < 4; i++)
    {
        if (array[i] == array[i + 1])
        {
            rep = array[i];
        }
    }
    cout << " repeat value is " << rep;
    return 0;
}

Calculating the number of set bits using the K&R method with infinite memory

I got the answer to the question of counting the number of set bits from here:
How to count the number of set bits in a 32-bit integer?
long count_bits(long n) {
    unsigned int c; // c accumulates the total bits set in n
    for (c = 0; n; c++)
        n &= n - 1; // clear the least significant bit set
    return c;
}
It is simple to understand as well. I found the best answer to be Brian Kernighan's method, posted by hoyhoy, and he adds the following at the end.
Note that this is a question used during interviews. The interviewer will add the caveat that you have "infinite memory". In that case, you basically create an array of size 2^32 and fill in the bit counts for the numbers at each location. Then, this function becomes O(1).
Can somebody explain how to do this, given infinite memory?
The fastest way I have ever seen to populate such an array is ...
array[0] = 0;
for (i = 1; i < NELEMENTS; i++) {
    array[i] = array[i >> 1] + (i & 1);
}
Then to count the number of set bits in a given number (provided the given number is less than NELEMENTS) ...
numSetBits = array[givenNumber];
If your memory is not infinite, I often see NELEMENTS set to 256 (for one byte's worth), and then you add up the number of set bits in each byte of your integer.
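A sketch of how that byte-at-a-time variant could look, combining the 256-entry table with the recurrence above:

#include <stdio.h>

static unsigned char table[256];  // zero-initialized, so table[0] == 0

void init_table(void) {
    for (int i = 1; i < 256; i++)
        table[i] = table[i >> 1] + (i & 1);  // same recurrence as above
}

int popcount32(unsigned int v) {
    // look up each of the four bytes and sum the results
    return table[v & 0xff] + table[(v >> 8) & 0xff]
         + table[(v >> 16) & 0xff] + table[(v >> 24) & 0xff];
}

int main(void) {
    init_table();
    printf("%d\n", popcount32(0xF0F0u)); // prints 8
    return 0;
}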
int counts[MAX_LONG];

void init() {
    for (int i = 0; i < MAX_LONG; i++)
    {
        counts[i] = count_bits(i); // as given
    }
}

int count_bits_o1(long number)
{
    return counts[number];
}
You could probably pre-populate the array more wisely, i.e. fill it with zeros, then add one at every second index, then at every fourth index, then at every eighth index, and so on, which might be a bit faster, although I doubt it...
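One way to read that stride idea (my interpretation, not the answerer's code): each bit position contributes 1 to every index that has that bit set.

#include <string.h>

void init_strided(unsigned char *array, int nelements) {
    memset(array, 0, nelements);
    // bit = 1: odd indices; bit = 2: indices 2,3 mod 4; bit = 4: 4..7 mod 8; ...
    for (int bit = 1; bit < nelements; bit <<= 1)
        for (int i = bit; i < nelements; i += bit << 1)
            for (int j = i; j < i + bit && j < nelements; j++)
                array[j]++;
}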
Also, you might account for unsigned values.

Remove duplicate items with minimal auxiliary memory?

What is the most efficient way to remove duplicate items from an array under the constraint that auxiliary memory usage must be kept to a minimum, preferably small enough to not even require any heap allocations? Sorting seems like the obvious choice, but this is clearly not asymptotically efficient. Is there a better algorithm that can be done in place or close to in place? If sorting is the best choice, what kind of sort would be best for something like this?
I'll answer my own question since, after posting, I came up with a really clever algorithm to do this. It uses hashing, building something like a hash set in place. It's guaranteed to be O(1) in auxiliary space (the recursion is a tail call), and is typically O(N) in time complexity. The algorithm is as follows:
Take the first element of the array, this will be the sentinel.
Reorder the rest of the array, as much as possible, such that each element is in the position corresponding to its hash. As this step is completed, duplicates will be discovered. Set them equal to the sentinel.
Move all elements for which the index is equal to the hash to the beginning of the array.
Move all elements that are equal to sentinel, except the first element of the array, to the end of the array.
What's left between the properly hashed elements and the duplicate elements will be the elements that couldn't be placed in the index corresponding to their hash because of a collision. Recurse to deal with these elements.
This can be shown to be O(N) provided there is no pathological scenario in the hashing:
Even if there are no duplicates, approximately 2/3 of the elements will be eliminated at each recursion. Each level of recursion is O(n), where n is the number of elements left. The only problem is that, in practice, it's slower than a quicksort when there are few duplicates, i.e. lots of collisions. However, when there are huge amounts of duplicates, it's amazingly fast.
Edit: In current implementations of D, hash_t is 32 bits. Everything about this algorithm assumes that there will be very few, if any, hash collisions in full 32-bit space. Collisions may, however, occur frequently in the modulus space. However, this assumption will in all likelihood be true for any reasonably sized data set. If the key is less than or equal to 32 bits, it can be its own hash, meaning that a collision in full 32-bit space is impossible. If it is larger, you simply can't fit enough of them into 32-bit memory address space for it to be a problem. I assume hash_t will be increased to 64 bits in 64-bit implementations of D, where datasets can be larger. Furthermore, if this ever did prove to be a problem, one could change the hash function at each level of recursion.
Here's an implementation in the D programming language:
void uniqueInPlace(T)(ref T[] dataIn) {
    uniqueInPlaceImpl(dataIn, 0);
}

void uniqueInPlaceImpl(T)(ref T[] dataIn, size_t start) {
    if(dataIn.length - start < 2)
        return;

    invariant T sentinel = dataIn[start];
    T[] data = dataIn[start + 1..$];

    static hash_t getHash(T elem) {
        static if(is(T == uint) || is(T == int)) {
            return cast(hash_t) elem;
        } else static if(__traits(compiles, elem.toHash)) {
            return elem.toHash;
        } else {
            static auto ti = typeid(typeof(elem));
            return ti.getHash(&elem);
        }
    }

    for(size_t index = 0; index < data.length;) {
        if(data[index] == sentinel) {
            index++;
            continue;
        }

        auto hash = getHash(data[index]) % data.length;
        if(index == hash) {
            index++;
            continue;
        }

        if(data[index] == data[hash]) {
            data[index] = sentinel;
            index++;
            continue;
        }

        if(data[hash] == sentinel) {
            swap(data[hash], data[index]);
            index++;
            continue;
        }

        auto hashHash = getHash(data[hash]) % data.length;
        if(hashHash != hash) {
            swap(data[index], data[hash]);
            if(hash < index)
                index++;
        } else {
            index++;
        }
    }

    size_t swapPos = 0;
    foreach(i; 0..data.length) {
        if(data[i] != sentinel && i == getHash(data[i]) % data.length) {
            swap(data[i], data[swapPos++]);
        }
    }

    size_t sentinelPos = data.length;
    for(size_t i = swapPos; i < sentinelPos;) {
        if(data[i] == sentinel) {
            swap(data[i], data[--sentinelPos]);
        } else {
            i++;
        }
    }

    dataIn = dataIn[0..sentinelPos + start + 1];
    uniqueInPlaceImpl(dataIn, start + swapPos + 1);
}
Keeping auxiliary memory usage to a minimum, your best bet would be to do an efficient sort to get the items in order, then do a single pass of the array with a FROM and a TO index.
You advance the FROM index every time through the loop. You only copy the element from FROM to TO (and increment TO) when the key is different from the last.
With quicksort, that will average O(n log n), plus O(n) for the final pass.
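A sketch of that sort-then-compact pass (the FROM/TO naming follows the answer; qsort stands in for the efficient sort):

#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

// Returns the new length; duplicates are squeezed out in place.
int unique_in_place(int *a, int n) {
    if (n == 0) return 0;
    qsort(a, n, sizeof *a, cmp_int);
    int to = 1;
    for (int from = 1; from < n; from++)
        if (a[from] != a[to - 1])   // copy only when the key changes
            a[to++] = a[from];
    return to;
}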
If you sort the array, you will still need another pass to remove duplicates, so the complexity is O(N*N) in the worst case (assuming quicksort), or O(N*sqrt(N)) using Shellsort.
You can achieve O(N*N) by simply scanning the array for each element, removing duplicates as you go.
Here is an example in Lua:
function removedups (t)
    local result = {}
    local count = 0
    local found
    for i,v in ipairs(t) do
        found = false
        if count > 0 then
            for j = 1,count do
                if v == result[j] then found = true; break end
            end
        end
        if not found then
            count = count + 1
            result[count] = v
        end
    end
    return result, count
end
I don't see any way to do this without something like a bubble sort. When you find a dupe, you need to reduce the length of the array, and quicksort is not designed for the size of the array to change.
This algorithm is always O(n^2), but it also uses almost no extra memory, stack or heap.
// returns the new size
int bubblesqueeze(int* a, int size) {
    for (int j = 0; j < size - 1; ++j) {
        for (int i = j + 1; i < size; ++i) {
            // when a dupe is found, move the end value to index i
            // and shrink the size of the array
            while (i < size && a[i] == a[j]) {
                a[i] = a[--size];
            }
            if (i < size && a[i] < a[j]) {
                int tmp = a[j];
                a[j] = a[i];
                a[i] = tmp;
            }
        }
    }
    return size;
}
If you have two different variables for traversing a dataset instead of just one, then you can limit the output by dismissing all duplicates that are already in the dataset.
Obviously this example in C is not an efficient sorting algorithm, but it is just an example of one way to look at the problem.
You could also blindly sort the data first and then relocate the data to remove dups, but I'm not sure that would be faster.
#include <stdio.h>
#include <stdlib.h>

#define ARRAY_LENGTH 15

int stop = 1;
int scan_sort[ARRAY_LENGTH] = {5,2,3,5,1,2,5,4,3,5,4,8,6,4,1};

void step_relocate(char tmp, char s, int *dataset)
{
    for(; tmp < s; s--)
        dataset[s] = dataset[s-1];
}

int exists(int var, int *dataset)
{
    int tmp = 0;
    for(; tmp < stop; tmp++)
    {
        if( dataset[tmp] == var )
            return 1;  /* value exists */
        if( dataset[tmp] > var )
            tmp = stop;/* value not in array */
    }
    return 0;          /* value not in array */
}

int main(void)
{
    int tmp1 = 0;
    int tmp2 = 0;
    int index = 1;
    while(index < ARRAY_LENGTH)
    {
        if(exists(scan_sort[index], scan_sort))
            ;/* dismiss all values already in the final dataset */
        else if(scan_sort[stop-1] < scan_sort[index])
        {
            scan_sort[stop] = scan_sort[index];/* insert the value as the highest one */
            stop++;/* one more value added to the final dataset */
        }
        else
        {
            for(tmp1 = 0; tmp1 < stop; tmp1++)/* find where the data shall be inserted */
            {
                if(scan_sort[index] < scan_sort[tmp1])
                    break;
            }
            tmp2 = scan_sort[index]; /* store in case this value is the next after stop */
            step_relocate(tmp1, stop, scan_sort);/* relocate data already in the dataset */
            scan_sort[tmp1] = tmp2;/* insert the new value */
            stop++;/* one more value added to the final dataset */
        }
        index++;
    }
    printf("Result: ");
    for(tmp1 = 0; tmp1 < stop; tmp1++)
        printf("%d ", scan_sort[tmp1]);
    printf("\n");
    system("pause");
    return 0;
}
I liked the problem, so I wrote a simple C test program for it, as you can see above. Leave a comment if I should elaborate or if you see any faults.
