How do these pseudocodes for bubble sort work? - pseudocode

I got this pseudocode from Wikipedia:
procedure bubbleSort( A : list of sortable items )
    n = length(A)
    repeat
        swapped = false
        for i = 1 to n-1 inclusive do
            /* if this pair is out of order */
            if A[i-1] > A[i] then
                /* swap them and remember something changed */
                swap( A[i-1], A[i] )
                swapped = true
            end if
        end for
    until not swapped
end procedure
And this one is from a book (Principles of Computer Science):
BubbleSort( list )
    length <-- length of list
    do {
        swapped_pair <-- false
        index <-- 1
        while index <= length - 1 {
            if list[index] > list[index + 1] {
                swap( list[index], list[index + 1] )
                swapped_pair <-- true
            }
            index <-- index + 1
        }
    } while( swapped_pair = true )
end
I don't know which pseudocode is better.
The parts I don't understand are the swapped_pair <-- false part and the last lines.
In line 4 it says swapped = false or swapped_pair <-- false.
Why is it set to false at the start? What would happen if it weren't set to false?
And about the last lines: on Wikipedia it's written:
end if
end for
until not swapped
end procedure
And in the pseudocode from the book it's written:
while( swapped_pair = true )
What do these last lines mean?

The swapped variable keeps track of whether any swaps were made in the last pass through the array.
If a swap was made, the array is still not sorted and we need to continue.
If no swaps were made, then the array is already sorted and we can stop there. Otherwise we would keep doing redundant iterations.
This is one of the optimizations that we can do to make bubble sort more efficient.
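To make this concrete, here is a minimal sketch in Python of bubble sort with that early-exit flag (an illustration, not taken from either source):
def bubble_sort(items):
    n = len(items)
    swapped = True
    while swapped:
        swapped = False                  # assume sorted until a swap proves otherwise
        for i in range(1, n):
            if items[i - 1] > items[i]:  # this pair is out of order
                items[i - 1], items[i] = items[i], items[i - 1]
                swapped = True           # something changed, so another pass is needed
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
If swapped were not reset to false at the start of each pass, the loop condition would never become false and the procedure would keep looping even after the list is sorted.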
If you are interested in more optimizations you can look here:
http://www.c-programming-simple-steps.com/bubble-sort.html
However, even optimized, bubble sort is too inefficient to be used in practice. It is an interesting case to look at while learning, but if you need a simple sorting algorithm, use insertion sort instead.

Related

How can I further code my merged_sort method in ruby?

So this one is really tough since it uses recursion, and I can't get any further than this. I don't know what to do next.
For a description: merge_sort divides the array until it has been divided into single elements, and then I don't know what to do next. It's really hard; I have been thinking this through for 8 hours.
def merge_sort ( array )
  return if array.length < 2
  a = merge_sort(array.slice!(0..array.length/2))
  b = merge_sort(array)
end
def merge ( a , b )
  merged = []
  j_a = 0 # pointer to the first list
  k_b = 0 # pointer to the second list
  while j_a < a.length || k_b < b.length
    if a[j_a] > b[k_b]
      merged << b[k_b]
      k_b += 1
    else
      merged << a[j_a]
      j_a += 1
    end
    if j_a == a.length # pointer has reached the end of first list? append the whole of 2nd `list`
      merged << b[k_b..-1]
    else
      merged << a[j_a..-1] # else append the first list to merged.
    end
  end
  merged
end
Merge sort has two phases, (1) divide, and (2) conquer/merge.
The divide phase splits an array into left and right halves.
The merge phase merges the results.
Your merge_sort() function needs to be called recursively on the left half and the right half, each returning a sorted subarray.
Then the merge function needs to be called to merge the two subarrays.
Basically, when you want to sort an array, the recursive function divides the array in half during recursion, then merges the sorted halves as the recursion unwinds, something like this:
# merge sort array charles
def merge_sort ( charles )
  return charles if charles.length < 2
  k = charles.length
  left = merge_sort(charles[0...k/2])   # sort the left half
  right = merge_sort(charles[k/2..-1])  # sort the right half
  # the merge step that was missing
  return merge(left, right)
end
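The merge step itself could look like this; a minimal sketch in Python rather than Ruby, just to show the logic of combining two already-sorted lists:
def merge(a, b):
    merged = []
    i = j = 0
    # repeatedly take the smaller front element until one list runs out
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i])
            i += 1
        else:
            merged.append(b[j])
            j += 1
    # one list is exhausted; append whatever remains of the other
    merged.extend(a[i:])
    merged.extend(b[j:])
    return merged

print(merge([2, 5, 9], [1, 3, 10]))  # [1, 2, 3, 5, 9, 10]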

Quick Sort Algo

My algorithm is not working as intended. When I use a data set whose starting value is greater than the last element, the method sorts the numbers in descending order rather than ascending. I am not exactly sure how changing the values at input[0] and input[input.length - 1] can flip the output from ascending to reverse order. I would appreciate any insight on how to fix this. Thanks!
def quickSort(input)
  divide = lambda do |first, last|
    if first >= last
      return
    end
    mid = first
    i = 0
    while i < last do
      if input[i] < input[last]
        input[i], input[mid] = input[mid], input[i]
        mid += 1
      end
      i += 1
    end
    input[mid], input[last] = input[last], input[mid]
    divide.call(first, mid - 1)
    divide.call(mid + 1, last)
  end
  divide.call(0, input.length - 1)
  return input
end
quickSort([24, 6, 8, 2, 35]) # causes a descending sort
quickSort([3,9,1,4,7])       # works as intended
I don't think that is quicksort (at least not the way I learned it), and if you try adding more values to the first array you are sorting, it will crash the program.
Take a look at the following implementation (my Ruby is a bit rusty, so bear with me):
def quickSort(input)
  return input if input.length <= 1
  # pick a random pivot and remove that single element from the array
  pivot = input.delete_at(rand(input.length))
  lesser = []
  greater = []
  input.each do |n|
    lesser.push(n) if n < pivot
    greater.push(n) if n >= pivot
  end
  sorted = []
  sorted.concat(quickSort(lesser))
  sorted.push(pivot)
  sorted.concat(quickSort(greater))
  return sorted
end
print quickSort([24, 6, 8, 2, 35, 12])
puts ""
print quickSort([3, 9, 1, 4, 7, 8, 10, 15, 2])
puts ""
Usually when doing quicksort you will pick a random pivot in the array and split the array into parts lesser and greater than the pivot. Then you recursively call quicksort on the lesser and greater arrays before rejoining them into a sorted array. Hope that helps!
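The same pick-a-pivot, partition, and recurse idea as a short Python sketch (an illustration of the approach described above, not the poster's in-place version):
import random

def quick_sort(items):
    # lists of length 0 or 1 are already sorted
    if len(items) <= 1:
        return items
    pivot = random.choice(items)
    lesser  = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]  # keeps duplicate pivots intact
    greater = [x for x in items if x > pivot]
    return quick_sort(lesser) + equal + quick_sort(greater)

print(quick_sort([24, 6, 8, 2, 35]))  # [2, 6, 8, 24, 35]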

Implementation of short bubble and bubble sort

Bubble sort
In the above URL it is clearly written that short bubble is a modification of bubble sort to reduce the number of passes.
So in my implementation of both algorithms I have added a counter which counts the number of passes, and surprisingly both have the same number of passes.
Here is my code:
def bubbleshort(mylist):
    flag = True
    passnum = len(mylist) - 1
    counter = 0
    while flag and passnum > 0:
        flag = False
        for element in range(passnum):
            if mylist[element] > mylist[element + 1]:
                flag = True
                temp = mylist[element]
                mylist[element] = mylist[element + 1]
                mylist[element + 1] = temp
                counter += 1
        passnum -= 1
    return mylist, counter
def bubble(yourlist):
    count = 0
    for i in range(len(yourlist) - 1, 0, -1):
        for swap in range(i):
            if yourlist[swap] > yourlist[swap + 1]:
                temp = yourlist[swap]
                yourlist[swap] = yourlist[swap + 1]
                yourlist[swap + 1] = temp
                count += 1
    return yourlist, count
mylist = [20,30,40,90,50,60,70,80,100,110]
mylistx = [20,30,40,90,50,60,70,80,100,110]
sortedList, counter= bubbleshort(mylist)
sortList, count= bubble(mylistx)
print(sortedList,counter)
print(sortList,count)
Also, if I pass the same list to both functions, the bubble function produces a zero count but still gives a sorted list.
So can anybody tell me what exactly the purpose of the modification is when the number of passes is the same? There may be a chance that my implementation of the counter is wrong and that is why I am getting wrong answers.
It really depends on the input list whether the two functions go through the same number of passes.
For example, an almost sorted list like [9,1,2,3,4,5,6,7,8] takes only two passes for the short bubble function while it always takes 8 (n-1) passes for the regular bubble function.
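Note that in the code above, counter += 1 sits next to the swap inside the if block, so it is really counting swaps rather than passes. A small self-contained Python helper (hypothetical, not the poster's code) that counts passes explicitly shows the difference:
def short_bubble_passes(data):
    data = list(data)            # work on a copy
    passes = 0
    passnum = len(data) - 1
    swapped = True
    while swapped and passnum > 0:
        swapped = False
        passes += 1              # one pass per trip through the inner loop
        for i in range(passnum):
            if data[i] > data[i + 1]:
                data[i], data[i + 1] = data[i + 1], data[i]
                swapped = True
        passnum -= 1
    return passes

print(short_bubble_passes([9, 1, 2, 3, 4, 5, 6, 7, 8]))  # 2
The plain bubble function always makes len(list) - 1 = 8 passes on this input.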

Bubble sort pseudo code: what does n-1 mean?

I have a question about a specific line in the bubble sort pseudo code.
This pseudocode is taken from wikipedia:
procedure bubbleSort( A : list of sortable items )
    n = length(A)
    repeat
        swapped = false
        for i = 1 to n-1 inclusive do //THIS IS THE LINE I DON'T UNDERSTAND
            /* if this pair is out of order */
            if A[i-1] > A[i] then
                /* swap them and remember something changed */
                swap( A[i-1], A[i] )
                swapped = true
            end if
        end for
    until not swapped
end procedure
I do not understand the for loop's condition (1 to n-1). I clearly have to run through all elements from the second element at index 1 to the last element for the algorithm to work.
But when I read the term n-1 I see it as the last element minus 1, which will skip the last element. So I guess my question is, what does n-1 really mean in this context?
If n is the count of elements, the highest index is n-1.
This line iterates from the index 1 to the highest index n-1.
The first element has an index of 0. This code does not start there because of what it does inside the loop. Pay attention to the i-1 part.
To give you an example of what that pseudocode does:
`A = {'C', 'E', 'B', 'D', 'A'}`
`n` = `5`
inner_loop for i => 1, 2, 3, 4
i = 1
    if(A[0] > A[1]) => false
i = 2
    if(A[1] > A[2]) => true
        swap(A[1], A[2]) => A = {'C', 'B', 'E', 'D', 'A'}
        swapped = true
i = 3
    if(A[2] > A[3]) => false
i = 4
    if(A[3] > A[4]) => true
        swap(A[3], A[4]) => A = {'C', 'B', 'E', 'A', 'D'}
        swapped = true
In a sense this code does not run through the elements themselves but rather through the comparisons of adjacent elements.
n-1 does not mean the second-to-last element. It means the last element.
Here's why: Usually in programming, lists are zero-indexed, meaning the numbering starts at zero and goes to n-1 where n is the length of the list. The loop starts at i = 1 which is actually the second element (since later you have to compare A[i] to A[i-1]—that's the first element).
Since most programming languages start with index 0, you'll only want to compare from array index 0 to array index n-1 for an array of size n. If you continue to n, you'll be comparing outside of the array in the line:
if A[i-1] > A[i]
Hope this helps.
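A small Python illustration of the same point, using zero-based indexing:
A = ['C', 'E', 'B', 'D', 'A']
n = len(A)             # n = 5, so the valid indices are 0 .. 4, i.e. 0 .. n-1
for i in range(1, n):  # i takes the values 1, 2, 3, 4, matching "for i = 1 to n-1 inclusive"
    print(i - 1, i)    # each iteration looks at the adjacent pair A[i-1], A[i]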
That is written in pseudo-code, so we don't know for sure how that "language" implements array indexing, but it seems that it is 0-indexed. Which means that if length(A) = n = 5 the elements are numbered from 0 through 4 (i.e. A[0] is how you access the first element A[4] is how you access the last one).
The comparisons only run up to n-1 because the last element will automatically be in its sorted position by the end of each pass, i.e. the nth position is settled by the final iteration in bubble sort.

Programming Interview Question / how to find if any two integers in an array sum to zero?

Not a homework question, but a possible interview question...
Given an array of integers, write an algorithm that will check if the sum of any two is zero.
What is the Big O of this solution?
Looking for non-brute-force methods.
Use a lookup table: scan through the array, inserting all positive values into the table. If you encounter a negative value of the same magnitude (which you can easily look up in the table), the sum of the two will be zero. The lookup table can be a hashtable to conserve memory.
This solution should be O(N).
Pseudo code:
var table = new HashSet<int>();
var array = // your int array
foreach(int n in array)
{
    // check for the value of opposite sign first, so a single 0 is not mistaken for a pair
    if ( table.Contains(n * -1) )
        return true; // You found it.
    if ( !table.Contains(n) )
        table.Add(n);
}
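The same idea as a short runnable Python sketch (an illustration), using a set and checking before inserting so that a single 0 is not reported as a pair:
def has_zero_sum_pair(values):
    seen = set()
    for v in values:
        if -v in seen:   # v pairs with a previously seen -v (or with an earlier 0)
            return True
        seen.add(v)
    return False

print(has_zero_sum_pair([3, -5, 1, 5]))  # True  (5 and -5)
print(has_zero_sum_pair([0, 4, 7]))      # False (only one zero)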
The hashtable solution others have mentioned is usually O(n), but it can also degenerate to O(n^2) in theory.
Here's a Theta(n log n) solution that never degenerates:
Sort the array (optimal quicksort, heap sort, merge sort are all Theta(n log n))
for i = 1, array.len - 1
    binary search for -array[i] in i+1, array.len
If your binary search ever returns true, then you can stop the algorithm and you have a solution.
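A Python sketch of this sort-plus-binary-search approach, using the standard bisect module (an illustration, not the answer's exact code):
import bisect

def has_zero_sum_pair_sorted(values):
    a = sorted(values)                        # Theta(n log n)
    for i, x in enumerate(a):
        j = bisect.bisect_left(a, -x, i + 1)  # binary search for -x to the right of position i
        if j < len(a) and a[j] == -x:
            return True
    return False

print(has_zero_sum_pair_sorted([2, 8, -1, 1, 6]))  # True (-1 and 1)
print(has_zero_sum_pair_sorted([2, 8, 6]))         # False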
An O(n log n) solution (i.e., the sort) would be to sort all the data values then run a pointer from lowest to highest at the same time you run a pointer from highest to lowest:
def findmatch(array n):
    lo = first_index_of(n)
    hi = last_index_of(n)
    while true:
        if lo >= hi:                   # Catch where pointers have met.
            return false
        if n[lo] = -n[hi]:             # Catch the match.
            return true
        if sign(n[lo]) = sign(n[hi]):  # Catch where pointers are now same sign.
            return false
        if -n[lo] > n[hi]:             # Move relevant pointer.
            lo = lo + 1
        else:
            hi = hi - 1
An O(n) time complexity solution is to maintain an array of all values met:
def findmatch(array n):
    maxval = maximum_value_in(n)    # This is O(n).
    array b = new array(0..maxval)  # This is O(1).
    zero_all(b)                     # This is O(n).
    for i in index(n):              # This is O(n).
        if n[i] = 0:
            if b[0] = 1:
                return true
            b[0] = 1
            nextfor
        if n[i] < 0:
            if -n[i] <= maxval:
                if b[-n[i]] = 1:
                    return true
                b[-n[i]] = -1
            nextfor
        if b[n[i]] = -1:
            return true
        b[n[i]] = 1
So, if at any point we find -12, we set b[12] to -1. Then later, if we find 12, we know we have a pair. Same for finding the positive first except we set the sign to 1. If we find two -12's in a row, that still sets b[12] to -1, waiting for a 12 to offset it.
The only special cases in this code are:
0 is treated specially since we need to detect it despite its somewhat strange properties in this algorithm (I treat it specially so as to not complicate the positive and negative cases).
low negative values whose magnitude is higher than the highest positive value can be safely ignored since no match is possible.
As with most tricky "minimise-time-complexity" algorithms, this one has a trade-off in that it may have a higher space complexity (such as when there's only one element in the array that happens to be positive two billion).
In that case, you would probably revert to the sorting O(n log n) solution but, if you know the limits up front (say if you're restricting the integers to the range [-100,100]), this can be a powerful optimisation.
In retrospect, perhaps a cleaner-looking solution may have been:
def findmatch(array num):
    # Array empty means no match possible.
    if num.size = 0:
        return false
    # Find biggest value, no match possible if empty.
    max_positive = num[0]
    for i = 1 to num.size - 1:
        if num[i] > max_positive:
            max_positive = num[i]
    if max_positive < 0:
        return false
    # Create and init array of positives.
    array found = new array[max_positive+1]
    for i = 1 to found.size - 1:
        found[i] = false
    zero_found = false
    # Check every value.
    for i = 0 to num.size - 1:
        # More than one zero means match is found.
        if num[i] = 0:
            if zero_found:
                return true
            zero_found = true
        # Otherwise store fact that you found positive.
        if num[i] > 0:
            found[num[i]] = true
    # Check every value again.
    for i = 0 to num.size - 1:
        # If negative and within positive range and positive was found, it's a match.
        if num[i] < 0 and -num[i] <= max_positive:
            if found[-num[i]]:
                return true
    # No matches found, return false.
    return false
This makes one full pass and a partial pass (or a full one when there is no match), whereas the original made only the partial pass, but I think it's easier to read and only needs one bit per number (positive found or not found) rather than two (none, positive, or negative found). In any case, it's still very much O(n) time complexity.
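The same "cleaner" pseudocode translated into Python (a sketch under the same assumption that the positive values are small enough to index an array):
def findmatch(num):
    # An empty array means no match is possible.
    if not num:
        return False
    # Find the biggest value; no match is possible if everything is negative.
    max_positive = max(num)
    if max_positive < 0:
        return False
    # found[v] records whether the positive value v has been seen.
    found = [False] * (max_positive + 1)
    zero_found = False
    for x in num:
        if x == 0:
            if zero_found:        # a second zero completes a pair
                return True
            zero_found = True
        elif x > 0:
            found[x] = True
    for x in num:
        # a negative within range whose positive counterpart was seen is a match
        if x < 0 and -x <= max_positive and found[-x]:
            return True
    return False

print(findmatch([3, 7, -3, 9]))  # True
print(findmatch([0, 5, -2]))     # False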
I think IVlad's answer is probably what you're after, but here's a slightly more off the wall approach.
If the integers are likely to be small and memory is not a constraint, then you can use a BitArray collection. This is a .NET class in System.Collections, though Microsoft's C++ has a bitset equivalent.
The BitArray class allocates a lump of memory, and fills it with zeroes. You can then 'get' and 'set' bits at a designated index, so you could call myBitArray.Set(18, true), which would set the bit at index 18 in the memory block (which then reads something like 00000000, 00000000, 00100000). The operation to set a bit is an O(1) operation.
So, assuming a 32 bit integer scope, and 1Gb of spare memory, you could do the following approach:
BitArray myPositives = new BitArray(int.MaxValue);
BitArray myNegatives = new BitArray(int.MaxValue);
bool pairIsFound = false;
foreach (int testValue in arrayOfIntegers)
{
    if (testValue < 0)
    {
        // -ve number - have we seen the +ve yet?
        if (myPositives.Get(-testValue))
        {
            pairIsFound = true;
            break;
        }
        // Not seen the +ve, so log that we've seen the -ve.
        myNegatives.Set(-testValue, true);
    }
    else
    {
        // +ve number (inc. zero). Have we seen the -ve yet?
        if (myNegatives.Get(testValue))
        {
            pairIsFound = true;
            break;
        }
        // Not seen the -ve, so log that we've seen the +ve.
        myPositives.Set(testValue, true);
        if (testValue == 0)
        {
            myNegatives.Set(0, true);
        }
    }
}
// query setting of pairIsFound to see if a pair totals to zero.
Now I'm no statistician, but I think this is an O(n) algorithm. There is no sorting required, and the longest duration scenario is when no pairs exist and the whole integer array is iterated through.
Well - it's different, but I think it's the fastest solution posted so far.
Comments?
Maybe stick each number in a hash table, and if you see a negative one check for a collision? O(n). Are you sure the question isn't to find if ANY sum of elements in the array is equal to 0?
Given a sorted array you can find number pairs (-n and +n) by using two pointers:
the first pointer moves forward (over the negative numbers),
the second pointer moves backwards (over the positive numbers),
depending on the values the pointers point at you move one of the pointers (the one where the absolute value is larger)
you stop as soon as the pointers meet or one passes 0
values of the same magnitude (one negative, one positive, or both zero) are a match.
Now, this is O(n), but sorting (if necessary) is O(n*log(n)).
EDIT: example code (C#)
// sorted array
var numbers = new[]
{
-5, -3, -1, 0, 0, 0, 1, 2, 4, 5, 7, 10 , 12
};
var npointer = 0; // pointer to negative numbers
var ppointer = numbers.Length - 1; // pointer to positive numbers
while( npointer < ppointer )
{
var nnumber = numbers[npointer];
var pnumber = numbers[ppointer];
// each pointer scans only its number range (neg or pos)
if( nnumber > 0 || pnumber < 0 )
{
break;
}
// Do we have a match?
if( nnumber + pnumber == 0 )
{
Debug.WriteLine( nnumber + " + " + pnumber );
}
// Adjust one pointer
if( -nnumber > pnumber )
{
npointer++;
}
else
{
ppointer--;
}
}
Interesting: we have 0, 0, 0 in the array. The algorithm will output two pairs, but in fact there are three pairs ... we need a more precise specification of what exactly should be output.
Here's a nice mathematical way to do it: keep track of the prime numbers, i.e. construct an array prime[0 .. max(array)] so that prime[i] stands for the i-th prime.
counter = 1
for i in inputarray:
    if (i >= 0):
        counter = counter * prime[i]
for i in inputarray:
    if (i <= 0):
        if (counter % prime[-i] == 0):
            return "found"
return "not found"
However, the problem when it comes to implementation is that storing and multiplying prime numbers is only O(1) in a traditional model of computation; if the array (i.e. n) is large enough, this model is inappropriate.
However, it is a theoretical algorithm that does the job.
Here's a slight variation on IVlad's solution which I think is conceptually simpler, and also n log n but with fewer comparisons. The general idea is to start on both ends of the sorted array, and march the indices towards each other. At each step, only move the index whose array value is further from 0 -- in only Theta(n) comparisons, you'll know the answer.
sort the array (n log n)
loop, starting with i=0, j=n-1
    if a[i] == -a[j], then stop:
        if a[i] != 0 or i != j, report success, else failure
    if i >= j, then stop: report failure
    if abs(a[i]) > abs(a[j]) then i++ else j--
(Yeah, probably a bunch of corner cases in here I didn't think about. You can thank that pint of homebrew for that.)
e.g.,
[ -4, -3, -1, 0, 1, 2 ]   notes:
  ^i                ^j    a[i] != -a[j], i < j, abs(a[i]) > abs(a[j])  => i++
      ^i            ^j    a[i] != -a[j], i < j, abs(a[i]) > abs(a[j])  => i++
          ^i        ^j    a[i] != -a[j], i < j, abs(a[i]) < abs(a[j])  => j--
          ^i     ^j       a[i] == -a[j]  -> done
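For reference, a runnable Python version of this two-pointer scan (a simplified sketch that compares the pair's sum directly instead of the absolute values; it treats two zeros as a valid pair):
def has_zero_sum_pair_two_pointers(values):
    a = sorted(values)       # n log n
    i, j = 0, len(a) - 1
    while i < j:
        s = a[i] + a[j]
        if s == 0:
            return True
        if s < 0:
            i += 1           # sum too small: move the left pointer right
        else:
            j -= 1           # sum too large: move the right pointer left
    return False

print(has_zero_sum_pair_two_pointers([-4, -3, -1, 0, 1, 2]))  # True (-1 and 1)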
The sum of two integers can only be zero if one is the negative of the other, like 7 and -7, or 2 and -2 (or both are zero).
