Proving Heap's Algorithm for Generating Permutations

I need to prove the correctness of Heap's algorithm for generating permutations. The pseudocode for it is as follows:
HeapPermute(n)
// Implements Heap's algorithm for generating permutations
// Input: A positive integer n and a global array A[1..n]
// Output: All permutations of the elements of A
if n = 1
    write A
else
    for i ← 1 to n do
        HeapPermute(n − 1)
        if n is odd
            swap A[1] and A[n]
        else
            swap A[i] and A[n]
(taken from Introduction to the Design and Analysis of Algorithms by Levitin)
I know I need to use induction to prove its correctness, but I'm not sure exactly how to go about doing so. I've proved mathematical equations but never algorithms.
I was thinking the proof would look something like this...
1) For n = 1, heapPermute is obviously correct. {1} is printed.
2) Assume heapPermute() outputs a set of n! permutations for a given n. Then
??
I'm just not sure how to go about finishing the induction step. Am I even on the right track here? Any help would be greatly appreciated.

1) For n = 1, HeapPermute is obviously correct: {1} is printed.
2) Assume HeapPermute outputs the set of all n! permutations for a given n.
3) Now, given the first two steps, show that HeapPermute(n+1) outputs all (n+1)! permutations.

Yes, that sounds like a good approach. Think about how to define the set of all permutations recursively, i.e. how permutations of {1..n} can be expressed in terms of permutations of {1..n−1}. For this, recall the inductive proof that there are n! permutations. How does the inductive step proceed there?

A recursive approach is definitely the way to go. Given your first two steps, to prove that HeapPermute(n+1) outputs all (n+1)! permutations, you may want to explain that each element is adjoined to each permutation of the remaining elements.
If you would like to have a look at an explanation by example, this blog post provides one.
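To complement the proof sketch, here is a direct Python transcription of the pseudocode above (a sketch of my own; indices are 0-based, so A[1] becomes a[0] and A[n] becomes a[n − 1]). Collecting the permutations instead of printing them makes it easy to check that n! distinct permutations come out:

```python
def heap_permute(a, n=None, out=None):
    """Heap's algorithm, transcribed from the textbook pseudocode.

    Instead of "write A", each permutation is copied into `out` so the
    results can be inspected afterwards.
    """
    if n is None:
        n = len(a)
    if out is None:
        out = []
    if n == 1:
        out.append(a[:])                      # "write A"
    else:
        for i in range(n):                    # i <- 1 to n
            heap_permute(a, n - 1, out)
            if n % 2 == 1:                    # n is odd: swap A[1] and A[n]
                a[0], a[n - 1] = a[n - 1], a[0]
            else:                             # n is even: swap A[i] and A[n]
                a[i], a[n - 1] = a[n - 1], a[i]
    return out

perms = heap_permute([1, 2, 3])
print(len(perms))   # 6 = 3! permutations, all distinct
```

Checking that the output has n! distinct entries for small n is a useful sanity check alongside the induction argument.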

Related

Complement of a set of intervals

I have a set of intervals inside the range [0,k].
How can I produce the complement set of this set of intervals?
I can come up with an algorithm, but it requires sorting the intervals.
Therefore, the complexity is O(nlogn), where n is the number of intervals.
Is there any faster algorithm to do this? If not, is there any way to prove that this is the optimal complexity?
Thank you.
Suppose you have found an algorithm that performs this task (computing the complement set) in O(n).
Then we can show that you have invented a new sorting algorithm that runs in O(n).
To simplify, let us assume that the array to be sorted consists of natural numbers, and that there is no repetition.
If [a1, a2, ..., an] needs to be sorted, then consider the intervals [a1, a1 + 1), [a2, a2 + 1), ..., [an, an + 1).
Applying your algorithm to generate the complement set of intervals in O(n), we get n intervals
[x1a + 1, x1b), [x2a + 1, x2b), ..., [xna + 1, xnb)
where each pair {xia, xib} corresponds to two successive aj elements after sorting.
Treat each such pair as a directed edge in a graph, connecting the two vertices xia and xib.
To recover the original array in sorted order, we find the start of this path and then walk through the graph, which can be done in O(n).
The sort has been performed in O(n).
The fact that we did not consider repetitions is not too annoying from a theoretical point of view: with hashing, for example, we can remove duplicates in O(n).
Restricting to natural numbers rather than floating-point values is also a detail: finding a new O(n) sorting algorithm for natural numbers would already be a great result.
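For reference, the O(n log n) sort-and-sweep approach the question alludes to can be sketched like this (the function name and the (start, end) pair representation are my own choices):

```python
def complement(intervals, k):
    """Complement of a union of intervals within [0, k].

    `intervals` is a list of (start, end) pairs with 0 <= start <= end <= k.
    Sorting dominates, so the running time is O(n log n).
    """
    result = []
    cursor = 0                      # left edge of the next possible gap
    for start, end in sorted(intervals):
        if start > cursor:          # a gap before this interval begins
            result.append((cursor, start))
        cursor = max(cursor, end)   # intervals may overlap
    if cursor < k:                  # trailing gap up to k
        result.append((cursor, k))
    return result

print(complement([(1, 3), (6, 8)], 10))   # [(0, 1), (3, 6), (8, 10)]
```

The sweep after sorting is linear; the reduction above shows why beating the sort is as hard as beating comparison sorting.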

Two Sum and its Time Complexity

I was going over both solutions to Two Sum on LeetCode, and I noticed that the O(n^2) solution basically tests all combinations of two numbers to see if they sum to the target.
I understand that the naive solution iterates over each element of the array (or more precisely, n − 1 times, because we can't pair the last element with itself) to grab the first addend, and then uses another loop to grab each of the following elements. This second loop iterates n − 1 − i times, where i is the index of the first addend, and I can see that summing n − 1 − i over all i gives O(n^2).
The problem came when I googled "algorithm for finding combinations", which led me to a thread whose accepted answer talks about Gray codes, which go way over my head.
Now I'm unsure whether my assumption was correct, whether the naive solution is a version of a Gray code, or something else.
If Two Sum is a combinations problem, then its time complexity would be O(n!/((n−k)! k!)), i.e. O(nCk), but I don't see how that reduces to O(n^2).
I read the Two Sum question and it states that:
Given an array of integers nums and an integer target, return indices
of the two numbers such that they add up to target.
It is a combinations problem. However, on closer inspection you will find that here the value of k is fixed.
You need to find two numbers from a list of given numbers that
add up to a particular target.
Any two numbers from n numbers can be selected in nC2 ways.
nC2 = n! / ((n − 2)! · 2!)
    = n · (n − 1) · (n − 2)! / ((n − 2)! · 2)
    = n · (n − 1) / 2
    = (n^2 − n) / 2
Ignoring the lower-order term n and the constant factor 2, which hardly matter as n tends to infinity, the expression finally gives a complexity of O(n^2).
Hence, a naïve solution of Two Sum has a complexity of O(n^2). Check this article for more information on your question.
https://www.geeksforgeeks.org/given-an-array-a-and-a-number-x-check-for-pair-in-a-with-sum-as-x/
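The nC2 pair-checking idea above translates directly into the naive solution (a sketch; the name and return convention are illustrative, not LeetCode's exact signature):

```python
def two_sum_naive(nums, target):
    """Naive Two Sum: try every pair (i, j) with i < j.

    There are nC2 = n(n-1)/2 such pairs, hence O(n^2) time.
    Returns the indices of the first matching pair, or None.
    """
    for i in range(len(nums) - 1):
        for j in range(i + 1, len(nums)):   # only pairs after index i
            if nums[i] + nums[j] == target:
                return [i, j]
    return None

print(two_sum_naive([2, 7, 11, 15], 9))   # [0, 1]
```

The inner loop runs n − 1 − i times, exactly as described in the question, and summing that over i gives the n(n−1)/2 pair count.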

Count the total number of subsets that don't have consecutive elements

I'm trying to solve a pretty complex combinatorics and subset-counting problem. First of all, let's say we are given the set A = {1, 2, 3, ..., N} where N ≤ 10^18. We want to count the subsets that don't contain consecutive numbers.
Example
Let's say N = 3 and A = {1,2,3}. There are 2^3 = 8 subsets in total, but we don't want to count the subsets (1,2), (2,3) and (1,2,3). So the answer here is 5, counting only the remaining 5 subsets: (empty subset), (1), (2), (3), (1,3). We also want to print the result modulo 10^9 + 7.
What I've done so far
I was thinking that this should be solved using dynamic programming with two states (whether or not we take the i-th element), but then I saw that N can go up to 10^18, so I suspect it should be solved with a mathematical formula. Can you please give me some hints on where to start to derive the formula?
Thanks in advance.
Take a look at How many subsets contain no consecutive elements? on the Mathematics Stack Exchange.
They come to the conclusion that the number of non-consecutive subsets of the set {1,2,3,...,n} is fib(n+2), where fib(n) denotes the n-th Fibonacci number. Your solution for n = 3 conforms to this: fib(5) = 5. If you can implement a Fibonacci algorithm, then you can solve this problem, but solving it for a number as large as 10^18 will still be a challenge.
As mentioned in the comments here, you can check out the fast doubling algorithm on Hacker Earth.
It will find Fibonacci numbers in O(log n).
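Putting the two answers together, a sketch in Python (function names are mine): fast doubling computes F(n) with O(log n) multiplications, and the count is F(n+2) taken mod 10^9 + 7, so even N = 10^18 needs only about 60 recursion levels.

```python
MOD = 10**9 + 7

def fib_pair(n):
    """Return (F(n), F(n+1)) mod MOD via fast doubling, O(log n).

    Identities used, with F(0) = 0, F(1) = 1:
        F(2k)   = F(k) * (2*F(k+1) - F(k))
        F(2k+1) = F(k)^2 + F(k+1)^2
    """
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)          # a = F(k), b = F(k+1)
    c = a * (2 * b - a) % MOD        # F(2k); Python's % is non-negative
    d = (a * a + b * b) % MOD        # F(2k+1)
    if n % 2 == 0:
        return (c, d)
    return (d, (c + d) % MOD)

def count_nonconsecutive_subsets(n):
    """Subsets of {1..n} with no two consecutive elements = F(n+2)."""
    return fib_pair(n + 2)[0]

print(count_nonconsecutive_subsets(3))   # 5
```

For n = 3 this agrees with the worked example above, and the recursion depth stays tiny even at the problem's upper bound.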

All Possible Sorting algorithm analysis

The “APS” (All Possible Sorting) algorithm sorts an array A of size n by generating all possible orderings of its n elements and, for each ordering, checking whether the elements are in sorted (ascending) order.
a) What is the worst-case time complexity of APS? Explain your logic / show your work.
My answer:
Worst case is O(n!) because it generates all possible sequences and then checks if sorted.
Preferably, I would like someone to tell me if I'm right or wrong and how to get to the answer. This big O stuff confuses me.
APS generates all possible permutations of the n elements, which gives n! candidate orderings, so you are on the right track. One refinement: checking whether a single ordering is sorted takes O(n), so the worst case is O(n · n!) rather than O(n!).
Proving such a bound just requires you to show that the runtime is asymptotically upper-bounded, which means proving that:
f(n) = O(n · n!) if there exist constants m and c such that |f(n)| ≤ m · n · n! for all n > c.
Proving this is easier if you have the actual algorithm written out, but if you walk through your logic it should do the trick.
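A minimal sketch of APS in Python, assuming "generating all possible sequences" means iterating over all n! permutations (the function name is mine):

```python
from itertools import permutations

def aps_sort(a):
    """All Possible Sorting: try every permutation of `a` and return the
    first one that is in ascending order.

    There are n! candidates and the sortedness check on each is O(n),
    giving O(n * n!) in the worst case.
    """
    for candidate in permutations(a):
        if all(candidate[i] <= candidate[i + 1]
               for i in range(len(candidate) - 1)):
            return list(candidate)
    return list(a)   # unreachable: some permutation is always sorted

print(aps_sort([3, 1, 2]))   # [1, 2, 3]
```

Having the algorithm written out like this makes the counting argument concrete: the loop body runs at most n! times and each check is linear.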

Show that n positive integers can be sorted in Nlogk time

Show that n positive integers in the range 1 to k can be sorted in O(n log k) time.
I can only use Mergesort, since I know how to do it using a heap. This is not a HW problem, it's from Skiena's book.
I see that if I have k = 3, then I can merge the list in 3 steps; but does that suffice for an answer, or for 'showing' it?
Here is one idea for efficient sorting: as user templatetypedef said, radix sort may be what you are looking for.
Hope it helps
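One way to make the O(n log k) bound concrete without a heap (a sketch of my own, not necessarily the Mergesort-based argument the exercise intends): count duplicates first, then sort only the distinct values, of which there are at most min(n, k).

```python
from collections import Counter

def sort_small_range(nums, k):
    """Sort n positive integers drawn from 1..k in O(n log k).

    Counting is O(n); sorting the at most min(n, k) distinct keys costs
    O(min(n, k) * log k), so the total is O(n log k).
    """
    counts = Counter(nums)                 # O(n) tallying
    result = []
    for value in sorted(counts):           # at most min(n, k) distinct keys
        result.extend([value] * counts[value])
    return result

print(sort_small_range([3, 1, 2, 3, 1], 3))   # [1, 1, 2, 3, 3]
```

When k is much smaller than n the log factor shrinks accordingly, which is exactly the point of the exercise.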
