Algorithm for a stair climbing permutation

I've been asked to build an algorithm that involves a permutation and I'm a little stumped and looking for a starting place. The details are this...
You are climbing a staircase of n stairs. Each time you can either climb 1 or 2 steps at a time. How many distinct ways can you climb to the top?
Any suggestions how I can tackle this challenge?

You're on the wrong track with permutations: permutations are orderings of a set, and this problem is something else.
Problems involving simple decisions that produce another instance of the same problem are often solvable by dynamic programming. This is such a problem.
When you have n steps to climb, you can choose a hop of either 1 or 2 steps, then solve the smaller problems for n-1 and n-2 steps respectively. In this case you want to add the numbers of possibilities. (Many DPs are to find minimums or maximums instead, so this is a bit unusual.)
The "base cases" are when you have either 0 or 1 step. There's exactly 1 way to traverse each of these.
With all that in mind, we can write this dynamic program for the number of ways to climb n steps as a recursive expression:
W(n) = W(n - 1) + W(n - 2)   if n > 1
W(n) = 1                     if n == 0 or n == 1
Now you don't want to implement this as a simple recursive function. It will take time exponential in n to compute because each call to W calls itself twice. Yet most of those calls are unnecessary repeats. What to do?
One way to get the job done is to find a way to compute the W(i) values in sequence. For a 1-valued DP it's usually quite simple, and so it is here:
W(0) = 1
W(1) = 1 (from the base case)
W(2) = W(1) + W(0) = 1 + 1 = 2
W(3) = W(2) + W(1) = 2 + 1 = 3
You get the idea. This is a very simple DP indeed. To compute W(i), we need only two previous values, W(i-1) and W(i-2). A simple O(n) loop will do the trick.
As a sanity check, look at W(3)=3. Indeed, to go up 3 steps, we can take hops of 1 then 2, 2 then 1, or three hops of 1. That's 3 ways!
Can't resist one more? W(4)=2+3=5. The hop sequences are (2,2), (2,1,1), (1,2,1), (1,1,2), and (1,1,1,1): 5 ways.
In fact, the chart above will look familiar to many. The number of ways to climb n steps is the (n+1)th Fibonacci number. You should code the loop yourself. If stuck, you can look up any of the hundreds of posted examples.
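If you do get stuck, one minimal sketch of such a loop in Python (my illustration, keeping only the two previous values) might look like this:

def ways_to_climb(n):
    # Number of ways to climb n steps in hops of 1 or 2 (O(n) time, O(1) space).
    prev, curr = 1, 1                    # W(0) = 1, W(1) = 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr   # W(i) = W(i-1) + W(i-2)
    return curr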

// Print every hop sequence (of 1s and 2s) that climbs exactly n steps.
private static void printPath(int n, String path) {
    if (n == 0) {
        // no steps left: print the sequence, dropping the leading comma
        System.out.println(path.substring(1));
    }
    if (n - 1 >= 0) {
        printPath(n - 1, path + ",1");   // take a hop of 1
    }
    if (n - 2 >= 0) {
        printPath(n - 2, path + ",2");   // take a hop of 2
    }
}
public static void main(String[] args) {
    printPath(4, "");
}

Since it looks like a programming assignment, I will give you the steps to get you started instead of giving the actual code:
You can write a recursive function which keeps track of the step you are on.
If you have reached step n, you have found one way; if you have gone past n, you should discard that path.
At every step you can call the function twice, advancing by 1 step and by 2 steps.
The function will return the total number of ways once it finishes.
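For illustration only (not the assignment solution, and with hypothetical names), a minimal Python sketch of that recursion could look like this:

def count_ways(step, n):
    # Count the ways to reach exactly step n, starting from 'step'.
    if step == n:
        return 1        # reached the top: one valid way
    if step > n:
        return 0        # overshot: discard this path
    # try a hop of 1 and a hop of 2 from the current step
    return count_ways(step + 1, n) + count_ways(step + 2, n)

print(count_ways(0, 4))   # 5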


Is Recursion W/Memoization In Staircase Problem Bottom-Up?

Considering the classical staircase problem as "Davis has a number of staircases in his house and he likes to climb each staircase 1, 2, or 3 steps at a time. Being a very precocious child, he wonders how many ways there are to reach the top of the staircase."
My approach is to use memoization with recursion as
# Time O(N), Space O(N), DP Bottom Up + Memoization
def stepPerms(n, memo={}):
    if n < 3:
        return n
    elif n == 3:
        return 4
    if n in memo:
        return memo[n]
    else:
        memo[n] = stepPerms(n - 1, memo) + stepPerms(n - 2, memo) + stepPerms(n - 3, memo)
        return memo[n]
The question that comes to my mind is: is this solution bottom-up or top-down? My reasoning is that since we go all the way down and then calculate the upper N values on the way back (imagine the recursion tree), I consider this bottom-up. Is this correct?
Recursion strategies are, as a general rule, top-down approaches, whether they use memoization or not. The underlying algorithm design is dynamic programming, which is traditionally built in a bottom-up fashion.
I noticed that you wrote your code in Python, and Python is generally not happy about deep recursion (small amounts are okay, but performance quickly takes a hit, and there is a maximum recursion depth of 1000, unless that has changed since I last read about it).
If we write a bottom-up dynamic programming version, we can get rid of this recursion, and we can also recognize that we only need a constant amount of space, since we are only really interested in the last 3 values:
def stepPerms(n):
    if n < 1:
        return n
    memo = [1, 2, 4]                 # ways to climb 1, 2 and 3 steps
    if n <= 3:
        return memo[n - 1]
    for i in range(3, n):
        memo[i % 3] = sum(memo)      # ways(i+1) = sum of the previous three
    return memo[(n - 1) % 3]         # note the modulo: only 3 slots are kept
Notice how much simpler the logic is, apart from the index being one less than the step count, since positions start at 0 instead of counting from 1.
In the top-down approach, the complex module is divided into submodules, so the recursive version is a top-down approach. A bottom-up approach, on the other hand, begins with the elementary modules and then combines them.
And a bottom-up version of this solution would be:
memo = {}
for i in range(0, 3):
    memo[i] = i
memo[3] = 4
for i in range(4, n + 1):
    memo[i] = memo[i - 1] + memo[i - 2] + memo[i - 3]

Find a math algorithm for an equation

I just posted a math question on math.stackexchange, but I'll ask here for a recursive algorithm to solve it programmatically.
The problem: fill in the blanks with the numbers 1 to 9 (each used exactly once) to complete the equation.
Additional conditions:
1. Mathematical operator precedence DOES matter.
2. All numbers (including the evaluation result) must be integers,
which means every division must come out even (e.g. 9 : 3 is OK because 9 mod 3 = 0; 8 : 3 is not OK because 8 mod 3 != 0).
3. For those who don't know (as one person asked in the original question), the operations in the diagram are:
+ = plus; : = divide; X = multiply; - = minus.
There should be more than 1 answer. I'd like to have a recursive algorithm to find out all the solutions.
Original question
PS: I'd like to learn about the recursive algorithm and about performance improvements. I tried to solve the problem by brute force and my PC froze for quite a while.
You have to find the right permutations.
9! = 362880
This is not a big number and you can do your calculations the following way:
isValid(elements)
    // return true if and only if the permutation of elements yields the expected result
end isValid
isValid is the validator, which checks whether a given permutation is correct.
calculate(elements, depth)
    // end condition: all 9 blanks have been filled
    if (depth > 9) then
        // if valid, then store
        if (isValid(elements)) then
            store(elements)
        end if
        return
    end if
    // iterate elements
    for element = 1 to 9
        // exclude elements already in the set
        if (not contains(elements, element)) then
            calculate(union(elements, element), depth + 1)
        end if
    end for
end calculate
Call calculate as follows:
calculate(emptySet, 1)
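A minimal Python version of this brute force (my sketch; it assumes the specific equation used in the PARI/GP answer below, a + 13b/c + d + 12e - f - 11 + gh/i - 10 = 66) might look like this:

from itertools import permutations

def solve():
    # Try all 9! orderings of 1..9 and keep those that satisfy the equation.
    solutions = []
    for a, b, c, d, e, f, g, h, i in permutations(range(1, 10)):
        # both divisions must come out even
        if (13 * b) % c != 0 or (g * h) % i != 0:
            continue
        if a + 13 * b // c + d + 12 * e - f - 11 + g * h // i - 10 == 66:
            solutions.append((a, b, c, d, e, f, g, h, i))
    return solutions

print(len(solve()))   # number of valid fill-ins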
Here's a solution using PARI/GP:
div(a,b)=if(b&&a%b==0,a/b,error())
f(v)=
{
iferr(
v[1]+div(13*v[2],v[3])+v[4]+12*v[5]-v[6]-11+div(v[7]*v[8],v[9])-10==66
, E, 0)
}
for(i=0,9!-1,if(f(t=numtoperm(9,i)),print(t)))
The function f defines the particular function here. I used a helper function div which throws an error if the division fails (producing a non-integer or dividing by 0).
The program could be made more efficient by splitting out the blocks which involve division and aborting early if they fail. But since this takes only milliseconds to run through all 9! permutations I didn't think it was worth it.

Count ways to take at least one stick

There are N sticks placed in a straight line. Bob is planning to take a few of these sticks, but whatever number of sticks he takes, he will take no two successive sticks (i.e. if he takes stick i, he will not take sticks i-1 and i+1).
So, given N, we need to calculate how many different sets of sticks he could select. He needs to take at least one stick.
Example: Let N=3, then the answer is 4.
The 4 sets are: (1, 3), (1), (2), and (3)
The main problem is that I want a solution better than simple recursion. Can there be a formula for it? I am not able to crack it.
It's almost identical to Fibonacci. The final solution is actually fibonacci(N)-1, but let's explain it in terms of actual sticks.
To begin with, we disregard the fact that he needs to pick up at least 1 stick. The solution in this case looks as follows:
If N = 0, there is 1 solution (the solution where he picks up 0 sticks)
If N = 1, there are 2 solutions (pick up the stick, or don't)
Otherwise he can choose to either
pick up the first stick and recurse on N-2 (since the second stick needs to be discarded), or
leave the first stick and recurse on N-1
After this computation is finished, we remove 1 from the result to avoid counting the case where he picks up 0 sticks in total.
Final solution in pseudo code:
int numSticks(int N) {
    return N == 0 ? 1
         : N == 1 ? 2
         : numSticks(N-2) + numSticks(N-1);
}
solution = numSticks(X) - 1;
As you can see numSticks is actually Fibonacci, which can be solved efficiently using for instance memoization.
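For instance, a memoized Python sketch of that recurrence (my illustration, using hypothetical names) could be:

from functools import lru_cache

@lru_cache(maxsize=None)
def num_sticks(n):
    # ways to pick a (possibly empty) set of non-adjacent sticks out of n
    if n == 0:
        return 1
    if n == 1:
        return 2
    return num_sticks(n - 2) + num_sticks(n - 1)

def count_sets(n):
    return num_sticks(n) - 1   # drop the empty selection

print(count_sets(3))           # 4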
Let the number of sticks taken by Bob be r.
The problem has a bijection to the number of binary vectors with exactly r 1's, and no two adjacent 1's.
This is solvable by first placing the r 1's; you are then left with exactly n-r 0's to place between them and on the sides. However, you must place r-1 0's between the 1's, so you are left with exactly n-r-(r-1) = n-2r+1 "free" 0's.
The number of ways to arrange such vectors is now given as:
(1) Choose(n-2r+1 + (r+1) - 1, n-2r+1) = Choose(n-r+1, n-2r+1)
Formula (1) derives from the number of ways of choosing n-2r+1 elements from r+1 distinct possibilities with replacement.
Since we solved it for a specific value of r, and you are interested in all r >= 1, you need to sum over each 1 <= r <= n.
So the solution to the problem is given by the closed formula:
(2) Sum{ Choose(n-r+1, n-2r+1) | for each 1 <= r <= n }
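As a quick sanity check (my addition, not part of the original answer), the closed formula can be compared with the Fibonacci-style recurrence from the other answer:

from math import comb

def by_formula(n):
    # closed formula (2): skip terms where n-2r+1 would be negative
    return sum(comb(n - r + 1, n - 2 * r + 1)
               for r in range(1, n + 1) if n - 2 * r + 1 >= 0)

def by_recurrence(n):
    # numSticks(n) - 1 from the previous answer, computed iteratively
    a, b = 1, 2                      # numSticks(0), numSticks(1)
    for _ in range(n - 1):
        a, b = b, a + b
    return (b if n >= 1 else a) - 1

for n in range(1, 10):
    assert by_formula(n) == by_recurrence(n)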
Disclaimer:
A close variant of the problem (with fixed r) was given as homework in the course I am TAing this semester; the main difference is the need to sum over the various values of r.

In how many ways can you count to N using numbers <= N [duplicate]

This question already has answers here:
What is the fastest (known) algorithm to find the n-th Catalan number mod m?
(2 answers)
Closed 8 years ago.
In how many ways can you sum numbers less than or equal to N so that the total equals N? What is the algorithm to solve that?
Example:
Let's say that we have
n = 10;
There are a lot of combinations, but for example we can do:
1+1+1+1+1+1+1+1+1+1 = 10
1+2+1+1+1+1+1+1+1=10
1+1+2+1+1+1+1+1+1=10
.....
1+9=10
10=10
8+2=10
and so on.
If you think this is the Catalan question, the answer is: the problem looks like a Catalan problem, but it is not. If you look at the results you will see that, say for N=5, the Catalan algorithm gives 14 possibilities, but the right answer is 2^4 = 16 possibilities if you count all of them, or the Fibonacci sequence if you keep only the unique combinations (e.g. for N=5 we have 8 possibilities), so the Catalan algorithm doesn't check out.
This was a question I received in a quiz done for fun; at the time I thought the solution was a well-known formula, so I lost a lot of time trying to remember it :)
I found 2 solutions for this problem, and 1 more if you consider only the unique combinations (e.g. 2+8 is the same as 8+2, so you count only one of them).
So what is the algorithm to solve it?
This is an interesting problem. I do not have the solution (yet), but I think this can be done in a divide-and-conquer way. If you think of the problem space as a binary tree, you can generate it like this:
The root is the whole number n
Its children are floor(n/2) and ceil(n/2)
Example:
n=5
5
/ \
2 3
/ \ / \
1 1 1 2
/ \
1 1
If you do this recursively, you get a binary tree. You can then traverse the tree in this manner to get all the possible combinations summing up to n:
get_combinations(root_node)
{
    combinations = []
    combine(combinations, root_node.child_left, root_node.child_right)
}
combine(combinations, nodeA, nodeB)
{
    new_combi = "nodeA" + "+nodeB"
    combinations.add(new_combi)
    if nodeA.has_children(): combinations.add( combine(combinations, nodeA.child_left, nodeA.child_right) + "+nodeB" )
    if nodeB.has_children(): combinations.add( "nodeA+" + combine(combinations, nodeB.child_left, nodeB.child_right) )
    return new_combi
}
This is just a draft. Of course you don't have to explicitly generate the tree beforehand; you can do that along the way. Maybe I can come up with a nicer algorithm if I find the time.
EDIT:
OK, I didn't quite answer the OP's question to the point, but I don't like to leave stuff unfinished, so here I present my solution as a working Python program:
import math

def print_combinations(n):
    for calc in combine(n):
        line = ""
        count = 0
        for op in calc:
            line += str(int(op))
            count += 1
            if count < len(calc):
                line += "+"
        print(line)

def combine(n):
    p_comb = []
    if n >= 1:
        p_comb.append([n])
    if n > 1:
        comb_left = combine(math.floor(n / float(2)))
        comb_right = combine(math.ceil(n / float(2)))
        for l in comb_left:
            for r in comb_right:
                lr_merge = []
                lr_merge.extend(l)
                lr_merge.extend(r)
                p_comb.append(lr_merge)
    return p_comb
You can now generate all possible ways of summing up n with numbers <= n. For example if you want to do that for n=5 you call this: print_combinations(5)
Have fun, be aware though that you run into memory issues pretty fast (dynamic programming to the rescue!) and that you can have equivalent calculations (e.g. 1+2 and 2+1).
All 3 solutions that I found use mathematical induction:
solution 1:
if n = 0, comb = 1
if n = 1, comb = 1
if n = 2, there are 1+1 and 2, so comb = 2 = comb(0) + comb(1)
if n = 3, there are 1+1+1, 1+2, 2+1 and 3, so comb = 4 = comb(0) + comb(1) + comb(2)
if n = 4, there are 1+1+1+1, 1+2+1, 1+1+2, 2+1+1, 2+2, 1+3, 3+1 and 4, so comb = 8 = comb(0) + comb(1) + comb(2) + comb(3)
Now we see a pattern here that says:
at value k we have comb(k) = sum(comb(i)), where i is between 0 and k-1
Using mathematical induction we can prove for k+1 that:
comb(k+1) = sum(comb(i)), where i is between 0 and k
Solution number 2:
If we pay a little more attention to solution 1, we can say that:
comb(0)=2^0
comb(1)=2^0
comb(2)=2^1
comb(3)=2^2
comb(4)=2^3
comb(k)=2^(k-1)
Again, using mathematical induction, we can prove that
comb(k+1) = 2^k
Solution number 3 (if we keep only the unique combinations): we can see that:
comb(0) = 1
comb(1) = 1
comb(2) = 1+1, 2, so comb(2) = 2
comb(3) = 1+1+1, 1+2, 2+1, 3; we take out 1+2 because we already have 2+1 and it's the same, so comb(3) = 3
comb(4) = 1+1+1+1, 1+2+1, 1+1+2, 2+1+1, 2+2, 1+3, 3+1, 4; here we take out 1+2+1, 2+1+1 and 1+3 because we already have them in a different order, so comb(4) = 5.
If we continue we can see that:
comb(5) = 8
comb(6)=13
We can now see the pattern:
comb(k) = comb(k-1) + comb(k-2), the Fibonacci sequence.
Again, using mathematical induction, we can prove that for k+1:
comb(k+1) = comb(k) + comb(k-1)
Now it's easy to implement those solutions in a language, using recursion for 2 of the solutions or just the non-recursive closed form for the 2^(k-1) solution.
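For illustration, a small Python sketch of the first two solutions (my own addition, with hypothetical names) could be:

def comb_closed(n):
    # solution 2: the closed form 2^(n-1) for the number of ordered sums
    return 1 if n <= 1 else 2 ** (n - 1)

def comb_by_sum(n):
    # solution 1: comb(k) = sum of comb(0..k-1), computed iteratively
    ways = [1]                       # comb(0) = 1
    for k in range(1, n + 1):
        ways.append(sum(ways))
    return ways[n]

print(comb_closed(5), comb_by_sum(5))   # both print 16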
And by the way, this has serious connections with graph theory (how many sub-graphs you can build starting from a bigger graph, our number N, with the sub-graphs being the ways to count).
Amazing, isn't it?

How to master in-place array modification algorithms?

I am preparing for a software job interview, and I am having trouble with in-place array modifications.
For example, in the out-shuffle problem you interleave two halves of an array so that 1 2 3 4 5 6 7 8 would become 1 5 2 6 3 7 4 8. This question asks for a constant-memory solution (and linear-time, although I'm not sure that's even possible).
At first I thought a linear algorithm would be trivial, but then I couldn't work it out. Then I did find a simple O(n^2) algorithm, but it took me a long time, and I still can't find a faster solution.
I remember also having trouble solving a similar problem from Bentley's Programming Pearls, column 2:
Rotate an array left by i positions (e.g. abcde rotated by 2 becomes cdeab), in time O(n) and with just a couple of bytes extra space.
Does anyone have tips to help wrap my head around such problems?
About an O(n) time, O(1) space algorithm for out-shuffle
Doing an out-shuffle in O(n) time and O(1) space is possible, but it is tough. Not sure why people think it is easy and are suggesting you try something else.
The following paper has an O(n) time and O(1) space solution (though it is for in-shuffle, doing in-shuffle makes out-shuffle trivial):
http://arxiv.org/PS_cache/arxiv/pdf/0805/0805.1598v1.pdf
About a method to tackle in-place array modification algorithms
In-place modification algorithms can become very hard to handle.
Consider a couple:
In-place out-shuffle in linear time. Uses number theory.
In-place merge sort. This was an open problem for a few years; an algorithm eventually appeared, but it was too complicated to be practical. It uses very complicated bookkeeping.
Sorry, if this sounds discouraging, but there is no magic elixir that will solve all in-place algorithm problems for you. You need to work with the problem, figure out its properties, and try to exploit them (as is the case with most algorithms).
That said, for array modifications where the result is a permutation of the original array, you can try the method of following the cycles of the permutation. Basically, any permutation can be written as a disjoint set of cycles (see John's answer too). For instance the permutation:
1 4 2 5 3 6
of 1 2 3 4 5 6 can be written as
1 -> 1
2 -> 3 -> 5 -> 4 -> 2
6 -> 6.
You can read the arrow as 'goes to'.
So to permute the array 1 2 3 4 5 6 you follow the three cycles:
1 goes to 1.
6 goes to 6.
2 goes to 3, 3 goes to 5, 5 goes to 4, and 4 goes to 2.
To follow this long cycle, you can use just one temp variable. Store 3 in it. Put 2 where 3 was. Now put 3 in 5 and store 5 in the temp and so on. Since you only use constant extra temp space to follow a particular cycle, you are doing an in-place modification of the array for that cycle.
Now if I gave you a formula for computing where an element goes to, all you now need is the set of starting elements of each cycle.
A judicious choice of the starting points of the cycles can make the algorithm easy. If you come up with the starting points in O(1) space, you now have a complete in-place algorithm. This is where you might actually have to get familiar with the problem and exploit its properties.
Even if you didn't know how to compute the starting points of the cycles, but had a formula to compute the next element, you could use this method to get an O(n) time in-place algorithm in some special cases.
For instance: suppose you knew that the array of unsigned integers held only positive integers.
You can now follow the cycles, but negate the numbers in them as an indicator of 'visited' elements. Now you can walk the array and pick the first positive number you come across and follow the cycles for that, making the elements of the cycle negative and continue to find untouched elements. In the end, you just make all the elements positive again to get the resulting permutation.
You get an O(n) time and O(1) space algorithm! Of course, we kind of 'cheated' by using the sign bits of the array integers as our personal 'visited' bitmap.
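A minimal Python sketch of that trick (my own illustration; dest stands for whatever formula maps an index to its target position) might look like this:

def permute_in_place(a, dest):
    # Rearrange positive integers so a[i] ends up at index dest(i),
    # using negation as the 'visited' mark (O(n) time, O(1) extra space).
    n = len(a)
    for start in range(n):
        if a[start] < 0:                 # already moved in an earlier cycle
            continue
        value, i = a[start], start
        while True:
            j = dest(i)                  # where the carried value must go
            value, a[j] = a[j], -value   # drop it (negated), pick up the displaced one
            i = j
            if i == start:
                break
    for i in range(n):                   # clear the 'visited' marks
        if a[i] < 0:
            a[i] = -a[i]

For example, permute_in_place(a, lambda i: (i + k) % len(a)) performs the cyclic shift mentioned below.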
Even if the array was not necessarily integers, this method (of following the cycles, not the hack of sign bits :-)) can actually be used to tackle the two problems you state:
The in-shuffle (or out-shuffle) problem: When 2n+1 is a power of 3, it can be shown (using number theory) that 1,3,3^2, etc are in different cycles and all cycles are covered using those. Combine this with the fact that the in-shuffle is susceptible to divide and conquer, you get an O(n) time, O(1) space algorithm (the formula is i -> 2*i modulo 2n+1). Refer to the above paper for more details.
The cyclic shift an array problem: Cyclic shift an array of size n by k also gives a permutation of the resulting array (given by the formula i goes to i+k modulo n), and can also be solved in linear time and in-place using the following the cycle method. In fact, in terms of the number of element exchanges this following cycle method is better than the 3 reverses algorithm. Of course, following the cycle method can kill the cache because of the access patterns, and in practice, the 3 reverses algorithm might actually fare better.
As for interviews, if the interviewer is a reasonable person, they will be looking at how you think and approach the problem and not whether you actually solve it. So even if you don't solve a problem, I think you should not be discouraged.
The basic strategy with in-place algorithms is to figure out the rule for moving an entry from slot N to slot M.
So, take your shuffle, for instance. If A and B are card positions and N is the number of cards, the rules for the first half of the deck are different from the rules for the second half of the deck:
// A is the current location, B is the new location.
// This math assumes that the first card is card 0.
if (A < N/2)
    B = A * 2;
else
    B = (A - N/2) * 2 + 1;
Now that we know the rule, we just have to move each card. Each time we move a card, we calculate its new location B, remove the card that is currently in slot B, place the carried card in slot B, then let B become the new A and loop back to the top of the algorithm. Each card moved displaces another card, which becomes the next card to be moved.
I think the analysis is easier if we are 0 based rather than 1 based, so
0 1 2 3 4 5 6 7 // before
0 4 1 5 2 6 3 7 // after
So we want to move 1->2 2->4 4->1 and that completes a cycle
then move 3->6 6->5 5->3 and that completes a cycle
and we are done.
Now we know that card 0 and card N-1 don't move, so we can ignore those,
so we know that we only need to swap N-2 cards in total. The only sticky bit
is that there are 2 cycles, 1,2,4,1 and 3,6,5,3. When we get to card 1 the
second time, we need to move on to card 3.
int A = 1;
int N = 8;
int B;
card ary[N];                        // our array of cards
card a = ary[A];
for (int i = 0; i < N - 2; ++i)     // N-2 cards need to move in total
{
    if (A < N/2)
        B = A * 2;
    else
        B = (A - N/2) * 2 + 1;
    card b = ary[B];                // card currently in the destination slot
    ary[B] = a;                     // drop the card we are carrying
    a = b;                          // carry the displaced card next
    A = B;
    if (A == 1)                     // back at the start of the first cycle:
    {                               // jump to the start of the second cycle
        A = 3;
        a = ary[A];
    }
}
Now this code only works for the 8 card example, because of that if test that moves us from 1 to 3 when we finish the first cycle. What we really need is a general rule to recognize the end of the cycle, and where to go to start the next one.
That rule could be mathematical if you can think of a way, or you could keep track of which places you had visited in a separate array, and when A is back to a visited place, you could then scan forward in your array looking for the first non-visited place.
For your in-place algorithm to be O(n), the solution will need to be mathematical.
I hope this breakdown of the thinking process is helpful to you. If I was interviewing you, I would expect to see something like this on the whiteboard.
Note: As Moron points out, this doesn't work for all values of N, it's just an example of the sort of analysis that an interviewer is looking for.
Frank,
For programming with loops and arrays, nothing beats David Gries's textbook The Science of Programming. I studied it over 20 years ago, and there are ideas that I still use every day. It is very mathematical and will require real effort to master, but that effort will repay you many times over for your whole career.
Complementing Aryabhatta's answer:
There is a general method to "follow the cycles" even without knowing the starting positions of each cycle or using memory to record visited cycles. This is especially useful if you need O(1) memory.
For each position i in the array, follow the cycle without moving any data yet, until you reach...
the starting position i: this is the end of the cycle, and it is a new cycle; follow it again, moving the data this time.
a position lower than i: this cycle was already visited, nothing to do with it.
Of course this has a time overhead (O(n^2), I believe) and has the cache problems of the general "following cycles" method.
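A minimal Python sketch of this variant (my illustration; again dest is the index-mapping formula) could be:

def permute_without_marks(a, dest):
    # Process a cycle only when its smallest index is reached, so no
    # 'visited' marks are needed (O(1) extra space, roughly O(n^2) time).
    n = len(a)
    for start in range(n):
        j = dest(start)
        while j > start:                 # walk the cycle without moving data
            j = dest(j)
        if j < start:                    # met a smaller index: cycle already handled
            continue
        value, i = a[start], dest(start) # j == start: follow it again, moving data
        while i != start:
            a[i], value = value, a[i]
            i = dest(i)
        a[start] = value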
For the first one, let's assume n is even. You have:
first half: 1 2 3 4
second : 5 6 7 8
Let x1 = first[1], x2 = second[1].
Now, you have to print one from the first half, one from the second, one from the first, one from the second...
Meaning first[1], second[1], first[2], second[2], ...
Obviously, you don't keep two halves in memory, as that will be O(n) memory. You keep pointers to the two halves. Do you see how you'd do that?
The second is a bit harder. Consider:
12345
abcde
..cde
.....ab
..cdeab
cdeab
Do you notice anything? You should notice that the question basically asks you to move the first i characters to the end of your string, without affording you the luxury of copying the last n - i characters into a buffer, appending the first i, and returning the buffer. You need to do it with O(1) memory.
To figure how to do this you basically need a lot of practice with these kinds of problems, as with anything else. Practice makes perfect basically. If you've never done these kinds of problems before, it's unlikely you'll figure it out. If you have, then you have to think about how you can manipulate the substrings and or indices such that you solve your problem under the given constraints. The general rule is to work and learn as much as possible so you'll figure out the solutions to these problems very fast when you see them. But the solution differs quite a bit from problem to problem. There's no clear recipe for success I'm afraid. Just read a lot and understand the stuff you read before you move on.
The logic for the second problem is this: what happens if we reverse the substring [1, 2], the substring [3, 5] and then concatenate them and reverse that? We have, in general:
1, 2, 3, 4, ..., i, i + 1, i + 2, ..., N
reverse [1, i] =>
i, i - 1, ..., 4, 3, 2, 1, i + 1, i + 2, ..., N
reverse [i + 1, N] =>
i, i - 1, ..., 4, 3, 2, 1, N, ..., i + 1
reverse [1, N] =>
i + 1, ..., N, 1, 2, 3, 4, ..., i - 1, i
which is what you wanted. Writing the reverse function using O(1) memory should be trivial.
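For example, a minimal in-place version of that reversal trick in Python (my sketch) could be:

def rotate_left(a, i):
    # Rotate list a left by i positions using the three reversals above (O(1) extra space).
    def reverse(lo, hi):                 # reverse a[lo..hi] in place
        while lo < hi:
            a[lo], a[hi] = a[hi], a[lo]
            lo, hi = lo + 1, hi - 1
    n = len(a)
    if n == 0:
        return
    i %= n
    reverse(0, i - 1)                    # reverse the first i elements
    reverse(i, n - 1)                    # reverse the rest
    reverse(0, n - 1)                    # reverse the whole array

For instance, calling rotate_left on list('abcde') with i = 2 leaves the list as c d e a b, matching the Programming Pearls example.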
Generally speaking, the idea is to loop through the array once, while
storing the value at the position you are at in a temporary variable,
finding the correct value for that position and writing it, and
either moving on to the next value, or figuring out what to do with your temporary value before continuing.
A general approach could be as follows:
Construct a positions array int[] pos, such that pos[i] refers to the position (index) of a[i] in the shuffled array.
Rearrange the original array int[] a, according to this positions array pos.
/** Shuffle the array a. */
void shuffle(int[] a) {
    // Step 1
    int[] pos = contructRearrangementArray(a);
    // Step 2
    rearrange(a, pos);
}
/**
 * Rearrange the given array a according to the positions array pos.
 */
private static void rearrange(int[] a, int[] pos)
{
    // By definition 'pos' should not contain any duplicates, otherwise rearrange() can run forever.
    // Do the above sanity check.
    for (int i = 0; i < pos.length; i++) {
        while (i != pos[i]) {
            // This while loop completes one cycle in the array
            swap(a, i, pos[i]);
            swap(pos, i, pos[i]);
        }
    }
}
/** Swap the ith element in a with the jth element. */
public static void swap(int[] a, int i, int j)
{
    int temp = a[i];
    a[i] = a[j];
    a[j] = temp;
}
As an example, for the case of outShuffle the following would be an implementation of contructRearrangementArray().
/**
 * array     : 1 2 3 4 5 6 7 8
 * pos       : 0 2 4 6 1 3 5 7
 * outshuffle: 1 5 2 6 3 7 4 8 (outer boundaries remain the same)
 */
public int[] contructRearrangementArray(int[] a)
{
    if (a.length % 2 != 0) {
        throw new IllegalArgumentException("Cannot outshuffle odd sized array");
    }
    int[] pos = new int[a.length];
    for (int i = 0; i < pos.length; i++) {
        pos[i] = i * 2 % (pos.length - 1);
    }
    pos[a.length - 1] = a.length - 1;
    return pos;
}
