Linked list loop detection - algorithm

How can we prove that moving the fast and slow pointers (one restarted from the beginning) by 1 makes their meeting point the loop node? I mean, I can't understand what guarantees that the meeting node is the loop node (i.e. the node where the cycle starts).
I am clear on tortoise-and-hare loop detection; basically I am asking about detecting the node where the cycle starts after the loop has been detected.

It's a very simple proof really. First, you prove that the slow pointer will match the fast pointer after at most n + k steps, where n is the number of links to the start of the cycle, and k is the length of the cycle. And then you prove that they will match again after exactly k further steps.
The point where they meet can be anywhere in the cycle.

Before trying to prove this formally, you should first look at an example so you can get a more intuitive understanding of, and can visualize, what is going on. Suppose you have the following linked list, in which the 3 (at index 3) points back to the 1 (at index 1):
[0| ]->[1| ]->[2| ]->[3| ]--+
        ^                   |
        |                   |
        +-------------------+
Walking through the logical progression, you can observe the following when incrementing slow by one position and fast by two:
slow = index 0; fast = index 0
slow = index 1; fast = index 2
slow = index 2; fast = index 1
slow = index 3; fast = index 3 (loop exists)
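If it helps to see the whole thing in code, here is a small Python sketch of both phases (my own code, not part of the original answer). The second phase works because, when the pointers first meet, the distance from the head to the cycle start is congruent, modulo the cycle length, to the distance remaining from the meeting point forward to the cycle start:

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def find_cycle_start(head):
    # Phase 1: tortoise/hare detection.
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow, fast = slow.next, fast.next.next
        if slow is fast:
            break
    else:
        return None  # fast fell off the end: no cycle
    # Phase 2: restart one pointer at the head; both now move one step at a
    # time, and they meet exactly at the node where the cycle begins.
    slow = head
    while slow is not fast:
        slow, fast = slow.next, fast.next
    return slow

# The example list above: 0 -> 1 -> 2 -> 3 -> back to 1
nodes = [Node(i) for i in range(4)]
for a, b in zip(nodes, nodes[1:]):
    a.next = b
nodes[3].next = nodes[1]
print(find_cycle_start(nodes[0]).value)  # 1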
Hope this helps!

Minimum distance to reach till end

Please help me with the below problem statement:
Bounce is a fast bunny. This time she faces the challenging task of completing all the trades on a number line.
Initially, Bounce is at position 0, and the trades to be performed are on the right side (position > 0).
She has two lists of equal length, one containing the value v[i] and the other the position pos[i] of each trade she needs to perform.
The given list pos is in strictly increasing order, that is, pos[i] < pos[i+1] for 1 <= i <= n-1 (1-based indexing), where n is the size of the list.
The trade values can be positive, negative or zero.
During the process she can never have a resource count strictly less than zero, and after finishing all the trades she must end at the rightmost trade position (even if its trade value is zero).
It is guaranteed that the sum of all trades is greater than or equal to zero.
Bounce can jump from any position to any other position. If she jumps from pos1 to pos2, the distance covered is |pos1 - pos2|, and this distance is added to the total distance covered.
Find the minimum total distance Bounce has to cover to complete all the trades and then end at the last (rightmost) trade position.
Constraints
1<=n<=10^5
-1000<=v[i]<=1000
1<=pos[i]<=10^8
Sample I/O: 1
Input:
n = 4
v = [2, -3, 1, 2]
pos = [1, 2, 3, 4]
Output:
6
Explanation:
Number of trades = 4, v = {2, -3, 1, 2}
position = {1, 2, 3, 4}
at x=1: we gain 2 resources and the resource count is 2
at x=2: we can't trade, as we have only 2 resources and this trade needs 3
at x=3: we gain 1 more resource and the count becomes 3 (now go back to x=2, finish the pending trade, and come back)
distance covered so far = 3+1+1 = 5
at x=4: we gain 2 more resources and exit
Hence, total distance covered = 5+1 = 6
Sample I/O: 2
Input:
n = 4
v = [2, -3, -1, 2]
pos = [1, 2, 3, 4]
Output:
8
I was asked this question in an interview and wasn't able to answer it, and I'm unable to solve it even now. I tried to relate it to many concepts, like DAGs, maximum sum, and Kadane's algorithm, but none was helpful.
How should one approach this question, and how can it be related to an existing algorithm?
It is a past interview question for which I don't have a link. I just want to know what I could have done at the time that would have solved it.
A greedy algorithm works here: as you walk forward, whenever the accumulated result would go negative, you know that you'll have to come back to this position some time later. This means that every next step counts three times (forward, backward and forward again). Since you know the conflicting negative trade amount will eventually be accumulated, you might as well account for it immediately, knowing that you will have to triple the distance of the following steps until the accumulated amount is non-negative again.
So here is how that algorithm can be implemented in JavaScript. The two examples are run:
function minDistance(v, p) {
    let distance = 0;
    let position = 0;
    let resources = 0;
    for (let i = 0; i < v.length; i++) {
        let step = p[i] - position;
        if (resources < 0) distance += step * 3; // need to get back & forth here
        else distance += step;
        resources += v[i]; // all trades have to be performed anyway
        position = p[i];
    }
    return distance;
}
console.log(minDistance([2,-3,1,2], [1,2,3,4])); // 6
console.log(minDistance([2,-3,-1,2], [1,2,3,4])); // 8

Find k-th element in skiplist - explanation needed

We're learning about skip lists at my university and we have to find the k-th element in the skip list. I haven't found anything about this on the internet, since the skip list is not really a popular data structure. W. Pugh, in his original article, wrote:
Each element x has an index pos(x). We use this value in our invariants but do not store it. The index of the header is zero, the index of the first element is one and so on. Associated with each forward pointer is a measurement, fDistance, of the distance traversed by that pointer:
x→fDistance[i] = pos(x→forward[i]) – pos(x).
Note that the distance traversed by a level 1 pointer is always 1, so some storage economy is possible here at the cost of a slight increase in the complexity of the algorithms.
SearchByPosition(list, k)
    if k < 1 or k > size(list) then return bad-index
    x := list→header
    pos := 0
    -- loop invariant: pos = pos(x)
    for i := list→level downto 1 do
        while pos + x→fDistance[i] ≤ k do
            pos := pos + x→fDistance[i]
            x := x→forward[i]
    return x→value
The problem is, I still don't get what is going on here. How do we know positions of elements without storing them? How do we calculate fDistance from pos(x) if we don't store it? If we go from the highest level of the skiplist, how do we know how many nodes on level 0 (or 1, the lowest one anyway) we skip this way?
I'm going to assume you're referring to how to find the k-th smallest (or largest) element in a skip list. This is a rather standard assumption I think, otherwise you have to clarify what you mean.
I'll refer to the GIF on wikipedia in this answer: https://en.wikipedia.org/wiki/Skip_list
Let's say you want to find the k = 5 smallest element.
You start from the highest level (4 in the figure). How many elements would you skip from 30 to NIL? 6 (we also count the 30). That's too much.
Go down a level. How many skipped from 30 to 50? 2: 30 and 40.
So we reduced the problem to finding the k = 5 - 2 = 3 smallest element starting at 50 on level 3.
How many skipped from 50 to NIL? 4, that's one too many.
Go down a level. How many skipped from 50 to 70? 2. Now find the 3 - 2 = 1 smallest element starting from 70 on level 2.
How many skipped from 70 to NIL? 2, one too many.
From 70 to 90 on level 1? 1 (itself). So the answer is 70.
So you need to store how many nodes are skipped for each node at each level and use that extra information in order to get an efficient solution. That seems to be what fDistance[i] does in your code.
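To make this concrete, here is a small Python sketch of SearchByPosition over a hand-built skip list. The values, levels and fdist numbers below are my own example (assigned by hand rather than randomly, to keep it deterministic); a real implementation would also maintain fdist during insertion:

class Node:
    def __init__(self, value, level):
        self.value = value
        self.forward = [None] * level  # forward[i] = next node on level i+1
        self.fdist = [0] * level       # fdist[i] = positions skipped by forward[i]

def search_by_position(header, size, k):
    # Return the k-th element (1-based), walking from the top level down.
    if k < 1 or k > size:
        raise IndexError("bad index")
    x, pos = header, 0                 # loop invariant: pos = pos(x)
    for i in reversed(range(len(header.forward))):
        while x.forward[i] is not None and pos + x.fdist[i] <= k:
            pos += x.fdist[i]
            x = x.forward[i]
    return x.value

# Hand-built skip list over 10, 20, 30, 40, 50 (positions 1..5):
header = Node(None, 3)
n10, n20, n30, n40, n50 = Node(10, 1), Node(20, 2), Node(30, 1), Node(40, 3), Node(50, 1)
for a, b in [(header, n10), (n10, n20), (n20, n30), (n30, n40), (n40, n50)]:
    a.forward[0], a.fdist[0] = b, 1    # level-1 pointers always skip exactly 1
header.forward[1], header.fdist[1] = n20, 2
n20.forward[1], n20.fdist[1] = n40, 2
header.forward[2], header.fdist[2] = n40, 4

print(search_by_position(header, 5, 3))  # -> 30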

How to adapt Fenwick tree to answer range minimum queries

A Fenwick tree is a data structure that gives an efficient way to answer two main queries:
add a value at a particular index of the array: update(index, value)
find the sum of elements from 1 to N: find(n)
Both operations are done in O(log(n)) time, and I understand the logic and implementation. It is not hard to implement a bunch of other operations, like finding the sum from N to M.
I wanted to understand how to adapt the Fenwick tree for RMQ (range minimum queries). It is obvious how to change the Fenwick tree for the first two operations. But I am failing to figure out how to find the minimum on the range from N to M.
After searching for solutions, the majority of people think that this is not possible, and a small minority claims that it actually can be done (approach1, approach2).
The first approach (written in Russian; based on my Google Translate it has zero explanation and only two functions) relies on three arrays (initial, left and right) and, upon my testing, was not working correctly for all possible test cases.
The second approach requires only one array and, based on the claims, runs in O(log^2(n)); it also has close to no explanation of why and how it should work. I have not tried to test it.
In light of the controversial claims, I wanted to find out whether it is possible to augment a Fenwick tree to answer update(index, value) and findMin(from, to).
If it is possible, I would be happy to hear how it works.
Yes, you can adapt Fenwick trees (binary indexed trees) to:
update the value at a given index in O(log n)
query the minimum value in a range in O(log n) (amortized)
We need 2 Fenwick trees and an additional array holding the real values for nodes.
Suppose we have the following array:
index: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
value: 1 0 2 1 1 3 0 4 2  5  2  2  3  1  0
We wave a magic wand and the following trees appear:
Note that in both trees each node represents the minimum value for all nodes within that subtree. For example, in BIT2 node 12 has value 0, which is the minimum value for nodes 12,13,14,15.
Queries
We can efficiently query the minimum value for any range by calculating the minimum of several subtree values and one additional real node value. For example, the minimum value for the range [2,7] can be determined by taking the minimum of BIT2_Node2 (representing nodes 2,3), BIT1_Node7 (representing node 7), BIT1_Node6 (representing nodes 5,6) and REAL_4, therefore covering all nodes in [2,7]. But how do we know which subtrees we want to look at?
Query(int a, int b) {
    int val = infinity // always holds the known min value for our range

    // Start traversing the first tree, BIT1, from the beginning of range, a
    int i = a
    while (parentOf(i, BIT1) <= b) {
        val = min(val, BIT2[i]) // Note: traversing BIT1, yet looking up values in BIT2
        i = parentOf(i, BIT1)
    }

    // Start traversing the second tree, BIT2, from the end of range, b
    i = b
    while (parentOf(i, BIT2) >= a) {
        val = min(val, BIT1[i]) // Note: traversing BIT2, yet looking up values in BIT1
        i = parentOf(i, BIT2)
    }

    val = min(val, REAL[i]) // Explained below

    return val
}
It can be mathematically proven that both traversals will end in the same node. That node is a part of our range, yet it is not a part of any subtrees we have looked at. Imagine a case where the (unique) smallest value of our range is in that special node. If we didn't look it up our algorithm would give incorrect results. This is why we have to do that one lookup into the real values array.
To help understand the algorithm I suggest you simulate it with pen & paper, looking up data in the example trees above. For example, a query for range [4,14] would return the minimum of values BIT2_4 (rep. 4,5,6,7), BIT1_14 (rep. 13,14), BIT1_12 (rep. 9,10,11,12) and REAL_8, therefore covering all possible values [4,14].
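The pseudocode leaves parentOf abstract. A possible realization, inferred from the node ranges in the example above (BIT1[i] covers [i - lowbit(i) + 1, i] and BIT2[i] covers [i, i + lowbit(i) - 1]) — this is my reading, not code from the original answer:

def lowbit(i):
    return i & -i  # lowest set bit of i

def parent_bit1(i):
    # BIT1[i] covers [i - lowbit(i) + 1, i], so from the left end of the
    # query range we jump right, past the block BIT2[i] just accounted for.
    return i + lowbit(i)

def parent_bit2(i):
    # BIT2[i] covers [i, i + lowbit(i) - 1], so from the right end of the
    # query range we jump left, past the block BIT1[i] just accounted for.
    return i - lowbit(i)

For the [2,7] query, the first traversal then visits i = 2 and stops at i = 4 (since parent_bit1(4) = 8 > 7), while the second visits i = 7 and i = 6 and also stops at i = 4 — the common node whose REAL value is checked last, matching the walkthrough.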
Updates
Since a node represents the minimum value of itself and its children, changing a node will affect its parents, but not its children. Therefore, to update a tree we start from the node we are modifying and move up all the way to the fictional root node (0 or N+1 depending on which tree).
Suppose we are updating some node in some tree:
If new value < old value, we will always overwrite the value and move up
If new value == old value, we can stop since there will be no more changes cascading upwards
If new value > old value, things get interesting.
If the old value still exists somewhere within that subtree, we are done
If not, we have to find the new minimum value between real[node] and each tree[child_of_node], change tree[node] and move up
Pseudocode for updating node with value v in a tree:
while (node <= n+1) {
    if (v > tree[node]) {
        if (oldValue == tree[node]) {
            v = min(v, real[node])
            for-each child {
                v = min(v, tree[child])
            }
        } else break
    }
    if (v == tree[node]) break
    tree[node] = v
    node = parentOf(node, tree)
}
Note that oldValue is the original value we replaced, whereas v may be reassigned multiple times as we move up the tree.
Binary Indexing
In my experiments, range minimum queries were about twice as fast as a segment tree implementation, and updates were marginally faster. The main reason for this is using super-efficient bitwise operations for moving between nodes. They are very well explained here. Segment trees are really simple to code, though, so think about whether the performance advantage is really worth it: the update method of my Fenwick RMQ is 40 lines and took a while to debug. If anyone wants my code, I can put it on GitHub. I also wrote a brute-force solution and test generators to make sure everything works.
I had help understanding this subject & implementing it from the Finnish algorithm community. Source of the image is http://ioinformatics.org/oi/pdf/v9_2015_39_44.pdf, but they credit Fenwick's 1994 paper for it.
The Fenwick tree structure works for addition because addition is invertible. It doesn't work for minimum because, as soon as you have a cell that's supposed to be the minimum of two or more inputs, you've potentially lost information.
If you're willing to double your storage requirements, you can support RMQ with a segment tree that is constructed implicitly, like a binary heap. For an RMQ with n values, store the n values at locations [n, 2n) of an array. Locations [1, n) are aggregates, with the formula A(k) = min(A(2k), A(2k+1)). Location 2n is an infinite sentinel. The update routine should look something like this.
def update(n, a, i, x): # value[i] = x
    i += n
    a[i] = x
    # update the aggregates
    while i > 1:
        i //= 2
        a[i] = min(a[2*i], a[2*i+1])
The multiplies and divides here can be replaced by shifts for efficiency.
The RMQ pseudocode is more delicate. Here's another untested and unoptimized routine.
from math import inf

def rmq(n, a, i, j): # min(value[i:j])
    i += n
    j += n
    x = inf
    while i < j:
        if i % 2 == 0:
            i //= 2
        else:
            x = min(x, a[i])
            i = i//2 + 1
        if j % 2 == 0:
            j //= 2
        else:
            x = min(x, a[j-1])
            j //= 2
    return x
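Here is a minimal usage sketch under the layout described above (n = 4 values stored at locations [n, 2n), aggregates built bottom-up, infinite sentinel at location 2n; the concrete numbers are my own example):

from math import inf

n = 4
a = [0] * n + [5, 2, 7, 3] + [inf]  # values live in a[n:2n]; a[2n] is the sentinel
for k in range(n - 1, 0, -1):       # build aggregates: A(k) = min(A(2k), A(2k+1))
    a[k] = min(a[2 * k], a[2 * k + 1])

print(rmq(n, a, 1, 3))  # min(value[1:3]) -> 2
update(n, a, 2, 0)      # value[2] = 0
print(rmq(n, a, 0, 4))  # min over everything -> 0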

How to sort an integer array in lexicographical order using only adjacent swaps, for a given max # of swaps (m)

I was asked that one during a phone interview. Of course the other questions were fine, but that one I'm still not sure of the best answer to.
At first I thought it smelled of a radix sort, but since you can only use adjacent swaps, of course not.
So I think it's more of a bubble-sort-type algorithm, which is what I tried to do, but the "max number of swaps" bit makes it very tricky (along with the lexicographical part, but I guess that's just a comparison side issue).
I guess my algorithm would be something like this (of course now I have better ideas than during the interview!):
int index = 0;
while (swapsLeft > 0 && index < array.length)
{
    // find the smallest item within swap range
    int smallestIndex = index;
    for (int i = index; i <= Math.min(index + swapsLeft, array.length - 1); i++)
    {
        // of course < is not correct, we need to compare as string or "by radix" or something
        if (array[i] < array[smallestIndex])
            smallestIndex = i;
    }
    // if we found a smaller item within swap range then bubble it to the front with adjacent swaps
    for (int i = smallestIndex; i > index; i--)
    {
        int temp = array[i];
        array[i] = array[i - 1];
        array[i - 1] = temp;
        swapsLeft--;
    }
    // continue with the next item in the array
    index++; // edit: could probably optimize to index = index + 1 + (smallestIndex - index)?
}
Does that seem about right?
Who has a better solution? I'm curious about an efficient / proper way to do this.
I am actually working on writing this exact code for my Algorithms class in Java, for my Software Engineering bachelor's degree. So I will help you solve this by explaining the problem and the steps to solve it. You are going to need at least 2 methods to do this more than once.
First you take your first value, just to make this easy lets keep it small and simple.
1 2 3 4
You should be using an array for sorting. To find the next number lexicographically, you start out on the far right, move to the left, and stop when you find the first decrease. You have to replace that smaller value with the next largest value to its right. So for our example we would be replacing 3 with 4. So our next number is:
1 2 4 3
That was pretty simple, right? Don't worry, it gets much harder. Let's now try to get the next number using:
1 4 3 2
OK, so we start out on the far right and move left till our first decrease. 2 is smaller than 3, 3 is smaller than 4, but 4 is larger than 1. So we have our first decrease at 1. Now we move back to the right till we hit the last number that is larger than 1. 4 is larger than 1, 3 is larger than 1, and 2 is larger than 1. With 2 being the last number larger than 1, 2 needs to replace 1. But what about the rest of the numbers? Well, they are already in order, just backwards of what we need. So we flip the order and come up with:
2 1 3 4
So you need a method that does that reordering, and another method that calls it in a loop until you have generated the required number of permutations.
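For what it's worth, here is a compact Python sketch of the next-permutation step described above (my own code, following the walkthrough, not the asker's Java):

def next_permutation(a):
    # Transform list a into its lexicographic successor, in place.
    # Returns False if a is already the last (fully descending) permutation.
    i = len(a) - 2
    while i >= 0 and a[i] >= a[i + 1]:  # find the first decrease from the right
        i -= 1
    if i < 0:
        return False
    j = len(a) - 1
    while a[j] <= a[i]:                 # rightmost element larger than a[i]
        j -= 1
    a[i], a[j] = a[j], a[i]
    a[i + 1:] = reversed(a[i + 1:])     # suffix is descending; flip it
    return True

a = [1, 4, 3, 2]
next_permutation(a)
print(a)  # [2, 1, 3, 4], matching the walkthrough above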

How to master in-place array modification algorithms?

I am preparing for a software job interview, and I am having trouble with in-place array modifications.
For example, in the out-shuffle problem you interleave two halves of an array so that 1 2 3 4 5 6 7 8 would become 1 5 2 6 3 7 4 8. This question asks for a constant-memory solution (and linear-time, although I'm not sure that's even possible).
First I thought a linear algorithm was trivial, but then I couldn't work it out. Then I did find a simple O(n^2) algorithm, but it took me a long time. And I still can't find a faster solution.
I remember also having trouble solving a similar problem from Bentley's Programming Pearls, column 2:
Rotate an array left by i positions (e.g. abcde rotated by 2 becomes cdeab), in time O(n) and with just a couple of bytes extra space.
Does anyone have tips to help wrap my head around such problems?
About an O(n) time, O(1) space algorithm for out-shuffle
Doing an out-shuffle in O(n) time and O(1) space is possible, but it is tough. Not sure why people think it is easy and are suggesting you try something else.
The following paper has an O(n) time and O(1) space solution (though it is for in-shuffle, doing in-shuffle makes out-shuffle trivial):
http://arxiv.org/PS_cache/arxiv/pdf/0805/0805.1598v1.pdf
About a method to tackle in-place array modification algorithms
In-place modification algorithms could become very hard to handle.
Consider a couple:
An in-place out-shuffle in linear time. Uses number theory.
In-place merge sort: open for a few years; an algorithm eventually came, but it was too complicated to be practical. Uses very complicated bookkeeping.
Sorry, if this sounds discouraging, but there is no magic elixir that will solve all in-place algorithm problems for you. You need to work with the problem, figure out its properties, and try to exploit them (as is the case with most algorithms).
That said, for array modifications where the result is a permutation of the original array, you can try the method of following the cycles of the permutation. Basically, any permutation can be written as a disjoint set of cycles (see John's answer too). For instance the permutation:
1 4 2 5 3 6
of 1 2 3 4 5 6 can be written as
1 -> 1
2 -> 3 -> 5 -> 4 -> 2
6 -> 6.
You can read the arrow as 'goes to'.
So to permute the array 1 2 3 4 5 6 you follow the three cycles:
1 goes to 1.
6 goes to 6.
2 goes to 3, 3 goes to 5, 5 goes to 4, and 4 goes to 2.
To follow this long cycle, you can use just one temp variable. Store 3 in it. Put 2 where 3 was. Now put 3 in 5 and store 5 in the temp and so on. Since you only use constant extra temp space to follow a particular cycle, you are doing an in-place modification of the array for that cycle.
Now if I gave you a formula for computing where an element goes to, all you now need is the set of starting elements of each cycle.
A judicious choice of the starting points of the cycles can make the algorithm easy. If you come up with the starting points in O(1) space, you now have a complete in-place algorithm. This is where you might actually have to get familiar with the problem and exploit its properties.
Even if you didn't know how to compute the starting points of the cycles, but had a formula to compute the next element, you could use this method to get an O(n) time in-place algorithm in some special cases.
For instance: if you knew the array of unsigned integers held only positive integers.
You can now follow the cycles, but negate the numbers in them as an indicator of 'visited' elements. Now you can walk the array and pick the first positive number you come across and follow the cycles for that, making the elements of the cycle negative and continue to find untouched elements. In the end, you just make all the elements positive again to get the resulting permutation.
You get an O(n) time and O(1) space algorithm! Of course, we kind of 'cheated' by using the sign bits of the array integers as our personal 'visited' bitmap.
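A small Python sketch of that sign trick — here next_pos is an assumed formula telling where the element at index i goes, and the function name is mine:

def permute_in_place(a, next_pos):
    # Permute a list of positive ints in place, so the element at index i
    # ends up at index next_pos(i). Negated entries mark visited slots.
    n = len(a)
    for start in range(n):
        if a[start] < 0:
            continue                  # already moved as part of an earlier cycle
        i, val = start, a[start]
        while True:
            j = next_pos(i)
            val, a[j] = a[j], -val    # drop val into its slot, marked visited
            i = j
            if i == start:
                break
    for i in range(n):
        a[i] = -a[i]                  # make all the elements positive again

a = [1, 2, 3, 4, 5]
permute_in_place(a, lambda i: (i + 2) % 5)  # cyclic shift: i goes to i+2 mod n
print(a)  # [4, 5, 1, 2, 3]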
Even if the array was not necessarily integers, this method (of following the cycles, not the hack of sign bits :-)) can actually be used to tackle the two problems you state:
The in-shuffle (or out-shuffle) problem: When 2n+1 is a power of 3, it can be shown (using number theory) that 1, 3, 3^2, etc. are in different cycles and that all cycles are covered by them. Combine this with the fact that the in-shuffle is amenable to divide and conquer, and you get an O(n) time, O(1) space algorithm (the formula is i -> 2*i modulo 2n+1). Refer to the above paper for more details.
The cyclic-shift-an-array problem: Cyclic shifting an array of size n by k also gives a permutation of the array (given by the formula i goes to i+k modulo n), and can also be solved in linear time and in place using the cycle-following method. In fact, in terms of the number of element exchanges, the cycle-following method is better than the 3-reversals algorithm. Of course, cycle following can kill the cache because of its access patterns, and in practice the 3-reversals algorithm might actually fare better.
As for interviews, if the interviewer is a reasonable person, they will be looking at how you think and approach the problem and not whether you actually solve it. So even if you don't solve a problem, I think you should not be discouraged.
The basic strategy with in-place algorithms is to figure out the rule for moving an entry from slot N to slot M.
So, take your shuffle, for instance: if A is a card's current position, B its new position, and N the number of cards, the rules for the first half of the deck are different from the rules for the second half of the deck:
// A is the current location, B is the new location.
// this math assumes that the first card is card 0
if (A < N/2)
    B = A * 2;
else
    B = (A - N/2) * 2 + 1;
Now we know the rule, we just have to move each card. Each time we move a card, we calculate its new location B, save the card that is currently in slot B, place the card we are holding in slot B, then let B be the new A and loop back to the top of the algorithm. Each card we place displaces another card, which becomes the next card to be moved.
I think the analysis is easier if we are 0 based rather than 1 based, so
0 1 2 3 4 5 6 7 // before
0 4 1 5 2 6 3 7 // after
So we want to move 1->2 2->4 4->1 and that completes a cycle
then move 3->6 6->5 5->3 and that completes a cycle
and we are done.
Now we know that card 0 and card N-1 don't move, so we can ignore those, and we know that we only need to move N-2 cards in total. The only sticky bit is that there are 2 cycles, 1,2,4,1 and 3,6,5,3. When we get to card 1 the second time, we need to move on to card 3.
int A = 1;
int B;
int N = 8;
card ary[N]; // Our array of cards
card a = ary[A];
for (int i = 0; i < N - 2; ++i) // N-2 cards actually move
{
    if (A < N/2)
        B = A * 2;
    else
        B = (A - N/2) * 2 + 1;
    card b = ary[B]; // save the card being displaced
    ary[B] = a;      // place the card we are holding
    a = b;           // the displaced card moves next
    A = B;
    if (A == 1)      // finished the first cycle:
    {
        A = 3;       // jump to the start of the second one
        a = ary[A];
    }
}
Now this code only works for the 8 card example, because of that if test that moves us from 1 to 3 when we finish the first cycle. What we really need is a general rule to recognize the end of the cycle, and where to go to start the next one.
That rule could be mathematical if you can think of a way, or you could keep track of which places you had visited in a separate array, and when A is back to a visited place, you could then scan forward in your array looking for the first non-visited place.
For your in-place algorithm to be O(n), the solution will need to be mathematical.
I hope this breakdown of the thinking process is helpful to you. If I was interviewing you, I would expect to see something like this on the whiteboard.
Note: As Moron points out, this doesn't work for all values of N, it's just an example of the sort of analysis that an interviewer is looking for.
Frank,
For programming with loops and arrays, nothing beats David Gries's textbook The Science of Programming. I studied it over 20 years ago, and there are ideas that I still use every day. It is very mathematical and will require real effort to master, but that effort will repay you many times over for your whole career.
Complementing Aryabhatta's answer:
There is a general method to "follow the cycles" even without knowing the starting positions of each cycle or using memory to mark visited cycles. This is especially useful if you need O(1) memory.
For each position i in the array, follow the cycle without moving any data yet, until you reach...
the starting position i: end of the cycle. This is a new cycle: follow it again, moving the data this time.
a position lower than i: this cycle was already visited; nothing to do with it.
Of course this has a time overhead (O(n^2), I believe) and has the cache problems of the general "follow the cycles" method, as sketched below.
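In Python, that dry-run idea might look like this (again with an assumed next_pos formula and function name of my own; O(1) extra memory, O(n^2) worst-case time):

def permute_in_place_no_marks(a, next_pos):
    # Follow cycles without any visited marks: each cycle is performed
    # exactly once, from its smallest index, detected by a dry run.
    n = len(a)
    for start in range(n):
        j = next_pos(start)
        while j > start:          # dry run: walk the cycle without moving data
            j = next_pos(j)
        if j < start:
            continue              # cycle already handled from a smaller index
        val, i = a[start], next_pos(start)
        while i != start:         # j == start: now follow the cycle for real
            a[i], val = val, a[i]
            i = next_pos(i)
        a[start] = val

a = [1, 2, 3, 4, 5]
permute_in_place_no_marks(a, lambda i: (i + 2) % 5)
print(a)  # [4, 5, 1, 2, 3]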
For the first one, let's assume n is even. You have:
first half : 1 2 3 4
second half: 5 6 7 8
Let x1 = first[1], x2 = second[1].
Now, you have to print one from the first half, one from the second, one from the first, one from the second...
Meaning first[1], second[1], first[2], second[2], ...
Obviously, you don't keep two halves in memory, as that will be O(n) memory. You keep pointers to the two halves. Do you see how you'd do that?
The second is a bit harder. Consider:
12345
abcde
..cde
.....ab
..cdeab
cdeab
Do you notice anything? You should notice that the question basically asks you to move the first i characters to the end of your string, without affording you the luxury of copying the last n - i characters into a buffer, then appending the first i, and then returning the buffer. You need to do it with O(1) memory.
To figure how to do this you basically need a lot of practice with these kinds of problems, as with anything else. Practice makes perfect basically. If you've never done these kinds of problems before, it's unlikely you'll figure it out. If you have, then you have to think about how you can manipulate the substrings and or indices such that you solve your problem under the given constraints. The general rule is to work and learn as much as possible so you'll figure out the solutions to these problems very fast when you see them. But the solution differs quite a bit from problem to problem. There's no clear recipe for success I'm afraid. Just read a lot and understand the stuff you read before you move on.
The logic for the second problem is this: what happens if we reverse the substring [1, 2] and the substring [3, 5], and then reverse the whole thing? We have, in general:
1, 2, 3, 4, ..., i, i + 1, i + 2, ..., N
reverse [1, i] =>
i, i - 1, ..., 4, 3, 2, 1, i + 1, i + 2, ..., N
reverse [i + 1, N] =>
i, i - 1, ..., 4, 3, 2, 1, N, ..., i + 1
reverse [1, N] =>
i + 1, ..., N, 1, 2, 3, 4, ..., i - 1, i
which is what you wanted. Writing the reverse function using O(1) memory should be trivial.
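As a quick Python sketch of the three-reversal rotation (0-based indices, rotating left by i; my own code, not from the answer above):

def rotate_left(a, i):
    # Rotate list a left by i positions with three in-place reversals:
    # O(n) time, O(1) extra space.
    def reverse(lo, hi):
        while lo < hi:
            a[lo], a[hi] = a[hi], a[lo]
            lo, hi = lo + 1, hi - 1
    n = len(a)
    if n == 0:
        return
    i %= n
    reverse(0, i - 1)      # reverse the first i elements
    reverse(i, n - 1)      # reverse the rest
    reverse(0, n - 1)      # reverse the whole thing

a = list("abcde")
rotate_left(a, 2)
print("".join(a))  # cdeab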
Generally speaking, the idea is to loop through the array once, while:
storing the value at the position you are at in a temporary variable,
finding the correct value for that position and writing it,
and either moving on to the next value, or figuring out what to do with your temporary value before continuing.
A general approach could be as follows:
Construct a positions array int[] pos, such that pos[i] refers to the position (index) of a[i] in the shuffled array.
Rearrange the original array int[] a, according to this positions array pos.
/** Shuffle the array a. */
void shuffle(int[] a) {
    // Step 1
    int[] pos = contructRearrangementArray(a);
    // Step 2
    rearrange(a, pos);
}

/**
 * Rearrange the given array a according to the positions array pos.
 */
private static void rearrange(int[] a, int[] pos)
{
    // By definition 'pos' should not contain any duplicates, otherwise rearrange() can run forever.
    // Do the above sanity check.
    for (int i = 0; i < pos.length; i++) {
        while (i != pos[i]) {
            // This while loop completes one cycle in the array
            swap(a, i, pos[i]);
            swap(pos, i, pos[i]);
        }
    }
}

/** Swap ith element in a with jth element. */
public static void swap(int[] a, int i, int j)
{
    int temp = a[i];
    a[i] = a[j];
    a[j] = temp;
}
As an example, for the case of outShuffle the following would be an implementation of contructRearrangementArray().
/**
 * array     : 1 2 3 4 5 6 7 8
 * pos       : 0 2 4 6 1 3 5 7
 * outshuffle: 1 5 2 6 3 7 4 8 (outer boundaries remain same)
 */
public int[] contructRearrangementArray(int[] a)
{
    if (a.length % 2 != 0) {
        throw new IllegalArgumentException("Cannot outshuffle odd sized array");
    }
    int[] pos = new int[a.length];
    for (int i = 0; i < pos.length; i++) {
        pos[i] = i * 2 % (pos.length - 1);
    }
    pos[a.length - 1] = a.length - 1;
    return pos;
}
