Dynamic programming problems - algorithm

I'm looking for some pointers about a dynamic programming problem. I cannot find any relevant information about how to solve this kind of problem. The only kind of problem I know how to solve using dynamic programming is when I have two sequences and create a matrix of those sequences. But I don't see how I can apply that to the following problem...
Suppose I have a set A = {7,11,33,71,111} and a number B. Then C, which is a subset of A, contains the elements from A that sum to B.
EXAMPLE:
A = {7,11,33,71,111}
If B = 18, then C = {7,11} (because 7+11 = 18)
If B = 3, then there is no solution
I'd be thankful for any help here; I just don't know how to think when solving this kind of problem. I cannot find any general method either, only some examples on gene sequences and stuff like that.

Dynamic programming is a broad category of solutions in which a partial solution is kept in some structure for the next iteration to build upon, instead of recalculating the intermediate results over and over again.
If I were to take a dynamic approach to this particular problem, I would probably keep a running list of every sum calculable from the previous step, as well as the set used to compute that sum.
So, for example, after the first iteration my working set would contain {null, 7}. Then I would add 11 to everything in that set while also keeping the set's existing members (let's pretend that null+11 = 11 for now). Now my working set would contain {null, 7, 11, 18}. For each value in the set I would keep track of how I got that result: 7 maps to the subset {7} and 18 maps to the subset {7,11}. Iteration ends when either A) the target value is generated or B) the original set is exhausted without finding the value. You could optimize the negative case with an ordered set, but I'll leave figuring that out to you.
There is more than one way to approach this problem. This is a dynamic solution, and it's not very efficient since in the worst case it builds a set of 2^(size of A) members. But the general approach corresponds to what dynamic programming was created to solve.
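As a minimal sketch of that idea (my own illustration, assuming it is enough to remember one subset per reachable sum), in C++ it could look like this:
#include <iostream>
#include <map>
#include <utility>
#include <vector>
int main() {
    std::vector<int> A = {7, 11, 33, 71, 111};
    int B = 18;
    // reachable[s] = one subset of A that sums to s; start with the empty sum.
    std::map<int, std::vector<int>> reachable = {{0, {}}};
    for (int x : A) {
        std::map<int, std::vector<int>> grown = reachable;   // keep the old sums too
        for (const auto& kv : reachable) {
            std::vector<int> withX = kv.second;
            withX.push_back(x);
            grown.emplace(kv.first + x, withX);              // add x to every old sum
        }
        reachable = std::move(grown);
        if (reachable.count(B)) break;                       // target reached, stop early
    }
    if (reachable.count(B)) {
        std::cout << "C = {";
        for (int i = 0; i < (int)reachable[B].size(); ++i)
            std::cout << (i ? "," : "") << reachable[B][i];
        std::cout << "}\n";                                  // prints C = {7,11}
    } else {
        std::cout << "no solution\n";
    }
}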

I think whether a dynamic approach works depends on B and on the number of elements of A.
I would suggest a dynamic approach in the case where B * (number of elements of A) <= 1,000,000.
Let F[i,j] be true if you can build the sum i using only A[1]..A[j], and false otherwise.
At each step you have two choices:
use A[j], in which case F[i,j] = F[i-A[j], j-1]
don't use A[j], in which case F[i,j] = F[i,j-1]
Then if F[B,j] = 1 for some j, you can build B.
Below is example code:
#include <stdio.h>
#include <iostream>
using namespace std;
int f[1000][1000], a[1000], B, n;
// f[i][j] = 1 => the sum i can be built using a[1]..a[j], 0 otherwise
// (with these array sizes, B and n must stay below 1000)
int tmax(int a, int b) {
    if (a > b) return a;
    return b;
}
void DP() {
    // the empty subset builds the sum 0, whatever prefix of a[] we may use
    for (int j = 0; j <= n; j++) f[0][j] = 1;
    for (int i = 1; i <= B; i++)
        for (int j = 1; j <= n; j++) {
            f[i][j] = f[i][j-1];                            // don't use a[j]
            if (a[j] <= i)
                f[i][j] = tmax(f[i-a[j]][j-1], f[i][j]);    // use a[j]
        }
}
int main() {
    cin >> n >> B;
    for (int i = 1; i <= n; i++) cin >> a[i];
    DP();
    bool ok = false;
    for (int i = 1; i <= n; i++) {
        if (f[B][i] == 1) {
            cout << "YES";
            ok = true;
            break;
        }
    }
    if (!ok) cout << "NO";
}

Related

Code Jam 2008 "Price Is Wrong" - Explanation

I have been going through the Code Jam archives, and I am really struggling with the solution of "The Price Is Wrong" from Code Jam 2008.
The problem statement is -
You're playing a game in which you try to guess the correct retail price of various products for sale. After guessing the price of each product in a list, you are shown the same list of products sorted by their actual prices, from least to most expensive. (No two products cost the same amount.) Based on this ordering, you are given a single chance to change one or more of your guesses.
Your program should output the smallest set of products such that, if you change your prices for those products, the ordering of your guesses will be consistent with the correct ordering of the product list. The products in the returned set should be listed in alphabetical order. If there are multiple smallest sets, output the set which occurs first lexicographically.
For example, assume these are your initial guesses:
code = $20
jam = $15
foo = $40
bar = $30
google = $60
If the correct ordering is code jam foo bar google, then you would need to change two of your prices in order to match the correct ordering. You might change one guess to read jam = $30 and another guess to read bar = $50, which would match the correct ordering and produce the output set bar jam. However, the output set bar code comes before bar jam lexicographically, and you can match the correct ordering by changing your guesses for these items as well.
Example
Input
code jam foo bar google
20 15 40 30 60
Output
Case #1: bar code
I am not asking for the exact solution, but for how I should proceed with the problem.
Thanks in advance.
Okay after struggling a bit, I got both small & large cases accepted.
Before posting my ugly ugly code, here is some brief explanation:
First, based on the problem statement and the limits of the parameters, it is intuitive to think that the core of the problem is simply finding a Longest Increasing Subsequence (LIS). Spotting that quickly does rely on experience, though (as do most problems in competitive programming).
Think of it like this: if I can find a set of items whose prices form a LIS, then the items left over are the smallest set you need to change.
But you need to fulfil one more requirement, which I think is the hardest part of this problem: when there are multiple smallest sets, you have to find the lexicographically smallest one. That is the same as finding the LIS whose set of item names is lexicographically largest (we then throw those items away, and the items left are the answer).
To do this, there are many ways, but as the limits are so small (N <= 64), you can use basically whatever algorithm (O(N^4)? O(N^5)? Go ahead!)
My accepted method is to add a stupid twist into the traditional O(N^2) dynamic programming for LIS:
Let DP(i) be the length of the LIS in number[0..i] under the condition that number i is chosen.
Also use an array of set<string> to store the optimal set of item names that achieves DP(i); we update this array alongside the dynamic programming for DP(i).
Then, after the dynamic programming, simply find the lexicographically largest set of item names among those achieving the maximum length, and exclude those items from the original item set. The items left are the answer.
Here is my accepted ugly ugly code in C++14. Most of the lines are there to handle the troublesome I/O; please tell me if it's not clear, and I can provide a few examples to elaborate more.
#include <bits/stdc++.h>
using namespace std;
int T, n, a[70], dp[70], mx = 0;
vector<string> name;
set<string> ans, dp2[70];
string s;
char c;
// returns true if st1 should replace st2 (sizes differ, or st1 is
// lexicographically larger when compared element by element)
bool compSet(set<string> st1, set<string> st2) {
    if (st1.size() != st2.size()) return true;
    auto it1 = st1.begin();
    auto it2 = st2.begin();
    for (; it1 != st1.end(); it1++, it2++)
        if ((*it1) > (*it2)) return true;
        else if ((*it1) < (*it2)) return false;
    return false;
}
int main() {
    cin >> T;
    getchar();
    for (int qwe = 1; qwe <= T; qwe++) {
        mx = n = 0; s = ""; ans.clear(); name.clear();
        // read the space-separated product names
        while (c = getchar(), c != '\n') {
            if (c == ' ') n++, name.push_back(s), ans.insert(s), s = "";
            else s += c;
        }
        name.push_back(s); ans.insert(s); s = ""; n++;
        for (int i = 0; i < n; i++) cin >> a[i];
        getchar();
        for (int i = 0; i < n; i++)
            dp[i] = 1, dp2[i].clear(), dp2[i].insert(name[i]);
        // O(N^2) LIS; dp2[i] keeps the "largest" name set achieving dp[i]
        for (int i = 1; i < n; i++) {
            for (int j = 0; j < i; j++) {
                if (a[j] < a[i] && dp[j] + 1 >= dp[i]) {
                    dp[i] = dp[j] + 1;
                    set<string> tmp = dp2[j];
                    tmp.insert(name[i]);
                    if (compSet(tmp, dp2[i])) dp2[i] = tmp;
                }
            }
            mx = max(mx, dp[i]);
        }
        set<string> tmp;
        for (int i = 0; i < n; i++) {
            if (dp[i] == mx) if (compSet(dp2[i], tmp)) tmp = dp2[i];
        }
        // items not on the chosen LIS are the ones whose guesses change
        for (auto x : tmp)
            ans.erase(x);
        printf("Case #%d: ", qwe);
        for (auto it = ans.begin(); it != ans.end(); ) {
            cout << *it;
            if (++it != ans.end()) cout << ' ';
            else cout << '\n';
        }
    }
    return 0;
}
Well, based on the problem you have specified: suppose I told you that you don't need to report the order or the names of the products, but only
the number of product values that will change.
What would your answer be?
Basically the problem then reduces to the following statement:
You are given a list of numbers, and you want to make changes to the list so that the numbers end up in increasing order, while keeping the number of changed elements to a minimum.
How would you solve this?
If you find the Longest Increasing Subsequence (LIS) in your list of numbers, then you just need to subtract its length from the length of the list.
Why you ask?
Well, because if you want the number of changes made to the list to be minimal, then leaving the longest increasing subsequence as it is and changing the other values will definitely give the optimal answer.
Let's take an example.
We have: 2 10 4 6 8
How many changes would be made to this list?
The longest increasing subsequence is 2 4 6 8, of length 4.
So if we leave those 4 values as they are and change the remaining values, we only have to change 5 (list length) - 4 = 1 value.
Now, addressing your original problem, you need to print the product names. Well, if you exclude the elements present in the LIS, you should get your answer.
But wait!
What happens when you have many subsequences with the same LIS length? How will you choose the lexicographically smallest answer?
Well, why don't you think about that in terms of the LIS itself? This should be good enough to get you started, right?
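As a small sketch of the core idea (my own illustration, not part of the original answer), here is the classic O(N^2) DP for the LIS length; the number of guesses to change is the list length minus this value:
#include <algorithm>
#include <iostream>
#include <vector>
// Length of the longest strictly increasing subsequence, O(N^2).
int lisLength(const std::vector<int>& a) {
    int n = (int)a.size(), best = 0;
    std::vector<int> dp(n, 1);                   // dp[i] = LIS ending exactly at i
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < i; ++j)
            if (a[j] < a[i]) dp[i] = std::max(dp[i], dp[j] + 1);
        best = std::max(best, dp[i]);
    }
    return best;
}
int main() {
    std::vector<int> guesses = {20, 15, 40, 30, 60};         // the question's example
    std::cout << (int)guesses.size() - lisLength(guesses);   // 5 - 3 = 2 changes
    std::cout << '\n';
}
For the question's guesses this prints 2, matching the two changed prices in the example.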

Hoare Logic Loop variant on user specified value

I have the following problem:
The pre-condition is True.
int n = askUser();
int i = 0;
while (i < n) {
    ...
    i++;
}
The variant I am thinking of is: n-i.
However, I don't think there is anything that stops the user from supplying a negative value, and in that case the variant will be negative (which contradicts its definition).
Is it possible to specify the variant as |n| - i, or does n >= 0 have to be included as a pre-condition?
Any help or suggestions will be greatly appreciated.

finding the position of a fraction in farey sequence

For finding the position of a fraction in the Farey sequence, I tried to implement the algorithm given here http://www.math.harvard.edu/~corina/publications/farey.pdf under "initial algorithm", but I can't understand where I'm going wrong; I am not getting the correct answers. Could someone please point out my mistake?
E.g. for order n = 7 and the fractions 1/7 and 1/6 I get the same answer.
Here's what I've tried for a given order (n) and a fraction a/b:
sum = 0;
int A[100000];
A[1] = a;
for (i = 2; i <= n; i++)
    A[i] = i*a - a;
for (i = 2; i <= n; i++)
{
    for (j = i+i; j <= n; j += i)
        A[j] -= A[i];
}
for (i = 1; i <= n; i++)
    sum += A[i];
ans = sum/b;
Thanks.
Your algorithm doesn't use any particular properties of a and b. In the first part, every relevant entry of the array A is a multiple of a, but the factor is independent of a, b and n. If you set up the array ignoring the factor a, i.e. starting with A[1] = 1 and A[i] = i-1 for 2 <= i <= n, then after the nested loops the array contains the totients, i.e. A[i] = phi(i), no matter what a, b and n are. The sum of the totients from 1 to n is the number of elements of the Farey sequence of order n (plus or minus 1, depending on which of 0/1 and 1/1 are included in the definition you use). So your answer is always the approximation (a * number of terms)/b, which is close but not exact.
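For reference, with both endpoints 0/1 and 1/1 included, that element count is
|F_n| = 1 + \sum_{q=1}^{n} \varphi(q),
so for your n = 7 it comes to 1 + (1+1+2+2+4+2+6) = 19 fractions.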
I've not yet looked at how yours relates to the algorithm in the paper, check back for updates later.
Addendum: I finally had time to look at the paper. Your initialisation is not what they give. In their algorithm, A[q] is initialised to floor(x*q); for a rational x = a/b, the correct initialisation is
for (i = 1; i <= n; ++i) {
    A[i] = (a*i)/b;
}
In the remainder of your code, only ans = sum/b; has to be changed to ans = sum;.
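Putting the pieces together, a self-contained sketch of the corrected computation (my own illustration of the fix above) could look like this:
#include <iostream>
#include <vector>
// Position of a/b in the Farey sequence of order n, counting 0/1 as position 0.
// (Assumes 0 < a <= b <= n and gcd(a,b) == 1, so a/b really is a member.)
long long fareyPosition(long long a, long long b, int n) {
    std::vector<long long> A(n + 1, 0);
    for (int q = 1; q <= n; ++q)
        A[q] = (a * q) / b;                    // floor(q*a/b): counts all p/q <= a/b with p >= 1
    for (int q = 2; q <= n; ++q)               // sieve away fractions not in lowest terms
        for (int j = 2 * q; j <= n; j += q)
            A[j] -= A[q];
    long long pos = 0;
    for (int q = 1; q <= n; ++q)
        pos += A[q];
    return pos;
}
int main() {
    std::cout << fareyPosition(1, 7, 7) << '\n';   // 1  (F_7 = 0/1, 1/7, 1/6, ...)
    std::cout << fareyPosition(1, 6, 7) << '\n';   // 2
}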
A non-algorithmic way of finding the position t of a fraction in the Farey sequence of order n>1 is shown in Remark 7.10(ii)(a) of the paper, under m:=n-1, where mu-bar stands for the number-theoretic Mobius function on positive integers taking values from the set {-1,0,1}.
Here's my Java solution that works. Add head (0/1) and tail (1/1) nodes to a singly linked list.
Then start by passing headNode and tailNode, and set the required orderLevel.
public void generateSequence(Node leftNode, Node rightNode){
    Fraction left = (Fraction) leftNode.getData();
    Fraction right = (Fraction) rightNode.getData();
    FractionNode midNode = null;
    // the mediant of the two neighbouring fractions
    int midNum = left.getNum() + right.getNum();
    int midDenom = left.getDenom() + right.getDenom();
    if (midDenom <= getMaxLevel()) {
        Fraction middle = new Fraction(midNum, midDenom);
        midNode = new FractionNode(middle);
    }
    if (midNode != null) {
        leftNode.setNext(midNode);
        midNode.setNext(rightNode);
        generateSequence(leftNode, midNode);
        count++;
    } else if (rightNode.next() != null) {
        generateSequence(rightNode, rightNode.next());
    }
}

Given a dictionary, find all possible letter orderings

I was recently asked the following interview question:
You have a dictionary page written in an alien language. Assume that the language is similar to English and is read/written from left to right. Also, the words are arranged in lexicographic order. For example the page could be: ADG, ADH, BCD, BCF, FM, FN
You have to give all lexicographic orderings possible of the character set present in the page.
My approach is as follows:
A has higher precedence than B and G has higher precedence than H.
Therefore we have the information about ordering for some characters:
A->B, B->F, G->H, D->F, M->N
The possible orderings include ABDFGHMNC, ABDGHMNCF, ...
My approach was to use an array as a position holder and generate all permutations to identify all valid orderings. The worst-case time complexity for this is N!, where N is the size of the character set.
Can we do better than the brute-force approach?
Thanks in advance.
Donald Knuth has written the paper A Structured Program to Generate all Topological Sorting Arrangements, originally published in 1974. The following quote from the paper brought me to a better understanding of the problem (in the text, the relation i < j stands for "i precedes j"):
A natural way to solve this problem is to let x_1 be an element having no predecessors, then to erase all relations of the form x_1 < j and to let x_2 be an element ≠ x_1 with no predecessors in the system as it now exists, then to erase all relations of the form x_2 < j, etc. It is not difficult to verify that this method will always succeed unless there is an oriented cycle in the input. Moreover, in a sense it is the only way to proceed, since x_1 must be an element without predecessors, and x_2 must be without predecessors when all relations x_1 < j are deleted, etc. This observation leads naturally to an algorithm that finds all solutions to the topological sorting problem; it is a typical example of a "backtrack" procedure, where at every stage we consider a subproblem of the form "Find all ways to complete a given partial permutation x_1 x_2 ... x_k to a topological sort x_1 x_2 ... x_n." The general method is to branch on all possible choices of x_{k+1}. A central problem in backtrack applications is to find a suitable way to arrange the data so that it is easy to sequence through the possible choices of x_{k+1}; in this case we need an efficient way to discover the set of all elements ≠ {x_1,...,x_k} which have no predecessors other than x_1,...,x_k, and to maintain this knowledge efficiently as we move from one subproblem to another.
The paper includes pseudocode for an efficient algorithm. The time complexity per output is O(m+n), where m is the number of input relations and n is the number of letters. I have written a C++ program that implements the algorithm described in the paper (keeping its variable and function names) and takes the letters and relations from your question as input. I hope nobody complains about adding the program to this answer, given the language-agnostic tag.
#include <algorithm>
#include <deque>
#include <iostream>
#include <iterator>
#include <map>
#include <vector>
// Define input
static const char input[] =
    { 'A', 'D', 'G', 'H', 'B', 'C', 'F', 'M', 'N' };
static const char crel[][2] =
    {{'A', 'B'}, {'B', 'F'}, {'G', 'H'}, {'D', 'F'}, {'M', 'N'}};
static const int n = sizeof(input) / sizeof(char);
static const int m = sizeof(crel) / sizeof(*crel);
std::map<char, int> count;   // number of not-yet-erased predecessors per letter
std::map<char, int> top;     // head of each letter's successor list
std::map<int, char> suc;     // suc/next implement the successor lists
std::map<int, int> next;
std::deque<char> D;          // letters currently without predecessors
std::vector<char> buffer;    // the partial topological sort
void alltopsorts(int k)
{
    if (D.empty())
        return;
    char base = D.back();
    do
    {
        char q = D.back();
        D.pop_back();
        buffer[k] = q;
        if (k == (n - 1))
        {
            for (std::vector<char>::const_iterator cit = buffer.begin();
                 cit != buffer.end(); ++cit)
                std::cout << (*cit);
            std::cout << std::endl;
        }
        // erase relations beginning with q:
        int p = top[q];
        while (p >= 0)
        {
            char j = suc[p];
            count[j]--;
            if (!count[j])
                D.push_back(j);
            p = next[p];
        }
        alltopsorts(k + 1);
        // retrieve relations beginning with q:
        p = top[q];
        while (p >= 0)
        {
            char j = suc[p];
            if (!count[j])
                D.pop_back();
            count[j]++;
            p = next[p];
        }
        D.push_front(q);
    }
    while (D.back() != base);
}
int main()
{
    // Prepare
    std::fill_n(std::back_inserter(buffer), n, 0);
    for (int i = 0; i < n; i++) {
        count[input[i]] = 0;
        top[input[i]] = -1;
    }
    for (int i = 0; i < m; i++) {
        suc[i] = crel[i][1]; next[i] = top[crel[i][0]];
        top[crel[i][0]] = i; count[crel[i][1]]++;
    }
    for (std::map<char, int>::const_iterator cit = count.begin();
         cit != count.end(); ++cit)
        if (!(*cit).second)
            D.push_back((*cit).first);
    alltopsorts(0);
}
There is no algorithm that can do better than O(N!) if there are N! answers. But I think there is a better way to understand the problem:
You can build a directed graph in this way: if A appears before B, then there is an edge from A to B. After building the graph, you just need to find all possible topological sort results. Still O(N!), but easier to code and better than your approach (you don't have to generate invalid orderings).
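For illustration only (this is not the answerer's code), a minimal recursive enumeration of all topological orderings of that graph, using the relations from the question, could look like this:
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>
std::map<char, int> indeg;                  // remaining predecessor counts
std::map<char, std::set<char>> adj;         // precedence edges u -> v
const std::string letters = "ABCDFGHMN";    // the characters on the page
void enumerate(std::string& prefix) {
    if (prefix.size() == letters.size()) {
        std::cout << prefix << '\n';        // one complete valid ordering
        return;
    }
    for (char c : letters) {
        // try any unused letter whose predecessors have all been placed
        if (indeg[c] == 0 && prefix.find(c) == std::string::npos) {
            for (char d : adj[c]) indeg[d]--;   // temporarily remove c's edges
            prefix.push_back(c);
            enumerate(prefix);
            prefix.pop_back();
            for (char d : adj[c]) indeg[d]++;   // restore them for the next choice
        }
    }
}
int main() {
    const std::vector<std::pair<char, char>> edges =
        {{'A','B'}, {'B','F'}, {'G','H'}, {'D','F'}, {'M','N'}};
    for (const auto& e : edges) { adj[e.first].insert(e.second); indeg[e.second]++; }
    std::string prefix;
    enumerate(prefix);                      // prints every valid ordering (many!)
}
Each letter with no remaining predecessors is tried in turn; its outgoing edges are removed for the recursive call and restored afterwards, so only valid orderings are ever generated.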
I would solve it like this:
Look at the first letter: (A -> B -> F)
Look at the second letter, but only compare words that have the same first letter: (D), (C), (M -> N)
Look at the third letter, but only compare words that have the same first and second letters: (G -> H), (D -> F)
And so on, while anything remains... (look at the Nth letter, grouping by the previous letters)
What is in parentheses is all the ordering information you get from the page (and hence about all the possible orderings). Ignore parentheses with only one letter, because they do not impose an ordering. Then take everything in parentheses and topologically sort it, as in the sketch below.
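To make those steps concrete, here is a small sketch (my own illustration, not the answerer's code) that extracts the same pairwise constraints by comparing each adjacent pair of words at the first position where they differ:
#include <iostream>
#include <string>
#include <vector>
int main() {
    const std::vector<std::string> page = {"ADG", "ADH", "BCD", "BCF", "FM", "FN"};
    // Adjacent words share a (possibly empty) prefix; the first position where
    // they differ yields one precedence constraint.
    for (int i = 0; i + 1 < (int)page.size(); ++i) {
        const std::string& w1 = page[i];
        const std::string& w2 = page[i + 1];
        int k = 0;
        while (k < (int)w1.size() && k < (int)w2.size() && w1[k] == w2[k]) ++k;
        if (k < (int)w1.size() && k < (int)w2.size())
            std::cout << w1[k] << " -> " << w2[k] << '\n';
    }
}
This prints G -> H, A -> B, D -> F, B -> F, M -> N, exactly the relations listed in the question; topologically sorting them (for example with the enumeration sketch above) then yields every valid ordering.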
OK, I admit straight away that I don't have an estimate of the time complexity for the average case, but maybe the following two observations will help.
First, this is an obvious candidate for a constraint library. If you were doing this in practice (say, it was some task at work), you would get a constraint solver, give it the various pairwise orderings you have, and then ask for a list of all results.
Second, that is typically implemented as a search. If you have N characters, consider a tree whose root node has N children (selection of the first character); the next level has N-1 children (selection of the second character); etc. Clearly this is N! worst case for full exploration.
Even with a "dumb" search, you can see that you can often prune searches by checking your order at any point against the pairs that you have.
But since you know that a total ordering exists, even though you (may) only have partial information, you can make the search more efficient. For example, you know that the first character must never appear on the right-hand side of any pair (if we assume that each character is given a numerical value, with the first character being lowest). Similarly, moving down the tree, for the appropriately reduced data.
In short, you can enumerate possible solutions by exploring a tree, using the incomplete ordering information to constrain the possible choices at each node.
Hope that helps some.

Is it possible to rearrange an array in place in O(N)?

If I have a size N array of objects, and I have an array of unique numbers in the range 1...N, is there any algorithm to rearrange the object array in-place in the order specified by the list of numbers, and yet do this in O(N) time?
Context: I am doing a quick-sort-ish algorithm on objects that are fairly large in size, so it would be faster to do the swaps on indices than on the objects themselves, and only move the objects in one final pass. I'd just like to know if I could do this last pass without allocating memory for a separate array.
Edit: I am not asking how to do a sort in O(N) time, but rather how to do the post-sort rearranging in O(N) time with O(1) space. Sorry for not making this clear.
I think this should do:
static <T> void arrange(T[] data, int[] p) {
    boolean[] done = new boolean[p.length];
    for (int i = 0; i < p.length; i++) {
        if (!done[i]) {
            T t = data[i];
            for (int j = i;;) {
                done[j] = true;
                if (p[j] != i) {
                    data[j] = data[p[j]];
                    j = p[j];
                } else {
                    data[j] = t;
                    break;
                }
            }
        }
    }
}
Note: This is Java. If you do this in a language without garbage collection, be sure to delete done.
If you care about space, you can use a BitSet for done. I assume you can afford an additional bit per element because you seem willing to work with a permutation array, which is several times that size.
This algorithm copies instances of T n + k times, where k is the number of cycles in the permutation. You can reduce this to the optimal number of copies by skipping those i where p[i] = i.
The approach is to follow the "permutation cycles" of the permutation, rather than indexing the array left-to-right. But since you do have to begin somewhere, every time a new permutation cycle is needed, the search for unpermuted elements is left-to-right:
// Pseudo-code
N : integer, N > 0                  // N is the number of elements
swaps : integer [0..N]
data[N] : array of object
permute[N] : array of integer [-1..N] denoting the permutation (a used element is -1)
next_scan_start : integer;
next_scan_start = 0;
while (swaps < N)
{
    // Search for the next index that is not yet permuted.
    for (idx_cycle_search = next_scan_start;
         idx_cycle_search < N;
         ++idx_cycle_search)
        if (permute[idx_cycle_search] >= 0)
            break;
    next_scan_start = idx_cycle_search + 1;
    // This is a provable invariant. In short, the number of non-negative
    // elements in permute[] equals (N - swaps).
    assert( idx_cycle_search < N );
    // Completely permute one permutation cycle, 'following the
    // permutation cycle's trail'. This is O(N) overall.
    while (permute[idx_cycle_search] >= 0)
    {
        swap( data[idx_cycle_search], data[permute[idx_cycle_search]] );
        swaps++;
        old_idx = idx_cycle_search;
        idx_cycle_search = permute[idx_cycle_search];
        permute[old_idx] = -1;
        // Also '= -idx_cycle_search - 1' could be used rather than '-1'
        // and would allow these changes to permute[] to be reversed.
    }
}
Do you mean that you have an array of objects O[1..N] and then you have an array P[1..N] that contains a permutation of numbers 1..N and in the end you want to get an array O1 of objects such that O1[k] = O[P[k]] for all k=1..N ?
As an example, if your objects are letters A,B,C...,Y,Z and your array P is [26,25,24,..,2,1] is your desired output Z,Y,...C,B,A ?
If yes, I believe you can do it in linear time using only O(1) additional memory. Reversing elements of an array is a special case of this scenario. In general, I think you would need to consider decomposition of your permutation P into cycles and then use it to move around the elements of your original array O[].
If that's what you are looking for, I can elaborate more.
EDIT: Others already presented excellent solutions while I was sleeping, so no need to repeat it here. ^_^
EDIT: My O(1) additional space is indeed not entirely correct. I was thinking only about "data" elements, but in fact you also need to store one bit per permutation element, so if we are precise, we need O(log n) extra bits for that. But most of the time using a sign bit (as suggested by J.F. Sebastian) is fine, so in practice we may not need anything more than we already have.
If you didn't mind allocating memory for an extra hash of indexes, you could keep a mapping of original location to current location to get a time complexity of near O(n). Here's an example in Ruby, since it's readable and pseudocode-ish. (This could be shorter or more idiomatically Ruby-ish, but I've written it out for clarity.)
#!/usr/bin/ruby
objects = ['d', 'e', 'a', 'c', 'b']
order = [2, 4, 3, 0, 1]
cur_locations = {}
order.each_with_index do |orig_location, ordinality|
  # Find the current location of the item.
  cur_location = orig_location
  while not cur_locations[cur_location].nil? do
    cur_location = cur_locations[cur_location]
  end
  # Swap the items and keep track of whatever we swapped forward.
  objects[ordinality], objects[cur_location] = objects[cur_location], objects[ordinality]
  cur_locations[ordinality] = orig_location
end
puts objects.join(' ')
puts objects.join(' ')
That obviously does involve some extra memory for the hash, but since it's just for indexes and not your "fairly large" objects, hopefully that's acceptable. Since hash lookups are O(1), even though there is a slight bump to the complexity due to the case where an item has been swapped forward more than once and you have to rewrite cur_location multiple times, the algorithm as a whole should be reasonably close to O(n).
If you wanted you could build a full hash of original to current positions ahead of time, or keep a reverse hash of current to original, and modify the algorithm a bit to get it down to strictly O(n). It'd be a little more complicated and take a little more space, so this is the version I wrote out, but the modifications shouldn't be difficult.
EDIT: Actually, I'm fairly certain the time complexity is just O(n), since each ordinality can have at most one hop associated, and thus the maximum number of lookups is limited to n.
#!/usr/bin/env python
def rearrange(objects, permutation):
    """Rearrange `objects` inplace according to `permutation`.
    ``result = [objects[p] for p in permutation]``
    """
    seen = [False] * len(permutation)
    for i, already_seen in enumerate(seen):
        if not already_seen:  # start permutation cycle
            first_obj, j = objects[i], i
            while True:
                seen[j] = True
                p = permutation[j]
                if p == i:  # end permutation cycle
                    objects[j] = first_obj  # [old] p -> j
                    break
                objects[j], j = objects[p], p  # p -> j
The algorithm (as I've noticed after I wrote it) is the same as the one from #meriton's answer in Java.
Here's a test function for the code:
def test():
    import itertools
    N = 9
    for perm in itertools.permutations(range(N)):
        L = range(N)
        LL = L[:]
        rearrange(L, perm)
        assert L == [LL[i] for i in perm] == list(perm), (L, list(perm), LL)
    # test whether assertions are enabled
    try:
        assert 0
    except AssertionError:
        pass
    else:
        raise RuntimeError("assertions must be enabled for the test")
if __name__ == "__main__":
    test()
There's a histogram sort, though the running time is given as a bit higher than O(N): O(N log log N).
I can do it given O(N) scratch space -- copy to a new array and copy back.
EDIT: I am aware of the existence of an algorithm that will do this. The idea is to perform the swaps on the array of integers 1..N while at the same time mirroring the swaps on your array of large objects. I just cannot find the algorithm right now.
The problem is one of applying a permutation in place with minimal O(1) extra storage: "in-situ permutation".
It is solvable, but an algorithm is not obvious beforehand.
It is described briefly as an exercise in Knuth, and for work I had to decipher it and figure out how it worked. Look at 5.2 #13.
For some more modern work on this problem, with pseudocode:
http://www.fernuni-hagen.de/imperia/md/content/fakultaetfuermathematikundinformatik/forschung/berichte/bericht_273.pdf
I ended up writing a different algorithm for this, which first generates a list of swaps to apply an order and then runs through the swaps to apply it. The advantage is that if you're applying the ordering to multiple lists, you can reuse the swap list, since the swap algorithm is extremely simple.
#include <string>
#include <utility>
#include <vector>
using namespace std;
void make_swaps(vector<int> order, vector<pair<int,int>> &swaps)
{
    // order[0] is the index in the old list of the new list's first value.
    // Invert the mapping: inverse[0] is the index in the new list of the
    // old list's first value.
    vector<int> inverse(order.size());
    for(int i = 0; i < order.size(); ++i)
        inverse[order[i]] = i;
    swaps.resize(0);
    for(int idx1 = 0; idx1 < order.size(); ++idx1)
    {
        // Swap list[idx1] with list[order[idx1]], and record this swap.
        int idx2 = order[idx1];
        if(idx1 == idx2)
            continue;
        swaps.push_back(make_pair(idx1, idx2));
        // list[idx1] is now in the correct place, but whoever wanted the value we moved out
        // of idx2 now needs to look in its new position.
        int idx1_dep = inverse[idx1];
        order[idx1_dep] = idx2;
        inverse[idx2] = idx1_dep;
    }
}
// data is taken by reference so the swaps are applied to the caller's list
template<typename T>
void run_swaps(T &data, const vector<pair<int,int>> &swaps)
{
    for(const auto &s: swaps)
    {
        int src = s.first;
        int dst = s.second;
        swap(data[src], data[dst]);
    }
}
void test()
{
    vector<int> order = { 2, 3, 1, 4, 0 };
    vector<pair<int,int>> swaps;
    make_swaps(order, swaps);
    vector<string> data = { "a", "b", "c", "d", "e" };
    run_swaps(data, swaps);
}
