Hiring Problem - Approach Visualization Issue? - data-structures

I was reading about the Secretary Problem here: https://en.wikipedia.org/wiki/Secretary_problem
I have doubts about its validity because of the approach I took to verify it with code.
The problem says that, according to the probabilistic analysis, the best-suited candidate is most likely to be found within the first 1/e, i.e. about 37%, of the candidates.
I tried to visualize this with Java code. What I tried is the following:
Fill a list with the candidates' rankings for one test, in random order.
Find the position of ranking 1 (the best candidate). As a percentage of the list size, this is how far into the candidates you need to go to reach the best one for that particular test.
Run many such iterations.
Average the best candidate's position over all iterations.
package com.pravesh.designpattern.creational;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class HiringProblem {
    public static void main(String[] args) {
        int numberOfCandidates = 1000;
        int numberOfTests = 10000;
        List<Double> bestCandidatePositionPercentages = new ArrayList<>();
        // Rankings 1..numberOfCandidates, shuffled once per test
        List<Integer> candidateRankings = new ArrayList<>();
        for (int rank = 1; rank <= numberOfCandidates; rank++) {
            candidateRankings.add(rank);
        }
        for (int testNumber = 0; testNumber < numberOfTests; testNumber++) {
            Collections.shuffle(candidateRankings);
            // Index starts from 0 and hence position + 1
            int bestCandidatePosition = candidateRankings.indexOf(1) + 1;
            double bestCandidatePositionPercentage = 100.0 * bestCandidatePosition / numberOfCandidates;
            bestCandidatePositionPercentages.add(bestCandidatePositionPercentage);
        }
        double averagedBestPositionsPercentage = bestCandidatePositionPercentages.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0);
        System.out.println((int) averagedBestPositionsPercentage);
    }
}
I was expecting an average value of around 37%, but most runs give a value in the 45-53 range.
Could anybody let me know what the approach is missing?
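For what it's worth, the 1/e in the theorem is not the expected position of the best candidate: that position is uniformly distributed over the list, so its average is about 50%, which matches the 45-53% the program prints. The 37% figure belongs to the optimal stopping rule: skip the first n/e candidates, then hire the first one better than everyone seen so far; that rule hires the single best candidate with probability about 1/e. A small simulation of that rule (a sketch; class and method names are mine, not from the question):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SecretarySimulation {
    // Simulates the classic 1/e stopping rule: reject the first n/e
    // candidates, then hire the first candidate better than all seen.
    // Returns the fraction of trials in which the best candidate is hired.
    static double successRate(int n, int trials, Random rd) {
        int cutoff = (int) (n / Math.E);
        int successes = 0;
        List<Integer> ranks = new ArrayList<>();
        for (int r = 1; r <= n; r++) ranks.add(r);   // rank 1 = best
        for (int t = 0; t < trials; t++) {
            Collections.shuffle(ranks, rd);
            // best rank seen in the rejected prefix
            int bestSeen = Integer.MAX_VALUE;
            for (int i = 0; i < cutoff; i++) bestSeen = Math.min(bestSeen, ranks.get(i));
            int hired = ranks.get(n - 1);            // forced to take the last one by default
            for (int i = cutoff; i < n; i++) {
                if (ranks.get(i) < bestSeen) { hired = ranks.get(i); break; }
            }
            if (hired == 1) successes++;
        }
        return (double) successes / trials;
    }

    public static void main(String[] args) {
        double rate = successRate(100, 20000, new Random(42));
        System.out.println(rate); // should come out close to 1/e ≈ 0.37
    }
}
```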

Related

Write a program that calculates N! / K! for given N and K in Java - overflow problem

import java.util.Scanner;

public class CalculateAgain {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        long N = input.nextLong();
        long K = input.nextLong();
        double result = 1;
        while (N > K) {
            result *= N;
            N--;
        }
        System.out.printf("%.0f", result);
    }
}
I am unable to print big factorial numbers. I have tried declaring result as a long, but when I println it I get the wrong result.
I have also tried to use BigDecimal, but I cannot cast it to long or double.
I am quite a beginner - I have learned loops, arrays, and conditional statements, but I have no understanding of methods.
Use import java.math.BigInteger; and try it this way, after the Scanner is set up. I am assuming BigInteger here; you can change it as per your requirements.

int n = Integer.parseInt(scanner.nextLine());
int k = Integer.parseInt(scanner.nextLine());
BigInteger result = BigInteger.ONE;
int range = n - k;
for (int i = range; i > 0; i--) {
    result = result.multiply(BigInteger.valueOf(n));
    n--;
}
System.out.println(result);
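A complete, self-contained version of that idea (a sketch with a hypothetical class name; it computes N! / K! exactly as N * (N-1) * ... * (K+1), reading N and K like the original program):

```java
import java.math.BigInteger;
import java.util.Scanner;

public class FactorialQuotient {
    // Computes N! / K! exactly, assuming N >= K >= 0.
    static BigInteger factorialQuotient(long n, long k) {
        BigInteger result = BigInteger.ONE;
        for (long i = n; i > k; i--) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        long n = input.nextLong();
        long k = input.nextLong();
        System.out.println(factorialQuotient(n, k));
    }
}
```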

Get percentage unique numbers generated by random generator

I am using a random generator in my Python code. I want to get the percentage of unique random numbers generated over a huge range, e.g. random(0:10^8), and I need to generate 10^12 numbers. What would be an efficient algorithm in terms of space complexity?
The code is similar to:
import random

dif = {}
for i in range(0, 1000):
    rannum = random.randint(0, 50)
    dif[rannum] = "True"
dif_len = len(dif)
print dif_len
per = float(dif_len) / 50
print per
You have to keep track of each number the generator generates or there is no way to know whether some new number has been seen before. What is the best way to do that? It depends on how many numbers you are going to examine. For small N, use a HashSet. At some large number of N it becomes more efficient to use a bitmap.
For small N...
public class Accumulator {
    private int uniqueNumbers = 0;
    private int totalAccumulated = 0;
    private HashSet<int> set = new HashSet<int>();

    public void Add(int i) {
        if (!set.Contains(i)) {
            set.Add(i);
            uniqueNumbers++;
        }
        totalAccumulated++;
    }

    public double PercentUnique() {
        return 100.0 * uniqueNumbers / totalAccumulated;
    }
}
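For large N, the bitmap variant the answer mentions can be sketched in Java with java.util.BitSet (class and method names are mine; the snippet above is C#-style). It stores one bit per possible value, so memory is O(range/8) bytes no matter how many numbers you examine:

```java
import java.util.BitSet;

public class BitmapAccumulator {
    // Tracks uniqueness with one bit per possible value.
    private final BitSet seen;
    private long uniqueNumbers = 0;
    private long totalAccumulated = 0;

    public BitmapAccumulator(int range) { // values must lie in [0, range)
        seen = new BitSet(range);
    }

    public void add(int i) {
        if (!seen.get(i)) {
            seen.set(i);
            uniqueNumbers++;
        }
        totalAccumulated++;
    }

    public double percentUnique() {
        return 100.0 * uniqueNumbers / totalAccumulated;
    }
}
```

For the question's parameters (range 10^8), the BitSet costs about 12.5 MB, versus hundreds of bytes per entry for a HashSet holding up to 10^8 boxed integers.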

Sampling from a discrete distribution while accounting for past occurrences

I have a discrete distribution that I need to sample from. It is heavily skewed and an example is as follows
a - 1-40
b - 40-80
c - 80-85
d - 85-90
e - 90-95
f - 95-100
To currently sample from this distribution, I'm choosing a random number in the interval [1,100] and choosing the corresponding value.
However, I'd like to be certain that if I see one of [c,d,e,f] I don't see the exact same value being sampled for the next x samples. The context being that they are powerups in a game I'm building. I'd like the powerups to be random, but not hand out the same powerup to the player repeatedly.
Is there any method that incorporates past occurrences of samples into generating a value or do I have to repeatedly sample till I get a value I'd prefer?
One way to do this is to shuffle an array containing the values 1-100 and then iterate through as necessary.
For example, an implementation of this in Java:
public class PowerUpSelector
{
    private int current;
    private int x;
    private int[] distribution;

    public PowerUpSelector(int x)
    {
        this.x = x;
        current = 0;
        distribution = new int[100];
        for (int i = 0; i < distribution.length; i++)
            distribution[i] = i;
        shuffle(distribution); // implement Fisher-Yates shuffle here
    }

    public int returnPowerUpID()
    {
        if (current >= x)
        {
            shuffle(distribution);
            current = 0;
        }
        return distribution[current++];
    }
}
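The constructor's comment leaves the Fisher-Yates shuffle unimplemented; a standard in-place version for an int array might look like this (a sketch):

```java
import java.util.Random;

public class Shuffler {
    private static final Random rand = new Random();

    // Standard Fisher-Yates: walk backwards, swapping each position
    // with a uniformly chosen position at or before it.
    static void shuffle(int[] a) {
        for (int i = a.length - 1; i > 0; i--) {
            int j = rand.nextInt(i + 1);
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }
}
```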

Dynamic programming: Algorithm to solve the following?

I have recently completed the following interview exercise:
'A robot can be programmed to run "a", "b", "c"... "n" kilometers and it takes ta, tb, tc... tn minutes, respectively. Once it runs to programmed kilometers, it must be turned off for "m" minutes.
After "m" minutes it can again be programmed to run for a further "a", "b", "c"... "n" kilometers.
How would you program this robot to go an exact number of kilometers in the minimum amount of time?'
I thought it was a variation of the unbounded knapsack problem, in which the size would be the number of kilometers and the value, the time needed to complete each stretch. The main difference is that we need to minimise, rather than maximise, the value. So I used the equivalent of the following solution: http://en.wikipedia.org/wiki/Knapsack_problem#Unbounded_knapsack_problem
in which I select the minimum.
Finally, because we need an exact solution (if there is one), I iterated over the map the algorithm constructed for all the different distances, and through each of the robot's programmed distances, to find the exact distance and minimum time among those.
I think the pause the robot takes between runs is a bit of a red herring and you just need to include it in your calculations, but it does not affect the approach taken.
I am probably wrong, because I failed the test. I don't have any other feedback as to the expected solution.
Edit: maybe I wasn't wrong after all and I failed for different reasons. I just wanted to validate my approach to this problem.
import static com.google.common.collect.Sets.*;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

import org.apache.log4j.Logger;

import com.google.common.base.Objects;
import com.google.common.base.Preconditions;
import com.google.common.collect.Lists;
import com.google.common.collect.Maps;

public final class Robot {

    static final Logger logger = Logger.getLogger(Robot.class);

    private Set<ProgrammedRun> programmedRuns;
    private int pause;
    private int totalDistance;

    private Robot() {
        // don't expose default constructor & prevent subclassing
    }

    private Robot(int[] programmedDistances, int[] timesPerDistance, int pause, int totalDistance) {
        this.programmedRuns = newHashSet();
        for (int i = 0; i < programmedDistances.length; i++) {
            this.programmedRuns.add(new ProgrammedRun(programmedDistances[i], timesPerDistance[i]));
        }
        this.pause = pause;
        this.totalDistance = totalDistance;
    }

    public static Robot create(int[] programmedDistances, int[] timesPerDistance, int pause, int totalDistance) {
        Preconditions.checkArgument(programmedDistances.length == timesPerDistance.length);
        Preconditions.checkArgument(pause >= 0);
        Preconditions.checkArgument(totalDistance >= 0);
        return new Robot(programmedDistances, timesPerDistance, pause, totalDistance);
    }

    /**
     * @return null if no strategy was found; an empty map if the distance is
     *         zero; otherwise a map with the programmed runs as keys and the
     *         number of times each needs to be run as values.
     */
    Map<ProgrammedRun, Integer> calculateOptimalStrategy() {
        // for efficiency, consider this case first
        if (this.totalDistance == 0) {
            return Maps.newHashMap();
        }
        // list of solutions for different distances.
        // Element "i" of the list is the best set of runs that cover at least "i" kilometers
        List<Map<ProgrammedRun, Integer>> runsForDistances = Lists.newArrayList();
        // special case i = 0 -> empty map (no runs needed)
        runsForDistances.add(new HashMap<ProgrammedRun, Integer>());
        for (int i = 1; i <= totalDistance; i++) {
            Map<ProgrammedRun, Integer> map = new HashMap<ProgrammedRun, Integer>();
            int minimumTime = -1;
            for (ProgrammedRun pr : programmedRuns) {
                int distance = Math.max(0, i - pr.getDistance());
                int time = getTotalTime(runsForDistances.get(distance)) + pause + pr.getTime();
                if (minimumTime < 0 || time < minimumTime) {
                    minimumTime = time;
                    // new minimum found
                    map = new HashMap<ProgrammedRun, Integer>();
                    map.putAll(runsForDistances.get(distance));
                    // increase count
                    Integer num = map.get(pr);
                    if (num == null) num = Integer.valueOf(1);
                    else num++;
                    // update map
                    map.put(pr, num);
                }
            }
            runsForDistances.add(map);
        }
        // last step: calculate the combination with the exact distance
        int minimumTime2 = -1;
        int bestIndex = -1;
        for (int i = 0; i <= totalDistance; i++) {
            if (getTotalDistance(runsForDistances.get(i)) == this.totalDistance) {
                int time = getTotalTime(runsForDistances.get(i));
                if (time > 0) time -= pause;
                if (minimumTime2 < 0 || time < minimumTime2) {
                    minimumTime2 = time;
                    bestIndex = i;
                }
            }
        }
        // if solution found
        if (bestIndex != -1) {
            return runsForDistances.get(bestIndex);
        }
        // try all combinations, since none of the existing maps run for the exact distance
        List<Map<ProgrammedRun, Integer>> exactRuns = Lists.newArrayList();
        for (int i = 0; i <= totalDistance; i++) {
            int distance = getTotalDistance(runsForDistances.get(i));
            for (ProgrammedRun pr : programmedRuns) {
                // solution found
                if (distance + pr.getDistance() == this.totalDistance) {
                    Map<ProgrammedRun, Integer> map = new HashMap<ProgrammedRun, Integer>();
                    map.putAll(runsForDistances.get(i));
                    // increase count
                    Integer num = map.get(pr);
                    if (num == null) num = Integer.valueOf(1);
                    else num++;
                    // update map
                    map.put(pr, num);
                    exactRuns.add(map);
                }
            }
        }
        if (exactRuns.isEmpty()) return null;
        // finally return the map with the best time
        minimumTime2 = -1;
        Map<ProgrammedRun, Integer> bestMap = null;
        for (Map<ProgrammedRun, Integer> m : exactRuns) {
            int time = getTotalTime(m);
            if (time > 0) time -= pause; // remove last pause
            if (minimumTime2 < 0 || time < minimumTime2) {
                minimumTime2 = time;
                bestMap = m;
            }
        }
        return bestMap;
    }

    private int getTotalTime(Map<ProgrammedRun, Integer> runs) {
        int time = 0;
        for (Map.Entry<ProgrammedRun, Integer> runEntry : runs.entrySet()) {
            time += runEntry.getValue() * runEntry.getKey().getTime();
            // add pauses
            time += this.pause * runEntry.getValue();
        }
        return time;
    }

    private int getTotalDistance(Map<ProgrammedRun, Integer> runs) {
        int distance = 0;
        for (Map.Entry<ProgrammedRun, Integer> runEntry : runs.entrySet()) {
            distance += runEntry.getValue() * runEntry.getKey().getDistance();
        }
        return distance;
    }

    class ProgrammedRun {
        private int distance;
        private int time;
        private transient float speed;

        ProgrammedRun(int distance, int time) {
            this.distance = distance;
            this.time = time;
            this.speed = (float) distance / time;
        }

        @Override public String toString() {
            return "(distance=" + distance + "; time=" + time + ")";
        }

        @Override public boolean equals(Object other) {
            return other instanceof ProgrammedRun
                && this.distance == ((ProgrammedRun) other).distance
                && this.time == ((ProgrammedRun) other).time;
        }

        @Override public int hashCode() {
            return Objects.hashCode(Integer.valueOf(this.distance), Integer.valueOf(this.time));
        }

        int getDistance() {
            return distance;
        }

        int getTime() {
            return time;
        }

        float getSpeed() {
            return speed;
        }
    }
}

public class Main {
    /* Input variables for the robot */
    private static int[] programmedDistances = {1, 2, 3, 5, 10}; // in kilometers
    private static int[] timesPerDistance = {10, 5, 3, 2, 1};    // in minutes
    private static int pause = 2;          // in minutes
    private static int totalDistance = 41; // in kilometers

    /**
     * @param args
     */
    public static void main(String[] args) {
        Robot r = Robot.create(programmedDistances, timesPerDistance, pause, totalDistance);
        Map<Robot.ProgrammedRun, Integer> strategy = r.calculateOptimalStrategy();
        if (strategy == null) {
            System.out.println("No strategy that matches the conditions was found");
        } else if (strategy.isEmpty()) {
            System.out.println("No need to run; distance is zero");
        } else {
            System.out.println("Strategy found:");
            System.out.println(strategy);
        }
    }
}
Simplifying slightly, let ti be the time (including downtime) that it takes the robot to run distance di. Assume that t1/d1 ≤ … ≤ tn/dn. If t1/d1 is significantly smaller than t2/d2 and d1 and the total distance D to be run are large, then branch and bound likely outperforms dynamic programming. Branch and bound solves the integer programming formulation
minimize ∑i ti xi
subject to
∑i di xi = D
∀i xi ∈ ℕ
by using the value of the relaxation where xi can be any nonnegative real as a guide. The latter is easily verified to be at most (t1/d1)D, by setting x1 to D/d1 and ∀i ≠ 1 xi = 0, and at least (t1/d1)D, by setting the sole variable of the dual program to t1/d1. Solving the relaxation is the bound step; every integer solution is a fractional solution, so the best integer solution requires time at least (t1/d1)D.
The branch step takes one integer program and splits it in two whose solutions, taken together, cover the entire solution space of the original. In this case, one piece could have the extra constraint x1 = 0 and the other could have the extra constraint x1 ≥ 1. It might look as though this would create subproblems with side constraints, but in fact, we can just delete the first move, or decrease D by d1 and add the constant t1 to the objective. Another option for branching is to add either the constraint xi = ⌊D/di⌋ or xi ≤ ⌊D/di⌋ - 1, which requires generalizing to upper bounds on the number of repetitions of each move.
The main loop of branch and bound selects one of a collection of subproblems, branches, computes bounds for the two subproblems, and puts them back into the collection. The efficiency over brute force comes from the fact that, when we have a solution with a particular value, every subproblem whose relaxed value is at least that much can be thrown away. Once the collection is emptied this way, we have the optimal solution.
Hybrids of branch and bound and dynamic programming are possible, for example, computing optimal solutions for small D via DP and using those values instead of branching on subproblems that have been solved.
Create an array a of size D+1 (where D is your total distance) and compute:

a[0] = 0
a[i] = infinity for all i > 0 initially
a[i] = min{ min{ a[i-j] + tj + m : for every programmed distance j < i }, ti if i itself is a programmed distance }

(Here m is the pause, charged whenever a run is not the first.) a[D] is the lowest possible value. You can also keep an array b alongside to record which run was selected for each a[i]. If a[D] is still infinity, the distance is not reachable.
Edit: we can solve it in another way by building a digraph. Again the graph depends on the total distance D: it has nodes labeled {0..D}. Start from node 0 and connect it to every directly reachable node: if the robot has a programmed distance i, connect node 0 to node vi with weight ti. For every other pair, connect node i to node j (for j > i) with weight t(j-i) + m, where m is the pause, whenever j-i is one of the input distances. Then find the shortest path from v0 to vD. This algorithm is still O(nD).
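The recurrence above can be written out directly in Java. This is a sketch under my own naming, not code from the question; it charges the pause m only between consecutive runs (none after the last) and returns -1 when the exact distance is unreachable:

```java
import java.util.Arrays;

public class RobotDP {
    // best[d] = minimum minutes to run exactly d km, counting a
    // pause between consecutive runs but none after the last.
    // dist[i]/time[i] describe programmed run i. Returns -1 if
    // the exact distance cannot be reached.
    static int minTime(int[] dist, int[] time, int pause, int total) {
        final int INF = Integer.MAX_VALUE / 2;
        int[] best = new int[total + 1];
        Arrays.fill(best, INF);
        best[0] = 0;
        for (int d = 1; d <= total; d++) {
            for (int i = 0; i < dist.length; i++) {
                if (dist[i] <= d && best[d - dist[i]] < INF) {
                    // charge the pause only if this is not the first run
                    int extra = (d - dist[i] == 0) ? 0 : pause;
                    best[d] = Math.min(best[d], best[d - dist[i]] + extra + time[i]);
                }
            }
        }
        return best[total] >= INF ? -1 : best[total];
    }

    public static void main(String[] args) {
        // runs of 1 km/10 min and 2 km/5 min, 2-minute pause, 3 km total:
        // best plan is 2 km + pause + 1 km = 5 + 2 + 10 minutes
        System.out.println(minTime(new int[]{1, 2}, new int[]{10, 5}, 2, 3)); // prints 17
    }
}
```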
Let G be the desired distance run.
Let n be the longest possible distance run without pause.
Let L = G / n (Integer arithmetic, discard fraction part)
Let R = G mod n (ie. The remainder from the above division)
Make the robot run its longest distance (i.e. n) L times, and then whichever distance (a, b, c, etc.) is greater than R by the least amount (i.e. the smallest available distance that is equal to or greater than R).
Either I understood the problem wrong, or you're all overthinking it.
I am a big believer in showing instead of telling. Here is a program that may be doing what you are looking for. Let me know if it satisfies your question. Simply copy, paste, and run the program. You should of course test with your own data set.
import java.util.Arrays;

public class Speed {

    /**
     * @param distance
     * @param sprints = {{A,Ta},{B,Tb},{C,Tc}, ..., {N,Tn}}
     */
    public static int getFastestTime(int distance, int[][] sprints) {
        long[] minTime = new long[distance + 1]; // distance from 0 to distance
        Arrays.fill(minTime, Integer.MAX_VALUE);
        minTime[0] = 0; // key=distance; value=time
        for (int[] speed : sprints)
            for (int d = 1; d < minTime.length; d++)
                if (d >= speed[0] && minTime[d] > minTime[d - speed[0]] + speed[1])
                    minTime[d] = minTime[d - speed[0]] + speed[1];
        return (int) minTime[distance];
    }

    public static void main(String... args) {
        // sprints = {{A,Ta},{B,Tb},{C,Tc}, ..., {N,Tn}}
        int[][] sprints = {{3, 2}, {5, 3}, {7, 5}};
        int distance = 21;
        System.out.println(getFastestTime(distance, sprints));
    }
}

Shuffle list, ensuring that no item remains in same position

I want to shuffle a list of unique items, but not do an entirely random shuffle. I need to be sure that no element in the shuffled list is at the same position as in the original list. Thus, if the original list is (A, B, C, D, E), this result would be OK: (C, D, B, E, A), but this one would not: (C, E, A, D, B) because "D" is still the fourth item. The list will have at most seven items. Extreme efficiency is not a consideration. I think this modification to Fisher/Yates does the trick, but I can't prove it mathematically:
function shuffle(data) {
    for (var i = 0; i < data.length - 1; i++) {
        var j = i + 1 + Math.floor(Math.random() * (data.length - i - 1));
        var temp = data[j];
        data[j] = data[i];
        data[i] = temp;
    }
}
You are looking for a derangement of your entries.
First of all, your algorithm works in the sense that it outputs a random derangement, i.e. a permutation with no fixed point. However it has an enormous flaw (which you might not mind, but is worth keeping in mind): some derangements cannot be obtained with your algorithm. In other words, it gives probability zero to some possible derangements, so the resulting distribution is definitely not uniformly random.
One possible solution, as suggested in the comments, would be to use a rejection algorithm:
pick a permutation uniformly at random
if it has no fixed points, return it
otherwise retry
Asymptotically, the probability of obtaining a derangement is close to 1/e ≈ 0.3679 (as seen in the Wikipedia article). This means that to obtain a derangement you will need to generate an average of e ≈ 2.718 permutations, which is quite costly.
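That rejection scheme is only a few lines in Java (a sketch; names are mine, and it assumes n ≥ 2, since n = 1 has no derangement):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class RejectionDerangement {
    // Repeatedly draws a uniform permutation of 0..n-1 and keeps the
    // first one with no fixed point; expected number of draws is ~e.
    // Requires n >= 2, otherwise no derangement exists and this loops.
    static List<Integer> randomDerangement(int n, Random rd) {
        List<Integer> p = new ArrayList<>();
        for (int i = 0; i < n; i++) p.add(i);
        while (true) {
            Collections.shuffle(p, rd);
            boolean hasFixedPoint = false;
            for (int i = 0; i < n; i++) {
                if (p.get(i) == i) { hasFixedPoint = true; break; }
            }
            if (!hasFixedPoint) return p;
        }
    }
}
```

Because every permutation is drawn uniformly and rejection never favors one derangement over another, the output is uniform over all derangements.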
A better way to do that would be to reject at each step of the algorithm. In pseudocode, something like this (assuming the original array contains i at position i, ie a[i]==i):
for (i = 1 to n-1) {
    do {
        j = rand(i, n)    // random integer from i to n inclusive
    } while (a[j] == i)   // rejection part: a[j] == i would leave a fixed point at i
    swap a[i], a[j]
}
The main difference from your algorithm is that we allow j to be equal to i, but only if it does not produce a fixed point. It is slightly longer to execute (due to the rejection part), and demands that you be able to check if an entry is at its original place or not, but it has the advantage that it can produce every possible derangement (uniformly, for that matter).
I am guessing non-rejection algorithms should exist, but I would believe them to be less straightforward.
Edit:
My algorithm is actually bad: you still have a chance of ending with the last point unshuffled, and the distribution is not uniform at all, as the marginal distributions of a simulation show.
An algorithm that produces uniformly distributed derangements can be found here, with some context on the problem, thorough explanations and analysis.
Second Edit:
Actually your algorithm is known as Sattolo's algorithm, and is known to produce all cycles with equal probability. So any derangement which is not a cycle but a product of several disjoint cycles cannot be obtained with the algorithm. For example, with four elements, the permutation that exchanges 1 and 2, and 3 and 4 is a derangement but not a cycle.
If you don't mind obtaining only cycles, then Sattolo's algorithm is the way to go, it's actually much faster than any uniform derangement algorithm, since no rejection is needed.
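For reference, Sattolo's algorithm differs from Fisher-Yates only in that the swap partner is drawn strictly ahead of the current position, exactly as in the snippet in the question. A Java sketch (names are mine); starting from any array it always produces a single n-cycle of positions, hence a derangement:

```java
import java.util.Random;

public class Sattolo {
    // Sattolo's algorithm: like Fisher-Yates, but j is drawn from the
    // positions strictly after i, so the resulting permutation is
    // always one n-cycle, uniform over all n-cycles.
    static void cycleShuffle(int[] a, Random rd) {
        for (int i = 0; i < a.length - 1; i++) {
            int j = i + 1 + rd.nextInt(a.length - i - 1);
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }
}
```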
As @FelixCQ has mentioned, the shuffles you are looking for are called derangements. Constructing uniformly randomly distributed derangements is not a trivial problem, but some results are known in the literature. The most obvious way to construct derangements is by the rejection method: you generate uniformly randomly distributed permutations using an algorithm like Fisher-Yates and then reject permutations with fixed points. The average running time of that procedure is e*n + o(n), where e is Euler's number, 2.71828... That would probably work in your case.
The other major approach for generating derangements is to use a recursive algorithm. However, unlike Fisher-Yates, we have two branches to the algorithm: the last item in the list can be swapped with another item (i.e., part of a two-cycle), or can be part of a larger cycle. So at each step, the recursive algorithm has to branch in order to generate all possible derangements. Furthermore, the decision of whether to take one branch or the other has to be made with the correct probabilities.
Let D(n) be the number of derangements of n items. At each stage, the number of branches taking the last item to two-cycles is (n-1)D(n-2), and the number of branches taking the last item to larger cycles is (n-1)D(n-1). This gives us a recursive way of calculating the number of derangements, namely D(n)=(n-1)(D(n-2)+D(n-1)), and gives us the probability of branching to a two-cycle at any stage, namely (n-1)D(n-2)/D(n).
Now we can construct derangements by deciding to which type of cycle the last element belongs, swapping the last element to one of the n-1 other positions, and repeating. It can be complicated to keep track of all the branching, however, so in 2008 some researchers developed a streamlined algorithm using those ideas. You can see a walkthrough at http://www.cs.upc.edu/~conrado/research/talks/analco08.pdf . The running time of the algorithm is proportional to 2n + O(log^2 n), a 36% improvement in speed over the rejection method.
I have implemented their algorithm in Java. Using longs works for n up to 22 or so. Using BigIntegers extends the algorithm to n=170 or so. Using BigIntegers and BigDecimals extends the algorithm to n=40000 or so (the limit depends on memory usage in the rest of the program).
package io.github.edoolittle.combinatorics;

import java.math.BigInteger;
import java.math.BigDecimal;
import java.math.MathContext;
import java.util.Random;
import java.util.HashMap;
import java.util.TreeMap;

public final class Derangements {

    // cache calculated values to speed up recursive algorithm
    private static HashMap<Integer, BigInteger> numberOfDerangementsMap
        = new HashMap<Integer, BigInteger>();
    private static int greatestNCached = -1;

    // load numberOfDerangementsMap with initial values D(0)=1 and D(1)=0
    static {
        numberOfDerangementsMap.put(0, BigInteger.valueOf(1));
        numberOfDerangementsMap.put(1, BigInteger.valueOf(0));
        greatestNCached = 1;
    }

    private static Random rand = new Random();

    // private default constructor so class isn't accidentally instantiated
    private Derangements() { }

    public static BigInteger numberOfDerangements(int n)
            throws IllegalArgumentException {
        if (numberOfDerangementsMap.containsKey(n)) {
            return numberOfDerangementsMap.get(n);
        } else if (n >= 2) {
            // pre-load the cache to avoid stack overflow (occurs near n=5000)
            for (int i = greatestNCached + 1; i < n; i++) numberOfDerangements(i);
            greatestNCached = n - 1;
            // recursion for derangements: D(n) = (n-1)*(D(n-1) + D(n-2))
            BigInteger Dn_1 = numberOfDerangements(n - 1);
            BigInteger Dn_2 = numberOfDerangements(n - 2);
            BigInteger Dn = (Dn_1.add(Dn_2)).multiply(BigInteger.valueOf(n - 1));
            numberOfDerangementsMap.put(n, Dn);
            greatestNCached = n;
            return Dn;
        } else {
            throw new IllegalArgumentException("argument must be >= 0 but was " + n);
        }
    }

    public static int[] randomDerangement(int n)
            throws IllegalArgumentException {
        if (n < 2)
            throw new IllegalArgumentException("argument must be >= 2 but was " + n);
        int[] result = new int[n];
        boolean[] mark = new boolean[n];
        for (int i = 0; i < n; i++) {
            result[i] = i;
            mark[i] = false;
        }
        int unmarked = n;
        for (int i = n - 1; i >= 0; i--) {
            if (unmarked < 2) break; // can't move anything else
            if (mark[i]) continue;   // can't move item at i if marked
            // use the rejection method to generate random unmarked index j < i;
            // this could be replaced by a more straightforward technique
            int j;
            while (mark[j = rand.nextInt(i)]);
            // swap two elements of the array
            int temp = result[i];
            result[i] = result[j];
            result[j] = temp;
            // mark position j as end of cycle with probability (u-1)D(u-2)/D(u)
            double probability
                = (new BigDecimal(numberOfDerangements(unmarked - 2))).
                  multiply(new BigDecimal(unmarked - 1)).
                  divide(new BigDecimal(numberOfDerangements(unmarked)),
                         MathContext.DECIMAL64).doubleValue();
            if (rand.nextDouble() < probability) {
                mark[j] = true;
                unmarked--;
            }
            // position i now becomes out of play so we could mark it
            // mark[i] = true;
            // but we don't need to because the loop won't touch it from now on;
            // however we do have to decrement unmarked
            unmarked--;
        }
        return result;
    }

    // unit tests
    public static void main(String[] args) {
        // test derangement numbers D(i)
        for (int i = 0; i < 100; i++) {
            System.out.println("D(" + i + ") = " + numberOfDerangements(i));
        }
        System.out.println();
        // test quantity (u-1)D_(u-2)/D_u for overflow, inaccuracy
        for (int u = 2; u < 100; u++) {
            double d = numberOfDerangements(u - 2).doubleValue() * (u - 1) /
                       numberOfDerangements(u).doubleValue();
            System.out.println((u - 1) + " * D(" + (u - 2) + ") / D(" + u + ") = " + d);
        }
        System.out.println();
        // test derangements for correctness, uniform distribution
        int size = 5;
        long reps = 10000000;
        TreeMap<String, Integer> countMap = new TreeMap<String, Integer>();
        System.out.println("Derangement\tCount");
        System.out.println("-----------\t-----");
        for (long rep = 0; rep < reps; rep++) {
            int[] d = randomDerangement(size);
            String s = "";
            String sep = "";
            if (size > 10) sep = " ";
            for (int i = 0; i < d.length; i++) {
                s += d[i] + sep;
            }
            if (countMap.containsKey(s)) {
                countMap.put(s, countMap.get(s) + 1);
            } else {
                countMap.put(s, 1);
            }
        }
        for (String key : countMap.keySet()) {
            System.out.println(key + "\t\t" + countMap.get(key));
        }
        System.out.println();
        // large random derangement
        int size1 = 1000;
        System.out.println("Random derangement of " + size1 + " elements:");
        int[] d1 = randomDerangement(size1);
        for (int i = 0; i < d1.length; i++) {
            System.out.print(d1[i] + " ");
        }
        System.out.println();
        System.out.println();
        System.out.println("We start to run into memory issues around u=40000:");
        {
            // increase this number from 40000 to around 50000 to trigger
            // out of memory-type exceptions
            int u = 40003;
            BigDecimal d = (new BigDecimal(numberOfDerangements(u - 2))).
                multiply(new BigDecimal(u - 1)).
                divide(new BigDecimal(numberOfDerangements(u)), MathContext.DECIMAL64);
            System.out.println((u - 1) + " * D(" + (u - 2) + ") / D(" + u + ") = " + d);
        }
    }
}
In C++:
#include <cstdlib>
#include <utility>
#include <vector>

template <class T>
void shuffle(std::vector<T>& arr)
{
    int size = arr.size();
    for (int i = 1; i < size; i++)
    {
        int n = rand() % (size - i) + i;
        std::swap(arr[i - 1], arr[n]);
    }
}
