I struggle when I get asked these questions in interviews. Say, for example, I have a question where I need to find the converted amount from one currency to another, and I am given a list of currency conversions. How do I build the adjacency/relationship mapping so that I can get to the correct amount? Even an algorithm that explains the logic would suffice. I appreciate your suggestions on this!
For example:
Let's say I am given a list of currency objects that contains different conversion rates (USD->INR = 75, INR->XXX = 100). I need to find the conversion USD->XXX = 7500. I should also be able to do the conversion backwards, say INR->USD. How do I find it by building a graph?
public class Currency {
    String fromCurrency;
    String toCurrency;
    double rate;
}

public double currencyConverter(List<Currency> currencies, String fromCurrency, String toCurrency) {
    return convertedCurrency;
}
In the problem you mentioned, the graph is a directed graph. Let us say you represent the graph using an adjacency matrix; fill the matrix with the data you have for all the currencies. For example, if USD->INR has rate R1, then INR->USD has rate 1/R1. After filling the adjacency matrix, use an algorithm that computes the transitive closure of a directed graph, for example the Floyd–Warshall algorithm, with rates multiplied along paths instead of distances added; see the sketch below.
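A minimal Java sketch of that idea, assuming the Currency class from the question (the class and method names below are illustrative, not a reference implementation). The only change to the classic Floyd–Warshall triple loop is that rates multiply along a path instead of distances adding:

import java.util.*;

public class MatrixCurrencyConverter {

    public static double convert(List<Currency> currencies, String from, String to) {
        // Assign each currency symbol a matrix index.
        Map<String, Integer> index = new HashMap<>();
        for (Currency c : currencies) {
            index.putIfAbsent(c.fromCurrency, index.size());
            index.putIfAbsent(c.toCurrency, index.size());
        }
        int n = index.size();
        double[][] rate = new double[n][n]; // 0.0 means "no conversion known yet"
        for (int i = 0; i < n; i++)
            rate[i][i] = 1.0;
        for (Currency c : currencies) {
            int u = index.get(c.fromCurrency), v = index.get(c.toCurrency);
            rate[u][v] = c.rate;
            rate[v][u] = 1.0 / c.rate; // backward conversion
        }
        // Floyd-Warshall-style transitive closure, multiplying rates along paths.
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (rate[i][j] == 0 && rate[i][k] != 0 && rate[k][j] != 0)
                        rate[i][j] = rate[i][k] * rate[k][j];
        return rate[index.get(from)][index.get(to)]; // 0.0 if unreachable
    }
}

With the rates from the question (USD->INR = 75, INR->XXX = 100) this returns 7500 for USD->XXX and 1/75 for INR->USD.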
I am not sure how to use the Floyd–Warshall algorithm to solve this problem. However, I was able to solve it using dynamic programming. Here is my solution:
class Currency {
    String fromCurrency;
    String toCurrency;
    double rate;

    public Currency(String fromCurrency, String toCurrency, double rate) {
        this.fromCurrency = fromCurrency;
        this.toCurrency = toCurrency;
        this.rate = rate;
    }
}

public class CurrencyConverter {

    public static double currencyConverter(List<Currency> currencies, String fromCurrency, String toCurrency) {
        // Collect the distinct currency symbols, preserving insertion order.
        Set<String> currencyNotes = new LinkedHashSet<>();
        for (Currency currency : currencies) {
            currencyNotes.add(currency.fromCurrency);
            currencyNotes.add(currency.toCurrency);
        }

        // Map each symbol to a matrix index.
        Map<String, Integer> currencyMap = new TreeMap<>();
        int idx = 0;
        for (String currencyNote : currencyNotes) {
            currencyMap.putIfAbsent(currencyNote, idx++);
        }

        double[][] dp = new double[currencyNotes.size()][currencyNotes.size()];
        for (double[] d : dp) {
            Arrays.fill(d, -1.0);
        }
        for (int i = 0; i < currencyNotes.size(); i++) {
            dp[i][i] = 1;
        }

        // Seed the matrix with the given rates and their inverses.
        for (Currency currency : currencies) {
            Integer fromCurrencyValue = currencyMap.get(currency.fromCurrency);
            Integer toCurrencyValue = currencyMap.get(currency.toCurrency);
            dp[fromCurrencyValue][toCurrencyValue] = currency.rate;
            dp[toCurrencyValue][fromCurrencyValue] = 1 / currency.rate;
        }

        // Fill the rest of the matrix. Note: this recurrence assumes the input
        // rates form a chain in insertion order (consecutive indices are directly
        // connected), as in the example USD->INR, INR->XXX.
        for (int i = currencyNotes.size() - 2; i >= 0; i--) {
            for (int j = i + 1; j < currencyNotes.size(); j++) {
                dp[i][j] = dp[i][j - 1] * dp[i + 1][j] / dp[i + 1][j - 1];
                dp[j][i] = 1 / dp[i][j];
            }
        }
        return dp[currencyMap.get(fromCurrency)][currencyMap.get(toCurrency)];
    }
}
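A quick usage check for the solution above, using the rates from the question (a hedged example; this main method could be dropped into CurrencyConverter):

public static void main(String[] args) {
    List<Currency> rates = Arrays.asList(
            new Currency("USD", "INR", 75),
            new Currency("INR", "XXX", 100));
    System.out.println(currencyConverter(rates, "USD", "XXX")); // 7500.0
    System.out.println(currencyConverter(rates, "INR", "USD")); // 0.0133...
}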
I believe the best way to solve transitive dependency problems is to first identify the nodes and their relationships, and then backtrack over them. As the Joker from The Dark Knight says, "Sometimes all it takes is a little push" :)
Related
I am using a random generator in my Python code. I want to get the percentage of unique random numbers generated over a huge range, e.g. random(0, 10^8), and I need to generate 10^12 numbers. What would be an efficient algorithm in terms of space complexity?
The code is similar to:
import random

dif = {}
for i in range(0, 1000):
    rannum = random.randint(0, 50)
    dif[rannum] = True  # only the keys matter; a set would also work

dif_len = len(dif)
print(dif_len)
per = float(dif_len) / 50
print(per)
You have to keep track of each number the generator generates, or there is no way to know whether some new number has been seen before. What is the best way to do that? It depends on how many numbers you are going to examine. For small N, use a HashSet. At some large N it becomes more efficient to use a bitmap; sketches of both follow.
For small N...
using System.Collections.Generic;

public class Accumulator {
    private int uniqueNumbers = 0;
    private int totalAccumulated = 0;
    private HashSet<int> set = new HashSet<int>();

    public void Add(int i) {
        // HashSet<T>.Add returns false when the element was already present.
        if (set.Add(i)) {
            uniqueNumbers++;
        }
        totalAccumulated++;
    }

    public double PercentUnique() {
        return 100.0 * uniqueNumbers / totalAccumulated;
    }
}
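For large N, a sketch of the bitmap variant in Java (the example above is C#; this sketch keeps the same shape and assumes the values fall in a known range [0, maxValue)). java.util.BitSet stores one bit per possible value, so a range of 10^8 costs roughly 12 MB regardless of how many numbers are drawn, and the counters are long because 10^12 draws would overflow an int:

import java.util.BitSet;

public class BitmapAccumulator {
    private final BitSet seen;
    private long uniqueNumbers = 0;
    private long totalAccumulated = 0;

    public BitmapAccumulator(int maxValue) {
        seen = new BitSet(maxValue); // one bit per possible value
    }

    public void add(int i) {
        if (!seen.get(i)) {
            seen.set(i);
            uniqueNumbers++;
        }
        totalAccumulated++;
    }

    public double percentUnique() {
        return 100.0 * uniqueNumbers / totalAccumulated;
    }
}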
Set cover algorithms tend to provide just one solution for finding a minimum number of sets to cover. How do you go about finding all such solutions?
It depends on what you mean by "minimal", as this will vary the number of set covers you get. For example, if you had the target set ABC and the sets AB, AC, C to choose from, you could cover it with (AB, C), with (AB, AC), or with all three (AB, AC, C). If you define "minimal" as, say, the two cheapest choices (fewest overlaps or fewest repeated elements), then you would choose the first two, (AB, C) and (AB, AC). "Minimal" could also be defined in terms of the number of sets chosen, so for the above example the lowest number would be 2, and either (AB, C) or (AB, AC) would work. But if you want all possible set covers, you can start with brute force, going through each combination:
import java.util.*;

public class f {
    // abcd
    static int target = 0b1111;
    // ab, ac, acd, cd
    static int[] groups = {0b1100, 0b1010, 0b1011, 0b0011};

    // check whether the chosen sets cover the target,
    // e.g. 1100 and 0011 together cover 1111
    static boolean covers(boolean[] A) {
        int or = 0;
        for (int i = 0; i < A.length; i++) {
            if (A[i]) or = or | groups[i];
        }
        int t = target;
        while (t > 0) {
            // the target needs this bit but the union lacks it
            if (t % 2 == 1 && or % 2 != 1)
                return false;
            t = t >> 1;
            or = or >> 1;
        }
        return true;
    }

    // go through all 2^n choices of sets
    static void combos(boolean[] A, int i, int j) {
        if (i > j) {
            if (covers(A)) System.out.println(Arrays.toString(A));
            return;
        }
        combos(A, i + 1, j);
        A[i] = !A[i];
        combos(A, i + 1, j);
    }

    public static void main(String[] args) {
        boolean[] A = new boolean[groups.length];
        combos(A, 0, groups.length - 1);
    }
}
I have a list of objects, with each item having a cost and a set of resources associated with it (see below). I'm looking for a way to select a subset from this list based on the combined cost, where each resource is contained at most once (not every resource has to be included, though). The way the subset's combined cost is calculated should be exchangeable (e.g. max, min, avg). If two subsets have the same combined cost, the subset with more items is selected.
Item | cost | resources [1..3]
================================
P1   | 0.5  | B
P2   | 4    | A B C
P3   | 1.5  | A B
P4   | 2    | C
P5   | 2    | A
This would allow for these combinations:
Variant | Items    | sum
==========================
V1      | P1 P4 P5 | 4.5
V2      | P2       | 4
V3      | P3 P4    | 3.5
For a maximum selection, V1 would be chosen. The number of items can be anywhere between 1 and a few dozen; the same is true for the number of resources.
My brute-force approach would just sum up the cost of all possible combinations and select the max/min one, but I assume there is a much more efficient way to do this. I'm coding in Java 8, but I'm fine with pseudocode or Matlab.
I found some questions which appeared to be similar (i.e. (1), (2), (3)), but I couldn't quite transfer them to my problem, so forgive me if you think this is a duplicate :/
Thanks in advance!
Clarification
A friend of mine was confused about what kinds of sets I want. No matter how I select my subset in the end, I always want to generate subsets with as many items in them as possible. If I have added P3 to my subset and can add P4 without creating a conflict (that is, without a resource being used twice within the subset), then I want P3+P4, not just P3.
Clarification 2
"Variants don't have to contain all resources" means that if it's impossible to add an item to fill in a missing resource slot without creating a conflict (because all items with the missing resource also have another resource already present) then the subset is complete.
This problem is NP-hard; even without the "resources" factor you are dealing with the knapsack problem.
If you can transform your costs to relatively small integers, you may be able to modify the dynamic programming solution of knapsack by adding one more dimension per resource, with a recurrence similar to the following (this shows the concept; make sure all edge cases work, or modify as needed):

D(_,_,2,_,_) = D(_,_,_,2,_) = D(_,_,_,_,2) = -Infinity   // a resource used twice is invalid
D(x,_,_,_,_) = -Infinity   if x < 0                      // budget exceeded
D(x,0,_,_,_) = 0   // this stop clause is "weaker" than the ones above - it applies only if they don't
D(x,i,r1,r2,r3) = max{ 1 + D(x-cost[i], i-1, r1+res1[i], r2+res2[i], r3+res3[i]),
                       D(x, i-1, r1, r2, r3) }

where cost is the array of costs and res1, res2, res3, ... are binary arrays indicating which resources each item needs.
The complexity will be O(W*n*2^#resources), where W is the (integer) cost bound.
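A minimal memoized sketch of that recurrence in Java, using one bitmask over the resources instead of a separate dimension per resource (equivalent, since each resource may be taken at most once). All names and the example data are hypothetical, and costs are assumed to be small integers:

import java.util.*;

public class ResourceKnapsack {
    static int[] cost;     // cost[i] = integer cost of item i
    static int[] res;      // res[i]  = bitmask of resources item i uses
    static int[][][] memo; // memo[budget][i][mask], -1 = not yet computed

    // Max number of items choosable from items[0..i-1] with `budget` left,
    // where `mask` marks resources already taken.
    static int best(int budget, int i, int mask) {
        if (budget < 0) return Integer.MIN_VALUE / 2; // infeasible branch
        if (i == 0) return 0;
        if (memo[budget][i][mask] != -1) return memo[budget][i][mask];
        int result = best(budget, i - 1, mask); // skip item i-1
        if ((mask & res[i - 1]) == 0)           // take it if no resource clash
            result = Math.max(result,
                    1 + best(budget - cost[i - 1], i - 1, mask | res[i - 1]));
        return memo[budget][i][mask] = result;
    }

    public static void main(String[] args) {
        cost = new int[]{1, 4, 2};             // example costs
        res  = new int[]{0b010, 0b111, 0b100}; // B; A,B,C; C
        int W = 5, n = cost.length, masks = 1 << 3;
        memo = new int[W + 1][n + 1][masks];
        for (int[][] a : memo) for (int[] b : a) Arrays.fill(b, -1);
        System.out.println(best(W, n, 0)); // prints 2: items 0 and 2 fit
    }
}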
After giving my problem some more thought, I came up with a solution I am quite proud of. This solution:
- will find all possible complete variants, that is, variants where no additional item can be added without causing a conflict
- will also find a few non-complete variants. I can live with that.
- can select the final variant by any means you want.
- works with non-integer item values.
I realized that this is indeed not a variant of the knapsack problem, as the items have a value but no weight associated with them (or you could interpret it as a variant of the multi-dimensional knapsack problem, but with all weights equal). The code uses some lambda expressions; if you don't use Java 8 you'll have to replace those.
public class BenefitSelector<T extends IConflicting>
{
    public ArrayList<T> select(ArrayList<T> proposals, Function<T, Double> valueFunction)
    {
        if (proposals.isEmpty())
            return null;

        ArrayList<ArrayList<T>> variants = findVariants(proposals);

        double value = 0;
        ArrayList<T> selected = null;
        for (ArrayList<T> v : variants)
        {
            double x = 0;
            for (T p : v)
                x += valueFunction.apply(p);
            if (x > value)
            {
                value = x;
                selected = v;
            }
        }
        return selected;
    }

    private ArrayList<ArrayList<T>> findVariants(ArrayList<T> list)
    {
        ArrayList<ArrayList<T>> ret = new ArrayList<>();
        Conflict c = findConflicts(list);
        if (c == null)
            ret.add(list);
        else
        {
            ret.addAll(findVariants(c.v1));
            ret.addAll(findVariants(c.v2));
        }
        return ret;
    }

    private Conflict findConflicts(ArrayList<T> list)
    {
        // Sort conflicts by the number of items remaining in the first list
        TreeSet<Conflict> ret = new TreeSet<>((c1, c2) -> Integer.compare(c1.v1.size(), c2.v1.size()));
        for (T p : list)
        {
            ArrayList<T> conflicting = new ArrayList<>();
            for (T p2 : list)
                if (p != p2 && p.isConflicting(p2))
                    conflicting.add(p2);

            // If conflicts are found create subsets by
            // - v1: removing p
            // - v2: removing all objects offended by p
            if (!conflicting.isEmpty())
            {
                Conflict c = new Conflict(p);
                c.v1.addAll(list);
                c.v1.remove(p);
                c.v2.addAll(list);
                c.v2.removeAll(conflicting);
                ret.add(c);
            }
        }
        // Return only the conflict with the highest number of elements in v1 remaining.
        // The algorithm seems to behave in such a way that it is sufficient to only
        // descend into this one conflict. As the root list contains all items and we use
        // the remainder of objects there should be no way to miss an item.
        return ret.isEmpty() ? null : ret.last();
    }

    private class Conflict
    {
        /** contains all items from the superset minus the offending object */
        private final ArrayList<T> v1 = new ArrayList<>();
        /** contains all items from the superset minus all offended objects */
        private final ArrayList<T> v2 = new ArrayList<>();
        // Not used right now but useful for debugging
        private final T offender;

        private Conflict(T offender)
        {
            this.offender = offender;
        }
    }
}
Tested with variants of the following setup:
public static void main(String[] args)
{
    BenefitSelector<Scavenger> sel = new BenefitSelector<>();
    ArrayList<Scavenger> proposals = new ArrayList<>();
    proposals.add(new Scavenger("P1", new Resource[] {Resource.B}, 0.5));
    proposals.add(new Scavenger("P2", new Resource[] {Resource.A, Resource.B, Resource.C}, 4));
    proposals.add(new Scavenger("P3", new Resource[] {Resource.C}, 2));
    proposals.add(new Scavenger("P4", new Resource[] {Resource.A, Resource.B}, 1.5));
    proposals.add(new Scavenger("P5", new Resource[] {Resource.A}, 2));
    proposals.add(new Scavenger("P6", new Resource[] {Resource.C, Resource.D}, 3));
    proposals.add(new Scavenger("P7", new Resource[] {Resource.D}, 1));

    ArrayList<Scavenger> result = sel.select(proposals, (p) -> p.value);
    System.out.println(result);
}

private static class Scavenger implements IConflicting
{
    private final String name;
    private final Resource[] resources;
    private final double value;

    private Scavenger(String name, Resource[] resources, double value)
    {
        this.name = name;
        this.resources = resources;
        this.value = value;
    }

    @Override
    public boolean isConflicting(IConflicting other)
    {
        return !Collections.disjoint(Arrays.asList(resources), Arrays.asList(((Scavenger) other).resources));
    }

    @Override
    public String toString()
    {
        return name;
    }
}
This results in [P1(B), P5(A), P6(CD)] with a combined value of 5.5, which is higher than that of any other combination (e.g. [P2(ABC), P7(D)] = 5). As variants aren't lost until one is selected, dealing with equal variants is easy as well.
I have a complicated function of 4 double parameters, which has a lot of different local optima, and I have no reason to think that it is differentiable either. The only thing I can tell is the hypercube in which the (interesting) optima can be found.
I wrote a really crude and slow algorithm to optimize the function:
public static OptimalParameters bruteForce(Model function) throws FunctionEvaluationException, OptimizationException {
    System.out.println("BruteForce");
    double startingStep = 0.02;
    double minStep = 1e-6;
    int steps = 30;
    double[] start = function.startingGuess();
    int n = start.length;
    Comparer comparer = comparer(function);
    double[] minimum = start;
    double result = function.value(minimum);
    double step = startingStep;
    while (step > minStep) {
        System.out.println("STEP step=" + step);
        GridGenerator gridGenerator = new GridGenerator(steps, step, minimum);
        double[] point;
        while ((point = gridGenerator.NextPoint()) != null) {
            double value = function.value(point);
            if (comparer.better(value, result)) {
                System.out.println("New optimum " + value + " at " + function.timeSeries(point));
                result = value;
                minimum = point;
            }
        }
        step /= 1.93;
    }
    return new OptimalParameters(result, function.timeSeries(minimum));
}

private static Comparer comparer(Model model) {
    if (model.goalType() == GoalType.MINIMIZE) {
        return new Comparer() {
            @Override
            public boolean better(double newVal, double optimumSoFar) {
                return newVal < optimumSoFar;
            }
        };
    }
    return new Comparer() {
        @Override
        public boolean better(double newVal, double optimumSoFar) {
            return newVal > optimumSoFar;
        }
    };
}

private static interface Comparer {
    boolean better(double newVal, double optimumSoFar);
}
Note that finding a better local optimum is more important than the speed of the algorithm.
Are there any better algorithms for this kind of optimization? Would you have any ideas how to improve this design?
You can use simplex-based optimization. It is suitable for exactly the kind of problem you have.
If you can use Matlab, at least for the prototyping, try using fminsearch
http://www.mathworks.com/help/techdoc/ref/fminsearch.html
[1] Lagarias, J. C., J. A. Reeds, M. H. Wright, and P. E. Wright, "Convergence Properties of the Nelder–Mead Simplex Method in Low Dimensions," SIAM Journal on Optimization, Vol. 9, No. 1, pp. 112–147, 1998.
Try something classic: http://en.wikipedia.org/wiki/Golden_section_search
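For reference, a minimal Java sketch of golden-section search (hedged: this is a one-dimensional method for a unimodal function, so for the 4-parameter problem above it could only serve as a line search along one direction at a time; all names here are illustrative):

import java.util.function.DoubleUnaryOperator;

public class GoldenSection {
    private static final double PHI = (Math.sqrt(5) - 1) / 2; // ~0.618

    // Minimize f on [a, b]; assumes f is unimodal on the interval.
    static double minimize(DoubleUnaryOperator f, double a, double b, double tol) {
        while (Math.abs(b - a) > tol) {
            double c = b - PHI * (b - a); // interior probe points
            double d = a + PHI * (b - a);
            if (f.applyAsDouble(c) < f.applyAsDouble(d))
                b = d; // minimum lies in [a, d]
            else
                a = c; // minimum lies in [c, b]
        }
        return (a + b) / 2;
    }

    public static void main(String[] args) {
        // example: minimize (x - 2)^2 on [0, 5]
        System.out.println(minimize(x -> (x - 2) * (x - 2), 0, 5, 1e-9));
    }
}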
Your problem sounds as if metaheuristics would be an ideal solution. You can try a metaheuristic such as Evolution Strategies (ES). ES was designed for tough multimodal real-vector functions. ES and several real functions (Rosenbrock, Rastrigin, Ackley, etc.) are implemented in our software HeuristicLab. You can implement your own function there and have it optimized. You don't need to add a lot of code, and you can copy directly from the other functions, which can serve as examples for you. You would need to port your code to C#, though, but only the evaluation; the other parts are not needed.
An advantage is that once you have your function implemented in HeuristicLab, you can also try to optimize it with Particle Swarm Optimization (PSO), a Genetic Algorithm, or Simulated Annealing, which are also already implemented, and see which one works best. You only need to implement the evaluation function once.
Or you can just scan the literature for papers on Evolution Strategies and reimplement one yourself. IIRC Beyer has implementations on his website; they're written for MATLAB.
Given two points P, Q and a delta, I defined the equivalence relation ~=, where P ~= Q if EuclideanDistance(P, Q) <= delta. Now, given a set S of n points, in the example S = (A, B, C, D, E, F) and n = 6 (the fact that the points are actually endpoints of segments is negligible), is there an algorithm with better than O(n^2) average-case complexity that finds a partition of the set (the representative element of the subsets is unimportant)?
Attempts to find theoretical definitions of this problem have been unsuccessful so far: k-means clustering, nearest neighbor search and others seem to me to be different problems. The picture shows what I need to do in my application.
Any hint? Thanks
EDIT: while the actual problem (cluster near points given some kind of invariant) should be solvable in better than O(n^2) in the average case, there's a serious flaw in my problem definition: ~= is not an equivalence relation, for the simple reason that it doesn't respect the transitive property. I think this is the main reason this problem is not easy to solve and needs advanced techniques. I will post my actual solution very soon: it should work when all near points satisfy ~= as defined. It can fail when points that are poles apart don't respect the relation but are each in relation with the center of gravity of the clustered points. It works well with my input data space; it may not with yours. Does anyone know a full formal treatment of this problem (with a solution)?
One way to restate the problem is as follows: given a set of n 2D points, for each point p find the set of points that are contained within the circle of radius delta centred at p.
A naive linear search gives the O(n^2) algorithm you allude to.
It seems to me that this is the best one can do in the worst case: when all points in the set are contained within a circle of diameter <= delta, each of the n queries would have to return O(n) points, giving O(n^2) overall complexity.
However, one should be able to do better on more reasonable datasets.
Take a look at this (esp. the section on space partitioning) and KD-trees. The latter should give you a sub-O(n^2) algorithm in reasonable cases.
There might be a different way of looking at the problem, one that would give better complexity; I can't think of anything off the top of my head.
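As an illustration of the space-partitioning idea, here is a hedged Java sketch of a uniform grid with cell size delta (all names are hypothetical): any point within delta of a query point must lie in the query's cell or one of its 8 neighbours, so each query only scans a small block of cells.

import java.util.*;

public class GridNeighbors {
    record Point(double x, double y) {}
    record Cell(long cx, long cy) {}

    static Cell cellOf(Point p, double delta) {
        return new Cell((long) Math.floor(p.x() / delta),
                        (long) Math.floor(p.y() / delta));
    }

    // Bucket all points into square cells of side delta.
    static Map<Cell, List<Point>> buildGrid(List<Point> points, double delta) {
        Map<Cell, List<Point>> grid = new HashMap<>();
        for (Point p : points)
            grid.computeIfAbsent(cellOf(p, delta), c -> new ArrayList<>()).add(p);
        return grid;
    }

    // All points within delta of q lie in q's cell or one of its 8 neighbours.
    static List<Point> neighbors(Map<Cell, List<Point>> grid, Point q, double delta) {
        Cell c = cellOf(q, delta);
        List<Point> result = new ArrayList<>();
        for (long dx = -1; dx <= 1; dx++)
            for (long dy = -1; dy <= 1; dy++)
                for (Point p : grid.getOrDefault(new Cell(c.cx() + dx, c.cy() + dy), List.of()))
                    if (p != q && Math.hypot(p.x() - q.x(), p.y() - q.y()) <= delta)
                        result.add(p);
        return result;
    }

    public static void main(String[] args) {
        List<Point> pts = List.of(new Point(0, 0), new Point(0.4, 0.3), new Point(5, 5));
        Map<Cell, List<Point>> grid = buildGrid(pts, 1.0);
        System.out.println(neighbors(grid, pts.get(0), 1.0)); // only (0.4, 0.3)
    }
}

With points spread out, each cell holds O(1) points on average and the whole partition takes roughly O(n); in the degenerate case where all points share one cell it falls back to O(n^2), matching the worst-case argument above.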
Definitely a problem for a quadtree.
You could also try sorting on each coordinate and playing with these two lists (sorting is n*log(n)), checking only the points that satisfy dx <= delta && dy <= delta. Also, you could put them in a sorted list with two levels of pointers: one for traversal along OX and another along OY.
For each point, calculate its distance D(n) from the origin; this is an O(n) operation.
Then use an O(n^2) algorithm to find matches where dist(a, b) < delta, skipping pairs with |D(a) - D(b)| > delta (by the triangle inequality such pairs cannot match); see the sketch below.
On average the result should be better than O(n^2), thanks to the (hopefully large) number of pairs skipped.
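A hedged Java sketch of this pruning idea (all names are illustrative): sort by distance from the origin, then for each point only compare it against later points whose origin distance is within delta, breaking out early thanks to the triangle inequality.

import java.util.*;

public class OriginDistancePruning {
    record Point(double x, double y) {
        double distToOrigin() { return Math.hypot(x, y); }
        double distTo(Point o) { return Math.hypot(x - o.x, y - o.y); }
    }

    // Report all pairs with Euclidean distance <= delta.
    static List<Point[]> closePairs(List<Point> points, double delta) {
        List<Point> sorted = new ArrayList<>(points);
        // Sort by distance from the origin: O(n log n).
        sorted.sort(Comparator.comparingDouble(Point::distToOrigin));
        List<Point[]> pairs = new ArrayList<>();
        for (int i = 0; i < sorted.size(); i++) {
            Point a = sorted.get(i);
            for (int j = i + 1; j < sorted.size(); j++) {
                Point b = sorted.get(j);
                // Triangle inequality: |D(a) - D(b)| <= dist(a, b), so once
                // the gap exceeds delta, no later point in the order can match.
                if (b.distToOrigin() - a.distToOrigin() > delta) break;
                if (a.distTo(b) <= delta) pairs.add(new Point[]{a, b});
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        List<Point> pts = List.of(new Point(0, 0), new Point(0.5, 0),
                                  new Point(3, 4), new Point(3.2, 4.1));
        for (Point[] p : closePairs(pts, 1.0))
            System.out.println(p[0] + " ~ " + p[1]);
    }
}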
This is a C# KdTree implementation that should solve "find all neighbors of a point P within a delta". It makes heavy use of functional programming techniques (yes, I love Python). It's tested, but I still have doubts about my understanding of _TreeFindNearest(). The code (or pseudocode) that solves "partition a set of n points given a ~= relation in better than O(n^2) in the average case" is posted in another answer.
/*
Stripped C# 2.0 port of ``kdtree'', a library for working with kd-trees.
Copyright (C) 2007-2009 John Tsiombikas <nuclear@siggraph.org>
Copyright (C) 2010 Francesco Pretto <ceztko@gmail.com>
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
OF SUCH DAMAGE.
*/
using System;
using System.Collections.Generic;
using System.Text;

namespace ITR.Data.NET
{
    public class KdTree<T>
    {
        #region Fields

        private Node _Root;
        private int _Count;
        private int _Dimension;
        private CoordinateGetter<T>[] _GetCoordinate;

        #endregion // Fields

        #region Constructors

        public KdTree(params CoordinateGetter<T>[] coordinateGetters)
        {
            _Dimension = coordinateGetters.Length;
            _GetCoordinate = coordinateGetters;
        }

        #endregion // Constructors

        #region Public methods

        public void Insert(T location)
        {
            _TreeInsert(ref _Root, 0, location);
            _Count++;
        }

        public void InsertAll(IEnumerable<T> locations)
        {
            foreach (T location in locations)
                Insert(location);
        }

        public IEnumerable<T> FindNeighborsRange(T location, double range)
        {
            return _TreeFindNeighborsRange(_Root, 0, location, range);
        }

        #endregion // Public methods

        #region Tree traversal

        private void _TreeInsert(ref Node current, int currentPlane, T location)
        {
            if (current == null)
            {
                current = new Node(location);
                return;
            }

            int nextPlane = (currentPlane + 1) % _Dimension;
            if (_GetCoordinate[currentPlane](location) <
                _GetCoordinate[currentPlane](current.Location))
                _TreeInsert(ref current._Left, nextPlane, location);
            else
                _TreeInsert(ref current._Right, nextPlane, location);
        }

        private IEnumerable<T> _TreeFindNeighborsRange(Node current, int currentPlane,
            T referenceLocation, double range)
        {
            if (current == null)
                yield break;

            double squaredDistance = 0;
            for (int it = 0; it < _Dimension; it++)
            {
                double referenceCoordinate = _GetCoordinate[it](referenceLocation);
                double currentCoordinate = _GetCoordinate[it](current.Location);
                squaredDistance +=
                    (referenceCoordinate - currentCoordinate)
                    * (referenceCoordinate - currentCoordinate);
            }

            if (squaredDistance <= range * range)
                yield return current.Location;

            double coordinateRelativeDistance =
                _GetCoordinate[currentPlane](referenceLocation)
                - _GetCoordinate[currentPlane](current.Location);
            Direction nextDirection = coordinateRelativeDistance <= 0.0
                ? Direction.LEFT : Direction.RIGHT;
            int nextPlane = (currentPlane + 1) % _Dimension;
            IEnumerable<T> subTreeNeighbors =
                _TreeFindNeighborsRange(current[nextDirection], nextPlane,
                    referenceLocation, range);
            foreach (T location in subTreeNeighbors)
                yield return location;

            if (Math.Abs(coordinateRelativeDistance) <= range)
            {
                subTreeNeighbors =
                    _TreeFindNeighborsRange(current.GetOtherChild(nextDirection),
                        nextPlane, referenceLocation, range);
                foreach (T location in subTreeNeighbors)
                    yield return location;
            }
        }

        #endregion // Tree traversal

        #region Node class

        public class Node
        {
            #region Fields

            private T _Location;
            internal Node _Left;
            internal Node _Right;

            #endregion // Fields

            #region Constructors

            internal Node(T nodeValue)
            {
                _Location = nodeValue;
                _Left = null;
                _Right = null;
            }

            #endregion // Constructors

            #region Children Indexers

            public Node this[Direction direction]
            {
                get { return direction == Direction.LEFT ? _Left : _Right; }
            }

            public Node GetOtherChild(Direction direction)
            {
                return direction == Direction.LEFT ? _Right : _Left;
            }

            #endregion // Children Indexers

            #region Properties

            public T Location
            {
                get { return _Location; }
            }

            public Node Left
            {
                get { return _Left; }
            }

            public Node Right
            {
                get { return _Right; }
            }

            #endregion // Properties
        }

        #endregion // Node class

        #region Properties

        public int Count
        {
            get { return _Count; }
            set { _Count = value; }
        }

        public Node Root
        {
            get { return _Root; }
            set { _Root = value; }
        }

        #endregion // Properties
    }

    #region Enums, delegates

    public enum Direction
    {
        LEFT = 0,
        RIGHT
    }

    public delegate double CoordinateGetter<T>(T location);

    #endregion // Enums, delegates
}
The following C# method, together with the KdTree class and the Join() (enumerates all the collections passed as arguments) and Shuffled() (returns a shuffled copy of the passed collection) helper methods, solves the problem in my question. There may be some flawed cases (read the EDITs in the question) when referenceVectors are the same vectors as vectorsToRelocate, as they are in my problem.
public static Dictionary<Vector2D, Vector2D> FindRelocationMap(
    IEnumerable<Vector2D> referenceVectors,
    IEnumerable<Vector2D> vectorsToRelocate)
{
    Dictionary<Vector2D, Vector2D> ret = new Dictionary<Vector2D, Vector2D>();

    // Preliminary filling: every vector initially maps to itself.
    IEnumerable<Vector2D> allVectors =
        Utils.Join(referenceVectors, vectorsToRelocate);
    foreach (Vector2D vector in allVectors)
        ret[vector] = vector;

    KdTree<Vector2D> kdTree = new KdTree<Vector2D>(
        delegate(Vector2D vector) { return vector.X; },
        delegate(Vector2D vector) { return vector.Y; });
    kdTree.InsertAll(Utils.Shuffled(ret.Keys));

    HashSet<Vector2D> relocatedVectors = new HashSet<Vector2D>();
    foreach (Vector2D vector in referenceVectors)
    {
        if (relocatedVectors.Contains(vector))
            continue;

        relocatedVectors.Add(vector);
        IEnumerable<Vector2D> neighbors =
            kdTree.FindNeighborsRange(vector, Tolerances.EUCLID_DIST_TOLERANCE);
        foreach (Vector2D neighbor in neighbors)
        {
            ret[neighbor] = vector;
            relocatedVectors.Add(neighbor);
        }
    }
    return ret;
}