I have a complicated function of 4 double parameters which has a lot of different local optima. I have no reason to think it is differentiable either. The only thing I can tell is the hypercube in which the (interesting) optima can be found.
I wrote a really crude and slow algorithm to optimize the function:
public static OptimalParameters brutForce(Model function) throws FunctionEvaluationException, OptimizationException {
System.out.println("BrutForce");
double startingStep = 0.02;
double minStep = 1e-6;
int steps = 30;
double[] start = function.startingGuess();
int n = start.length;
Comparer comparer = comparer(function);
double[] minimum = start;
double result = function.value(minimum);
double step = startingStep;
while (step > minStep) {
System.out.println("STEP step=" + step);
GridGenerator gridGenerator = new GridGenerator(steps, step, minimum);
double[] point;
while ((point = gridGenerator.NextPoint()) != null) {
double value = function.value(point);
if (comparer.better(value, result)) {
System.out.println("New optimum " + value + " at " + model.timeSeries(point));
result = value;
minimum = point;
}
}
step /= 1.93;
}
return new OptimalParameters(result, function.timeSeries(minimum));
}
private static Comparer comparer(Model model) {
if (model.goalType() == GoalType.MINIMIZE) {
return new Comparer() {
@Override
public boolean better(double newVal, double optimumSoFar) {
return newVal < optimumSoFar;
}
};
}
return new Comparer() {
@Override
public boolean better(double newVal, double optimumSoFar) {
return newVal > optimumSoFar;
}
};
}
private static interface Comparer {
boolean better(double newVal, double optimumSoFar);
}
Note that finding a better local optimum is more important to me than the speed of the algorithm.
Are there any better algorithms to do this kind of optimization? Would you have any ideas how to improve this design?
You can use simplex-based (Nelder-Mead) optimization. It is suitable for exactly the kind of problem you have.
If you can use Matlab, at least for prototyping, try using fminsearch:
http://www.mathworks.com/help/techdoc/ref/fminsearch.html
[1] Lagarias, J. C., J. A. Reeds, M. H. Wright, and P. E. Wright, "Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions," SIAM Journal on Optimization, Vol. 9, No. 1, pp. 112-147, 1998.
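If you want to stay in Java: the exception and GoalType types in your code suggest you are already using Apache Commons Math 2.x, which ships a Nelder-Mead implementation. Below is only a rough sketch of how it could be wired up, not a drop-in solution; the lower/upper bounds of your hypercube and the number of restarts are assumptions, and the random restarts are there because a single simplex search will only find one local optimum.
import java.util.Random;
import org.apache.commons.math.FunctionEvaluationException;
import org.apache.commons.math.analysis.MultivariateRealFunction;
import org.apache.commons.math.optimization.GoalType;
import org.apache.commons.math.optimization.OptimizationException;
import org.apache.commons.math.optimization.RealPointValuePair;
import org.apache.commons.math.optimization.direct.NelderMead;

public static double[] nelderMeadWithRestarts(final Model function,
        double[] lower, double[] upper, int restarts)
        throws FunctionEvaluationException, OptimizationException {
    // Adapt the Model to the Commons Math function interface.
    MultivariateRealFunction f = new MultivariateRealFunction() {
        public double value(double[] point) throws FunctionEvaluationException {
            return function.value(point);
        }
    };
    Random random = new Random();
    RealPointValuePair best = null;
    for (int i = 0; i < restarts; i++) {
        // Random starting point inside the hypercube of interest.
        double[] start = new double[lower.length];
        for (int d = 0; d < start.length; d++) {
            start[d] = lower[d] + random.nextDouble() * (upper[d] - lower[d]);
        }
        NelderMead optimizer = new NelderMead();
        optimizer.setMaxEvaluations(10000);
        RealPointValuePair result = optimizer.optimize(f, function.goalType(), start);
        boolean better = best == null
                || (function.goalType() == GoalType.MINIMIZE
                        ? result.getValue() < best.getValue()
                        : result.getValue() > best.getValue());
        if (better) {
            best = result;
        }
    }
    return best.getPoint();
}
Restarting from many random points is the usual cheap way to give a local method such as Nelder-Mead a chance against many local optima.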
Try something classic: http://en.wikipedia.org/wiki/Golden_section_search
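Note that golden-section search is one-dimensional and only guaranteed on a unimodal interval, so by itself it will not optimize your 4-parameter function; you would use it as the line search inside, e.g., a coordinate-descent loop. A minimal sketch for minimization (the small function interface is mine):
public interface UnivariateFunction {
    double value(double x);
}

// Minimizes f on [a, b]; assumes f is unimodal on that interval.
public static double goldenSectionMinimize(UnivariateFunction f,
        double a, double b, double tolerance) {
    final double invPhi = (Math.sqrt(5) - 1) / 2; // ~0.618
    double c = b - invPhi * (b - a);
    double d = a + invPhi * (b - a);
    double fc = f.value(c);
    double fd = f.value(d);
    while (b - a > tolerance) {
        if (fc < fd) {            // minimum lies in [a, d]
            b = d;
            d = c;
            fd = fc;
            c = b - invPhi * (b - a);
            fc = f.value(c);
        } else {                  // minimum lies in [c, b]
            a = c;
            c = d;
            fc = fd;
            d = a + invPhi * (b - a);
            fd = f.value(d);
        }
    }
    return (a + b) / 2;
}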
Your problem sounds as if metaheuristics would be an ideal solution. You can try a metaheuristic such as Evolution Strategies (ES). ES was designed for tough multimodal real-vector functions. ES and several real test functions (Rosenbrock, Rastrigin, Ackley, etc.) are implemented in our software HeuristicLab. You can implement your own function there and have it optimized. You don't need to add a lot of code, and you can copy directly from the other functions, which can serve as examples. You would need to port your code to C#, though only the evaluation; the other parts are not needed.
An advantage is that if you have your function implemented in HeuristicLab you can also try to optimize it with a Particle Swarm Optimization (PSO) method, Genetic Algorithm, or Simulated Annealing which are also already implemented and see which one works best. You only need to implement the evaluation function once.
Or you can just scan the literature for papers on Evolution Strategies and reimplement one yourself. IIRC Beyer has implementations on his website; they're written for MATLAB.
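If you would rather not pull in a framework at all, even a tiny (1+1)-ES gives a feel for how Evolution Strategies behave. The sketch below is only that, a sketch: it assumes minimization, uses your Model's value() method, takes the bounds of your hypercube as lower/upper, and adapts the step size with the classic 1/5th success rule.
import java.util.Random;

// Minimal (1+1)-Evolution Strategy with the 1/5th success rule (minimization).
public static double[] onePlusOneEs(Model f, double[] lower, double[] upper,
        int iterations) throws FunctionEvaluationException {
    Random random = new Random();
    int n = lower.length;
    double[] parent = new double[n];
    for (int i = 0; i < n; i++) {
        parent[i] = lower[i] + random.nextDouble() * (upper[i] - lower[i]);
    }
    double parentValue = f.value(parent);
    double sigma = 0.1;   // initial mutation strength, relative to the box size
    int successes = 0;
    for (int t = 1; t <= iterations; t++) {
        // Mutate every coordinate with Gaussian noise, clamped to the box.
        double[] child = new double[n];
        for (int i = 0; i < n; i++) {
            child[i] = parent[i] + sigma * random.nextGaussian() * (upper[i] - lower[i]);
            child[i] = Math.max(lower[i], Math.min(upper[i], child[i]));
        }
        double childValue = f.value(child);
        if (childValue < parentValue) {   // accept only improvements
            parent = child;
            parentValue = childValue;
            successes++;
        }
        // 1/5th success rule: widen the step if improvements are frequent, shrink otherwise.
        if (t % 20 == 0) {
            sigma *= (successes > 4) ? 1.22 : 0.82;
            successes = 0;
        }
    }
    return parent;
}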
I'm working on an implementation of Ukkonen's linear time suffix tree construction algorithm, and planning to implement improvements suggested by e.g. Kurtz and NJ Larsson (for example edge links instead of suffix links).
While testing, I got a mix of passing and failing tests depending on the specific strings I tested, and had similar experiences with a few implementations I found online. Which made me wonder:
Are there any known, specifically built (preferably simple/short) strings for unit-testing suffix trees to ensure the algorithm works precisely in all branching scenarios?
Furthermore, are there any good methods to separate the testing of the tree building algorithm from the testing of the traversal/lookup algorithm?
I know this question doesn't have a single specific correct answer, but I think it could serve as a good reference point for people working on similar algorithms.
My current unit-testing approach is quite primitive (C# with NUnit):
[TestCase]
public void Contains_Simple_ShouldReturnTrue()
{
var s = "bananasbanananananananananabananas";
var st = SuffixTree.Build(s);
var t1 = s.Substring(0, 10);
Assert.IsTrue(st.Contains(t1));
}
// ... Other simple test cases
[TestCase]
// This test fails, but it's not particularly helpful for bugfixing
public void Contains_DynamicBarrage_OnLongString_ShouldReturnTrue()
{
const int CYCLES = 200,
MAXLEN = 200;
var s = "olbafuynhfcxzqhnebecxjrfwfttw"; // Shortened for sanity
var st = SuffixTree.Build(s);
var r = new Random();
for (int i = 0; i < CYCLES; i++)
{
var pos = r.Next(0, s.Length - 2);
var len = r.Next(1, Math.Min(s.Length - pos, MAXLEN));
Assert.IsTrue(st.Contains(s.Substring(pos, len)));
}
}
I kind of struggle when I get asked these questions in interviews. Say, for example, I have a question where I need to find the converted amount from one currency to another, and I am given a list of currencies. How do I build the adjacency/relationship mapping so that I can get to the correct amount? Even an algorithm that explains the logic would suffice. I'd appreciate your suggestions on this!
For example:
Let's say I am given a list of currency objects that contain different conversion rates (USD->INR = 75, INR->XXX = 100). I need to find the conversion USD->XXX = 7500. I should also be able to do the conversion backwards, say INR->USD. How do I find it by building a graph?
public class Currency {
    String fromCurrency;
    String toCurrency;
    double rate;
}

public double currencyConverter(List<Currency> currencies, String fromCurrency, String toCurrency) {
    // should return the amount converted from fromCurrency to toCurrency
    return convertedCurrency;
}
In the problem you mentioned, the graph is a directed graph. Let us say you represent the graph using an adjacency matrix. Fill the matrix with the data you have for all the currencies. For example, if USD->INR has rate R1, then INR->USD has rate 1/R1. After filling the adjacency matrix, use an algorithm that computes the transitive closure of a directed graph, for example the Floyd-Warshall algorithm.
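To make that concrete, here is a hedged sketch in Java (it reuses the Currency class from the question; the Map from currency code to matrix index, the class name, and the use of 0 as a "no rate known" sentinel are my own choices):
import java.util.List;
import java.util.Map;

public class RateClosure {
    // Transitive closure of the rate graph via a Floyd-Warshall style pass,
    // multiplying rates along paths instead of summing edge weights.
    public static double[][] allRates(List<Currency> currencies, Map<String, Integer> index) {
        int n = index.size();
        double[][] rate = new double[n][n];    // 0 means "no conversion known yet"
        for (int i = 0; i < n; i++) {
            rate[i][i] = 1.0;
        }
        for (Currency c : currencies) {
            int from = index.get(c.fromCurrency);
            int to = index.get(c.toCurrency);
            rate[from][to] = c.rate;
            rate[to][from] = 1.0 / c.rate;     // backward conversion
        }
        // Standard Floyd-Warshall loop order: intermediate vertex k outermost.
        for (int k = 0; k < n; k++) {
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < n; j++) {
                    if (rate[i][j] == 0 && rate[i][k] != 0 && rate[k][j] != 0) {
                        rate[i][j] = rate[i][k] * rate[k][j];
                    }
                }
            }
        }
        return rate;
    }
}
With the example rates, allRates(currencies, index)[index.get("USD")][index.get("XXX")] works out to 75 * 100 = 7500, and the reverse direction comes out as the reciprocal.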
I am not sure how to use the Floyd-Warshall algorithm to solve this problem. However, I was able to solve it using dynamic programming. Here is my solution:
import java.util.*;

class Currency{
String fromCurrency;
String toCurrency;
double rate;
public Currency(String fromCurrency, String toCurrency, double rate) {
this.fromCurrency = fromCurrency;
this.toCurrency = toCurrency;
this.rate = rate;
}
}
public class CurrencyConverter {
public static double currencyConverter(List<Currency> currencies, String fromCurrency, String toCurrency) {
Set<String> currencyNotes = new LinkedHashSet<>();
for(Currency currency : currencies) {
currencyNotes.add(currency.fromCurrency);
currencyNotes.add(currency.toCurrency);
}
Map<String, Integer> currencyMap = new TreeMap<>();
int idx = 0;
for(String currencyNote : currencyNotes) {
currencyMap.putIfAbsent(currencyNote, idx++);
}
double[][] dp = new double[currencyNotes.size()][currencyNotes.size()];
for(double[] d : dp) {
Arrays.fill(d, -1.0);
}
for(int i=0;i<currencyNotes.size();i++) {
dp[i][i] = 1;
}
for(Currency currency : currencies) {
Integer fromCurrencyValue = currencyMap.get(currency.fromCurrency);
Integer toCurrencyValue = currencyMap.get(currency.toCurrency);
dp[fromCurrencyValue][toCurrencyValue] = currency.rate;
dp[toCurrencyValue][fromCurrencyValue] = 1/(currency.rate);
}
for(int i=currencyNotes.size()-2;i>=0;i--) {
for(int j= i+1;j<currencyNotes.size();j++) {
dp[i][j] = dp[i][j-1]*dp[i+1][j]/(dp[i+1][j-1]);
dp[j][i] = 1/dp[i][j];
}
}
return dp[currencyMap.get(fromCurrency)][currencyMap.get(toCurrency)];
}
}
I believe the best way to solve transitive dependency problems like this is to first identify the nodes and their relationships, and then backtrack through them. As the Joker from The Dark Knight says, "Sometimes all it takes is a little push" :)
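To make the "identify the nodes and relationships, then backtrack" idea concrete, here is a sketch (helper names are mine): build an adjacency map with each rate stored in both directions, then depth-first search from the source currency, multiplying the rates along the path and backing out of dead ends.
import java.util.*;

public class GraphCurrencyConverter {

    // Builds a rate graph with edges in both directions and searches a conversion
    // path depth-first, multiplying rates along the way. Returns -1 if no path exists.
    public static double convert(List<Currency> currencies,
            String fromCurrency, String toCurrency) {
        Map<String, Map<String, Double>> graph = new HashMap<>();
        for (Currency c : currencies) {
            graph.computeIfAbsent(c.fromCurrency, k -> new HashMap<>())
                 .put(c.toCurrency, c.rate);
            graph.computeIfAbsent(c.toCurrency, k -> new HashMap<>())
                 .put(c.fromCurrency, 1.0 / c.rate);   // reverse edge
        }
        if (!graph.containsKey(fromCurrency)) {
            return -1;
        }
        return dfs(graph, fromCurrency, toCurrency, 1.0, new HashSet<String>());
    }

    private static double dfs(Map<String, Map<String, Double>> graph,
            String current, String target, double product, Set<String> visited) {
        if (current.equals(target)) {
            return product;
        }
        visited.add(current);
        for (Map.Entry<String, Double> edge : graph.get(current).entrySet()) {
            if (!visited.contains(edge.getKey())) {
                double result = dfs(graph, edge.getKey(), target,
                        product * edge.getValue(), visited);
                if (result != -1) {
                    return result;    // found a path, unwind
                }
            }
        }
        return -1;    // dead end, backtrack
    }
}
For the example in the question, convert(currencies, "USD", "XXX") follows USD -> INR -> XXX and returns 75 * 100 = 7500, while convert(currencies, "INR", "USD") uses the reverse edge and returns 1/75.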
I'm using a GA, so I took the example from this page (http://www.ai-junkie.com/ga/intro/gat3.html) and tried to implement it on my own.
The problem is, it doesn't work. For example, maximum fitness does not always grow in the next generation, but becomes smallest. Also, after some number of generations, it just stops getting better. For example, in first 100 generations, it found the largest circle with radius 104. And in next 900 largest radius is 107. And after drawing it, I see that it can grow much more.
Here is the GA-related part of my code. I leave out generating random circles, decoding, and drawing.
private Genome ChooseParent(Genome[] population, Random r)
{
double sumFitness = 0;
double maxFitness = 0;
for (int i = 0; i < population.Length; i++)
{
sumFitness += population[i].fitness;
if (i == 0 || maxFitness < population[i].fitness)
{
maxFitness = population[i].fitness;
}
}
sumFitness = population.Length * maxFitness - sumFitness;
double randNum = r.NextDouble() *sumFitness;
double acumulatedSum = 0;
for(int i=0;i<population.Length;i++)
{
acumulatedSum += population[i].fitness;
if(randNum<acumulatedSum)
{
return population[i];
}
}
return population[0];
}
private void Crossover(Genome parent1, Genome parent2, Genome child1, Genome child2, Random r)
{
double d=r.NextDouble();
if(d>this.crossoverRate || child1.Equals(child2))
{
for (int i = 0; i < parent1.bitNum; i++)
{
child1.bit[i] = parent1.bit[i];
child2.bit[i] = parent2.bit[i];
}
}
else
{
int cp = r.Next(parent1.bitNum - 1);
for (int i = 0; i < cp; i++)
{
child1.bit[i] = parent1.bit[i];
child2.bit[i] = parent2.bit[i];
}
for (int i = cp; i < parent1.bitNum; i++)
{
child1.bit[i] = parent2.bit[i];
child2.bit[i] = parent1.bit[i];
}
}
}
private void Mutation(Genome child, Random r)
{
for(int i=0;i<child.bitNum;i++)
{
if(r.NextDouble()<=this.mutationRate)
{
child.bit[i] = (byte)(1 - child.bit[i]);
}
}
}
public void Run()
{
for(int generation=0;generation<1000;generation++)
{
CalculateFitness(population);
System.Diagnostics.Debug.WriteLine(maxFitness);
population = population.OrderByDescending(x => x).ToArray();
//ELITIZM
Copy(population[0], newpopulation[0]);
Copy(population[1], newpopulation[1]);
for(int i=1;i<this.populationSize/2;i++)
{
Genome parent1 = ChooseParent(population, r);
Genome parent2 = ChooseParent(population, r);
Genome child1 = newpopulation[2 * i];
Genome child2 = newpopulation[2 * i + 1];
Crossover(parent1, parent2, child1, child2, r);
Mutation(child1, r);
Mutation(child2, r);
}
Genome[] tmp = population;
population = newpopulation;
newpopulation = tmp;
DekodePopulation(population); //decoding and fitness calculation for each member of population
}
}
If someone can point on potential problem that caused such behaviour and ways to fix it, I'll be grateful.
Welcome to the world of genetic algorithms!
I'll go through your issues and suggest a potential problem. Here we go:
maximum fitness does not always grow in the next generation, but becomes smallest - You probably meant smaller. This is weird, since you employed elitism, so each generation's best individual should be at least as good as the previous one's. I suggest you check your code for mistakes, because this really should not happen. However, the fitness does not need to grow in every generation; that is impossible to guarantee in a GA. It is a stochastic algorithm working with randomness: suppose that, by chance, no mutation or crossover happens in a generation; then the fitness cannot improve in the next generation, since nothing changed.
after some number of generations, it just stops getting better. For example, in first 100 generations, it found the largest circle with radius 104. And in next 900 largest radius is 107. And after drawing it, I see that it can grow much more. - this is (probably) a sign of a phenomenon called premature convergence and it is, unfortunately, a "normal" thing in genetic algorithms. Premature convergence is a situation where the whole population converges to a single solution, or to a set of solutions which are near each other, and which is/are sub-optimal (i.e. not the best possible solution). When this happens, the GA has a very hard time escaping this local optimum. You can try to tweak the parameters, especially the mutation probability, to force more exploration.
Also, another very important thing that can cause problems is the encoding, i.e. how the bit string is mapped to the circle. If the encoding is too indirect, it can lead to poor performance of the GA. GAs work when there are building blocks in the genotype which can be exchanged among the population. If there are no such blocks, the performance of a GA is usually going to be poor.
I have implemented this exercise and achieved good results. Here is the link:
https://github.com/ManhTruongDang/ai-junkie
Hope this can be of use to you.
Given two points P, Q and a delta, I defined the equivalence relation ~=, where P ~= Q if EuclideanDistance(P,Q) <= delta. Now, given a set S of n points, in the example S = (A, B, C, D, E, F) and n = 6 (the fact that the points are actually endpoints of segments is negligible), is there an algorithm with better than O(n^2) average-case complexity to find a partition of the set (the representative element of each subset is unimportant)?
My attempts to find a theoretical definition of this problem have been unsuccessful so far: k-means clustering, nearest neighbor search and others seem to me to be different problems. The picture shows what I need to do in my application.
Any hint? Thanks
EDIT: while the actual problem (clustering nearby points given some kind of invariant) should be solvable in better than O(n^2) in the average case, there's a serious flaw in my problem definition: ~= is not an equivalence relation, for the simple reason that it doesn't respect the transitive property. I think this is the main reason this problem is not easy to solve and needs advanced techniques. I will post my actual solution very soon: it should work when all nearby points satisfy ~= as defined. It can fail when points that are poles apart don't satisfy the relation but are each in relation with the center of gravity of the clustered points. It works well with my input data space; it may not with yours. Does anyone know a full formal treatment of this problem (with a solution)?
One way to restate the problem is as follows: given a set of n 2D points, for each point p find the set of points that are contained within the circle of radius delta centred at p.
A naive linear search gives the O(n^2) algorithm you allude to.
It seems to me that this is the best one can do in the worst case. When all points in the set are contained within a circle of diameter <= delta, each of n queries would have to return O(n) points, giving an O(n^2) overall complexity.
However, one should be able to do better on more reasonable datasets.
Take a look at this (esp. the section on space partitioning) and KD-trees. The latter should give you a sub-O(n^2) algorithm in reasonable cases.
There might be a different way of looking at the problem, one that would give better complexity; I can't think of anything off the top of my head.
Definitely a problem for Quadtree.
You could also try sorting on each coordinate and playing with these two lists (sorting is n*log(n)), and then you only have to check points that satisfy dx <= delta && dy <= delta. You could also put them in a sorted list with two levels of pointers: one for traversing along OX and another along OY.
For each point, calculate the distance D(n) from the origin; this is an O(n) operation.
Use an O(n^2) algorithm to find matches where D(a-b) < delta, skipping pairs where |D(a)-D(b)| > delta (no two such points can be within delta of each other).
The result, on average, should be better than O(n^2) thanks to the (hopefully large) number of pairs skipped.
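A rough sketch of that pruning idea (in Java just to show the logic; the little point class is an assumption). It relies on the triangle inequality: if two points' distances from the origin already differ by more than delta, the points themselves cannot be within delta of each other, so after sorting by distance from the origin the inner loop can stop early.
import java.util.*;

class Point2D {
    final double x, y;
    Point2D(double x, double y) { this.x = x; this.y = y; }
    double distanceTo(Point2D other) {
        return Math.hypot(x - other.x, y - other.y);
    }
}

public class NeighborPairs {
    // Returns all pairs of points at Euclidean distance <= delta.
    public static List<Point2D[]> closePairs(List<Point2D> points, double delta) {
        final Point2D origin = new Point2D(0, 0);
        List<Point2D> sorted = new ArrayList<>(points);
        sorted.sort(Comparator.comparingDouble(p -> p.distanceTo(origin)));
        List<Point2D[]> pairs = new ArrayList<>();
        for (int i = 0; i < sorted.size(); i++) {
            double di = sorted.get(i).distanceTo(origin);
            for (int j = i + 1; j < sorted.size(); j++) {
                // |D(a) - D(b)| <= D(a - b): once the origin distances differ by
                // more than delta, no later point can be within delta of point i.
                if (sorted.get(j).distanceTo(origin) - di > delta) {
                    break;
                }
                if (sorted.get(i).distanceTo(sorted.get(j)) <= delta) {
                    pairs.add(new Point2D[] { sorted.get(i), sorted.get(j) });
                }
            }
        }
        return pairs;
    }
}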
This is a C# KdTree implementation that should solve the problem "find all neighbors of a point P within a delta". It makes heavy use of functional programming techniques (yes, I love Python). It's tested, but I still have some doubts about my understanding of _TreeFindNearest(). The code (or pseudo code) to solve the problem "partition a set of n points given a ~= relation in better than O(n^2) in the average case" is posted in another answer.
/*
Stripped C# 2.0 port of ``kdtree'', a library for working with kd-trees.
Copyright (C) 2007-2009 John Tsiombikas <nuclear#siggraph.org>
Copyright (C) 2010 Francesco Pretto <ceztko#gmail.com>
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
OF SUCH DAMAGE.
*/
using System;
using System.Collections.Generic;
using System.Text;
namespace ITR.Data.NET
{
public class KdTree<T>
{
#region Fields
private Node _Root;
private int _Count;
private int _Dimension;
private CoordinateGetter<T>[] _GetCoordinate;
#endregion // Fields
#region Constructors
public KdTree(params CoordinateGetter<T>[] coordinateGetters)
{
_Dimension = coordinateGetters.Length;
_GetCoordinate = coordinateGetters;
}
#endregion // Constructors
#region Public methods
public void Insert(T location)
{
_TreeInsert(ref _Root, 0, location);
_Count++;
}
public void InsertAll(IEnumerable<T> locations)
{
foreach (T location in locations)
Insert(location);
}
public IEnumerable<T> FindNeighborsRange(T location, double range)
{
return _TreeFindNeighborsRange(_Root, 0, location, range);
}
#endregion // Public methods
#region Tree traversal
private void _TreeInsert(ref Node current, int currentPlane, T location)
{
if (current == null)
{
current = new Node(location);
return;
}
int nextPlane = (currentPlane + 1) % _Dimension;
if (_GetCoordinate[currentPlane](location) <
_GetCoordinate[currentPlane](current.Location))
_TreeInsert(ref current._Left, nextPlane, location);
else
_TreeInsert(ref current._Right, nextPlane, location);
}
private IEnumerable<T> _TreeFindNeighborsRange(Node current, int currentPlane,
T referenceLocation, double range)
{
if (current == null)
yield break;
double squaredDistance = 0;
for (int it = 0; it < _Dimension; it++)
{
double referenceCoordinate = _GetCoordinate[it](referenceLocation);
double currentCoordinate = _GetCoordinate[it](current.Location);
squaredDistance +=
(referenceCoordinate - currentCoordinate)
* (referenceCoordinate - currentCoordinate);
}
if (squaredDistance <= range * range)
yield return current.Location;
double coordinateRelativeDistance =
_GetCoordinate[currentPlane](referenceLocation)
- _GetCoordinate[currentPlane](current.Location);
Direction nextDirection = coordinateRelativeDistance <= 0.0
? Direction.LEFT : Direction.RIGHT;
int nextPlane = (currentPlane + 1) % _Dimension;
IEnumerable<T> subTreeNeighbors =
_TreeFindNeighborsRange(current[nextDirection], nextPlane,
referenceLocation, range);
foreach (T location in subTreeNeighbors)
yield return location;
if (Math.Abs(coordinateRelativeDistance) <= range)
{
subTreeNeighbors =
_TreeFindNeighborsRange(current.GetOtherChild(nextDirection),
nextPlane, referenceLocation, range);
foreach (T location in subTreeNeighbors)
yield return location;
}
}
#endregion // Tree traversal
#region Node class
public class Node
{
#region Fields
private T _Location;
internal Node _Left;
internal Node _Right;
#endregion // Fields
#region Constructors
internal Node(T nodeValue)
{
_Location = nodeValue;
_Left = null;
_Right = null;
}
#endregion // Contructors
#region Children Indexers
public Node this[Direction direction]
{
get { return direction == Direction.LEFT ? _Left : Right; }
}
public Node GetOtherChild(Direction direction)
{
return direction == Direction.LEFT ? _Right : _Left;
}
#endregion // Children Indexers
#region Properties
public T Location
{
get { return _Location; }
}
public Node Left
{
get { return _Left; }
}
public Node Right
{
get { return _Right; }
}
#endregion // Properties
}
#endregion // Node class
#region Properties
public int Count
{
get { return _Count; }
set { _Count = value; }
}
public Node Root
{
get { return _Root; }
set { _Root = value; }
}
#endregion // Properties
}
#region Enums, delegates
public enum Direction
{
LEFT = 0,
RIGHT
}
public delegate double CoordinateGetter<T>(T location);
#endregion // Enums, delegates
}
The following C# method, together with the KdTree class, a Join() method (enumerates all the collections passed as arguments) and a Shuffled() method (returns a shuffled version of the passed collection), solves the problem in my question. There may be some flawed cases (read the EDITs in the question) when referenceVectors are the same vectors as vectorsToRelocate, as they are in my problem.
public static Dictionary<Vector2D, Vector2D> FindRelocationMap(
IEnumerable<Vector2D> referenceVectors,
IEnumerable<Vector2D> vectorsToRelocate)
{
Dictionary<Vector2D, Vector2D> ret = new Dictionary<Vector2D, Vector2D>();
// Preliminary filling
IEnumerable<Vector2D> allVectors =
Utils.Join(referenceVectors, vectorsToRelocate);
foreach (Vector2D vector in allVectors)
ret[vector] = vector;
KdTree<Vector2D> kdTree = new KdTree<Vector2D>(
delegate(Vector2D vector) { return vector.X; },
delegate(Vector2D vector) { return vector.Y; });
kdTree.InsertAll(Utils.Shuffled(ret.Keys));
HashSet<Vector2D> relocatedVectors = new HashSet<Vector2D>();
foreach (Vector2D vector in referenceVectors)
{
if (relocatedVectors.Contains(vector))
continue;
relocatedVectors.Add(vector);
IEnumerable<Vector2D> neighbors =
kdTree.FindNeighborsRange(vector, Tolerances.EUCLID_DIST_TOLERANCE);
foreach (Vector2D neighbor in neighbors)
{
ret[neighbor] = vector;
relocatedVectors.Add(neighbor);
}
}
return ret;
}
Possible Duplicate:
Fastest way to determine if an integer's square root is an integer
What's a way to see if a number is a perfect square?
bool IsPerfectSquare(long input)
{
// TODO
}
I'm using C# but this is language agnostic.
Bonus points for clarity and simplicity (this isn't meant to be code-golf).
Edit: This got much more complex than I expected! It turns out the problems with double precision manifest themselves in a couple of ways. First, Math.Sqrt takes a double, which can't precisely hold a long (thanks Jon).
Second, a double will lose tiny differences (.000...00001) when you have a huge, near-perfect square. E.g., my implementation failed this test for Math.Pow(10,18)+1 (mine reported true).
bool IsPerfectSquare(long input)
{
long closestRoot = (long) Math.Sqrt(input);
return input == closestRoot * closestRoot;
}
This may get away from some of the problems of just checking "is the square root an integer" but possibly not all. You potentially need to get a little bit funkier:
bool IsPerfectSquare(long input)
{
double root = Math.Sqrt(input);
long rootBits = BitConverter.DoubleToInt64Bits(root);
long lowerBound = (long) BitConverter.Int64BitsToDouble(rootBits-1);
long upperBound = (long) BitConverter.Int64BitsToDouble(rootBits+1);
for (long candidate = lowerBound; candidate <= upperBound; candidate++)
{
if (candidate * candidate == input)
{
return true;
}
}
return false;
}
Icky, and unnecessary for anything other than really large values, but I think it should work...
bool IsPerfectSquare(long input)
{
long SquareRoot = (long) Math.Sqrt(input);
return ((SquareRoot * SquareRoot) == input);
}
In Common Lisp, I use the following:
(defun perfect-square-p (n)
(= (expt (isqrt n) 2)
n))