Approximate matching of two lists of events (with duration) - algorithm

I have a black box algorithm that analyses a time series and "detects" certain events in the series. It returns a list of events, each containing a start time and end time. The events do not overlap.
I also have a list of the "true" events, again with start time and end time for each event, not overlapping.
I want to compare the two lists and match detected and true events that fall within a certain time tolerance (True Positives). The complication is that the algorithm may detect events that are not really there (False Positives) or might miss events that were there (False Negatives).
What is an algorithm that optimally pairs events from the two lists and leaves the remaining events unpaired? I am pretty sure I am not the first one to tackle this problem and that such a method exists, but I haven't been able to find it, perhaps because I do not know the right terminology.
Speed requirement:
The lists will contain no more than a few hundred entries, and speed is not a major factor. Accuracy is more important. Anything taking less than a few seconds on an ordinary computer will be fine.

Here's a quadratic-time algorithm that gives a maximum likelihood estimate with respect to the following model. Let A1 < ... < Am be the true intervals and let B1 < ... < Bn be the reported intervals. The quantity sub(i, j) is the log-likelihood that Ai becomes Bj. The quantity del(i) is the log-likelihood that Ai is deleted. The quantity ins(j) is the log-likelihood that Bj is inserted. Make independence assumptions everywhere! I'm going to choose sub, del, and ins so that, for every i < i' and every j < j', we have
sub(i, j') + sub(i', j) <= max { sub(i, j)  + sub(i', j'),
                                 del(i) + ins(j') + sub(i', j),
                                 sub(i, j') + del(i') + ins(j) }.
This ensures that the optimal matching between intervals is noncrossing and thus that we can use the following Levenshtein-like dynamic program.
The dynamic program is presented as a memoized recursive function, score(i, j), that computes the optimal score of matching A1, ..., Ai with B1, ..., Bj. The root of the call tree is score(m, n). It can be modified to return the sequence of sub(i, j) operations in the optimal solution.
score(i, j) | i == 0 && j == 0 = 0
            | i  > 0 && j == 0 = del(i) + score(i - 1, 0)
            | i == 0 && j  > 0 = ins(j) + score(0, j - 1)
            | i  > 0 && j  > 0 = max { sub(i, j) + score(i - 1, j - 1),
                                       del(i) + score(i - 1, j),
                                       ins(j) + score(i, j - 1) }
Here are some possible definitions for sub, del, and ins. I'm not sure if they will be any good; you may want to multiply their values by constants or use powers other than 2. If Ai = [s, t] and Bj = [u, v], then define
sub(i, j) = -(|u - s|^2 + |v - t|^2)
del(i) = -(t - s)^2
ins(j) = -(v - u)^2.
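To make this concrete, here is a small Python sketch of the dynamic program with the quadratic costs suggested above (the function names, the use of functools.lru_cache, and the backtracking step are my own choices, not part of the original formulation):

from functools import lru_cache

def match_intervals(A, B):
    # A: true intervals, B: reported intervals, both lists of (start, end) tuples
    # sorted by start time. Returns (best score, list of matched index pairs).
    def sub(i, j):
        (s, t), (u, v) = A[i], B[j]
        return -((u - s) ** 2 + (v - t) ** 2)

    def dele(i):
        s, t = A[i]
        return -((t - s) ** 2)

    def ins(j):
        u, v = B[j]
        return -((v - u) ** 2)

    @lru_cache(maxsize=None)
    def score(i, j):
        # Optimal score of matching A[0..i-1] against B[0..j-1].
        if i == 0 and j == 0:
            return 0
        best = float("-inf")
        if i > 0 and j > 0:
            best = max(best, sub(i - 1, j - 1) + score(i - 1, j - 1))
        if i > 0:
            best = max(best, dele(i - 1) + score(i - 1, j))
        if j > 0:
            best = max(best, ins(j - 1) + score(i, j - 1))
        return best

    # Backtrack to recover which (true, reported) pairs were matched.
    pairs, i, j = [], len(A), len(B)
    while i > 0 or j > 0:
        s = score(i, j)
        if i > 0 and j > 0 and s == sub(i - 1, j - 1) + score(i - 1, j - 1):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif i > 0 and s == dele(i - 1) + score(i - 1, j):
            i -= 1
        else:
            j -= 1
    return score(len(A), len(B)), pairs[::-1]

For example, match_intervals([(0, 5), (10, 12)], [(1, 6), (20, 22)]) pairs the first true interval with the first reported interval and leaves the other two unmatched (one deletion, one insertion).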
(Apologies to the undoubtedly extant academic who published something like this in the bioinformatics literature many decades ago.)

Related

Product of consecutive numbers f(n) = n(n-1)(n-2)(n-3)(n- ...) find the value of n

Is there a way to programmatically find such consecutive natural numbers?
On the Internet I found some examples using either factorization or polynomial solving.
Example 1
For n(n−1)(n−2)(n−3) = 840
n = 7, -4, (3+i√111)/2, (3-i√111)/2
Example 2
For n(n−1)(n−2)(n−3) = 1680
n = 8, −5, (3+i√159)/2, (3-i√159)/2
Both of those examples give 4 results (because both are 4th-degree equations), but for my use case I'm only interested in the natural value. Also, the solution should work for any number of consecutive factors, in other words, n(n−1)(n−2)(n−3)(n−4)...
The solution can be an algorithm or come from any open math library. The parameters passed to the algorithm will be the product and the degree (sequence size); for the two examples above the products are 840 and 1680 and the degree is 4 for both.
Thank you
If you're interested only in a natural "n" solution, then this reasoning may help:
Let's say n(n-1)(n-2)(n-3)...(n-k) = A
The solution n = s then satisfies:
remainder of A/s = 0
remainder of A/(s-1) = 0
remainder of A/(s-2) = 0
and so on
Now, we see that s is on the order of t = A^(1/k): A is roughly a product of factors that are each close to s. So we can start at v = t - k and finish at v = t + 1; the solution will lie between these two values.
So the algorithm may be, roughly:

s = 0
t = (int) (A^(1/k))   // the truncation may leave out the case v = t+1; the loop bound covers it
theLoop:
for (v = t-k to t+1, step = +1)
{
    i = 0
    while (i <= k)
    {
        if (A % (v - k + i) > 0)   // % operator gives the remainder
            continue theLoop       // v is not a solution, try the next candidate
        i = i + 1
    }
    // all k+1 factors divide A, solution found
    s = v
    break
}
if (s == 0)
    no natural solution
Assuming that:
n is an integer, and
n > 0, and
k < n
Then approximately:
n = FLOOR( product ** (1/(k+1)) + (k+1)/2 )
The only cases I have found where this isn't exactly right are when k is very close to n. You can of course check it by back-calculating the product and seeing if it matches. If not, n is almost certainly only 1 or 2 higher than this estimate, so just keep incrementing n until the product matches. (I can write this up in pseudocode if you need it.)
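A small Python sketch combining this estimate with the back-check (the adjustment loops are my own addition to cover the off-by-one-or-two cases mentioned above; here degree = k + 1 is the number of factors):

def find_n(product, degree):
    # Returns natural n with n*(n-1)*...*(n-degree+1) == product, or None.
    def falling(n):
        p = 1
        for i in range(degree):
            p *= (n - i)
        return p

    n = int(product ** (1.0 / degree) + degree / 2)  # the estimate above, with k+1 = degree
    n = max(n, degree)                               # need n >= degree so every factor is positive

    # The falling product is increasing in n, so nudge the estimate until it fits.
    while n > degree and falling(n) > product:
        n -= 1
    while falling(n) < product:
        n += 1
    return n if falling(n) == product else None

# find_n(840, 4) == 7, find_n(1680, 4) == 8, find_n(841, 4) is None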

How to determine probability of 0 to N events occurring given probability of each of those N events?

First time posting here, so if I make a mistake with something let me know and I'd be more than happy to fix it!
Given N events, each of which have an individual probability (from 0 to 100%) of occurring, I'd like to determine the probability of 0 to N of those events occurring together.
For example, say I have events 1, 2, 3, ..., N (E1, E2, E3, ..., EN), where the individual probability of each event occurring is as follows:
E1 = 30% probability of occurring
E2 = 40% probability of occurring
E3 = 50% probability of occurring
...
EN = x% probability of occurring
I'd like to know the probability of having:
none of these events occurring
any 1 of these events occurring
any 2 of these events occurring
any 3 of these events occurring
...
all N of these events occurring
I understand that the probability of 0 events occurring is (1-E1)(1-E2)...(1-EN) and that the probability of all N events occurring is E1*E2*...*EN. However, I do not know how to calculate the other possibilities (1 to N-1 events occurring).
I have been looking for some recursive algorithm (binomial compound distribution) that could solve this but I have not found any explicit formula that does this. Wondering if any of you guys could help!
Thanks in advance!
EDIT: The events are indeed independent.
Sounds like the Poisson binomial distribution (see the Wikipedia article on it).
There's an explicit recursive formula, but beware of numerical stability.
Something like the following recursive program should work.
function ans = probability_vector(probabilities)
    if length(probabilities) == 0
        % No events can happen.
        ans = [1];
    elseif length(probabilities) == 1
        % 0 or 1 events can happen.
        ans = [1 - probabilities(1), probabilities(1)];
    else
        % Split the events in half, solve each half, and combine:
        % convolving the two vectors adds up the ways of getting k events in total.
        half = ceil(length(probabilities) / 2);
        ans_half1 = probability_vector(probabilities(1:half));
        ans_half2 = probability_vector(probabilities(half + 1:end));
        ans = conv(ans_half1, ans_half2);
    end
end
And if p is a probability vector, then p[i+1] is the probability of i of the events happening.
See http://matlabtricks.com/post-3/the-basics-of-convolution for an explanation of the magic conv operator that does the meat of the work.
You need to compute your own version of Pascal's triangle, with probabilities (instead of counts) in each location. Row 0 will be the single figure 1.00; row 1 consists of two values, P(E1) and 1-P(E1). Below that, in row k, each position is P(Ek)[above-right entry] + (1-P(Ek))[above-left entry]. I recommend a lower-triangular matrix for this, something like:
1.00
0.30 0.70
0.12 0.46 0.42 # These are 0.3*0.4 | 0.3*0.6 + 0.7*0.4 | 0.7*0.6
0.06 0.29 0.44 0.21 # 0.12*0.5 | 0.12*0.5 + 0.46*0.5 | ...
See how it works? In array / matrix notation for a matrix M, given event probabilities in vector P, this looks something like
M[k, i] = P[k] * M[k-1, i] + (1 - P[k]) * M[k-1, i-1]
(entries outside the triangle count as 0)
The above is a nice recursive definition. Note that my earlier "above-right" reference becomes, in the lower-triangular layout, simply the entry directly above (row k-1, column i); "above-left" is row k-1, column i-1.
When you're done, the bottom row of the matrix will hold the probabilities of getting N, N-1, N-2, ..., 0 of the events. If you want these probabilities in the opposite order, simply swap the coefficients P[k] and 1-P[k].
Does that get you moving toward a solution?
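A minimal Python sketch of this row-by-row construction (my own naming; note that here q[i] is the probability of exactly i events, i.e. the reverse of the row order described above):

def exact_count_probabilities(probs):
    # q[i] = probability that exactly i of the independent events occur.
    row = [1.0]                               # row 0: zero events with probability 1
    for p in probs:
        new_row = [0.0] * (len(row) + 1)
        for i, prob_i in enumerate(row):
            new_row[i] += (1 - p) * prob_i    # this event does not occur
            new_row[i + 1] += p * prob_i      # this event occurs
        row = new_row
    return row

# exact_count_probabilities([0.3, 0.4, 0.5]) -> approximately [0.21, 0.44, 0.29, 0.06]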
After tons of research and some help from the answers here, I've come up with the following code:
function [ prob_numSites ] = probability_activationSite( prob_distribution_site )
    N = length(prob_distribution_site); % number of events
    notProb = 1 - prob_distribution_site; % probability of each event not occurring
    syms x; % create symbolic variable
    prob_number = 1; % initializing prob_number to 1
    for i = 1:N
        prob_number = prob_number*(prob_distribution_site(i)*x + notProb(i));
    end
    prob_number_polynomial = expand(prob_number); % expands the function into a polynomial
    revProb_numSites = coeffs(prob_number_polynomial); % coefficients of the above polynomial (i.e. probabilities of 0 to N events, where the first coefficient is N events occurring and the last coefficient is 0 events occurring)
    prob_numSites = fliplr(revProb_numSites); % reverses the order of the coefficients
This takes in the probabilities of the individual events occurring and returns an array of the probabilities of 0 to N events occurring.
(This answer helped a lot).
None of these answers seemed to work or was understandable for me, so I worked it out and implemented it myself in Python:

import itertools
import numpy as np

def combin(n, k):
    # binomial coefficient "n choose k", integer arithmetic only
    if k > n//2:
        k = n-k
    x = 1
    y = 1
    i = n-k+1
    while i <= n:
        x = (x*i)//y
        y += 1
        i += 1
    return x

# proba is the list of probabilities of the N events, each independent of the others
# (example values taken from the question)
proba = [0.3, 0.4, 0.5]
N = len(proba)
sums = [0.0] * (N + 1)

# sums[i] = sum over all unordered i-subsets of events of the product of their probabilities
for i in range(N, 0, -1):
    for j in itertools.combinations(proba, i):
        sums[i] += np.prod(j)

# correct each sums[i] down to the probability of *exactly* i events occurring,
# working from N downwards so that every sums[j] with j > i is already exact
for i in range(N, 0, -1):
    for j in range(i+1, N+1):
        icomb = combin(j, i)
        sums[i] -= icomb * sums[j]
The math is not super simple. Let $S_i$ be the sum, over all unordered subsets $w$ of $i$ events, of the product of their probabilities:
$S_i = \sum_{w : |w| = i} \prod_{j \in w} p_j$
Then the probability $Co(i, P)$ of exactly $i$ events occurring, given the Bernoulli probabilities $P = \{p_1, \dots, p_n\}$ of the events, satisfies
$Co(i, P) = S_i - \sum_{u = i+1}^{n} \binom{u}{i}\, Co(u, P)$
which is what the second loop above computes, starting from $Co(n, P) = S_n$.

Dynamic Programming and Probability

I've been staring at this problem for hours and I'm still as lost as I was at the beginning. It's been a while since I took discrete math or statistics so I tried watching some videos on youtube, but I couldn't find anything that would help me solve the problem in less than what seems to be exponential time. Any tips on how to approach the problem below would be very much appreciated!
A certain species of fern thrives in lush rainy regions, where it typically rains almost every day.
However, a drought is expected over the next n days, and a team of botanists is concerned about
the survival of the species through the drought. Specifically, the team is convinced of the following
hypothesis: the fern population will survive if and only if it rains on at least n/2 days during the
n-day drought. In other words, for the species to survive there must be at least as many rainy days
as non-rainy days.
Local weather experts predict that the probability that it rains on a day i ∈ {1, . . . , n} is
pi ∈ [0, 1], and that these n random events are independent. Assuming both the botanists and
weather experts are correct, show how to compute the probability that the ferns survive the drought.
Your algorithm should run in time O(n^2).
Have an n×(n + 1) matrix such that C[i][j] denotes the probability that after the i-th day there have been j rainy days (i runs from 1 to n, j runs from 0 to n). Initialize:
C[1][0] = 1 - p[1]
C[1][1] = p[1]
C[1][j] = 0 for j > 1
Now loop over the days and set the values of the matrix like this:
C[i][0] = (1 - p[i]) * C[i-1][0]
C[i][j] = (1 - p[i]) * C[i-1][j] + p[i] * C[i - 1][j - 1] for j > 0
Finally, sum the values from C[n][⌈n/2⌉] to C[n][n] to get the probability of fern survival.
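A short Python sketch of this bottom-up table (0-based indexing and the names are mine; it keeps only the current row, which does not change the O(n^2) running time):

def survival_probability(p):
    # p[i] = probability of rain on day i. Returns P(at least ceil(n/2) rainy days).
    n = len(p)
    C = [1.0] + [0.0] * n              # C[j] = P(exactly j rainy days so far)
    for i in range(n):
        new_C = [0.0] * (n + 1)
        for j in range(i + 2):
            new_C[j] = (1 - p[i]) * C[j]
            if j > 0:
                new_C[j] += p[i] * C[j - 1]
        C = new_C
    return sum(C[(n + 1) // 2:])

# survival_probability([0.2, 0.4, 0.6, 0.8]) is about 0.7152 (at least 2 rainy days out of 4)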
Dynamic programming problems can be solved in a top down or bottom up fashion.
You've already had the bottom up version described. To do the top-down version, write a recursive function, then add a caching layer so you don't recompute any results that you already computed. In pseudo-code:
cache = {}
function whatever(args)
if args not in cache
compute result
cache[args] = result
return cache[args]
This process is called "memoization" and many languages have ways of automatically memoizing things.
Here is a Python implementation of this specific example:
def prob_survival(daily_probabilities):
    days = len(daily_probabilities)
    days_needed = days / 2
    # An inner function to do the calculation.
    cached_odds = {}
    def prob_survival(day, rained):
        if days_needed <= rained:
            return 1.0
        elif days <= day:
            return 0.0
        elif (day, rained) not in cached_odds:
            p = daily_probabilities[day]
            p_a = p * prob_survival(day+1, rained+1)
            p_b = (1 - p) * prob_survival(day+1, rained)
            cached_odds[(day, rained)] = p_a + p_b
        return cached_odds[(day, rained)]
    return prob_survival(0, 0)
And then you would call it as follows:
print(prob_survival([0.2, 0.4, 0.6, 0.8]))

Optimized way of finding similar strings

Suppose I have a large list of words (about 4-5 thousand and increasing). Someone searched for a string, but the string was not found in the wordlist. What would be the best and most optimized way to find words similar to the input string? The first thing that came to my mind was calculating the Levenshtein distance between each entry of the wordlist and the input string. But is that the optimal way to do it?
(Note that, this is not language-specific question)
EDIT: new solution
Yes, calculating Levenshtein distances between your input and the word list can be a reasonable approach, but takes a lot of time. BK-trees can improve this, but they become slow quickly as the Levenshtein distance becomes bigger. It seems we can speed up the Levenshtein distance calculations using a trie, as described in this excellent blog post:
Fast and Easy Levenshtein distance using a Trie
It relies on the fact that the dynamic programming lookup table for Levenshtein distance has common rows in different invocations i.e. levenshtein(kate,cat) and levenshtein(kate,cats).
Running the Python program given on that page with the TWL06 dictionary gives:
> python dict_lev.py HACKING 1
Read 178691 words into 395185 nodes
('BACKING', 1)
('HACKING', 0)
('HACKLING', 1)
('HANKING', 1)
('HARKING', 1)
('HAWKING', 1)
('HOCKING', 1)
('JACKING', 1)
('LACKING', 1)
('PACKING', 1)
('SACKING', 1)
('SHACKING', 1)
('RACKING', 1)
('TACKING', 1)
('THACKING', 1)
('WHACKING', 1)
('YACKING', 1)
Search took 0.0189998 s
That's really fast, and would be even faster in other languages. Most of the time is spent in building the trie, which is irrelevant as it needs to be done just once and stored in memory.
The only minor downside to this is that tries take up a lot of memory (which can be reduced with a DAWG, at the cost of some speed).
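For reference, here is a compact Python sketch in the spirit of that post (this is my own minimal re-implementation, not the program from the blog; the row-sharing idea is the same):

class TrieNode:
    def __init__(self):
        self.children = {}
        self.word = None                # set to the full word at terminal nodes

def build_trie(words):
    root = TrieNode()
    for word in words:
        node = root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.word = word
    return root

def search(root, query, max_dist):
    # Return all (word, distance) pairs within max_dist of query.
    results = []
    first_row = list(range(len(query) + 1))

    def recurse(node, ch, prev_row):
        # Compute one new row of the edit-distance table for this trie edge.
        row = [prev_row[0] + 1]
        for col in range(1, len(query) + 1):
            insert_cost = row[col - 1] + 1
            delete_cost = prev_row[col] + 1
            replace_cost = prev_row[col - 1] + (query[col - 1] != ch)
            row.append(min(insert_cost, delete_cost, replace_cost))
        if node.word is not None and row[-1] <= max_dist:
            results.append((node.word, row[-1]))
        if min(row) <= max_dist:        # prune: no word below this node can do better
            for next_ch, child in node.children.items():
                recurse(child, next_ch, row)

    for ch, child in root.children.items():
        recurse(child, ch, first_row)
    return results

# search(build_trie(["hacking", "backing", "cat"]), "hacking", 1)
# -> [('hacking', 0), ('backing', 1)] (order may vary)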
Another approach: Peter Norvig has a great article (with complete source code) on spelling correction.
http://norvig.com/spell-correct.html
The idea is to build possible edits of the words, and then choose the most likely spelling correction of that word.
I think that something better than this exists, but BK-trees are a good optimization over brute force at least.
It uses the fact that Levenshtein distance is a metric: if the distance between your query and an arbitrary string s from the dict is d, then all your results must be at a distance between (d-n) and (d+n) from s. Here n is the maximum Levenshtein distance from the query that you want to output.
It's explained in detail here: http://blog.notdot.net/2007/4/Damn-Cool-Algorithms-Part-1-BK-Trees
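Here is a small BK-tree sketch in Python (my own minimal implementation of the idea described in that post, with a plain dynamic-programming Levenshtein distance):

def levenshtein(a, b):
    # Plain O(len(a) * len(b)) edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(cur[j - 1] + 1, prev[j] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

class BKTree:
    def __init__(self, words):
        it = iter(words)
        self.root = (next(it), {})          # node = (word, children keyed by distance)
        for w in it:
            self._add(w)

    def _add(self, word):
        node, children = self.root
        while True:
            d = levenshtein(word, node)
            if d == 0:
                return                      # already present
            if d not in children:
                children[d] = (word, {})
                return
            node, children = children[d]

    def query(self, word, max_dist):
        # All stored words within max_dist of word, pruned via the triangle inequality.
        results, stack = [], [self.root]
        while stack:
            node, children = stack.pop()
            d = levenshtein(word, node)
            if d <= max_dist:
                results.append((node, d))
            # Only subtrees whose edge distance lies in [d - max_dist, d + max_dist] can match.
            for edge, child in children.items():
                if d - max_dist <= edge <= d + max_dist:
                    stack.append(child)
        return results

# BKTree(["hacking", "backing", "cat"]).query("hacking", 1) -> [('hacking', 0), ('backing', 1)]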
If you are interested in the code itself, I implemented an algorithm for finding the optimal alignment between two strings. It basically shows how to transform one string into the other with k operations (where k is the Levenshtein/edit distance of the strings). It could be simplified a bit for your needs (as you only need the distance itself). By the way, it works in O(mn) time, where m and n are the lengths of the strings. My implementation is based on: this and this.
#optimalization: using int instead of strings:
#1 ~ "left", "insertion"
#7 ~ "up", "deletion"
#17 ~ "up-left", "match/mismatch"
def GetLevenshteinDistanceWithBacktracking(sequence1, sequence2):
    distances = [[0 for y in range(len(sequence2)+1)] for x in range(len(sequence1)+1)]
    backtracking = [[1 for y in range(len(sequence2)+1)] for x in range(len(sequence1)+1)]
    for i in range(1, len(sequence1)+1):
        distances[i][0] = i
    for i in range(1, len(sequence2)+1):
        distances[0][i] = i
    for j in range(1, len(sequence2)+1):
        for i in range(1, len(sequence1)+1):
            if sequence1[i-1] == sequence2[j-1]:
                distances[i][j] = distances[i-1][j-1]
                backtracking[i][j] = 17
            else:
                deletion = distances[i-1][j] + 1
                substitution = distances[i-1][j-1] + 1
                insertion = distances[i][j-1] + 1
                distances[i][j] = min(deletion, substitution, insertion)
                if distances[i][j] == deletion:
                    backtracking[i][j] = 7
                elif distances[i][j] == insertion:
                    backtracking[i][j] = 1
                else:
                    backtracking[i][j] = 17
    return (distances[len(sequence1)][len(sequence2)], backtracking)
def Alignment(sequence1, sequence2):
    cost, backtracking = GetLevenshteinDistanceWithBacktracking(sequence1, sequence2)
    alignment1 = alignment2 = ""
    i = len(sequence1)
    j = len(sequence2)
    # from the backtracking matrix we recover the optimal alignment
    while i > 0 or j > 0:
        if i > 0 and j > 0 and backtracking[i][j] == 17:
            alignment1 = sequence1[i-1] + alignment1
            alignment2 = sequence2[j-1] + alignment2
            i -= 1
            j -= 1
        elif i > 0 and backtracking[i][j] == 7:
            alignment1 = sequence1[i-1] + alignment1
            alignment2 = "-" + alignment2
            i -= 1
        elif j > 0 and backtracking[i][j] == 1:
            alignment2 = sequence2[j-1] + alignment2
            alignment1 = "-" + alignment1
            j -= 1
        elif i > 0:
            alignment1 = sequence1[i-1] + alignment1
            alignment2 = "-" + alignment2
            i -= 1
    return (cost, (alignment1, alignment2))
It depends on the broader context and how accurate you want to be. But here is what I would (probably) start with (a sketch of these filters follows below):
Only consider the subset of words that start with the same character as the query word. That alone would cut the work by a factor of roughly 20 for a single query.
I would categorize words according to their lengths and allow a different maximal distance for each category. With 4 categories, e.g.:
0 -- if length is between 0 and 2; 1 -- if length is between 3 and 5; 2 -- if length is between 6 and 8; 3 -- if length is 9+. Then, based on the query length, you could check only the words from the relevant category. Moreover, it should not be hard to make the algorithm stop as soon as the maximal distance has been exceeded.
If needed, I would start to think about some machine learning approach.
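A rough Python sketch of these pre-filters (the length buckets are the ones suggested above; the length-difference check is an extra cheap bound I added, since the edit distance is at least the difference in lengths):

def max_allowed_distance(length):
    # Length buckets from the answer: longer words tolerate more edits.
    if length <= 2:
        return 0
    if length <= 5:
        return 1
    if length <= 8:
        return 2
    return 3

def candidate_matches(query, words):
    # Cheap filters to run before the (expensive) edit-distance computation.
    limit = max_allowed_distance(len(query))
    for word in words:
        if not word or word[0] != query[0]:            # same-first-character filter
            continue
        if abs(len(word) - len(query)) > limit:        # length difference already too big
            continue
        yield word, limit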

How would you look at developing an algorithm for this hotel problem?

There is a problem I am working on for a programming course and I am having trouble developing an algorithm to suit the problem. Here it is:
You are going on a long trip. You start on the road at mile post 0. Along the way there are n
hotels, at mile posts a1 < a2 < ... < an, where each ai is measured from the starting point. The
only places you are allowed to stop are at these hotels, but you can choose which of the hotels
you stop at. You must stop at the final hotel (at distance an), which is your destination.
You'd ideally like to travel 200 miles a day, but this may not be possible (depending on the spacing
of the hotels). If you travel x miles during a day, the penalty for that day is (200 - x)^2. You want
to plan your trip so as to minimize the total penalty that is, the sum, over all travel days, of the
daily penalties.
Give an efficient algorithm that determines the optimal sequence of hotels at which to stop.
So, my intuition tells me to start from the back, checking penalty values, then somehow match them going back in the forward direction (resulting in an O(n^2) runtime, which is efficient enough for the situation).
Does anyone see any possible way to make this idea work out, or have any ideas on possible implementations?
If x is a marker number, ax is the mileage to that marker, and px is the minimum penalty to get to that marker, you can calculate pn for marker n if you know pm for all markers m before n.
To calculate pn, find the minimum of pm + (200 - (an - am))^2 for all markers m where am < an and (200 - (an - am))^2 is less than your current best for pn (last part is optimization).
For the starting marker 0, a0 = 0 and p0 = 0, for marker 1, p1 = (200 - a1)^2. With that starting information you can calculate p2, then p3 etc. up to pn.
edit: Switched to Java code, using the example from OP's comment. Note that this does not have the optimization check described in second paragraph.
public static void printPath(int path[], int i) {
    if (i == 0) return;
    printPath(path, path[i]);
    System.out.print(i + " ");
}

public static void main(String args[]) {
    int hotelList[] = {0, 200, 400, 600, 601};
    int penalties[] = {0, (int) Math.pow(200 - hotelList[1], 2), -1, -1, -1};
    int path[] = {0, 0, -1, -1, -1};
    for (int i = 2; i <= hotelList.length - 1; i++) {
        for (int j = 0; j < i; j++) {
            int tempPen = (int) (penalties[j] + Math.pow(200 - (hotelList[i] - hotelList[j]), 2));
            if (penalties[i] == -1 || tempPen < penalties[i]) {
                penalties[i] = tempPen;
                path[i] = j;
            }
        }
    }
    for (int i = 1; i < hotelList.length; i++) {
        System.out.print("Hotel: " + hotelList[i] + ", penalty: " + penalties[i] + ", path: ");
        printPath(path, i);
        System.out.println();
    }
}
Output is:
Hotel: 200, penalty: 0, path: 1
Hotel: 400, penalty: 0, path: 1 2
Hotel: 600, penalty: 0, path: 1 2 3
Hotel: 601, penalty: 1, path: 1 2 4
It looks like you can solve this problem with dynamic programming. The subproblem is the following:
d(i) : The minimum penalty possible when travelling from the start to hotel i.
The recursive formula is as follows:
d(0) = 0 where 0 is the starting position.
d(i) = min_{j=0, 1, ... , i-1} ( d(j) + (200-(ai-aj))^2)
The minimum penalty for reaching hotel i is found by trying all stopping places for the previous day, adding today's penalty and taking the minimum of those.
In order to find the path, we store in a separate array (path[]) which hotel we had to travel from in order to achieve the minimum penalty for that particular hotel. By traversing the array backwards (from path[n]) we obtain the path.
The runtime is O(n^2).
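For completeness, a short Python version of this recurrence with the path reconstruction (assuming, as above, that a[0] = 0 is the starting position and a[1..n] are the hotel mile posts in increasing order):

def optimal_stops(a):
    n = len(a) - 1
    d = [0] * (n + 1)            # d[i] = minimum penalty to reach hotel i
    prev = [0] * (n + 1)         # prev[i] = previous stop on an optimal route to i
    for i in range(1, n + 1):
        d[i] = float("inf")
        for j in range(i):
            cost = d[j] + (200 - (a[i] - a[j])) ** 2
            if cost < d[i]:
                d[i], prev[i] = cost, j
    # Walk prev[] back from the final hotel to recover the route.
    route, i = [], n
    while i > 0:
        route.append(i)
        i = prev[i]
    return d[n], route[::-1]

# optimal_stops([0, 200, 400, 600, 601]) -> (1, [1, 2, 4]), matching the Java output above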
This is equivalent to finding the shortest path between two nodes in a directed acyclic graph. Dijkstra's algorithm will run in O(n^2) time.
Your intuition is better, though. Starting at the back, calculate the minimum penalty of stopping at that hotel. The first hotel's penalty is just (200-(200-x)^2)^2. Then, for each of the other hotels (in reverse order), scan forward to find the lowest-penalty hotel. A simple optimization is to stop as soon as the penalty costs start increasing, since that means you've overshot the global minimum.
I don't think you can do it as easily as sysrqb states.
On a side note, there is really no difference between starting from the start or from the end; the goal is to find the minimum number of stops either way, where each stop is as close to 200 miles as possible.
The question as stated seems to allow travelling beyond 200m per day, and the penalty is equally valid for over or under (since it is squared). This prefers an overage of miles per day rather than underage, since the penalty is equal, but the goal is closer.
However, given this layout
A ----- B----C-------D------N
0 190 210 390 590
It is not always true. It is better to go to B->D->N for a total penalty of only (200-190)^2 = 100. Going further via C->D->N gives a penalty of 100+400=500.
The answer looks like a full breadth first search with active pruning if you already have an optimal solution to reach point P, removing all solutions thus far where
sum(penalty-x) > sum(penalty-p) AND distance-to-x <= distance-to-p - 200
This would be an O(n^2) algorithm
Something like...

Quicksort all hotels by distance from start (discard any that have distance > hotelN)
Create an array/list of solutions, each containing (ListOfHotels, I, DistanceSoFar, Penalty)
For each hotel_I, in order:
    Calculate the penalty to I, starting from each prior solution
    Pruning: for each prior solution that is more than 200 distanceSoFar behind the
        current one and has Penalty > current.penalty, remove it from the list
Loop
Following is the MATLAB code for the hotel problem.
clc
clear all
% Data
% a = [0;50;180;150;50;40];
% a = [0, 200, 400, 600, 601];
a = [0,10,180,350,450,600];
% a = [0,1,2,3,201,202,203,403];
n = length(a);
opt(1) = 0;
prev(1) = 1;
for i = 2:n
    opt(i) = Inf;
    for j = 1:i-1
        if (opt(i) > (opt(j) + (200 - a(i) + a(j))^2))
            opt(i) = opt(j) + (200 - a(i) + a(j))^2;
            prev(i) = j;
        end
    end
    S(i) = opt(i);
end
k = 1;
i = n;
sol(1) = n;
while (i > 1)
    k = k + 1;
    sol(k) = prev(i);
    i = prev(i);
end
for i = k:-1:1
    stops(i) = sol(i);
end
stops
Step 1 of 2
Sub-problem:
In this scenario, "C(j)" is taken as the sub-problem: the minimum penalty incurred up to hotel "aj", for "0<=j<=n". The required value for the problem is "C(n)".
Algorithm to find the minimum total penalty:
If the trip stops at location "aj", then the previous stop is some "ai" with i less than j. Considering all the possibilities for "ai":
C(j) = min{ C(i) + (200 - (aj - ai))^2 }, 0 <= i < j.
Initialize "C(0)" to "0" and "a0" to "0" to find the remaining values.
To find the optimal route, increase the values of "j" and "i" on each iteration and use this information to backtrack from "C(n)".
Here, "C(n)" refers to the penalty at the last hotel (that is, "j" runs from "0" to "n").
Pseudocode:
// Function definition
Procedure min_tot()
    // Outer loop over the stops
    for j = 1 to n:
        // Penalty if hotel j is the first stop of the trip
        C(j) = (200 - aj)^2
        // Inner loop over all possible previous stops
        for i = 1 to j-1:
            // Keep the minimum total penalty for reaching hotel j
            C(j) = min( C(j), C(i) + (200 - (aj - ai))^2 )
    // Return the total penalty of the last hotel
    return C(n)
Step 2 of 2
Explanation:
The above algorithm finds the minimum total penalty from the starting point to the end point.
It uses the function "min()" to find the total penalty for each stop in the trip and keeps the minimum penalty value.
Running time of the algorithm:
The algorithm contains "n" sub-problems and each sub-problem takes "O(n)" time to resolve, since only the minimum over the previous stops needs to be computed.
The backtracking process takes "O(n)" time.
The total running time of the algorithm is n x n = n^2 = O(n^2).
Therefore, this algorithm takes "O(n^2)" time in total to solve the whole problem.
I have come across this problem recently and wanted to share my solution written in Javascript.
Not dissimilar to most of the above solutions, I have used a dynamic programming approach. To calculate penalties[i], we need to search for the stopping place for the previous day such that the penalty is minimal:
penalties(i) = min_{j=0, 1, ..., i-1} ( penalties(j) + (200 - (hotelList[i] - hotelList[j]))^2 )
The solution does not assume that the first penalty is Math.pow(200 - hotelList[1], 2). We don't know whether or not it is optimal to stop at the first hotel, so this assumption should not be made.
In order to find the optimal path and store all the stops along the way, the helper array path is used. Finally, the array is traversed backwards to compute the finalPath.
function calculateOptimalRoute(hotelList) {
    const path = [];
    const penalties = [];
    for (let i = 0; i < hotelList.length; i++) {
        penalties[i] = Math.pow(200 - hotelList[i], 2)
        path[i] = 0
        for (let j = 0; j < i; j++) {
            const temp = penalties[j] + Math.pow((200 - (hotelList[i] - hotelList[j])), 2)
            if (temp < penalties[i]) {
                penalties[i] = temp;
                path[i] = (j + 1);
            }
        }
    }
    const finalPath = [];
    let index = path.length - 1
    while (index >= 0) {
        finalPath.unshift(index + 1);
        index = path[index] - 1;
    }
    console.log('min penalty is ', penalties[hotelList.length - 1])
    console.log('final path is ', finalPath)
    return finalPath;
}
// calculateOptimalRoute([20, 40, 60, 940, 1500])
// Outputs [3, 4, 5]
// calculateOptimalRoute([190, 420, 550, 660, 670])
// Outputs [1, 2, 5]
// calculateOptimalRoute([200, 400, 600, 601])
// Outputs [1, 2, 4]
// calculateOptimalRoute([])
// Outputs []
To answer your question concisely: a polynomial-time algorithm is usually considered "efficient" for this kind of problem, so if you have an O(n^2) algorithm, that's "efficient".
I think the simplest method, given N total miles and 200 miles per day, would be to divide N by 200 to get X, the number of days you will travel. Round that to the nearest whole number of days X', then divide N by X' to get Y, the optimal number of miles to travel in a day. This is effectively a constant-time operation. If there were a hotel every Y miles, stopping at those hotels would produce the lowest possible score, by minimizing the effect of squaring each day's penalty. For instance, if the total trip is 605 miles, the penalty for travelling 201, 202, and 202 miles over three days is 1 + 4 + 4 = 9, far less than the 0 + 0 + 25 = 25 (200 + 200 + 205) you would get by minimizing each individual day's travel penalty as you went.
Now, you can traverse the list of hotels. The fastest method would be to simply pick the hotel that is the closest to each multiple of Y miles. It's linear-time and will produce a "good" result. However, I do not think this will produce the "best" result in all cases.
The more complex but foolproof method is to get the two closest hotels to each multiple of Y; the one immediately before and the one immediately after. This produces an array of X' pairs, which can be traversed in all possible permutations in 2^X' time. You can shorten this by applying Dijkstra to a map of these pairs, which will determine the least costly path for each day's travel, and will execute in roughly (2X')^2 time. This will probably be the most efficient algorithm that is guaranteed to produce the optimal result.
As @rmmh mentioned, you are finding the minimum-penalty path, where the "distance" here is the penalty (200 - x)^2.
So you will try to find a stopping plan by minimizing the total penalty.
Let's say D(ai) gives the distance of ai from the starting point.
P(i) = min { P(j) + (200 - (D(ai) - D(aj)))^2 } where 0 <= j < i
From a casual analysis it looks to be an O(n^2) algorithm: the total work is 1 + 2 + 3 + ... + n = O(n^2).
As a proof of concept, here is my JavaScript solution in Dynamic Programming without nested loops.
We start at zero miles.
We find the next stop by keeping the penalty as low as we can by comparing the penalty of a current hotel in the loop to the previous hotel's penalty.
Once we have our current minimum, we have found our stop for the day. We assign this point as our next starting point.
Optionally, we could keep the total of the penalties:
let hotels = [40, 80, 90, 200, 250, 450, 680, 710, 720, 950, 1000, 1080, 1200, 1480]
function findOptimalPath(arr) {
    let start = 0
    let stops = []
    for (let i = 0; i < arr.length; i++) {
        if (Math.pow((start + 200) - arr[i-1], 2) < Math.pow((start + 200) - arr[i], 2)) {
            stops.push(arr[i-1])
            start = arr[i-1]
        }
    }
    console.log(stops)
}
findOptimalPath(hotels)
Here is my Python solution using Dynamic Programming:
distance = [150,180,250,340]
def hotelStop(distance):
    n = len(distance)
    DP = [0 for _ in distance]
    for i in range(n-2, -1, -1):
        min_penalty = float("inf")
        for j in range(i+1, n):
            # going from hotel i to hotel j on the first day
            x = distance[j] - distance[i]
            penalty = (200 - x)**2
            total_penalty = penalty + DP[j]
            min_penalty = min(min_penalty, total_penalty)
        DP[i] = min_penalty
    return DP[0]
hotelStop(distance)