Why does this equality hold in the Floyd–Warshall algorithm?

https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
I'm studying the Floyd-Warshall algorithm from the Wikipedia article above.
In the example section, in one of the distance matrices, I want to know why the entry for (i, j, k) equals the entry for (i, j, k-1) when i = k or j = k, and what the role of k is in that case.

If i=k or j=k, then k is not useful as an intermediate point (it's already usable as an endpoint). So adding k to the set of allowable intermediate points doesn't give any useful new abilities, and the cost is the same.
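To see this directly in the recurrence (a sketch using the shortestPath(i, j, k) notation of the Wikipedia article, and assuming shortestPath(k, k, k-1) = 0, which holds when there are no negative cycles):

shortestPath(i, j, k) = min( shortestPath(i, j, k-1),
                             shortestPath(i, k, k-1) + shortestPath(k, j, k-1) )

If j = k, the second argument of min becomes shortestPath(i, k, k-1) + shortestPath(k, k, k-1) = shortestPath(i, j, k-1), which is exactly the first argument; the case i = k is symmetric. So shortestPath(i, j, k) = shortestPath(i, j, k-1) whenever i = k or j = k.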


Minimal non-contiguous sequence of exactly k elements

The problem I'm having can be reduced to:
Given an array of N positive numbers, find the non-contiguous sequence of exactly K elements with the minimal sum.
Ok-ish: report the sum only. Bonus: the picked elements can be identified (at least one set of indices, if many can realize the same sum).
(in layman terms: pick any K non-neighbouring elements from N values so that their sum is minimal)
Of course 2*K <= N+1 (otherwise no solution is possible). The problem is insensitive to the positive/negative restriction: just shift the array values by MIN = min(A...), then add back K*MIN to the answer.
What I got so far (the naive approach):
select the K+2 indices of the values closest to the minimum. I'm not sure about this: for K=2 this seems to be enough to cover all the particular cases, but I don't know whether it is necessary/sufficient for K>2**
brute-force the minimal sum over the values at the indices from the previous step, respecting the non-contiguity criterion. If I'm right and K+2 is enough, I can live with brute-forcing a (K+1)*(K+2) solution space, but as I said, I'm not sure K+2 is enough for K>2 (if in fact 2*K points are necessary, then brute-forcing goes out the window: the binomial coefficient C(2*K, K) grows prohibitively fast).
Any clever idea of how this can be done with minimal time/space complexity?
** for K=2, a non-trivial example where the 4 values closest to the absolute minimum are necessary to build the minimal sum: [4,1,0,1,4,3,4]. One cannot use the 0 value for building the minimal sum, as it would break the non-contiguity criterion.
PS - if you feel like showing code snippets, C/C++ and/or Java will be appreciated, but any language with decent syntax or pseudo-code will do (I reckon "decent syntax" excludes Perl, doesn't it?)
Let's assume input numbers are stored in array a[N]
The generic approach is DP: f(n, k) = min(f(n-1, k), f(n-2, k-1) + a[n])
It takes O(N*K) time and there are two options for space:
O(N*K) space for a lazy backtracking recursive solution,
O(K) space for a forward iteration (a C++ sketch of this variant follows below).
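A minimal C++ sketch of the O(K)-space forward variant (sum only; the function name and the INF sentinel are my own), directly implementing f(n, k) = min(f(n-1, k), f(n-2, k-1) + a[n]):

#include <vector>
#include <algorithm>
#include <climits>

// Minimal sum of exactly K pairwise non-adjacent elements of a[0..N-1].
// Rolling arrays keep only the rows for positions n-1 and n-2, so space is O(K).
long long minNonContiguousSum(const std::vector<long long>& a, int K) {
    const long long INF = LLONG_MAX / 4;          // "no valid selection" sentinel
    int N = (int)a.size();
    std::vector<long long> prev2(K + 1, INF);     // f(n-2, *)
    std::vector<long long> prev1(K + 1, INF);     // f(n-1, *)
    std::vector<long long> cur(K + 1, INF);
    prev2[0] = prev1[0] = 0;
    for (int n = 0; n < N; ++n) {
        cur[0] = 0;
        for (int k = 1; k <= K; ++k) {
            long long skip = prev1[k];                                    // do not take a[n]
            long long take = (prev2[k - 1] >= INF) ? INF
                                                   : prev2[k - 1] + a[n]; // take a[n], forcing a[n-1] out
            cur[k] = std::min(skip, take);
        }
        prev2 = prev1;
        prev1 = cur;
    }
    return prev1[K];  // returns INF when no valid selection exists (2*K > N+1)
}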
In the special case of big K there is another possibility:
use recursive back-tracking
instead of a helper array of N*K size, use map(n, map(k, pair(answer, list(answer indexes))))
save the answer and the list of indexes for this answer
instantly return MAX_INT if k > N/2
This way you'll have lower time than O(N*K) for K ≈ N/2, something like O(N*log(N)). It increases up to O(N*log(N)*K*log(K)) for small K, so the decision between the general approach and the special-case algorithm is important.
There should be a dynamic programming approach to this.
Work along the array from left to right. At each point i, for each value of j from 1..k, find the best answer for picking j non-contiguous elements from 1..i. You can work out the answers at i from the answers at i-1, the answers at i-2, and the value of array[i]. The answer you want is the answer at n with j = k, for an array of length n. After you have done this you can work out which elements were picked by back-tracking along the array: at each point, check whether the best decision involved selecting the array element at that point, i.e. whether it came from the answer at (i-1, j) or from the answer at (i-2, j-1) plus array[i].
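A hedged C++ sketch of this table-based variant (names are mine): dp[i][j] is the minimal sum using exactly j non-adjacent picks from the first i elements, and back-tracking over the full O(N*K) table recovers one optimal set of indices.

#include <vector>
#include <algorithm>
#include <climits>
#include <utility>

// Returns {minimal sum, one optimal set of picked indices}, or {INF, {}} if impossible.
std::pair<long long, std::vector<int>>
minNonContiguousSumWithIndices(const std::vector<long long>& a, int K) {
    const long long INF = LLONG_MAX / 4;
    int N = (int)a.size();
    std::vector<std::vector<long long>> dp(N + 1, std::vector<long long>(K + 1, INF));
    for (int i = 0; i <= N; ++i) dp[i][0] = 0;    // zero picks cost nothing
    for (int i = 1; i <= N; ++i)
        for (int j = 1; j <= K; ++j) {
            dp[i][j] = dp[i - 1][j];                                   // skip a[i-1]
            long long base = (i >= 2) ? dp[i - 2][j - 1] : (j == 1 ? 0 : INF);
            if (base < INF)                                            // take a[i-1], skipping a[i-2]
                dp[i][j] = std::min(dp[i][j], base + a[i - 1]);
        }
    std::vector<int> picked;
    if (dp[N][K] >= INF) return {INF, picked};    // no valid selection
    for (int i = N, j = K; j > 0; ) {
        if (dp[i][j] == dp[i - 1][j]) { --i; continue; }  // a[i-1] was not taken
        picked.push_back(i - 1);                          // a[i-1] was taken
        i -= 2; --j;
    }
    std::reverse(picked.begin(), picked.end());
    return {dp[N][K], picked};
}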

Can I get an explanation for how optimal substructure is used to find the longest increasing subsequence in this powerpoint slide?

I'm learning about finding optimal solutions in my algorithms class at the moment and one of the topics is about finding optimal substructures in problems.
My understanding of it so far is that we see if we can find an optimal solution for a problem of size n. If we can, then we increase the size of the problem by 1 so it's n+1. If the optimal solution for n+1 includes the entire optimal solution of n plus the new solution introduced by the +1, then we have optimal substructure.
I was given an example of using optimal substructure to find the longest increasing subsequence given a set of numbers. This is shown on the powerpoint slide here:
Can someone explain to me the notation on the bottom of the slide and give me a proof that this problem can be solved using optimal substructure?
Lower(i) means the set of positions j in S to the left of the current index i such that S[j] is less than S[i]. In other words, S[j] and S[i] are in increasing order, even though there may be other elements in between them.
The expression with the brace on the left explains how we construct the answer:
The first line says that if the set Lower(i) is empty (i.e. no number to the left of S[i] is smaller than it), then the answer is 1. This is the base case: a single number is treated as a one-element subsequence.
The second line says that if Lower(i) is not empty, then we take the maximum answer over its members and add 1. In other words, we look to the left of S[i] for the smaller number S[j] that ends the longest ascending subsequence among all positions in Lower(i), and append S[i] to that subsequence.
All of this is an incredibly long way of writing these six lines of pseudocode:
L[0] = 1
for i = 1..N-1
    L[i] = 1
    for j = 0..i-1
        if S[i] > S[j]              // is j a member of Lower(i)?
            L[i] = MAX(L[i], L[j] + 1)
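For completeness, a minimal runnable C++ translation of the pseudocode above (a sketch; names are mine), which also takes the final maximum over all possible end positions:

#include <vector>
#include <algorithm>

// Length of the longest increasing subsequence of S in O(n^2) time.
// L[i] = length of the longest increasing subsequence ending exactly at index i.
int longestIncreasingSubsequence(const std::vector<int>& S) {
    int n = (int)S.size();
    if (n == 0) return 0;
    std::vector<int> L(n, 1);
    for (int i = 1; i < n; ++i)
        for (int j = 0; j < i; ++j)
            if (S[i] > S[j])                          // j is a member of Lower(i)
                L[i] = std::max(L[i], L[j] + 1);
    return *std::max_element(L.begin(), L.end());     // the LIS may end at any index
}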
Just to add to dasblinkenlight's answer:
This is an iterative approach based on optimal substructure because at any given iteration i, we will figure out the length of the longest increasing subsequence ending at index i. Hence by the time we reach this iteration all corresponding LIS are already established for any index j < i. Using this information we find the answer for index i, i+1 and so on. Now the original question is asking for the LIS, but it has to have an ending index, so it is enough to take the maximum LIS among all indexes.
Such an approach is strongly related to mathematical induction and to the broadly applicable programming/algorithm technique of Dynamic Programming.
P.S.
There exists another, slightly more complicated approach, which computes the LIS more efficiently using binary search. The algorithm from the slides is O(n^2), while an O(n*log(n)) algorithm exists as well.
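For reference, a sketch of that O(n*log(n)) approach in C++ (the standard patience-sorting-style method; it is not the one from the slides):

#include <vector>
#include <algorithm>

// tails[k] = smallest possible tail value of an increasing subsequence of length k+1
// seen so far; each element either extends the longest subsequence or lowers a tail.
int lisLength(const std::vector<int>& S) {
    std::vector<int> tails;
    for (int x : S) {
        auto it = std::lower_bound(tails.begin(), tails.end(), x);
        if (it == tails.end()) tails.push_back(x);    // x extends the longest subsequence
        else *it = x;                                 // x improves an existing tail
    }
    return (int)tails.size();
}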

Algorithm for finding all combinations of (x,y,z,j) that satisfy x+y = z+j, where x,y,z,j are integers between -N...N inclusive

I'm working on a problem that requires an array (dA[j], j=-N..N) to be calculated from the values of another array (A[i], i=-N..N) based on a conservation-of-momentum rule (x+y = z+j). This means that for a given index j, for all the valid combinations of (x,y,z), I calculate A[x]*A[y]*A[z]; dA[j] is equal to the sum of these products.
I'm currently precomputing the valid indices for each dA[j] by looping x=-N...+N,y=-N...+N and calculating z=x+y-j and storing the indices if abs(z) <= N.
Is there a more efficient method of computing this?
The reason I ask is that in the future I'd like to also be able to efficiently find, for each dA[j], all the terms that contain a specific A[i]. Essentially, to be able to compute the Jacobian of dA[j] with respect to A[i].
Update
For the sake of completeness, I figured out a way of doing this without any if statements: if you parametrize the equation x+y = z+j, given that j is a constant, you get the equation for a plane. The constraint that x, y, z need to be integers between -N..N creates boundaries on this plane. The points that define this boundary are functions of N and j. So all you have to do is loop over your parametrized variables (s, t) within these boundaries and you'll generate all the valid points by using the vectors defined by the plane (s*u + t*v + j*[0,0,1]).
For example, if you choose u=[1,0,-1] and v=[0,1,1] all the valid solutions for every value of j are bounded by a 6 sided polygon with points (-N,-N),(-N,-j),(j,N),(N,N),(N,-j), and (j,-N).
So for each j, you go through all (2N)^2 combinations to find the correct x's and y's such that x+y = z+j; the running time of your application (per j) is O(N^2). I don't think your current idea is bad (and after playing with some pseudocode for this, I couldn't improve it significantly). I would also like to note that once you've picked a j and a z, there are at most 2N choices for x and y. So overall, the best algorithm would still complete in O(N^2) per j.
But consider the following improvement by a factor of 2 (for the overall program, not per j): if z+j = x+y, then (-z)+(-j) = (-x)+(-y) also.
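For concreteness, a hedged C++ sketch of the O(N^2)-per-j enumeration discussed above (names and the flat storage convention, logical index i stored at position i+N, are my own assumptions):

#include <vector>

// dA[j] = sum over all x, y, z in [-N, N] with x + y = z + j of A[x]*A[y]*A[z].
// A has 2N+1 entries; logical index i lives at physical index i + N.
std::vector<double> computeDA(const std::vector<double>& A, int N) {
    std::vector<double> dA(2 * N + 1, 0.0);
    for (int j = -N; j <= N; ++j) {
        double sum = 0.0;
        for (int x = -N; x <= N; ++x)
            for (int y = -N; y <= N; ++y) {
                int z = x + y - j;                    // forced by x + y = z + j
                if (z >= -N && z <= N)
                    sum += A[x + N] * A[y + N] * A[z + N];
            }
        dA[j + N] = sum;
    }
    return dA;
}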

Skipping more than one node in Floyd's cycle finding Algorithm

Today I was reading about Floyd's algorithm for detecting a loop in a linked list.
I was just wondering: wouldn't it be better if we skipped more than one node at a time (say, 2) for faster loop detection?
For example:
fastptr=fastptr->next->next->next.
Note that the extra boundary checks needed when advancing fastptr further per step are assumed to be taken care of.
Your suggestion is still correct, but it doesn't change the complexity of the algorithm. If you take a look at the tortoise and hare algorithm on the wiki:
The key insight in the algorithm is that, for any integers i ≥ μ and k ≥ 0, x_i = x_{i+kλ}, where λ is the length of the loop to be found and μ is the start position of the loop. In particular, whenever i = kλ ≥ μ, it follows that x_i = x_{2i}.
In the last equation, you could also write x_i = x_{3i}, or use any other coefficient; the key insight is finding such an i. It is not important how many nodes you jump per step to find it, and the order of the algorithm depends on the location of i.
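A minimal C++ sketch of the idea from the question (the node type and names are mine): the fast pointer advances three nodes per step instead of two, and a cycle is still detected, just not asymptotically faster.

struct Node {
    int value;
    Node* next;
};

// Tortoise-and-hare where the hare takes three steps per iteration.
// If a cycle exists the two pointers still meet; otherwise fast falls off the end.
bool hasCycle(Node* head) {
    Node* slow = head;
    Node* fast = head;
    while (fast && fast->next && fast->next->next && fast->next->next->next) {
        slow = slow->next;
        fast = fast->next->next->next;   // fastptr = fastptr->next->next->next
        if (slow == fast) return true;   // they met somewhere inside the cycle
    }
    return false;                        // reached the end of the list: no cycle
}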

Point covering problem

I recently had this problem on a test: given a set of points m (all on the x-axis) and a set n of lines with endpoints [l, r] (again on the x-axis), find the minimum subset of n such that all points are covered by a line. Prove that your solution always finds the minimum subset.
The algorithm I wrote for it was something to the effect of:
(say lines are stored as arrays with the left endpoint in position 0 and the right in position 1)
algorithm coverPoints(set[] m, set[][] n):
    chosenLines = []
    while m is not empty:
        minX = min(m)
        bestLine = n[0]
        for i = 1 to length of n:
            if n[i][0] <= minX and n[i][1] > bestLine[1] then
                bestLine = n[i]
        add bestLine to chosenLines
        for i = 0 to length of m:
            if m[i] <= bestLine[1] then delete m[i] from m
    return chosenLines
I'm just not sure if this always finds the minimum solution. It's a simple greedy algorithm so my gut tells me it won't, but one of my friends who is much better than me at this says that for this problem a greedy algorithm like this always finds the minimal solution. For proving mine always finds the minimal solution I did a very hand wavy proof by contradiction where I made an assumption that probably isn't true at all. I forget exactly what I did.
If this isn't a minimal solution, is there a way to do it in less than something like O(n!) time?
Thanks
Your greedy algorithm IS correct.
We can prove this by showing that ANY other covering can only be improved by replacing it with the cover produced by your algorithm.
Let C be a valid covering for a given input (not necessarily an optimal one), and let S be the covering produced by your algorithm. Now let's inspect the points p1, p2, ..., pk that are the minimum points you deal with at each iteration step. The covering C must cover them all as well. Observe that no segment in C can cover two of these points: any segment covering one of them was a candidate at that iteration, and your algorithm picked a segment reaching at least as far to the right, so the later point would already have been removed from m. Therefore |C| >= k. And what is the cost (number of segments) of your algorithm? |S| = k.
That completes the proof.
Two notes:
1) Implementation: initializing bestLine with n[0] is incorrect, since the loop may be unable to improve on it and n[0] does not necessarily cover minX (see the sketch after these notes).
2) Actually, this problem is a simplified version of the Set Cover problem. While the general problem is NP-complete, this variation turns out to be polynomial.
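For illustration, a hedged C++ sketch of the greedy with note 1 addressed: the best segment for the current leftmost uncovered point is chosen only among segments that actually cover it (types and names are mine).

#include <vector>
#include <algorithm>
#include <optional>

struct Segment { double l, r; };

// Greedy point cover: repeatedly take the leftmost uncovered point and, among
// the segments covering it, pick the one that reaches furthest to the right.
std::optional<std::vector<Segment>> coverPoints(std::vector<double> points,
                                                const std::vector<Segment>& segments) {
    std::sort(points.begin(), points.end());
    std::vector<Segment> chosen;
    size_t i = 0;
    while (i < points.size()) {
        double x = points[i];
        const Segment* best = nullptr;
        for (const Segment& s : segments)
            if (s.l <= x && x <= s.r && (best == nullptr || s.r > best->r))
                best = &s;                            // covers x and extends furthest right
        if (best == nullptr) return std::nullopt;     // x cannot be covered at all
        chosen.push_back(*best);
        while (i < points.size() && points[i] <= best->r) ++i;  // drop all covered points
    }
    return chosen;
}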
Hint: first try proving your algorithm works for sets of size 0, 1, 2... and see if you can generalise this to create a proof by induction.

Resources