The details are a bit cringe, fair warning lol:
I want to set up meters on the floor of my building to catch someone; assume my floor is a number line from 0 to length L. The specific type of meter I am designing has a detection radius of 4.7 meters in the -x and +x directions (a detection diameter of 9.4 meters). I want to set them up in such a way that if the person I am trying to find sets foot anywhere on the floor, I will know. However, I can't just set up a meter anywhere (it may annoy other residents); therefore, there are only n valid locations where I can set up a meter. Additionally, these meters are expensive and time-consuming to make, so I would like to use as few as possible.
For simplicity, you can assume the meter has 0 width, and that each valid location is just a point on the aforementioned number line. What is a greedy algorithm that places as few meters as possible while detecting the entire hallway of length L, or, if detecting the entire hallway is not possible, outputs false for the set of n locations I have (and, if it isn't able to detect the whole hallway, still uses as few meters as possible while attempting to do so)?
Edit: some clarification on being able to detect the entire hallway or not
Given:
L (hallway length)
a list of N valid positions to place a meter (p_0 ... p_(N-1)), each with radius 4.7
You can determine in O(N) (assuming the positions are sorted) either a valid and minimal ("good") covering of the whole hallway, or a proof that no such covering exists given the constraints, as follows (pseudo-code):
// total = total length;
// start = current starting position, initially 0
// possible = list of possible meter positions
// placed = list of (optimal) meter placements, initially empty
boolean solve(float total, float start, List<Float> possible, List<Float> placed):
    if (total - start <= 0):
        return true; // problem solved with no additional meters - woo!
    else:
        Float next = extractFurthestWithinRange(start, possible, 4.7);
        if (next == null):
            return false; // no way to cover end of hall: report failure
        else:
            placed.add(next); // placement decided
            return solve(total, next + 4.7, possible, placed);
Where extractFurthestWithinRange(float start, List<Float> candidates, float range) returns null if there are no candidates within range of start, or returns the last position p in candidates such that p <= start + range -- and also removes p, as well as all candidates c such that c <= p.
The key here is that, by always choosing to place a meter at the next position that a) leaves no gaps and b) is furthest from the previously-placed position, we are simultaneously creating a valid covering (= no gaps) and an optimal covering (= no valid covering could have used fewer meters, because our gaps are already as wide as possible). At each iteration, we either completely solve the problem, or take a greedy bite to reduce it to a (guaranteed) smaller problem.
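For concreteness, here is a minimal runnable Python sketch of this greedy (my own translation, not from the original answer); it assumes the candidate positions come pre-sorted and walks an index instead of removing elements from the list:

RADIUS = 4.7

def solve(L, positions):
    """Greedy covering: returns the list of chosen positions,
    or None if the hallway [0, L] cannot be fully covered."""
    positions = sorted(positions)
    placed = []
    covered = 0.0  # everything in [0, covered] is already detected
    i = 0
    while covered < L:
        best = None
        # furthest candidate that still overlaps the covered prefix
        while i < len(positions) and positions[i] - RADIUS <= covered:
            best = positions[i]
            i += 1
        if best is None:
            return None  # gap that no remaining candidate can close
        placed.append(best)
        covered = best + RADIUS
    return placed

print(solve(20, [3, 9, 12, 18]))  # -> [3, 12, 18]
print(solve(20, [3, 18]))         # -> None: gap between 7.7 and 13.3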
Note that there can be other optimal coverings with different meter positions, but they will use the exact same number of meters as those returned from this pseudo-code. For example, if you adapt the code to start from the end of the hallway instead of from the start, the covering would still be good, but the gaps could be rearranged. Indeed, if you need the lexicographically minimal optimal covering, you should use the adapted algorithm that places meters starting from the end:
// remaining = length (starts at hallway length)
// possible = positions to place meters at, ordered starting with those closest to the end of the hallway
// placed = positions where meters have been placed
boolean solve(float remaining, List<Float> possible, Queue<Float> placed):
    if (remaining <= 0):
        return true; // problem solved with no additional meters - woo!
    else:
        // extracts points, returning the furthest-from-the-end p such that p >= remaining - range
        Float next = extractFurthestWithinRange2(remaining, possible, 4.7);
        if (next == null):
            return false; // no way to cover start of hall: report failure
        else:
            placed.add(next); // placement decided
            return solve(next - 4.7, possible, placed);
To prove that your solution is optimal if it is found, you merely have to prove that it finds the lexicographically last optimal solution.
And you do that by induction on the size of the lexicographically last optimal solution. The case of a zero-length floor and no monitors is trivial. Otherwise you demonstrate that you found the first element of the lexicographically last solution, and covering the rest of the line with the remaining elements is your induction step.
Technical note: for this to work, you have to be allowed to place monitoring stations outside of the line.
I have this question:
An airline company has N different planes and T pilots. Every pilot has a list of planes he can fly. Every flight needs 2 pilots. The company wants to have as many flights simultaneously as possible. Find an algorithm that determines whether you can have all the flights simultaneously.
The solution I thought of is finding a max flow on this graph (source → pilots → planes → sink):
I am just not sure what the capacity should be. Can you help me with that?
Great idea to find the max flow.
For each edge from source --> pilot, assign a capacity of 1. Each pilot can only fly one plane at a time since they are running simultaneously.
For each edge from pilot --> plane, assign a capacity of 1. If this edge is filled with flow of 1, it represents that the given pilot is flying that plane.
For each edge from plane --> sink, assign a capacity of 2. This allows each plane to be supplied by at most 2 pilots; the flow-value check below ensures that every plane gets exactly 2.
Now, find a maximum flow. If the resulting maximum flow is two times the number of planes, then it's possible to satisfy the constraints. In this case, the edges between planes and pilots that are at capacity represent the matching.
The other answer is fine but you don't really need to involve flow as this can be reduced just as well to ordinary maximum bipartite matching:
For each plane, add another auxiliary plane to the plane partition with edges to the same pilots as the first plane.
Find a maximum bipartite matching M.
The answer is now true if and only if |M| = 2N.
If you like, you can think of this as saying that each plane needs a pilot and a co-pilot, and the two vertices associated to each plane now represents those two roles.
The reduction to maximum bipartite matching is linear time, so using e.g. the Hopcroft–Karp algorithm to find the matching, you can solve the problem in O(|E| √|V|) where E is the number of edges between the partitions, and V = T + N.
In practice, the improvement over using a maximum flow based approach should depend on the quality of your implementations as well as the particular choice of representation of the graph, but chances are that you're better off this way.
Implementation example
To illustrate the last point, let's give an idea of how the two reductions could look in practice. One representation of a graph that's often useful due to its built-in memory locality is that of a CSR matrix, so let us assume that the input is such a matrix, whose rows correspond to the planes, and whose columns correspond to the pilots.
We will use the Python library SciPy which comes with algorithms for both maximum bipartite matching and maximum flow, and which works with CSR matrix representations for graphs under the hood.
In the algorithm given above, we will then need to construct the biadjacency matrix of the graph with the additional vertices added. This is nothing but the result of stacking the input matrix on top of itself, which is straightforward to phrase in terms of the CSR data structures: Following Wikipedia's notation, COL_INDEX should just be repeated, and ROW_INDEX should be replaced with ROW_INDEX concatenated with a copy of ROW_INDEX in which all elements are increased by the final element of ROW_INDEX.
In SciPy, a complete implementation which answers yes or no to the problem in OP would look as follows:
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def reduce_to_max_matching(a):
    i, j = a.shape
    # stack the biadjacency matrix on top of itself: every plane gets
    # an auxiliary copy with edges to the same pilots
    data = np.ones(a.nnz * 2, dtype=bool)
    indices = np.concatenate([a.indices, a.indices])
    indptr = np.concatenate([a.indptr, a.indptr[1:] + a.indptr[-1]])
    graph = csr_matrix((data, indices, indptr), shape=(2*i, j))
    return (maximum_bipartite_matching(graph) != -1).sum() == 2 * i
In the maximum flow approach given by @HeatherGuarnera's answer, we will need to set up the full adjacency matrix of the new graph. This is also relatively straightforward; the input matrix will appear as a certain submatrix of the adjacency matrix, and we need to add a row for the source vertex and a column for the target. The example section of the documentation for SciPy's max flow solver actually contains an illustration of what this looks like in practice. Adopting this, a complete solution looks as follows:
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

def reduce_to_max_flow(a):
    i, j = a.shape
    n = a.nnz
    # vertex 0 is the source, 1..i are planes, i+1..i+j are pilots,
    # and i+j+1 is the sink
    data = np.concatenate([2*np.ones(i, dtype=int), np.ones(n + j, dtype=int)])
    indices = np.concatenate([np.arange(1, i + 1),
                              a.indices + i + 1,
                              np.repeat(i + j + 1, j)])
    indptr = np.concatenate([[0],
                             a.indptr + i,
                             np.arange(n + i + 1, n + i + j + 1),
                             [n + i + j]])
    graph = csr_matrix((data, indices, indptr), shape=(2+i+j, 2+i+j))
    flow = maximum_flow(graph, 0, graph.shape[0] - 1)
    return flow.flow_value == 2 * i
Let us compare the timings of the two approaches on a single example consisting of 40 planes and 100 pilots, on a graph whose edge density is 0.1:
from scipy.sparse import random
inp = random(40, 100, density=.1, format='csr', dtype=bool)
%timeit reduce_to_max_matching(inp) # 191 µs ± 3.57 µs per loop
%timeit reduce_to_max_flow(inp) # 1.29 ms ± 20.1 µs per loop
The matching-based approach is faster, but not by a crazy amount. On larger problems, we'll start to see the advantages of using matching instead; with 400 planes and 1000 pilots:
inp = random(400, 1000, density=.1, format='csr', dtype=bool)
%timeit reduce_to_max_matching(inp) # 473 µs ± 5.52 µs per loop
%timeit reduce_to_max_flow(inp) # 68.9 ms ± 555 µs per loop
Again, this exact comparison relies on the use of specific predefined solvers from SciPy and how those are implemented, but if nothing else, this hints that simpler is better.
So here's the problem: given a set of intervals [a,b), where a and b are integers with 0 <= a < b <= 10^5, what's the total length covered by the intervals (overlapping parts counted only once)? We want to support two operations, add(a,b) and remove(a,b) ([a,b) must exist in the set in the case of remove), which add or remove the interval [a,b) and then return the new total length. Can you do them in O(log n), where n is 10^5? E.g.: if we have the intervals [1,3) and [2,4), then the total length is 3.
My approach to it is using a segment tree, whose nodes are
class Segnode:
    def __init__(self, start, end):
        self.start, self.end = start, end
        self.left, self.right = None, None
        self.layer, self.cover = 0, 0
        self.lazy = 0
where layer records how many intervals entirely cover this node (e.g.: if we have the set [0,3), [1,3), [1,2), then Segnode(1,3) will have layer 2, because only [0,3) and [1,3) totally cover it); cover records the length of the part of this node that is covered (e.g.: Segnode(1,3) in the last example will have cover 2).
But the thing is, I couldn't come up with a correct, LAZY way to update the tree (if we don't use lazy propagation then the problem is trivial, but the time complexity could reach O(n) per operation, where n could be 10^5). Can someone help me with this part? Is there a correct, lazy approach to this?
Thank you very much.
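For what it's worth, a standard way to get O(log n) per operation in exactly this setting (my own sketch, not an answer from the thread) avoids explicit lazy pushes entirely: each node keeps a count (your layer) of intervals fully covering it, and cover is recomputed from the children on the way back up. Because remove is only ever called for intervals that are present, counts never go negative, and cover at the root is always the total covered length:

M = 10**5

# layer[v] = number of intervals fully covering node v's segment
# cover[v] = covered length within node v's segment
layer = [0] * (4 * M)
cover = [0] * (4 * M)

def update(a, b, val, v=1, lo=0, hi=M):
    """Add (val=+1) or remove (val=-1) the interval [a, b)."""
    if b <= lo or hi <= a:
        return
    if a <= lo and hi <= b:
        layer[v] += val
    else:
        mid = (lo + hi) // 2
        update(a, b, val, 2*v, lo, mid)
        update(a, b, val, 2*v + 1, mid, hi)
    # recompute cover on the way back up
    if layer[v] > 0:
        cover[v] = hi - lo
    elif hi - lo == 1:
        cover[v] = 0
    else:
        cover[v] = cover[2*v] + cover[2*v + 1]

update(1, 3, +1); print(cover[1])  # 2
update(2, 4, +1); print(cover[1])  # 3
update(1, 3, -1); print(cover[1])  # 2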
I would like to know the different algorithms to find the biggest square in a two-dimensional map dotted with obstacles.
An example, where o would be obstacles:
...........................
....o......................
............o..............
...........................
....o......................
...............o...........
...........................
......o..............o.....
..o.......o................
The biggest square would be (if we choose the first one):
.....xxxxxxx...............
....oxxxxxxx...............
.....xxxxxxxo..............
.....xxxxxxx...............
....oxxxxxxx...............
.....xxxxxxx...o...........
.....xxxxxxx...............
......o..............o.....
..o.......o................
What would be the fastest algorithm to find it? The one with the smallest complexity?
EDIT: I know that people are interested on the algorithm explained in the accepted answer, so I made a document that explains it a bit more, you can find it here:
https://docs.google.com/document/d/19pHCD433tYsvAor0WObxa2qusAjKdx96kaf3z5I8XT8/edit?usp=sharing
Here is how to do this in the optimal amount of time, O(nm). This is built on top of @dukeling's insight that you never need to check a solution of size less than your current known best solution.
The key is to be able to build a data structure that can answer this query in O(1) time.
Is there an obstacle in the square whose top left corner is at r, c and has size k?
To solve that problem, we'll support answering a slightly harder question, also in O(1).
What is the count of items in the rectangle from r1, c1 to r2, c2?
It's easy to answer the square existence question with an answer from the rectangle count question.
To answer the rectangle count question, note that if you had pre-computed the answer for every rectangle that starts in the top left, then you could answer the general question for r1, c1 to r2, c2 by a kind of clever inclusion/exclusion tactic using only rectangles that start in the top left:
          c1   c2
     -----------------------
    |         |    |       |
    |    A    | B  |       |
    |_________|____|       | r1
    |         |    |       |
    |    C    | D  |       |
    |_________|____|       | r2
    |_____________________|
We want the count of stuff inside D, in terms of our pre-computed counts from the top left:
Count(D) = Count(A ∪ B ∪ C ∪ D) - Count(A ∪ C) - Count(A ∪ B) + Count(A)
You can pre-compute all the top left rectangles in O(nm) by doing some clever row/column partial sums, but I'll leave that to you.
Then answering the problem just involves checking possible solutions, starting with solutions that are at least as good as your known best. Your known best will only improve up to min(n, m) times in total, so the best_possible increment will happen very rarely, and almost all squares will be rejected in O(1) time.
best_possible = 0
for r in range(n):
    for c in range(m):
        while True:
            # this looks O(min(n, m)), but it's amortized O(1) since
            # best_possible rarely increases.
            if possible(r, c, best_possible + 1):
                best_possible += 1
            else:
                break
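For completeness, here is one way the prefix-sum precomputation and the possible check might look (my own sketch; the answer above deliberately leaves this part as an exercise, and the extra pre/n/m parameters are my additions):

def build_prefix(grid):
    """pre[r][c] = number of obstacles in the rectangle [0, r) x [0, c)."""
    n, m = len(grid), len(grid[0])
    pre = [[0] * (m + 1) for _ in range(n + 1)]
    for r in range(n):
        for c in range(m):
            pre[r+1][c+1] = (pre[r][c+1] + pre[r+1][c] - pre[r][c]
                             + (grid[r][c] == 'o'))
    return pre

def possible(pre, n, m, r, c, k):
    """True if the k-by-k square with top-left corner (r, c) fits in
    the grid and contains no obstacle."""
    if r + k > n or c + k > m:
        return False
    count = pre[r+k][c+k] - pre[r][c+k] - pre[r+k][c] + pre[r][c]
    return count == 0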
One idea, making use of binary search.
The basic idea:
Start off in the top-left corner. See if a 1x1 square would work.
If it will work, increase the sides lengths of the square by 1 and repeat.
If it won't work, move right and repeat. If you've reached the right-most position, move to the next line.
The naive approach:
We can simply check every possible cell of every square at each step, but this is fairly inefficient.
The optimized approach:
When increasing the square size, we can just do a binary search over the next row and column to see if that row / column contains an obstacle at any of those positions.
When moving to the right, we can do a binary search for each next column to determine if that column contains an obstacle at any of those positions.
When moving down, we can do a similar binary search on each of the columns in the target position.
Implementation note:
To start off, we'd need to go through all the rows and columns and set up arrays containing the positions of the obstacles for each of them, which we can use for the binary searches.
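In Python terms, that setup step might look like this (my own sketch, assuming the grid is a list of strings with 'o' marking obstacles):

def build_obstacle_index(grid):
    """For each row and for each column, a sorted list of obstacle positions."""
    n, m = len(grid), len(grid[0])
    rows = [[c for c in range(m) if grid[r][c] == 'o'] for r in range(n)]
    cols = [[r for r in range(n) if grid[r][c] == 'o'] for c in range(m)]
    return rows, cols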
Running time:
We do 2 binary searches to increase the square size, and the square size is at most the size of the grid, so that is fairly small (O(min(m,n) log max(m,n))) and gets dominated by the below.
Beyond that, for each position, we do a single binary search on a column.
So, for a grid with m columns and n rows, the overall complexity is O(mn log m).
But note how little we're actually searching below when the grid is sparse.
Example:
For your example:
012345678901234567890123456
0...........................
1....o......................
2............o..............
3...........................
4....o......................
5...............o...........
6...........................
7......o..............o.....
8..o.......o................
We'd first try a 1x1 square in the top-left corner, which works.
Then a 2x2 square. For this, we do a binary search for the range [0,1] on the row 1, which can be represented simply by {4} - an array of a single position corresponding to where the obstacle is. And we also do a binary search for the range [0,1] on the column 1, which contains no obstacles, thus an empty array - {}.
Then a 3x3 square. For this, we do a binary search for [0,2] on the row 2, which contains 1 obstacle at position 12, thus {12}. And we also do a binary search for [0,2] on the column 2, which contains an obstacle at position 8, thus {8}.
Then a 4x4 square. For this, we do a binary search for [0,3] on the row 3 - {}. And for [0,3] on column 3 - {}.
Then a 5x5 square. For this, we do a binary search for [0,4] on the row 4 - {4}. And for [0,4] column 4 - {1,4}.
Here is the first obstacle we actually find. In the range [0,4], we find 4 in both the row and the column (we only really need to find one of them). So this indicates a fail.
From here we do a binary search on column 4 (again, not really necessary) for [0,4]. Then we binary-search columns 5-8 for [0,4]; no obstacles are found in any of them, so a square starting at position 5,0 is the next possible candidate.
So from there we try to increase the square size to 5x5, which works, then 6x6 and 7x7, which also work.
Then we try 8x8, which doesn't work.
And so on.
I know binary search, but how does yours work?
So we're basically doing a range search within a set of values. This is fairly easy to do: first search for the starting value of the range, then for the end value. If we get to the same point, there are no values in the range.
We don't really care what values exist in the range, just whether or not there are any.
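As an illustration of that range-emptiness check, here is a small Python sketch using the standard bisect module (my own example, not from the answer): each row/column keeps a sorted list of its obstacle positions, and a range is obstacle-free exactly when both binary searches land on the same index:

from bisect import bisect_left, bisect_right

def has_obstacle_in_range(sorted_positions, lo, hi):
    """True if any obstacle position lies in the inclusive range [lo, hi]."""
    return bisect_left(sorted_positions, lo) != bisect_right(sorted_positions, hi)

row_4 = [4]       # obstacle positions in row 4 of the example
col_4 = [1, 4]    # obstacle positions in column 4
print(has_obstacle_in_range(row_4, 0, 4))  # True: obstacle at 4
print(has_obstacle_in_range(col_4, 2, 3))  # False: nothing in [2, 3]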
So here's one rough approach.
Store the x-y positions of all the obstacles.
For each obstacle O:
    find the obstacle C that is nearest to it column-wise
    find the obstacle R-top that is nearest to it row-wise from the top
    find the obstacle R-bottom that is nearest to it row-wise from the bottom
    if (|R-top.y - R-bottom.y| != |O.x - C.x|): continue
    size of the square = |(R-top.y - R-bottom.y) * (O.x - C.x)|
    keep track of the sizes and positions to find the largest square
Complexity is roughly O(k^2) where k is the number of obstacles. You could reduce it to O(k * log k) if you use binary search.
The following SO articles are identical/similar to the problem you're trying to solve. You may want to look over those answers as well as the responses to your question.
Dynamic programming - Largest square block
dynamic programming: finding largest non-overlapping squares
Dynamic programming: Find largest diamond (rhombus)
Here's the baseline case I'd use, written in simplified Python/pseudocode.
# obstacleMap is a list of lists of MapElements, stored in row-major order
max(find_largest_rect(obstacleMap, element)
    for row in obstacleMap for element in row)

def find_largest_rect(obstacleMap, upper_left_elem):
    size = 0
    while not has_obstacles(obstacleMap, upper_left_elem, size + 1):
        size += 1
    return size

def has_obstacles(obstacleMap, upper_left_elem, size):
    # determines if there are obstacles on the outside square layer;
    # for example, if U is the upper-left element and size=3, then
    # has_obstacles checks the elements marked p:
    # .....
    # ..U.p
    # ....p
    # ..ppp
    periphery_row = obstacleMap[upper_left_elem.row + size - 1][upper_left_elem.col:upper_left_elem.col + size]
    periphery_col = [row[upper_left_elem.col + size - 1]
                     for row in obstacleMap[upper_left_elem.row:upper_left_elem.row + size]]
    return any(is_obstacle(elem) for elem in periphery_row + periphery_col)

def is_obstacle(elem):
    return elem.value == 'o'

class MapElement(object):
    def __init__(self, row, col, value):
        self.row = row
        self.col = col
        self.value = value
Here is an approach using a recurrence relation:
isSquare(R,C1,C2) = noObstacle(R,C1,R,C2) && noObstacle(R,C2,R-(C2-C1),C2) && isSquare(R-1,C1,C2-1)
where
isSquare(R,C1,C2) = there is a square whose bottom side runs from (R,C1) to (R,C2)
noObstacle(R1,C1,R2,C2) = there is no obstacle on the line segment from (R1,C1) to (R2,C2)
Find the maximum (C2-C1+1) for which isSquare(R,C1,C2) is true.
You can use dynamic programming to solve this problem in polynomial time; use a suitable data structure for searching obstacles.
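For reference, the classic dynamic program from the linked duplicate ("Dynamic programming - Largest square block") runs in O(nm): dp[r][c] holds the side of the largest obstacle-free square whose bottom-right corner is at (r, c). A minimal Python sketch (mine, using the question's example grid):

def largest_square(grid):
    """grid: list of strings, 'o' marks an obstacle.
    Returns the side length of the largest obstacle-free square."""
    n, m = len(grid), len(grid[0])
    dp = [[0] * m for _ in range(n)]
    best = 0
    for r in range(n):
        for c in range(m):
            if grid[r][c] != 'o':
                if r == 0 or c == 0:
                    dp[r][c] = 1
                else:
                    dp[r][c] = 1 + min(dp[r-1][c], dp[r][c-1], dp[r-1][c-1])
                best = max(best, dp[r][c])
    return best

grid = ["...........................",
        "....o......................",
        "............o..............",
        "...........................",
        "....o......................",
        "...............o...........",
        "...........................",
        "......o..............o.....",
        "..o.......o................"]
print(largest_square(grid))  # 7, matching the example in the question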
I want to make a linear fit to a few data points, as shown in the image. Since I know the intercept (in this case, say, 0.05), I want to fit only the points which are in the linear region with this particular intercept. In this case it will be, let's say, points 5:22 (but not 22:30).
I'm looking for a simple algorithm to determine this optimal range of points, based on... hmm, that's the question... R^2? Any ideas how to do it?
I was thinking about probing R^2 for fits using points 1:30, then 2:30, and so on, but I don't really know how to enclose it in a clear and simple function. For fits with a fixed intercept I'm using polyfit0 (http://www.mathworks.com/matlabcentral/fileexchange/272-polyfit0-m). Thanks for any suggestions!
EDIT:
sample data:
intercept = 0.043;
x = 0.01:0.01:0.3;
y = [0.0530642513911393,0.0600786706929529,0.0673485248329648,0.0794662409166333,0.0895915873196170,0.103837395346484,0.107224784565365,0.120300492775786,0.126318699218730,0.141508831492330,0.147135757370947,0.161734674733680,0.170982455701681,0.191799936622712,0.192312642057298,0.204771365716483,0.222689541632988,0.242582251060963,0.252582727297656,0.267390860166283,0.282890010610515,0.292381165948577,0.307990544720676,0.314264952297699,0.332344368808024,0.355781519885611,0.373277721489254,0.387722683944356,0.413648156978284,0.446500064130389];
What you have here is a problem for which it is rather difficult to find a general solution.
One approach would be to compute the slopes/intercepts between all consecutive pairs of points, and then do cluster analysis on the intercepts:
slopes = diff(y)./diff(x);
intercepts = y(1:end-1) - slopes.*x(1:end-1);
idx = kmeans(intercepts, 3);
x([idx; 3] == 2) % the points with the intercepts closest to the linear one
This requires the statistics toolbox (for kmeans). This is the best of all the methods I tried, although the range of points found this way might have a few small holes in it; e.g., when the slopes between two points near the start and end of the range lie close to the slope of the line, those points will be detected as belonging to the line. This (and other factors) will require a bit more post-processing of the solution found this way.
Another approach (which I failed to implement successfully) is to do a linear fit in a loop, each time increasing the range of points from some point in the middle towards both of the endpoints, and checking whether the sum of squared errors remains small. I gave up on this very quickly, because defining what "small" means is very subjective and must be done in some heuristic way.
I tried a more systematic and robust approach of the above:
function test
    %% example data
    slope = 2;
    intercept = 1.5;
    x = linspace(0.1, 5, 100).';
    y = slope*x + intercept;
    y(1:12) = log(x(1:12)) + y(12) - log(x(12));
    y(74:100) = y(74:100) + (x(74:100) - x(74)).^8;
    y = y + 0.2*randn(size(y));

    %% simple algorithm
    [X, fn] = fminsearch(@(ii) P(ii, x, y, intercept), [0.5 0.5])
    [~, inds] = P(X, x, y, intercept)
end

function [C, inds] = P(ii, x, y, intercept)
    % ii represents the fraction of the range from center to end,
    % so ii lies between 0 and 1.
    N = numel(x);
    n = round(N/2);
    ii = round(ii*n);
    inds = min(max(1, n + (-ii(1):ii(2))), N);
    % Solve the linear system with fixed intercept
    A = x(inds);
    b = y(inds) - intercept;
    % and return the sum of squared errors, divided by the number of
    % points included in the set. This last step is required to prevent
    % fminsearch from reducing the set to 1 point (= minimum possible
    % squared error).
    C = sum(((A\b)*A - b).^2)/numel(inds);
end
which only finds a rough approximation to the desired indices (12 and 74 in this example).
When fminsearch is run a few dozen times with random starting values (really just rand(1,2)), it gets more reliable, but I still wouldn't bet my life on it.
If you have the statistics toolbox, use the kmeans option.
Depending on the number of data values, I would split the data into a relatively small number of overlapping segments, and for each segment calculate the linear fit, or rather the first-order coefficient (remember you know the intercept, which will be the same for all segments).
Then, for each coefficient, calculate the MSE between this hypothetical line and the entire dataset, choosing the coefficient which yields the smallest MSE.
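A minimal sketch of that idea in Python/NumPy terms (my own, not from the answer; seg_len and step are assumed tuning parameters, and x, y are NumPy arrays):

import numpy as np

def best_segment_slope(x, y, intercept, seg_len=10, step=5):
    """Fit a fixed-intercept slope on each overlapping segment, then keep
    the slope whose line has the smallest MSE against the entire dataset."""
    best_slope, best_mse = None, np.inf
    for start in range(0, len(x) - seg_len + 1, step):
        xs = x[start:start + seg_len]
        ys = y[start:start + seg_len] - intercept
        slope = xs.dot(ys) / xs.dot(xs)  # least squares through the fixed intercept
        mse = np.mean((slope*x + intercept - y)**2)
        if mse < best_mse:
            best_slope, best_mse = slope, mse
    return best_slope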