Imagine a straight line through the origin. Since rotations and reflections are easy, assume the slope is in the range 0 to 1.
We have the grid of integer points in the Cartesian plane. I want to find the grid point with 0 < x <= D that the line passes closest to.
The simple approach is, for each x from 1 .. D, to find the points just above and just below the line and compute their perpendicular distance to the line. Finding the minimum this way takes 2 × D comparisons.
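A minimal sketch of that linear scan (Python; the slope is given as a float m in (0, 1], and closest_grid_point is a name I made up):

```python
import math

def closest_grid_point(m, D):
    """For each x in 1..D, the lattice points nearest the line y = m*x
    are (x, floor(m*x)) and (x, floor(m*x) + 1).  The perpendicular
    distance is |m*x - y| / sqrt(m*m + 1); the denominator is the same
    for every candidate, so comparing |m*x - y| is enough."""
    best, best_err = None, float("inf")
    for x in range(1, D + 1):
        y0 = math.floor(m * x)
        for y in (y0, y0 + 1):
            err = abs(m * x - y)
            if err < best_err:
                best, best_err = (x, y), err
    return best
```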
That's not bad, but I am trying to come up with an O(log D) approach.
Is there one?
An equivalent problem would be to find the closest rational
number n / d where d <= D.
This question seems to be equivalent to yours: Finding the closest integer fraction to a given random real
The accepted answer there uses a Farey Sequence.
It also links to this interesting blog post.
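For illustration, here is a minimal sketch of the Stern-Brocot/Farey idea (Python; closest_fraction is my name for it, and the target x is assumed to lie strictly between 0 and 1):

```python
from fractions import Fraction

def closest_fraction(x, max_den):
    # Walk the Stern-Brocot tree: keep bounds lo <= x <= hi and refine
    # with mediants until the mediant's denominator would exceed max_den.
    lo, hi = Fraction(0, 1), Fraction(1, 1)
    while True:
        med = Fraction(lo.numerator + hi.numerator,
                       lo.denominator + hi.denominator)
        if med.denominator > max_den:
            break
        if med < x:
            lo = med
        elif med > x:
            hi = med
        else:
            return med          # x is exactly representable
    # The answer is whichever remaining bound is closer to x.
    return lo if x - lo <= hi - x else hi
```

Note that this plain walk can take O(D) mediant steps when x is very close to 0 or 1; batching runs of identical steps (equivalently, using continued-fraction convergents) brings it down to O(log D). In Python, fractions.Fraction(x).limit_denominator(D) already does exactly this.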
Not a full answer, but an optimization in some cases: if the slope of the line is a rational number, the pattern of distances repeats, so you can look at fewer points when D is larger than the denominator.
E.g., if the slope is 12/17, you don't need to look at more than 17 points out from the origin; after that, the pattern repeats.
Of course, if D < 17 in that example, there's no benefit.
Also, if your slope is π, you're out of luck...
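A quick sanity check of that periodicity, using exact fractions so floating point doesn't get in the way:

```python
from fractions import Fraction

# Distance from the line y = (12/17) x to the nearest lattice point at column x:
d = lambda x: abs(Fraction(12 * x, 17) - round(Fraction(12 * x, 17)))
assert [d(x) for x in range(1, 18)] == [d(x) for x in range(18, 35)]
```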
Can you help me with this problem? Given N <= 10^5 pairs of points, written in an array A with A[i][0] <= A[i][1]. Also given are M <= 10^5 pairs of segments, the i-th pair given in the form L_1[i], R_1[i], L_2[i], R_2[i]. For each pair of segments I need to count the pairs (A[z][0], A[z][1]) from array A that satisfy L_1[i] <= A[z][0] <= R_1[i] <= L_2[i] <= A[z][1] <= R_2[i].
I think a scan-line algorithm can be used here, but I don't know how to fit into the time and memory limits. My idea runs in O(N * M * log N).
If you map A[i] to the point (A[i][0], A[i][1]) on the 2D plane, then for each pair of segments you are just counting the number of points inside the rectangle whose bottom-left corner is (L_1[i], L_2[i]) and whose top-right corner is (R_1[i], R_2[i]). Counting points on the 2D plane is a classic problem that can be solved in O(n log n). Here are some possible implementations:
1. Notice that the number of points in a rectangle P(l,b,r,t) can be written as P(0,0,r,t) - P(0,0,l-1,t) - P(0,0,r,b-1) + P(0,0,l-1,b-1), so the problem reduces to computing the prefix counts P(0,0,?,?). This is easy if we maintain a Fenwick tree during a sweep, which is essentially the scan-line algorithm (a sketch follows after this list).
2. Build a persistent segment tree over the x-coordinates (in O(n log n) time) and answer the queries for the segments in O(m log n).
3. Build a k-d tree and answer each query in O(sqrt(n)) time. This is less efficient, but useful when you need to insert points and count them online.
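A minimal sketch of the first approach (Python; count_in_rectangles is my name, and coordinates are assumed to be integers):

```python
import bisect

def count_in_rectangles(points, rects):
    """points: list of (x, y); rects: list of (x1, y1, x2, y2), inclusive.
    Offline sweep over x with a Fenwick tree over compressed y-ranks;
    O((n + m) log n) after sorting."""
    ys = sorted({y for _, y in points})
    rank = {y: i + 1 for i, y in enumerate(ys)}        # 1-based ranks
    bit = [0] * (len(ys) + 1)

    def add(i):                                        # insert a point with y-rank i
        while i <= len(ys):
            bit[i] += 1
            i += i & -i

    def prefix(i):                                     # how many inserted ranks are <= i
        s = 0
        while i > 0:
            s += bit[i]
            i -= i & -i
        return s

    # Decompose each rectangle into four signed prefix queries
    # P(x, y) = number of points with px <= x and py <= y.
    events = []
    for k, (x1, y1, x2, y2) in enumerate(rects):
        events += [(x2, y2, +1, k), (x1 - 1, y2, -1, k),
                   (x2, y1 - 1, -1, k), (x1 - 1, y1 - 1, +1, k)]
    events.sort()

    pts = sorted(points)
    ans = [0] * len(rects)
    j = 0
    for x, y, sign, k in events:
        while j < len(pts) and pts[j][0] <= x:         # sweep: insert points up to x
            add(rank[pts[j][1]])
            j += 1
        ans[k] += sign * prefix(bisect.bisect_right(ys, y))
    return ans
```

For the original problem, map each pair A[z] to the point (A[z][0], A[z][1]) and each pair of segments to the rectangle (L_1[i], L_2[i], R_1[i], R_2[i]).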
Sorry for my poor English. Feel free to point out my typos and mistakes.
Suppose I have an image that looks like
x x x x
x x x x
x x x x
Dimensions might change, but this gives a rough idea.
I am curious whether there is an algorithm that quickly checks if the point I am currently looking at is a corner, a point on one of the 4 sides, or a point inside the square, and that also helps me check all the points around it.
My current approach is to write a few helper functions that separately check whether the current coordinate is a corner, a point on one of the 4 sides, or a point inside the square. Within each helper function I use several loops to check all the neighboring points around the current point. But this approach feels extremely inefficient; I believe there must be a more advanced way. Has anyone encountered this kind of question before and can help?
Thanks.
Largely you are correct, but there should be no need for loops. You can make your functions efficient with some index arithmetic and direct access into a 1-dimensional array.
Imagine that your image is stored in a 1-dimensional array D. The image is of size (m, n), so the array has size m x n. Each data point's ID is its index into the array D.
To access neighbors of ID = a, use the following offsets:
a-1, a+1 for left and right neighbors
a-m, a+m for bottom and top neighbors
a-m+1, a-m-1, a+m+1, a+m-1 for diagonal neighbors
After every offset you need to check for the following:
is the neighbor index out of bounds for the array D?
does the neighbor index wrap around the x-bounds? That is, check that abs((neighbor_id % m) - (a % m)) <= 1; otherwise neighbor_id is not a neighbor.
Of course, the second test assumes that your image is large enough (perhaps m > 3).
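As a minimal sketch (Python; m is the row width, n the number of rows, and the helper names are mine):

```python
def classify(a, m, n):
    """Is flat index a a corner, a side point, or interior?"""
    x, y = a % m, a // m
    on_x = x in (0, m - 1)
    on_y = y in (0, n - 1)
    if on_x and on_y:
        return "corner"
    if on_x or on_y:
        return "side"
    return "interior"

def neighbors(a, m, n):
    """Yield the flat indices of up to 8 neighbors of a, applying both
    checks above: array bounds, then the x-wrap-around test."""
    for off in (-1, +1, -m, +m, -m - 1, -m + 1, +m - 1, +m + 1):
        b = a + off
        if 0 <= b < m * n and abs(b % m - a % m) <= 1:
            yield b
```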
I have n (about 10^5) points on a hypersphere of dimension m (between 10^4 and 10^6).
I am going to make a bunch of queries of the form "given a point p, find the closest of the n points to p". I'll make about n of these queries.
(Not sure if the hypersphere fact helps at all.)
The simple naive algorithm is, for each query, to compare p to all n points. Doing this for n queries gives a runtime of O(n^2 m), which is far too big for me to compute.
Is there a more efficient algorithm I can use? If I could get it to O(nm) with some log factors that'd be great.
Probably not. Having many dimensions makes efficient indexing extremely hard. That is why people look for opportunities to reduce the number of dimensions to something manageable.
See https://en.wikipedia.org/wiki/Curse_of_dimensionality and https://en.wikipedia.org/wiki/Dimensionality_reduction for more.
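If reducing dimensions is an option, a cheap and common trick is a Gaussian random projection (Johnson-Lindenstrauss style); a minimal numpy sketch with made-up sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 1000, 10_000, 256               # made-up sizes, with k << m
X = rng.normal(size=(n, m))               # the n points
R = rng.normal(size=(m, k)) / np.sqrt(k)  # random projection matrix
Y = X @ R   # (n, k): pairwise distances are approximately preserved,
            # so a nearest neighbor in Y is a good candidate in X
```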
Divide your space up into hypercubes -- call these cells -- with edge size chosen so that on average you'll have one point per cell. You'll want a map from cells to the set of points they contain.
Then, given a point, check its cell for other points. If it is empty, look at the adjacent cells (I'd recommend a literal hypercube of cells for simplicity, rather than some approximation to a hypersphere built out of cells). Check those for other points. Keep expanding until you find a point. Assuming your points are randomly distributed, odds are high that you'll find a second point within 1-2 expansions.
Once you find a point, check all cells that could possibly contain a closer one. This is necessary because the point you found may sit near a corner of the region searched so far, and a closer point could lie just outside the hypercube of cells you've inspected.
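A minimal sketch of the cell idea (Python; only practical in low dimension, since the number of cells scanned grows exponentially with the dimension, echoing the previous answer's warning):

```python
import itertools, math
from collections import defaultdict

def cell_of(p, cell):
    return tuple(math.floor(c / cell) for c in p)

def build_grid(points, cell):
    grid = defaultdict(list)
    for p in points:
        grid[cell_of(p, cell)].append(p)
    return grid

def nearest(grid, q, cell):
    home = cell_of(q, cell)
    best, best_d = None, float("inf")
    r = 0
    while True:
        # Scan the shell of cells at Chebyshev distance exactly r from home.
        for offs in itertools.product(range(-r, r + 1), repeat=len(home)):
            if max(abs(o) for o in offs) != r:
                continue
            for p in grid.get(tuple(h + o for h, o in zip(home, offs)), ()):
                d = math.dist(p, q)
                if d < best_d:
                    best, best_d = p, d
        # Every unscanned cell is at least r whole cell-widths away along
        # some axis, so its points are at distance >= r * cell from q.
        if best is not None and best_d <= r * cell:
            return best
        r += 1
```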
Suppose you have some points in 3D and you would like to sort them in either increasing or decreasing order, without considering CW/CCW orientation. How do you sort them?
One method of sorting 3D points is to compare their magnitudes, i.e. their distances from the origin point (0, 0, 0), which may be computed using the 3D analogue of Pythagoras' theorem:
M = sqrt(x^2 + y^2 + z^2)
You'll then have a list of floats/doubles that may be sorted using any conventional sorting algorithm.
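For instance, a minimal Python sketch:

```python
import math

points = [(3, 4, 0), (1, 1, 1), (0, 0, 5)]
# Sort by magnitude, i.e. distance from the origin:
points.sort(key=lambda p: math.hypot(*p))  # hypot takes 3 args since Python 3.8
```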
That's just the most common method, though. There exist infinitely many ways of comparing 3D points, some of which are more sensible than others. For example, which is the "bigger" point, (1, 0, 0) or (-10, -50, 5)? Comparing the X or Y coordinate would suggest the former being larger, while comparing the Z coordinate or magnitude suggests the latter being larger. None of these answers are completely right or wrong; it really depends on what you need your application to do.
I am trying to find a way to solve an optimization problem as follows:
I have 22 different objects that can be selected more than once, and an evaluation function f that takes the multiplicities and calculates the total value.
f is a product of fractions of linear (affine) terms and, as such, is differentiable and even smooth in the allowed region.
I want to optimize f with respect to the 22 variables, with the additional condition that certain sums may not exceed certain values (for example, if a, ..., v are my variables, a + e + i + m + q + s <= 9). This bounds all of the variables.
If f were strictly monotonic, this could be solved optimally by a (minimally modified) knapsack solution. However, the function isn't convex. That means we cannot even assume that if taking object A is better than taking B on an empty knapsack, the same choice holds after adding a third object C (C could raise B's benefit above A's). So a greedy algorithm cannot be used.
Are there algorithms that solve such a problem optimally, or at least nearly optimally?
EDIT: As requested, an example of the problem (I chose 5 variables a, b, c, d, e for simplicity):
For example: f(a,b,c,d,e) = e*(a*0.45 + b*1.2 - 1)/(c + d)
(Every variable only appears once, if this helps at all)
Also, for example, the constraints are a + b + c = 4 and d + e = 3.
The problem is to optimize f with respect to a, b, c, d, e as integers. There are plenty of optimization algorithms for convex functions, but very few for non-convex ones...
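For the 5-variable toy example, the feasible lattice is tiny, so exhaustive enumeration is a minimal (if brute-force) sketch; it clearly does not scale to the 22-variable problem:

```python
import itertools

def f(a, b, c, d, e):
    return e * (a * 0.45 + b * 1.2 - 1) / (c + d)

best, best_val = None, float("-inf")
# All nonnegative integer solutions of a + b + c = 4 and d + e = 3.
for a, b in itertools.product(range(5), repeat=2):
    if a + b > 4:
        continue
    c = 4 - a - b
    for d in range(4):
        e = 3 - d
        if c + d == 0:               # f is undefined here
            continue
        val = f(a, b, c, d, e)
        if val > best_val:
            best, best_val = (a, b, c, d, e), val
print(best, best_val)
```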