Is there a feature in jqplot that calculates the curve and allows you to display values in between the points in a series?
For example, the series below will only display these five values when showVerticalLine is set to true. I'd like to display a value along the curve at each tick.
var data = [['2011-05-03 10:15:30', 25],
['2011-05-04 11:30:30', 30],
['2011-05-05 10:15:30', 25],
['2011-05-06 11:30:30', 33],
['2011-05-07 10:15:30', 25]];
I think you could approach it in the following way:
For each curve segment, check for an intersection with the vertical line (this takes some math on line/curve intersection, but there is plenty on the web; you might even find a ready-made JavaScript method).
If they intersect you have the wanted point and you can display its x and y coordinates.
If you use the smoothed line option, then you could get the points from plotObj.series[0].renderer._smoothedPlotData, as in @Mark's answer, and test, for example, the point-to-line distance and take the closest point. The first approach would be more precise, though.
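I don't know of a built-in jqPlot call that returns interpolated values, so here is a rough sketch of the first idea, written in Python for brevity (day numbers stand in for the parsed timestamps, and straight segments are assumed rather than jqPlot's internal smoothing):

def interpolate_at(x, series):
    # Return the y value where the vertical line at x crosses the polyline
    # through the (x, y) points in `series`, assumed sorted by x.
    for (x0, y0), (x1, y1) in zip(series, series[1:]):
        if x0 <= x <= x1:
            t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return None  # x lies outside the plotted range

# Example: day numbers stand in for the timestamps in the question.
series = [(3, 25), (4, 30), (5, 25), (6, 33), (7, 25)]
print(interpolate_at(4.5, series))  # -> 27.5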
Related
I have 2 sets of points in 3D with the same count, and I want to know if they have the same pattern. I thought I might project them onto the XZ, XY and YZ planes and then compare the projections in each plane, but I am not sure how to do this; I thought the convex hull might help, but it won't be accurate.
Is there an easy algorithm to do that? Complexity is not a big issue, since the point count will be tiny. I am implementing this in Java.
Can I solve this directly in 3D with the same algorithm?
The attached image shows an example of what I mean.
Edit:
There is no guarantee on the order of the points.
There is no scaling; only rotation and translation.
I would gather some information about each point: information that only depends on "shape", not on the actual translation/rotation. For instance, it could be the sum of all the distances between the point and any other point of the shape. Or it could be the largest angle between any two points, as seen from the point under consideration. Choose whatever metric brings the most diversity.
Then sort the points by that metric.
Do the above for both groups of points.
As a first step you can compare both groups by their sorted list of metrics. Allow for a little error margin, since you will be dealing with floating point precision limitations. If they cannot be mapped to each other, abort the algorithm: they are different shapes.
Now translate the point set so that the first point in the ordered list is mapped to the origin (0, 0, 0), i.e. subtract the first point from all points in the group.
Now rotate the point set around the Y axis, so that the second point in the ordered list lies in the XY plane. Then rotate the point set around the Z axis, so that that point coincides with the X axis: it should map to (d, 0, 0), where d is the distance between the first and second points in the sorted list.
Finally, rotate the point set around the X axis, so that the third point in the ordered list lies in the XY plane. If that point is collinear with the previous points, you need to continue doing this with the next point(s) until you have rotated a non-collinear point.
Do this with both groups of points. Then compare the so-transformed coordinates of both lists.
This is the main algorithm, but I have omitted the cases where the metric value is the same for two points, and thus the sorted list could have permutations without breaking the sort order:
In that case you need to perform the above transformations with the different permutations of those equally valued points at the start of the sorted list, for as long as there is no fit.
Also, while checking the fit, you should take into account that the matching point may not be in the exact same order as in the other group's sorted list, and you should verify the next points that have the same metric as well.
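To make the canonicalization above concrete, here is a minimal NumPy sketch (the names are my own, not from any library). It assumes the invariant metric, here the sum of distances to all other points, gives a unique ordering; the tie/permutation handling described above is left out.

import numpy as np

def canonicalize(points, tol=1e-6):
    pts = np.asarray(points, dtype=float)

    # Invariant metric: sum of distances from each point to all others.
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    metric = dists.sum(axis=1)
    order = np.argsort(metric)
    pts, metric = pts[order], metric[order]

    # Translate the first point in the sorted list to the origin.
    pts = pts - pts[0]

    # Rotation about Y that zeros the z-component of p, followed by a
    # rotation about Z that zeros its y, so p lands on the +X axis.
    def rotation_onto_x(p):
        ry = np.arctan2(p[2], p[0])
        cy, sy = np.cos(ry), np.sin(ry)
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        q = Ry @ p
        rz = np.arctan2(q[1], q[0])
        cz, sz = np.cos(rz), np.sin(rz)
        Rz = np.array([[cz, sz, 0], [-sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry

    if len(pts) > 1:
        pts = pts @ rotation_onto_x(pts[1]).T

    # Rotate about X so the first non-collinear point lies in the XY plane.
    for p in pts[2:]:
        if np.hypot(p[1], p[2]) > tol:
            rx = np.arctan2(p[2], p[1])
            cx, sx = np.cos(rx), np.sin(rx)
            Rx = np.array([[1, 0, 0], [0, cx, sx], [0, -sx, cx]])
            pts = pts @ Rx.T
            break

    return metric, pts

def same_shape(a, b, tol=1e-6):
    metric_a, pts_a = canonicalize(a, tol)
    metric_b, pts_b = canonicalize(b, tol)
    if pts_a.shape != pts_b.shape or not np.allclose(metric_a, metric_b, atol=tol):
        return False  # the metric signatures already differ
    return np.allclose(pts_a, pts_b, atol=tol)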
If you have a fixed object with different shapes and movements, pair-wise or multi-way matching can be a helpful solution for you; for example, see this paper. This method can be extended to higher dimensions as well.
If you have two different sets of points that come from different objects and you want to find the similarity between them, one solution can be to compute the discrete Fréchet distance between the two point sequences and compare the result.
The other related concept is shape reconstruction. You can combine the result of a proper shape-reconstruction algorithm with the two previous methods to compute the similarity.
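For reference, here is a small sketch of the discrete Fréchet distance mentioned above (the Eiter-Mannila dynamic-programming formulation). Note that it treats the inputs as ordered sequences of points, which the question does not guarantee, so it fits best once an ordering has been established.

import math
from functools import lru_cache

def discrete_frechet(p, q):
    # p and q are sequences of points (tuples of equal dimension).
    @lru_cache(maxsize=None)
    def c(i, j):
        d = math.dist(p[i], q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(p) - 1, len(q) - 1)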
Multiple points on a 2D plane are given. They represent a window frame of mostly rectangular form with some possible variations. The points which are part of each side are not guaranteed to form a perfect line. Each side of the window should be measured.
A rotating electronic device attached to a window measures the distance in all directions providing a 360 degree measurements. By using the rotation angle and the distance, a set of points are plotted on a 2D coordinate system. So far so good.
Now comes the harder part. The measured window frame could have some variations. The points should be converted to straight lines and the length of each line should be measured.
I imagine that the following steps are required:
Group the different points into straight lines. This means approximating each line “between” the points that form it.
Drawing those lines, getting rid of the separate points used to construct the lines.
Find the points where each two lines intersect.
Measure the distance between those points. However not all distances between all points are interesting. For example diagonals within a frame are irrelevant.
Any Java libraries dealing with geometry that could solve the problem are acceptable. I will write the solution in Kotlin/Java, but any algorithmic insights or code examples and ideas in any other languages or pseudo code are welcome.
Thank you in advance!
I would solve this in 2 stages:
Data cleaning: round the location (X, Y) of each point to its nearest multiple of N (vary N for varying degrees of precision)
Apply the gift-wrapping algorithm (also known as Jarvis March)
You now have only those points that are not co-linear, and the lines between them, and the order in which they need to be traversed to form the perimeter.
Iterate over the points in order: take each pair of consecutive points Pi and Pi+1 and calculate the distance between them.
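A minimal sketch of those stages in Python (the grid step N, here called grid, is a hypothetical tuning parameter to match the sensor noise):

import math

def window_side_lengths(points, grid=5.0):
    # Stage 1: data cleaning -- snap every coordinate to the nearest multiple of grid.
    snapped = sorted({(round(x / grid) * grid, round(y / grid) * grid) for x, y in points})
    if len(snapped) < 3:
        raise ValueError("need at least 3 distinct points")

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    # Stage 2: gift wrapping (Jarvis march); collinear ties keep the farthest
    # point, so intermediate points along a straight side are dropped.
    hull = []
    start = min(snapped)  # leftmost, then lowest point: always a corner
    current = start
    while True:
        hull.append(current)
        candidate = None
        for p in snapped:
            if p == current:
                continue
            if candidate is None:
                candidate = p
                continue
            turn = cross(current, candidate, p)
            if turn < 0 or (turn == 0 and math.dist(current, p) > math.dist(current, candidate)):
                candidate = p
        current = candidate
        if current == start:
            break

    # Final step: distance between consecutive corners around the perimeter.
    return [math.dist(hull[i], hull[(i + 1) % len(hull)]) for i in range(len(hull))]

Because only the hull's corners survive, diagonals across the frame never show up in the result, which matches the requirement above that only the sides are of interest.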
From a PNG image containing a transparent region and a colored region, I would like to generate a polygon with N sides (N being configurable) that approximates the edge of the image as closely as possible. I want this polygon to be defined by a series of vectors.
For example, let's consider the following image: a plus sign. I can manage to detect the edges of the image by counting, for each pixel, the number of transparent pixels around it. I get the following matrix:
0000000000000000
0000053335000000
0000030003000000
0000030003000000
0000020002000000
0533210001233500
0300000000000300
0300000000000300
0300000000000300
0533210001233500
0000020002000000
0000030003000000
0000030003000000
0000053335000000
0000000000000000
0000000000000000
I think, based on this matrix, I should be able to get the coordinates of all the corners and therefore get the vectors, but I cannot figure out how. In this case, I would like my program to return:
[7,2]->[11,2]
[11,2]->[11,6]
[11,6]->[15,6]
...
Do any of you have a suggestion or a link to do that?
Ultimately, I would also like to approximate angles other than 90 and 0 degrees, but that's really for a second stage.
I think you will find that a number of tools in the computer vision (CV) toolkit can be of use to you. You'll do best to leverage these resources rather than roll your own solution.
The two features I think you'd be interested in extracting are edges and corners.
Edges, like what you were going for, can get you toward the outline of the shape. What you're probably not interested in right now are edge detection techniques, which transform your image into a binary image of edge/space. Instead, you'll want to look into the Hough transform, which can give you end points for each of the lines in your image. If you are dealing with well-defined, solid, straight lines as you seem to be, this should work quite well. You've tagged your question as Ruby, so maybe you can take a look at OpenCV (OpenCV is written in C, but the ruby-opencv and javacv projects provide bindings). Here is the Hough transform documentation for OpenCV. One thing you may find, however, is that the Hough transform doesn't give you lines which connect; this depends on the regularity/irregularity of the actual lines in your image. Because of this, you may need to manually connect the end points of the lines into a structure.
Corners may work quite well for images such as the one you provided. The standard algorithm is Harris corner detection. Similar to the Hough transform, you can use this technique to return the 'most significant' features in the image. This technique is known for giving consistent results, even for different images of the same thing; as such, it's often used for pattern recognition and the like. However, if your images are as simple as the one provided, you may well be able to extract all of the shape's corners in this manner. Getting the shape of the image would then just be a matter of connecting the points in a meaningful way, given your predefined N sides.
You should definitely play with both of these feature spaces and see how they work, and you could probably use both in concert for better results.
As an aside, if your image really is color/intensity on transparent you can convert your image to a 'binary image'. Note that this is not just binary data. Instead, it means you are only representing two colors, one represented by 0 and the other represented by 1. Doing so opens up a whole suite of tools that work on grayscale and binary images. For example, the matrix of numbers you calculated manually above is known as a distance transformation and can be done quite easily and efficiently using tools like OpenCV.
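To make those suggestions concrete, here is a hedged sketch using OpenCV's Python bindings (cv2); ruby-opencv and javacv expose the same calls under similar names. The file name plus.png and every threshold below are placeholders to tune, and the image is assumed to carry an alpha channel.

import cv2
import numpy as np

img = cv2.imread("plus.png", cv2.IMREAD_UNCHANGED)
# Treat the alpha channel as the shape mask: opaque -> 255, transparent -> 0.
mask = (img[:, :, 3] > 0).astype(np.uint8) * 255

# Line segments via the probabilistic Hough transform on a Canny edge map.
edges = cv2.Canny(mask, 50, 150)
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, 20, minLineLength=3, maxLineGap=2)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        print("segment", (x1, y1), "->", (x2, y2))

# Corner candidates via Harris corner detection.
response = cv2.cornerHarris(np.float32(mask), 2, 3, 0.04)
rows, cols = np.where(response > 0.1 * response.max())
print("corners:", list(zip(cols.tolist(), rows.tolist())))  # (x, y) pairs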
The Hough transform is a standard technique for finding lines, polygons, and other shapes given a set of points. It might be exactly what you're looking for here. You could use the Hough transform to find all possible line segments in the image, then group nearby line segments together to get a set of polygons approximating the image.
Hope this helps!
In such a simple situation you can do the following three steps: find the centroid of your shape, sort the points of interest by the angle between the x axis and the line formed by each point and the centroid, and walk through the sorted points.
Given the situation, the x coordinate of the centroid is the sum of the x coordinates of the points of interest divided by the total number of points of interest (and respectively for the y coordinate of the centroid). To calculate the angles, it is a simple matter of using atan2, which is available in virtually any language. Your points of interest are those presented as 1 or 5; anything else is not a corner (based on your input).
Do not be fooled into thinking that Hough will solve your question, i.e., it won't give you the sorted coordinates you are after. It is also an expensive method. Also, given your matrix, you already have such perfect information that no other method will beat it (the problem, of course, is reproducing such a good result as the one you presented -- on those occasions, Hough might prove useful).
My Ruby is quite bad, so take the following code as a guideline to your problem:
include Math

data = ["0000000000000000",
        "0000053335000000",
        "0000030003000000",
        "0000030003000000",
        "0000020002000000",
        "0533210001233500",
        "0300000000000300",
        "0300000000000300",
        "0300000000000300",
        "0533210001233500",
        "0000020002000000",
        "0000030003000000",
        "0000030003000000",
        "0000053335000000",
        "0000000000000000",
        "0000000000000000"]

# Collect the corner pixels (values 1 or 5) in Cartesian coordinates,
# with (1, 1) at the bottom-left of the matrix.
corner_x = []
corner_y = []
data.each_with_index do |line, i|
  line.chars.each_with_index do |col, j|
    if col == "1" || col == "5"
      corner_x.push(j + 1)
      corner_y.push(data.length - i)
    end
  end
end

# Centroid of the corner points.
centroid_x = corner_x.reduce(:+) / corner_x.length.to_f
centroid_y = corner_y.reduce(:+) / corner_y.length.to_f

# Sort the corners by the angle they make with the centroid.
corner = []
corner_x.zip(corner_y).each do |c|
  dy = c[1] - centroid_y
  dx = c[0] - centroid_x
  theta = Math.atan2(dy, dx)
  corner.push([theta, c])
end
corner.sort!

# Walk the sorted corners and print each consecutive pair.
corner.each_cons(2) do |c|
  puts "%s->%s" % [c[0][1].inspect, c[1][1].inspect]
end
This results in:
[2, 7]->[6, 7]
[6, 7]->[6, 3]
[6, 3]->[10, 3]
[10, 3]->[10, 7]
[10, 7]->[14, 7]
[14, 7]->[14, 11]
[14, 11]->[10, 11]
[10, 11]->[10, 15]
[10, 15]->[6, 15]
[6, 15]->[6, 11]
[6, 11]->[2, 11]
These are your vertices in anti-clockwise order, starting with the bottom-left-most point (in Cartesian coordinates with (1, 1) at the bottom-left position).
I was wondering: given a number of 2D points, how do I calculate where to draw a point on the screen on a logarithmic y scale?
I tried to just take the logarithm of all the y values of the points and then plot them 'normally' (plot the point [x, log(y)] at height height*log(y)/log(max)). But this approach causes problems for y values under 1, which makes me wonder if my method is the right approach in general. A tweak I could maybe use would be log(y/min) instead of log(y).
Any advice on improvement or better approaches are welcome!
Assuming the y values are positive, use your own approach with a small bias, e.g. height*log(y - min + 1)/log(max - min + 1), to avoid very large negative values.
If you plot y/ymin logarithmically, you'll scale the smallest value to 1, guaranteeing that all the logarithmic values are non-negative.
Check out the R implementation of plot, which may provide you with some clues. If you use plot(x, y, log='y'), then the y axis is plotted on a log scale.
About points < 1: you will face the same problem with negative numbers, right? So basically you need to normalize the data such that all points are within the visible range on the screen. Use the following transformation:
ny = number_of_pixels*(log(y) - min(log(Y)))/(max(log(Y))-min(log(Y)))
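A tiny sketch of that normalization (assuming all values are strictly positive; the degenerate case where min and max coincide is ignored):

import math

def y_to_pixel(y, y_values, height_px):
    # Map a data value y onto a log-scaled axis that is height_px pixels tall.
    log_min = math.log(min(y_values))
    log_max = math.log(max(y_values))
    return height_px * (math.log(y) - log_min) / (log_max - log_min)

# Example: values spanning 0.5 .. 100 all land inside 0 .. 400 pixels.
ys = [0.5, 1, 2.5, 30, 100]
print([round(y_to_pixel(y, ys, 400), 1) for y in ys])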
From what I understand, you seem to be trying to plot log(y) but keeping the y axis as it was for y? That doesn't really make sense.
The way you are plotting is fine: you plot a point at (x, log(y)).
But what you need to change is the limits of the y axis: if it originally went from ymin to ymax, it now needs to go from log(ymin) to log(ymax).
If you change the y axis limits that way, then the points will fit in just fine.
I have an array of points in 3D (imagine the trajectory of a ball) with X samples.
Now I want to resample these points so that I have a new array of positions with Y samples.
Y can be bigger or smaller than X, but not smaller than 1; there will always be at least 1 sample.
What would an algorithm look like that resamples the original array into a new one? Thanks!
The basic idea is to take your X points and plot them on a graph. Then interpolate between them using some reasonable interpolation function. You could use linear interpolation, quadratic B-splines, etc. Generally, unless you have a specific reason to believe the points represent a higher-order function (e.g. N^4), you want to stick to a relatively low-order interpolation function.
Once you've done that, you have (essentially) a continuous line on your graph. To get your Y points, you just select Y points equally spaced along the graph's horizontal axis.
You have to select some kind of interpolation/approximation function based on the original X samples (e.g. some kind of spline). Then you can evaluate this function at Y points (equally spaced, if wanted) to get your new samples.
For the math, you can use the Wikipedia article about spline interpolation as a starting point.
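As a concrete illustration of both answers, here is a small NumPy sketch that uses linear interpolation; a spline fit evaluated at the same parameter values would slot in the same way.

import numpy as np

def resample_trajectory(points, y):
    # Treat the X input samples as equally spaced in a parameter t in [0, 1]
    # and evaluate the interpolated curve at y equally spaced values of t.
    pts = np.asarray(points, dtype=float)          # shape (X, 3)
    t_old = np.linspace(0.0, 1.0, len(pts))
    t_new = np.linspace(0.0, 1.0, y)               # y >= 1, as the question requires
    return np.column_stack([np.interp(t_new, t_old, pts[:, d]) for d in range(pts.shape[1])])

# Example: downsample a 5-sample trajectory to 3 samples, or upsample it to 9.
traj = [(0, 0, 0), (1, 2, 0), (2, 3, 1), (3, 3, 2), (4, 2, 3)]
print(resample_trajectory(traj, 3))
print(resample_trajectory(traj, 9))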