Bounding boxes always show height as abs(min.y) + max.y - three.js

With particular reference to THREE.BoundingBoxHelper, is this correct? I want to use the numbers to clone and array a collection of objects a number of times in a specified direction. Perhaps there is a better way?

If you want z rather than y to point up, you could consider changing the default up vector for your three.js project by setting THREE.Object3D.DefaultUp to (0, 0, 1):
THREE.Object3D.DefaultUp.set( 0, 0, 1 );

A bounding box gives you two Vector3 values, min and max, marking the extremes within which your object fits.
So the height, or extent along Y, of your object is:
max.y - min.y
(When min.y is negative this equals abs(min.y) + max.y, but the difference form holds in general, e.g. when the whole box sits above the origin.)
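For example, here is a minimal sketch of that arithmetic in plain Python (standing in for the three.js math; the min/max values and the box_size name are made up), including how you might use the Y extent as the step when arraying clones:

def box_size(bmin, bmax):
    # Extent along each axis is simply max - min.
    return tuple(mx - mn for mn, mx in zip(bmin, bmax))

bmin, bmax = (-1.0, -2.0, 0.0), (1.0, 3.0, 4.0)
sx, sy, sz = box_size(bmin, bmax)
# Offsets for five copies stacked along Y, each shifted by the box height.
offsets = [(0.0, i * sy, 0.0) for i in range(5)]
print(sy, offsets)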

Related

What is the formula for calculating the change in position of an animated object in relationship to change in distance? (Parallax)

I am trying to create a parallax animation with multiple objects in the scene, each at a different distance from the observer (me), for a 3D effect, but I don't know how to accurately calculate how speed and size should change with distance.
I think it is easier to approach this question through differences in displacement rather than speed. Let's say the distance to the first object is 200px. An object's apparent size is going to be
Size = original_size * 1/distance
So while the reference object is displaced by its original size, the object at distance 250px is going to be displaced by
Displacement = original_size(px) * 1/250(px)
To get the per-pixel change:
PerPxlChange = (Displacement(original) - Displacement(250px)) / Displacement(original)
Then just multiply the per-pixel change by the change in position of the original object.
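As a quick illustration, here is a minimal sketch of that inverse-distance rule in Python (the function name is mine, and the 200px/250px figures are just taken from the example above):

def parallax_displacement(ref_displacement, ref_distance, distance):
    # Apparent motion scales as 1/distance, so an object at `distance`
    # moves by ref_displacement * ref_distance / distance.
    return ref_displacement * ref_distance / distance

# If the reference object at 200px moves 50px, an object at 250px moves:
print(parallax_displacement(50.0, 200.0, 250.0))  # 40.0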

Calculate point of intersection in bounding box

I have two x,y pairs that create a line within a bounding box.
coord1 = 75, 180
coord2 = -30, 300
The bounding box runs from x = 0 to x = 500 and y = 0 to y = 400.
I want to create an object that can tell me the coordinates of where the line intersects the bounding box.
i.e.
Intercept.new(bounding_box, coord1, coord2).call! returns the intercept point [x,y]
I believe I need to use y = mx + b, but I'm having trouble writing an object that can take these two coordinates, factor in the bounding box, and tell me where the intersection point happens. Can anyone take a shot and help me out here?
EDIT Not a duplicate of the question linked in the comments. That question has a constant of the point B always being in the center of the rectangle.
You're on the right track with the y = mx + b concept, and some further linear algebra will be required to solve the problem exactly as you're drawing it. However, you stated you were just looking for direction on where to start with this particular problem.
Someone ran into a similar problem with projectile intersections while developing a game, which may be relevant to your struggles. Here's his blog post: http://factore.ca/blog/166-how-to-calculate-the-point-of-intersection-between-a-line-and-a-bounding-box
Here's a link to his ruby specific solution to his problem: https://github.com/adriand/intercept-calculator/blob/master/intercept_math.rb
Hope this helps!
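If it helps as a starting point, here is a rough sketch of the y = mx + b approach in Python (rather than Ruby; the box_intercepts name is my own): intersect the infinite line through the two coordinates with each of the four box edges and keep the points that actually lie on the box boundary.

def box_intercepts(x_min, y_min, x_max, y_max, p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    points = []
    if x1 != x2:  # Non-vertical line: y = m*x + b.
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        for x in (x_min, x_max):  # Left and right edges.
            y = m * x + b
            if y_min <= y <= y_max:
                points.append((x, y))
        if m != 0:
            for y in (y_min, y_max):  # Bottom and top edges.
                x = (y - b) / m
                if x_min <= x <= x_max:
                    points.append((x, y))
    elif x_min <= x1 <= x_max:  # Vertical line crosses top and bottom.
        points.extend([(x1, y_min), (x1, y_max)])
    return points

# The example line crosses the 500x400 box at roughly (0, 265.7) and (232.5, 0).
print(box_intercepts(0, 0, 500, 400, (75, 180), (-30, 300)))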

What is the best way to composite several rectangle structs together into one large rectangle

Say I have 2 rectangles (originX, originY, width, height)
0,0,100,100
100,100,100,100
What is the best way to get the rectangle that contains both?
i.e.: 0,0,200,200
Here is a crappy paint picture to illustrate what I mean
My plan right now is to:
1) Find the smallest originX and originY; that is the origin of the return rect
2) Find the highest originY + height; that is the top bound of the return rect
3) Find the highest originX + width; that is the right bound of the return rect
But my issue is that there could potentially be thousands of rectangles, so I want to make sure I have the best solution
I think I'm having a hard time finding a solution because I'm wording this poorly. In my mind, this would be something like compositing rectangles-- but is there another way to describe what I'm trying to do here?
No matter what, you will have to look at each rectangle to arrive at your answer. If you fail to look at even one, you could miss a point that lies outside your current bounds. So the best solution you can find will be O(n).
Since an O(n) solution is acceptable, it is pretty simple: iterate over the rectangles, storing the minimum and maximum x and y found so far. Note that the maximum x and y are given in this case by originX + width and originY + height. After iterating over all rectangles, the rectangle formed by the points minX, minY, maxX, maxY is your solution.
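For what it's worth, here is that scan as a short Python sketch (the rectangle tuples and the union_rect name are just for illustration):

def union_rect(rects):
    # One pass: track the extreme corners, then convert back to
    # (originX, originY, width, height) form.
    min_x = min(r[0] for r in rects)
    min_y = min(r[1] for r in rects)
    max_x = max(r[0] + r[2] for r in rects)
    max_y = max(r[1] + r[3] for r in rects)
    return (min_x, min_y, max_x - min_x, max_y - min_y)

print(union_rect([(0, 0, 100, 100), (100, 100, 100, 100)]))
# -> (0, 0, 200, 200)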

Distance between set of lines

Say my images are simple shapes: sets of lines, dots, curves, and simple objects.
How do I calculate the distance between images, where length is important but overall scale is not, the location of each line/curve is important, angles are important, etc.?
For example, in the attached image (scores are fictitious, just for this example):
My comparison object is the cube on the top left.
The distance to the cylinder is 80 (it has two matching lines, but the top geometry is different).
The bottom-left cube scores 100 since its lines match exactly, just at a different scale.
The bottom-right rectangle scores 90 since its top lines match exactly but its side lines are at a different scale.
I am looking for an algorithm name or general approach that will help me start thinking toward a solution.
Thank you for your help.
Here is something to get you started. When jumping into a new problem, I don't see much value in trying a lot of complex techniques just because they are available somewhere. So my focus is on relatively simple things that will fail in more varied situations, but hopefully you will see their value and get some sense of the problem.
The approach is fully based on corner detection; two typical methods are the Harris detector and the one by Shi and Tomasi described in the paper "Good Features to Track", 1994. I will use the second one, simply because there is a ready implementation in OpenCV, newer Matlab, and likely many other places. The implementations in these packages also make it easy to adjust parameters for corner quality and the minimum distance between corners.
So, supposing you can detect all corner points correctly, how do you measure how close one shape is to another based on these points? The images have arbitrary size, so my idea is to normalize the point coordinates to the range [0, 1]. This handles the scaling issue, which is desired according to the original description.
Now we have to compare point sets in the range [0, 1]. Here we go for the simplest thing: given one point p from shape a, what is the closest point in shape b? We take it to be the one with the minimum absolute difference from p. Summing these values over all points gives a score between shapes: the lower the score, the more similar the shapes (according to this approach).
Here are some shapes I drew, with the detected corners marked on them (images not reproduced here).
As you can see in that last set of images, the method will easily confuse a rectangle/square with a cylinder. To handle that, you will need to combine the approach with other descriptors. A simple one to consider initially is the ratio between the shape's area and its bounding-box area (which would give 1 for a rectangle and a lower value for a cylinder).
With the method described above, here are the measurements between the first and second shapes, first and third shapes, and so on, respectively: 0.02358485, 0.41350339, 0.30128458, 0.4980852, 0.18031262. The second cube is a resized version of the first one, and as you can see, they are very similar by this metric. The last shape is a resized version of the first cube but without keeping the aspect ratio, and the metric gives a much higher difference.
If you want to play with the code that performs this, here it is (in Python; depends on OpenCV and numpy):
import sys
import cv2 as cv
import numpy

# Load each image passed on the command line, keeping a color copy for
# drawing and a grayscale copy for corner detection.
inp = []
for fname in sys.argv[1:]:
    img_color = cv.imread(fname)
    img = cv.cvtColor(img_color, cv.COLOR_RGB2GRAY)
    inp.append((img_color, img))

ptsets = []
# Corner detection parameters.
params = (
    200,   # max number of corners
    0.01,  # minimum quality level of corners
    10,    # minimum distance between corners
)
# Params for visual circle markers.
circle_radii = 3
circle_color = (255, 0, 0)

for i, (img_color, img) in enumerate(inp):
    height, width = img.shape
    cornerMap = cv.goodFeaturesToTrack(img, *params)
    corner = numpy.array([c[0] for c in cornerMap])
    for c in corner:
        # Circle centers must be integer pixel coordinates.
        cv.circle(img_color, (int(c[0]), int(c[1])), circle_radii, circle_color, -1)
    # Just to visually check for correct corners.
    cv.imwrite('temp_%d.png' % i, img_color)
    # Convert corner coordinates to [0, 1].
    cornerUnity = (corner - corner.min()) / (corner.max() - corner.min())
    # You might want to use other descriptors here. XXX
    ptsets.append(cornerUnity)

def compare_ptsets(p):
    # Score every point set against the first one: for each point in the
    # base set, add the smallest absolute difference to any point in the
    # other set. Lower totals mean more similar shapes.
    res = numpy.zeros(len(p))
    base = p[0]
    for i in range(1, len(p)):
        res[i] = sum(numpy.abs(p[i] - value).min() for value in base)
    return res

res = compare_ptsets(ptsets)
print(res)
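To try the script, pass the image files on the command line; the first image acts as the base shape that all the others are scored against, and a temp_<i>.png with the detected corners drawn is written for each input. A hypothetical invocation: python compare_shapes.py cube.png cube_resized.png cylinder.png (the script name is made up).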
The process to follow depends on the depth of features you are going to consider and the accuracy required.
If you want something more accurate, search for technical papers like this, which can give a concrete and well-proven approach or algorithm.
EDIT:
The idea of the Waltz algorithm (a constraint-propagation method from AI) can be tweaked here. This is just my thought: interpret the original image and generate some constraints from it. For each candidate, find out how many constraints it satisfies. The one that satisfies the most constraints will be the most similar to the original image.
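To make the constraint-counting idea concrete, here is a loose Python sketch; the constraint labels are entirely made up for illustration, and in practice the constraints would be generated by interpreting the image:

# Constraints derived (hypothetically) from the original image.
original = {"lines:4", "right_angles:4", "closed", "parallel_pairs:2"}
candidates = {
    "cylinder": {"lines:2", "curves:2", "closed"},
    "cube": {"lines:4", "right_angles:4", "closed", "parallel_pairs:2"},
}
# Score each candidate by how many of the original's constraints it satisfies.
scores = {name: len(original & c) for name, c in candidates.items()}
print(max(scores, key=scores.get))  # -> "cube"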
Try calculating the mass center of each figure, treating each point of the figure as a particle with mass equal to 1.
Then calculate the distance between figures as sqrt((x1-x2)^2 + (y1-y2)^2), where (xi, yi) is the mass-center coordinate of figure i.
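A tiny Python sketch of that, treating every (x, y) point of a figure as a unit-mass particle so the mass center is just the mean (the sample squares are made up):

import numpy

def mass_center(points):
    return numpy.asarray(points, dtype=float).mean(axis=0)

def center_distance(points_a, points_b):
    d = mass_center(points_a) - mass_center(points_b)
    return float(numpy.hypot(d[0], d[1]))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(2, 0), (3, 0), (3, 1), (2, 1)]
print(center_distance(square, shifted))  # -> 2.0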

Logarithmic plotting of points

Given a number of 2D points, how do I calculate where to draw each point on the screen with a logarithmic y scale?
I tried just taking the logarithm of all the y-values and then plotting them 'normally' (plot point [x, log(y)], i.e. at height height*log(y)/log(max)). But this approach causes problems for y-values under 1, which makes me wonder whether my method is the right approach in general. A tweak I could maybe use would be log(y/min) instead of log(y).
Any advice on improvement or better approaches are welcome!
Assuming the y values are positive, use your own approach with a small bias, like height*log(y - min + 1)/log(max - min + 1), to prevent very large negative values.
If you plot y/ymin logarithmically, you'll scale the smallest value to 1, guaranteeing that all the logarithmic values are non-negative.
Check out the R implementation of plot, which may provide you with some clues. If you use plot(x, y, log='y'), the y axis is plotted on a log scale.
About points < 1: you will face the same problem with negative numbers, right? So basically you need to normalize the data so that all points fall within the visible range of the screen. Use the following transformation:
ny = number_of_pixels*(log(y) - min(log(Y)))/(max(log(Y))-min(log(Y)))
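Here is that transformation as a small Python sketch (the function name is mine; it assumes all y values are positive so the log is defined):

import math

def log_pixel_positions(ys, number_of_pixels):
    logs = [math.log(y) for y in ys]  # Requires y > 0.
    lo, hi = min(logs), max(logs)
    # Map min(log Y) to pixel 0 and max(log Y) to the top pixel.
    return [number_of_pixels * (v - lo) / (hi - lo) for v in logs]

print(log_pixel_positions([0.5, 1.0, 10.0, 100.0], 400))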
From what I understand, you seem to be trying to plot log(y) while keeping the y axis as it was for y? That doesn't really make sense.
The way you are plotting is fine: you plot a point at (x, log(y)).
But what you need to change is the limits of the y axis. If it originally went from ymin to ymax, it now needs to go from log(ymin) to log(ymax).
If you change the y-axis limits that way, the points will fit in just fine.
