My problem at hand is:
I have an arbitrarily shaped surface in SketchUp and want to check which part of the surface is in shade and which part is exposed to the sun.
My approach is to create points on the surface and subsequently check whether there are any obstructions between each point and the current position of the sun. I should add that I need to know which points will be in shade and which will be in direct sun, i.e. knowing that 30% of the surface is shaded is not sufficient for what I want to do.
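For reference, the per-point obstruction check I have in mind is a standard ray-triangle intersection test. Here is a minimal sketch of what I mean (plain Python with Moller-Trumbore rather than the SketchUp API; all names here are placeholders of my own):

import numpy as np

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore ray/triangle intersection; all inputs are numpy arrays.
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, e2)
    det = np.dot(e1, pvec)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return False
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return False
    qvec = np.cross(tvec, e1)
    v = np.dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return False
    t = np.dot(e2, qvec) * inv_det
    return t > eps                  # hit lies in front of the point

def in_shade(point, sun_direction, triangles):
    # Shaded if any obstructing triangle blocks the ray towards the sun.
    return any(ray_hits_triangle(point, sun_direction, *tri) for tri in triangles)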
Does anyone know enough about SketchUp's Ruby API to tell me how to create the points? I found the PolygonMesh object, which might be useful, but couldn't get it to work.
Failing that, what general algorithms could/should I read up on that could create the points?
Is there a better approach, in SketchUp or in general, that could achieve what I want?
Many Thanks
I made some progress!
One option is to create a point at the center of mass. For what I call regular polygons, i.e. polygons whose vertices are evenly distributed (e.g. a triangle or rectangle), the coordinates of the center of mass are
x_com = average(vertices.x)
y_com = average(vertices.y)
z_com = average(vertices.z)
See here for more details:
http://www.mathworks.com/matlabcentral/newsreader/view_thread/22176
This makes it possible to create a construction point at the center of mass, like this:
# Find the centre of mass of a polygon based on the average of the
# x, y, z values of its vertices. A construction point is added at
# the centre of mass.
def centreofmass(aface)
  mod = Sketchup.active_model   # Current model
  ent = mod.entities            # All entities in the model
  verts = aface.vertices
  n = verts.length
  x = y = z = 0
  verts.each do |v|
    x += v.position.x
    y += v.position.y
    z += v.position.z
  end
  pt = Geom::Point3d.new(x / n, y / n, z / n)
  ent.add_cpoint(pt)
end
From there I could probably create triangles by drawing lines from the center of mass to the original vertices, then repeat the process for the new triangles.
This could work for most somewhat regularly shaped surfaces. I believe there might be issues with polygons that have more vertices on one side than the other, and also with irregularly shaped polygons, e.g. slim L-shaped surfaces.
Anyway, it looks like I got a starting point.
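To make the subdivision idea concrete, here is a minimal sketch of it (plain Python, geometry only, not the SketchUp API; the tuple representation and the fixed recursion depth are my own assumptions):

def centroid(pts):
    # Average of the vertex coordinates; pts is a sequence of (x, y, z) tuples.
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def sample_points(triangle, depth):
    # Fan the triangle into three sub-triangles around its centroid and
    # recurse, collecting the centroids at the deepest level as samples.
    c = centroid(triangle)
    if depth == 0:
        return [c]
    a, b, d = triangle
    pts = []
    for tri in ((a, b, c), (b, d, c), (d, a, c)):
        pts.extend(sample_points(tri, depth - 1))
    return pts

# Example: 3 levels of subdivision give 27 sample points per triangle.
pts = sample_points(((0, 0, 0), (1, 0, 0), (0, 1, 0)), 3)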
I am looking for some information on how to "bend" an arbitrary list of points/vertices, similar to the bend modifier you can find in typical 3D modelling programs.
I want to provide a list of points, a rotation focal point, and a "final angle." For my purposes, I will always be saying "points at minimum y of the set will not change, points at maximum y will be rotated at the maximum angle, everything in between is interpolated."
Example image of the starting configuration and the desired result, given a 90-degree rotation:
Can anyone suggest a resource that would explain how to go about this? I'm prepared to code it (C++) but I'm wracking my brain on a concept that would make this work. It would be easy to generate the vertices, but the application I'm writing this for takes in user-created content and needs to bend it.
(Edited to add: I don't need a miracle matrix or deep equation solution... if you say something like "for each vertex, do this" that's good enough to get the green checkmark)
Thanks!
You seem to be transforming a straight line to a circular arc, preserving distances between adjacent points.
So, if point A is the one you want to hold constant, pick another point B to be the center of that circle. (The nearer B is to A, the more severe the bending.) Now, for any point C you want to transform, break the vector C - B into the component parallel to A - B (call that component R) and the component perpendicular to it (call it k). The magnitude of R will be the radius of the circle you map C to, and you can transform the magnitude of k into distance along that circle:
theta = |k| / |R|
C' = B + R cos(theta) + (k / |k|) |R| sin(theta)
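A minimal sketch of that per-vertex rule (Python/NumPy here, though the question mentions C++; A, B, R, k, and theta are as defined above, everything else is my own naming):

import numpy as np

def bend(points, A, B):
    # Map each point C so that straight lines through A bend into circular
    # arcs centred at B, preserving distance measured along k.
    u = (A - B) / np.linalg.norm(A - B)   # unit vector along A - B
    out = []
    for C in points:
        CB = C - B
        R = np.dot(CB, u) * u             # component parallel to A - B
        k = CB - R                        # component perpendicular to it
        r, klen = np.linalg.norm(R), np.linalg.norm(k)
        if klen < 1e-12:                  # point on the axis through A: unchanged
            out.append(B + R)
            continue
        theta = klen / r                  # arc length / radius
        out.append(B + R * np.cos(theta) + (k / klen) * r * np.sin(theta))
    return np.array(out)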
I have to solve a distance problem, and I'm getting pretty frustrated because I don't know how to do it despite having tried nearly everything I've found on the web. Here's my problem:
I work in the automotive industry, and we use tessellated data (like STL; in my case, the JT format). I have a part that needs to be welded, and I have the coordinates of the weld point. To ensure that the weld point is located correctly, I want to check whether the weld point collides with the part. If yes, the part can be welded; otherwise the weld point would be in the air and couldn't be welded. Therefore I want to calculate the distance between the part (which is basically a set of triangles or polygons in the mentioned format) and the point. If the distance to one of the triangles is less than the given radius of the weld point, then there is a collision, and thus the weld point is located correctly and can be welded.
A how-to, pseudo-code, or anything else useful would be much appreciated. I'm coding in C++ using the JTOpen toolkit. Please note that the point does not necessarily have to lie within the triangle. Maybe an example will help you and me understand the problem/answers (no collision in the following example):
Let v1, v2, v3 be the vertices of a triangle and (px, py, pz) the coordinates of the weld point (radius 1.8). I also get normals (n1, n2, n3) at every vertex, but I don't know what to do with them...
v1 = (-273.439,  -787.775,  854.273)
v2 = (-274.247,  -788.085,  855.244)
v3 = (-272.077,  -787.864,  855.377)
p  = ( 140.99,   -787.78,   458.93)
n1 = (-0.113447,  0.97007,  0.214693)
n2 = (-0.113423,  0.970069, 0.214712)
n3 = (-0.110158,  0.969844, 0.217413)
Thank you in advance!
The locus of points at a given distance from a triangle is a complex surface made of:
two triangles parallel to the original one, at the given distance;
three half-cylinders corresponding to points at equal distance from the edges;
the sphere sections corresponding to points at equal distance from the vertices.
If you look at the triangle face-on, you will observe that these surfaces are split by:
the three triangle sides,
the six normals to the sides at the vertices.
Hence, to find the distance from a given point, you project it orthogonally onto the plane of the triangle and locate it among the seven regions delimited by these half-lines and segments. Using an appropriate spatial rotation, this classification can be done in 2D. Then, knowing the region, you use either the distance to the plane, to an edge, or to a vertex.
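An equivalent way to implement that region classification without an explicit 2D rotation is the closest-point-on-triangle routine from Ericson's "Real-Time Collision Detection". A sketch in Python/NumPy (the question mentions C++, but the structure carries over directly; all inputs are numpy arrays):

import numpy as np

def closest_point_on_triangle(p, a, b, c):
    # Classify p against the triangle's Voronoi regions (3 vertices,
    # 3 edges, 1 face) and return the closest point in that region.
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = np.dot(ab, ap), np.dot(ac, ap)
    if d1 <= 0 and d2 <= 0:
        return a                                   # vertex region a
    bp = p - b
    d3, d4 = np.dot(ab, bp), np.dot(ac, bp)
    if d3 >= 0 and d4 <= d3:
        return b                                   # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab           # edge region ab
    cp = p - c
    d5, d6 = np.dot(ab, cp), np.dot(ac, cp)
    if d6 >= 0 and d5 <= d6:
        return c                                   # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac           # edge region ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and d4 - d3 >= 0 and d5 - d6 >= 0:
        w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return b + w * (c - b)                     # edge region bc
    denom = va + vb + vc                           # face region
    return a + ab * (vb / denom) + ac * (vc / denom)

def weldpoint_hits(p, a, b, c, radius=1.8):
    # Collision if the point-to-triangle distance is below the weld radius.
    return np.linalg.norm(p - closest_point_on_triangle(p, a, b, c)) < radius

With the example data above, the distance comes out far larger than 1.8, so there is no collision, consistent with the question.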
Note that in the case of a tessellation, several triangles have to be considered. If there are many of them, acceleration structures will be needed. This is a broad and somewhat technical topic.
I have to use a proprietary graphics engine for drawing a line. I can rotate the whole drawing around its origin point (P1). What I want is to rotate it around its center point (M), so that it looks like L_correct instead of L_wrong.
I think it should be possible to correct this by moving the drawing from P1 to P2, but I cannot figure out what formula could be used to determine the distance. It must probably involve the angle, width, and height...
So basically my question is: is there a function to determine x2 and y2 based on my available data?
Let's assume you have a primitive method that rotates a drawing by any given angle phi. What you want is to use that primitive to rotate a drawing D around a point M instead. Here is a sketch of how to proceed.
1. Translate your drawing by -M, i.e., apply the transformation T(P) = P - M to all points P in your drawing. Let T(D) be the translation of D.
2. Use the primitive to rotate T(D) by the desired angle phi. Let R(T(D)) be the result.
3. Translate the previous result by M to get the rotated drawing. In other words, apply the transformation T'(P) = P + M.
Note that in step 1, M is mapped to the origin 0, where the rotation primitive is known to work. After the rotation in step 2, the opposite translation in step 3 puts the drawing back in its original location, as this time 0 is mapped back to M.
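A minimal sketch of those three steps (Python, 2D points; rotate_about_origin stands in for the engine's primitive, which is an assumption here):

import math

def rotate_about_origin(points, phi):
    # Stand-in for the engine primitive: rotate 2D points about (0, 0).
    c, s = math.cos(phi), math.sin(phi)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def rotate_about(points, phi, M):
    mx, my = M
    shifted = [(x - mx, y - my) for x, y in points]   # step 1: T(P) = P - M
    rotated = rotate_about_origin(shifted, phi)       # step 2: rotate at the origin
    return [(x + mx, y + my) for x, y in rotated]     # step 3: T'(P) = P + M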
Say my images are simple shapes: sets of lines, dots, curves, and simple objects.
How do I calculate the distance between images, where length is important but overall scale is not, the location of a line/curve is important, angles are important, etc.?
For example, see the attached image:
My comparison object is the cube at the top left; the scores are fictitious, just for this example.
The distance to the cylinder is 80 (it has two matching lines, but the top geometry is different).
The bottom-left cube scores 100, since its lines match exactly, only at a different scale.
The bottom-right rectangle scores 90, since its top lines match exactly but the side lines are at a different scale.
I am looking for an algorithm name or a general approach that will help me start thinking towards a solution...
Thank you for your help.
Here is something to get you started. When jumping into new problems, I don't see much value in trying a lot of complex steps just because they are available somewhere. So my focus is on relatively simple things that will fail in more varied situations, but hopefully you will see their value and get some sense of the problem.
The approach is fully based on corner detection; two typical methods are the Harris detector and the one by Shi and Tomasi described in the paper "Good Features to Track", 1994. I will use the second one, simply because there is a ready implementation in OpenCV, newer Matlab releases, and possibly many other places. The implementations in these packages also allow for easier parameter adjustment regarding corner quality and minimum distance between corners.
So, supposing you can detect all corner points correctly, how do you measure how close one shape is to another based on these points? The images have arbitrary sizes, so my idea is to normalize the point coordinates to the range [0, 1]. This solves the scaling issue, which is desired according to the original description. Now we have to compare point sets in the range [0, 1]. Here we go for the simplest thing: consider one point p from shape a; what is the closest point in shape b? We assume it is the one with the minimum absolute difference between this point p and any point in b. If we sum all these values, we get a score between shapes. The lower the score, the more similar the shapes (according to this approach).
Here are some shapes I drew:
Here are the detected corners:
As you can clearly see in this last set of images, the method will easily confuse a rectangle/square with a cylinder. To handle that, you will need to combine the approach with other descriptors. A simple one you might consider initially is the ratio between the shape's area and its bounding-box area (which would give 1 for a rectangle and a lower value for a cylinder).
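A quick way to get that ratio with OpenCV, assuming a binary image of the shape (the contour-extraction details are my assumption, and the exact findContours return signature varies between OpenCV versions):

import cv2 as cv

def area_to_bbox_ratio(binary_img):
    # Ratio of the largest contour's area to its bounding-box area:
    # close to 1 for a rectangle, lower for a cylinder's rounded top.
    contours, _ = cv.findContours(binary_img, cv.RETR_EXTERNAL,
                                  cv.CHAIN_APPROX_SIMPLE)
    cnt = max(contours, key=cv.contourArea)
    x, y, w, h = cv.boundingRect(cnt)
    return cv.contourArea(cnt) / float(w * h)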
With the method described above, here are the measurements between the first and second shapes, first and third shapes, ..., respectively: 0.02358485, 0.41350339, 0.30128458, 0.4980852, 0.18031262. The second cube is a resized version of the first one, and as you can see, they are very similar by this metric. The last shape is a resized version of the first cube without keeping the aspect ratio, and the metric gives a much higher difference.
If you want to play with the code that performs this, here it is (Python; depends on OpenCV and numpy):
import sys
import cv2 as cv
import numpy

# Load each image given on the command line, in color and grayscale.
inp = []
for fname in sys.argv[1:]:
    img_color = cv.imread(fname)
    img = cv.cvtColor(img_color, cv.COLOR_BGR2GRAY)  # imread returns BGR
    inp.append((img_color, img))

ptsets = []
# Corner detection parameters.
params = (
    200,   # max number of corners
    0.01,  # minimum quality level of corners
    10,    # minimum distance between corners
)
# Params for visual circle markers.
circle_radii = 3
circle_color = (255, 0, 0)

for i, (img_color, img) in enumerate(inp):
    cornerMap = cv.goodFeaturesToTrack(img, *params)
    corner = numpy.array([c[0] for c in cornerMap])
    for c in corner:
        cv.circle(img_color, tuple(int(v) for v in c),
                  circle_radii, circle_color, -1)
    # Just to visually check for correct corners.
    cv.imwrite('temp_%d.png' % i, img_color)
    # Convert corner coordinates to [0, 1].
    cornerUnity = (corner - corner.min()) / (corner.max() - corner.min())
    # You might want to use other descriptors here. XXX
    ptsets.append(cornerUnity)

def compare_ptsets(p):
    # Score every point set against the first one: for each corner of the
    # base shape, add the smallest absolute difference found in the other set.
    res = numpy.zeros(len(p))
    base = p[0]
    for i in range(1, len(p)):
        res[i] = sum(numpy.abs(p[i] - value).min() for value in base)
    return res

res = compare_ptsets(ptsets)
print(res)
The process to follow depends on the depth of features you are going to consider and on the accuracy required.
If you want something more accurate, search for technical papers like this one, which can give a concrete and well-proven approach or algorithm.
EDIT:
The idea behind the Waltz algorithm (a constraint-based method from AI) can be tweaked. This is just my thought: interpret the original image and generate some constraints from it. For each candidate, find out how many constraints it satisfies. The one that satisfies the most constraints is the most similar to the original image.
Try calculating the mass center of each figure. Treat each point of the figure as a particle with mass 1.
Then calculate the distance as sqrt((x1 - x2)^2 + (y1 - y2)^2), where (xi, yi) is the mass-center coordinate of figure i.
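As a sketch (Python/NumPy; the point representation is my assumption):

import numpy as np

def mass_center(points):
    # Each point is a particle of mass 1, so the mass center is the mean.
    return np.asarray(points, dtype=float).mean(axis=0)

def figure_distance(points_a, points_b):
    # Euclidean distance between the two mass centers.
    return float(np.linalg.norm(mass_center(points_a) - mass_center(points_b)))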
So, first of all, I have this image (and of course I have all the point coordinates in 2D, so I can regenerate the lines and check where they cross each other):
(source: narod.ru)
But I have another image of the same lines (I know they are the same) and new coordinates of my points, as in this image:
(source: narod.ru)
So now, having the point coordinates in the first image, how can I determine the plane rotation and Z depth in the second image (assuming the first one's center was at point (0, 0, 0) with no rotation)?
What you're trying to find is called a projection matrix. Determining a precise inverse projection usually requires firmly established coordinates in both the source and destination vectors, which the images above aren't going to give you. You can approximate using pixel positions, however.
This thread will give you a basic walkthrough of the techniques you need to use.
Let me say this up front: this problem is hard. There is a reason Dan Story's linked question has not been answered. Let me provide an explanation for people who want to take a stab at it. I hope I'm wrong about how hard it is, though.
I will assume that the 2D screen coordinates and the projection/perspective matrix are known to you. You need to know at least this much (if you don't know the projection matrix, you are essentially using a different camera to look at the world). Let's call each pair of 2D screen coordinates (a_i, b_i), and I will assume the projection matrix has the form
P = [ px  0   0   0  ]
    [ 0   py  0   0  ]
    [ 0   0   pz  pw ]
    [ 0   0   s   0  ],   s = +/-1
Almost any reasonable projection has this form. Working through the rendering pipeline, you find that
a_i = px x_i / (s z_i)
b_i = py y_i / (s z_i)
where (x_i, y_i, z_i) are the original 3D coordinates of the point.
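As a sanity check of those formulas, a small forward-projection sketch (Python/NumPy; the matrix entries follow the assumed form above):

import numpy as np

def project(points, px, py, pz, pw, s=1.0):
    # Multiply by the homogeneous projection matrix, then divide by w = s*z.
    P = np.array([[px, 0,  0,  0],
                  [0,  py, 0,  0],
                  [0,  0,  pz, pw],
                  [0,  0,  s,  0]], dtype=float)
    out = []
    for x, y, z in points:
        X = P @ np.array([x, y, z, 1.0])
        out.append((X[0] / X[3], X[1] / X[3]))  # a_i = px*x/(s*z), b_i = py*y/(s*z)
    return out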
Now, let's assume you know your shape in a set of canonical coordinates (whatever you want), so that the vertices are (x0_i, y0_i, z0_i). We can arrange these as the columns of a matrix C. The actual coordinates of the shape are a rigid transformation of these coordinates. Let's similarly organize the actual coordinates as the columns of a matrix V. Then they are related by
V = R C + v 1^T (*)
where 1^T is a row vector of ones with the right length, R is an orthogonal rotation matrix of the rigid transformation, and v is the offset vector of the transformation.
Now, you have an expression for each column of V from above: the first column is { s a_1 z_1 / px, s b_1 z_1 / py, z_1 } and so on.
You must solve the set of equations (*) for the scalars z_i and for the rigid transformation defined by R and v.
Difficulties
The equations are nonlinear in the unknowns, involving quotients of R and the z_i.
We have assumed up to now that you know which 2D coordinates correspond to which vertices of the original shape (if your shape is a square, this is slightly less of a problem).
We have assumed that a solution exists at all; if there are errors in the 2D data, it's hard to say how well equation (*) will be satisfied; the transformation may come out nonrigid or nonlinear.
It's called (digital) photogrammetry. Start Googling.
If you are really interested in this kind of problems (which are common in computer vision, tracking objects with cameras etc.), the following book contains a detailed treatment:
Ma, Soatto, Kosecka, Sastry, An Invitation to 3-D Vision, Springer 2004.
Beware: this is an advanced engineering text, and uses many techniques which are mathematical in nature. Skim through the sample chapters featured on the book's web page to get an idea.