Where to add a surface to a Grasshopper definition to create the result shown? - grasshopper

I am trying to figure out where to add a surface input in a Grasshopper definition so that the resulting rectangles are mapped onto the surface provided.
First picture is my grasshopper definition with the unconnected surface I set.
Second picture is the current output.
Third picture is what I am trying to create.
Fourth picture is the surface input.
The colors do not matter; it is just the placement of the rectangles that I am trying to match.

You'll likely have to map or orient that unit object onto the target rectangular surfaces; there is no single component that does this. Please share the script so others can help you better.

Related

Building custom Shapes in Konva

I've been asked to build something like this so that customers can draw basic shapes of kitchen tops, similar to the image below, but with dimensions.
It looks like Konva has support for basic shapes like rectangles and circles, and it also includes a transformer that allows resizing. However, if I want to build a custom shape like the one in green with individual sizing, i.e. resizing each individual line, I think I am going to have to build something myself.
I was hoping someone could point me in the right direction. I have seen an example where someone used the Line class, which takes a series of points, and then set the closed attribute, which fills in the shape. Obviously I would need to extend this to allow the custom resizing. However, I'm not sure this is the correct path to head down.
Any suggestions?
How about using rectangles with an option to snap them together? It should be fairly simple to do the edge detection and snapping, and then show the result as a Konva.Line around the perimeter.
Then you can show all the control handles for the rectangles except those on the sides where another Rect has joined.
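A minimal sketch of the snapping idea, assuming Konva v8+; the SNAP tolerance and the makeRect/snapToNeighbours helpers are illustrative names, not Konva API:

```ts
import Konva from 'konva';

const stage = new Konva.Stage({ container: 'container', width: 800, height: 600 });
const layer = new Konva.Layer();
stage.add(layer);

const SNAP = 10; // snap when edges come within 10 px of each other

function makeRect(x: number, y: number, width: number, height: number): Konva.Rect {
  const rect = new Konva.Rect({
    x, y, width, height,
    fill: '#b5e8b5', stroke: 'green', draggable: true,
  });
  rect.on('dragmove', () => snapToNeighbours(rect));
  layer.add(rect);
  return rect;
}

function overlapsVertically(a: Konva.Rect, b: Konva.Rect): boolean {
  return a.y() < b.y() + b.height() && b.y() < a.y() + a.height();
}

function snapToNeighbours(moving: Konva.Rect): void {
  for (const other of layer.find<Konva.Rect>('Rect')) {
    if (other === moving || !overlapsVertically(moving, other)) continue;
    // snap the left edge of `moving` to the right edge of `other`
    if (Math.abs(moving.x() - (other.x() + other.width())) < SNAP) {
      moving.x(other.x() + other.width());
    }
    // snap the right edge of `moving` to the left edge of `other`
    if (Math.abs(moving.x() + moving.width() - other.x()) < SNAP) {
      moving.x(other.x() - moving.width());
    }
    // ...the top/bottom edges work the same way with y()/height()
  }
}

makeRect(100, 100, 200, 120);
makeRect(340, 100, 150, 200);
```

Once rectangles are joined, you can walk the outline of their union and feed the points to a closed Konva.Line (points: [...], closed: true) to draw the perimeter.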

How to select an object and move the camera to another position?

I have a problem: I don't know how to select an object inside the scene and move the camera to it.
I found a sample project, and I want to build an example like it:
Sample project
Thank you so much!
You basically solve this problem in two steps:
First, you have to make the 3D objects selectable, which can be done via raycasting. There are many official examples that demonstrate 3D interaction based on raycasting, for example:
https://threejs.org/examples/webgl_interactive_cubes
Once you know that a certain 3D object was clicked, you animate the camera from its current position to a defined target position. The possible target positions can be defined beforehand, or you can compute them on the fly in some way, for example from the object's bounding volume and the current camera position. The actual animation can be done in many ways; one approach is to use a tweening engine like tween.js. Check out the following example to see how it is used together with three.js:
https://threejs.org/examples/css3d_periodictable
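A minimal sketch of both steps together, assuming an existing scene, camera, and renderer (declared here rather than created) and the @tweenjs/tween.js package; the fixed offset used as the fly-to target is just an illustration:

```ts
import * as THREE from 'three';
import * as TWEEN from '@tweenjs/tween.js';

declare const scene: THREE.Scene;
declare const camera: THREE.PerspectiveCamera;
declare const renderer: THREE.WebGLRenderer;

const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

renderer.domElement.addEventListener('pointerdown', (event) => {
  // convert the click into normalized device coordinates (-1..+1)
  pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
  pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;

  // step 1: find the clicked object via raycasting
  raycaster.setFromCamera(pointer, camera);
  const hits = raycaster.intersectObjects(scene.children, true);
  if (hits.length === 0) return;

  // step 2: tween the camera toward a point just in front of the hit
  const focus = hits[0].object.position;
  const target = focus.clone().add(new THREE.Vector3(0, 0, 5));
  new TWEEN.Tween(camera.position)
    .to({ x: target.x, y: target.y, z: target.z }, 1000)
    .easing(TWEEN.Easing.Quadratic.Out)
    .onUpdate(() => camera.lookAt(focus))
    .start();
});

function animate(time: number) {
  requestAnimationFrame(animate);
  TWEEN.update(time); // advance any running tweens
  renderer.render(scene, camera);
}
requestAnimationFrame(animate);
```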

Finding a point on a 3D model based on an image

I'm looking for a starting point and for how to do this right. I have a 3D model of an object, and on this object there are special points. I also have a real photo of the object with a light source shining from one of those points. What I want to achieve is to compare the photo and the model somehow so that, based on the light source, I can determine which specific point it is.
Which technology/library will allow me to achieve the desired result, and where should I start looking?
Edit:
To be more accurate: I don't have any data yet, but the camera will be placed in a fixed position, as will the metal part. The part will be rotated around a single axis only, and it has a different silhouette at different angles, so it should (I think) be easier to match against the 3D model.

Find my camera's 3D position and orientation relative to a 2D marker

I am currently building an Augmented Reality application and am stuck on a problem that seems quite easy but is very hard for me... The problem is as follows:
My device's camera is calibrated and detects a 2D marker (such as a QR code). I know the focal length, the sensor's position, the distance between my camera and the center of the marker, the real size of the marker, and the coordinates of the marker's four corners and of its center on the 2D image I get from the camera. See the following image:
On the image, we know the distances a, b, c, d and the coordinates of the red dots.
What I need to know is the position and orientation of the camera relative to the marker (as represented in the image, the origin is the center of the marker).
Is there an easy and fast way to do this? I tried a method I came up with myself (using Al-Kashi's formulas, i.e. the law of cosines), but it ended up with too much error :(. Could someone point out a way to get me out of this?
You can find some example code for the EPnP algorithm on this webpage. The code consists of one header file and one source file, plus one file with a usage example, so it shouldn't be too hard to include in your project.
Note that this code is released for research/evaluation purposes only, as mentioned on this page.
EDIT:
I just realized that this code needs OpenCV to work. By the way, although it would add a pretty big dependency to your project, the current version of OpenCV has a built-in function called solvePnP, which does what you want.
You can compute the homography between the image points and the corresponding world points. From the homography you can then compute the rotation and translation mapping a point from the marker's coordinate system into the camera's coordinate system. The math is described in Zhang's paper on camera calibration; the key decomposition is sketched below.
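For reference, a sketch of the decomposition from Zhang's paper, with K the intrinsic matrix and H = [h1 h2 h3] the estimated homography (the planar marker is assumed to lie at z = 0):

```latex
\lambda = \frac{1}{\lVert K^{-1} h_1 \rVert}, \qquad
r_1 = \lambda K^{-1} h_1, \qquad
r_2 = \lambda K^{-1} h_2, \qquad
r_3 = r_1 \times r_2, \qquad
t = \lambda K^{-1} h_3
```

The rotation R = [r1 r2 r3] recovered this way is generally not exactly orthonormal because of noise, so it is usually re-orthogonalized (e.g. via SVD) before use.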
Here's an example in MATLAB using the Computer Vision System Toolbox, which does most of what you need. It is using the extrinsics function, which computes a 3D rotation and a translation from matching image and world points. The points need not come from a checkerboard.

How to determine the topmost object in a 2D projection of 3D objects?

I have a surface onto which a set of 3D objects is drawn. The task is to determine which object sits at given coordinates on the surface.
For example: some objects are drawn in a desktop application, and I need to determine which object the user clicked on.
Could you please advise how such a task is usually solved? Do I need to remember the topmost object for each pixel? I don't think that is the best approach.
Any thoughts are welcome!
Thanks!
The name for this task is picking (which ought to help you Google for more help on it). There are two main approaches:
Ray-casting: find the line that starts at the camera position and passes through the surface point you are interested in. (The line "under the mouse", or "under your finger" for a touch screen.) Depending on which 3D system you are using, there may be an API call to generate this line: for example Camera.ViewportPointToRay in Unity3D, or you may have to generate it yourself by inverting the camera transform. Find all the points of intersection between this line and the objects in your scene. Which of these points is closest to the near plane of the camera? You can use space partitioning to speed this up.
Rendering: do an extra render pass in which, instead of writing textures to the frame buffer, you record which objects were drawn. You don't do the render pass for the whole screen; you just do it for the area (e.g. the pixel) you are interested in. (This is GL_SELECT mode in legacy OpenGL: see the Picking Tutorial for details. In modern pipelines the same idea is usually implemented as colour-based GPU picking, sketched below.)
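A minimal sketch of the colour-picking variant with three.js, assuming a parallel pickingScene has already been populated with flat-coloured clones whose MeshBasicMaterial colour encodes each object's id (that setup is elided here):

```ts
import * as THREE from 'three';

declare const renderer: THREE.WebGLRenderer;
declare const camera: THREE.PerspectiveCamera;
declare const pickingScene: THREE.Scene; // clones coloured by object id

const pickingTarget = new THREE.WebGLRenderTarget(1, 1);
const pixel = new Uint8Array(4);

function pick(x: number, y: number): number {
  // render only the 1x1 pixel region under the cursor
  camera.setViewOffset(window.innerWidth, window.innerHeight, x, y, 1, 1);
  renderer.setRenderTarget(pickingTarget);
  renderer.render(pickingScene, camera);
  camera.clearViewOffset();
  renderer.setRenderTarget(null);

  // read the pixel back and decode the id baked into its colour
  renderer.readRenderTargetPixels(pickingTarget, 0, 0, 1, 1, pixel);
  return (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
}
```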
If you've described the surface somehow in 3D space, then the ray defined by your point of observation and the 3D point corresponding to where you clicked should intersect one or more objects in your world, if indeed you clicked on one of them.
Given the equations for the surfaces of the objects, you can determine where this ray intersects the objects, if at all, since you also know the equation for the ray in the same coordinate system.
The object that has the closest intersection point to your point of observation (assuming you're looking at the objects from above) is the winner.
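A minimal sketch of that closest-hit logic, using spheres as the analytically described surfaces; the types and names here are illustrative, not from any particular library:

```ts
type Vec3 = { x: number; y: number; z: number };

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

interface Sphere { center: Vec3; radius: number; name: string; }

// Distance along the ray (origin + t * dir, dir normalized) to the
// nearest forward intersection with the sphere, or null on a miss.
function hitDistance(origin: Vec3, dir: Vec3, s: Sphere): number | null {
  const oc = sub(origin, s.center);
  const b = 2 * dot(oc, dir);
  const c = dot(oc, oc) - s.radius * s.radius;
  const disc = b * b - 4 * c; // quadratic discriminant (a = 1)
  if (disc < 0) return null;
  const near = (-b - Math.sqrt(disc)) / 2;
  const far = (-b + Math.sqrt(disc)) / 2;
  const t = near >= 0 ? near : far; // prefer the nearer hit in front of the eye
  return t >= 0 ? t : null;
}

// The object with the smallest positive t is the one that was clicked.
function pickClosest(origin: Vec3, dir: Vec3, objects: Sphere[]): Sphere | null {
  let best: Sphere | null = null;
  let bestT = Infinity;
  for (const s of objects) {
    const t = hitDistance(origin, dir, s);
    if (t !== null && t < bestT) { bestT = t; best = s; }
  }
  return best;
}
```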
