Update position of Visual3D element when position of another Visual3D element changes - helix-3d-toolkit

I am wondering how I could achieve this:
I have a ModelVisual3D with two elements. The first is a SphereVisual3D and the second is a PipeVisual3D. When I change the coordinates of the sphere, I want the pipe's Point1 property to change to the same value as the sphere's Center.
Thanks in advance
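One way to do this (a minimal sketch, assuming Helix Toolkit's SphereVisual3D.Center and PipeVisual3D.Point1 dependency properties; the helper name is purely illustrative) is to subscribe to changes of the sphere's Center from code-behind and copy the value across:

    using System.ComponentModel;
    using HelixToolkit.Wpf;

    // Keeps pipe.Point1 in sync with sphere.Center whenever the Center
    // dependency property is assigned a new value.
    void LinkPipeToSphere(SphereVisual3D sphere, PipeVisual3D pipe)
    {
        var dpd = DependencyPropertyDescriptor.FromProperty(
            SphereVisual3D.CenterProperty, typeof(SphereVisual3D));

        dpd.AddValueChanged(sphere, (sender, args) => pipe.Point1 = sphere.Center);

        // Initialise once so the pipe starts at the sphere's current centre.
        pipe.Point1 = sphere.Center;
    }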

Related

Tracking drag around multiple pivot points with touch input

Suppose that in 2D I have a polygon that consists of square cells on a grid, much like a tetromino but with an arbitrary number of cells forming it. The shape is orthogonally continuous, in the sense that every cell connects to at least one other cell in an orthogonal direction. Here are some examples:
In the project I am working on, these blocks can be moved with the arrow keys in multiple directions around multiple pivot points. For example, the shape below can rotate in any of four directions by pressing, in the order of the rotations shown, the left, down, right and up arrow keys. The shape rotates around the pivot points shown in red.
I would like to add "drag-to-move" support. In other words, you press your finger down on any point on the shape, and as you drag and rotate your finger around a pivot point, the shape rotates with it accordingly. My problem is that I do not know how to programmatically find the pivot point to rotate around, or the direction of rotation, from the path of the player's finger alone.
In code, I store a list of pivot points as Vector3s. These Vector3s store the following information:
X-component: x-pos of pivot
Y-component: y-pos of pivot
Z-component: -1 or 1, direction of rotation (1 for clockwise, -1 for anti-clockwise)
For clarity, the Z-component determines which direction the shape can be rotated about a pivot if it is to be rotated. Therefore, the above GIF might have the following 4 entries:
Entry #1: (0, 0, -1)
Entry #2: (0, 0, 1)
Entry #3: (2, 1, -1)
Entry #4: (2, 1, 1)
Notice 2 entries for each x-y position in this case, as the shape can be rotated in both directions.
I am familiar with Unity's touch system, although it is not much help to me here. I plan to use Transform.RotateAround(Vector3 point, Vector3 axis, float angle) to rotate the shape around a pivot and axis incrementally every frame, but I don't know how to calculate, from the player's touch input and position, which pivot to rotate around and by what angle.
I have seen posts like this and this (using mouse input), and while they would be helpful if I were dealing with only one pivot, I am dealing with potentially two and even three pivot points. Were only one pivot involved, I might check the angle difference every frame (the angle between the pivot, the current touch position, and the last frame's position) and use Transform.RotateAround. Since there are multiple pivots, I first need to determine which pivot to rotate around, and then calculate the angle deltas. However, I don't know the best way to go about finding which pivot to rotate around. Any ideas?
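For what it's worth, a rough sketch of that per-pivot angle-delta idea might look like the following; it assumes a 2D setup with an orthographic camera, pivots stored in world coordinates as described above, and the class and field names are only placeholders:

    using System.Collections.Generic;
    using UnityEngine;

    public class PivotDragRotator : MonoBehaviour
    {
        // Pivot entries as described above:
        // x, y = pivot position, z = allowed direction (+1 clockwise, -1 anti-clockwise).
        public List<Vector3> pivots = new List<Vector3>();

        Vector2 lastTouchWorld;

        void Update()
        {
            if (Input.touchCount == 0) return;

            Touch touch = Input.GetTouch(0);
            Vector2 touchWorld = Camera.main.ScreenToWorldPoint(touch.position);

            if (touch.phase == TouchPhase.Moved)
            {
                // Measure how far the finger swung around each pivot this frame
                // and keep the pivot whose allowed direction matches the swing best.
                bool found = false;
                Vector3 bestPivot = Vector3.zero;
                float bestDelta = 0f;

                foreach (Vector3 p in pivots)
                {
                    Vector2 pivotPos = new Vector2(p.x, p.y);
                    // SignedAngle is positive for anti-clockwise motion.
                    float delta = Vector2.SignedAngle(lastTouchWorld - pivotPos,
                                                      touchWorld - pivotPos);
                    bool directionAllowed = (delta < 0f && p.z > 0f) || (delta > 0f && p.z < 0f);
                    if (directionAllowed && Mathf.Abs(delta) > Mathf.Abs(bestDelta))
                    {
                        found = true;
                        bestPivot = p;
                        bestDelta = delta;
                    }
                }

                if (found)
                {
                    transform.RotateAround(new Vector3(bestPivot.x, bestPivot.y, 0f),
                                           Vector3.forward, bestDelta);
                }
            }

            lastTouchWorld = touchWorld;
        }
    }

Snapping the shape back to the nearest multiple of 90 degrees when the finger lifts would keep it aligned to the grid afterwards.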
Apologies for the very long post, thanks for any and all help you can provide! Ask me to clarify anything.
rbjacob

Snapping two objects at runtime at specific points on the object

I have many game objects with line renderers attached to them. They are roughly in the shape of rectangles. How do I go about snapping these rectangles together at their edges when they are dragged and brought close to each other?
I have referred to this question, but it doesn't explain how to snap at specific positions.
Here is a sample image of the objects I want to latch.
There are many ways to do this. One is to calculate the position of the second shape and, when it gets close to the first along the x or y axis, set its start position to the first shape's end position. The second way is to add 2D colliders near the first object and, when one of them is triggered, move the second object into position. I would strongly recommend the first way.
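A sketch of that first approach is below. The component name, the snapDistance value, and the idea of reading edge points straight from the LineRenderer are all illustrative, and it assumes the LineRenderer stores local-space points:

    using UnityEngine;

    public class EdgeSnapper : MonoBehaviour
    {
        public Transform otherRect;       // the rectangle to snap against
        public float snapDistance = 0.2f;

        // Call this from your drag-handling code when the drag ends (or every frame).
        public void TrySnap()
        {
            foreach (Vector3 myPoint in GetEdgePoints(transform))
            {
                foreach (Vector3 otherPoint in GetEdgePoints(otherRect))
                {
                    if (Vector3.Distance(myPoint, otherPoint) <= snapDistance)
                    {
                        // Shift the whole dragged object so the two edge points coincide.
                        transform.position += otherPoint - myPoint;
                        return;
                    }
                }
            }
        }

        // Reads the LineRenderer's points and converts them to world space
        // (assumes useWorldSpace is false on the LineRenderer).
        static Vector3[] GetEdgePoints(Transform rect)
        {
            var lr = rect.GetComponent<LineRenderer>();
            var points = new Vector3[lr.positionCount];
            lr.GetPositions(points);
            for (int i = 0; i < points.Length; i++)
                points[i] = rect.TransformPoint(points[i]);
            return points;
        }
    }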

how to convert polyline to bitmap in matlab

I have a polyline, given as 2 vectors X, Y, of coordinates, both vectors of the same length, and X(i) corresponds to Y(i).
I need an easy way to create a boolean matrix that has 1 where the polyline passes and 0 where it doesn't.
Is there a nice way of doing this?
I thought about poly2mask, but the doc says it closes the polygon, which is not what I am looking for.
Thanks
You can extend the line to the left and right edges of the graph: just copy the row numbers and change the columns to the first and last column, then add the top-left and top-right corners to the coordinate array. Use poly2mask to draw that huge polygon, then remove everything except the last line of the polygon, and finally trim the left and right ends.
You can also use line to draw lines.
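If poly2mask's closing behaviour is the sticking point, another option is to rasterize each segment of the polyline directly into the boolean matrix. A rough sketch of that idea (written in C# here, but the same loop translates almost line-for-line to MATLAB; it assumes the coordinates are already in matrix row/column units):

    // Marks true every cell that any segment of the polyline passes through.
    // Simple parametric stepping; a proper Bresenham walk would avoid the
    // floating-point rounding used here.
    static bool[,] PolylineToMask(double[] x, double[] y, int rows, int cols)
    {
        var mask = new bool[rows, cols];
        for (int i = 0; i < x.Length - 1; i++)
        {
            double dx = x[i + 1] - x[i];
            double dy = y[i + 1] - y[i];
            int steps = (int)System.Math.Ceiling(
                System.Math.Max(System.Math.Abs(dx), System.Math.Abs(dy))) + 1;
            for (int s = 0; s < steps; s++)
            {
                double t = steps > 1 ? (double)s / (steps - 1) : 0.0;
                int r = (int)System.Math.Round(y[i] + t * dy);
                int c = (int)System.Math.Round(x[i] + t * dx);
                if (r >= 0 && r < rows && c >= 0 && c < cols)
                    mask[r, c] = true;
            }
        }
        return mask;
    }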

get exact coordinates of animating element with d3.js

In this jsbin, I am trying to position the horizontal line to the smaller circle in the animating group but the jsbin is using getBoundingClientRect and some arbitrary values to get there.
I wanted to use this value:
d3.select('.circle-guide').attr('cx')
But it always returns the same value and does not take the transform into account.
How can I get the coordinates of an animating element?

OpenGL ES/real time/position of any vertex that is displayed?

I'm currently dealing with OpenGL ES (2, iOS 6) and I have a question:
i. There is a mesh that has to be drawn. Moreover,
ii. I can ask for a rotation/translation so that the point of view changes.
So,
how can I know (in real time) the position of any vertex that is displayed?
Thank you in advance.
jgapc
It's not entirely clear what you are after, but if you want to know where your object is after doing a bunch of rotations and translations, then one very easy option, if you perform these changes in your program code instead of in the shader, is simply to take the entire last row or column of your transformation matrix (depending on whether you are using row- or column-major matrices), which will be the final translation of your object's center as a coordinate vector.
This last row or column is the same thing as multiplying your final transformation matrix by your object's local coordinate center vector, which is (0,0,0,1).
If you want to know where an object's vertex is, rather than the object's center, then multiply that vertex in local coordinate space by the final transformation matrix, and you will get the new coordinate where that vertex is positioned.
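Not OpenGL ES code as such, but the arithmetic being described can be sketched like this (using System.Numerics, where Matrix4x4 is row-major, so the translation sits in the last row; in a column-major convention it would be the last column):

    using System;
    using System.Numerics;

    class TransformDemo
    {
        static void Main()
        {
            // A model matrix built from a rotation followed by a translation.
            Matrix4x4 model = Matrix4x4.CreateRotationY(0.5f) *
                              Matrix4x4.CreateTranslation(2f, 0f, -5f);

            // The translation row is the transformed position of the object's
            // local origin, i.e. (0, 0, 0, 1) multiplied by the matrix.
            Vector3 center = model.Translation;

            // Any other local-space vertex: multiply it by the same matrix.
            Vector3 localVertex = new Vector3(1f, 1f, 0f);
            Vector3 worldVertex = Vector3.Transform(localVertex, model);

            Console.WriteLine($"center = {center}, vertex = {worldVertex}");
        }
    }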
There are two things I'd like to point out:
Back-face culling discards triangles, not vertices.
Triangles are also clipped so that they're within the viewing frustum.
I'm curious as to why you care about what is not displayed.
