I have a surface onto which a set of 3D objects is drawn. The task is to determine which object lies at given coordinates on the surface.
For example: some objects are drawn in a desktop application, and I need to determine which object the user clicked on.
Could you please advise how such a task is usually solved? Do I need to remember the top-most object for each pixel? I don't think that is the best approach.
Any thoughts are welcome!
Thanks!
The name for this task is picking (which ought to help you Google for more help on it). There are two main approaches:
Ray-casting: find the line that starts at the camera position and passes through the surface point you are interested in. (The line "under the mouse", or "under your finger" for a touch screen.) Depending on which 3D system you are using, there may be an API call to generate this line: for example Camera.ViewportPointToRay in Unity3D, or you may have to generate it yourself by inverting the camera transform. Find all the points of intersection between this line and the objects in your scene; the object whose intersection point is closest to the camera's near plane is the one that was picked. You can use space partitioning to speed this up.
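For example, here's a minimal sketch of the ray-casting approach using three.js (just one concrete 3D system; Unity and others expose the same idea under different names). The helper name and the assumption that the canvas fills the window are mine:

```ts
import * as THREE from 'three';

// Hypothetical helper: returns the closest object under the mouse, or null.
// Assumes the canvas fills the window; adjust the NDC math for an offset canvas.
function pickAt(event: MouseEvent, camera: THREE.Camera, scene: THREE.Scene): THREE.Object3D | null {
  // Convert the mouse position to normalized device coordinates (-1..+1).
  const ndc = new THREE.Vector2(
    (event.clientX / window.innerWidth) * 2 - 1,
    -(event.clientY / window.innerHeight) * 2 + 1
  );

  // Build the line "under the mouse": from the camera through the clicked point.
  const raycaster = new THREE.Raycaster();
  raycaster.setFromCamera(ndc, camera);

  // Intersect with everything in the scene; results come back sorted by
  // distance, so the first hit is the closest object.
  const hits = raycaster.intersectObjects(scene.children, true);
  return hits.length > 0 ? hits[0].object : null;
}
```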
Rendering: do an extra render pass, in which instead of writing textures to the frame buffer, you record which objects were drawn. You don't do the render pass for the whole screen; you just do it for the area (e.g. the pixel) you are interested in. (This is GL_SELECT mode in OpenGL: see the Picking Tutorial for details.)
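Here's a rough sketch of that rendering approach using "GPU picking" in three.js rather than GL_SELECT itself (which is deprecated in modern OpenGL): re-render into a 1×1 render target with every object drawn in a flat colour that encodes its id, then read that single pixel back. The `pickingScene` and the id encoding are assumptions for illustration:

```ts
import * as THREE from 'three';

// Hypothetical sketch: pickingScene is a copy of the scene where every object
// is drawn with a flat colour encoding its id (e.g. via MeshBasicMaterial).
const pickingTarget = new THREE.WebGLRenderTarget(1, 1);
const pixel = new Uint8Array(4);

function pickIdAt(x: number, y: number, renderer: THREE.WebGLRenderer,
                  pickingScene: THREE.Scene, camera: THREE.PerspectiveCamera): number {
  const { width, height } = renderer.domElement;

  // Restrict the camera to the single pixel under the cursor and render
  // the id-coloured scene into the 1x1 target.
  camera.setViewOffset(width, height, x, y, 1, 1);
  renderer.setRenderTarget(pickingTarget);
  renderer.render(pickingScene, camera);
  camera.clearViewOffset();
  renderer.setRenderTarget(null);

  // Read the pixel back and decode the object id (0 = background).
  renderer.readRenderTargetPixels(pickingTarget, 0, 0, 1, 1, pixel);
  return (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
}
```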
If you've described the surface somehow in 3D space, then the ray defined by your point of observation and the 3D point corresponding to where you clicked should intersect one or more objects in your world, if indeed you clicked on one of them.
Given the equations for the surfaces of the objects, you can determine where this ray intersects the objects, if at all, since you also know the equation for the ray in the same coordinate system.
The object whose intersection point is closest to your point of observation (assuming you're looking at the objects from above) is the winner.
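As a concrete illustration of the equation-solving approach, here's a small sketch (spheres only; the types and helpers are illustrative, not from any particular library) that solves the ray/sphere quadratic for each object and keeps the hit with the smallest positive ray parameter, i.e. the intersection closest to the observer:

```ts
type Vec3 = [number, number, number];
interface Sphere { center: Vec3; radius: number; id: string }

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// Ray: origin + t * dir, with dir normalized. Returns the ray parameter t of
// the nearest forward intersection, or null if the ray misses.
function intersectSphere(origin: Vec3, dir: Vec3, s: Sphere): number | null {
  const oc = sub(origin, s.center);
  const b = 2 * dot(oc, dir);
  const c = dot(oc, oc) - s.radius * s.radius;
  const disc = b * b - 4 * c;            // a = 1 because dir is normalized
  if (disc < 0) return null;             // no real roots: the ray misses
  const t = (-b - Math.sqrt(disc)) / 2;  // nearer of the two roots
  return t >= 0 ? t : null;              // ignore hits behind the observer
}

function pickClosest(origin: Vec3, dir: Vec3, spheres: Sphere[]): Sphere | null {
  let best: { t: number; s: Sphere } | null = null;
  for (const s of spheres) {
    const t = intersectSphere(origin, dir, s);
    if (t !== null && (best === null || t < best.t)) best = { t, s };
  }
  return best ? best.s : null;
}
```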
I'm looking for a place to start and how to do this right. I have a 3D model of an object, and on this object there are special points. I also have a real photo of this object, with a light source located at one of those points.
What I want to achieve is to somehow compare this photo and the model so that, based on the light source, I can determine which specific point it is.
Which technology/library will allow me to achieve the desired result, and where should I start looking?
Edit:
To be more accurate: I don't have any data yet, but the camera will be placed in a fixed position, as will the metal part. The part will be rotated around a single axis only, and it has different shapes at different angles, so it should be easier (I think) to match it against the 3D model.
I am setting up a particle system in threejs by adapting its buffer geometry drawcalls example. I want to create a series of points, but I want them to be round.
The documentation for threejs points says it accepts geometry or buffer geometry, but I also noticed there is a circleBufferGeometry. Can I use this?
Or is there another way to make the points round besides using sprites? I'm not sure, but it seems like loading an image for each particle would cause a lot of unnecessary overhead.
So, in short, is there a more performant or simple way to make a particle system of round particles (spheres or discs) in threejs without sprites?
If you want to draw each "point"/"particle" as a geometric circle, you can use THREE.InstancedBufferGeometry or take a look at this
The geometry of a Points object defines where the points exist in 3D space. It does not define the shape of the points. Points are also drawn as quads, so they're always going to be a square, though they don't have to appear that way.
Your first option is to (as you pointed out) load a texture for each point. I don't really see how this would introduce "a lot" of overhead, because the texture would only be loaded once, and would be applied to all points. But, I'm sure you have your reasons.
Your other option is to create your own shader to draw the point as a circle. This method takes the square point and discards any fragments (the pixel-sized pieces the square is rasterized into) that fall outside the circle.
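Here's a minimal sketch of that shader option with a THREE.ShaderMaterial: the fragment shader uses gl_PointCoord to discard everything outside a radius of 0.5, so each square point is drawn as a disc. The point size, colour, and example positions are placeholders:

```ts
import * as THREE from 'three';

// Two example points; replace with your own BufferGeometry of positions.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute([0, 0, 0, 1, 0, 0], 3));

const material = new THREE.ShaderMaterial({
  vertexShader: `
    void main() {
      gl_PointSize = 16.0; // size in pixels; tune or pass as an attribute
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    void main() {
      // gl_PointCoord runs from (0,0) to (1,1) across the point's quad;
      // throw away everything outside a radius of 0.5 to leave a disc.
      if (length(gl_PointCoord - vec2(0.5)) > 0.5) discard;
      gl_FragColor = vec4(1.0);
    }
  `,
});

const points = new THREE.Points(geometry, material);
```

All points still render in a single draw call, and no textures are involved.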
I am curious about the limits of three.js. The following question is asked mainly as a challenge, not because I actually need the specific knowledge/code right away.
Say you have a game/simulation world model around a sphere geometry representing a planet, like the worlds of the game Populous. The resolution of polygons and textures is sufficient to look smooth when the globe fills the view of an ordinary camera. There are animated macroscopic objects on the surface.
The challenge is to project everything from the model to a global map projection on the screen in real time. The choice of projection is yours, but it must be seamless/continuous, and it must be possible for the user to rotate it, placing any point on the planet surface in the center of the screen. (It is not an option to maintain an alternative model of the world only for visualization.)
There are no limits on the number of cameras etc. allowed, but the performance must be expected to be "realtime", say double-digit FPS or more.
I don't expect any proof in the form of a running application (although that would be cool), but some explanation as to how it could be done.
My own initial idea is to place a lot of cameras, in fact one for every pixel in the map projection, around the globe, within a Group object that is attached to some kind of orbit controls (with rotation only), but I expect the number of object culling operations to become a huge performance issue. I am sure there must exist more elegant (and faster) solutions. :-)
Why not just use a spherical camera-model (think a 360° camera) and virtually put it in the center of the sphere? So this camera would (if it were physically possible) be wrapped all around the sphere, looking toward the center from all directions.
This camera could be implemented in shaders (instead of the regular projection-matrix) and would produce an equirectangular image of the planet-surface (or in fact any other projection you want, like spherical mercator-projection).
As far as I can tell the vertex-shader can implement any projection you want and it doesn't need to represent a camera that is physically possible. It just needs to produce consistent clip-space coordinates for all vertices. Fragment-shaders for lighting would still need to operate on the original coordinates, normals etc., but that should be achievable. So the vertex-shader would just need to compute (x,y,z) => (phi,theta,r) and go on with that.
Occlusion-culling would need to be disabled, but iirc three.js doesn't do that anyway.
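To make that concrete, here is a rough, untested sketch of such a vertex shader in a three.js ShaderMaterial: it converts each world-space vertex to (longitude, latitude, radius) and writes those straight into clip space, bypassing the usual projection matrix. The depth scale and the assumption that the planet is centred at the origin are mine:

```ts
import * as THREE from 'three';

// Planet assumed centred at the world origin; the depth divisor (100.0) is an
// arbitrary bound on the scene radius, chosen only to keep z inside clip space.
const equirectMaterial = new THREE.ShaderMaterial({
  vertexShader: `
    void main() {
      vec3 p = (modelMatrix * vec4(position, 1.0)).xyz;
      float r     = length(p);
      float theta = atan(p.z, p.x);                    // longitude, -PI..PI
      float phi   = asin(clamp(p.y / r, -1.0, 1.0));   // latitude, -PI/2..PI/2
      // Write clip-space coordinates directly: longitude -> x, latitude -> y,
      // radius -> depth. No projection matrix involved.
      gl_Position = vec4(theta / 3.14159265, phi / 1.5707963, r / 100.0, 1.0);
    }
  `,
  fragmentShader: `
    void main() { gl_FragColor = vec4(1.0); } // flat colour; real lighting would go here
  `,
});
```

One caveat with any such projection: triangles that straddle the ±180° longitude seam (or the poles) will be interpolated across the whole map, so they would need to be split or handled specially.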
Could someone explain to me how FindPlane works? (I understand the inputs and the outputs, but not the process.) I am getting random values for the output, and therefore I do not understand how it actually functions: does it cast a ray from my camera according to my touch position, get the depth point that the ray hits, and derive a plane from that?
The operation is similar to a raycast, but the other way round. When you click a point on the screen, the screen coordinates are recorded. All 3D points in the point cloud are projected onto the image plane using the camera intrinsics, and the points that project close to the screen coordinates are taken. RANSAC is then used to extract plane information from those points; SVD can also be used to extract the plane normal from the inliers obtained from RANSAC. This method should be used only once per frame, after the transformation operation has been applied to all points in the point cloud.
This method gives random values in cases such as a sparse point cloud, reflections in the 3D point cloud, reflective surfaces, a cluttered 3D scene, external IR interference, etc.
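In outline, the RANSAC step looks something like the sketch below (illustrative only; the thresholds, iteration count, and names are assumptions, not the actual FindPlane implementation):

```ts
// Illustrative types and helpers; not from any particular SDK.
type Vec3 = [number, number, number];

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const cross = (a: Vec3, b: Vec3): Vec3 =>
  [a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0]];

function ransacPlane(points: Vec3[], iterations = 200, threshold = 0.01) {
  let best = { normal: [0, 0, 1] as Vec3, d: 0, inliers: 0 };
  for (let i = 0; i < iterations; i++) {
    // 1. Sample three points and form a candidate plane n.p + d = 0.
    const [a, b, c] = [0, 0, 0].map(() => points[Math.floor(Math.random() * points.length)]);
    let n = cross(sub(b, a), sub(c, a));
    const len = Math.sqrt(dot(n, n));
    if (len < 1e-9) continue; // degenerate (collinear or repeated) sample
    n = [n[0] / len, n[1] / len, n[2] / len] as Vec3;
    const d = -dot(n, a);

    // 2. Count inliers: points within `threshold` of the candidate plane.
    const inliers = points.filter(p => Math.abs(dot(n, p) + d) < threshold).length;
    if (inliers > best.inliers) best = { normal: n, d, inliers };
  }
  // 3. Optionally refine: fit a least-squares plane (e.g. via SVD) to the inliers.
  return best;
}
```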
I'd like to implement a dragging feature where users can drag objects around the workspace. That of course is the easy bit. The hard bit is to try and make it a physically correct drag which incorporates rotation due to torque moments (imagine dragging a book around on a table using only one finger: how does it rotate as you drag?).
Does anyone know where I can find explanations on how to code this (2D only, rectangles only, no friction required)?
Much obliged,
David
EDIT:
I wrote a small app (with clearly erroneous behaviour) that I hope will convey what I'm looking for much better than words could. C# (VS 2008) source and compiled exe here
EDIT 2:
Adjusted the example project to give acceptable behaviour. New source (and compiled exe) is available here. Written in C# 2008. I provide this code free of any copyright, feel free to use/modify/whatever. No need to inform me or mention me.
Torque is the component of the applied force perpendicular to the vector between the point where the force is applied and the centroid of the object, multiplied by the length of that vector (the lever arm). So, if you pull perpendicular to that vector, the whole applied force contributes to the torque; if you pull directly away from the centroid, the torque is zero.
You'd typically want to do this by modeling a spring connecting the original mouse-down point to the current position of the mouse (in object-local coordinates). Using a spring and some friction smooths out the motions of the mouse a bit.
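In 2D that decomposition amounts to a couple of lines; here's an illustrative sketch (the names are mine, not from any physics library):

```ts
type Vec2 = { x: number; y: number };

function forceAndTorque(centroid: Vec2, applyAt: Vec2, force: Vec2) {
  // Lever arm from the centroid to the point where the force is applied.
  const r = { x: applyAt.x - centroid.x, y: applyAt.y - centroid.y };

  // 2D "cross product": the perpendicular component of the force times the
  // lever arm. Positive means counter-clockwise torque about the centroid.
  const torque = r.x * force.y - r.y * force.x;

  // The full force still accelerates the centroid (F = m * a).
  return { force, torque };
}
```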
I've heard good things about Chipmunk as a 2D physics package:
http://code.google.com/p/chipmunk-physics/
Okay, it's getting late, and I need to sleep. But here are some starting points. You can either do all the calculations in one coordinate space, or you can define a coordinate space per object. In most animation systems, people use coordinate spaces per object, and use transformation matrices to convert, because it makes the math easier.
The basic sequence of calculations (sketched in code below) is:
1. On mouse-down, do your hit-test and store the coordinates of the event (in the object coordinate space).
2. When the mouse moves, create a vector representing the distance moved.
3. The force exerted by the spring is k * M, where M is the distance between the initial mouse-down point from step 1 and the current mouse position, and k is the spring constant of the spring.
4. Project that force vector onto two direction vectors, starting from the initial mouse-down point: one towards the center of the object, the other 90 degrees from that.
5. The force projected towards the center of the object will move it towards the mouse cursor, and the other component is the torque around the axis. How much the object accelerates depends on its mass, and the rotational acceleration depends on its moment of inertia.
6. The friction and viscosity of the medium the object is moving in cause drag, which simply reduces the motion of the object over time.
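Putting those steps together, a per-frame update might look roughly like this (explicit Euler integration; all names, constants, and the simple velocity damping are assumptions, not a definitive implementation):

```ts
type Vec2 = { x: number; y: number };

interface Body {
  pos: Vec2;       // centroid, in world space
  angle: number;   // rotation in radians
  vel: Vec2;
  angVel: number;
  mass: number;
  inertia: number; // moment of inertia about the centroid
  grab: Vec2;      // mouse-down point, in object-local coordinates (step 1)
}

const SPRING_K = 50;  // spring constant k
const DAMPING = 0.9;  // crude per-step damping standing in for friction/viscosity

function step(body: Body, mouse: Vec2, dt: number): void {
  // Where the grabbed point currently sits in world space.
  const cos = Math.cos(body.angle), sin = Math.sin(body.angle);
  const grabWorld = {
    x: body.pos.x + body.grab.x * cos - body.grab.y * sin,
    y: body.pos.y + body.grab.x * sin + body.grab.y * cos,
  };

  // Spring force F = k * M, where M is the vector from the grab point to the mouse (steps 2-3).
  const force = {
    x: SPRING_K * (mouse.x - grabWorld.x),
    y: SPRING_K * (mouse.y - grabWorld.y),
  };

  // Torque is the perpendicular component of that force times the lever arm (steps 4-5).
  const r = { x: grabWorld.x - body.pos.x, y: grabWorld.y - body.pos.y };
  const torque = r.x * force.y - r.y * force.x;

  // Integrate linear and angular motion, then damp it (step 6).
  body.vel.x += (force.x / body.mass) * dt;
  body.vel.y += (force.y / body.mass) * dt;
  body.angVel += (torque / body.inertia) * dt;
  body.vel.x *= DAMPING;
  body.vel.y *= DAMPING;
  body.angVel *= DAMPING;

  body.pos.x += body.vel.x * dt;
  body.pos.y += body.vel.y * dt;
  body.angle += body.angVel * dt;
}
```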
Or, maybe you just want to fake it. In that case, just store the (x, y) location of the rectangle and its current rotation, phi. Then do this (a rough sketch follows the steps):
1. Capture the mouse-down location in world coordinates.
2. When the mouse moves, move the box according to the change in mouse position.
3. Calculate the angle between the mouse and the center of the object (atan2 is useful here), and between the center of the object and the initial mouse-down point. Add the difference between the two angles to the rotation of the rectangle.
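One way to read those steps in code (the event plumbing and names are mine):

```ts
type Vec2 = { x: number; y: number };

const rect = { x: 0, y: 0, phi: 0 }; // position and current rotation of the rectangle

let lastMouse: Vec2;   // mouse position at the previous event
let startAngle = 0;    // angle from the rectangle's center to the initial mouse-down point
let startPhi = 0;      // rectangle rotation at mouse-down

function onMouseDown(mouse: Vec2): void {
  lastMouse = mouse;
  startPhi = rect.phi;
  startAngle = Math.atan2(mouse.y - rect.y, mouse.x - rect.x);
}

function onMouseMove(mouse: Vec2): void {
  // Move the box by the change in mouse position.
  rect.x += mouse.x - lastMouse.x;
  rect.y += mouse.y - lastMouse.y;
  lastMouse = mouse;

  // Rotate by how much the angle from the center to the mouse has changed
  // relative to the angle captured at mouse-down.
  const angleNow = Math.atan2(mouse.y - rect.y, mouse.x - rect.x);
  rect.phi = startPhi + (angleNow - startAngle);
}
```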
This would seem to be a basic physics problem.
You would need to know where they clicked, and that will tell you whether they are pushing or pulling; so, though you are doing this in 2D, your calculations will need to be in 3D, and your awareness of where they clicked will be in 3D.
Each item will have properties, such as mass, and perhaps information for air resistance, since the air will affect the motion.
You will also need to react differently based on how fast the user is moving the mouse.
So, they may be able to move a 2-ton weight faster than would be physically possible, and you will just need to adapt to that, as the user will not be happy if the object being dragged lags behind the mouse pointer.
Which language?
Here's a bunch of 2d transforms in C