How can I determine whether a point is above an irregular mesh/surface in PCL?
I have one cloud of points that I would like to convert to a surface/mesh (not sure which terminology I should use.) Think of it as an irregular ground plane. For example:
This just shows that the surface can be sort of random, even have holes in it where data wasn't available.
Now, I have another point cloud, and I'd like to be able to filter out all the points that are below this surface.
The way I've been converting my points to a surface is by following the Fast triangulation of unordered point clouds tutorial.
If I can do this without converting the points to a surface, that would be great too. I'm new at this so I can easily imagine I'm going about this all wrong.
When I tried using straight point clouds, sparsity became a big issue. For example, in the image below, I generated a dense surface of points and, to filter the other cloud, used getPointsInBox() (as suggested here) to search beneath the points. But as you can see, it fails where the surface is sparse (the blue points circled in black).
If I could create a more-or-less continuous mesh grid of points from my original points, the getPointsInBox() method would work quite well, but I also haven't been able to figure out how to do that.
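To make the "more-or-less continuous grid" idea concrete, here is a rough sketch of that logic (written in TypeScript rather than PCL's C++ purely to show the idea; every name in it is made up): rasterize the surface cloud into a regular XY grid of heights, then keep only the points of the other cloud that lie above the grid height of their cell.

```typescript
// Sketch only: illustrates the "grid of heights" idea, not PCL's actual API.
interface Point { x: number; y: number; z: number; }

// Build a regular XY grid; each cell stores the highest z of the surface
// points that fall into it (cells with no data correspond to holes).
function buildHeightGrid(surface: Point[], cellSize: number) {
  const grid = new Map<string, number>();
  const key = (p: Point) =>
    `${Math.floor(p.x / cellSize)},${Math.floor(p.y / cellSize)}`;
  for (const p of surface) {
    const prev = grid.get(key(p));
    grid.set(key(p), prev === undefined ? p.z : Math.max(prev, p.z));
  }
  return { grid, key };
}

// Keep only the points that are at or above the surface height of their cell.
// What to do in cells with no surface data (holes) is a policy choice; here
// such points are kept.
function filterAboveSurface(cloud: Point[], surface: Point[], cellSize = 0.5): Point[] {
  const { grid, key } = buildHeightGrid(surface, cellSize);
  return cloud.filter(p => {
    const h = grid.get(key(p));
    return h === undefined || p.z >= h;
  });
}
```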
Related
I am curious about the limits of three.js. The following question is asked mainly as a challenge, not because I actually need the specific knowledge/code right away.
Say you have a game/simulation world model around a sphere geometry representing a planet, like the worlds of the game Populous. The resolution of polygons and textures is sufficient to look smooth when the globe fills the view of an ordinary camera. There are animated macroscopic objects on the surface.
The challenge is to project everything from the model to a global map projection on the screen in real time. The choice of projection is yours, but it must be seamless/continuous, and it must be possible for the user to rotate it, placing any point on the planet surface in the center of the screen. (It is not an option to maintain an alternative model of the world only for visualization.)
There are no limits on the number of cameras etc. allowed, but the performance should be real-time, say double-digit FPS or more.
I don't expect any proof in the form of a running application (although that would be cool), but some explanation as to how it could be done.
My own initial idea is to place a lot of cameras, in fact one for every pixel in the map projection, around the globe, within a Group object that is attached to some kind of orbit controls (with rotation only), but I expect the number of object culling operations to become a huge performance issue. I am sure there must exist more elegant (and faster) solutions. :-)
Why not just use a spherical camera model (think of a 360° camera) and virtually put it at the center of the sphere? This camera would (if it were physically possible) be wrapped all around the sphere, looking toward the center from all directions.
This camera could be implemented in shaders (instead of the regular projection matrix) and would produce an equirectangular image of the planet surface (or in fact any other projection you want, like the spherical Mercator projection).
As far as I can tell, the vertex shader can implement any projection you want, and it doesn't need to represent a camera that is physically possible. It just needs to produce consistent clip-space coordinates for all vertices. Fragment shaders for lighting would still need to operate on the original coordinates, normals, etc., but that should be achievable. So the vertex shader would just need to compute (x,y,z) => (phi,theta,r) and go on with that.
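A rough sketch of that per-vertex mapping, written here as plain TypeScript rather than GLSL just to show the math (the planet is assumed to be centered at the origin, and the depth term is only one possible choice):

```typescript
// Sketch of the (x, y, z) => (phi, theta, r) mapping such a "camera" would
// apply per vertex. Not actual three.js or GLSL code.
function toEquirectangularClipSpace(x: number, y: number, z: number, rMax = 10) {
  const r = Math.sqrt(x * x + y * y + z * z); // distance from the planet center
  const phi = Math.atan2(z, x);               // longitude in [-PI, PI]
  const theta = Math.asin(y / r);             // latitude in [-PI/2, PI/2]
  return {
    clipX: phi / Math.PI,          // spans [-1, 1] across the full map width
    clipY: theta / (Math.PI / 2),  // spans [-1, 1] from south pole to north pole
    clipZ: r / rMax,               // depth ordered by radius; adjust to your depth test
  };
}
```

One practical caveat: triangles that straddle the phi = ±PI seam would stretch across the whole map, so the seam needs special handling (for example, duplicated vertices there), and rotating the view amounts to rotating the planet (or the basis used above) before this mapping.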
Occlusion culling would need to be disabled, but IIRC three.js doesn't do that anyway.
I'm new to three.js and WebGL in general.
The sample at http://css.dzone.com/articles/threejs-render-real-world shows how to use raster GIS terrain data in three.js
Is it possible to use vector GIS data in a scene? For example, I have a series of points representing locations (including height) stored in real-world coordinates (meters). How would I go about displaying those in three.js?
The basic sample at http://threejs.org/docs/59/#Manual/Introduction/Creating_a_scene shows how to create a geometry using coordinates - could I use a similar approach with real-world coordinates such as
"x" : 339494.5,
"y" : 1294953.7,
"z": 0.75
or do I need to convert these into page units? Could I use my points to create a surface on which to drape an aerial image?
I tried modifying the simple sample but I'm not seeing anything (or any error messages): http://jsfiddle.net/slead/KpCfW/
Thanks for any suggestions on what I'm doing wrong, or whether this is indeed possible.
I did a number of things to get the JSFiddle to show something, here: http://jsfiddle.net/HxnnA/
You did not specify any faces in your geometry. In this case I just hard-coded a face with all three of your data points acting as corners. Alternatively you can look into using particles to display your data as points instead of faces.
Set the material's side to THREE.DoubleSide. This is not usually needed or recommended, but it helps debugging in the early phases, when you can see both sides of a face.
Your camera was probably looking in the wrong direction. I added a lookAt() to point it at the center and made the field of view wider (this just makes it easier to find things while coding).
Your camera's near and far planes were likely out of range for the camera position and terrain dimensions, so I increased the far-plane distance.
Your coordinate values were quite huge, so I just modified them by hand a bit to make sense in relation to the camera, and to make sure they form a big enough triangle to be seen. You could consider dividing your coordinates by something like 100 to make the units smaller, but adjusting the camera to account for the huge scale should be enough too.
Nothing wrong with your approach, just make sure you feed the data so that it makes sense considering the camera location, direction and near + far planes. Pay attention to how you make the faces: the parameters to Face3 are the indices of the points in your vertices array. Later on you might need to take winding order, normals and UVs into account. You can study the geometry classes included in Three.js for reference.
Three.js does not assign any meaning to units. It's just floating-point numbers, and you can decide yourself what a unit (1.0) represents. Whether it's 1 mm, 1 inch or 1 km depends on what makes the most sense for the application and its scale. Floating-point numbers can bring precision problems when the actual values are extremely small or extremely large. My own applications typically deal with stuff in the range from a couple of centimeters to a couple hundred meters, and use units such that 1.0 = 1 meter; that has been working fine.
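Putting those points together, a minimal sketch (not the exact fiddle) could look roughly like the following. It uses the legacy Geometry/Face3 API of the three.js revision the question links to (r59), and the coordinates are placeholders shifted near the origin:

```typescript
// Minimal sketch, legacy three.js (~r59) Geometry/Face3 API; not the exact fiddle.
const scene = new THREE.Scene();

// Wider FOV and a far plane comfortably beyond the data.
const camera = new THREE.PerspectiveCamera(
  60, window.innerWidth / window.innerHeight, 0.1, 1000
);
camera.position.set(0, -30, 20);
camera.lookAt(new THREE.Vector3(0, 0, 0));   // make sure the camera faces the data

const geometry = new THREE.Geometry();
// Real-world coordinates with a constant offset subtracted so the numbers stay small.
geometry.vertices.push(
  new THREE.Vector3(  0.5,  3.7, 0.75),
  new THREE.Vector3( 12.0, 10.0, 1.10),
  new THREE.Vector3( -8.0, 15.0, 0.40)
);
geometry.faces.push(new THREE.Face3(0, 1, 2));   // indices into geometry.vertices
geometry.computeFaceNormals();

// DoubleSide only as a debugging aid, so the face is visible from either side.
const material = new THREE.MeshBasicMaterial({ color: 0x44aa88, side: THREE.DoubleSide });
scene.add(new THREE.Mesh(geometry, material));

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
renderer.render(scene, camera);
```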
I'm trying to figure out how to draw a stretchy/elastic line between two points in OpenGL/Cocos2d on iPhone. Something like this
where the "band" gets thinner as the line gets longer. iOS uses the same technique I'm aiming for in the Mail.app pull-to-refresh.
First of all, is there a name for this kind of thing?
My first thought was to plot a point on the radius of the starting and ending circles based on the angle between the two, and draw a quadratic bezier curve using the distance/2 as a control point. But I'm not a maths whizz, so I'm struggling to figure out how to place the control point that will adjust the thickness of the path.
But a bigger problem is that I need to fill the shape with a colour, and that doesn't seem to be possible with OpenGL bezier curves as far as I can tell since curves don't seem to form part of a shape that can be filled.
So I looked at using a spline created using a point array, but that opens up a whole new world of mathematical pain as I'd have to figure out where all the points along the edge of the path are.
So before I go down that rabbit hole, I'm wondering whether there's something simpler that I'm overlooking, or if anyone can point me towards the most effective technique.
I'm not sure about a "common" technique that people use, other than calculating it mathematically, but this project, SlimeyRefresh, is a good example of how to accomplish this.
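If you do end up computing the outline yourself, a rough sketch of the quadratic bezier idea from the question might look like this (all helper names are made up; the pinch factor controls how much the band thins, and you could make it shrink with distance):

```typescript
// Rough sketch of the quadratic-bezier band outline. The result is a list of
// vertex pairs that can be filled as a triangle strip (GL_TRIANGLE_STRIP or
// the Cocos2d equivalent); the end circles are drawn separately as discs.
type Vec2 = { x: number; y: number };

const add = (a: Vec2, b: Vec2): Vec2 => ({ x: a.x + b.x, y: a.y + b.y });
const sub = (a: Vec2, b: Vec2): Vec2 => ({ x: a.x - b.x, y: a.y - b.y });
const scale = (a: Vec2, s: number): Vec2 => ({ x: a.x * s, y: a.y * s });

function quadBezier(p0: Vec2, c: Vec2, p1: Vec2, t: number): Vec2 {
  const u = 1 - t;
  return add(add(scale(p0, u * u), scale(c, 2 * u * t)), scale(p1, t * t));
}

// c0/r0: start circle, c1/r1: end circle. pinch < 1 thins the middle of the band.
function elasticBandStrip(c0: Vec2, r0: number, c1: Vec2, r1: number,
                          pinch = 0.4, steps = 16): Vec2[] {
  const d = sub(c1, c0);
  const len = Math.hypot(d.x, d.y);
  const n: Vec2 = { x: -d.y / len, y: d.x / len };   // unit perpendicular to the axis
  const mid = scale(add(c0, c1), 0.5);

  // Edge points on each circle, plus control points pulled in toward the axis.
  const a0 = add(c0, scale(n, r0)), a1 = add(c1, scale(n, r1));
  const b0 = sub(c0, scale(n, r0)), b1 = sub(c1, scale(n, r1));
  const ca = add(mid, scale(n, pinch * (r0 + r1) / 2));
  const cb = sub(mid, scale(n, pinch * (r0 + r1) / 2));

  const strip: Vec2[] = [];
  for (let i = 0; i <= steps; i++) {
    const t = i / steps;
    strip.push(quadBezier(a0, ca, a1, t));   // one edge of the band
    strip.push(quadBezier(b0, cb, b1, t));   // the opposite edge
  }
  return strip;
}
```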
I'm creating an HTML5 canvas 3D renderer, and I'd say I've gotten pretty far without the help of SO, but I've run into a showstopper of sorts. I'm trying to implement backface culling on a cube with the help of some normals calculations. Also, I've tagged this as WebGL, as this is a general enough question that it could apply to both my use case and a 3D-accelerated one.
At any rate, as I'm rotating the cube, I've found that the wrong faces are being hidden. Example:
I'm using the following vertices:
https://developer.mozilla.org/en/WebGL/Creating_3D_objects_using_WebGL#Define_the_positions_of_the_cube%27s_vertices
The general procedure I'm using is:
Create a transformation matrix by which to transform the cube's vertices
For each face, and for each point on each face, I convert these to vec3s and multiply them by the matrix made in step 1.
I then get the surface normal of the face using Newell's method, then get a dot-product from that normal and some made-up vec3, e.g., [-1, 1, 1], since I couldn't think of a good value to put in here. I've seen some folks use the position of the camera for this, but...
Skipping the usual step of using a camera matrix, I pull the x and y values from the resulting vectors to send to my line and face renderers, but only if they have a dot-product above 0. I realize it's rather arbitrary which ones I pull, really.
I'm wondering two things: whether my procedure in step 3 is correct (it most likely isn't), and whether the order of the points I'm drawing on the faces is incorrect (very likely). If the latter is true, I'm not quite sure how to visualize the problem. I've seen people say that normals aren't pertinent, that it's the direction the line is being drawn in, but it's hard for me to wrap my head around that, or to tell whether that's the source of my problem.
It probably doesn't matter, but the matrix library I'm using is gl-matrix:
https://github.com/toji/gl-matrix
Also, the particular file in my open source codebase I'm using is here:
http://code.google.com/p/nanoblok/source/browse/nb11/app/render.js
Thanks in advance!
I haven't reviewed your entire system, but the “made-up vec3” should not be arbitrary; it should be the “out of the screen” vector, which (since your projection is ⟨x, y, z⟩ → ⟨x, y⟩) is either ⟨0, 0, -1⟩ or ⟨0, 0, 1⟩ depending on your coordinate system's handedness and screen axes. You don't have an explicit "camera matrix" (that is usually called a view matrix), but your camera (view and projection) is implicitly defined by your step 4 projection!
However, note that this approach will only work for orthographic projections, not perspective ones (consider a face on the left side of the screen, facing rightward and parallel to the view direction; the dot product would be 0 but it should be visible). The usual approach, used in actual 3D hardware, is to first do all of the transformation (including projection), then check whether the resulting 2D triangle is counterclockwise or clockwise wound, and keep or discard based on that condition.
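For the winding test itself, once the three vertices have been projected to 2D it boils down to the sign of the triangle's signed area, something like this (which sign means "front-facing" depends on your screen axes; canvas y grows downward, so pick the convention empirically):

```typescript
// Twice the signed area of the projected 2D triangle; its sign gives the winding.
function signedArea2(ax: number, ay: number, bx: number, by: number,
                     cx: number, cy: number): number {
  return (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
}

// Cull the face when it is wound the "wrong" way after projection.
function isFrontFacing(a: [number, number], b: [number, number], c: [number, number]): boolean {
  return signedArea2(a[0], a[1], b[0], b[1], c[0], c[1]) > 0; // or < 0, per your convention
}
```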
I need the fastest sphere mapping algorithm. Something like Bresenham's line drawing one.
Something like the implementation that I saw in Star Control 2 (rotating planets).
Are there any already invented and/or implemented techniques for this?
I really don't want to reinvent the wheel. Please, help...
Description of the problem.
I have a place on the 2D surface where the sphere has to appear. The sphere (say, the Earth) has to be textured with a detailed map and has to be able to scale and rotate freely. I want to implement it with a lookup map or some simple transformation function of coordinates: each pixel on the 2D image of the sphere is defined by one or more pixels from the cylindrical map of the sphere. This gives me the ability to antialias the resulting image. I am also thinking about using mipmaps when one pixel of the resulting picture corresponds to more than one pixel of the original map (for example, close to the poles of the sphere). Deep down I feel this can be implemented with some fairly simple math, but these are just my own thoughts.
This question is a little bit related to this one: Textured spheres without strong distortion, but it did not answer my question.
UPD: Assume that I have no hardware support; I want a cross-platform solution.
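For concreteness, the per-pixel lookup described above might be sketched like this (rotation is limited to one axis and the antialiasing/mipmap details are left out, since those are the open questions):

```typescript
// Sketch of "each pixel of the sphere image = a lookup into the cylindrical map".
// Pure software, nearest-texel sampling; scaling is just a matter of changing R.
function drawSphere(map: ImageData, out: ImageData,
                    cx: number, cy: number, R: number,
                    rotY = 0 /* spin about the vertical axis */): void {
  for (let py = Math.max(0, Math.ceil(cy - R)); py <= Math.min(out.height - 1, cy + R); py++) {
    for (let px = Math.max(0, Math.ceil(cx - R)); px <= Math.min(out.width - 1, cx + R); px++) {
      const x = (px - cx) / R;
      const y = (py - cy) / R;
      const d2 = x * x + y * y;
      if (d2 > 1) continue;                 // outside the sphere's disc
      const z = Math.sqrt(1 - d2);          // visible (front) hemisphere

      // Rotate about the vertical axis; add more rotations for full freedom.
      const xr = x * Math.cos(rotY) + z * Math.sin(rotY);
      const zr = -x * Math.sin(rotY) + z * Math.cos(rotY);

      const lon = Math.atan2(xr, zr);       // [-PI, PI]
      const lat = Math.asin(y);             // [-PI/2, PI/2]

      // Nearest-texel lookup into the equirectangular (cylindrical) map.
      const u = Math.floor((lon / (2 * Math.PI) + 0.5) * map.width) % map.width;
      const v = Math.min(map.height - 1, Math.floor((lat / Math.PI + 0.5) * map.height));

      const src = (v * map.width + u) * 4;
      const dst = (py * out.width + px) * 4;
      for (let k = 0; k < 4; k++) out.data[dst + k] = map.data[src + k];
    }
  }
}
```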
The standard way to do this kind of mapping is a cube map: the sphere is projected onto the 6 sides of a cube. Modern graphics cards support this kind of texture at the hardware level, including full texture filtering; I believe mipmapping is also supported.
An alternative method (which is not explicitly supported by hardware, but which can be implemented with reasonable performance by procedural shaders) is parabolic mapping, which projects the sphere onto two opposing parabolas (each of which is mapped to a circle in the middle of a square texture). The parabolic projection is not a projective transformation, so you'll need to handle the math "by hand".
In both cases, the distortion is strictly limited. Due to the hardware support, I recommend the cube map.
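For completeness, a software cube-map lookup is just a dominant-axis test plus one divide per pixel; a rough sketch (the per-face orientation below is only one possible convention):

```typescript
// Pick a cube face from a direction vector and return (face, u, v) on that face.
// The exact per-face orientation is a convention; adjust to match your textures.
function cubeMapLookup(x: number, y: number, z: number) {
  const ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
  let face: string, u: number, v: number, m: number;
  if (ax >= ay && ax >= az)      { m = ax; face = x > 0 ? "+X" : "-X"; u = x > 0 ? -z : z; v = -y; }
  else if (ay >= ax && ay >= az) { m = ay; face = y > 0 ? "+Y" : "-Y"; u = x; v = y > 0 ? z : -z; }
  else                           { m = az; face = z > 0 ? "+Z" : "-Z"; u = z > 0 ? x : -x; v = -y; }
  // Map from [-1, 1] on the chosen face to [0, 1] texture coordinates.
  return { face, u: (u / m + 1) / 2, v: (v / m + 1) / 2 };
}
```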
There is a nice new way to do this: HEALPix.
Advantages over any other mapping:
The bitmap can be divided into equal parts (very little distortion)
Very simple, recursive geometry of the sphere with arbitrary precision.
Example image.
Did you take a look at Jim Blinn's articles "How to draw a sphere"? I do not have access to the full articles, but it looks like what you need.
I'm a big fan of StarconII, but unfortunately I don't remember the details of what the planet drawing looked like...
The first option is triangulating the sphere and drawing it with standard 3D polygons. This has definite weaknesses as far as verisimilitude is concerned, but it uses the available hardware acceleration and can be made to look reasonably good.
If you want to roll your own, you can rasterize it yourself. Foley, van Dam et al's Computer Graphics -- Principles and Practice has a chapter on Bresenham-style algorithms; you want the section on "Scan Converting Ellipses".
For the point-cloud idea I suggested in earlier comments: you could avoid runtime parameterization questions by preselecting and storing the (x,y,z) coordinates of surface points instead of a 2D map. I was thinking of partially randomizing the point locations on the sphere so that they wouldn't cause structured aliasing when transformed (forwards, backwards, whatever 8^) onto the screen. On the downside, you'd have to deal with the "fill" factor: summing up the colors as you draw them and dividing by the number of points. You'd also have the problem of what to do if there are no points; e.g., if you want to zoom in with extreme magnification, you'll need to do something like look for the nearest point in that case.
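A brief sketch of that accumulation step (assuming the points have already been transformed to screen coordinates; pixels with no hits are simply left transparent here, where a real version would fall back to the nearest point):

```typescript
// Accumulate point colors per pixel and divide by the hit count ("fill" factor).
function splatPoints(points: { sx: number; sy: number; r: number; g: number; b: number }[],
                     width: number, height: number): Uint8ClampedArray {
  const sum = new Float32Array(width * height * 3);
  const count = new Uint32Array(width * height);
  for (const p of points) {
    const x = Math.round(p.sx), y = Math.round(p.sy);
    if (x < 0 || y < 0 || x >= width || y >= height) continue;
    const i = y * width + x;
    sum[i * 3] += p.r; sum[i * 3 + 1] += p.g; sum[i * 3 + 2] += p.b;
    count[i]++;
  }
  const out = new Uint8ClampedArray(width * height * 4);   // RGBA output
  for (let i = 0; i < width * height; i++) {
    const n = count[i] || 1;
    out[i * 4]     = sum[i * 3] / n;
    out[i * 4 + 1] = sum[i * 3 + 1] / n;
    out[i * 4 + 2] = sum[i * 3 + 2] / n;
    out[i * 4 + 3] = count[i] ? 255 : 0;                   // empty pixels stay transparent
  }
  return out;
}
```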