I'm very new to WebGL, but I'm getting close to understanding the basics.
I'm following the instructions in Jacob Seidelin's book where he explains some of the basics.
I tried rebuilding one of his examples (which is not directly explained in the book).
For some reason the depth in the uModelView matrix doesn't work in my application. I also don't get any errors using the WebGLDebugUtils.
When I set the z component of the uModelView matrix to 0, the front face of the cube fills up the screen, since I used vertex coordinates from -1 to 1.
Here is my source code: [removed]
The shaders are located in the index.html, but they shouldn't be the problem.
I'm using gl-matrix for the matrix transformations.
Thanks in advance.
You are not using mat4.perspective correctly. Check out the documentation:
https://github.com/toji/gl-matrix/blob/master/gl-matrix.js#L1722
You should either pass the matrix as the last parameter (this is the preferred way, since it does not allocate a new object):
mat4.perspective(fov, aspect, near, far, matrix);
or assign it to the matrix:
matrix = mat4.perspective(fov, aspect, near, far);
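For context, here is a minimal sketch of setting up the projection and model-view matrices with the older gl-matrix API linked above; the names (pMatrix, mvMatrix, canvas) and the translation of -5 are assumptions for illustration, not the asker's code:

var pMatrix = mat4.create();
var mvMatrix = mat4.create();

// Write the projection into pMatrix instead of discarding the return value.
mat4.perspective(45, canvas.width / canvas.height, 0.1, 100.0, pMatrix);

// Push the scene away from the camera so a cube spanning -1..1 sits between the near and far planes.
mat4.identity(mvMatrix);
mat4.translate(mvMatrix, [0.0, 0.0, -5.0]);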
Related
I'm trying to create a mesh for a simple environment (i.e. a playpen, in ROS Noetic and Gazebo). I used 10 pcd files (recorded using an HDL-32E lidar) to create the mesh environment with the following steps:
1- Remove radius outliers (nb_points=10, radius=0.8) from pcd files and save as ply files
2- Register ply files using point-to-plane ICP and pose graph optimization
3- Combine the ply files. Apparently, the combined cloud looks good (see combined_plys.png).
4- Reconstruct the mesh environment using Poisson reconstruction (depth=14). The resulting mesh file shows a black rectangle only (see front.png). The flipped side shows a kind of playpen environment, but it looks bad (see flipped.png). The reconstruction process also generates the warning "Extract bad average roots: 21".
I did some research and observed that normals play a critical role in mesh reconstruction. I created the normals using CloudCompare and then set their orientation using orient_normals_to_align_with_direction. The registered and combined cloud now has normals, apparently aligned (see normals_front and normals_back). Consequently, there is some improvement in the flipped mesh, but the front side is still a black rectangle. Any help/hint is much appreciated.
[Attached images: combined_cloud, front_mesh, flipped_mesh, normals_front, normals_back, flipped_mesh_with_normals]
Could you guys suggest how to fix this issue? Thanks in advance
I saw that your question has been answered in another forum. That solution is a bit complicated and I didn't go through it. I'm just sharing how I solved this with a few Open3D settings.
I also ran into the black reconstruction problem.
From my trials I found that it is the normals of the mesh vertices that we have to calculate, not those of the original point cloud. Here's what I do:
import numpy as np

# Calculate the normals of the mesh vertices
mesh.compute_vertex_normals()
# Paint it gray. Not necessary, but the reflection of lighting is hardly perceivable on black surfaces.
mesh.paint_uniform_color(np.array([0.5, 0.5, 0.5]))
I've been stuck for the last two weeks on updating the threejs_mousepick.html example from an old THREE.js release to the current one. Oh, yeah, I am a newbie to programming.
I've created a Fiddle, hoping someone would spend some time helping me. CANNON.js is a great API and it is sad to see that the examples are so old/unusable with today's THREE.js. I understand it is a lot of work and I am willing to help, but I need some help first. So, #schteppe, if you read this, get in touch: I am willing to spend some time working on this.
The answer is as broad as the question.
Using THREE.Raycaster() and THREE.Plane() simplifies things a lot. It lets you get rid of functions such as projectOntoPlane, findNearestIntersectingObject and getRayCasterFromScreenCoord, and shortens the setScreenPerpCenter function (its name is ridiculous, but I left it as it was) to just one line.
jsfiddle example r87
gplane is a THREE.Plane():
var gplane = new THREE.Plane(), gplaneNormal = new THREE.Vector3();
As written in the descriptive comment, we create a virtual plane on which we move our joint point.
function setScreenPerpCenter(point) {
gplane.setFromNormalAndCoplanarPoint(gplaneNormal.subVectors(camera.position, point).normalize(), point);
}
Here, we set our plane from a normal and a coplanar point, where the normal is the normalized vector from the point of click on the cube towards the camera position, and the point is that click point itself. Read about that method here.
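To illustrate how the plane is then used while dragging, here is a minimal sketch under assumed names (mouse, camera and jointBody are not part of the quoted answer): cast a ray from the mouse through the camera and move the joint body to wherever the ray hits gplane.

var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();
var hit = new THREE.Vector3();

function onMouseMove(event) {
  // Normalized device coordinates of the mouse
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
  raycaster.setFromCamera(mouse, camera);
  // Intersect the virtual plane and move the CANNON.js joint body there
  if (raycaster.ray.intersectPlane(gplane, hit)) {
    jointBody.position.set(hit.x, hit.y, hit.z);
  }
}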
Anyone happen to know of an example or can point me in the right direction on rendering a heightmap/terrain in WebGL from a three dimensional array? Basically I have an array that contains data relevant to x and y coordinates and a 'height' (z axis).
Everything I've found (like in the threejs world) shows how to create one dynamically or from a 2d image. Ideally I'd like to have the color of the pixel/particle related to the height. Basically looking to do something like below but in WebGL:
There are many examples of how to do this already available. You can search for three.js + heightmap.
Or try three.js + 3d graph.
Here is something called a "Graphulus-Function" that looks pretty much exactly like what you need.
Here you can find another interesting reference.
Without more details on your data it is hard to say if these examples suit your needs...
Check also this three.js issue 1003 on GitHub: "Terrain from Heightmap" where there is a discussion about this topic and lots of great examples are mentioned.
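As a starting point, here is a minimal sketch (not from the links above) of building such a terrain in current three.js from a 2D heights array, with vertex colors driven by height; size, heights and the colour ramp are assumptions, and heights is assumed to be normalized to 0..1:

const size = 64; // grid resolution of the height data
const geometry = new THREE.PlaneGeometry(10, 10, size - 1, size - 1);
const position = geometry.attributes.position;
const colors = [];
for (let i = 0; i < position.count; i++) {
  const h = heights[Math.floor(i / size)][i % size]; // height for this vertex
  position.setZ(i, h);                               // displace the vertex
  colors.push(h, 0.2, 1.0 - h);                      // simple height-based colour ramp
}
geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));
const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ vertexColors: true }));
mesh.rotation.x = -Math.PI / 2; // PlaneGeometry lies in the XY plane, so lay it flat
scene.add(mesh);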
I'm currently trying to create a three.js mesh which has a large number of faces (in the thousands) and is using textures. However, my problem is that each face can have its texture changed at runtime, so potentially it's possible that every face has a different texture.
I tried preloading a materials array (for MeshFaceMaterial) with default textures and assigning each face a different materialIndex, but that generated much lag.
A bit of research led to here, which says
If number is large (e.g. each face could be potentially different), consider different solution, using attributes / textures to drive different per-face look.
I'm a bit confused about how shaders work, and in particular I'm not even sure how you would use textures with attributes. I couldn't find any examples of this online, as most texture-shader related examples I found used uniforms instead.
So my question is this: Is there an efficient way for creating a mesh with a large number of textures, changeable at runtime? If not, are there any examples for the aforementioned attributes/textures idea?
Indeed, this can be a tricky thing to implement. Now I can't speak much to GLSL (I'm learning), but what I do know is that uniforms are constant for every vertex and fragment within a draw call, so you would likely want an attribute for your case, but I welcome being wrong here. However, I do have a far simpler suggestion.
You could use one texture that you "subdivide" into all the tiny textures you need for each face. Then at runtime you can pull the UV coordinates out of that texture and apply them to the faces individually. You'll still deal with computation time, but for a thousand or so faces it should be doable. I tested with a 25k-face model and it was quick changing all faces per tick.
Now the trick is navigating the faceVertexUvs three-dimensional array. For example, with a textured cube of 12 faces you could reset all faces to equal one side like so:
// Step by 2: each square side of the cube is made of two triangular faces
for (var uvCnt = 0; uvCnt < mesh.geometry.faceVertexUvs[0].length; uvCnt += 2) {
  // Copy the UVs of faces 2 and 3 (one side) onto every other pair of faces
  mesh.geometry.faceVertexUvs[0][uvCnt][0] = mesh.geometry.faceVertexUvs[0][2][0];
  mesh.geometry.faceVertexUvs[0][uvCnt][1] = mesh.geometry.faceVertexUvs[0][2][1];
  mesh.geometry.faceVertexUvs[0][uvCnt][2] = mesh.geometry.faceVertexUvs[0][2][2];
  mesh.geometry.faceVertexUvs[0][uvCnt+1][0] = mesh.geometry.faceVertexUvs[0][3][0];
  mesh.geometry.faceVertexUvs[0][uvCnt+1][1] = mesh.geometry.faceVertexUvs[0][3][1];
  mesh.geometry.faceVertexUvs[0][uvCnt+1][2] = mesh.geometry.faceVertexUvs[0][3][2];
}
Here I have a cube that has 6 colors (1 per side), and I loop through each faceVertexUv (stepping by 2, as two triangles make up a plane) and reset all the UVs to my second side, which is blue. Of course, you'll want to map the coordinates into an object of sorts so you can easily query the object to return and reset the corresponding UVs, but I don't know your use case. For completeness, you'll want to run mesh.geometry.uvsNeedUpdate = true; at runtime to see the updates. I hope that helps.
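To make the atlas idea concrete, here is a small sketch (my own illustration, not part of the answer) that computes the four UV corners of tile (tx, ty) in an N x N atlas; you would then write those corners into the face's faceVertexUvs entries (three Vector2s per triangle) as shown above:

function tileUvs(tx, ty, N) {
  var s = 1 / N;                // width/height of one tile in UV space
  var u0 = tx * s, v0 = ty * s; // lower-left corner of the tile
  return {
    bl: new THREE.Vector2(u0, v0),
    br: new THREE.Vector2(u0 + s, v0),
    tr: new THREE.Vector2(u0 + s, v0 + s),
    tl: new THREE.Vector2(u0, v0 + s)
  };
}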
I'm trying to build something like the Liquify filter in Photoshop. I've been reading through image distortion code but I'm struggling with finding out what will create similar effects. The closest reference I could find was the iWarp filter in Gimp but the code for that isn't commented at all.
I've also looked at places like ImageMagick, but they don't have anything in this area.
Any pointers or a description of algorithms would be greatly appreciated.
Excuse me if I make this sound a little simplistic, I'm not sure how much you know about gfx programming or even what techniques you're using (I'd do it with HLSL myself).
The way I would approach this problem is to generate a texture which contains offsets of x/y coordinates in the r/g channels. Then the output colour of a pixel would be:
Texture inputImage
Texture distortionMap
colour(x,y) = inputImage(x + distortionMap(x, y).R, y + distortionMap(x, y).G)
(To tell the truth this isn't quite right: using the colours as offsets directly means you can only represent positive vectors, but it's simple enough to subtract 0.5 so that you can represent negative vectors.)
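A minimal JavaScript sketch of that lookup, purely for illustration (the function and parameter names are assumptions): it reads a source image and a distortion map as ImageData and writes the displaced result into a third ImageData, with "scale" controlling the maximum displacement in pixels.

function applyDistortion(srcData, mapData, outData, width, height, scale) {
  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      var i = (y * width + x) * 4;
      var ox = (mapData.data[i] / 255 - 0.5) * scale;     // R channel -> x offset
      var oy = (mapData.data[i + 1] / 255 - 0.5) * scale; // G channel -> y offset
      var sx = Math.min(width - 1, Math.max(0, Math.round(x + ox)));
      var sy = Math.min(height - 1, Math.max(0, Math.round(y + oy)));
      var j = (sy * width + sx) * 4;
      // Copy RGBA from the displaced source position
      outData.data[i] = srcData.data[j];
      outData.data[i + 1] = srcData.data[j + 1];
      outData.data[i + 2] = srcData.data[j + 2];
      outData.data[i + 3] = srcData.data[j + 3];
    }
  }
}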
Now the only problem that remains is how to generate this distortion map, which is a different question altogether (any map would produce a distortion of some kind; obviously, building a proper liquify effect is quite complex, and I'll leave that to someone more qualified).
I think liquify works by altering a grid.
Imagine each pixel is defined by its location on the grid.
Now when the user clicks on a location and moves the mouse, they're changing the grid locations.
The new grid is then projected back into the 2D viewable space of the user.
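A rough sketch of that grid idea, under assumed names (grid is a 2D array of displaced {x, y} control points and cell is the grid spacing): each pixel's sampling position is bilinearly interpolated from the four surrounding control points.

function warpedPosition(x, y, grid, cell) {
  var gx = Math.floor(x / cell), gy = Math.floor(y / cell);
  var fx = x / cell - gx, fy = y / cell - gy; // fractional position inside the cell
  var p00 = grid[gy][gx], p10 = grid[gy][gx + 1];
  var p01 = grid[gy + 1][gx], p11 = grid[gy + 1][gx + 1];
  return {
    x: (1 - fy) * ((1 - fx) * p00.x + fx * p10.x) + fy * ((1 - fx) * p01.x + fx * p11.x),
    y: (1 - fy) * ((1 - fx) * p00.y + fx * p10.y) + fy * ((1 - fx) * p01.y + fx * p11.y)
  };
}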
Check this tutorial about a way to implement the liquify filter with JavaScript. Basically, in the tutorial, the effect is done by transforming the pixel Cartesian coordinates (x, y) to polar coordinates (r, α) and then applying Math.sqrt to r.
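For illustration, a minimal sketch of that polar remapping under assumed names (cx, cy and radius define the effect area; this is not the tutorial's code): convert the offset from the effect centre to polar coordinates, remap the radius with Math.sqrt, and sample the source image at the remapped position.

function remapPoint(x, y, cx, cy, radius) {
  var dx = x - cx, dy = y - cy;
  var r = Math.sqrt(dx * dx + dy * dy);
  if (r >= radius) return { x: x, y: y };  // outside the effect area: unchanged
  var alpha = Math.atan2(dy, dx);          // polar angle
  var r2 = Math.sqrt(r / radius) * radius; // remapped radius
  return { x: cx + r2 * Math.cos(alpha), y: cy + r2 * Math.sin(alpha) };
}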