Lately I've been using OpenCASCADE (PythonOCC, to be precise) for some CAD operations, including meshing shapes, and stumbled upon this class:
BRepMesh_IncrementalMesh.
I didn't find any hints on what the theLinDeflection and theAngDeflection parameters mean - and would like to know more about this.
I would appreciate any reading materials / hints / explanations on this subject.
These parameters control how "close" the mesh should be to the original surface.
In the docs this is described as:
Linear deflection limits the distance between a curve and its tessellation, whereas angular deflection limits the angle between subsequent segments in a polyline.
Please check OCCT documentation for a detailed description.
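For illustration, here is a minimal sketch assuming a recent pythonocc-core; the sphere and the deflection values are arbitrary placeholders:

```python
# A minimal sketch, assuming a recent pythonocc-core. The deflection
# values are arbitrary and should be tuned to your model's scale.
from OCC.Core.BRepPrimAPI import BRepPrimAPI_MakeSphere
from OCC.Core.BRepMesh import BRepMesh_IncrementalMesh

shape = BRepPrimAPI_MakeSphere(10.0).Shape()

# theLinDeflection = 0.1: no point of the mesh may lie farther than
# 0.1 model units from the true surface.
# theAngDeflection = 0.5: the direction of consecutive segments may
# differ by at most 0.5 radians.
mesh = BRepMesh_IncrementalMesh(shape, 0.1, False, 0.5, True)
```

Smaller deflection values give a finer (and heavier) mesh; larger values give a coarser one.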
I'm looking at one of the Google Arts Experiments [link to page] that renders many images in a browser with WebGL + GLSL. I'm puzzled, though, by the fact that this scene includes 1,167,858 vertices and 389,286 quad faces, which equals 3 vertices per quad face (we see these numbers if we run renderer.info.render in the console on this page).
My question is: How in GLSL can one build or represent a quad face given fewer than 4 vertices? I'd be very grateful for any suggestions others can offer on this question!
More generally, are there tools one can use to investigate the ways a given page is using vertices, faces, and textures? I'd love to be able to really study the above-linked page as thoroughly as possible, so any tools that can help with this task would be very helpful!
The renderer.info.render counters aren't necessarily 100 percent accurate. They only accumulate stats that can be gathered with minimal overhead, since stats gathering is enabled all the time, so take any discrepancies with a grain of salt. Also, that demo is using InstancedBufferGeometry. Geometry instancing works differently from classical rendering, in that the vertex stream is just used to parameterize each instance of the rectangle, so those 3 vertices are probably used to derive each rectangle's width/height/UV coordinates and position.
In summary, you can use instancing to draw a bunch of rectangles and only have to specify the parameters that make each one different, i.e. position, texture coordinates, scaling, etc.
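To make that concrete, here is a CPU-side Python sketch of the idea (not the demo's actual code); the per-instance data is made up:

```python
# A CPU-side sketch of the instancing idea described above: one
# template quad is expanded per instance from a few per-instance
# parameters, instead of storing 4 vertices per quad.
import numpy as np

# Template quad in local space; shared by every instance.
template = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

# Hypothetical per-instance stream: x, y, width, height for each image.
instances = np.array([
    [10.0, 20.0, 4.0, 3.0],
    [50.0, 60.0, 8.0, 6.0],
])

# What the vertex shader effectively computes for each instance:
# scale the template corners, then translate by the instance position.
for x, y, w, h in instances:
    corners = template * [w, h] + [x, y]
    print(corners)
```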
What is the best way (or is there already a Google way) to calculate the simple straight-line distance between two points, based on lat/lng, or even on postal/zip code if possible?
I found the answer myself, from somewhere else.
Yes, there is a native solution from Google already, at:
https://developers.google.com/maps/documentation/javascript/reference?hl=en-US#spherical
All I need to do is call the method:
'google.maps.geometry.spherical.computeDistanceBetween(latLngA, latLngB);'
(Of course, I also need to include the additional/required '.js'.)
"Best" is a pretty vague criterion. If you're able to assume the earth is a perfect sphere, then you want the simple formula for great circle distance. See for example the Wikipedia article. With this assumption your distance can be off by something less than half a percent.
The actual shape of the earth is a slightly oblate spheroid. The surface distance on this shape is more complicated to compute. See Ed Williams' work in JavaScript. Maybe he will let you use his code. If not, he gives relevant references.
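For reference, here is a minimal Python sketch of the spherical (haversine) great-circle formula; the coordinates in the example are hypothetical:

```python
# A minimal haversine sketch assuming a spherical earth of radius
# 6371 km; accurate to roughly half a percent, as noted above.
from math import radians, sin, cos, asin, sqrt

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lng points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical example: London to Paris, roughly 344 km.
print(great_circle_km(51.5074, -0.1278, 48.8566, 2.3522))
```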
A free solution is at http://ezcmd.com/apps/app_ezip_locator#ezip_locator_api
It can help you find the distance between two lat/long coordinates in miles or km.
Or, you could try http://ezcmd.com/apps/app_geo_postal_codes#geo_postal_codes_api
The "best" way depends on several things. Can you provide a little more background as to how accurate and/or what's the desired application? The google.maps.DirectionsService class will allow you to calculate the driving distance client side with javascript, but if you want an accurate straight line distance you could use postgresql + postgis server side. Calculating accurate distances with lat/lng can get tricky with the different projections of the earth depending on the range of points and distances involved.
I have a set of Points in 3D space.
The image below is an example:
I would like to turn these points into a surface. I only know the X, Y and Z values of the points.
For example, check out the image below, which shows a mesh of a human face generated from points in 3D space.
I have Googled a lot, but all I found were images and explanations,
with no practical aspect or practical example.
Is there a good algorithm that would help me solve this problem?
Thanks.
You want to do a Delaunay-Triangulation. See example application here: http://www.geometrylab.de/VoroGlide/.
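As a sketch of how that might look in Python (assuming SciPy, and assuming your points form a height field with one Z value per X,Y location, as in a face scan):

```python
# A minimal sketch, assuming SciPy and a height-field point cloud.
# A closed surface (e.g. a full head) would instead need a true
# surface-reconstruction method.
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical point cloud: an N x 3 array of X, Y, Z values.
points = np.random.rand(100, 3)

# Triangulate the XY projection; each row of tri.simplices holds the
# three point indices of one triangle.
tri = Delaunay(points[:, :2])

# Lifting the triangles back onto the 3D points gives the surface mesh.
faces = points[tri.simplices]   # shape: (n_triangles, 3, 3)
print(len(tri.simplices), "triangles")
```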
I am interested in reading about and understanding 2D mesh algorithms. A search on Google reveals a lot of papers and sources; however, most are too academic and not aimed at beginners.
So, would anyone here recommend any reading sources (suitable for beginners), or an open-source implementation that I can learn from? Thanks.
Also, compared to triangular mesh generation, I am more interested in quadrilateral meshes and mixed meshes (quad and tri combined).
I second David's answer regarding Jonathan Shewchuk's site as a good starting point.
In terms of open source software, it depends on what you are looking for exactly.
If you are interested in mesh generation, you can have a look at CGAL's code. Understanding the low-level parts of CGAL's code is too much for a beginner, but having a look at the higher-level algorithms can be quite interesting even for a beginner. Also note that CGAL's documentation is very detailed.
You can also have a look at TetGen, but its source code is monolithic and undocumented (it is more of an end-user program than a library, even if it can also be called directly from other programs). Still, it is fairly readable, and the user manual contains a short presentation of mesh generation, with some references.
If you are also interested in mesh processing, you can have a look at OpenMesh.
More information about your goals would definitely help in providing more relevant pointers.
The first link on your Google search takes you to Jonathan Shewchuk's site. This is not actually a bad place to start. He has a program called Triangle, which you can download for 2D triangulation. On that page there is a link to the references used in creating Triangle, including a link to a description of the triangulation algorithm.
There are several approaches to mesh generation. One of the most common is to create a Delaunay triangulation. Triangulating a set of points is fairly simple, and there are several algorithms that do it, including Watson's and Ruppert's, as used in Triangle.
When you want to create a constrained triangulation, where the edges of the triangulation match the edges of your input shape, it is a bit harder, because you need to recover certain edges.
I would start by understanding Delaunay triangulation. Then maybe look at some of the other meshing algorithms.
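For a hands-on start, here is a small sketch using the Python `triangle` package (a wrapper around Shewchuk's Triangle; assumes `pip install triangle`). The square outline is a made-up example:

```python
# A small sketch using the Python "triangle" package, which wraps
# Shewchuk's Triangle; the unit square below is a made-up input.
import triangle

# Four corners of a unit square, plus the segments that constrain the
# triangulation to follow the square's boundary.
square = {
    'vertices': [[0, 0], [1, 0], [1, 1], [0, 1]],
    'segments': [[0, 1], [1, 2], [2, 3], [3, 0]],
}

# 'p' = constrained triangulation of a planar straight-line graph,
# 'q30' = quality mesh with no angle below 30 degrees,
# 'a0.05' = maximum triangle area of 0.05.
mesh = triangle.triangulate(square, 'pq30a0.05')
print(len(mesh['triangles']), 'triangles')
```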
Some of the common topics that you will find in mesh generation papers are:
- Robustness, that is, how to deal with floating-point round-off errors.
- Mesh quality: ensuring the shapes of the triangles/tetrahedra are close to equilateral. Whether this is important depends on why you are creating the mesh; for analysis work it is very important.
- How to choose where to insert the nodes in the mesh to give a good mesh distribution.
- Meshing speed.
- Quadrilateral/hexahedral mesh generation. This is harder than using triangles/tetrahedra.
- 3D mesh generation, which is much harder than 2D, so a lot of the papers are on 3D generation.
Mesh generation is a large topic. It would be helpful if you could give some more information on which aspects (e.g. 2D or 3D) you are interested in. If you can give some idea of what you want to do, then maybe I can find some better sources of information.
I saw a question on reverse-projecting 4 2D points to derive the corners of a rectangle in 3D space. I have a somewhat more general version of the same problem:
Given either a focal length (which can be converted to arcseconds per pixel) or the intrinsic camera matrix (a 3x3 matrix that defines the properties of the pinhole camera model being used; it is directly related to focal length), compute the camera ray that goes through each pixel.
I'd like to take a series of frames, derive the candidate light rays from each frame, and use some sort of iterative solving approach to derive the camera pose from each frame (given a sufficiently large sample, of course)... All of that is really just massively-parallel implementations of a generalized Hough algorithm... it's getting the candidate rays in the first place that I'm having the problem with...
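For the candidate rays themselves, here is a minimal NumPy sketch under the standard pinhole model; the intrinsic values are hypothetical:

```python
# A minimal sketch, assuming a standard 3x3 pinhole intrinsic matrix K
# with focal lengths fx, fy and principal point (cx, cy) in pixels.
import numpy as np

fx = fy = 800.0          # hypothetical focal length in pixels
cx, cy = 320.0, 240.0    # hypothetical principal point

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def pixel_ray(u, v):
    """Direction (in camera coordinates) of the ray through pixel (u, v)."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)   # unit-length ray direction

print(pixel_ray(320, 240))  # principal point: looks straight along +Z
```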
A friend of mine found the source code from a university for the camera matching in PhotoSynth. I'd Google around for it, if I were you.
That's a good suggestion, and I will definitely look into it (PhotoSynth kind of re-sparked my interest in this subject, but I've been working on it for months for RoboChamps). However, it's a sparse implementation: it looks for "good" features (points in the image that should be easily identifiable in other views of the same image), and while I certainly plan to score each match based on how good the matched feature is, I want the full dense algorithm to derive every pixel... or should I say voxel?
After a little poking around, isn't it the extrinsic matrix that tells you where the camera actually is in 3-space?
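(For reference: under the usual world-to-camera convention x_cam = R·x_world + t, the camera centre in world coordinates is C = -R^T t. A minimal NumPy sketch with hypothetical values:)

```python
# Sketch: camera centre from a hypothetical extrinsic pose (R, t),
# assuming the world-to-camera convention x_cam = R @ x_world + t.
import numpy as np

R = np.eye(3)                  # hypothetical rotation (identity)
t = np.array([1.0, 2.0, 3.0])  # hypothetical translation

C = -R.T @ t                   # camera centre in world coordinates
print(C)                       # -> [-1. -2. -3.]
```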
I worked at a company that did a lot of this, but I always used the tools that the algorithm guys wrote. :)