Difference between NURBS patches and surfaces

Some sources say that a NURBS patch is a specific type of NURBS surface, while other sources say that patches and surfaces are the same thing. Is there a distinction between the two? If so, what is it?
Thanks.

A NURBS surface is in general a piecewise surface, as it is formed by piecing multiple (rational) Bezier surfaces together. Each (rational) Bezier surface within a NURBS surface is often referred to as a "patch". So, while a "patch" is also a surface, it would not be strictly correct to say "NURBS patch" from a technical point of view, but I think we all understand what it means when such a term is used.
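To make the piecewise structure concrete, here is a minimal, unoptimized Python sketch of evaluating one point on a NURBS surface directly from the rational tensor-product definition (the function names and the plain Cox-de Boor recursion are illustrative; production code would evaluate only the basis functions of the knot span containing (u, v), i.e. work one rational Bezier patch at a time):

```python
import numpy as np

def bspline_basis(i, p, u, U):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline basis at u."""
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = right = 0.0
    if U[i + p] != U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    if U[i + p + 1] != U[i + 1]:
        right = (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * bspline_basis(i + 1, p - 1, u, U)
    return left + right

def nurbs_surface_point(u, v, p, q, U, V, P, W):
    """Evaluate one point of a NURBS surface.

    P: (n, m, 3) control points, W: (n, m) weights,
    U, V: knot vectors, p, q: degrees in u and v.
    (u, v) is assumed to lie strictly inside the valid knot range.
    The rational sum below is exactly a collection of rational Bezier
    patches, one per non-empty knot span in u and v.
    """
    n, m, _ = P.shape
    numerator = np.zeros(3)
    denominator = 0.0
    for i in range(n):
        Nu = bspline_basis(i, p, u, U)
        if Nu == 0.0:
            continue
        for j in range(m):
            Nv = bspline_basis(j, q, v, V)
            w = Nu * Nv * W[i, j]
            numerator += w * P[i, j]
            denominator += w
    return numerator / denominator
```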

Related

How to draw NURBS with OpenGL ES

In OpenGL, NURBS can be drawn using evaluators. But it seems evaluators were removed from the OpenGL ES spec to keep it lightweight. In that case, how can one draw NURBS using the OpenGL ES API?
You won't get around implementing the NURBS evaluation yourself, meaning you have to sample the curve or surface at discrete points and thus convert it to an ordinary line strip or triangle set, respectively. This can then be drawn with the usual vertex arrays/buffers, which should also be faster than evaluators or the GLU NURBS functions.
In OpenGL, NURBS curves are rendered in two steps: 1) evaluate some number of points (say 100 or 1000) on the curve using the mathematical formula; in OpenGL 4 this can be done on the GPU using SSBOs (Shader Storage Buffer Objects). 2) Render the evaluated points as a line strip using VBOs.
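For step 1, a minimal CPU-side sketch of the sampling (function name and parameters are illustrative; SciPy's BSpline handles the non-rational part, and the weights are folded in via homogeneous coordinates):

```python
import numpy as np
from scipy.interpolate import BSpline  # CPU evaluation; an SSBO/compute path does the same math on the GPU

def sample_nurbs_curve(ctrl_pts, weights, knots, degree, num_samples=256):
    """Sample a NURBS curve into a flat float32 array usable as a line-strip VBO.

    ctrl_pts: (n, 3), weights: (n,), knots: (n + degree + 1,), clamped.
    The rational curve is evaluated by lifting the control points to
    homogeneous coordinates (w*x, w*y, w*z, w) and dividing afterwards.
    """
    ctrl_h = np.hstack([ctrl_pts * weights[:, None], weights[:, None]])  # (n, 4)
    spline = BSpline(knots, ctrl_h, degree)
    # sample the valid parameter range [knots[degree], knots[n]]
    u = np.linspace(knots[degree], knots[-degree - 1], num_samples)
    pts_h = spline(u)                      # (num_samples, 4)
    pts = pts_h[:, :3] / pts_h[:, 3:4]     # perspective divide back to 3D
    return pts.astype(np.float32).ravel()  # upload with glBufferData, draw as GL_LINE_STRIP
```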
If you want to understand NURBS in more detail, there is a nice web app available here.

Any ideas on real life rocks 3d Reconstruction from Single View?

So in general, when we think of single-view reconstruction we think of working with planes, simple textures and so on... generally, simple objects from nature's point of view. But what about something like wet beach stones? I wonder if there are any algorithms that could help with reconstructing 3D geometry from a single picture of stones?
Shape from shading would be my first angle of attack.
Smooth wet rocks, such as those in the first image, may exhibit predictable specular properties allowing one to estimate the surface normal based only on the brightness value and the relative angle between the camera and the light source (the sun).
If you are able to segment individual rocks, like those in the second photo, you could probably estimate the parameters of the ground plane by making some assumptions about all the rocks in the scene being similar in size and lying on said ground plane.
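As a rough illustration of the constraint that shape-from-shading builds on, here is a sketch under strong simplifying assumptions that do not hold for the specular case above (Lambertian reflectance, a single known light direction, constant known albedo; the function name is illustrative):

```python
import numpy as np

def lambertian_normal_cone_angle(intensity, albedo=1.0):
    """Under a Lambertian model I = albedo * max(0, n . l), one brightness value
    does not fix the surface normal n; it only constrains the angle between n
    and the light direction l. Shape-from-shading solvers add smoothness and
    integrability terms to resolve the remaining ambiguity.
    intensity, albedo: scalars or arrays with values in [0, 1]."""
    cos_theta = np.clip(np.asarray(intensity) / albedo, 0.0, 1.0)
    return np.arccos(cos_theta)  # half-angle (radians) of the cone of possible normals around l
```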

What algorithms are out there for detecting lights and shadows and their parameters?

So I have a picture (not the best one).
I want to detect where the lights come from and what types of lights they are. What algorithm/framework can do such things with static images?
I mentioned shadows because, in general, if you can separate a shadow from a surface then you can probably determine the light type and its other parameters.
I mean a general shadow search, not only for the presented image.
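For what the most naive version of "separating a shadow" can look like, here is an assumption-heavy sketch (OpenCV-based, arbitrary thresholds, illustrative name; real shadow detection pairs shadowed and lit regions of the same material or uses learned models):

```python
import cv2          # OpenCV
import numpy as np

def naive_shadow_mask(bgr_image, value_thresh=0.35, sat_thresh=0.6):
    """Very naive shadow-candidate mask: pixels that are dark but not strongly
    saturated, since cast shadows tend to lower brightness while roughly
    preserving chromaticity. Returns a boolean mask of shadow candidates."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(np.float32)
    _, s, v = cv2.split(hsv)
    return (v / 255.0 < value_thresh) & (s / 255.0 < sat_thresh)
```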
With the image that you presented, there are so many sources of error that I'd be surprised if a trained human, let alone an algorithm, could do better than ±20% on any calculation. Here are the problems:
There isn't a known straight line anywhere, since everything is hand hewn. The best bet would be the I-beam above the doorway, but you don't know its orientation.
There's heavy barrel distortion toward the edges of the image, which is introduced by the lens and is characteristic of that lens at that zoom and focus. Without precise calibration of that, you can only guess at the degree of distortion.
The image is skewed with respect to the wall it is facing, but none of the walls appear to be all that planar anyway.
You want to know the source of the lights. Well, the obvious primary light is the sun, but latitude, longitude, time and date all affect that. Then there are the diffuse reflections, but unless you have the albedo of the materials you can only guess.
What are you hoping to derive from this image? Usually when doing lighting analysis, someone will put known reference targets of different, known reflectivity in the space to be analyzed. Working from a pocket snapshot camera on an unknown scene really limits what you can extrapolate.

Raytracing / Phong

I can't make out the difference between ray tracing and shading techniques like Phong or Gouraud.
For 3D modeling, does one have to choose between those algorithms, or can they both be implemented in the same algorithm?
Thank you.
Technically ray tracing really only determines visibility and distance. Recursively it can be used for reflections, refractions, and shadows (checking light source visibility).
Stochastic ray tracing or photon mapping can simulate light scattering.
Phong and Gouraud shading are reflection models applied at a surface.
It is common for people starting out in ray tracing to use a Phong or Gouraud lighting model. You can use those lighting models with any rendering system (scan conversion for example).
Phong is more like a surface property: it describes how light is scattered. See http://en.wikipedia.org/wiki/Brdf
Ray Tracing is an algorithm that simulates the process of light scattering. See http://en.wikipedia.org/wiki/Ray_tracing_%28graphics%29
You can use Phong-BRDFs in a realistic ray tracer to describe surfaces, and there also exists an approximation that is usable in rasterization.
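To make the separation concrete, here is a minimal Python sketch (illustrative names, grayscale light, no shadows or recursion): ray tracing answers "what does this ray hit, and where", and the Phong reflection model answers "what color does that hit point get".

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def intersect_sphere(origin, direction, center, radius):
    """Ray-tracing part: visibility and distance only. Returns nearest hit t or None."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c                 # direction is assumed normalized (quadratic a == 1)
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def phong_shade(point, normal, view_dir, light_pos,
                kd=0.7, ks=0.3, shininess=32.0):
    """Shading part: the Phong reflection model decides the color at the hit point."""
    l = normalize(light_pos - point)
    diffuse = kd * max(0.0, np.dot(normal, l))
    r = normalize(2.0 * np.dot(normal, l) * normal - l)  # l mirrored about the normal
    specular = ks * max(0.0, np.dot(r, view_dir)) ** shininess
    return diffuse + specular

# Usage: trace one ray, then shade the hit with Phong.
eye = np.array([0.0, 0.0, 0.0])
ray = normalize(np.array([0.0, 0.0, -1.0]))
center, radius = np.array([0.0, 0.0, -3.0]), 1.0
t = intersect_sphere(eye, ray, center, radius)
if t is not None:
    hit = eye + t * ray
    n = normalize(hit - center)
    shade = phong_shade(hit, n, -ray, light_pos=np.array([2.0, 2.0, 0.0]))
```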

Recommend some Bresenham's-like algorithm of sphere mapping in 2D?

I need the fastest sphere mapping algorithm. Something like Bresenham's line drawing one.
Something like the implementation that I saw in Star Control 2 (rotating planets).
Are there any already invented and/or implemented techniques for this?
I really don't want to reinvent the wheel. Please, help...
Description of the problem.
I have a place on the 2D surface where the sphere has to appear. The sphere (let it be an Earth) has to be textured with a detailed map and has to have the ability to scale and rotate freely. I want to implement it with a lookup map or some simple transformation function of coordinates: each pixel of the 2D image of the sphere is computed from a number of pixels of the cylindrical map of the sphere. This gives me the ability to implement antialiasing of the resulting image. I am also thinking about using mipmaps for the mapping when one pixel in the resulting picture corresponds to more than one pixel of the original map (for example, close to the poles of the sphere). Deep down I feel that this can be implemented with some trivial math. But all these thoughts are just my thoughts.
This question is a little bit related to this one: Textured spheres without strong distortion, but there were no answers available on my question.
UPD: I suppose that I have no hardware support. I want a cross-platform solution.
The standard way to do this kind of mapping is a cube map: the sphere is projected onto the 6 sides of a cube. Modern graphics cards support this kind of texture at the hardware level, including full texture filtering; I believe mipmapping is also supported.
An alternative method (which is not explicitly supported by hardware, but which can be implemented with reasonable performance by procedural shaders) is parabolic mapping, which projects the sphere onto two opposing parabolas (each of which is mapped to a circle in the middle of a square texture). The parabolic projection is not a projective transformation, so you'll need to handle the math "by hand".
In both cases, the distortion is strictly limited. Due to the hardware support, I recommend the cube map.
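For reference, the cube-map lookup itself is only a dominant-axis test plus a divide, which is easy to do in software as well; a sketch (the face ordering and sign conventions below are one common choice, and other APIs differ):

```python
def cubemap_face_uv(x, y, z):
    """Map a unit direction (x, y, z) to a cube-map face index and (u, v) in [0, 1]^2.
    Face order here: 0:+X, 1:-X, 2:+Y, 3:-Y, 4:+Z, 5:-Z."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                      # X face dominates
        face, sc, tc, ma = (0, -z, -y, ax) if x > 0 else (1, z, -y, ax)
    elif ay >= az:                                 # Y face dominates
        face, sc, tc, ma = (2, x, z, ay) if y > 0 else (3, x, -z, ay)
    else:                                          # Z face dominates
        face, sc, tc, ma = (4, x, -y, az) if z > 0 else (5, -x, -y, az)
    return face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0)
```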
There is a nice new way to do this: HEALPix.
Advantages over any other mapping:
The bitmap can be divided into equal parts (very little distortion)
Very simple, recursive geometry of the sphere with arbitrary precision.
Example image.
Did you take a look at Jim Blinn's articles "How to Draw a Sphere"? I do not have access to the full articles, but it looks like what you need.
I'm a big fan of StarconII, but unfortunately I don't remember the details of what the planet drawing looked like...
The first option is triangulating the sphere and drawing it with standard 3D polygons. This has definite weaknesses as far as verisimilitude is concerned, but it uses the available hardware acceleration and can be made to look reasonably good.
If you want to roll your own, you can rasterize it yourself. Foley, van Dam et al.'s Computer Graphics: Principles and Practice has a chapter on Bresenham-style algorithms; you want the section on "Scan Converting Ellipses".
For the point cloud idea I suggested in earlier comments: you could avoid runtime parameterization questions by preselecting and storing the (x,y,z) coordinates of surface points instead of a 2D map. I was thinking of partially randomizing the point locations on the sphere, so that they wouldn't cause structured aliasing when transformed (forwards, backwards, whatever 8^) onto the screen. On the downside, you'd have to deal with the "fill" factor -- summing up the colors as you draw them, and dividing by the number of points. Er, also, you'd have the problem of what to do if there are no points; e.g., if you want to zoom in with extreme magnification, you'll need to do something like look for the nearest point in that case.
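If you do roll your own, the per-pixel inverse mapping described in the question fits in a short routine; here is a NumPy sketch under simple assumptions (orthographic projection, equirectangular "cylindrical" source map, nearest-neighbor lookup, no lighting, illustrative names), to which antialiasing and mipmapping could be added:

```python
import numpy as np

def render_sphere(texture, size, spin=0.0):
    """Software planet in the Star Control 2 spirit: for each pixel inside the
    sphere's silhouette, reconstruct the 3D point on a unit sphere (orthographic
    view), convert it to (longitude, latitude), and sample an equirectangular
    texture of shape (H, W, 3). spin rotates the sphere about its poles (radians)."""
    h, w, _ = texture.shape
    out = np.zeros((size, size, 3), dtype=texture.dtype)
    ys, xs = np.mgrid[0:size, 0:size]
    x = (xs + 0.5) / size * 2.0 - 1.0                 # screen coords in [-1, 1]
    y = (ys + 0.5) / size * 2.0 - 1.0
    r2 = x * x + y * y
    inside = r2 <= 1.0                                # the sphere's circular silhouette
    z = np.sqrt(np.clip(1.0 - r2, 0.0, 1.0))          # front hemisphere
    lon = (np.arctan2(x, z) + spin) % (2.0 * np.pi)   # rotation = shifting longitude
    lat = np.arcsin(np.clip(-y, -1.0, 1.0))           # image y grows downward
    u = (lon / (2.0 * np.pi) * w).astype(int) % w
    v = ((0.5 - lat / np.pi) * h).astype(int).clip(0, h - 1)
    out[inside] = texture[v[inside], u[inside]]
    return out
```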
