I want to create a NURBS surface in OpenGL. I use a 40x48 grid of control points. In addition, I create indices to determine the order of the vertices.
In this way I created my surface out of triangles.
Just to avoid misunderstandings, I have
float[] vertices = x1,y1,z1,x2,y2,z2,x3,y3,z3... and
float[] indices = 1,6,2,7,3,8...
Now I don't want to draw triangles. I would like to interpolate the surface points. I thought about NURBS or B-splines.
The crux is:
To apply the NURBS algorithm I have to interpolate patch by patch. In my understanding, one patch is defined by, for example, points 1,6,2,7 or 2,7,3,8 (please see the picture).
First of all, I created the vertices and indices in order to use a vertex shader.
But actually it would be enough to draw it the old (fixed-function) way. In this case I would define vertices and indices as follows:
float[] vertices= v1,v2,v3... with v=x,y,z
and
float[] indices= 1,6,2,7,3,8....
OpenGL's utility library (GLU) provides a ready-to-use NURBS renderer, gluNewNurbsRenderer, so I can render a single patch easily.
Unfortunately, I fail at the point of how to stitch the patches together. I found an explanation in the Teapot example, but (maybe I have become too fixated on this) I can't transfer the solution to my case. Can you help?
You have a set of control points from which you want to draw a surface.
There are two ways you can go about this:
1. Calculate the vertices from the control points and pass them down the graphics pipeline with GL_TRIANGLES as the topology - this is what the Teapot example link you provided describes. Please remember that the graphics hardware needs triangulated data in order to draw. This link shows how to evaluate vertices from control points:
http://www.glprogramming.com/red/chapter12.html
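As a minimal sketch of this first approach (legacy OpenGL evaluators, as in the red book chapter linked above), assuming one 4x4 bicubic patch has been copied out of your 40x48 control grid into a hypothetical ctrlpoints array:

```cpp
// Sketch: evaluate one 4x4 bicubic patch with legacy OpenGL evaluators
// (red book, chapter 12). ctrlpoints holds one patch of the control grid.
#include <GL/gl.h>

GLfloat ctrlpoints[4][4][3]; // fill from the 40x48 grid, patch by patch

void drawPatch()
{
    // Define a 2D evaluator: u and v both run 0..1, 4 control points
    // in each direction, 3 floats per point (strides 3 and 12).
    glMap2f(GL_MAP2_VERTEX_3,
            0.0f, 1.0f, 3, 4,     // u range, u stride, u order
            0.0f, 1.0f, 12, 4,    // v range, v stride, v order
            &ctrlpoints[0][0][0]);
    glEnable(GL_MAP2_VERTEX_3);

    // Let GL generate a 20x20 grid of surface points and draw it filled.
    glMapGrid2f(20, 0.0f, 1.0f, 20, 0.0f, 1.0f);
    glEvalMesh2(GL_FILL, 0, 20, 0, 20);
}
```

Adjacent patches join without gaps as long as they share their boundary row/column of control points; for a smooth (tangent-continuous) join, the control points on either side of a shared boundary also have to line up with it.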
2. Prepare patches of your control points and use the tessellation shaders to triangulate and stitch those patches.
For this you submit each set of control points as a patch, using the GL_PATCHES primitive, and pass it to the tessellation control shader. There you specify the outer and inner tessellation levels you want, i.e. how finely the patch should be subdivided. Based on those levels, your patch will be tessellated by another fixed-function stage known as the primitive generator.
The generated vertices are then passed to the tessellation evaluation shader, in which you fine-tune them, i.e. compute the actual surface position for each generated (u,v) coordinate.
I would suggest you keep your VBO and IBO as you have them, but filled with the control points, and use the GL_PATCHES primitive when drawing; a minimal host-side sketch follows below. There are also tutorials on how to use tessellation shaders to draw NURBS surfaces.
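A minimal host-side sketch of the second approach, assuming an OpenGL 4.0+ context, a loader such as GLEW, 4x4 (16-point) patches, and a program that already contains tessellation control and evaluation shaders; the names are placeholders:

```cpp
// Sketch: submitting 16-point (4x4) patches to the tessellation pipeline.
// The bound program is assumed to contain a tessellation control shader
// (which writes gl_TessLevelInner/Outer) and a tessellation evaluation
// shader (which evaluates the surface position for every generated (u,v)).
#include <GL/glew.h> // or whatever GL loader you use

void drawPatches(GLuint program, GLuint vao, GLsizei indexCount)
{
    glUseProgram(program);
    glBindVertexArray(vao);                   // VBO + IBO with control points

    glPatchParameteri(GL_PATCH_VERTICES, 16); // 16 control points per patch

    // Indices must be grouped so that every consecutive 16 indices
    // describe one 4x4 patch of the control grid.
    glDrawElements(GL_PATCHES, indexCount, GL_UNSIGNED_INT, nullptr);
}
```

To keep neighbouring patches stitched without cracks, make sure patches that share an edge also share that edge's row/column of control points and end up with the same outer tessellation level along it.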
Note: the second method I have suggested is kind of tricky and you will have to read a lot of research papers.
If you don't want to go with the modern pipeline, then I suggest going with option 1.
The H3 library uses a Dymaxion orientation, which means that the hexagon grid is rotated to an unusual angle relative to the equator/meridian lines. This makes sense when modelling the Earth, as the twelve pentagons then all lie in the water, but would be unnecessary when using the library to map other spheres (like the sky or other planets). In this case it would be more intuitive and aesthetically pleasing to align the icosahedron to put a pentagon at the poles and along the meridian. I'm just trying to work out what I would need to change in the library to achieve that? It looks like I would need to recalculate the faceCenterGeo and faceCenterPoint tables in faceijk.c, but do I need to recalculate faceAxesAzRadsCII as well? I don't really understand what that latter table is...
Per this related answer, the main changes you'd need for other planets are to change the radius of the sphere (only necessary if you want to calculate distances or areas) and, as you ask, the orientation of the icosahedron. For the latter:
faceCenterGeo defines the icosahedron orientation in lat/lng points
faceCenterPoint is a table derived from faceCenterGeo that defines the center of each face as 3d coords on a unit sphere. You could create your own derivation using generateFaceCenterPoint.c
faceAxesAzRadsCII is a table derived from faceCenterGeo that defines the angle from each face center to each of its three vertices. This does not have a generation script, and TBH I don't know how it was originally generated. It's used in the core algorithms translating between grid coordinates and geo coordinates, however, so you'd definitely need to update it.
I'd argue strongly that taking this approach is a Bad Idea:
It's a fair amount of work - not (just) the calculations, but recompiling the code, maintaining a fork, possibly writing bindings in other languages for your fork, etc.
You'd break most tests involving geo input or output, so you'd be flying blind as to whether your updated code is working as expected.
You wouldn't be able to take advantage of other projects built on H3, e.g. bindings for other languages and databases.
If you want to re-orient the geometry for H3, I'd suggest doing exactly that - apply a transform to the input geo coordinates you send to H3, and a reverse transform to the output geo coordinates you get from H3. This has a bunch of advantages over modifying the library code:
It's a lot easier
You could continue to use the maintained library
You could apply these transformations outside of the bindings, in the language of your choice
Your own code is well-separated from 3rd-party library code
There's probably a very small performance penalty to this approach, but in almost all cases that's a tiny price to pay compared to the difficulties you avoid.
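As a rough sketch of that wrap-around approach: the tilt angle below is an arbitrary placeholder (pick the rotation that moves your desired pentagon position onto one of H3's icosahedron vertices), the latLngToCell/cellToLatLng names assume H3's v4 C API, and error handling is omitted.

```cpp
// Sketch: re-orient H3 by rotating coordinates instead of patching the
// library. Rotation is a tilt about the Y axis of the unit sphere.
#include <cmath>
#include <h3/h3api.h> // header path may differ depending on how H3 is installed

static const double POLE_TILT = 0.55; // radians, placeholder value

// lat/lng (radians) -> unit vector, rotate about Y, back to lat/lng.
static LatLng rotate(LatLng in, double angle)
{
    double x = std::cos(in.lat) * std::cos(in.lng);
    double y = std::cos(in.lat) * std::sin(in.lng);
    double z = std::sin(in.lat);
    double xr =  x * std::cos(angle) + z * std::sin(angle);
    double zr = -x * std::sin(angle) + z * std::cos(angle);
    LatLng out;
    out.lat = std::asin(zr);
    out.lng = std::atan2(y, xr);
    return out;
}

// Forward transform before every call into H3 ...
H3Index cellFor(LatLng geo, int res)
{
    LatLng rotated = rotate(geo, POLE_TILT);
    H3Index cell = 0;
    latLngToCell(&rotated, res, &cell);
    return cell;
}

// ... and the inverse transform on everything H3 gives back.
LatLng centerFor(H3Index cell)
{
    LatLng rotated;
    cellToLatLng(cell, &rotated);
    return rotate(rotated, -POLE_TILT);
}
```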
I have a set of binary images in which I need to find the cross (examples attached). I use findContours to extract borders from the binary image, but I can't figure out how to determine whether this shape (border) is a cross or not. Maybe OpenCV has some built-in methods which could help to solve this problem. I thought about solving this problem using machine learning, but I think there is a simpler way to do this. Thanks!
Viola-Jones object detection could be a good start. Though the main usage of the algorithm (AFAIK) is face detection, it was actually designed for any object detection, such as your cross.
The algorithm is a machine-learning based algorithm (so you will need a set of images classified as "crosses" and a set classified as "not crosses"), and you will need to identify the significant "features" (patterns) that will help the algorithm recognize crosses.
The algorithm is implemented in OpenCV as cvHaarDetectObjects()
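For reference, a minimal sketch using the newer C++ wrapper of the same detector (cv::CascadeClassifier); cross_cascade.xml is a placeholder for a cascade you would train yourself on cross / non-cross samples.

```cpp
// Sketch: detecting crosses with a Haar/LBP cascade trained on your own
// cross vs. non-cross samples ("cross_cascade.xml" is a placeholder name).
#include <opencv2/objdetect.hpp>
#include <vector>

std::vector<cv::Rect> findCrosses(const cv::Mat& binaryImage)
{
    cv::CascadeClassifier cascade("cross_cascade.xml");
    std::vector<cv::Rect> crosses;
    // scaleFactor 1.1 and minNeighbors 3 are the usual starting values.
    cascade.detectMultiScale(binaryImage, crosses, 1.1, 3);
    return crosses;
}
```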
From the original image, let's say you've extracted sets of polygons that could potentially be your cross. Assuming the whole cross is visible, to the extent that every edge can be distinguished as having a length, you could try the following.
Reject all polygons that do not have exactly the 12 vertices required to form your cross.
Re-order the vertices such that the shortest edge length is first.
Create a best-fit perspective transformation that maps your vertices onto a cross of uniform size.
Examine the residuals generated by using this transformation to project your cross back onto the uniform cross, where the residual for any given point is the distance between the projected point and the corresponding uniform point.
If all the residuals are within your defined tolerance, you've found a cross.
Note that this works primarily due to the simplicity of the geometric shape you're searching for. Your contours will also need to have noise removed for this to work, e.g. each line within the cross needs to be converted to a single simple line.
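A rough sketch of that procedure; the canonical cross points, the thresholds, and the shortest-edge re-ordering are assumptions you would fill in and tune for your images.

```cpp
// Sketch: keep only contours that simplify to the 12 corners of a cross,
// then check how well a best-fit perspective transform maps them onto a
// canonical cross. Thresholds are placeholder values.
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <cmath>
#include <vector>

bool looksLikeCross(const std::vector<cv::Point>& contour,
                    const std::vector<cv::Point2f>& canonicalCross, // 12 ordered pts
                    double maxResidual = 2.0)
{
    // 1. Simplify the noisy contour to straight segments.
    std::vector<cv::Point> approx;
    cv::approxPolyDP(contour, approx, 0.02 * cv::arcLength(contour, true), true);

    // 2. A cross has exactly 12 corners.
    if (approx.size() != 12)
        return false;

    // (Re-order `approx` so the shortest edge comes first, matching the
    //  canonical ordering - omitted here for brevity.)
    std::vector<cv::Point2f> pts(approx.begin(), approx.end());

    // 3. Best-fit perspective transform onto the canonical cross.
    cv::Mat H = cv::findHomography(pts, canonicalCross);
    if (H.empty())
        return false;

    // 4. Residuals: project the contour and compare point by point.
    std::vector<cv::Point2f> projected;
    cv::perspectiveTransform(pts, projected, H);
    for (size_t i = 0; i < projected.size(); ++i) {
        cv::Point2f d = projected[i] - canonicalCross[i];
        if (std::sqrt(d.x * d.x + d.y * d.y) > maxResidual)
            return false;
    }
    return true;
}
```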
Depending on your requirements, you could try some local feature detector like SIFT or SURF. Check OpenSURF which is an interesting implementation of the latter.
After some days of struggle, I came to the conclusion that the only robust way here is to use SVM + HOG. That's all.
You could erode each blob and analyze how its number of pixels goes down. No matter the rotation or scaling of the crosses, the count should always go down at the same ratio, except when you're closing in on the remaining center. Also, when the blob is small enough you should expect it to be at the center of the original blob. You won't need any machine learning algorithm or training data to solve this.
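A rough sketch of that erosion idea, assuming one blob per binary image; the kernel size, the number of steps, and how you compare the resulting profile against a reference cross are all placeholders to tune.

```cpp
// Sketch: erode the blob repeatedly and record how its area shrinks.
// A cross loses area at a roughly constant rate until only its center
// remains; compare the successive ratios against a reference profile.
#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<int> erosionProfile(const cv::Mat& binaryBlob, int steps = 10)
{
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
    cv::Mat current = binaryBlob.clone();
    std::vector<int> areas;
    for (int i = 0; i < steps && cv::countNonZero(current) > 0; ++i) {
        areas.push_back(cv::countNonZero(current)); // area before this step
        cv::erode(current, current, kernel);
    }
    return areas; // ratios areas[i+1]/areas[i] characterize the shape
}
```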
I'm adding an OpenGL renderer to my 2D game engine and I want to know whether there is a way to apply an MVP matrix to only part of the vertices in a single draw call?
I'm planning to group draw calls by texture, so I'll pass a buffer with many vertices and texcoords, and now I want to apply different rotation angles to different quads. Is there a way to accomplish this in the shader, or should I give up on the MVP matrix in the shader and do the same thing on the CPU?
EDIT: What about adding 3 float attributes (rotation and rot_center.xy) per vertex?
What's better for performance:
(1) doing CPU rotation?
(2) providing 3 more floats per vertex
(3) separating draw calls?
Is there any other option?
Here is a possibility:
Do the rotation in the vertex shader. Pass in the information (angle?) needed to create the rotation matrix as a vertex attribute.
Pass in a vertex attribute (ubyte) that is effectively a per-vertex boolean flag. Rotation in #1 will be executed only if the bool is set.
Not sure if the above will work for you from a performance/storage perspective.
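A possible sketch of the two points above; the attribute names/locations and the GLSL version are assumptions, and the shader is embedded here as a C++ string literal (on the C++ side the extra attributes would be fed with glVertexAttribPointer).

```cpp
// Sketch: a vertex shader that rotates each vertex around a per-vertex
// center by a per-vertex angle, gated by a per-vertex flag.
static const char* kVertexShader = R"(
#version 330 core
layout(location = 0) in vec2  aPos;
layout(location = 1) in vec2  aTexCoord;
layout(location = 2) in float aAngle;      // rotation angle, radians
layout(location = 3) in vec2  aRotCenter;  // rotation center
layout(location = 4) in float aUseRot;     // 0 or 1 (e.g. normalized ubyte)

uniform mat4 uMvp;
out vec2 vTexCoord;

void main()
{
    vec2 p = aPos;
    if (aUseRot > 0.5) {
        float s = sin(aAngle), c = cos(aAngle);
        vec2 d = aPos - aRotCenter;
        p = aRotCenter + vec2(c * d.x - s * d.y, s * d.x + c * d.y);
    }
    vTexCoord = aTexCoord;
    gl_Position = uMvp * vec4(p, 0.0, 1.0);
}
)";
```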
I think that, while it is a good thing to group draw calls for many performance reasons, changing your code to satisfy a basic requirement such as rotation is not a good idea.
Draw-call batching is a good thing, but if you are forced to keep an additional attribute (you certainly cannot do it with uniforms, since you wouldn't have the per-entity information), it is not worth it.
An additional attribute means much more memory bandwidth usage, which is usually the main performance killer on today's systems.
Draw-call batching, on the other hand, is important but not always critical; it depends on many factors such as:
the GPU OpenGL driver optimization
the GPU tile configuration
the number of shapes/draw calls we are talking about (if you have 20 quads on the screen, why bother with batching? :) )
In other words, it is often much more convenient to drop extreme batching in favor of simplicity/maintainability and to avoid fancy solutions for simple requirements such as rotation.
I hope this helps in some way.
Use two different objects, that is all!
There is no other workaround for rotating only part of an object.
Example:
A game with a tank, where you want to rotate the turret and the body separately. As in your case, these two are treated as separate objects.
I am trying to make a point-cloud mapping of a user with multiple Kinects in Processing. I capture the user's front and back with 2 Kinects on opposite sides and generate both point clouds.
The trouble is that the point clouds' X/Y/Z are not synchronized; it just puts the two of them on screen and it surely looks messy. Is there a way to calculate or compare between them, to translate the second point cloud to "join" the first? I could translate the position manually, but if I move the sensors it will go off again.
Supposing all the Kinects are stationary, I guess you would have to go in this order:
decide on which Kinect to use as a global reference,
get parameters for a 3D transformation for each of the other Kinects - I'd try to use PMatrix3D and applyMatrix(), although it may be slow,
apply the transformations on to each of the other Kinects' point clouds and draw the clouds.
I don't (yet) know how to get the transformation parameters for a Procrustes transformation, but assuming they won't change, you'd probably have to set up multiple reference points, maybe by displaying the point clouds from each pair of Kinects and registering the points you know are the same in both point clouds. After getting enough of them, construct a PMatrix3D and apply it inside push/popMatrix.
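Whatever language the registration ends up in, the last step boils down to multiplying every point of a secondary cloud by one fixed 4x4 matrix before drawing (which is what wrapping the drawing in pushMatrix()/applyMatrix()/popMatrix() does in Processing). A plain, language-agnostic sketch of that per-point transform, where the matrix values are placeholders you would obtain from your registration step (manual reference points or ICP):

```cpp
// Sketch: align the second Kinect's cloud with the reference cloud by
// applying a fixed rigid transform to every point before merging/drawing.
#include <array>
#include <vector>

struct Point3 { float x, y, z; };

// Column-vector convention p' = M * p, with M stored row-major here.
Point3 transform(const std::array<float, 16>& m, const Point3& p)
{
    return {
        m[0] * p.x + m[1] * p.y + m[2]  * p.z + m[3],
        m[4] * p.x + m[5] * p.y + m[6]  * p.z + m[7],
        m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11],
    };
}

void alignSecondCloud(std::vector<Point3>& secondCloud,
                      const std::array<float, 16>& secondToReference)
{
    for (Point3& p : secondCloud)
        p = transform(secondToReference, p);
}
```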
This is the approach used by this guy: http://www.youtube.com/watch?v=ujUNj1RDL4I
An alternative approach would be to use an Iterative Closest Point algorithm and construct 3D transform from its output. I'd really like an ICP or PCL library for Processing, if anyone knows a good one.
I am a bit confused: I need to move my basic square. Should I use my translation matrix or just change the object's vertices? Which one is accurate?
In my vertex shader I use
gl_Position = myPMVMatrix * a_vertex;
and I also use a VBO.
From an accuracy point of view both methods are about equally good.
From a performance point of view, it's about minimizing bottlenecks:
For a single square you are probably not able to measure any difference, but when you think about 1 million squares (or triangles), things get a little more complicated:
If all of your triangles change position relative to each other, you are probably better off changing the VBO, because you can push the data directly to the graphics card's memory, instead of making a million OpenGL calls (which are very slow).
If all your triangles stay at the same position relative to each other (as is the case in a normal 3D model), you should just change the transformation matrix. In this case you don't have to push the data onto the graphics memory again; you only have one function call, and you are transferring only a few bytes of data.
Depending on your application, it may be a good choice to divide your triangles into different categories and update them appropriately.
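To make the two cases concrete, a small sketch in which the uniform location, buffer handle, and data are placeholders: a rigid move only needs one uniform update, while per-vertex changes mean re-uploading buffer data.

```cpp
// Sketch: the two update strategies side by side.
#include <GL/glew.h> // or whatever GL loader you use
#include <vector>

// (a) Rigid motion: keep the VBO untouched, update a single uniform.
void moveWithMatrix(GLint uMvpLocation, const float mvp[16])
{
    glUniformMatrix4fv(uMvpLocation, 1, GL_FALSE, mvp); // a few bytes per frame
}

// (b) Per-vertex motion: rewrite the vertex data in GPU memory.
void moveWithVertices(GLuint vbo, const std::vector<float>& vertices)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0,
                    vertices.size() * sizeof(float), vertices.data());
}
```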
Don't move objects by changing all of the vertices! What about a complex model with thousands of vertices? Even if it's a simple square, don't adopt such bad practice. That's exactly what transformation matrices are for. You are already using a transformation matrix in your shader code; from the naming I assume it's a pre-multiplied model-view-projection matrix. So it consists of the model matrix, which positions the object in world space (this is usually where your translation goes), the view matrix, which positions the world in eye/camera space (sometimes model and view matrix are combined into a single modelview matrix, as in fixed-function GL), and the projection matrix, which does any kind of perspective projection and/or transformation to the clipping volume, all three multiplied together as P * V * M. If there are still some questions on these transformation matrices and their use, consult some literature on 3D transformations or just your favourite OpenGL tutorial.