For example, I have two layers: a background and an image. In my case I must show or hide the image when the zoom value (a simple float variable) changes.
The only solution I know is to keep two separate framebuffers, one for the background and one for the image, and simply not draw the image when it is not needed.
But is it possible to do this in an easier way?
Just don't pass the geometry to glDrawArrays() for the layer you want to hide when the zoom occurs. OpenGL ES completely re-renders everything every frame. You should have a glClear() call at the start of your frame render loop. So, removing something is done by just not sending its triangles. You might need to divide your geometry into separate lists for each layer.
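A minimal sketch of that idea, assuming each layer keeps its own vertex buffer, the attribute setup has already been done, and a zoom threshold decides whether the image layer is shown (backgroundVBO, imageVBO, positionAttrib and zoomThreshold are illustrative names, not anything from your code):

glClear(GL_COLOR_BUFFER_BIT);

/* Background layer: always drawn. */
glBindBuffer(GL_ARRAY_BUFFER, backgroundVBO);
glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, backgroundVertexCount);

/* Image layer: hiding it is just a matter of skipping its draw call. */
if (zoom < zoomThreshold) {
    glBindBuffer(GL_ARRAY_BUFFER, imageVBO);
    glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, imageVertexCount);
}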
I need to add the classic effect which consists in highlighting a 3D model by stroking its outline, just like this example (without the transparent gradient, just a solid stroke):
I found a way to do this here which seems pretty simple and easy to implement. The author plays with the stencil buffer to compute the model's shape, then draws the model in wireframe, and the thickness of the lines does the job.
This is my problem: the wireframes. I'm using OpenGL ES 2.0, which means I can't use glPolygonMode to change the render mode to GL_LINE.
And I'm stuck here; I can't find any simple alternative way to do it. The most relevant solution I have found so far is to implement the wireframe rendering myself, which is clearly not the easiest option. To draw my objects I'm using glDrawElements with GL_TRIANGLES as the primitive; I tried GL_TRIANGLE_STRIP as the primitive, but the result is definitely not the right one.
Any idea/trick to bypass the lack of glPolygonMode with OpenGL ES? Thanks in advance.
Drawing an outline or border for a model in OpenGL ES 2.0 is not as straightforward as the example you mentioned.
Method 1:
The easiest way is to do it in multiple passes.
Step 1 (shape pass): Render only the object, drawn in black, using the same camera settings, and fill all other pixels with a different color.
Step 2 (render pass): This is the usual render pass, where you actually draw the objects in their real colors. Every time you shade a fragment, test the color at the same pixel in the shape-pass image and check whether any of the eight neighbouring pixels differ in color. If all the neighbouring pixels have the same color, the fragment is not on a border; otherwise, add some color to draw the border.
Method 2: There are other techniques that can give you a similar effect without a separate image pass. You can draw the same object twice: first slightly scaled up in a single solid color, and then at its normal size in its real color.
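A rough sketch of Method 2 in OpenGL ES 2.0, combined with the stencil buffer so the enlarged copy only shows up around the model's silhouette. It assumes your framebuffer has a stencil attachment and that your vertex shader scales positions by a uniform; u_scale, u_color and indexCount are illustrative names for your own uniform locations and mesh data:

glEnable(GL_STENCIL_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

/* Pass 1: draw the model normally and mark its pixels in the stencil buffer. */
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glUniform1f(u_scale, 1.0f);
glUniform4f(u_color, 0.8f, 0.2f, 0.2f, 1.0f);   /* real color */
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);

/* Pass 2: draw a slightly enlarged copy in the outline color, but only
   where the stencil is still 0, i.e. outside the model's silhouette. */
glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glDepthMask(GL_FALSE);                           /* keep the hull from occluding anything */
glUniform1f(u_scale, 1.05f);                     /* controls the outline thickness */
glUniform4f(u_color, 1.0f, 1.0f, 1.0f, 1.0f);    /* solid outline color */
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
glDepthMask(GL_TRUE);
glDisable(GL_STENCIL_TEST);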
I'm trying to port a TclTK program I wrote 20 years ago to HTML5.
After hours of frustration, I learned that when you "scale" or "translate" HTML5's canvas element, it only applies to future drawings, not to items already on the canvas.
This is the opposite of TclTK, where items already on the canvas are scaled/translated instead.
Short of creating a draw/redraw loop (where I clear the canvas and redraw all the objects myself when I want to scale/translate), is there any way to make HTML5's canvas element behave like TclTK's?
Or am I missing something big?
The Canvas 2D Context is based around pixel-wise image manipulation; it is not a “retained mode” graphics interface like the one you are apparently familiar with. There literally is no record of your graphics for it to redraw. If you want to change the graphics, you have to redraw them somehow.
Everything is redrawn in the end (though the redrawing may be hidden from your code), but there are ways to reduce the amount of work you have to do. Here are some options, roughly in order of the amount of change you'll have to make to your code (and roughly in order of improved quality/performance):
Draw your graphics on the canvas, then scale and translate the canvas itself using CSS properties (not the width and height attributes of the canvas, which will clear it). This will rescale the image, possibly losing quality, since you're not drawing it anew optimized for the current scale.
Draw your graphics on the canvas, then export them into an ImageData or a data URL, then when needed redraw that onto the canvas. Again, may lose quality.
The above two are essentially kludges to keep using the canvas code you've already written. To get a proper system like the one you describe Tk as being, you want to:
Build your own scene graph: Create a set of objects like Circle, Line, etc. which represent graphics, and containers for those which store transform attributes like scale and position. Then write routines to walk this graph and execute the appropriate drawing commands, whenever you need to redraw.
Use SVG instead. SVG is a language for vector graphics which, in modern browsers, you can embed directly in your HTML, and manipulate in JavaScript just like you would the rest of your page. In SVG, you can simply change a scale attribute and get the change you expect to see.
(The previous option is basically reinventing a small amount of SVG.)
Those two functions are currently my bottleneck. I am working with very large bitmaps.
How can I improve their performance?
You could cache smaller versions of your bitmaps, created before you draw the first time, and then simply draw the downscaled copies instead of the full-blown 15-megapixel originals.
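For example, a downscaled copy could be built once with the plain Core Graphics C API and cached; something along these lines (the function name, target size and pixel format are illustrative, not a drop-in for your code):

CGImageRef MakeDownscaledCopy(CGImageRef source, size_t targetWidth, size_t targetHeight)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, targetWidth, targetHeight, 8,
                                             targetWidth * 4, colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    /* Resample the huge source image once, at good quality. */
    CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
    CGContextDrawImage(ctx, CGRectMake(0, 0, targetWidth, targetHeight), source);

    CGImageRef scaled = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return scaled;   /* cache this and draw it instead of the full-size bitmap */
}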
Then again, make sure you are only drawing what is necessary, i.e. in drawRect:(NSRect)rect only draw inside the rect (unless absolutely necessary). And try not to perform drawing outside of that method.
If you're drawing large background images with content in the foreground that moves, consider using a layer-backed NSView, adding a layer and setting its background image. You can then draw your content in other layers (or layer-backed NSViews) above the background layer, and the view will never need to redraw the background image because it is stored in the GPU's texture memory. Your current image is too large for a single CALayer (CALayers are limited to the maximum OpenGL texture size of 2048 x 2048), so you will probably need to break it up into tiles.
Otherwise, as #iolo mentioned, you should make sure that you only redraw the parts of the view that really need updating.
I couldn't post the image, but I use the "CGContextDrawRadialGradient" method to draw a shaded blue ball (~40 pixels in diameter), its shadow, and a "pulsing" white ring around the ball (inner and outer gradients on the ring). The ring starts at the edge of the blue ball and expands outward (the radius grows with a timer). The white ring fades as it expands outward, like a radio wave.
It looks great running in the simulator but runs incredibly slowly on the iPhone 4. The ring should pulse in about a second (as it does in the simulator), but it takes 15-20 seconds on the phone. I have been reading a little about CALayer and CGLayer, and some segments on gradient animation, but it isn't clear what I should be using for the best performance.
How do I speed this up? Should I put the ball on one layer and the expanding ring on another layer? If so, how do I know which layer to update in a drawRect?
Appreciate any guidance. Thanks.
The only way to speed something like that up is to pre-render it. Determine how many image frames you need to make it look good and then draw each frame into a context you created with CGBitmapContextCreate and capture the image using CGBitmapContextCreateImage. Probably the easiest way to animate the images would be to set the animationImages property of a UIImageView (although there are other options, like CALayer animations).
The newest Apple docs finally mention which pixel formats are supported in iOS so make sure to reference those when creating your bitmap context.
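A rough sketch of rendering one frame offscreen and capturing it with the Core Graphics calls mentioned above (the function name, size, pixel format and drawing code are placeholders for your own):

CGImageRef RenderRingFrame(size_t width, size_t height)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    /* ... draw the ball, its shadow and the ring for this frame here,
       e.g. with CGContextDrawRadialGradient(), exactly as in drawRect: ... */

    CGImageRef frame = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return frame;    /* collect one of these per frame of the pulse */
}

Call it once per frame of the pulse up front, wrap the results in UIImage objects, and hand the array to the UIImageView's animationImages.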
I am doing my iPhone graphics using OpenGL. In one of my projects, I need to use an image as a texture in OpenGL. The .png image is 512 x 512 in size, its background is transparent, and the image has a thick blue line through its center.
When I apply my image to a polygon in OpenGL, the texture appears as if the transparent part in the image is black and the thick blue line is seen as itself. In order to remove the black part, I used blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
Then the black part of the texture on the polygon is removed, and only the blue band is visible. That solves the problem.
But I want to add many such images and make many objects in OpenGL. I am able to do that, but the frame rate drops very low as I add more and more images to objects. When I comment out the blending, the frame rate is normal, but the images are not visible.
Since I do not have good fps, the graphics are a bit slow and I get a shaky effect.
So:
1) Is there any other method than blending to solve my problem?
2) How can I improve the frame rate of my OpenGL app? What steps need to be taken to implement my graphics properly?
If you want transparent parts on an object, the only way is to blend the pixel data for the triangle with what is currently in the buffer (which is what you are doing). Normally, with solid textures, the new pixel data for a triangle simply overwrites whatever was in the buffer (as long as it is closer, i.e. it passes the z-buffer test). But with transparency, the GPU has to start looking at the transparency of that part of the texture, then at what is behind it, all the way back to something solid. It then has to combine all of those overlapping layers of transparent geometry to produce the final image.
If all you want the transparency for is something like a simple tree sprite, removing the 'stuff' from the sides of the trunk and so on, then you may be better off providing more complex geometry that actually defines the shape of the trunk, and thus avoid transparency altogether.
Sadly, I don't think there is much you can do to speed up your FPS other than cutting down the amount of transparency you are calculating. You could even add an optimization that checks each image to see whether alpha blending can be turned off for it. Depending on how much you are trying to push through, that may save time in the long run.
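For example, one way to keep the blending cost down is to draw everything opaque with blending disabled and only enable it (back to front) for the images that actually need it, roughly like this (the two draw helpers are placeholders for your own drawing code):

glDisable(GL_BLEND);
drawOpaqueObjects();                          /* placeholder: draw calls for fully opaque geometry */

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  /* premultiplied alpha, as above */
glDepthMask(GL_FALSE);                        /* still depth-test, but don't write depth for transparent quads */
drawTransparentObjectsBackToFront();          /* placeholder: sorted transparent sprites */
glDepthMask(GL_TRUE);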