Android OpenGL ES rotating screen

I use a Gaussian blur to achieve a lighting effect.
On Android, after rotating the screen, the first half of the screen renders normally, but the second half is jagged.
The same problem has also appeared on some iOS phones.
I use an FBO with texture, depth, and stencil attachments, and I recreate them with the new dimensions each time the screen rotates.
What could be causing this?
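For reference, the recreation follows the usual resize-and-rebuild pattern, roughly like this (a sketch in Swift against OpenGL ES; the helper name is made up, the ES 3.0 packed depth/stencil format is an assumption, and the same GL calls exist on Android):

import OpenGLES

// Rebuild all size-dependent GL objects whenever the drawable size changes
// (e.g. on rotation). Assumes an ES 3.0 context for GL_DEPTH24_STENCIL8;
// under ES 2.0 use the _OES extension equivalents.
func recreateOffscreenTargets(width: GLsizei, height: GLsizei,
                              fbo: inout GLuint, colorTex: inout GLuint,
                              depthStencil: inout GLuint) {
    glDeleteFramebuffers(1, &fbo)
    glDeleteTextures(1, &colorTex)
    glDeleteRenderbuffers(1, &depthStencil)

    // Color attachment: a texture with the new dimensions.
    glGenTextures(1, &colorTex)
    glBindTexture(GLenum(GL_TEXTURE_2D), colorTex)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)

    // Combined depth/stencil attachment with the same dimensions.
    glGenRenderbuffers(1, &depthStencil)
    glBindRenderbuffer(GLenum(GL_RENDERBUFFER), depthStencil)
    glRenderbufferStorage(GLenum(GL_RENDERBUFFER), GLenum(GL_DEPTH24_STENCIL8), width, height)

    glGenFramebuffers(1, &fbo)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
    glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                           GLenum(GL_TEXTURE_2D), colorTex, 0)
    glFramebufferRenderbuffer(GLenum(GL_FRAMEBUFFER), GLenum(GL_DEPTH_STENCIL_ATTACHMENT),
                              GLenum(GL_RENDERBUFFER), depthStencil)

    // A glViewport that doesn't match the new size is a common cause of
    // half-correct frames after rotation.
    glViewport(0, 0, width, height)
}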

Related

SceneKit - Fix screen frame to camera edges

I'm working on an app that renders a 3D scene simulating a real space inside an iPhone, making its screen appear to be a hollow box, as seen in the sketch below (note the order of the camera positions).
The problem is how to calculate the camera parameters so that the box looks genuinely fixed to the screen edges.
Is this feasible through SceneKit?
In this configuration the camera's zNear plane corresponds to the screen of the iPhone. From that you can derive the camera's z position from its field of view and the screen's dimensions (see here).
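As a sketch of that derivation (assuming the field of view is vertical and that you can express the physical screen height in scene units; the helper name is made up):

import SceneKit

// The frustum must exactly span the screen rectangle at the zNear plane:
// tan(fov / 2) = (screenHeight / 2) / distance. Solving for distance:
func cameraDistance(forScreenHeight screenHeight: CGFloat,
                    camera: SCNCamera) -> CGFloat {
    let fovRadians = camera.fieldOfView * .pi / 180   // fieldOfView is in degrees
    return (screenHeight / 2) / tan(fovRadians / 2)
}

Placing the camera that far in front of the screen-plane rectangle, looking straight at its center, makes the frustum edges coincide with the screen edges.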

Set the screen for universal devices (iPhone and iPad)

I made a game that I want to run on both iPhone and iPad (a universal game). I set the scene size to 640x1136 in the gamescene.sks file. When I play the game on any iPhone, the screen looks perfect. But when I play on an iPad, the game screen looks bigger than the iPad screen. How can I set the screen for iPad?
It all comes down to the scaleMode you are using.
You have 4 options.
The first one resizes your scene's frame; the other three keep the scene at the size you specified and scale it to fit the screen.
.Resize, as stated, resizes your scene to fit your screen. If your initial scene is 10x10, it becomes 320x480 on an iPhone 4s.
.Fill (the default), which keeps the coordinates of your scene the same but stretches it in the x and y directions. If your game is in landscape and you design for a 4:3 screen, you will get a fat, stretched effect on a 16:9 screen.
.AspectFill, which keeps the aspect ratio the same but scales your scene until the farthest borders are hit. If you are in landscape with a 4:3 scene, it stretches to the left and right borders, cropping the top and bottom to maintain the ratio.
.AspectFit, which keeps the aspect ratio the same but scales your scene until the nearest borders are hit. If you are in landscape with a 4:3 scene, it stretches to the top and bottom borders, leaving black bars on the left and right.
If you want your game to appear the same size on all screens, your best bet is to use .AspectFill and plan your game around the cropping, as in the snippet below. (Basically, do not put anything important in an area that gets cropped.)
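For example (a minimal sketch; skView stands in for the SKView in your view controller, and "GameScene" for the .sks file name):

import SpriteKit

// skView: your SKView (assumed); scene name assumed to be GameScene.sks.
if let scene = SKScene(fileNamed: "GameScene") {
    scene.scaleMode = .aspectFill   // modern Swift spelling of .AspectFill
    skView.presentScene(scene)
}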

Reusing parts of the previous frame when drawing 2D with WebGL

I'm using WebGL to do something very similar to image processing. I draw a single quad in an orthographic projection and perform some processing in the fragment shaders. There are two steps: in the first, the original texture from the quad is processed in the fragment shader and written to a framebuffer; a second step processes that data into the final canvas.
The user can zoom and translate the image. For this to work smoothly, I need to hit 60 fps, or it gets noticeably sluggish. This is no issue on desktop GPUs, but on mobile devices with much weaker hardware and higher resolutions it becomes problematic.
The translation case is the most noticeable and problematic: the user drags the mouse pointer or a finger across the screen and the image lags behind. But translation is also a case where I could, in theory, reuse a lot of data from the previous frame.
Ideally I'd copy the canvas from the last frame, translate it by x,y pixels, and then run the full image-processing fragment shaders only on the parts of the canvas that aren't covered by the translated previous canvas.
Is there a way to do this in WebGL?
If you want to access the previous frame, you need to draw into a texture attached to a framebuffer, then draw that texture into the canvas, translated.
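Roughly like this (sketched in Swift against OpenGL ES, which WebGL's API mirrors one-to-one: gl.createTexture, gl.createFramebuffer, gl.framebufferTexture2D, and so on; the canvas size is an assumption):

import OpenGLES

// One texture-backed framebuffer holds the fully processed previous frame.
var cacheTex: GLuint = 0
var cacheFBO: GLuint = 0
let canvasWidth: GLsizei = 1024, canvasHeight: GLsizei = 768   // assumed size

glGenTextures(1, &cacheTex)
glBindTexture(GLenum(GL_TEXTURE_2D), cacheTex)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, canvasWidth, canvasHeight, 0,
             GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_NEAREST)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_NEAREST)

glGenFramebuffers(1, &cacheFBO)
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), cacheFBO)
glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                       GLenum(GL_TEXTURE_2D), cacheTex, 0)

// Per frame, conceptually:
// 1. Render into cacheFBO, running the expensive shaders only on the strip of
//    pixels newly exposed by the (dx, dy) translation.
// 2. Bind the default framebuffer and draw cacheTex as a full-screen quad
//    offset by (dx, dy).
// Note: a texture cannot be sampled while it is also the render target, so in
// practice you keep two such textures and ping-pong between them each frame.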

Best way to speed up multiple uses of "CGContextDrawRadialGradient" in drawRect?

I couldn't post the image, but I use the "CGContextDrawRadialGradient" method to draw a shaded blue ball (~40 pixels in diameter), its shadow, and a "pulsing" white ring around the ball (inner and outer gradients on the ring). The ring starts at the edge of the blue ball and expands outward (the radius grows with a timer). The white ring fades as it expands, like a radio wave.
It looks great running in the simulator but runs incredibly slowly on an iPhone 4. The ring should pulse in about a second (as it does in the simulator), but takes 15-20 seconds on the phone. I have been reading a little about CALayer and CGLayer and some segments on gradient animation, but it isn't clear what I should be using for the best performance.
How do I speed this up? Should I put the ball on one layer and the expanding ring on another? If so, how do I know which layer to update in drawRect?
I'd appreciate any guidance. Thanks.
The only way to speed something like that up is to pre-render it. Determine how many image frames you need to make it look good, then draw each frame into a context created with CGBitmapContextCreate and capture the image using CGBitmapContextCreateImage. Probably the easiest way to animate the images is to set the animationImages property of a UIImageView (although there are other options, such as CALayer animations).
The newest Apple docs finally list which pixel formats are supported on iOS, so make sure to reference those when creating your bitmap context.
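A sketch of that approach in Swift, using UIGraphicsImageRenderer as a modern stand-in for the manual CGBitmapContextCreate/CGBitmapContextCreateImage pair (the frame count, sizes, and radii are illustrative):

import UIKit

// Pre-render the expanding, fading ring into an array of frames.
func makeRingFrames(count: Int, size: CGSize) -> [UIImage] {
    let renderer = UIGraphicsImageRenderer(size: size)
    return (0..<count).map { i in
        let progress = CGFloat(i) / CGFloat(count - 1)   // 0 -> 1 over one pulse
        return renderer.image { ctx in
            let center = CGPoint(x: size.width / 2, y: size.height / 2)
            let colors = [UIColor.white.withAlphaComponent(1 - progress).cgColor,
                          UIColor.white.withAlphaComponent(0).cgColor] as CFArray
            let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                                      colors: colors, locations: [0, 1])!
            let radius = 20 + progress * (size.width / 2 - 24)  // ring grows outward
            ctx.cgContext.drawRadialGradient(gradient,
                                             startCenter: center,
                                             startRadius: max(radius - 4, 0),
                                             endCenter: center,
                                             endRadius: radius + 4,
                                             options: [])
        }
    }
}

// Then hand the frames to a UIImageView and let it do the animation:
// ringView.animationImages = makeRingFrames(count: 30, size: CGSize(width: 80, height: 80))
// ringView.animationDuration = 1.0   // one pulse per second
// ringView.startAnimating()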

What can we use instead of blending in OpenGL ES?

I am doing my iPhone graphics using OpenGL. In one of my projects, I need to use an image as a texture in OpenGL. The .png image is 512x512, its background is transparent, and the image has a thick blue line through its center.
When I apply the image to a polygon in OpenGL, the transparent part of the texture appears black, while the thick blue line is seen as itself. In order to remove the black part, I used blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
With that, the black part of the texture is removed and only the blue band is visible, so that problem is solved.
But I want to add many such images and make many objects in OpenGL. I am able to do that, but the frame rate gets very low as I add more and more images to objects. When I comment out the blending, the frame rate is normal, but the images are not visible.
Since the fps is poor, the graphics are a bit slow and I get a shaky effect.
So:
1) Is there any method other than blending that solves my problem?
2) How can I improve the frame rate of my OpenGL app? What steps need to be taken to implement my graphics properly?
If you want to have transparent parts of an object, the only way is to blend the pixel data for the triangle with what is currently in the buffer (what you are currently doing). Normally, when using solid textures, the new pixel data for a triangle simply overwrites whatever was in the buffer (as long as it is closer, i.e. it passes the z-buffer test). But with transparency, the GPU has to look at the transparency of that part of the texture, then at what is behind it, all the way back to something solid, and combine all of those overlapping layers of transparent material to get the final image.
If all you want the transparency for is something like a simple tree sprite, removing the "stuff" from the sides of the trunk and so on, then you may be better off providing more complex geometry that actually defines the shape of the trunk, so you don't need to bother with transparency at all.
Sadly, I don't think there is much you can do to speed up your FPS other than cutting down the amount of transparency you are calculating. You could even add an optimization that checks each image to see whether alpha blending can be turned off for it or not, as sketched below. Depending on how much you are trying to push through, that may save time in the long run.
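That last suggestion could look something like this (a sketch; TexturedObject and hasAlpha are invented for illustration, with hasAlpha computed once at load time by scanning the texture's alpha channel):

import OpenGLES

protocol TexturedObject {
    var hasAlpha: Bool { get }   // precomputed: does the texture have any alpha < 1?
    func render()
}

func draw(_ object: TexturedObject) {
    if object.hasAlpha {
        // Pay the blending cost only where the texture actually needs it.
        glEnable(GLenum(GL_BLEND))
        glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))
    } else {
        // Opaque texture: a plain overwrite is much cheaper on the GPU.
        glDisable(GLenum(GL_BLEND))
    }
    object.render()
}

Sorting the draw order so that opaque objects come first and blended ones last also keeps the cheap path in use for as much of the frame as possible.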
