Large background images and fps reduction - opengl-es

I've made a game with parallax background images in it.
The background consists of four different moving layers. Without the background I get 60 fps on an iPhone 4, but when I add it to the stage the fps drops to 45-50.
I divided the background images into 128x128 tiles, but the problem persists. All images are PNGs with an alpha channel. I'm using the latest version of cocos2d-x. Does anybody have an idea why the fps drops so much?

Related

Estimate blur kernel from two overlapping images

I have two aerial images.
One is sharp and the other is blurred.
The two images have 60% overlap, and I want to estimate the blur kernel of the blurry image using the sharp image.
Any idea, solution, or code would be appreciated.
Thanks
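One common (and admittedly naive) way to attack this, assuming the overlapping regions have already been cropped out and registered against each other, is a regularized division in the frequency domain: if blurred ≈ sharp convolved with K, then K can be estimated as conj(S)·B / (|S|² + ε). Below is a minimal NumPy sketch of that idea; the overlap crops, kernel size, and regularization constant are hypothetical inputs to tune, not a definitive implementation.

```python
import numpy as np

def estimate_blur_kernel(sharp, blurred, eps=1e-3, ksize=31):
    """Estimate a kernel K such that blurred ~ sharp (*) K, via a regularized
    (Wiener-style) division in the frequency domain. Both inputs must be
    grayscale float arrays of the same shape, already registered."""
    S = np.fft.fft2(sharp)
    B = np.fft.fft2(blurred)
    # B = S * K in the frequency domain  =>  K ~ conj(S) * B / (|S|^2 + eps)
    K = np.conj(S) * B / (np.abs(S) ** 2 + eps)
    k = np.real(np.fft.ifft2(K))
    k = np.fft.fftshift(k)                       # move the kernel to the center
    c = np.array(k.shape) // 2
    h = ksize // 2
    k = k[c[0] - h:c[0] + h + 1, c[1] - h:c[1] + h + 1]  # keep a small support
    k = np.clip(k, 0, None)
    return k / k.sum()                           # normalize so the kernel sums to 1

# Hypothetical usage with the already-registered overlap crops:
# sharp_overlap   = ...  # float32 array from the sharp image's overlap region
# blurred_overlap = ...  # float32 array from the blurred image's overlap region
# kernel = estimate_blur_kernel(sharp_overlap, blurred_overlap)
```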

What is the difference between 360 panoramic images and normal images?

I want to blur faces and license plates. I use https://github.com/understand-ai/anonymizer on Docker.
Some images can be processed, even though not all faces and plates get blurred.
Other images cannot be processed at all.
All of the images are 360 panoramic images.
My question is: why are normal images easy to put through the blurring process while 360 panoramic images are not?
The normal images are in JPG format, and so are the 360 panoramic images; their widths and heights are the same, respectively.
What makes them different? What differences are there between them other than their view?

How to add motion blur (non-zero exposure time rendering) in Three.js?

I am trying to achieve this effect:
https://dl.dropboxusercontent.com/u/8554242/dmitri/projects/MotionBlurDemo/MotionBlurDemo.html
But I need it applied to my Three.js scene, specifically on a Point Cloud Material (particles) or the individual particles.
Any help greatly appreciated!
If you want the "physically correct" approach, then:

1. Create a FIFO of N images.
2. Inside each scene redraw (assuming constant fps):
   - If the FIFO is already full, throw out the oldest image.
   - Put the raw rendered scene image into the FIFO.
   - Blend all the images in the FIFO together.
   - Render the blended image to the screen.

If N is big, then to speed things up you can also store the cumulative blend image of all images inside the FIFO, adding each inserted image to it and subtracting each removed image from it. The target image must have enough color bits to hold the result; in that case you render the cumulative image divided by N.
For constant fps the exposure time is t = N/fps. If you do not have constant fps, then you need a variable-size FIFO and you have to store the render time along with each image. If the sum of render times of the images inside the FIFO exceeds the exposure time, throw the oldest image out...
This approach requires quite a lot of memory (the image FIFO) but does not need any additional processing. Most blur effects fake all of this inside a geometry shader, or on the CPU, by blurring or rendering the moving objects differently, which affects performance and is sometimes a bit complicated to render.
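As a concrete illustration of the FIFO-plus-cumulative-blend bookkeeping described above, here is a minimal NumPy sketch. In Three.js itself you would keep the frames in render targets and do the blend in a shader pass, but the bookkeeping is the same; the class and parameter names here are made up for this example.

```python
from collections import deque
import numpy as np

class MotionBlurAccumulator:
    """Keep the last N rendered frames and their running sum, so the blended
    (motion-blurred) output is just accumulator / N instead of re-blending
    the whole FIFO every frame."""

    def __init__(self, n_frames):
        self.n = n_frames
        self.fifo = deque()
        self.acc = None  # cumulative sum; float precision so it can hold N frames

    def push(self, frame):
        frame = frame.astype(np.float64)
        if self.acc is None:
            self.acc = np.zeros_like(frame)
        if len(self.fifo) == self.n:          # FIFO full: drop the oldest frame...
            self.acc -= self.fifo.popleft()   # ...and subtract it from the accumulator
        self.fifo.append(frame)
        self.acc += frame                     # add the newly rendered frame
        return self.acc / len(self.fifo)      # blended frame to present on screen

# Usage sketch: at constant fps the simulated exposure time is N / fps.
# blur = MotionBlurAccumulator(n_frames=8)
# for each rendered frame `img` (H x W x 3 float array):
#     present(blur.push(img))   # `present` is a placeholder for showing the frame
```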

Crop a video layer in an AVComposition

I'm using AVMutableComposition to position and composite two different video tracks for playback and export. I can easily scale and position the video tracks using AVMutableVideoCompositionLayerInstruction. This is all in the sample code.
However, what I need to do is crop one of the video layers. I don't mean "effectively crop" as is done in the sample code by having the video frame overlap the side of the composition; I mean actually crop one of the layers being composited, so that the shape of the composited video changes but the video is not distorted. In other words, I don't want to change the renderSize of the whole composition, just crop one of the composited layers.
Is this possible? Any ideas to make it happen? Thanks!
Have you tried the cropping settings of AVMutableVideoCompositionLayerInstruction?

White frame around images in WoW Addon

I am having some trouble with my World of Warcraft addon. Whenever I display my TGA files in the addon, there is a thin white frame around them. The same happens when I convert them to BLPs.
When I look at the images themselves with Preview, there's no white frame, but WoW decides to display one.
How do I resolve this?
I'm guessing you are using TGA files with an alpha channel and the "thin white frame" is about a pixel or less.
This is usually the effect of a matte that is placed under the opaque edges of the artwork before the alpha channel is calculated. The solution is to generate your own alpha channel and feather the edges in by a pixel or so, thus masking out the matte.
The explanation is actually a tad more complex than this, but the method works.
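As an illustration of the "generate your own alpha channel and feather the edges" step, here is a minimal Pillow sketch you could run on the source art before converting it to TGA/BLP. The file names and the shrink/blur amounts are placeholders to tune, not a definitive recipe.

```python
from PIL import Image, ImageFilter

def feather_alpha(path_in, path_out, shrink_px=1, blur_radius=1.0):
    """Pull the alpha channel in by about a pixel and soften its edge, so the
    matte color hiding under the opaque edge is masked out."""
    img = Image.open(path_in).convert("RGBA")
    r, g, b, a = img.split()

    # Shrink the opaque region: a minimum filter erodes the alpha mask.
    eroded = a.filter(ImageFilter.MinFilter(2 * shrink_px + 1))
    # Soften the edge so it fades instead of cutting off hard.
    feathered = eroded.filter(ImageFilter.GaussianBlur(blur_radius))

    Image.merge("RGBA", (r, g, b, feathered)).save(path_out)

# Hypothetical usage before converting the art for the addon:
# feather_alpha("icon_src.png", "icon_feathered.png")
```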
