Crop a video layer in an AVComposition - macOS

I'm using AVMutableComposition to position and composite two different video tracks for playback and export. I can easily scale and position the video tracks using AVMutableVideoCompositionLayerInstruction. This is all in the sample code.
However, what I need to do is crop one of the video layers. Not "effectively crop" as is done in the sample code by having the video frame overlap the side of the composition, but actually crop one of the layers being composited, so the shape of the composited video is different but the video is not distorted. In other words, I don't want to change the renderSize of the whole composition, just crop one of the composited layers.
Is this possible? Any ideas to make it happen? Thanks!

Have you tried the cropping settings on AVMutableVideoCompositionLayerInstruction? It exposes setCropRectangle(_:at:), which crops just that layer without changing the renderSize of the whole composition.
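For example, a minimal sketch in Swift, assuming videoTrack is the composition track you want to crop and instruction is the AVMutableVideoCompositionInstruction covering that time range (the rectangle and transform values are placeholders):

import AVFoundation

let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)

// Crop to a 640x360 region of the track, in the track's own coordinate
// space, starting at the beginning of the timeline. The composition's
// renderSize is untouched; only this layer's visible area shrinks.
layerInstruction.setCropRectangle(CGRect(x: 0, y: 0, width: 640, height: 360), at: .zero)

// Position and scale as before; the crop combines with the transform.
layerInstruction.setTransform(CGAffineTransform(translationX: 100, y: 100), at: .zero)

instruction.layerInstructions = [layerInstruction]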

Related

Corrupted resized textures with Metal on Retina screens

I want to draw a series of textures into a Metal view in order to present a complete image. On a regular screen, the images are presented at exactly 1:1 scale (meaning a 100x100 pixel texture is presented in a 100x100 pixel square).
Drawing on a Retina display will actually give me a 200x200 square.
Now, there are two different approaches:
1) Generate the entire image into a 100x100 square and let the Metal view upscale it to a 200x200 square - it works.
2) Upscale each texture and generate the image directly into a 200x200 square. Why take this approach? Because some of the textures (like text) are generated dynamically and can be generated at a higher resolution - something impossible with the first approach.
Unfortunately, with this approach, an ugly square is visible around each texture.
I tried playing with sizes, clamp options, etc., yet I could not find any solution.
Any help would be highly appreciated!
Image from regular screen
Image from retina screen
Found a solution. In the fragment shader, the texture sampler was defined with repeat addressing:
constexpr sampler s = sampler(coord::normalized, address::repeat, filter::nearest);
instead of clamping to the edge:
constexpr sampler s = sampler(coord::normalized, address::clamp_to_edge, filter::nearest);
With address::repeat, sampling at (or a fraction past) a texture's edge wraps around and fetches texels from the opposite side, which is what produced the visible square border around each upscaled texture.

Large background images and fps reducing

I've made a game with parallax background images in it.
The background consists of four different moving layers. Without the background I get 60 fps on an iPhone 4, but when I add it to the stage the fps drops to 45-50.
I divided the background images into 128x128 tiles, but the problem still exists. All images are PNGs with an alpha channel. I'm using the latest version of cocos2d-x. Does anybody have an idea why the fps drops so much?

Top/bottom black stripes for non wide screens in DirectX

I need to put black stripes at the top and bottom for full-screen rendering when widescreen resolutions are not supported.
Example: for a 1280x1024 resolution I need to render at 1280x720 and add black stripes to fill the screen up to 1280x1024.
I do believe the thing you need can be achieved with a viewport change; check out http://msdn.microsoft.com/en-us/library/windows/desktop/bb206341(v=vs.85).aspx. Set the viewport to 1280x720 with its top offset by (1024 - 720) / 2 = 152 pixels so the image is centered, and make sure the regions outside the viewport are cleared to black.
Alternatively, make your backbuffer the size of the screen resolution (e.g. 1280x1024), then render everything to a texture (of 1280x720) and render that texture to the center of the screen, after all post-processing. This is probably not the most efficient solution, but it's a starting point.
A better solution is actually making your system work at 4:3 resolutions; users would probably enjoy that more than a letterboxed view, and it shouldn't be too difficult.

Use Core Image in 3D

I have a working Core Video setup (a frame captured from a USB camera via QTKit), and the current frame is rendered as a texture on an arbitrary plane in 3D space in a subclassed NSOpenGLView. So far so good, but I would like to use some Core Image filters on this frame.
I now have the basic code set up, and it renders my unprocessed video frame as before, but the final processed output CIImage is rendered as a screen-aligned quad into the view. It feels like an image blitted over my 3D rendering. This is what I do not want!
I am looking for a way to process my video frame (a CVOpenGLTextureRef) with Core Image and just render the resulting image on my plane in 3D.
Do I have to use offscreen rendering (store the viewport, set new viewport, modelview and projection matrices, and render into an FBO), or is there an easier way?
Thanks in advance!
Try GPUImage! It's easy to use, and faster than Core Image processing. It uses predefined or custom GLSL shaders.
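If you'd rather stay with Core Image, one way to avoid juggling viewports and FBOs by hand is to render the filtered frame into a pixel buffer and upload that as the plane's texture. A minimal sketch in Swift, assuming pixelBuffer holds the captured frame and outputBuffer is a CVPixelBuffer you later upload as the texture (both names, and the sepia filter, are placeholders):

import CoreImage
import CoreVideo

func filteredFrame(from pixelBuffer: CVPixelBuffer, into outputBuffer: CVPixelBuffer, using context: CIContext) {
    let input = CIImage(cvImageBuffer: pixelBuffer)

    // Any Core Image filter works here; sepia tone is just an example.
    guard let filter = CIFilter(name: "CISepiaTone") else { return }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(0.8, forKey: kCIInputIntensityKey)

    guard let output = filter.outputImage else { return }

    // Rendering into a pixel buffer keeps the result off screen, so it can
    // be uploaded as an ordinary texture for the 3D plane instead of being
    // drawn as a screen-aligned quad.
    context.render(output, to: outputBuffer)
}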

What can we use instead of blending in OpenGL ES?

I am doing my iPhone graphics using OpenGL. In one of my projects, I need to use an image, which I need to use as a texture in OpenGL. The .png image is 512 * 512 in size, its background is transparent, and the image has a thick blue line in its center.
When I apply my image to a polygon in OpenGL, the texture appears as if the transparent part in the image is black and the thick blue line is seen as itself. In order to remove the black part, I used blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
Then my black part of the texture in the polygon is removed. Now only the blue band is seen. Thus the problem is solved.
But I want to add many such images and make many objects in OpenGL. I am able to do that, but the frame rate drops as I add more and more images. When I comment out blending, the frame rate is normal, but the images are not visible.
Since I do not have good fps, the graphics are a bit slow and I get a shaky effect.
So:
1) Is there any other method than blending to solve my problem?
2) How can I improve the frame rate of my OpenGL app? What steps should I take to implement my graphics properly?
If you want to have transparent parts of an object, the only way is to blend the pixel data for the triangle with what is currently in the buffer (what you are currently doing). Normally, when using solid textures, the new pixel data for a triangle just overwrites whatever was in the buffer (as long as it is closer, i.e. it passes the z-buffer test). But with transparency, the GPU has to look at the transparency of that part of the texture, then at what is behind it, all the way back to something solid, and combine all of those overlapping layers of transparent pixels to produce the final image.
If all you want transparency for is something like a simple tree sprite (removing the 'stuff' from the sides of the trunk, etc.), then you may be better off providing more complex geometry that actually defines the shape of the trunk, and thus not need to bother with transparency at all.
Sadly, I don't think there is much you can do to speed up your FPS other than cutting down the amount of transparency you are calculating. You could even add an optimization that checks each image to see whether alpha blending can be turned off for it, as sketched below. Depending on how much you are trying to push through, that may save time in the long run.
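A minimal sketch of that per-image check in Swift, assuming you can record at load time whether each PNG actually contains translucent pixels (the Sprite type and hasAlpha flag are hypothetical):

import OpenGLES

struct Sprite {
    let textureID: GLuint
    let hasAlpha: Bool   // precomputed when the PNG is loaded
}

func draw(_ sprite: Sprite) {
    if sprite.hasAlpha {
        // Translucent image: blending is genuinely needed.
        glEnable(GLenum(GL_BLEND))
        glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))
    } else {
        // Fully opaque image: skipping blending avoids the costly
        // read-modify-write against the framebuffer.
        glDisable(GLenum(GL_BLEND))
    }
    glBindTexture(GLenum(GL_TEXTURE_2D), sprite.textureID)
    // ... submit this sprite's vertices as before ...
}

Drawing all opaque sprites first and the blended ones afterwards also helps, since the depth buffer can then reject hidden pixels before any blending happens.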
