I have a working Core Video setup (a frame captured from a USB camera via QTKit), and the current frame is rendered as a texture on an arbitrary plane in 3D space in a subclassed NSOpenGLView. So far so good, but I would like to use some Core Image filters on this frame.
I now have the basic code set up, and it renders my unprocessed video frame like before, but the final processed output CIImage is rendered as a screen-aligned quad into the view. It feels like an image blitted over my 3D rendering. This is what I do not want!
I am looking for a way to process my video frame (a CVOpenGLTextureRef) with Core Image and render the resulting image on my plane in 3D.
Do I have to use offscreen rendering (store the viewport, set new viewport, modelview and projection matrices, and render into an FBO), or is there an easier way?
Thanks in advance!
Try GPUImage! It's easy to use, and faster than Core Image processing. It uses predefined or custom (GLSL) shaders.
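If you'd rather stay with Core Image, the offscreen route from the question does work: render the filtered CIImage into an FBO-attached texture and map that texture onto the plane. A minimal sketch, assuming a macOS CGL-backed NSOpenGLView (the function, filter choice, and FBO setup are illustrative, not a drop-in implementation):

import CoreImage
import CoreVideo
import OpenGL.GL

// `fbo` is assumed to be a framebuffer you created once with a color
// texture attached; that texture is what you map onto the 3D plane.
func renderFiltered(frame: CVImageBuffer, into fbo: GLuint,
                    width: GLsizei, height: GLsizei,
                    glContext: CGLContextObj, pixelFormat: CGLPixelFormatObj) {
    // A CIContext that draws straight into the GL context (cache this in real code).
    let ciContext = CIContext(cglContext: glContext, pixelFormat: pixelFormat,
                              colorSpace: CGColorSpaceCreateDeviceRGB(), options: nil)

    // Wrap the captured CV buffer and run it through a filter.
    let filter = CIFilter(name: "CISepiaTone")!
    filter.setValue(CIImage(cvImageBuffer: frame), forKey: kCIInputImageKey)
    let output = filter.outputImage!

    // Draw the filtered image offscreen into the FBO's texture instead of
    // letting Core Image blit a screen-aligned quad over the view.
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
    glViewport(0, 0, width, height)
    ciContext.draw(output,
                   in: CGRect(x: 0, y: 0, width: Int(width), height: Int(height)),
                   from: output.extent)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), 0)   // back to the view's framebuffer
}

After this call, texture the plane exactly as before, just using the FBO's color texture.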
I have an image that is generated as an NSImage, and then I draw it into my NSView subclass, using a custom draw() method.
I want to modify this custom view so that the image is drawn in the same place, but fades out toward one side. That is, it's drawn with a linear gradient from alpha=1.0 to alpha=0.0.
My best guess is that one of the draw() variants taking an NSCompositingOperation might do what I want, but I'm having trouble understanding how. I'm not a graphics expert, and the NSImage and NSCompositingOperation docs seem to use different terminology.
The quick version: pretty much this question but on macOS instead of Android.
You're on the right track. You'll want to use NSCompositingOperationDestinationOut to achieve this. That operation effectively punches out the destination based on the alpha of the source being drawn. So if you first draw your image and then draw a gradient from alpha 0.0 to 1.0 on top with the .destinationOut operation, you'll end up with your image fading from alpha 1.0 to 0.0.
Because that punch-out happens to whatever is already in the backing store where the gradient is drawn, you'll want to be careful where and how you use it.
If you want to do all of this within the drawing of the view, you should draw your image and the punch-out gradient within a transparency layer, using CGContextBeginTransparencyLayer and CGContextEndTransparencyLayer, to avoid punching out anything else.
You could also first create a new NSImage representing the faded variant, either with the drawingHandler constructor or with NSCustomImageRep, doing the same image-and-gradient drawing in there (without worrying about transparency layers). Then draw that image into your view, or simply use it with an NSImageView.
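Untested against your exact setup, but here is a minimal sketch of that second approach (the helper name fadedImage is mine, not an AppKit API). The drawing-handler context starts out fully transparent, which is why no transparency layer is needed:

import AppKit

// Returns a copy of `image` that fades from opaque on the left to fully
// transparent on the right.
func fadedImage(_ image: NSImage) -> NSImage {
    NSImage(size: image.size, flipped: false) { rect in
        // 1. Draw the image normally into the fresh (transparent) backing.
        image.draw(in: rect)

        // 2. Punch it out with a clear-to-opaque gradient: where the
        //    gradient is opaque, .destinationOut erases what's underneath.
        guard let gradient = NSGradient(starting: .clear, ending: .black),
              let cg = NSGraphicsContext.current?.cgContext else { return false }
        cg.setBlendMode(.destinationOut)
        gradient.draw(in: rect, angle: 0)
        return true
    }
}

You can then draw the returned image in your view's draw() as before, or hand it to an NSImageView.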
I'm using WebGL to do something very similar to image processing. I draw a single quad in an orthographic projection and perform some processing in the fragment shaders. There are two steps: in the first, the original texture from the quad is processed in the fragment shader and written to a framebuffer; a second step processes that data into the final canvas.
For this to feel smooth, I need to hit 60 fps, or the interaction gets noticeably sluggish. This is no issue on desktop GPUs, but on mobile devices with much weaker hardware and higher resolutions it gets problematic.
The translation case is the most noticeable and problematic: the user drags the mouse pointer or a finger across the screen, and the image lags behind. But translation is also a case where I could theoretically reuse a lot of data from the previous frame.
Ideally I'd copy the canvas from the last frame, translate it by x, y pixels, and then run the full fragment-shader processing only on the parts of the canvas that aren't covered by the translated previous canvas.
Is there a way to do this in WebGL?
If you want to access the previous frame, you need to draw to a texture attached to a framebuffer, then draw that texture into the canvas, translated.
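A sketch of that setup, with illustrative names. It is written here with desktop GL calls (Swift, for consistency with the Apple threads on this page), but each call maps one-to-one onto WebGL: gl.createTexture, gl.createFramebuffer, gl.framebufferTexture2D, and so on:

import OpenGL.GL

// A texture to hold the previous frame, plus a framebuffer rendering into it.
var prevFrame: GLuint = 0
var fbo: GLuint = 0

func createTargets(width: GLsizei, height: GLsizei) {
    glGenTextures(1, &prevFrame)
    glBindTexture(GLenum(GL_TEXTURE_2D), prevFrame)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)

    glGenFramebuffers(1, &fbo)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
    glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                           GLenum(GL_TEXTURE_2D), prevFrame, 0)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), 0)
}

// Per frame:
//   1. bind `fbo` and run the expensive processing pass into `prevFrame`;
//   2. bind the default framebuffer and draw a quad textured with
//      `prevFrame`, offset by the translation since the last frame.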
What I need to do here is print the 3D animation in my animation clip as a sequence of 2D frames, for example printing a human walking animation on paper. Is there a way to do this?
There are several aspects to consider:
Splitting the animation up into frames
Rendering the frame
Retrieving the rendered frame
Splitting the animation up into frames
Before you render a frame, you have to decide which frame it should be. Depending on your requirements, you may want to render only keyframes (frames defined by the animation artist or the mocap data, which may not be evenly distributed over the duration of the animation), or to render frames that are evenly spaced over the duration of the animation (e.g. one frame every 0.2 seconds).
Once you know the frame, you can tell the Animator to jump to that specific point in the clip and update the 3D model accordingly.
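In Unity terms that could look like this sketch (C#; the class name, the state name "Walk", and the frame math are placeholders):

using UnityEngine;

public class AnimationSampler : MonoBehaviour   // illustrative class name
{
    // Jump the Animator to pose i of n evenly spaced frames.
    public void JumpToFrame(Animator animator, int i, int n)
    {
        float normalizedTime = i / (float)(n - 1);   // 0..1 across the clip
        animator.Play("Walk", 0, normalizedTime);    // "Walk" = your state name
        animator.Update(0f);                         // evaluate the pose immediately
    }
}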
Rendering the frame
Again, depending on your requirements, you may want to consider using a Camera with an orthographic projection. This removes any perspective distortion, making it easier to visually analyze the animation. If the goal is a more visually appealing or "artistic" presentation, a perspective projection may of course be desirable as well.
Retrieving the rendered frame
As you want to print out the rendering, you need to retrieve the result of the rendering. You can do this either by letting Unity take a screenshot (Application.CaptureScreenshot), or by creating a RenderTexture, setting it as the render target of your Camera, copying the contents of the RenderTexture into a Texture2D (Texture2D.ReadPixels()), and saving that to disk (Texture2D.EncodeToPNG/JPG()).
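A minimal sketch of the RenderTexture route (the class, field, and size values are mine, not from the question):

using System.IO;
using UnityEngine;

public class FrameCapture : MonoBehaviour   // illustrative class name
{
    public Camera cam;

    // Renders one frame from `cam` into an offscreen texture and saves it as a PNG.
    public void SaveFrame(string path, int width = 1024, int height = 1024)
    {
        var rt = new RenderTexture(width, height, 24);
        cam.targetTexture = rt;
        cam.Render();                     // render into the offscreen target

        RenderTexture.active = rt;
        var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);   // GPU -> CPU copy
        tex.Apply();
        File.WriteAllBytes(path, tex.EncodeToPNG());

        // Restore state and clean up.
        cam.targetTexture = null;
        RenderTexture.active = null;
        Destroy(rt);
        Destroy(tex);
    }
}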
Is it possible to render to an FBO texture once and then use the resulting texture handle to render all following frames?
For example, if I'm rendering a hard shadow map and the scene geometry and light position are static, the depth map is always the same, so I want to render it only once using an FBO and then just reuse it. However, if I simply add a flag so that the depth texture is rendered only once, the texture remains empty for the rest of the frames.
Does the FBO get reallocated after a frame has finished rendering? What would be the right way to preserve the rendered texture for rendering the following frames?
Rendering to a texture is no different from uploading those pixels to the texture in the first place. The contents of a texture do not magically disappear; a texture's contents change only when you change them. That could be by uploading data to the texture, or by using one of the texture's images for framebuffer operations (clearing, rendering into it, etc.).
Unless you do something to explicitly change the data stored in a texture, it won't change.
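So a render-once pattern like the following sketch is perfectly valid (all the names here are illustrative placeholders). If the texture still looks empty, check that later frames re-bind the default framebuffer, restore the viewport, and never clear or render into the shadow FBO again:

import OpenGL.GL

// Illustrative placeholders for your own resources and draw calls:
var shadowFBO: GLuint = 0, shadowDepthTexture: GLuint = 0
let shadowSize: GLsizei = 1024
var viewWidth: GLsizei = 800, viewHeight: GLsizei = 600
func renderSceneDepthOnly() { /* draw occluders */ }
func renderSceneWithShadows() { /* draw lit scene sampling the depth texture */ }

var shadowMapReady = false

func drawFrame() {
    if !shadowMapReady {
        // Render the depth map exactly once; its contents persist afterwards.
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), shadowFBO)
        glViewport(0, 0, shadowSize, shadowSize)
        glClear(GLbitfield(GL_DEPTH_BUFFER_BIT))
        renderSceneDepthOnly()
        shadowMapReady = true
    }

    // Every frame: back to the default framebuffer, reuse the depth texture.
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), 0)
    glViewport(0, 0, viewWidth, viewHeight)
    glBindTexture(GLenum(GL_TEXTURE_2D), shadowDepthTexture)
    renderSceneWithShadows()
}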
I am doing my iPhone graphics using OpenGL. In one of my projects, I need to use an image as a texture in OpenGL. The .png image is 512×512 pixels, its background is transparent, and the image has a thick blue line in its center.
When I apply the image to a polygon in OpenGL, the transparent part of the texture appears black, while the thick blue line is shown as expected. To remove the black part, I enabled blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
With that, the black part of the texture on the polygon is removed and only the blue band is visible, so that problem is solved.
But I want to add many such images and build many objects in OpenGL. I can do that, but the frame rate drops very low as I add more and more of these images to objects. When I comment out the blending, the frame rate is normal, but the images are not visible.
With such a low fps, the graphics are slow and I get a shaky effect.
So:
1) Is there any other method than blending to solve my problem?
2) How can I improve the frame rate of my OpenGL app? What steps should I take to implement my graphics properly?
If you want parts of an object to be transparent, the only way is to blend the pixel data for the triangle with what is currently in the buffer (which is what you are doing). Normally, with solid textures, the new pixel data for a triangle simply overwrites whatever was in the buffer (as long as it is closer, i.e. it passes the z-buffer test). With transparency, however, the GPU has to look at the transparency of that part of the texture, then at what is behind it, all the way back to something solid, and then combine all of those overlapping layers of transparent fragments to produce the final image.
If all you want transparency for is something like a simple tree sprite, removing the 'stuff' from the sides of the trunk and so on, you may be better off providing more complex geometry that actually defines the shape of the trunk, and thus avoid transparency altogether.
Sadly, I don't think there is much you can do to speed up your FPS other than cutting down the amount of transparency you are computing. You could even add an optimization that checks each image to see whether alpha blending can be turned off for it (a sketch below); depending on how much you are pushing through, that may save time in the long run.
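That per-image toggle could be as simple as this sketch (Sprite, hasTransparency, and drawQuad are placeholder names; hasTransparency would be precomputed from the PNG's alpha channel when the texture is loaded):

import OpenGLES

struct Sprite {
    var texture: GLuint
    var hasTransparency: Bool   // precomputed at texture-load time
}

func draw(_ sprite: Sprite) {
    if sprite.hasTransparency {
        glEnable(GLenum(GL_BLEND))
        glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))
    } else {
        glDisable(GLenum(GL_BLEND))   // opaque quads skip the blending cost entirely
    }
    glBindTexture(GLenum(GL_TEXTURE_2D), sprite.texture)
    drawQuad()   // your existing polygon drawing
}

func drawQuad() { /* existing vertex submission */ }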