How to load an image in OpenGL

I need to load an image to use as my background...
Does anyone know how to do this?
I know I have to do the following steps:
1) Load image data into system memory
2) Generate a texture name with glGenTextures
3) Bind the texture name with glBindTexture
4) Set wrapping and filtering mode with glTexParameter
5) Call glTexImage2D with the right parameters for the image format, to load the image data into video memory
but I don't know how to turn these steps into actual OpenGL code.

OpenGL supports textures and images, but you have to provide the pixel data yourself, so you need some library or additional code to load the data from the file.
I suggest using the very simple SOIL library - http://www.lonesock.net/soil.html
Or some library provided by your SDK
in general:
load texture bytes into pBytes;
glTexImage2D(..., ..., ..., pBytes);
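A minimal sketch of those five steps, assuming SOIL is available; the file name "background.png" and the header paths are placeholders for your own setup:
#include <SOIL.h>
#include <GL/gl.h>

// Step 1: load the image into system memory; SOIL decodes the file.
int width = 0, height = 0, channels = 0;
unsigned char* pBytes = SOIL_load_image("background.png", &width, &height,
                                        &channels, SOIL_LOAD_RGBA);

// Steps 2-4: generate, bind, and configure the texture.
GLuint texture = 0;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Step 5: upload the pixel data to video memory.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pBytes);

// OpenGL now owns a copy of the data, so the system-memory copy can be freed.
SOIL_free_image_data(pBytes);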

Related

In OpenGL ES what is an "external image"? Why do we need GL_OES_EGL_image_external?

I am reading through the spec for external images. It says:
This extension provides a mechanism for creating EGLImage texture targets
from EGLImages. This extension defines a new texture target,
TEXTURE_EXTERNAL_OES.
I have done my best but I can't find out what an "external image" is. This extension, and many of the related extension specs, reference "EGLImages" and similar things but I can't figure out what they are.
Why do I need this?
Typically to create an image I load a file from disk. I believe that is "external".
This question basically says it is an image not created by the graphics driver, but wouldn't that mean virtually all images ever created would be EGLImages or "external images"? When using OpenGL I don't remember having to worry about whether my image was external or not.
Can somebody explain what an "External" image is, why it is needed (mainly I see this w/r/t OpenGL ES) and why these extensions are needed? Frankly I am not sure what an "EGL Image" is either, or why they make a distinction.
Thank you
This is a late answer.
An external image AKA external texture is typically used to supply frames from an image stream (e.g. camera preview, decoded video) as OpenGL textures. Such frames usually have special color encodings and memory layouts (e.g. multi-plane YUV). The extension mentioned above allows sampling such images as if they were regular OpenGL textures (with a few limitations).
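As a rough illustration (the texture and shader names here are placeholders, not part of the extension): an external image is bound to the GL_TEXTURE_EXTERNAL_OES target and sampled through the samplerExternalOES type that the extension adds, instead of a regular sampler2D:
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

// externalTex is assumed to have been created from an EGLImage
// (e.g. via glEGLImageTargetTexture2DOES or a platform helper such
// as Android's SurfaceTexture).
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_EXTERNAL_OES, externalTex);

static const char* kFragmentShader =
    "#extension GL_OES_EGL_image_external : require\n"
    "precision mediump float;\n"
    "uniform samplerExternalOES uTexture;\n"   // sampler type added by the extension
    "varying vec2 vTexCoord;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(uTexture, vTexCoord);\n"
    "}\n";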

DX11 add a simple black box on a texture

I want to add a simple black box (like this: effect) on a texture (ID3D11ShaderResourceView). Is there a simple way to do it in DX11? I don't want to write a shader to do it.
Well, what you're trying to do is actually "initializing a texture programmatically". Textures from the D3D point of view are nothing more than pieces of memory with a clearly defined layout. Normally, you create a texture resource, read data from a texture file (like *.BMP for example), put the data in the texture and then feed it to the pipeline for sampling.
In your case though, you need an additional step:
Create the texture resource with D3D11_USAGE_DYNAMIC (and D3D11_CPU_ACCESS_WRITE) so you can map it from the CPU, or with D3D11_USAGE_DEFAULT and update it via UpdateSubresource
Read the color map into your texture
Depending on the chosen usage, either include your data in the initial data or Map/Unmap and write your data (by your data I mean overwrite each edge of the image with black color); see the sketch below
The same approach can also be used to "generate" textures, for example a checkerboard or clouds.
All the information you need can be found here.
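A minimal sketch of the Map/Unmap route, assuming a D3D11_USAGE_DYNAMIC texture in DXGI_FORMAT_R8G8B8A8_UNORM and that srcPixels holds the original color map on the CPU; the function and parameter names are placeholders:
#include <d3d11.h>
#include <cstring>

void DrawBlackBorder(ID3D11DeviceContext* context, ID3D11Texture2D* texture,
                     const unsigned char* srcPixels,
                     UINT width, UINT height, UINT border)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (FAILED(context->Map(texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
        return;

    unsigned char* dst = static_cast<unsigned char*>(mapped.pData);
    for (UINT y = 0; y < height; ++y)
    {
        // WRITE_DISCARD invalidates the old contents, so rewrite every row.
        unsigned char* row = dst + y * mapped.RowPitch;
        std::memcpy(row, srcPixels + y * width * 4, width * 4);

        for (UINT x = 0; x < width; ++x)
        {
            // Overwrite the edge texels with opaque black.
            if (x < border || x >= width - border || y < border || y >= height - border)
            {
                row[x * 4 + 0] = 0;   // R
                row[x * 4 + 1] = 0;   // G
                row[x * 4 + 2] = 0;   // B
                row[x * 4 + 3] = 255; // A
            }
        }
    }
    context->Unmap(texture, 0);
}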

Render CIImage to a specific rect in IOSurface

I'm trying to render a CIImage to a specific location in an IOSurface using [CIContext render:toIOSurface:bounds:colorSpace:] by specifying the bounds argument r as the destination rectangle.
According to the documentation this should work, but Core Image always renders the image to the bottom-left corner of the IOSurface.
It seems to me like a bug in CoreImage.
I can work around this problem by rendering the image to an intermediate IOSurface with the same size as the CIImage, and then copying the contents of that surface to the destination surface.
However, I would like to avoid the extra allocation and copying in that solution.
Any suggestion?
What you want to happen isn't currently possible with that API (which is a huge bummer).
You can however wrap your IOSurface up as a texture (using CGLTexImageIOSurface2D) and then use CIContext's contextWithCGLContext:…, and then finally use drawImage:inRect:fromRect: to do this.
It's a huge hack, but it works (mostly):
https://github.com/ccgus/FMMicroPaintPlus/blob/master/CIMicroPaint/FMIOSurfaceAccumulator.m
Since macOS 10.13 you can use CIRenderDestination and CIContext.startTask(toRender:from:to:at:) to achieve the same result without having to provide an intermediate image.
In my case I used a combination of Metal and Core Image to render only a subpart of the output image as part of my pipeline, as follows:
let renderDst = CIRenderDestination(mtlTexture: texture, commandBuffer: commandBuffer)
try! context.startTask(toRender: ciimage,
from: dirtyRect, to: renderDst, at: dirtyRect.origin)
As I'm already synchronizing against the MTLCommandBuffer I didn't need to synchronize against the returned CIRenderTask.
If you want more details you can check the slides (starting from slide 83) of Advances in Core Image: Filters, Metal, Vision, and More (WWDC 2017 video).

Easiest way to import image file to OpenGL ES 2.0 cross platform

I am learning to use OpenGL ES 2.0 by using MoSync to write cross platform C code. I have already managed to draw basic shapes such as a triangle, square and circle so the next stage is to draw some text to the screen. After reading various books, tutorials and forum posts I realise I have to create a texture atlas bitmap.
I have a file with the text I want to use, i.e. an image file containing the characters 0-9 and a-z. Before I can upload it and bind it to a texture object I first need to load the image data into memory. Various tutorials use UIImage or BitmapFactory to load the image, but I cannot use these as MoSync does not provide their header files. Could anyone suggest a way to load my image file into OpenGL?
To use MoSync on the Android platform you are probably going to have to make a native library for MoSync and your OpenGL ES code in C++. Most OpenGL ES projects on Android are done in native code for many reasons which are detailed in this article:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1/
I ended up using maOpenGLTexImage(MAHandle image), which works exactly like glTexImage2D() but takes an image resource instead and figures out pixel formats etc.
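A minimal sketch of how that call slots into the usual texture setup; RES_FONT_ATLAS is a hypothetical MoSync image resource handle, so adjust it to your project's resources:
GLuint texture = 0;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Uploads the image resource in place of glTexImage2D; MoSync works out
// the width, height and pixel format from the resource itself.
maOpenGLTexImage(RES_FONT_ATLAS);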

How to convert an OpenGL ES texture into a CIImage

I know how to do it the other way around. But how can I create a CIImage from a texture, without having to copy into CPU memory? [CIImage imageWithData]? CVOpenGLESTextureCache?
Unfortunately, I don't think there's any way to avoid having to read back pixel data using glReadPixels(). All of the inputs for a CIImage (data, CGImageRef, CVPixelBufferRef) are CPU-side, so I don't see a fast path to deliver that to a CIImage. It looks like your best alternative would be to use glReadPixels() to pull the raw RGBA data from your texture and send it into the CIImage using -initWithData:options: and a kCIFormatRGBA8 pixel format. (Update 3/14/2012: On iOS 5.0, there is now a faster way to grab OpenGL ES frame data, using the new texture caches. I describe this in detail in this answer.)
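A minimal sketch of that readback path, assuming an OpenGL ES 2.0 context and an RGBA texture tex of size width x height; the resulting buffer is what you would hand to -initWithData:options::
#include <OpenGLES/ES2/gl.h>
#include <vector>

// Attach the texture to a temporary framebuffer so glReadPixels can read it.
GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

// Pull the raw RGBA data back to the CPU.
std::vector<unsigned char> pixels(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);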
However, there might be another way to achieve what you want. If you simply want to apply filters on a texture for output to the screen, you might be able to use my GPUImage framework to do the processing. It already uses OpenGL ES 2.0 as the core of its rendering pipeline, with textures as the way that frames of images or video are passed from one filter to the next. It's also much faster than Core Image, in my benchmarks.
You can supply your texture as an input here, so that it never has to touch the CPU. I don't have a stock class for grabbing raw textures from OpenGL ES yet, but you can modify the code for one of the existing GPUImageOutput subclasses to use this as a source fairly easily. You can then chain filters on to that, and direct the output to the screen or to a still image. At some point, I'll add a class for this kind of data source, but the project's still fairly new.
As of iOS 6, you can use a built-in init method for this situation:
initWithTexture:size:flipped:colorSpace:
See the docs:
http://developer.apple.com/library/ios/#DOCUMENTATION/GraphicsImaging/Reference/QuartzCoreFramework/Classes/CIImage_Class/Reference/Reference.html
You might find these helpful:
https://developer.apple.com/library/ios/#samplecode/RosyWriter/Introduction/Intro.html
https://developer.apple.com/library/ios/#samplecode/GLCameraRipple/Listings/GLCameraRipple_RippleViewController_m.html
In general I think the image data will need to be copied from the GPU to the CPU. However the iOS features mentioned above might make this easier and more efficient.
