Are there any patterns about when to use up-to-row-start image size in glCompressedTexSubImage2D? - opengl-es

I'm using glCompressedTexSubImage2D to update data in an ASTC_RGBA_6x6 format texture, but it seems that on some devices you have to pass an up-to-row-start size for the imageSize parameter rather than the actual data size, as described in "GL_INVALID_VALUE on glCompressedTexSubImage2D with an ASTC 8x8 texture".
Does anyone know how to tell when to use the up-to-row-start size and when to use the original size on a given device? Is there an API or device parameter I can check? Thank you in advance for any suggestion.
So far the Redmi 4 is the only device I know of with this problem.
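For reference, the imageSize the spec calls for is one 16-byte block per 6x6 footprint, rounded up in each dimension: ceil(width/6) * ceil(height/6) * 16 bytes. A minimal C sketch of that computation (the helper name and the usage comment are illustrative, not from the question):

#include <GLES2/gl2.h>

/* Spec-defined size of an ASTC-compressed region: one 16-byte block per
 * (block_w x block_h) footprint, rounded up in each dimension. */
static GLsizei astc_region_size(GLsizei width, GLsizei height,
                                GLsizei block_w, GLsizei block_h)
{
    GLsizei blocks_x = (width  + block_w - 1) / block_w;   /* ceil(width  / block_w) */
    GLsizei blocks_y = (height + block_h - 1) / block_h;   /* ceil(height / block_h) */
    return blocks_x * blocks_y * 16;
}

/* e.g. for a 6x6 region update (variables assumed to exist; the enum comes
 * from GLES2/gl2ext.h):
 * glCompressedTexSubImage2D(GL_TEXTURE_2D, 0, xoff, yoff, w, h,
 *                           GL_COMPRESSED_RGBA_ASTC_6x6_KHR,
 *                           astc_region_size(w, h, 6, 6), data);
 */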

Related

How is image_row_pitch calculated for a 2D image object in OpenCL when using clEnqueueMapImage()?

I was working with some simple code (on a Qualcomm Adreno GPU) that takes a 1024x768 grayscale image and adds 100 to each pixel element. There may be different ways to create and use a 2D image object, but I used the following channel order and data type:
CL_RGBA and CL_UNSIGNED_INT16.
I group 4 pixels of 2 bytes each, so when creating the image2d object with clCreateImage() I use width = 1024/4. I then map the created image object with clEnqueueMapImage() with region[3] = {1024/4, 768, 1}.
When I print image_row_pitch after this mapping call I get 2112, which by my understanding should be 256 (width/4) * 4 (RGBA) * 2 (sizeof uint16) = 2048. I have also noticed that if I use width = 960 I get a pitch of 1920, which is what I expect. I don't understand why the row pitch comes out larger than it should be.
What am I missing here? Kindly help.
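In general the safe pattern is to treat image_row_pitch as opaque: an OpenCL implementation is allowed to pad each row of a mapped image, so always address rows using the pitch the runtime returns rather than width * bytes_per_pixel. A minimal C sketch under that assumption (queue and image stand in for your existing objects):

#include <CL/cl.h>
#include <stdint.h>

/* Walk a mapped CL_RGBA / CL_UNSIGNED_INT16 image row by row using the pitch
 * reported by the runtime.  queue and image are assumed to already exist. */
void add_100_mapped(cl_command_queue queue, cl_mem image)
{
    size_t origin[3] = {0, 0, 0};
    size_t region[3] = {1024 / 4, 768, 1};          /* matches the question */
    size_t row_pitch = 0, slice_pitch = 0;
    cl_int err = CL_SUCCESS;

    uint8_t *base = (uint8_t *)clEnqueueMapImage(queue, image, CL_TRUE,
                                                 CL_MAP_READ | CL_MAP_WRITE,
                                                 origin, region,
                                                 &row_pitch, &slice_pitch,
                                                 0, NULL, NULL, &err);
    if (err != CL_SUCCESS || base == NULL)
        return;

    for (size_t y = 0; y < region[1]; ++y) {
        /* Row y starts at base + y * row_pitch (2112 here), even though the
         * packed row is only 256 * 4 * sizeof(cl_ushort) = 2048 bytes. */
        cl_ushort *row = (cl_ushort *)(base + y * row_pitch);
        for (size_t x = 0; x < region[0] * 4; ++x)
            row[x] = (cl_ushort)(row[x] + 100);
    }

    clEnqueueUnmapMemObject(queue, image, base, 0, NULL, NULL);
}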

Render CIImage to a specific rect in IOSurface

I'm trying to render a CIImage to a specific location in an IOSurface using [CIContext render:toIOSurface:bounds:colorSpace:] by specifying the bounds argument r as the destination rectangle.
According to the documentation this should work, but Core Image always renders the image to the bottom-left corner of the IOSurface.
It seems to me like a bug in CoreImage.
I can work around this by rendering the image to an intermediate IOSurface the same size as the CIImage and then copying the contents of that surface into the destination surface.
However, I would like a solution that avoids the extra allocation and copy.
Any suggestion?
What you want to happen isn't currently possible with that API (which is a huge bummer).
You can however wrap your IOSurface up as a texture (using CGLTexImageIOSurface2D) and then use CIContext's contextWithCGLContext:…, and then finally use drawImage:inRect:fromRect: to do this.
It's a huge hack, but it works (mostly):
https://github.com/ccgus/FMMicroPaintPlus/blob/master/CIMicroPaint/FMIOSurfaceAccumulator.m
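For reference, the texture-wrapping step of that hack looks roughly like the C sketch below; the CIContext/drawImage: half is Objective-C and is what the linked FMIOSurfaceAccumulator shows. The rectangle target and BGRA format/type here are my assumptions about the surface, not taken from the linked code:

#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>
#include <OpenGL/glext.h>
#include <OpenGL/CGLIOSurface.h>
#include <IOSurface/IOSurface.h>

/* Wrap an existing IOSurface as a rectangle texture in the given CGL context.
 * The format/type below assume a BGRA surface. */
GLuint wrap_iosurface_as_texture(CGLContextObj cgl, IOSurfaceRef surface)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
    CGLTexImageIOSurface2D(cgl, GL_TEXTURE_RECTANGLE_ARB, GL_RGBA,
                           (GLsizei)IOSurfaceGetWidth(surface),
                           (GLsizei)IOSurfaceGetHeight(surface),
                           GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                           surface, 0);
    return tex;
}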
Since macOS 10.13 you can use CIRenderDestination and CIContext.startTask(toRender:from:to:at:) to achieve the same result without having to provide an intermediate image.
In my case I used a combination of Metal and Core Image to render only a subpart of the output image as part of my pipeline, as follows:
let renderDst = CIRenderDestination(mtlTexture: texture, commandBuffer: commandBuffer)
try! context.startTask(toRender: ciimage, from: dirtyRect, to: renderDst, at: dirtyRect.origin)
As I'm already synchronizing against the MTLCommandBuffer I didn't need to synchronize against the returned CIRenderTask.
If you want more details, you can check the slides (starting from slide 83) of Advances in Core Image: Filters, Metal, Vision, and More (WWDC 2017 video).

how to load an image in opengl

I need to load an image to use as my background...
Does anyone know how to do this?
I know I have to do the following steps:
1) Load image data into system memory
2) Generate a texture name with glGenTextures
3) Bind the texture name with glBindTexture
4) Set wrapping and filtering mode with glTexParameter
5) Call glTexImage2D with the right parameters for the image's format to load the image data into video memory
but I don't know how to put them together in OpenGL.
OpenGL supports textures and images, but the user needs to provide the data, so you have to use some library or additional code to load it.
I suggest using the very simple SOIL library - http://www.lonesock.net/soil.html
Or some library provided by your SDK.
in general:
// load texture bytes into pBytes with your image library of choice
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pBytes);
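Putting the steps above together, a minimal C sketch might look like the following; the load_image_rgba() helper is a placeholder for whatever image library you use (for example SOIL's SOIL_load_image):

#include <GLES2/gl2.h>   /* or the desktop <GL/gl.h>; the calls are the same */

/* Hypothetical loader provided by your image library, returning tightly
 * packed RGBA8 pixels. */
extern unsigned char *load_image_rgba(const char *path, int *width, int *height);

GLuint create_background_texture(const char *path)
{
    int width = 0, height = 0;
    unsigned char *pixels = load_image_rgba(path, &width, &height);        /* step 1 */

    GLuint tex = 0;
    glGenTextures(1, &tex);                                                 /* step 2 */
    glBindTexture(GL_TEXTURE_2D, tex);                                      /* step 3 */

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);    /* step 4 */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,               /* step 5 */
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* free pixels with the routine matching your loader */
    return tex;
}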

How to convert an OpenGL ES texture into a CIImage

I know how to do it the other way around. But how can I create a CIImage from a texture without having to copy it into CPU memory? [CIImage imageWithData]? CVOpenGLESTextureCache?
Unfortunately, I don't think there's any way to avoid having to read back pixel data using glReadPixels(). All of the inputs for a CIImage (data, CGImageRef, CVPixelBufferRef) are CPU-side, so I don't see a fast path to deliver that to a CIImage. It looks like your best alternative would be to use glReadPixels() to pull in the raw RGBA data from your texture and send it into the CIImage using -initWithData:options: and a kCIFormatRGBA8 pixel format.
(Update 3/14/2012) On iOS 5.0, there is now a faster way to grab OpenGL ES frame data, using the new texture caches. I describe this in detail in this answer.
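As a rough C sketch of the glReadPixels readback path (names like read_texture_rgba are mine; it assumes an ES2 context is current and the texture is RGBA and framebuffer-attachable), the returned buffer is what you would then hand to -initWithData:options: as described above:

#include <OpenGLES/ES2/gl.h>
#include <stdlib.h>

/* Attach the texture to a throwaway FBO and read its contents back as RGBA8. */
unsigned char *read_texture_rgba(GLuint tex, GLsizei w, GLsizei h)
{
    GLuint fbo = 0;
    unsigned char *pixels = (unsigned char *)malloc((size_t)w * (size_t)h * 4);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    return pixels;   /* caller frees; rows come back bottom-up in GL convention */
}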
However, there might be another way to achieve what you want. If you simply want to apply filters on a texture for output to the screen, you might be able to use my GPUImage framework to do the processing. It already uses OpenGL ES 2.0 as the core of its rendering pipeline, with textures as the way that frames of images or video are passed from one filter to the next. It's also much faster than Core Image, in my benchmarks.
You can supply your texture as an input here, so that it never has to touch the CPU. I don't have a stock class for grabbing raw textures from OpenGL ES yet, but you can modify the code for one of the existing GPUImageOutput subclasses to use this as a source fairly easily. You can then chain filters on to that, and direct the output to the screen or to a still image. At some point, I'll add a class for this kind of data source, but the project's still fairly new.
As of iOS 6, you can use a built-in init method for this situation:
initWithTexture:size:flipped:colorSpace:
See the docs:
http://developer.apple.com/library/ios/#DOCUMENTATION/GraphicsImaging/Reference/QuartzCoreFramework/Classes/CIImage_Class/Reference/Reference.html
You might find these helpful:
https://developer.apple.com/library/ios/#samplecode/RosyWriter/Introduction/Intro.html
https://developer.apple.com/library/ios/#samplecode/GLCameraRipple/Listings/GLCameraRipple_RippleViewController_m.html
In general I think the image data will need to be copied from the GPU to the CPU. However, the iOS features mentioned above might make this easier and more efficient.

Waterfall display

I need help creating a waterfall display of my image data stored in a buffer. The stream of image data needs to be displayed scrolling down the screen as it's being acquired from the camera.
I am using visual studio c++ windows forms.
Can someone please help me to figure out how to achieve this display?
Thanks in advance
I think the information you've provided is too minimal to suggest anything worthwhile.
For making custom graphical effects, the usual suggested route is to make a DIB bitmap, which gives you access to the raw bytes. Alter the bytes any way you see fit (adding the stream of raw image bytes from your camera) and then blit it to the window's HDC in a timely fashion.
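A bare-bones C sketch of that DIB approach (the sizes, names, and 32-bit format are assumptions, not from the question): keep a top-down DIB section, shift its rows down by one whenever a camera line arrives, and blit it from your paint handler.

#include <windows.h>
#include <string.h>

#define WIDTH  640      /* assumed camera line width        */
#define HEIGHT 480      /* assumed on-screen history height */

static HBITMAP g_dib;
static BYTE   *g_bits;  /* top-down 32-bit BGRA rows, WIDTH * 4 bytes each */

void create_waterfall_dib(HDC hdc)
{
    BITMAPINFO bmi;
    memset(&bmi, 0, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = WIDTH;
    bmi.bmiHeader.biHeight      = -HEIGHT;          /* negative => top-down rows */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;               /* BGRA, no palette needed */
    bmi.bmiHeader.biCompression = BI_RGB;
    g_dib = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, (void **)&g_bits, NULL, 0);
}

/* Call for every new line acquired from the camera (WIDTH * 4 bytes). */
void push_camera_line(const BYTE *line)
{
    const size_t row = WIDTH * 4;
    memmove(g_bits + row, g_bits, row * (HEIGHT - 1));   /* scroll down one row */
    memcpy(g_bits, line, row);                           /* newest line on top  */
}

/* Blit the DIB to the window from WM_PAINT or a timer. */
void paint_waterfall(HDC hdc)
{
    HDC mem = CreateCompatibleDC(hdc);
    HGDIOBJ old = SelectObject(mem, g_dib);
    BitBlt(hdc, 0, 0, WIDTH, HEIGHT, mem, 0, 0, SRCCOPY);
    SelectObject(mem, old);
    DeleteDC(mem);
}

In a Windows Forms paint handler you can get the HDC for the blit from Graphics::GetHdc() and release it with ReleaseHdc() when done.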
