Container format for ETC1 textures - opengl-es

I'm looking for a format that supports mipmaps, cubemaps and 3D textures for use in an OpenGL ES 2.0 game. On Windows, I was using the .dds format because of its support for DXT compression. For mobile platforms, I think there are .pkm files, which don't support multiple textures, and .pvr files, which I 'think' are dependent on PowerVR platforms. So:
-Can I use .dds with ETC1 compression? Is there a license issue that prevents me from using .dds on platforms other than Windows?
-Do other GPU vendors' products (Adreno, Mali, etc.) support .pvr files? (Not PVRTC, just .pvr with ETC1 compression.)
-Or is there another file format that fits my needs?

Yes, you can use DDS for ETC1. Just invent your own FOURCC code; as far as I know, DDS is not patented.
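For illustration, here is a minimal sketch of how a FOURCC is built; the 'ETC1' code below is an arbitrary choice your own loader would have to honour, not an official DDS value:

#include <cstdint>

// A FOURCC is just four ASCII characters packed into a 32-bit value.
// Writing this into the dwFourCC field of the DDS pixel-format header
// marks the payload as ETC1 for any loader that knows the convention.
constexpr uint32_t makeFourCC(char a, char b, char c, char d)
{
    return static_cast<uint32_t>(a)       |
           static_cast<uint32_t>(b) << 8  |
           static_cast<uint32_t>(c) << 16 |
           static_cast<uint32_t>(d) << 24;
}

constexpr uint32_t FOURCC_ETC1 = makeFourCC('E', 'T', 'C', '1');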
No GPU vendor supports the .pvr file format (including PowerVR). GPU vendors care only about the compressed texture data (PVRTC, ETC, DXTC), not about the file format (png, jpeg, dds, pvr). It is the application's responsibility to parse the file format and extract the texture data (compressed or not).
You can use any file format that is good for your needs, or invent your own. For example, like this:
[4 bytes] - width
[4 bytes] - height
[4 bytes] - format id (1 - etc1, 2 - dxt, 3 - ... whatever)
[4 bytes] - count of images (mipmaps/cubemaps/whatever)
[bytes] - data
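A minimal reader for this hypothetical layout might look like the following sketch (the struct and field names are made up for illustration):

#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical reader for the ad-hoc container sketched above.
struct TextureFile {
    uint32_t width, height;
    uint32_t formatId;   // 1 = ETC1, 2 = DXT, ...
    uint32_t imageCount; // mipmaps / cubemap faces / ...
    std::vector<uint8_t> data;
};

bool readTextureFile(const char* path, TextureFile& out)
{
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;

    uint32_t header[4]; // width, height, format id, image count
    if (std::fread(header, sizeof(uint32_t), 4, f) != 4) { std::fclose(f); return false; }
    out.width      = header[0];
    out.height     = header[1];
    out.formatId   = header[2];
    out.imageCount = header[3];

    // Everything after the header is the (compressed) texture payload.
    long dataStart = std::ftell(f);
    std::fseek(f, 0, SEEK_END);
    out.data.resize(std::ftell(f) - dataStart);
    std::fseek(f, dataStart, SEEK_SET);
    std::fread(out.data.data(), 1, out.data.size(), f);
    std::fclose(f);
    return true;
}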

Or is there another file format that I can use for my needs?
You might want to look at http://www.khronos.org/opengles/sdk/tools/KTX/
and, for a program to create KTX files, http://www.malideveloper.com/texture-compression-tool.php
The KTX format supports ETC1-compressed textures with mipmaps. It should also support other compression formats, but I don't know of other tools that can produce them (I've never needed to).
Using libktx you can load textures (with mipmaps) from a file or from memory into GL objects with a "single" line of code. It can also decompress ETC1 textures to GL_RGB while loading the .ktx file if the device doesn't support ETC1 (you need to set GLEW_OES_compressed_ETC1_RGB8_texture manually, like here).
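As a sketch, this is roughly what the loading looks like with the current libktx API (ktxTexture_CreateFromNamedFile / ktxTexture_GLUpload); older libktx releases expose a different entry point (ktxLoadTextureN), so check the version you build against:

#include <GLES2/gl2.h>
#include <ktx.h>   // libktx, from the Khronos KTX-Software repository

// Sketch: load a .ktx file, mipmaps included, straight into a GL texture object.
GLuint loadKtxTexture(const char* path)
{
    ktxTexture* kTexture = nullptr;
    if (ktxTexture_CreateFromNamedFile(path,
            KTX_TEXTURE_CREATE_LOAD_IMAGE_DATA_BIT, &kTexture) != KTX_SUCCESS)
        return 0;

    GLuint texture = 0;
    GLenum target = 0, glError = 0;
    KTX_error_code result = ktxTexture_GLUpload(kTexture, &texture, &target, &glError);
    ktxTexture_Destroy(kTexture);
    return result == KTX_SUCCESS ? texture : 0;
}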

Related

How to parse Raw16 data from the packets that the Cypress CX3 transfers with YUV formatting

Hi all,
I use the Cypress USB 3.0 controller chip (CX3) to transfer the image data from my sensor.
But the CX3's SDK does not support the Raw16 data format. Since the bit widths of Raw16 and YUY2 are both 16 bits, I make the sensor output Raw16 and use the YUY2 format to transfer my raw data to the host. I really do receive the data from the CX3, using MATLAB or the e-CAM tool, which can preview the video stream. But of course the colors are distorted, because the receiver treats the data (under the UVC protocol) as YUV data.
So my questions are:
What is the difference between the raw and YUV data order? Put another way, how does the receiver (camera tool) treat YUV data versus raw data?
How can I parse the raw data from the UVC packets which contain the raw data but are packaged using the YUV format?
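A minimal sketch of the second step, assuming the sensor bytes pass through the UVC pipeline unchanged and arrive low byte first (swap the two bytes if your sensor sends the opposite order):

#include <cstddef>
#include <cstdint>

// Both Raw16 and YUY2 are 16 bits per pixel, so the "YUY2" label only affects
// how viewers interpret the bytes; the payload itself is the untouched sensor
// data. Reassemble each pair of bytes into one 16-bit raw sample.
void unpackRaw16(const uint8_t* uvcPayload, uint16_t* raw, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i) {
        raw[i] = static_cast<uint16_t>(uvcPayload[2 * i]) |
                 static_cast<uint16_t>(uvcPayload[2 * i + 1]) << 8;
    }
}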

What exactly is the difference between MediaFoundation RGB data and a BMP?

In trying to understand how to convert Media Foundation RGB32 data into bitmap data that can be loaded into image/bitmap widgets or saved as a bitmap file, I am wondering what the RGB32 data actually is in comparison to the data a BMP holds.
Is it simply missing the header information, or key information a bitmap file has like width, height, etc.?
What does RGB32 actually mean, in comparison to BMP data in a bitmap file or memory stream?
You normally have 32-bit RGB as an IMFMediaBuffer attached to an IMFSample. This is just the bitmap bits, without format-specific metadata. You can access this data by obtaining the media buffer pointer, for example by calling IMFSample::ConvertToContiguousBuffer and then IMFMediaBuffer::Lock to get a pointer to the pixel data.
The obtained buffer is compatible with the data in a standard .BMP file (except that the rows may at times be in reverse order); a .BMP file simply has a header before this data. A .BMP file normally contains a BITMAPFILEHEADER structure, then a BITMAPINFOHEADER, and then the buffer in question. If you write them one after another, initialized respectively, this yields a valid picture file. This and other questions here show the way to create a .BMP file from bitmap bits.
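A minimal sketch of that write-out, assuming the usual bottom-up 32-bit layout (negate biHeight if your rows arrive top-down); WriteBmp32 is a made-up helper name:

#include <windows.h>   // BITMAPFILEHEADER, BITMAPINFOHEADER, BI_RGB
#include <cstdio>

// Prepend the two BMP headers to raw 32-bit pixel data and write it to disk.
bool WriteBmp32(const char* path, const void* bits, LONG width, LONG height)
{
    const DWORD imageSize = width * height * 4;

    BITMAPINFOHEADER bih = {};
    bih.biSize        = sizeof(bih);
    bih.biWidth       = width;
    bih.biHeight      = height;      // positive height = bottom-up rows
    bih.biPlanes      = 1;
    bih.biBitCount    = 32;
    bih.biCompression = BI_RGB;
    bih.biSizeImage   = imageSize;

    BITMAPFILEHEADER bfh = {};
    bfh.bfType    = 0x4D42;          // 'BM'
    bfh.bfOffBits = sizeof(bfh) + sizeof(bih);
    bfh.bfSize    = bfh.bfOffBits + imageSize;

    FILE* f = nullptr;
    if (fopen_s(&f, path, "wb") != 0 || !f)
        return false;
    fwrite(&bfh, sizeof(bfh), 1, f);
    fwrite(&bih, sizeof(bih), 1, f);
    fwrite(bits, imageSize, 1, f);
    fclose(f);
    return true;
}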
See this GitHub code snippet, which is really close to the requested task and might be a good starting point.

Where is the endianness of the frame buffer device accounted for?

I'm working on a board with an AT91 SAMA5D3-based device. In order to test the framebuffer, I redirected a raw data file to /dev/fb0. The file was generated with GIMP and exported as a raw data file in the 'normal RGB' format, so as a serialized byte stream the data format in the file is RGBRGB (where each colour is 8 bits).
When copied to the framebuffer, the reds and blues were swapped, as the LCD controller operates in little-endian format when configured for 24 bpp mode.
This got me wondering: at what point is the endianness of the framebuffer taken into account? I'm planning on using DirectFB and had a look at some of the docs, but didn't see anything directly alluding to the endianness of the pixel data.
I'm not familiar with how the kernel framebuffer works and so am missing a few pieces of the puzzle.
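One place to look is the variable screen info that the fbdev driver exports; a short sketch that prints where each colour channel sits within a pixel, which is effectively where the byte order is described:

#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int fd = open("/dev/fb0", O_RDONLY);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    fb_var_screeninfo vinfo;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0) {
        perror("FBIOGET_VSCREENINFO");
        close(fd);
        return 1;
    }

    // The driver reports the bit offset of each channel within a pixel;
    // this, not a global "endianness" flag, defines the expected layout.
    std::printf("bpp=%u red@%u green@%u blue@%u\n",
                vinfo.bits_per_pixel,
                vinfo.red.offset, vinfo.green.offset, vinfo.blue.offset);
    close(fd);
    return 0;
}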

Best image format for/in CUDA image processing

I am new to image processing in CUDA.
I am currently learning whatever I can about this.
Can anyone tell me what the appropriate format (image file extension) is for storing and accessing image files, so that CUDA processing has the most efficiency?
And why do all the sample CUDA programs for image processing use the .ppm file format for images?
And can I convert images in other formats to that format?
And how can I access those files (in CUDA code)?
Most image formats are created for efficient exchange of images, i.e. on media (hard disk), the internet, etc.
For computation, the most useful representation of an image is usually some raw, uncompressed format.
CUDA doesn't have any intrinsic functions for manipulating an image in one of the interchange formats (e.g. .jpg, .png, .ppm, etc.). You should use some other library to convert an image in one of the interchange formats to a raw, uncompressed format, and then you can operate on it directly in host code or in CUDA device code. Since CUDA doesn't recognize any interchange format, there is no one format that is correct or best to use; it will depend on other requirements you may have.
The sample programs that have used the .ppm format have simply done so for convenience. There are plenty of sample codes out there that use other formats such as .jpg or .bmp to store an image used by a CUDA program.
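The convenience comes from how trivially .ppm parses; here is a minimal sketch of a binary PPM (P6) loader that yields a raw RGB buffer ready for cudaMemcpy (no comment handling or robust error checking):

#include <cstdio>
#include <cstdlib>
#include <vector>

// Binary PPM (P6) is just a tiny text header followed by raw RGB bytes,
// which is why many CUDA samples use it: no decoder library is needed.
std::vector<unsigned char> loadPpm(const char* path, int& width, int& height)
{
    FILE* f = std::fopen(path, "rb");
    if (!f) { std::perror(path); std::exit(1); }

    int maxval = 0;
    if (std::fscanf(f, "P6 %d %d %d", &width, &height, &maxval) != 3 || maxval != 255) {
        std::fprintf(stderr, "unsupported PPM file\n");
        std::exit(1);
    }
    std::fgetc(f); // consume the single whitespace byte after the header

    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 3);
    std::fread(pixels.data(), 1, pixels.size(), f);
    std::fclose(f);
    return pixels; // raw RGB, ready to cudaMemcpy onto the device
}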

Lossless JPEG - can't find any example images, DICOM files

I'm currently working on lossless JPEG files (not JPEG-LS). It's really hard to find any files to test my application on.
In particular, I need files that contain restart interval markers, multiple DC Huffman tables, multiple scans, or comment markers.
Do you know where I could find any lossless JPEG files? Do you yourself have any that you could share?
Thanks in advance, Witek.
EDIT: I could also use DICOM files using this compression standard (tag (0002,0010) Transfer Syntax UID = 1.2.840.10008.1.2.4.70).
On the following site you can find a few DICOM lossless JPEG files, in particular with the transfer syntaxes 1.2.840.10008.1.2.4.57 and .70. Consult the Transfer Syntax section for easy identification of which data sets provide the requested transfer syntax.
There are also a number of lossless JPEG images of different flavors on the NEMA DICOM FTP site. For more detailed information on the various data sets, please consult the README file.
Here's a large collection of DICOM sample images: there are some JPEG lossless images among them. Some subfolders have images that are not valid DICOM, but that is usually documented. The same maintainer also provides this list of links.
Lossless JPEG is most widely used in XA (cath lab) cine images. These are always grayscale and exist as 8- or 10-bit images.
You could also set up a free PACS like DCM4CHEE or Conquest, send it uncompressed images, and have it forward the images JPEG-lossless compressed. The advantage of this is that you can create images with different color spaces, bit depths, planar/by-pixel configurations, et cetera. Color spaces are interesting: people sometimes make the mistake of transforming the color space as is done for lossy JPEG, which you should not do.
Most likely none of these images use advanced features like restart markers. If you want to check whether those work, create bitstreams with the IJG implementation and package them in DICOM.
EDIT: be warned that there are buggy images out there. I am using an implementation based on the IJG code.
