Where is the endianness of the frame buffer device accounted for? - linux-kernel

I'm working on a board with an AT91 SAMA5D3-based device. To test the framebuffer, I redirected a raw data file to /dev/fb0. The file was generated with GIMP and exported as raw data in the 'normal RGB' format, so as a serialised byte stream the file's layout is RGBRGB (where each colour is 8 bits).
When copied to the framebuffer, the red and blue channels were swapped, because the LCD controller operates in little-endian format when configured for 24 bpp mode.
This got me wondering, at what point is the endianness of the framebuffer taken into account? I'm planning on using directfb and had a look at some of the docs but didn't see anything directly alluding to the endianness of the pixel data.
I'm not familiar with how the kernel framebuffer works and so am missing a few pieces of the puzzle.
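As a practical workaround for the swap described above, the R and B bytes can be exchanged in userspace before writing to the device. This is a minimal sketch, assuming a 24 bpp mode whose in-memory byte order is effectively BGR; the file paths in the usage comment are placeholders.

```python
def rgb_to_bgr(data: bytes) -> bytes:
    """Swap bytes 0 and 2 of every 3-byte pixel (RGB -> BGR)."""
    out = bytearray(data)
    out[0::3] = data[2::3]  # blue channel moves into byte 0
    out[2::3] = data[0::3]  # red channel moves into byte 2
    return bytes(out)

# Usage (paths are placeholders):
#   with open("image.raw", "rb") as f, open("/dev/fb0", "wb") as fb:
#       fb.write(rgb_to_bgr(f.read()))
```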

Related

How to parse Raw16 data from packets that the Cypress CX3 transfers with YUV formatting

Hi all,
I use the Cypress USB 3.0 controller chip (CX3) to transfer image data from my sensor.
However, the CX3's SDK does not support the RAW16 data format. Since RAW16 and YUV2 both have a 16-bit width, I make the sensor output RAW16 and use the YUV2 format to transfer my raw data to the host. I do receive the data from the CX3 with MATLAB or the e-CAM tool, which can preview the video stream, but of course the colours are distorted, because the receiver treats the data (under the UVC protocol) as YUV data.
So my questions are:
What is the difference between the raw and YUV data ordering? Put another way, how does the receiver (camera tool) treat YUV data versus raw data?
How can I parse the raw data from the UVC packets which contain raw data but are packaged using the YUV format?

What exactly is the difference between MediaFoundation RGB data and a BMP?

In trying to understand how to convert Media Foundation RGB32 data into bitmap data that can be loaded into image/bitmap widgets or saved as a bitmap file, I am wondering what the RGB32 data actually is compared with the data a BMP holds.
Is it simply missing header information, or key information a bitmap file has such as width, height, etc.?
What does RGB32 actually mean compared with BMP data in a bitmap file or memory stream?
You normally have the 32-bit RGB as an IMFMediaBuffer attached to an IMFSample. This is just the bitmap bits, without format-specific metadata. You can access this data by obtaining the media buffer pointer, for example by calling IMFSample::ConvertToContiguousBuffer and then IMFMediaBuffer::Lock to get a pixel data pointer.
The obtained buffer is compatible with the data in a standard .BMP file (except that, at times, the rows may be in reverse order); a .BMP file just has a header before this data. A .BMP file normally has a BITMAPFILEHEADER structure, then a BITMAPINFOHEADER, and then the buffer in question. If you write them one after another, initialized respectively, this yields a valid picture file. This and other questions here show the way to create a .BMP file from bitmap bits.
See this GitHub code snippet, which is really close to the requested task and might be a good starting point.
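The point that a .BMP is just the two headers plus the same bits can be sketched as follows. This is an illustration, not the linked snippet; it assumes bottom-up 32-bit RGB bits (positive biHeight), the common case for Media Foundation RGB32 buffers.

```python
import struct

def wrap_rgb32_as_bmp(pixels: bytes, width: int, height: int) -> bytes:
    """Prepend BITMAPFILEHEADER (14 bytes) and BITMAPINFOHEADER (40 bytes)
    to raw 32-bit RGB bits. A positive biHeight means bottom-up rows."""
    assert len(pixels) == width * height * 4
    offset = 14 + 40                                # pixel data starts here
    file_header = struct.pack("<2sIHHI", b"BM",
                              offset + len(pixels), 0, 0, offset)
    info_header = struct.pack("<IiiHHIIiiII",
                              40, width, height,    # biSize, biWidth, biHeight
                              1, 32,                # biPlanes, biBitCount
                              0, len(pixels),       # BI_RGB, biSizeImage
                              0, 0, 0, 0)           # resolution, palette
    return file_header + info_header + pixels
```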

Best image format for/in CUDA image processing

I am new to image processing in CUDA and am currently learning whatever I can about it.
Can anyone tell me the appropriate format (image file extension) for storing and accessing image files so that CUDA processing is most efficient?
Also, why do all the sample CUDA programs for image processing use the .ppm file format?
Can I convert images in other formats to that format?
And how can I access those files from CUDA code?
Most image formats are designed for efficient exchange of images, i.e. on media (hard disks), the internet, etc.
For computation, the most useful representation of an image is usually in some raw, uncompressed format.
CUDA doesn't have any intrinsic functions that are used to manipulate an image in one of the interchange formats (e.g. .jpg, .png, .ppm, etc.) You should use some other library to convert an image in one of the interchange formats to a raw uncompressed format, and then you can operate on it directly in host code or in CUDA device code. Since CUDA doesn't recognize any interchange format, there is no one format that is correct or best to use. It will depend on other requirements you may have.
The sample programs that have used the .ppm format have simply done so for convenience. There are plenty of sample codes out there that use other formats such as .jpg or .bmp to store an image used by a CUDA program.
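To show why .ppm is convenient for samples: a binary P6 file is just a tiny ASCII header followed by raw RGB bytes, which is exactly the uncompressed representation the answer recommends handing to a kernel. A minimal pure-Python reader sketch (assumes maxval 255 and comments only at token boundaries):

```python
def read_ppm_p6(data: bytes):
    """Parse a binary P6 PPM into (width, height, raw RGB bytes)."""
    tokens, pos = [], 0
    while len(tokens) < 4:                      # magic, width, height, maxval
        while data[pos:pos + 1].isspace():      # skip whitespace
            pos += 1
        if data[pos:pos + 1] == b"#":           # skip comment lines
            pos = data.index(b"\n", pos) + 1
            continue
        start = pos
        while not data[pos:pos + 1].isspace():
            pos += 1
        tokens.append(data[start:pos])
    magic, w, h, maxval = (t.decode("ascii") for t in tokens)
    assert magic == "P6" and int(maxval) == 255
    width, height = int(w), int(h)
    pixels = data[pos + 1:]                     # one whitespace byte, then data
    assert len(pixels) == width * height * 3
    return width, height, pixels
```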

Failing to generate a correct WAV file from a raw stream

I captured a raw audio data stream together with its WAVEFORMATEXTENSIBLE struct.
The WAVEFORMATEXTENSIBLE is shown in the figure below:
Following the WAV file standard, I tried to write the raw bits into a WAV file.
What I do is:
write "RIFF".
write a DWORD. (filesize - sizeof("RIFF") - sizeof(DWORD)).
=== WaveFormat Chunk ===
write "WAVEfmt "
write a DWORD. (size of the WAVEFORMATEXTENSIBLE struct)
write the WAVEFORMATEXTENSIBLE struct.
=== Fact Chunk ===
write "fact"
write a DWORD. ( 4 )
write a DWORD. ( num of samples in the stream, which should be sizeof(rawdata)*8/wBitsPerSample ).
=== Data Chunk ===
write "data"
write a DWORD (size of rawdata)
write the raw data.
After generating the WAV file with the steps above, I played it in a media player: there is no sound. Playing it in Audacity gives me a distorted sound; I can hear that it is the correct audio, but it is distorted with noise.
The raw data can be found here
The wav file I generate is here
It is very confusing to me, because when I use the same method to convert IEEE-float data to a WAV file, it works just fine.
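The chunk layout in the steps above can be sketched as follows. For brevity this writes a plain 16-byte WAVEFORMATEX 'fmt ' chunk rather than the full WAVEFORMATEXTENSIBLE; the parameters are illustrative (fmt_tag is 1 for PCM, 3 for IEEE float).

```python
import struct

def build_wav(raw: bytes, fmt_tag: int, channels: int,
              rate: int, bits: int) -> bytes:
    """Assemble RIFF/WAVE chunks in the order described above:
    'fmt ', then 'fact', then 'data'."""
    block_align = channels * bits // 8
    fmt = struct.pack("<HHIIHH", fmt_tag, channels, rate,
                      rate * block_align, block_align, bits)
    fact = struct.pack("<I", len(raw) * 8 // bits)   # sample count
    body = (b"WAVE"
            + b"fmt " + struct.pack("<I", len(fmt)) + fmt
            + b"fact" + struct.pack("<I", len(fact)) + fact
            + b"data" + struct.pack("<I", len(raw)) + raw)
    return b"RIFF" + struct.pack("<I", len(body)) + body
```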
I figured this out: it seems the GetBuffer/ReleaseBuffer cycle in IAudioRenderClient delivers raw data in the same format as the one passed into the Initialize method of IAudioClient.
In my case, GetMixFormat in IAudioClient returns a format different from the one passed into Initialize; I think GetMixFormat returns the format that the device supports.
IAudioClient must have converted from the initialized format to the mix format. I intercepted the Initialize method, got the format, and it works like a charm.
I'm intercepting WASAPI to access the audio data and faced the exact same issue: the audio file generated from the data sounds like the correct content but is somehow very noisy, although the frame rate, sample width, number of channels, etc. are set properly.
The SubFormat field of WAVEFORMATEXTENSIBLE shows that the data is actually KSDATAFORMAT_SUBTYPE_IEEE_FLOAT, while I originally treated it as integers. According to this page, KSDATAFORMAT_SUBTYPE_IEEE_FLOAT is equivalent to WAVE_FORMAT_IEEE_FLOAT in WAVEFORMATEX. Hence, setting the "audio format" field in the WAV file's fmt chunk (normally starting at byte offset 20) to WAVE_FORMAT_IEEE_FLOAT (which is 3) solved the problem. Remember to write it in little endian.
Original value of audio format
After modification
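The fix described above, overwriting the format tag at byte offset 20 with WAVE_FORMAT_IEEE_FLOAT (3) in little-endian order, can be sketched as a small patch. It assumes the 'fmt ' chunk immediately follows the RIFF header, as in a conventionally laid-out WAV file:

```python
import struct

WAVE_FORMAT_IEEE_FLOAT = 3

def patch_format_tag(wav: bytes, tag: int = WAVE_FORMAT_IEEE_FLOAT) -> bytes:
    """Overwrite wFormatTag at byte offset 20 of a standard WAV file,
    little endian, leaving everything else untouched."""
    return wav[:20] + struct.pack("<H", tag) + wav[22:]
```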

Container format for ETC1 textures

I'm looking for a format that supports mipmaps, cubemaps and 3D textures for use in an OpenGL ES 2.0 game. On Windows, I was using the .dds format because of its support for DXT compression. For mobile platforms, there are .pkm files, which don't support multiple textures, and .pvr files, which I think are dependent on PowerVR platforms. So:
-Can I use .dds with ETC1 compression? Is there a licensing issue that prevents me from using .dds on platforms other than Windows?
-Do other GPU vendors' products (Adreno, Mali, etc.) support .pvr files? (Not PVRTC, just .pvr with ETC1 compression.)
-Or is there another file format that I can use for my needs?
Yes, you can use DDS for ETC1; just invent your own FOURCC code. As far as I know, DDS is not patented.
No GPU vendor supports the .pvr file format (including PowerVR). GPU vendors care only about the compressed texture data (PVRTC, ETC, DXTC), not about the file format (PNG, JPEG, DDS, PVR). It is the user's/application's responsibility to parse the file format and extract the texture data (compressed or not).
You can use any file format that meets your needs. Invent your own, for example like this:
[4 bytes] - width
[4 bytes] - height
[4 bytes] - format id (1 - etc1, 2 - dxt, 3 - ... whatever)
[4 bytes] - count of images (mipmaps/cubemaps/whatever)
[bytes] - data
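The roll-your-own container suggested above can be sketched with a fixed 16-byte header. The field widths follow the layout above; everything else (little-endian order, the helper names) is an illustrative choice:

```python
import struct

HEADER = "<4I"  # width, height, format id, image count (all 4 bytes each)

def pack_container(width: int, height: int, fmt_id: int, images) -> bytes:
    """Pack the header described above, then the concatenated image data."""
    return struct.pack(HEADER, width, height, fmt_id, len(images)) + b"".join(images)

def unpack_header(blob: bytes):
    """Return (width, height, format id, image count)."""
    return struct.unpack_from(HEADER, blob, 0)
```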
Or is there another file format that I can use for my needs?
You might want to look at http://www.khronos.org/opengles/sdk/tools/KTX/
and, for a program to create KTX files, http://www.malideveloper.com/texture-compression-tool.php
The KTX format supports ETC1-compressed textures with mipmaps. It should also support other compression formats, but I don't know of other tools that can produce them (I've never needed them).
Using libktx you can load textures (with mipmaps) from a file or memory into GL objects with a "single" line of code. It can also decompress ETC1 textures to GL_RGB while loading the .ktx file if the device doesn't support ETC1 (you need to set GLEW_OES_compressed_ETC1_RGB8_texture manually, like here).
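For reference, inspecting the KTX container mentioned above is straightforward; this sketch reads the fixed 64-byte header laid out in the Khronos KTX 1.1 specification (field order per the spec):

```python
import struct

KTX1_MAGIC = b"\xabKTX 11\xbb\r\n\x1a\n"  # 12-byte file identifier

def parse_ktx_header(data: bytes) -> dict:
    """Decode the 64-byte KTX 1.1 header into a field-name -> value dict."""
    assert data[:12] == KTX1_MAGIC, "not a KTX 1.1 file"
    fields = struct.unpack("<13I", data[12:64])
    names = ("endianness", "glType", "glTypeSize", "glFormat",
             "glInternalFormat", "glBaseInternalFormat",
             "pixelWidth", "pixelHeight", "pixelDepth",
             "numberOfArrayElements", "numberOfFaces",
             "numberOfMipmapLevels", "bytesOfKeyValueData")
    return dict(zip(names, fields))
```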