While trying to understand how to convert Media Foundation RGB32 data into bitmap data that can be loaded into image/bitmap widgets or saved as a bitmap file, I am wondering what the RGB32 data actually is compared with the data a BMP holds.
Is it simply missing the header information a bitmap file has, such as width, height, etc.?
What does RGB32 actually mean, in comparison to BMP data in a bitmap file or memory stream?
You normally have 32-bit RGB as an IMFMediaBuffer attached to an IMFSample. This is just the bitmap bits, without any format-specific metadata. You can access this data by obtaining a media buffer pointer, for example by calling IMFSample::ConvertToContiguousBuffer and then IMFMediaBuffer::Lock to get a pointer to the pixel data.
The obtained buffer is compatible with the data in a standard .BMP file (except that the rows may sometimes be in reverse order); a .BMP file simply has a header before this data. A .BMP file normally contains a BITMAPFILEHEADER structure, then a BITMAPINFOHEADER, and then the buffer in question. If you write these one after another, each initialized appropriately, you get a valid picture file. This and other questions here show how to create a .BMP file from bitmap bits.
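For illustration, a minimal sketch along those lines, assuming you already have a pointer to the locked 32-bit pixel data and know the frame dimensions (the function name and parameters are placeholders, not part of any API):

```cpp
#include <windows.h>
#include <cstdio>

// Sketch: wrap RGB32 pixel bits in a BITMAPFILEHEADER + BITMAPINFOHEADER
// and write them out as a .BMP file.
void SaveRgb32AsBmp(const char* path, const BYTE* pixels,
                    LONG width, LONG height, DWORD pixelDataSize)
{
    BITMAPINFOHEADER bih = {};
    bih.biSize        = sizeof(bih);
    bih.biWidth       = width;
    bih.biHeight      = height;      // positive height means bottom-up rows
    bih.biPlanes      = 1;
    bih.biBitCount    = 32;
    bih.biCompression = BI_RGB;
    bih.biSizeImage   = pixelDataSize;

    BITMAPFILEHEADER bfh = {};
    bfh.bfType    = 0x4D42;          // 'BM'
    bfh.bfOffBits = sizeof(bfh) + sizeof(bih);
    bfh.bfSize    = bfh.bfOffBits + pixelDataSize;

    FILE* f = fopen(path, "wb");
    if (!f) return;
    fwrite(&bfh, sizeof(bfh), 1, f);
    fwrite(&bih, sizeof(bih), 1, f);
    fwrite(pixels, 1, pixelDataSize, f);
    fclose(f);
}
```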
See this GitHub code snippet, which is really close to the requested task and might be a good starting point.
Hi all,
I use the Cypress USB 3.0 controller chip (CX3) to transfer image data from my sensor.
But the CX3's SDK does not support the RAW16 data format. Since the bit widths of RAW16 and YUV2 are both 16 bits, I make the sensor output RAW16 and use the YUV2 format to transfer my raw data to the host. I can indeed receive the data from the CX3 with MATLAB or the e-CAM tool, which can preview the video stream, but of course the colours are distorted, because the receiver treats the data (under the UVC protocol) as YUV data.
So my questions are:
What is the difference between the raw and YUV data ordering? In other words, how does the receiver (camera tool) treat YUV data versus raw data?
How can I parse the raw data from the UVC packets, which contain raw data but are packaged using the YUV format? (A rough sketch of what I mean follows.)
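To illustrate the assumption I'm making, here is a minimal sketch (the buffer name and byte order are hypothetical) of reinterpreting the bytes the host received as "YUV2" as 16-bit raw samples instead:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: the host believes the payload is YUY2 (16 bits per pixel), but the
// sensor actually sent RAW16, so the same bytes can be read back as one
// 16-bit raw sample per pixel. Little-endian packing is assumed here.
std::vector<uint16_t> ReinterpretAsRaw16(const uint8_t* uvcPayload, size_t bytes)
{
    std::vector<uint16_t> raw(bytes / 2);
    for (size_t i = 0; i < raw.size(); ++i)
        raw[i] = static_cast<uint16_t>(uvcPayload[2 * i]) |
                 (static_cast<uint16_t>(uvcPayload[2 * i + 1]) << 8);
    return raw;
}
```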
I'm working on a board with an AT91 SAMA5D3-based device. In order to test the framebuffer, I redirected a raw data file to /dev/fb0. The file was generated with GIMP and exported as raw data in the 'normal RGB' format, so as a serialised byte stream the data in the file is RGBRGB... (where each colour is 8 bits).
When copied to the framebuffer, the reds and blues were swapped, as the LCD controller operates in little-endian format when configured for 24 bpp mode.
This got me wondering: at what point is the endianness of the framebuffer taken into account? I'm planning on using DirectFB and had a look at some of the docs but didn't see anything directly alluding to the endianness of the pixel data.
I'm not familiar with how the kernel framebuffer works and so am missing a few pieces of the puzzle.
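One way to see how the kernel describes the pixel layout, rather than guessing at the byte order, is to query the variable screen info for the device; a minimal sketch, assuming /dev/fb0 is accessible:

```cpp
#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

// Sketch: print the framebuffer's per-channel bit offsets. The red/green/blue
// offsets describe where each component sits inside a pixel, which is what
// determines whether an RGBRGB stream needs its bytes swapped.
int main()
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo vinfo;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0) {
        perror("FBIOGET_VSCREENINFO");
        close(fd);
        return 1;
    }

    printf("%ux%u, %u bpp\n", vinfo.xres, vinfo.yres, vinfo.bits_per_pixel);
    printf("red   offset %u, length %u\n", vinfo.red.offset,   vinfo.red.length);
    printf("green offset %u, length %u\n", vinfo.green.offset, vinfo.green.length);
    printf("blue  offset %u, length %u\n", vinfo.blue.offset,  vinfo.blue.length);

    close(fd);
    return 0;
}
```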
I am new to image processing in CUDA.
I am currently learning whatever I can about it.
Can anyone tell me the appropriate format (image file extension) for storing and accessing image files so that CUDA processing is most efficient?
And why do all the sample CUDA programs for image processing use the .ppm file format for images?
And can I convert images in other formats to that format?
And how can I access those files (in CUDA code)?
Most image formats are created for efficient exchange of images, i.e. on media (hard disk), the internet, etc.
For computation, the most useful representation of an image is usually in some raw, uncompressed format.
CUDA doesn't have any intrinsic functions for manipulating an image in one of the interchange formats (e.g. .jpg, .png, .ppm, etc.). You should use some other library to convert an image from one of the interchange formats to a raw uncompressed format, and then you can operate on it directly in host code or in CUDA device code. Since CUDA doesn't recognize any interchange format, there is no single format that is correct or best to use. It will depend on other requirements you may have.
The sample programs that have used the .ppm format have simply done so for convenience. There are plenty of sample codes out there that use other formats such as .jpg or .bmp to store an image used by a CUDA program.
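As an example of going from an interchange format to a raw buffer, here is a minimal sketch of loading a binary PPM (P6) into tightly packed RGB bytes; the file name is just a placeholder, and comment lines permitted by PPM are not handled:

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

// Sketch: read a binary PPM (P6) into a raw RGBRGB... byte buffer that could
// then be copied to the device with cudaMemcpy and processed by kernels.
int main()
{
    FILE* f = fopen("input.ppm", "rb");
    if (!f) { perror("fopen"); return 1; }

    char magic[3] = {};
    int width = 0, height = 0, maxval = 0;
    if (fscanf(f, "%2s %d %d %d", magic, &width, &height, &maxval) != 4 ||
        magic[0] != 'P' || magic[1] != '6') {
        fprintf(stderr, "not a binary PPM\n");
        fclose(f);
        return 1;
    }
    fgetc(f);  // consume the single whitespace byte after maxval

    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 3);
    fread(pixels.data(), 1, pixels.size(), f);
    fclose(f);

    printf("loaded %dx%d image (%zu bytes)\n", width, height, pixels.size());
    return 0;
}
```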
I captured a raw audio data stream together with its WAVEFORMATEXTENSIBLE struct.
The WAVEFORMATEXTENSIBLE is shown in the figure below:
Following the WAV file standard, I tried to write the raw bits into a WAV file.
What I do is (a sketch of these steps follows the list):
write "RIFF".
write a DWORD. (filesize - sizeof("RIFF") - sizeof(DWORD)).
=== WaveFormat Chunk ===
write "WAVEfmt "
write a DWORD. (size of the WAVEFORMATEXTENSIBLE struct)
write the WAVEFORMATEXTENSIBLE struct.
=== Fact Chunk ===
write "fact"
write a DWORD. ( 4 )
write a DWORD. ( num of samples in the stream, which should be sizeof(rawdata)*8/wBitsPerSample ).
=== Data Chunk ===
write "data"
write a DWORD (size of rawdata)
write the raw data.
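Put together, a minimal sketch of those steps (assuming the captured struct is in wfx and the raw bytes are in data; the function name is only illustrative):

```cpp
#include <windows.h>
#include <mmreg.h>
#include <cstdio>
#include <vector>

// Sketch: write the captured bytes as a RIFF/WAVE file with fmt, fact and
// data chunks, following the steps listed above.
void WriteWav(const char* path, const WAVEFORMATEXTENSIBLE& wfx,
              const std::vector<BYTE>& data)
{
    FILE* f = fopen(path, "wb");
    if (!f) return;

    const DWORD fmtSize  = sizeof(WAVEFORMATEXTENSIBLE);
    const DWORD dataSize = static_cast<DWORD>(data.size());
    const DWORD factVal  = dataSize * 8 / wfx.Format.wBitsPerSample;
    const DWORD riffSize = 4                // "WAVE"
                         + 8 + fmtSize      // "fmt " chunk
                         + 8 + 4            // "fact" chunk
                         + 8 + dataSize;    // "data" chunk

    fwrite("RIFF", 1, 4, f); fwrite(&riffSize, 4, 1, f);
    fwrite("WAVE", 1, 4, f);

    fwrite("fmt ", 1, 4, f); fwrite(&fmtSize, 4, 1, f);
    fwrite(&wfx, fmtSize, 1, f);

    const DWORD four = 4;
    fwrite("fact", 1, 4, f); fwrite(&four, 4, 1, f);
    fwrite(&factVal, 4, 1, f);

    fwrite("data", 1, 4, f); fwrite(&dataSize, 4, 1, f);
    fwrite(data.data(), 1, data.size(), f);

    fclose(f);
}
```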
After producing the WAV file with the above steps, I played it with Media Player and got no sound. Playing it with Audacity gives me a distorted sound; I can hear that it is the correct audio, but it is distorted with noise.
The raw data can be found here
The WAV file I generated is here
It is very confusing to me, because when I use the same method to convert IEEE float data to a WAV file, it works just fine.
I figured this out. It seems the GetBuffer/ReleaseBuffer cycle in IAudioRenderClient is producing raw data in the same format as the one passed into the Initialize method of the IAudioClient.
The format returned by GetMixFormat on the IAudioClient is, in my case, different from the format passed into Initialize. I think GetMixFormat returns the format that the device supports.
IAudioClient presumably converts from the initialized format to the mix format. I intercepted the Initialize method, got the format from there, and it works like a charm.
I'm intercepting WASAPI to access the audio data and faced exactly the same issue: the audio file generated from the data sounds like the correct content but is somehow very noisy, although the frame rate, sample width, number of channels, etc. are set properly.
The SubFormat field of WAVEFORMATEXTENSIBLE shows that the data is actually KSDATAFORMAT_SUBTYPE_IEEE_FLOAT, while I originally treated it as integers. According to this page, KSDATAFORMAT_SUBTYPE_IEEE_FLOAT is equivalent to WAVE_FORMAT_IEEE_FLOAT in WAVEFORMATEX. Hence, setting the "audio format" field in the WAV file's fmt chunk (which normally starts at byte offset 20) to WAVE_FORMAT_IEEE_FLOAT (which is 3) solved the problem. Remember to write it in little endian.
(Screenshot: original value of the audio format field)
(Screenshot: after modification)
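For reference, a minimal sketch of applying that patch to an existing file (the file name is just a placeholder, and a canonical header layout with the fmt chunk starting at byte 12 is assumed):

```cpp
#include <cstdint>
#include <cstdio>

// Sketch: overwrite wFormatTag in the fmt chunk with WAVE_FORMAT_IEEE_FLOAT (3).
// In a canonical WAV header the fmt chunk starts at byte 12, so wFormatTag
// sits at byte offset 20. A little-endian machine is assumed.
int main()
{
    FILE* f = fopen("capture.wav", "r+b");
    if (!f) { perror("fopen"); return 1; }

    const uint16_t ieeeFloat = 3;   // WAVE_FORMAT_IEEE_FLOAT
    fseek(f, 20, SEEK_SET);
    fwrite(&ieeeFloat, sizeof(ieeeFloat), 1, f);

    fclose(f);
    return 0;
}
```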
I want to send an image using the HttpSendRequest API.
Basically I want to POST the request with some string parameters, and after those parameters I need to send the raw image data.
So is it okay to create an unsigned char buffer whose length equals the size of the strings plus the image file size, and then memcpy the strings followed by the image data into it?
HttpSendRequest can be used to send image data.
Basically you need to open the image file, read it into a buffer, and pass that buffer to HttpSendRequest.
The image data should be read into an unsigned char buffer.
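A minimal sketch of that approach, assuming the host, path and parameter string are placeholders and that the parameters are already encoded the way the server expects; the body buffer simply concatenates the parameter bytes and the raw image bytes, as suggested in the question:

```cpp
#include <windows.h>
#include <wininet.h>
#include <cstdio>
#include <cstring>
#include <vector>

#pragma comment(lib, "wininet.lib")

int main()
{
    // Read the image file into an unsigned char buffer, prefixed by the
    // string parameters.
    FILE* f = fopen("photo.jpg", "rb");
    if (!f) return 1;
    fseek(f, 0, SEEK_END);
    long imageSize = ftell(f);
    fseek(f, 0, SEEK_SET);

    const char params[] = "name=test&type=jpg&";   // example parameters
    std::vector<unsigned char> body(sizeof(params) - 1 + imageSize);
    memcpy(body.data(), params, sizeof(params) - 1);
    fread(body.data() + sizeof(params) - 1, 1, imageSize, f);
    fclose(f);

    HINTERNET hInet = InternetOpenA("uploader", INTERNET_OPEN_TYPE_PRECONFIG,
                                    NULL, NULL, 0);
    HINTERNET hConn = InternetConnectA(hInet, "example.com",
                                       INTERNET_DEFAULT_HTTP_PORT,
                                       NULL, NULL, INTERNET_SERVICE_HTTP, 0, 0);
    HINTERNET hReq  = HttpOpenRequestA(hConn, "POST", "/upload", NULL, NULL,
                                       NULL, 0, 0);

    const char headers[] = "Content-Type: application/octet-stream\r\n";
    BOOL ok = HttpSendRequestA(hReq, headers, (DWORD)strlen(headers),
                               body.data(), (DWORD)body.size());
    printf("HttpSendRequest %s\n", ok ? "succeeded" : "failed");

    InternetCloseHandle(hReq);
    InternetCloseHandle(hConn);
    InternetCloseHandle(hInet);
    return 0;
}
```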