I want to send an image using the HttpSendRequest API.
Basically I want to POST a request with some string parameters, and after those parameters I need to send the raw image data.
So is it okay to create an unsigned char buffer whose length equals the size of the strings plus the image file size, and then memcpy the strings followed by memcpy of the image data?
HttpSendRequest can be used to send image data.
Basically you open the image file, read it into a buffer, and pass that buffer to HttpSendRequest.
The image data should be read into an unsigned char buffer.
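A minimal sketch of that approach, assuming a hypothetical server "example.com", path "/upload", and Content-Type header (all placeholders), and sending the parameters followed by the raw image bytes in a single body as the question describes:

```cpp
#include <windows.h>
#include <wininet.h>
#include <fstream>
#include <string>
#include <vector>
#pragma comment(lib, "wininet.lib")

// Minimal sketch: POST string parameters followed by raw image bytes in one body.
// "example.com", "/upload" and the Content-Type header are placeholders.
bool PostParamsAndImage(const std::string& params, const char* imagePath)
{
    // Read the image file into an unsigned char buffer and append it after the params.
    std::ifstream file(imagePath, std::ios::binary);
    if (!file) return false;
    std::vector<unsigned char> body(params.begin(), params.end());
    body.insert(body.end(),
                std::istreambuf_iterator<char>(file),
                std::istreambuf_iterator<char>());

    HINTERNET hInet = InternetOpenA("uploader", INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
    HINTERNET hConn = hInet ? InternetConnectA(hInet, "example.com", INTERNET_DEFAULT_HTTP_PORT,
                                               NULL, NULL, INTERNET_SERVICE_HTTP, 0, 0) : NULL;
    HINTERNET hReq  = hConn ? HttpOpenRequestA(hConn, "POST", "/upload",
                                               NULL, NULL, NULL, 0, 0) : NULL;

    BOOL ok = FALSE;
    if (hReq) {
        const char headers[] = "Content-Type: application/octet-stream\r\n";
        ok = HttpSendRequestA(hReq, headers, (DWORD)-1L,
                              body.data(), (DWORD)body.size());
    }

    if (hReq)  InternetCloseHandle(hReq);
    if (hConn) InternetCloseHandle(hConn);
    if (hInet) InternetCloseHandle(hInet);
    return ok == TRUE;
}
```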
I use the Cypress USB 3.0 controller chip (CX3) to transfer the image data from my sensor.
The CX3's SDK does not support the RAW16 data format. Since the bit widths of RAW16 and YUV2 are both 16 bits, I make the sensor output RAW16 and use the YUV2 format to transfer my raw data to the host. I do receive the data from the CX3 in MATLAB or in the e-CAM tool, which can preview the video stream, but of course the colors are distorted, because the receiver treats the data (under the UVC protocol) as YUV data.
So my questions are:
What is the difference between the raw and YUV data ordering? Put another way, how does the receiver (camera tool) treat YUV data versus raw data?
How can I parse the raw data out of the UVC packets, which contain the raw data but are packaged as if they were YUV?
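To make the premise of the question concrete: since RAW16 and a 16-bit YUV format occupy the same number of bits per pixel, the received frame buffer should carry the raw samples unchanged, and only the interpretation differs. A minimal sketch of reinterpreting such a buffer as 16-bit raw values (the little-endian byte order is an assumption that depends on the sensor):

```cpp
#include <cstdint>
#include <vector>

// Sketch: the UVC descriptor advertises a 16-bit YUV format, but the bytes are
// actually RAW16 samples from the sensor. Reinterpret each pixel's two bytes
// as one 16-bit raw value. Byte order is an assumption here.
std::vector<uint16_t> ReinterpretAsRaw16(const uint8_t* frame, size_t width, size_t height)
{
    std::vector<uint16_t> raw(width * height);
    for (size_t i = 0; i < raw.size(); ++i) {
        // Assuming little-endian packing: low byte first, then high byte.
        raw[i] = static_cast<uint16_t>(frame[2 * i]) |
                 (static_cast<uint16_t>(frame[2 * i + 1]) << 8);
    }
    return raw;
}
```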
In trying to understand how to convert Media Foundation RGB32 data into bitmap data that can be loaded into image/bitmap widgets or saved as a bitmap file, I am wondering what the RGB32 data actually is compared to the data a BMP holds.
Is it simply missing the header information a bitmap file has, like width, height, etc.?
What does RGB32 actually mean, compared to BMP data in a bitmap file or memory stream?
You normally have the 32-bit RGB data as an IMFMediaBuffer attached to an IMFSample. This is just the bitmap bits, without format-specific metadata. You can access this data by obtaining a media buffer pointer, for example by calling IMFSample::ConvertToContiguousBuffer and then IMFMediaBuffer::Lock to get a pointer to the pixel data.
The obtained buffer is compatible with the data in a standard .BMP file (except that the rows may sometimes be in reverse, bottom-up order); a .BMP file simply has a header before this data. A .BMP file normally consists of a BITMAPFILEHEADER structure, then a BITMAPINFOHEADER, and then the buffer in question. If you write them out one after another, each initialized appropriately, you get a valid picture file. This and other questions here show how to create a .BMP file from bitmap bits.
See this GitHub code snippet, which is really close to the requested task and might be a good starting point.
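As an illustration of the layout described above, a minimal sketch that locks the sample's buffer and writes it out behind the two headers. The width and height are assumptions you would take from the stream's media type:

```cpp
#include <windows.h>
#include <mfidl.h>
#include <cstdio>

// Minimal sketch: write the RGB32 bits of an IMFSample to a .BMP file.
// width and height are assumed to come from the stream's media type.
HRESULT SaveSampleAsBmp(IMFSample* sample, LONG width, LONG height, const wchar_t* path)
{
    IMFMediaBuffer* buffer = nullptr;
    HRESULT hr = sample->ConvertToContiguousBuffer(&buffer);
    if (FAILED(hr)) return hr;

    BYTE* bits = nullptr;
    DWORD length = 0;
    hr = buffer->Lock(&bits, nullptr, &length);
    if (FAILED(hr)) { buffer->Release(); return hr; }

    BITMAPINFOHEADER bih = {};
    bih.biSize = sizeof(bih);
    bih.biWidth = width;
    bih.biHeight = height;        // positive height = bottom-up rows, as in a typical .BMP;
                                  // if the buffer's rows are top-down, flip them or negate this
    bih.biPlanes = 1;
    bih.biBitCount = 32;
    bih.biCompression = BI_RGB;
    bih.biSizeImage = length;

    BITMAPFILEHEADER bfh = {};
    bfh.bfType = 0x4D42;          // 'BM'
    bfh.bfOffBits = sizeof(bfh) + sizeof(bih);
    bfh.bfSize = bfh.bfOffBits + length;

    FILE* f = _wfopen(path, L"wb");
    if (f) {
        fwrite(&bfh, sizeof(bfh), 1, f);
        fwrite(&bih, sizeof(bih), 1, f);
        fwrite(bits, 1, length, f);   // the pixel data from the media buffer
        fclose(f);
    }

    buffer->Unlock();
    buffer->Release();
    return f ? S_OK : E_FAIL;
}
```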
I am trying to apply operations like rgb2gray(img) to a live video acquired with vid = videoinput(), i.e. calling rgb2gray(vid).
This gives a type mismatch and I am stuck. Should I convert vid into an image format and store it in a matrix, or is there some other way to apply rgb2gray? I don't want to use vid.ReturnedColorSpace = 'grayscale', because I need to convert the video into images or matrices myself and run the rgb2gray operation.
In your code vid is a videoinput object that lets you capture frames from a camera. You cannot pass it to rgb2gray. What you can do is grab the frames one at a time in a loop, and pass each one to rgb2gray individually.
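A minimal MATLAB sketch of that loop, using getsnapshot; the adaptor name, device ID, and frame count below are assumptions to replace with your own:

```matlab
% Minimal sketch: grab frames one at a time and convert each to grayscale.
% 'winvideo' and device ID 1 are assumptions; use the adaptor/device that matches your camera.
vid = videoinput('winvideo', 1);
for k = 1:100
    frame = getsnapshot(vid);   % one RGB frame as an HxWx3 matrix
    gray  = rgb2gray(frame);    % rgb2gray now gets an image, not the videoinput object
    imshow(gray);
    drawnow;
end
delete(vid);
```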
I captured a raw audio data stream together with its WAVEFORMATEXTENSIBLE struct.
The WAVEFORMATEXTENSIBLE is shown in the figure below:
Following the WAV file standard, I tried to write the raw bits into a WAV file.
What I do is:
write "RIFF".
write a DWORD. (filesize - sizeof("RIFF") - sizeof(DWORD)).
=== WaveFormat Chunk ===
write "WAVEfmt "
write a DWORD. (size of the WAVEFORMATEXTENSIBLE struct)
write the WAVEFORMATEXTENSIBLE struct.
=== Fact Chunk ===
write "fact"
write a DWORD. ( 4 )
write a DWORD. ( num of samples in the stream, which should be sizeof(rawdata)*8/wBitsPerSample ).
=== Data Chunk ===
write "data"
write a DWORD (size of rawdata)
write the raw data.
After producing the WAV file with the steps above, I played it with Media Player and there is no sound. Playing it with Audacity gives a distorted sound; I can hear that it is the audio I want, but it is distorted with noise.
The raw data can be found here.
The WAV file I generate is here.
This is very confusing to me, because when I use the same method to convert IEEE-float data to a WAV file, it works just fine.
I figured this out. It seems the GetBuffer/ReleaseBuffer cycle in IAudioRenderClient delivers raw data in the same format as the one passed into IAudioClient's Initialize method.
In my case, the format returned by IAudioClient::GetMixFormat is different from the format passed into Initialize; I believe GetMixFormat returns the format that the device supports.
IAudioClient apparently converts from the initialized format to the mix format itself. I intercepted the Initialize method, took the format from there, and it works like a charm.
I'm intercepting WASAPI to access the audio data and faced the exact same issue: the audio file generated from the data sounds like the correct content but is very noisy, even though the frame rate, sample width, number of channels, etc. are set properly.
The SubFormat field of WAVEFORMATEXTENSIBLE shows that the data is actually KSDATAFORMAT_SUBTYPE_IEEE_FLOAT, while I had originally treated it as integers. According to this page, KSDATAFORMAT_SUBTYPE_IEEE_FLOAT is equivalent to WAVE_FORMAT_IEEE_FLOAT in WAVEFORMATEX. Hence, setting the "audio format" field in the WAV file's fmt chunk (normally starting at byte offset 20) to WAVE_FORMAT_IEEE_FLOAT (which is 3) solved the problem. Remember to write it in little-endian order.
(Figures: the original value of the audio format field, and the value after modification.)
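A minimal sketch of a WAV writer that applies this fix, setting the fmt chunk's format tag to WAVE_FORMAT_IEEE_FLOAT (3). It follows the chunk layout listed in the question but uses a plain 16-byte fmt chunk and omits the optional fact chunk for brevity; the sample rate and channel count are placeholders to replace with the values from the intercepted format:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Minimal sketch: wrap captured IEEE-float samples in a WAV file.
// Sample rate and channel count are placeholders; use the values from the
// format you intercepted in IAudioClient::Initialize / GetMixFormat.
void WriteFloatWav(const char* path, const std::vector<float>& samples,
                   uint32_t sampleRate = 48000, uint16_t channels = 2)
{
    const uint16_t bitsPerSample = 32;
    const uint16_t blockAlign = channels * bitsPerSample / 8;
    const uint32_t byteRate = sampleRate * blockAlign;
    const uint32_t dataSize = static_cast<uint32_t>(samples.size() * sizeof(float));

    FILE* f = fopen(path, "wb");
    if (!f) return;

    auto w16 = [&](uint16_t v) { fwrite(&v, 2, 1, f); };   // native order is little-endian on x86
    auto w32 = [&](uint32_t v) { fwrite(&v, 4, 1, f); };

    fwrite("RIFF", 1, 4, f);
    w32(36 + dataSize);                 // file size minus "RIFF" and this DWORD
    fwrite("WAVEfmt ", 1, 8, f);
    w32(16);                            // size of the fmt chunk payload
    w16(3);                             // wFormatTag = WAVE_FORMAT_IEEE_FLOAT
    w16(channels);
    w32(sampleRate);
    w32(byteRate);
    w16(blockAlign);
    w16(bitsPerSample);
    fwrite("data", 1, 4, f);
    w32(dataSize);
    fwrite(samples.data(), 1, dataSize, f);
    fclose(f);
}
```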
I'm loading a texture from .png using D3DXCreateTextureFromFile(). How can my program know if the image file contains an alpha channel?
This isn't too hard to do by simply examining the file.
A PNG file consists of:
A file header
One or more 'chunks'
The file header is always 8 bytes and should be skipped over.
Each chunk begins with 4 bytes indicating the length of its data, followed by 4 bytes indicating its type. The first chunk should always have the type IHDR and a 13-byte data field; it contains the basic information about the image.
The tenth byte of that IHDR data (after the 4-byte width, 4-byte height, and 1-byte bit depth) is the colour type, which is exactly the information you're looking for: it will be equal to 6 if the PNG file is RGBA.
More information can be found here.
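A minimal sketch of that check, reading only the PNG signature and the IHDR chunk:

```cpp
#include <cstdint>
#include <cstdio>

// Minimal sketch: read the PNG signature and IHDR chunk to get the colour type.
// Returns true if the colour type is 6 (truecolour with alpha, i.e. RGBA).
bool PngHasAlphaChannel(const char* path)
{
    FILE* f = fopen(path, "rb");
    if (!f) return false;

    uint8_t signature[8];                  // 8-byte PNG file header, skipped over
    uint8_t lengthAndType[8];              // 4-byte chunk length + 4-byte chunk type ("IHDR")
    uint8_t ihdr[13];                      // IHDR data: width(4), height(4), bit depth(1),
                                           //            colour type(1), and 3 more bytes
    bool ok = fread(signature, 1, 8, f) == 8 &&
              fread(lengthAndType, 1, 8, f) == 8 &&
              fread(ihdr, 1, 13, f) == 13;
    fclose(f);

    if (!ok) return false;
    uint8_t colourType = ihdr[9];          // the tenth byte of the IHDR data
    return colourType == 6;                // 6 = RGBA (greyscale+alpha or palette PNGs
                                           //   can also carry alpha; not handled here)
}
```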
Call IDirect3DTexture9::GetSurfaceLevel and then call IDirect3DSurface9::GetDesc. The D3DSURFACE_DESC.Format member will tell you.
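A minimal sketch of that approach; which formats you count as "has alpha" is an assumption to adjust for your case:

```cpp
#include <d3d9.h>

// Minimal sketch: inspect the format of a texture's top mip level.
// The list of alpha-bearing formats below is an assumption; extend it as needed.
bool TextureHasAlpha(IDirect3DTexture9* texture)
{
    IDirect3DSurface9* surface = nullptr;
    if (FAILED(texture->GetSurfaceLevel(0, &surface)))
        return false;

    D3DSURFACE_DESC desc = {};
    HRESULT hr = surface->GetDesc(&desc);
    surface->Release();
    if (FAILED(hr))
        return false;

    // Which format D3DX picked depends on the file and device; adjust the list as needed.
    return desc.Format == D3DFMT_A8R8G8B8 ||
           desc.Format == D3DFMT_A8B8G8R8 ||
           desc.Format == D3DFMT_A1R5G5B5;
}
```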