I'm trying to write a program that uses libav to extract raw pixel data (~BMP) from an arbitrary video. Everything goes well except that sws_scale() fails to convert the AVFrame to RGB24.
I put together a minimal example in which the AVFrame is created and initialized with 4 different methods found on the internet: https://github.com/SlavMFM/libav_bmp_example - all of them fail in different ways. How can I fix it so that sws_scale() does the conversion?
First, don't use avcodec_decode_video2. Use avcodec_send_packet and avcodec_receive_frame instead.
Second, don't call av_frame_get_buffer on the source frame. Just allocate it with av_frame_alloc; avcodec_receive_frame will set up the rest.
Then allocate a destination frame like this:
AVFrame* frame = av_frame_alloc();
frame->format = whatever;
frame->width = w;
frame->height = h;
av_frame_get_buffer(frame, 32);
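Putting those pieces together, the decode-and-convert loop looks roughly like this. This is a sketch only: fmt_ctx, codec_ctx and video_stream_index stand in for whatever you opened in your own setup code, and error handling and cleanup are omitted.
// Destination frame that will hold the RGB24 (~BMP) pixels.
AVFrame *rgb = av_frame_alloc();
rgb->format = AV_PIX_FMT_RGB24;
rgb->width  = codec_ctx->width;
rgb->height = codec_ctx->height;
av_frame_get_buffer(rgb, 32);

struct SwsContext *sws = sws_getContext(
    codec_ctx->width, codec_ctx->height, codec_ctx->pix_fmt,
    rgb->width, rgb->height, AV_PIX_FMT_RGB24,
    SWS_BILINEAR, NULL, NULL, NULL);

AVFrame  *src = av_frame_alloc();   // note: no av_frame_get_buffer() here
AVPacket *pkt = av_packet_alloc();

while (av_read_frame(fmt_ctx, pkt) >= 0) {
    if (pkt->stream_index == video_stream_index &&
        avcodec_send_packet(codec_ctx, pkt) >= 0) {
        while (avcodec_receive_frame(codec_ctx, src) >= 0) {
            // src->data/linesize were filled in by the decoder.
            sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
                      0, src->height, rgb->data, rgb->linesize);
            // rgb->data[0] now holds packed RGB24 rows, rgb->linesize[0] apart.
        }
    }
    av_packet_unref(pkt);
}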
I'm trying to create an NV12 resource as source for a video encoder in DX12. While I intend to eventually populate a resource from GPU, what I'm trying to do now is take an ffmpeg AVFrame I already have (in AV_PIX_FMT_YUV420P format) and create a texture in DXGI_FORMAT_NV12 format using that data.
I understand the NV12 format (https://learn.microsoft.com/en-us/windows/win32/medfound/recommended-8-bit-yuv-formats-for-video-rendering#nv12) has U and V interleaved while the AV_PIX_FMT_YUV420P doesn't.
My main question is what the D3D12_RESOURCE_DESC looks like for an NV12 texture - do I tell it I need more than one array/mip level to make it planar? Or do I just give it a single memory address with both planes laid out as per the NV12 format, and it figures out subresources for me based on the format?
I understand that to read the data I define two SRVs, one for Y mapped to the Red channel and a second for U and V, but it's how I initialise it that's confusing me.
Just create the resource as normal, and then when you query the layout description, it will be planar.
D3D12_RESOURCE_DESC desc = {};
desc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
desc.Format = DXGI_FORMAT_NV12;
desc.MipLevels = 1;
desc.DepthOrArraySize = 1;
desc.Width = 1024;
desc.Height = 720;
desc.SampleDesc.Count = 1;
const CD3DX12_HEAP_PROPERTIES defaultHeapProperties(D3D12_HEAP_TYPE_DEFAULT);
ComPtr<ID3D12Resource> res;
HRESULT hr = device->CreateCommittedResource(
    &defaultHeapProperties,
    D3D12_HEAP_FLAG_NONE,
    &desc,
    D3D12_RESOURCE_STATE_COMMON,
    nullptr,
    IID_PPV_ARGS(res.GetAddressOf()));
if (FAILED(hr))
{
    // error
}

D3D12_FEATURE_DATA_FORMAT_INFO formatInfo = { DXGI_FORMAT_NV12, 0 };
if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_FORMAT_INFO, &formatInfo, sizeof(formatInfo))))
{
    formatInfo = {};
}
D3D12_PLACED_SUBRESOURCE_FOOTPRINT footprint[2] = {};
// numRows and rowBytes must be arrays with one entry per requested subresource (2 here).
UINT numRows[2];
UINT64 rowBytes[2], totalBytes;
device->GetCopyableFootprints(&desc, 0, 2, 0, footprint, numRows, rowBytes, &totalBytes);
The formatInfo.PlaneCount is 2, which is why you have to ask for two subresources.
footprint[0].Format is DXGI_FORMAT_R8_TYPELESS with 1024x720 size. The footprint[0].Offset is likely 0.
footprint[1].Format is DXGI_FORMAT_R8G8_TYPELESS with 512x360 size. The footprint[1].Offset is something other than 0.
In Direct3D 12 Video the layouts are very simple to understand. In Direct3D 11 Video, it was all implicitly defined so it was a bit of a mess. That said, DDS files were defined as non-planar data, so you may want to examine how these are handled in DirectXTex.
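With those footprints in hand, uploading the AV_PIX_FMT_YUV420P data comes down to copying the Y plane as-is and interleaving U and V by hand. The following is a rough sketch, not a drop-in implementation: upload (a D3D12_HEAP_TYPE_UPLOAD buffer of at least totalBytes), cmdList, avFrame, and a prior transition of res to D3D12_RESOURCE_STATE_COPY_DEST are all assumptions of mine.
uint8_t* base = nullptr;
upload->Map(0, nullptr, reinterpret_cast<void**>(&base));

// Plane 0: Y, full resolution, 1 byte per pixel.
for (UINT y = 0; y < numRows[0]; ++y)
{
    memcpy(base + footprint[0].Offset + y * footprint[0].Footprint.RowPitch,
           avFrame->data[0] + y * avFrame->linesize[0],
           footprint[0].Footprint.Width);
}

// Plane 1: interleaved UV, half resolution, 2 bytes per UV pair.
for (UINT y = 0; y < numRows[1]; ++y)
{
    uint8_t* uv = base + footprint[1].Offset + y * footprint[1].Footprint.RowPitch;
    const uint8_t* u = avFrame->data[1] + y * avFrame->linesize[1];
    const uint8_t* v = avFrame->data[2] + y * avFrame->linesize[2];
    for (UINT x = 0; x < footprint[1].Footprint.Width; ++x)
    {
        uv[2 * x]     = u[x];
        uv[2 * x + 1] = v[x];
    }
}
upload->Unmap(0, nullptr);

// One CopyTextureRegion per plane: subresource 0 is Y, subresource 1 is UV.
for (UINT plane = 0; plane < 2; ++plane)
{
    CD3DX12_TEXTURE_COPY_LOCATION dst(res.Get(), plane);
    CD3DX12_TEXTURE_COPY_LOCATION src(upload.Get(), footprint[plane]);
    cmdList->CopyTextureRegion(&dst, 0, 0, 0, &src, nullptr);
}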
What I want to do is produce a video composed of a single image, repeated for many frames.
I have tried the below code but it is producing a video file of size 0 bytes.
IplImage *image = cvLoadImage("images/img1.jpg", 1);
CvVideoWriter* writer = cvCreateVideoWriter("Video from Images.flv",
                                            CV_FOURCC('D','I','V','X'), fps, size);
for (int counter = 0; counter < 300; counter++)
{
    /* The below statement writes the frame one by one to the video... */
    cvWriteFrame(writer, image);
}
You need to call cvReleaseVideoWriter(CvVideoWriter** writer) at the end.
Had you used the C++ API, the destructor would have taken care of this for you.
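For completeness, a minimal sketch with the release call added. The fps value is a placeholder of mine, and the codec/container combination must be one your OpenCV build actually supports.
IplImage *image = cvLoadImage("images/img1.jpg", 1);
CvSize size = cvSize(image->width, image->height);
double fps = 25.0; // placeholder frame rate

CvVideoWriter *writer = cvCreateVideoWriter("Video from Images.flv",
                                            CV_FOURCC('D','I','V','X'), fps, size);
for (int counter = 0; counter < 300; counter++)
    cvWriteFrame(writer, image);

// Without this the container is never finalized and the file stays at 0 bytes.
cvReleaseVideoWriter(&writer);
cvReleaseImage(&image);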
I use packet duration to translate from frame index to pts and back, and I'd like to be sure that this is a reliable method of doing so.
Alternatively, is there a better way to translate pts to a frame index and vice versa?
A snippet showing my usage:
bool seekFrame(int64_t frame)
{
    if (frame > container.frameCount)
        frame = container.frameCount;

    // Seek to a frame behind the desired frame because nextFrame() will also increment the frame index
    int64_t seek = pts_cache[frame - 1]; // pts_cache is an array of all frame pts values

    // get the nearest prior keyframe
    int preceedingKeyframe = av_index_search_timestamp(container.video_st, seek, AVSEEK_FLAG_BACKWARD);

    // here's where I'm worried that packetDuration isn't a reliable method of translating frame index to
    // pts value
    int64_t nearestKeyframePts = preceedingKeyframe * container.packetDuration;

    avcodec_flush_buffers(container.pCodecCtx);
    int ret = av_seek_frame(container.pFormatCtx, container.videoStreamIndex, nearestKeyframePts, AVSEEK_FLAG_ANY);
    if (ret < 0) return false;

    container.lastPts = nearestKeyframePts;

    AVFrame *pFrame = NULL;
    while (nextFrame(pFrame, NULL) && container.lastPts < seek)
    {
        ;
    }
    container.currentFrame = frame - 1;
    av_free(pFrame);
    return true;
}
No, not guaranteed. It may work with some codec/container combinations where the frame rate is static; avi, raw h264 (Annex B) and yuv4mpeg come to mind. But other containers like flv, mp4 and ts have a PTS/DTS (or CTS) for EVERY frame. The source could be variable frame rate, or frames could have been dropped at some point during processing due to bandwidth. Also, some codecs will remove duplicate frames.
So unless you created the file yourself, do not trust it. There is no guaranteed way to look at a frame and know its 'index' except to start at the beginning and count.
Your method MAY be good enough for most files, however.
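If you need the mapping to be exact, the safest approach is the one described above: scan the stream once, count frames, and record each frame's PTS. A rough sketch with the send/receive API follows; fmt_ctx, codec_ctx and video_stream_index are assumed to be set up already, and flushing the decoder at EOF is omitted for brevity.
// One-time scan: record the PTS of every decoded frame in output order.
// Afterwards, index -> pts is pts_table[index] and pts -> index is a binary search.
std::vector<int64_t> pts_table;

AVPacket *pkt = av_packet_alloc();
AVFrame  *frm = av_frame_alloc();
while (av_read_frame(fmt_ctx, pkt) >= 0) {
    if (pkt->stream_index == video_stream_index &&
        avcodec_send_packet(codec_ctx, pkt) >= 0) {
        while (avcodec_receive_frame(codec_ctx, frm) >= 0)
            pts_table.push_back(frm->best_effort_timestamp);
    }
    av_packet_unref(pkt);
}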
Imagine I have H.264 Annex B frames coming in from a real-time conversation. What is the best way to encapsulate them in an MPEG-2 transport stream while maintaining the timing information for subsequent playback?
I am using the libavcodec and libavformat libraries. When I obtain a pointer to an object (*pcc) of type AVCodecContext, I set the following:
pcc->codec_id = CODEC_ID_H264;
pcc->bit_rate = br;
pcc->width = 640;
pcc->height = 480;
pcc->time_base.num = 1;
pcc->time_base.den = fps;
When I receive NAL units, I create an AVPacket and call av_interleaved_write_frame():
AVPacket pkt;
av_init_packet( &pkt );
pkt.flags |= AV_PKT_FLAG_KEY;
pkt.stream_index = pst->index;
pkt.data = (uint8_t*)p_NALunit;
pkt.size = len;
pkt.dts = AV_NOPTS_VALUE;
pkt.pts = AV_NOPTS_VALUE;
av_interleaved_write_frame( fc, &pkt );
I basically have two questions:
1) For a variable frame rate, is there a way to not specify the following
pcc->time_base.num = 1;
pcc->time_base.den = fps;
and replace it with something to indicate variable framerate?
2) While submitting packets, what "timestamps" should I assign to
pkt.dts and pkt.pts?
Right now, when I play the output using ffplay it is playing at constant framerate (fps) which I use in the above code.
I also would love to know how to accommodate varying spatial resolution. In the stream that I receive, each keyframe is preceded by SPS and PPS. I know whenever the spatial resolution changes.
Is there a way to not have to specify
pcc->width = 640;
pcc->height = 480;
upfront? In other words, indicate that the spatial resolution can change mid-stream.
Thanks a lot,
Eddie
DTS and PTS are measured in a 90 kHz clock. See ISO 13818 part 1, section 2.4.3.6, way down below the syntax table.
As for the variable frame rate, your framework may or may not have a way to generate this (vui_parameters.fixed_frame_rate_flag=0). Whether the playback software handles it is an ENTIRELY different question. Most players assume a fixed frame rate regardless of PTS or DTS. mplayer can't even compute the frame rate correctly for a fixed-rate transport stream generated by ffmpeg.
I think if you're going to change the resolution you need to end the stream (nal_unit_type 10 or 11) and start a new sequence. It can be in the same transport stream (assuming your client's not too simple).
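To make the timestamp part concrete, here is a sketch of what the packet side can look like with libavformat. With a variable frame rate you simply give each packet its real capture time; av_rescale_q handles the 90 kHz units, since the MPEG-TS muxer sets the stream time base (1/90000) at avformat_write_header. capture_time_ms and is_keyframe are placeholders of mine for your own capture clock and NAL inspection.
// Assumed: 'fc' is the AVFormatContext for the mpegts muxer, 'pst' the video
// AVStream, and capture_time_ms the capture time of this access unit in ms.
AVRational ms = {1, 1000};

AVPacket pkt;
av_init_packet(&pkt);
pkt.stream_index = pst->index;
pkt.data = (uint8_t *)p_NALunit;
pkt.size = len;
if (is_keyframe)                 // flag only IDR/keyframe access units
    pkt.flags |= AV_PKT_FLAG_KEY;

// Real timestamps instead of AV_NOPTS_VALUE, rescaled into the stream time base.
// With no B-frames, DTS can simply equal PTS.
pkt.pts = av_rescale_q(capture_time_ms, ms, pst->time_base);
pkt.dts = pkt.pts;

av_interleaved_write_frame(fc, &pkt);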
I'm trying to learn to use the different ffmpeg libs with Cocoa, and I'm trying to get frames to display with help of Core Video. It seems I have gotten the CV callbacks to work, and it gets frames which I try to put in a CVImageBufferRef that I later draw with Core Image.
The problem is I'm trying to get PIX_FMT_YUYV422 to work with libswscale, but as soon as I change the pixel format to anything other than PIX_FMT_YUV420P it crashes with EXC_BAD_ACCESS.
As long as I use YUV420P the program runs, although it doesn't display properly. I suspected that the pixel format isn't supported, so I wanted to try PIX_FMT_YUYV422.
I have had it running before and successfully wrote PPM files with PIX_FMT_RGB24. For some reason it just crashes on me now, and I don't see what might be wrong.
I'm a bit in over my head here, but that is how I prefer to learn. :)
Here's how I allocate the AVFrames:
inFrame = avcodec_alloc_frame();
outFrame = avcodec_alloc_frame();
int frameBytes = avpicture_get_size(PIX_FMT_YUYV422, cdcCtx->width, cdcCtx->height);
uint8_t *frameBuffer = malloc(frameBytes);
avpicture_fill((AVPicture *)outFrame, frameBuffer, PIX_FMT_YUYV422, cdcCtx->width, cdcCtx->height);
Then I try to run it through swscale like so:
static struct SwsContext *convertContext;
if (convertContext == NULL) {
    int w = cdcCtx->width;
    int h = cdcCtx->height;
    convertContext = sws_getContext(w, h, cdcCtx->pix_fmt, outWidth, outHeight, PIX_FMT_YUYV422, SWS_BICUBIC, NULL, NULL, NULL);
    if (convertContext == NULL) {
        NSLog(@"Cannot initialize the conversion context!");
        return NO;
    }
}
sws_scale(convertContext, inFrame->data, inFrame->linesize, 0, outHeight, outFrame->data, outFrame->linesize);
And finally I try to write it to a pixel buffer for use with Core Image:
int ret = CVPixelBufferCreateWithBytes(0, outWidth, outHeight, kYUVSPixelFormat, outFrame->data[0], outFrame->linesize[0], 0, 0, 0, &currentFrame);
With 420P it runs, but it doesn't match up with the kYUVSPixelFormat of the pixel buffer, and as I understand it, that format doesn't accept YUV420.
I would really appreciate any help, no matter how small, as it might help me struggle on. :)
This certainly isn't a complete code sample, since you never decode anything into the input frame. If you were to do that, it looks correct.
You also don't need to fill the output picture, or even allocate an AVFrame for it, really.
YUV420P is a planar format. Therefore, AVFrame.data[0] is not the whole story. I see a mistake in
int ret = CVPixelBufferCreateWithBytes(0, outWidth, outHeight, kYUVSPixelFormat, outFrame->data[0], outFrame->linesize[0], 0, 0, 0, &currentFrame);
For planar formats, you will have to read the plane pointers from AVFrame.data[0] up to AVFrame.data[3]; for YUV420P specifically, data[0], data[1] and data[2] hold the Y, U and V planes.
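For illustration, this is roughly how the three planes of a YUV420P frame are laid out and walked. A sketch only; the half-size chroma planes follow from 4:2:0 subsampling.
// YUV420P: data[0] = Y (full resolution), data[1] = U and data[2] = V (each
// half resolution in both dimensions). Every plane has its own linesize, which
// can be larger than the visible width because of padding.
for (int plane = 0; plane < 3; plane++) {
    int w = (plane == 0) ? frame->width  : frame->width  / 2;
    int h = (plane == 0) ? frame->height : frame->height / 2;
    for (int y = 0; y < h; y++) {
        uint8_t *row = frame->data[plane] + y * frame->linesize[plane];
        // 'row' now points at w valid bytes of this plane for line y.
    }
}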