ffmpeg libx264 AVCodecContext settings - windows

I am using a recent windows (Jan 2011) ffmpeg build and trying to record video in H264. It is recording fine in MPEG4 using the following settings:
c->codec_id = CODEC_ID_MPEG4;
c->codec_type = AVMEDIA_TYPE_VIDEO;
c->width = VIDEO_WIDTH;
c->height = VIDEO_HEIGHT;
c->bit_rate = c->width * c->height * 4;
c->time_base.den = FRAME_RATE;
c->time_base.num = 1;
c->gop_size = 12;
c->pix_fmt = PIX_FMT_YUV420P;
Simply changing the codec ID to CODEC_ID_H264 causes avcodec_open() to fail (-1). I found a list of possible settings in "How to encode h.264 with libavcodec/x264?". I have tried these; without setting pix_fmt, avcodec_open() still fails, but if I additionally set c->pix_fmt = PIX_FMT_YUV420P; then I get a divide-by-zero exception.
I then came across a few posts on here that say I should set nothing (with the exception of codec_id, codec_type, width, height and perhaps bit_rate and pix_fmt), as the library now chooses the best settings itself. I have tried various combinations, but avcodec_open() still fails.
Does anyone have some advice on what to do or some settings that are current?
Thanks.
Here is one set of H264 settings which gives the issue I describe:
static AVStream* AddVideoStream(AVFormatContext *pOutputFmtCtx,
int frameWidth, int frameHeight, int fps)
{
AVCodecContext* ctx;
AVStream* stream;
stream = av_new_stream(pOutputFmtCtx, 0);
if (!stream)
{
return NULL;
}
ctx = stream->codec;
ctx->codec_id = pOutputFmtCtx->oformat->video_codec; //CODEC_ID_H264
ctx->codec_type = AVMEDIA_TYPE_VIDEO;
ctx->width = frameWidth; //704
ctx->height = frameHeight; //576
ctx->bit_rate = frameWidth * frameHeight * 4;
ctx->coder_type = 1; // coder = 1
ctx->flags|=CODEC_FLAG_LOOP_FILTER; // flags=+loop
ctx->me_cmp|= 1; // cmp=+chroma, where CHROMA = 1
ctx->partitions|=X264_PART_I8X8+X264_PART_I4X4+X264_PART_P8X8+X264_PART_B8X8; // partitions=+parti8x8+parti4x4+partp8x8+partb8x8
ctx->me_method=ME_HEX; // me_method=hex
ctx->me_subpel_quality = 7; // subq=7
ctx->me_range = 16; // me_range=16
ctx->gop_size = 250; // g=250
ctx->keyint_min = 25; // keyint_min=25
ctx->scenechange_threshold = 40; // sc_threshold=40
ctx->i_quant_factor = 0.71; // i_qfactor=0.71
ctx->b_frame_strategy = 1; // b_strategy=1
ctx->qcompress = 0.6; // qcomp=0.6
ctx->qmin = 10; // qmin=10
ctx->qmax = 51; // qmax=51
ctx->max_qdiff = 4; // qdiff=4
ctx->max_b_frames = 3; // bf=3
ctx->refs = 3; // refs=3
ctx->directpred = 1; // directpred=1
ctx->trellis = 1; // trellis=1
ctx->flags2|=CODEC_FLAG2_BPYRAMID+CODEC_FLAG2_MIXED_REFS+CODEC_FLAG2_WPRED+CODEC_FLAG2_8X8DCT+CODEC_FLAG2_FASTPSKIP; // flags2=+bpyramid+mixed_refs+wpred+dct8x8+fastpskip
ctx->weighted_p_pred = 2; // wpredp=2
// libx264-main.ffpreset preset
ctx->flags2|=CODEC_FLAG2_8X8DCT;
ctx->flags2^=CODEC_FLAG2_8X8DCT; // flags2=-dct8x8
// if set this get divide by 0 error on avcodec_open()
// if don't set it get -1 error on avcodec_open()
//ctx->pix_fmt = PIX_FMT_YUV420P;
return stream;
}

In my experience you should give FFmpeg as little information as possible when initialising your codec. This may seem counter-intuitive, but it means that FFmpeg will use its default settings, which are more likely to work than your own guesses. This is what I would include:
AVStream *stream;
m_video_codec = avcodec_find_encoder(AV_CODEC_ID_H264);
stream = avformat_new_stream(_outputCodec, m_video_codec);
ctx = stream->codec;
ctx->codec_id = m_fmt->video_codec;
ctx->bit_rate = m_AVIMOV_BPS; //Bits Per Second
ctx->width = m_AVIMOV_WIDTH; //Note Resolution must be a multiple of 2!!
ctx->height = m_AVIMOV_HEIGHT; //Note Resolution must be a multiple of 2!!
ctx->time_base.den = m_AVIMOV_FPS; //Frames per second
ctx->time_base.num = 1;
ctx->gop_size = m_AVIMOV_GOB; // Intra frames per x P frames
ctx->pix_fmt = AV_PIX_FMT_YUV420P;//Do not change this, H264 needs YUV format not RGB
As in previous answers, here is a working example of the FFmpeg library encoding RGB frames to an H264 video:
http://www.imc-store.com.au/Articles.asp?ID=276
An extra thought on your code though:
Have you called the registration functions, like below?
avcodec_register_all();
av_register_all();
If you don't call these two functions near the start of your code, your subsequent calls to FFmpeg will fail and you'll most likely seg-fault.
Have a look at the linked example; I tested it on VC++2010 and it works perfectly.
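If you do still want the preset-style tuning from the question, recent libavcodec builds let you pass libx264's own preset/profile strings as private options when opening the codec, instead of hand-setting every field from an ffpreset file. A minimal sketch, not a drop-in solution: it assumes a context whose width, height, time_base and pix_fmt are already filled in as above, and the "medium"/"main" values are just examples.

```c
#include <libavcodec/avcodec.h>
#include <libavutil/dict.h>

/* Sketch only: open an H.264 encoder via libx264's preset mechanism. */
static int open_h264(AVCodecContext *ctx)
{
    AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (!codec)
        return -1;

    AVDictionary *opts = NULL;
    av_dict_set(&opts, "preset", "medium", 0); /* x264 speed/quality preset */
    av_dict_set(&opts, "profile", "main", 0);  /* instead of coder/flags2/... */

    int ret = avcodec_open2(ctx, codec, &opts);
    av_dict_free(&opts); /* options not consumed by the codec are dropped */
    return ret;
}
```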

Related

Encoding of raw frames (D3D11Texture2D) to an rtsp stream using libav*

I have managed to create an rtsp stream using libav* and a DirectX texture (which I am obtaining from the GDI API using the BitBlt method). Here's my approach for creating a live rtsp stream:
Create output context and stream (skipping the checks here)
avformat_alloc_output_context2(&ofmt_ctx, NULL, "rtsp", rtsp_url); //RTSP
vid_codec = avcodec_find_encoder(ofmt_ctx->oformat->video_codec);
vid_stream = avformat_new_stream(ofmt_ctx,vid_codec);
vid_codec_ctx = avcodec_alloc_context3(vid_codec);
Set codec params
codec_ctx->codec_tag = 0;
codec_ctx->codec_id = ofmt_ctx->oformat->video_codec;
//codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
codec_ctx->width = width; codec_ctx->height = height;
codec_ctx->gop_size = 12;
//codec_ctx->gop_size = 40;
//codec_ctx->max_b_frames = 3;
codec_ctx->pix_fmt = target_pix_fmt; // AV_PIX_FMT_YUV420P
codec_ctx->framerate = { stream_fps, 1 };
codec_ctx->time_base = { 1, stream_fps};
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
{
codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
Initialize video stream
if (avcodec_parameters_from_context(stream->codecpar, codec_ctx) < 0)
{
Debug::Error("Could not initialize stream codec parameters!");
return false;
}
AVDictionary* codec_options = nullptr;
if (codec->id == AV_CODEC_ID_H264) {
av_dict_set(&codec_options, "profile", "high", 0);
av_dict_set(&codec_options, "preset", "fast", 0);
av_dict_set(&codec_options, "tune", "zerolatency", 0);
}
// open video encoder
int ret = avcodec_open2(codec_ctx, codec, &codec_options);
if (ret<0) {
Debug::Error("Could not open video encoder: ", avcodec_get_name(codec->id), " error ret: ", AVERROR(ret));
return false;
}
stream->codecpar->extradata = codec_ctx->extradata;
stream->codecpar->extradata_size = codec_ctx->extradata_size;
Start streaming
// Create new frame and allocate buffer
AVFrame* AllocateFrameBuffer(AVCodecContext* codec_ctx, double width, double height)
{
AVFrame* frame = av_frame_alloc();
std::vector<uint8_t> framebuf(av_image_get_buffer_size(codec_ctx->pix_fmt, width, height, 1));
av_image_fill_arrays(frame->data, frame->linesize, framebuf.data(), codec_ctx->pix_fmt, width, height, 1);
frame->width = width;
frame->height = height;
frame->format = static_cast<int>(codec_ctx->pix_fmt);
//Debug::Log("framebuf size: ", framebuf.size(), " frame format: ", frame->format);
return frame;
}
void RtspStream(AVFormatContext* ofmt_ctx, AVStream* vid_stream, AVCodecContext* vid_codec_ctx, char* rtsp_url)
{
printf("Output stream info:\n");
av_dump_format(ofmt_ctx, 0, rtsp_url, 1);
const int width = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetTextureWidth();
const int height = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetTextureHeight();
//DirectX BGRA to h264 YUV420p
SwsContext* conversion_ctx = sws_getContext(width, height, src_pix_fmt,
vid_stream->codecpar->width, vid_stream->codecpar->height, target_pix_fmt,
SWS_BICUBIC | SWS_BITEXACT, nullptr, nullptr, nullptr);
if (!conversion_ctx)
{
Debug::Error("Could not initialize sample scaler!");
return;
}
AVFrame* frame = AllocateFrameBuffer(vid_codec_ctx,vid_codec_ctx->width,vid_codec_ctx->height);
if (!frame) {
Debug::Error("Could not allocate video frame\n");
return;
}
if (avformat_write_header(ofmt_ctx, NULL) < 0) {
Debug::Error("Error occurred when writing header");
return;
}
if (av_frame_get_buffer(frame, 0) < 0) {
Debug::Error("Could not allocate the video frame data\n");
return;
}
int frame_cnt = 0;
//av start time in microseconds
int64_t start_time_av = av_gettime();
AVRational time_base = vid_stream->time_base;
AVRational time_base_q = { 1, AV_TIME_BASE };
// frame pixel data info
int data_size = width * height * 4;
uint8_t* data = new uint8_t[data_size];
// AVPacket* pkt = av_packet_alloc();
while (RtspStreaming::IsStreaming())
{
/* make sure the frame data is writable */
if (av_frame_make_writable(frame) < 0)
{
Debug::Error("Can't make frame writable");
break;
}
//get copy/ref of the texture
//uint8_t* data = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetBuffer();
if (!WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetPixels(data, 0, 0, width, height))
{
Debug::Error("Failed to get frame buffer. ID: ", RtspStreaming::WindowId());
std::this_thread::sleep_for (std::chrono::seconds(2));
continue;
}
//printf("got pixels data\n");
// convert BGRA to yuv420 pixel format
int srcStrides[1] = { 4 * width };
if (sws_scale(conversion_ctx, &data, srcStrides, 0, height, frame->data, frame->linesize) < 0)
{
Debug::Error("Unable to scale d3d11 texture to frame. ", frame_cnt);
break;
}
//Debug::Log("frame pts: ", frame->pts, " time_base:", av_rescale_q(1, vid_codec_ctx->time_base, vid_stream->time_base));
frame->pts = frame_cnt++;
//frame_cnt++;
//printf("scale conversion done\n");
//encode to the video stream
int ret = avcodec_send_frame(vid_codec_ctx, frame);
if (ret < 0)
{
Debug::Error("Error sending frame to codec context! ",frame_cnt);
break;
}
AVPacket* pkt = av_packet_alloc();
//av_init_packet(pkt);
ret = avcodec_receive_packet(vid_codec_ctx, pkt);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
{
//av_packet_unref(pkt);
av_packet_free(&pkt);
continue;
}
else if (ret < 0)
{
Debug::Error("Error during receiving packet: ",AVERROR(ret));
//av_packet_unref(pkt);
av_packet_free(&pkt);
break;
}
if (pkt->pts == AV_NOPTS_VALUE)
{
//Write PTS
//Duration between 2 frames (us)
int64_t calc_duration = (double)AV_TIME_BASE / av_q2d(vid_stream->r_frame_rate);
//Parameters
pkt->pts = (double)(frame_cnt * calc_duration) / (double)(av_q2d(time_base) * AV_TIME_BASE);
pkt->dts = pkt->pts;
pkt->duration = (double)calc_duration / (double)(av_q2d(time_base) * AV_TIME_BASE);
}
int64_t pts_time = av_rescale_q(pkt->dts, time_base, time_base_q);
int64_t now_time = av_gettime() - start_time_av;
if (pts_time > now_time)
av_usleep(pts_time - now_time);
//pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
//pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
//pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
//pkt->pos = -1;
//write frame and send
if (av_interleaved_write_frame(ofmt_ctx, pkt)<0)
{
Debug::Error("Error muxing packet, frame number:",frame_cnt);
break;
}
//Debug::Log("RTSP streaming...");
//sstd::this_thread::sleep_for(std::chrono::milliseconds(1000/20));
//av_packet_unref(pkt);
av_packet_free(&pkt);
}
//av_free_packet(pkt);
delete[] data;
/* Write the trailer, if any. The trailer must be written before you
* close the CodecContexts open when you wrote the header; otherwise
* av_write_trailer() may try to use memory that was freed on
* av_codec_close(). */
av_write_trailer(ofmt_ctx);
av_frame_unref(frame);
av_frame_free(&frame);
printf("streaming thread CLOSED!\n");
}
Now, this allows me to connect to my rtsp server and maintain the connection. However, on the rtsp client side I am getting either a gray frame or a single static frame, as shown below:
I would appreciate it if you could help with the following questions:
Firstly, why is the stream not working in spite of the continued connection to the server and updated frames?
Video codec. By default the rtsp format uses the Mpeg4 codec; is it possible to use h264? When I manually set it to AV_CODEC_ID_H264 the program fails at avcodec_open2 with a return value of -22.
Do I need to create and allocate a new "AVFrame" and "AVPacket" for every frame? Or can I just reuse a global variable for this?
Do I need to explicitly add some code for real-time streaming? (Like the "-re" flag in ffmpeg.)
It would be great if you could point out some example code for creating a livestream. I have checked the following resources:
https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/encode_video.c
streaming FLV to RTMP with FFMpeg using H264 codec and C++ API to flv.js
https://medium.com/swlh/streaming-video-with-ffmpeg-and-directx-11-7395fcb372c4
Update
While testing I found that I am able to play the stream using ffplay, while it gets stuck in VLC. Here is a snapshot of the ffplay log.
The basic construction and initialization seem to be okay. Find below responses to your questions.
why the stream is not working in spite of continued connection to the server and updating frames?
If you're getting an error or a broken stream, you might want to check the presentation and decompression timestamps (pts/dts) of your packets.
In your code, I notice that you're taking time_base from the video stream object, which is not guaranteed to be the same as the codec->time_base value and usually varies depending on the active stream.
AVRational time_base = vid_stream->time_base;
AVRational time_base_q = { 1, AV_TIME_BASE };
Video codec. By default rtsp format uses Mpeg4 codec, is it possible to use h264?
I don't see why not... RTSP is just a protocol for carrying your packets over the network. So you should be able to use AV_CODEC_ID_H264 for encoding the stream.
Do I need to create and allocate new "AVFrame" and "AVPacket" for every frame? Or can I just reuse global variable for this?
In libav, during the encoding process a single packet is used per encoded video frame, while a single packet can carry multiple audio frames. I should reference this, but can't seem to find a source at the moment. In any case, the point is that you need to create a new packet every time.
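To illustrate, a sketch of the per-frame send/receive pattern (assuming an opened encoder context and output stream; error handling trimmed, helper name mine). Note that avcodec_receive_packet is drained in a loop, since one frame may yield zero or several packets:

```c
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

/* Sketch only: encode one frame and mux whatever packets come out. */
static int encode_and_mux(AVFormatContext *ofmt_ctx, AVStream *st,
                          AVCodecContext *enc, AVFrame *frame)
{
    int ret = avcodec_send_frame(enc, frame); /* frame == NULL flushes */
    if (ret < 0)
        return ret;

    AVPacket *pkt = av_packet_alloc();
    while ((ret = avcodec_receive_packet(enc, pkt)) >= 0) {
        /* Convert encoder timestamps into the muxer's time base. */
        av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
        pkt->stream_index = st->index;
        ret = av_interleaved_write_frame(ofmt_ctx, pkt); /* takes ownership */
        if (ret < 0)
            break;
    }
    av_packet_free(&pkt);
    /* EAGAIN / EOF only mean "no more packets right now". */
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}
```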
Do I need to explicitly define some code for real-time streaming? (Like in ffmpeg we use "-re" flag).
You don't need to add anything else for real-time streaming. Although you might want to limit the number of frame updates you pass to the encoder, to save some performance.
For me, the difference between ffplay capturing correctly and VLC failing (for UDP packets) was the pkt_size=xxx attribute (ffmpeg -re -i test.mp4 -f mpegts udp://127.0.0.1:23000?pkt_size=1316; in VLC, open media, network tab, udp://@:23000:pkt_size=1316). So only if pkt_size is defined (and equal on both sides) is VLC able to capture.

Replace Bento4 with libav / ffmpeg

We use Bento4 - a really well designed SDK - to demux mp4 files in .mov containers. Decoding is done by our own codec, so only the raw (intraframe) samples are needed. So far this works pretty straightforwardly:
AP4_Track *test_videoTrack = nullptr;
AP4_ByteStream *input = nullptr;
AP4_Result result = AP4_FileByteStream::Create(filename, AP4_FileByteStream::STREAM_MODE_READ, input);
AP4_File m_file (*input, true);
//
// Read movie tracks, and metadata, find the video track
size_t index = 0;
uint32_t m_width = 0, m_height = 0;
auto item = m_file.GetMovie()->GetTracks().FirstItem();
auto track = item->GetData();
if (track->GetType() == AP4_Track::TYPE_VIDEO)
{
m_width = (uint32_t)((double)track->GetWidth() / double(1 << 16));
m_height = (uint32_t)((double)track->GetHeight() / double(1 << 16));
std::string codec("unknown");
auto sd = track->GetSampleDescription(0);
AP4_String c;
if (AP4_SUCCEEDED(sd->GetCodecString(c)))
{
codec = c.GetChars();
}
// Find and instantiate the decoder
AP4_Sample sample;
AP4_DataBuffer sampleData;
track->ReadSample(0, sample, sampleData);
}
For several reasons we would prefer replacing Bento4 with libav/ffmpeg (mainly because we already have it in the project and want to reduce dependencies).
How would we (preferably in pseudo-code) replace the Bento4 tasks done above with libav? Please remember that the codec used is not in the ffmpeg library, so we cannot use the standard ffmpeg decoding examples. Opening the media file simply fails. Without a decoder we get no size or any other info so far. What we need would be to:
open the media file
get contained tracks (possibly also audio)
get track size / length info
get track samples by index
It turned out to be very easy:
AVFormatContext* inputFile = avformat_alloc_context();
avformat_open_input(&inputFile, filename, nullptr, nullptr);
avformat_find_stream_info(inputFile, nullptr);
//Get just two streams...First Video & First Audio
int videoStreamIndex = -1, audioStreamIndex = -1;
for (int i = 0; i < inputFile->nb_streams; i++)
{
if (inputFile->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO && videoStreamIndex == -1)
{
videoStreamIndex = i;
}
else if (inputFile->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO && audioStreamIndex == -1)
{
audioStreamIndex = i;
}
}
Now test for the correct codec tag
// get codec id
char ct[64] = {0};
static const char* codec_id = "MPAK";
av_get_codec_tag_string( ct, sizeof(ct),inputFile->streams[videoStreamIndex]->codec->codec_tag);
assert(strncmp(ct, codec_id, strlen(codec_id)) == 0);
I did not know that the sizes are set even before a codec is chosen (or even available).
// lookup size
Size2D mediasize(inputFile->streams[videoStreamIndex]->codec->width, inputFile->streams[videoStreamIndex]->codec->height);
Seeking by frame and unpacking (video) is done like this:
AVStream* s = inputFile->streams[videoStreamIndex];
int64_t seek_ts = (int64_t(frame_index) * s->r_frame_rate.den * s->time_base.den) / (int64_t(s->r_frame_rate.num) * s->time_base.num);
av_seek_frame(inputFile, videoStreamIndex, seek_ts, AVSEEK_FLAG_ANY);
AVPacket pkt;
av_read_frame(inputFile, &pkt);
Now the packet contains a frame ready to unpack with our own decoder.

Vulkan copying image from swap chain

I am using the vkCmdCopyImageToBuffer function and getting a memory access violation and don't understand why.
Here is the code:
VkBufferImageCopy region = {};
region.bufferOffset = 0;
region.bufferRowLength = width;
region.bufferImageHeight = height;
region.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
region.imageSubresource.mipLevel = 0;
region.imageSubresource.baseArrayLayer = 0;
region.imageSubresource.layerCount = 1;
region.imageOffset = { 0, 0, 0 };
region.imageExtent = {
width,
height,
1
};
vkCmdCopyImageToBuffer(m_drawCmdBuffers[i], m_swapChain.buffers[i].image,
VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, m_renderImage, 1, &region);
The swapchain images are created here in the initialization code:
// Get the swap chain images
images.resize(imageCount);
VK_CHECK_RESULT(fpGetSwapchainImagesKHR(device, swapChain, &imageCount, images.data()));
// Get the swap chain buffers containing the image and imageview
buffers.resize(imageCount);
for (uint32_t i = 0; i < imageCount; i++)
{
VkImageViewCreateInfo colorAttachmentView = {};
colorAttachmentView.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
colorAttachmentView.pNext = NULL;
colorAttachmentView.format = colorFormat;
colorAttachmentView.components = {
VK_COMPONENT_SWIZZLE_R,
VK_COMPONENT_SWIZZLE_G,
VK_COMPONENT_SWIZZLE_B,
VK_COMPONENT_SWIZZLE_A
};
colorAttachmentView.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
colorAttachmentView.subresourceRange.baseMipLevel = 0;
colorAttachmentView.subresourceRange.levelCount = 1;
colorAttachmentView.subresourceRange.baseArrayLayer = 0;
colorAttachmentView.subresourceRange.layerCount = 1;
colorAttachmentView.viewType = VK_IMAGE_VIEW_TYPE_2D;
colorAttachmentView.flags = 0;
buffers[i].image = images[i];
colorAttachmentView.image = buffers[i].image;
VK_CHECK_RESULT(vkCreateImageView(device, &colorAttachmentView, nullptr, &buffers[i].view));
}
And my buffer is similarly created here:
VkBufferCreateInfo createinfo = {};
createinfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
createinfo.size = width * height * 4 * sizeof(int8_t);
createinfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
createinfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
//create the image copy buffer
vkCreateBuffer(m_device, &createinfo, NULL, &m_renderImage);
I have tried different pixel formats and different createinfo.usage settings but none help.
VkSurfaceCapabilitiesKHR::supportedUsageFlags defines the limitations on the ways in which you can use the VkImages created by the swap chain. The only one that is guaranteed to be supported is color attachment; all of the others, including transfer src, are optional.
Therefore, you should not assume that you can copy from a presentable image. If you find yourself with a need to do that, you must first query that value. If it does not allow copies, then you must render to your own image, which you copy from. You can render from that image into the presentable one when you intend to present it.
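A sketch of that query (the helper name is mine; error handling minimal):

```c
#include <stdbool.h>
#include <vulkan/vulkan.h>

/* Sketch: before recording vkCmdCopyImageToBuffer from a swapchain
 * image, check whether the surface supports transfer-src usage. */
static bool SurfaceSupportsCopy(VkPhysicalDevice physicalDevice,
                                VkSurfaceKHR surface)
{
    VkSurfaceCapabilitiesKHR caps;
    if (vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, surface,
                                                  &caps) != VK_SUCCESS)
        return false;
    return (caps.supportedUsageFlags & VK_IMAGE_USAGE_TRANSFER_SRC_BIT) != 0;
}
```

If this returns true, you also need to request VK_IMAGE_USAGE_TRANSFER_SRC_BIT in VkSwapchainCreateInfoKHR::imageUsage when you create the swapchain; the capability only tells you that the request is allowed.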

Error in video streaming using libavformat: VBV buffer size not set, muxing may fail

I stream a video using libavformat as follows:
static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
enum AVCodecID codec_id)
{
AVCodecContext *c;
AVStream *st;
/* find the encoder */
*codec = avcodec_find_encoder(codec_id);
if (!(*codec)) {
fprintf(stderr, "Could not find encoder for '%s'\n",
avcodec_get_name(codec_id));
exit(1);
}
st = avformat_new_stream(oc, *codec);
if (!st) {
fprintf(stderr, "Could not allocate stream\n");
exit(1);
}
st->id = oc->nb_streams-1;
c = st->codec;
switch ((*codec)->type) {
case AVMEDIA_TYPE_AUDIO:
c->sample_fmt = (*codec)->sample_fmts ?
(*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
c->bit_rate = 64000;
c->sample_rate = 44100;
c->channels = 2;
break;
case AVMEDIA_TYPE_VIDEO:
c->codec_id = codec_id;
c->bit_rate = 400000;
/* Resolution must be a multiple of two. */
c->width = outframe_width;
c->height = outframe_height;
/* timebase: This is the fundamental unit of time (in seconds) in terms
* of which frame timestamps are represented. For fixed-fps content,
* timebase should be 1/framerate and timestamp increments should be
* identical to 1. */
c->time_base.den = STREAM_FRAME_RATE;
c->time_base.num = 1;
c->gop_size = 12; /* emit one intra frame every twelve frames at most */
c->pix_fmt = STREAM_PIX_FMT;
if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
/* just for testing, we also add B frames */
c->max_b_frames = 2;
}
if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
/* Needed to avoid using macroblocks in which some coeffs overflow.
* This does not happen with normal video, it just happens here as
* the motion of the chroma plane does not match the luma plane. */
c->mb_decision = 2;
}
break;
default:
break;
}
/* Some formats want stream headers to be separate. */
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
c->flags |= CODEC_FLAG_GLOBAL_HEADER;
return st;
}
But when I run this code, I get the following error/warning:
[mpeg # 01f3f040] VBV buffer size not set, muxing may fail
Do you know how I can set the VBV buffer size in the code? In fact, when I use ffplay to display the streamed video, ffplay doesn't show anything for short videos, but for long videos it starts displaying the video immediately. So it looks like ffplay needs a buffer to be filled up by some amount before it can start displaying the stream. Am I right?
You can set the VBV buffer size of a stream with:
AVCPBProperties *props;
props = (AVCPBProperties*) av_stream_new_side_data(
st, AV_PKT_DATA_CPB_PROPERTIES, sizeof(*props));
props->buffer_size = 230 * 1024;
props->max_bitrate = 0;
props->min_bitrate = 0;
props->avg_bitrate = 0;
props->vbv_delay = UINT64_MAX;
Where st is a pointer to the AVStream struct. Min bitrate, max bitrate, and average bitrate are set to 0, while the VBV delay is set to UINT64_MAX, because in this example those values indicate unknown or unspecified fields (see the AVCPBProperties documentation). Alternatively, set these values to whatever is reasonable for your specific use case. Just don't forget to assign these fields, because they will not be initialized automatically.
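Alternatively, if you control the encoder context before avcodec_open2(), you can set the VBV parameters on the context itself; the mpeg muxer has historically read these from the stream's codec context. A sketch with illustrative values, not recommendations:

```c
#include <libavcodec/avcodec.h>

/* Sketch: set VBV/rate-control fields before avcodec_open2(). */
static void set_vbv(AVCodecContext *c)
{
    c->rc_buffer_size = 230 * 1024; /* VBV buffer size, in bits  */
    c->rc_max_rate    = 400000;     /* peak bitrate, bits/second */
    c->rc_min_rate    = 0;          /* 0 = unspecified           */
}
```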

FFmpeg: bitrate change dynamically

I read the previous thread, and this is the response from NISHAnT:
FFMPEG: Dynamic change of bit_rate for Video
avcodec_init();
avcodec_register_all();
codec = avcodec_find_encoder(CODEC_ID_H263);
c = avcodec_alloc_context();
picture= avcodec_alloc_frame();
c->bit_rate = bitrate;
c->width = w;
c->height = h;
c->time_base= (AVRational){1,framerate};
c->pix_fmt = PIX_FMT_YUV420P;
avcodec_close(c);
av_free(c);
And this is my code:
if(previous_BR != cur_BR){
previous_BR = cur_BR;
AVCodecContext* new_c = av_mallocz(sizeof(AVCodecContext));
avcodec_copy_context(new_c, ost_table[0]->st->codec);
avcodec_close(ost_table[0]->st->codec);
av_free(ost_table[0]->st->codec);
avcodec_init();
avcodec_register_all();
ost_table[0]->enc = avcodec_find_encoder(CODEC_ID_H264);
new_c = avcodec_alloc_context3(ost_table[0]->enc);
ost_table[0]->st->codec = new_c;
AVFrame *picture= avcodec_alloc_frame();
new_c->bit_rate = cur_BR;
new_c->width = 352;
new_c->height = 288;
int framerate = 30;
new_c->time_base= (AVRational){1,framerate};
new_c->pix_fmt = PIX_FMT_YUV420P;
new_c->codec_type = AVMEDIA_TYPE_VIDEO;
new_c->codec_id = CODEC_ID_H264;
}
I tried to add my code to transcode(), but ffmpeg exits after it goes through my code.
Is there something wrong with my code?
Or what else should I add?
I put the code after "redo:", so that it will loop back.
Please help!
Thank you.
c is the AVCodecContext structure.
You must first configure ffmpeg for the type of file you are playing. Build it by first configuring the build.sh file in the ffmpeg root directory.
For your file type you have to configure the codec (coder/decoder) and the muxer/demuxer.
For example, to play an avi file, you have to configure the muxer/demuxer and the codec for avi, which are "AVI" and "MPEG4" respectively.