FFmpeg: bitrate change dynamically - ffmpeg

I read the previous thread, "FFMPEG: Dynamic change of bit_rate for Video", and this is the response from NISHAnT:
avcodec_init();
avcodec_register_all();
codec = avcodec_find_encoder(CODEC_ID_H263);
c = avcodec_alloc_context();
picture= avcodec_alloc_frame();
c->bit_rate = bitrate;
c->width = w;
c->height = h;
c->time_base= (AVRational){1,framerate};
c->pix_fmt = PIX_FMT_YUV420P;
avcodec_close(c);
av_free(c);
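(For reference, on newer FFmpeg versions the same close-and-reopen idea would look roughly like the sketch below. This is only a minimal illustration, assuming an AVCodec *codec, an existing AVCodecContext *old_ctx and the new bitrate in new_bitrate; it is not part of the original answer.)
// Minimal sketch: drop the old context, open a fresh one with the new bitrate.
avcodec_free_context(&old_ctx);                  // closes and frees the old encoder context
AVCodecContext *new_ctx = avcodec_alloc_context3(codec);
new_ctx->bit_rate  = new_bitrate;
new_ctx->width     = w;
new_ctx->height    = h;
new_ctx->time_base = (AVRational){1, framerate};
new_ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
if (avcodec_open2(new_ctx, codec, NULL) < 0) {
    // handle the error (e.g. unsupported parameters)
}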
And this is my code:
if (previous_BR != cur_BR) {
    previous_BR = cur_BR;
    AVCodecContext* new_c = av_mallocz(sizeof(AVCodecContext));
    avcodec_copy_context(new_c, ost_table[0]->st->codec);
    avcodec_close(ost_table[0]->st->codec);
    av_free(ost_table[0]->st->codec);
    avcodec_init();
    avcodec_register_all();
    ost_table[0]->enc = avcodec_find_encoder(CODEC_ID_H264);
    new_c = avcodec_alloc_context3(ost_table[0]->enc);
    ost_table[0]->st->codec = new_c;
    AVFrame *picture = avcodec_alloc_frame();
    new_c->bit_rate = cur_BR;
    new_c->width = 352;
    new_c->height = 288;
    int framerate = 30;
    new_c->time_base = (AVRational){1, framerate};
    new_c->pix_fmt = PIX_FMT_YUV420P;
    new_c->codec_type = AVMEDIA_TYPE_VIDEO;
    new_c->codec_id = CODEC_ID_H264;
}
I tried to add my code to transcode(), but ffmpeg exits right after it runs through my code.
Is there something wrong with my code, or is there something else I should add?
I put the code after the "redo:" label, so that it loops back through it.
Please help!
Thank you.

c is the AVCodecContext structure.
You must configure ffmpeg first for the type of file you are playing. Build it by first configuring the build.sh file in the ffmpeg root directory.
For your file type you have to enable the codec (coder/decoder) and the muxer/demuxer.
For example, to play an avi file you have to enable the muxer/demuxer and the codec for avi, which are "AVI" and "MPEG4" respectively.
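(A rough way to check at runtime whether the needed components were actually compiled in - a sketch of mine, not from the original answer - is to ask the libraries for them and test for NULL:)
// Sketch: verify that the AVI muxer/demuxer and the MPEG4 codec exist in this build.
avcodec_register_all();
av_register_all();
AVOutputFormat *avi_mux = av_guess_format("avi", NULL, NULL);
AVInputFormat  *avi_dmx = av_find_input_format("avi");
AVCodec *mpeg4_enc = avcodec_find_encoder(CODEC_ID_MPEG4);
AVCodec *mpeg4_dec = avcodec_find_decoder(CODEC_ID_MPEG4);
if (!avi_mux || !avi_dmx || !mpeg4_enc || !mpeg4_dec) {
    // this ffmpeg build was configured without AVI/MPEG4 support
}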

Related

RTMP live stream directly from NVENC encoder

I am trying to create a live RTMP stream containing animation generated with NVIDIA OptiX. The stream is to be received by nginx + rtmp module and broadcast in MPEG-DASH format. The full chain up to the dash.js player works if the video is first saved to a .flv file and then sent with ffmpeg, without any reformatting, using the command:
ffmpeg -re -i my_video.flv -c:v copy -f flv rtmp://x.x.x.x:1935/dash/test
But I want to stream directly from the code, and with this I am failing... Nginx logs the error "dash: invalid avcc received (2: No such file or directory)". It then seems to receive the stream correctly (segments are rolling, the dash manifest is there), however the stream cannot be played in the browser.
I can see only one difference in the manifest between the direct stream and the stream from the file: the codecs attribute of the representation in the direct stream is wrong: codecs="avcc1.000000" instead of "avc1.640028", which I get when streaming from the file.
My code opens the stream:
av_register_all();
AVOutputFormat* fmt = av_guess_format("flv", file_name, nullptr);
fmt->video_codec = AV_CODEC_ID_H264;
AVFormatContext* _oc;
avformat_alloc_output_context2(&_oc, fmt, nullptr, "rtmp://x.x.x.x:1935/dash/test");
AVStream* _vs = avformat_new_stream(_oc, nullptr);
_vs->id = 0;
_vs->time_base = AVRational { 1, 25 };
_vs->avg_frame_rate = AVRational{ 25, 1 };
AVCodecParameters *vpar = _vs->codecpar;
vpar->codec_id = fmt->video_codec;
vpar->codec_type = AVMEDIA_TYPE_VIDEO;
vpar->format = AV_PIX_FMT_YUV420P;
vpar->profile = FF_PROFILE_H264_HIGH;
vpar->level = _level;
vpar->width = _width;
vpar->height = _height;
vpar->bit_rate = _avg_bitrate;
avio_open(&_oc->pb, _oc->filename, AVIO_FLAG_WRITE);
avformat_write_header(_oc, nullptr);
Width, height, bitrate, level and profile I get from the NVENC encoder settings. I also do error checking, omitted here. Then I have a loop writing each encoded packet, with IDR frames etc. all prepared on the fly by NVENC. The loop body is:
auto & pkt_data = _packets[i];
AVPacket pkt = { 0 };
av_init_packet(&pkt);
pkt.pts = av_rescale_q(_n_frames++, AVRational{ 1, 25 }, _vs->time_base);
pkt.duration = av_rescale_q(1, AVRational{ 1, 25 }, _vs->time_base);
pkt.dts = pkt.pts;
pkt.stream_index = _vs->index;
pkt.data = pkt_data.data();
pkt.size = (int)pkt_data.size();
if (!memcmp(pkt_data.data(), "\x00\x00\x00\x01\x67", 5))
{
pkt.flags |= AV_PKT_FLAG_KEY;
}
av_write_frame(_oc, &pkt);
Obviously ffmpeg writes the avcC data somewhere... I have no clue where I should add it so that the RTMP server can recognize the stream. Or am I missing something else?
Any hint greatly appreciated, folks!
Thanks to Gyan's comment I was able to solve the issue. Following the AV_CODEC_FLAG_GLOBAL_HEADER flag in the wrapper, one can see how the global header is added, which was missing in my case. You could use the NVENC API function nvEncGetSequenceParams directly, but since I am using the SDK anyway, going through it is a bit cleaner.
So I had to attach the header to AVCodecParameters::extradata:
std::vector<uint8_t> payload;
_encoder->GetSequenceParams(payload);
vpar->extradata_size = payload.size();
vpar->extradata = (uint8_t*)av_mallocz(payload.size() + AV_INPUT_BUFFER_PADDING_SIZE);
memcpy(vpar->extradata, payload.data(), payload.size());
_encoder is my instance of NvEncoder from the SDK.
The wrapper does the same thing, but via the deprecated AVCodecContext struct.
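(For context, and only as a sketch of the mechanism rather than the wrapper's exact code: when the output format wants global headers, the usual AVCodecContext-based path looks roughly like this, assuming oc, codec_ctx, codec and stream are already set up.)
// The muxer (flv/mp4/...) asks for global headers, so the encoder puts SPS/PPS
// into extradata instead of in-band; the muxer then writes it as the avcC box.
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
    codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
avcodec_open2(codec_ctx, codec, NULL);                          // encoder fills codec_ctx->extradata
avcodec_parameters_from_context(stream->codecpar, codec_ctx);   // copies extradata to the stream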

Replace Bento4 with libav / ffmpeg

We use Bento4 - a really well designed SDK - to demux mp4 files in .mov containers. Decoding is done by our own codec, so only the raw (intraframe) samples are needed. So far this works pretty straightforwardly:
AP4_Track *test_videoTrack = nullptr;
AP4_ByteStream *input = nullptr;
AP4_Result result = AP4_FileByteStream::Create(filename, AP4_FileByteStream::STREAM_MODE_READ, input);
AP4_File m_file (*input, true);
//
// Read movie tracks, and metadata, find the video track
size_t index = 0;
uint32_t m_width = 0, m_height = 0;
auto item = m_file.GetMovie()->GetTracks().FirstItem();
auto track = item->GetData();
if (track->GetType() == AP4_Track::TYPE_VIDEO)
{
    test_videoTrack = track;   // keep a handle to the video track
    m_width = (uint32_t)((double)test_videoTrack->GetWidth() / double(1 << 16));
    m_height = (uint32_t)((double)test_videoTrack->GetHeight() / double(1 << 16));
    std::string codec("unknown");
    auto sd = track->GetSampleDescription(0);
    AP4_String c;
    if (AP4_SUCCEEDED(sd->GetCodecString(c)))
    {
        codec = c.GetChars();
    }
    // Find and instantiate the decoder
    AP4_Sample sample;
    AP4_DataBuffer sampleData;
    test_videoTrack->ReadSample(0, sample, sampleData);
}
For several reasons we would prefer to replace Bento4 with libav/ffmpeg (mainly because we already have it in the project and want to reduce dependencies).
How would we (preferably in pseudo-code) replace the Bento4 tasks done above with libav? Please remember that the codec used is not in the ffmpeg library, so we cannot use the standard ffmpeg decoding examples. Opening the media file simply fails; without a decoder we get no size or any other info so far. What we need would be to:
open the media file
get contained tracks (possibly also audio)
get track size / length info
get track samples by index
It turned out to be very easy:
AVFormatContext* inputFile = avformat_alloc_context();
avformat_open_input(&inputFile, filename, nullptr, nullptr);
avformat_find_stream_info(inputFile, nullptr);
//Get just two streams...First Video & First Audio
int videoStreamIndex = -1, audioStreamIndex = -1;
for (int i = 0; i < inputFile->nb_streams; i++)
{
    if (inputFile->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO && videoStreamIndex == -1)
    {
        videoStreamIndex = i;
    }
    else if (inputFile->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO && audioStreamIndex == -1)
    {
        audioStreamIndex = i;
    }
}
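(Side note, not part of the original answer: on reasonably recent libavformat the same lookup can be done in one call per media type with av_find_best_stream, e.g.:)
// Sketch: let libavformat pick the "best" video and audio streams.
int videoStreamIndex = av_find_best_stream(inputFile, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
int audioStreamIndex = av_find_best_stream(inputFile, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);
// Both return a negative AVERROR code if no stream of that type exists.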
Now test for the correct codec tag
// get codec id
char ct[64] = {0};
static const char* codec_id = "MPAK";
av_get_codec_tag_string(ct, sizeof(ct), inputFile->streams[videoStreamIndex]->codec->codec_tag);
assert(strncmp(ct, codec_id, strlen(codec_id)) == 0);
I did not know that the sizes are set even before a codec is chosen (or even available).
// lookup size
Size2D mediasize(inputFile->streams[videoStreamIndex]->codec->width, inputFile->streams[videoStreamIndex]->codec->height);
Seeking by frame and unpacking (video) is done like this:
AVStream* s = inputFile->streams[videoStreamIndex];
int64_t seek_ts = (int64_t(frame_index) * s->r_frame_rate.den * s->time_base.den) / (int64_t(s->r_frame_rate.num) * s->time_base.num);
av_seek_frame(inputFile, videoStreamIndex, seek_ts, AVSEEK_FLAG_ANY);
AVPacket pkt;
av_read_frame(inputFile, &pkt);
Now the packet contains a frame ready to unpack with our own decoder.
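(One small addition of mine, not in the original answer: each packet returned by av_read_frame has to be released again once its data has been consumed or copied.)
// After handing pkt.data to the external decoder (or copying it), release the packet:
av_packet_unref(&pkt);   // av_free_packet(&pkt) on older FFmpeg versions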

FFmpeg muxing to avi

I have a program that successfully shows an h264 stream using SDL: I'm getting h264 frames, decoding them with ffmpeg and drawing them with SDL.
I can also write the frames to a file (using fwrite) and play that file back with ffplay.
But I want to mux the data into an avi container, and I'm facing some problems in av_write_frame.
Here is my code:
...
/*Initializing format context - outFormatContext is the member of my class*/
AVOutputFormat *outFormat;
outFormat = av_guess_format(NULL,"out.avi",NULL);
outFormat->video_codec = AV_CODEC_ID_H264;
outFormat->audio_codec = AV_CODEC_ID_NONE;
avformat_alloc_output_context2(&outFormatContext, outFormat, NULL, "out.avi");
AVCodec *outCodec;
AVStream *outStream = add_stream(outFormatContext, &outCodec, outFormatContext->oformat->video_codec);
avcodec_open2(outStream->codec, outCodec, NULL);
av_dump_format(outFormatContext, 0, "out.avi", 1);
if (avio_open(&outFormatContext->pb, "out.avi", AVIO_FLAG_WRITE) < 0)
throw Exception("Couldn't open file");
if (avformat_write_header(outFormatContext, NULL) < 0)
throw Exception("Couldn't write to file");
//I don't have exceptions here - so there is a 6KB header in out.avi.
...
static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
                            enum AVCodecID codec_id)
{
    AVCodecContext *c;
    AVStream *st;
    /* find the encoder */
    *codec = avcodec_find_encoder(codec_id);
    if (!(*codec))
        throw("Could not find encoder");
    st = avformat_new_stream(oc, *codec);
    if (!st)
        throw ("Could not allocate stream");
    st->id = oc->nb_streams - 1;
    c = st->codec;
    c->bit_rate = 400000;
    /* Resolution must be a multiple of two. */
    c->width = 1920;
    c->height = 1080;
    c->pix_fmt = PIX_FMT_YUV420P;
    c->flags = 0;
    c->time_base.num = 1;
    c->time_base.den = 25;
    c->gop_size = 12; /* emit one intra frame every twelve frames at most */
    return st;
}
...
/* Part of the decoding loop. There is AVPacket packet - an h264 packet. */
int ret = av_write_frame(outFormatContext, &packet); // it returns -22 - Invalid argument
if (avcodec_decode_video2(pCodecCtx, pFrame, &frameDecoded, &packet) < 0)
    return;
if (frameDecoded)
{
    //SDL stuff
}
I also tried to use avcodec_encode_video2 (to encode pFrame back to H264) next to the SDL stuff, but the encoding is not working - I get empty packets :( That is the second problem.
Using av_interleaved_write_frame causes an access violation.
I copied the muxing part of the code from the ffmpeg muxing example (https://www.ffmpeg.org/doxygen/2.1/doc_2examples_2muxing_8c-example.html)
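Regarding the empty packets from avcodec_encode_video2: encoders such as x264 buffer several frames before emitting output, so got_packet staying 0 for the first calls is expected, and the encoder has to be drained at the end. A rough sketch of the usual call shape (my illustration, assuming outStream->codec is the encoder context opened above and pFrame is the decoded picture):
// Sketch of the usual encode step; empty output at the start is normal (encoder delay).
AVPacket outPkt;
av_init_packet(&outPkt);
outPkt.data = NULL;   // let the encoder allocate the buffer
outPkt.size = 0;
int got_packet = 0;
if (avcodec_encode_video2(outStream->codec, &outPkt, pFrame, &got_packet) == 0 && got_packet)
{
    // outPkt now holds a compressed frame: write it, then free it
    av_write_frame(outFormatContext, &outPkt);
    av_free_packet(&outPkt);
}
// At end of stream, keep calling avcodec_encode_video2 with frame == NULL
// until got_packet stays 0, to drain the delayed frames.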

FFMPEG streaming RTP: time base not set

I'm trying to create a small demo to get a feeling for streaming programmatically with ffmpeg. I'm using the code from this question as a basis. I can compile my code, but when I try to run it I always get this error:
[rtp # 0xbeb480] time base not set
The thing is, I have set the time base parameters. I even tried setting them for the stream (and the codec associated with the stream) as well, even though this should not be necessary as far as I understand it. This is the relevant section in my code:
AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
AVCodecContext* c = avcodec_alloc_context3(codec);
c->pix_fmt = AV_PIX_FMT_YUV420P;
c->flags = CODEC_FLAG_GLOBAL_HEADER;
c->width = WIDTH;
c->height = HEIGHT;
c->time_base.den = FPS;
c->time_base.num = 1;
c->gop_size = FPS;
c->bit_rate = BITRATE;
avcodec_open2(c, codec, NULL);
struct AVStream* stream = avformat_new_stream(avctx, codec);
// TODO: causes an error
avformat_write_header(avctx, NULL);
The error occurs when calling "avformat_write_header" near the end. All calls that can fail (like avcodec_open2) are checked; I just removed the checks to make the code more readable.
Digging through Google and the ffmpeg source code didn't yield any useful results. I think it's something really basic, but I'm stuck. Who can help me?
You are making the settings in the wrong codec context.
The streams created by avformat_new_stream() have their own internal codec contexts; the one you created with avcodec_alloc_context3() is unnecessary and has no effect on the workings of avformat_write_header().
To set the variables correctly, set them this way:
AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
struct AVStream* stream = avformat_new_stream(avctx, codec);
stream->codec->pix_fmt = AV_PIX_FMT_YUV420P;
stream->codec->flags = CODEC_FLAG_GLOBAL_HEADER;
stream->codec->width = WIDTH;
stream->codec->height = HEIGHT;
stream->codec->time_base = (AVRational){1,FPS};
stream->codec->gop_size = FPS;
stream->codec->bit_rate = BITRATE;
That solved this particular problem for me. I added the other answer given here as well, since that's how I have it set, though your way of setting the time_base probably would have worked too if you had been talking to the correct codec context.
Try:
c->time_base = (AVRational) {1, FPS};
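(Side note, not part of the original answers: on current FFmpeg versions AVStream::codec no longer exists, so the rough equivalent is to set the stream's own time base and copy the parameters from your encoder context before writing the header:)
stream->time_base = (AVRational){1, FPS};
avcodec_parameters_from_context(stream->codecpar, c);   // c is the AVCodecContext from avcodec_alloc_context3()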

ffmpeg libx264 AVCodecContext settings

I am using a recent Windows (Jan 2011) ffmpeg build and trying to record video in H264. It records fine in MPEG4 using the following settings:
c->codec_id = CODEC_ID_MPEG4;
c->codec_type = AVMEDIA_TYPE_VIDEO;
c->width = VIDEO_WIDTH;
c->height = VIDEO_HEIGHT;
c->bit_rate = c->width * c->height * 4;
c->time_base.den = FRAME_RATE;
c->time_base.num = 1;
c->gop_size = 12;
c->pix_fmt = PIX_FMT_YUV420P;
Simply changing the codec id to H264 causes avcodec_open() to fail (-1). I found a list of possible settings: How to encode h.264 with libavcodec/x264?. I have tried these; without setting pix_fmt, avcodec_open() still fails, but if I additionally set c->pix_fmt = PIX_FMT_YUV420P; then I get a divide-by-zero exception.
I then came across a few posts on here that say I should set nothing (with the exception of codec_id, codec_type, width, height and perhaps bit_rate and pix_fmt), as the library now chooses the best settings itself. I have tried various combinations, and avcodec_open() still fails.
Does anyone have advice on what to do, or some settings that are current?
Thanks.
Here is one set of H264 settings that gives the issue I describe:
static AVStream* AddVideoStream(AVFormatContext *pOutputFmtCtx,
                                int frameWidth, int frameHeight, int fps)
{
    AVCodecContext* ctx;
    AVStream* stream;
    stream = av_new_stream(pOutputFmtCtx, 0);
    if (!stream)
    {
        return NULL;
    }
    ctx = stream->codec;
    ctx->codec_id = pOutputFmtCtx->oformat->video_codec; //CODEC_ID_H264
    ctx->codec_type = AVMEDIA_TYPE_VIDEO;
    ctx->width = frameWidth; //704
    ctx->height = frameHeight; //576
    ctx->bit_rate = frameWidth * frameHeight * 4;
    ctx->coder_type = 1; // coder = 1
    ctx->flags |= CODEC_FLAG_LOOP_FILTER; // flags=+loop
    ctx->me_cmp |= 1; // cmp=+chroma, where CHROMA = 1
    ctx->partitions |= X264_PART_I8X8 + X264_PART_I4X4 + X264_PART_P8X8 + X264_PART_B8X8; // partitions=+parti8x8+parti4x4+partp8x8+partb8x8
    ctx->me_method = ME_HEX; // me_method=hex
    ctx->me_subpel_quality = 7; // subq=7
    ctx->me_range = 16; // me_range=16
    ctx->gop_size = 250; // g=250
    ctx->keyint_min = 25; // keyint_min=25
    ctx->scenechange_threshold = 40; // sc_threshold=40
    ctx->i_quant_factor = 0.71; // i_qfactor=0.71
    ctx->b_frame_strategy = 1; // b_strategy=1
    ctx->qcompress = 0.6; // qcomp=0.6
    ctx->qmin = 10; // qmin=10
    ctx->qmax = 51; // qmax=51
    ctx->max_qdiff = 4; // qdiff=4
    ctx->max_b_frames = 3; // bf=3
    ctx->refs = 3; // refs=3
    ctx->directpred = 1; // directpred=1
    ctx->trellis = 1; // trellis=1
    ctx->flags2 |= CODEC_FLAG2_BPYRAMID + CODEC_FLAG2_MIXED_REFS + CODEC_FLAG2_WPRED + CODEC_FLAG2_8X8DCT + CODEC_FLAG2_FASTPSKIP; // flags2=+bpyramid+mixed_refs+wpred+dct8x8+fastpskip
    ctx->weighted_p_pred = 2; // wpredp=2
    // libx264-main.ffpreset preset
    ctx->flags2 |= CODEC_FLAG2_8X8DCT;
    ctx->flags2 ^= CODEC_FLAG2_8X8DCT; // flags2=-dct8x8
    // if this is set, avcodec_open() gives a divide by 0 error
    // if it is not set, avcodec_open() returns -1
    //ctx->pix_fmt = PIX_FMT_YUV420P;
    return stream;
}
In my experience you should give FFMPEG as little information as possible when initialising your codec. This may seem counter-intuitive, but it means that FFMPEG will use its default settings, which are more likely to work than your own guesses. See what I would include below:
AVStream *stream;
m_video_codec = avcodec_find_encoder(AV_CODEC_ID_H264);
stream = avformat_new_stream(_outputCodec, m_video_codec);
ctx = stream->codec;
ctx->codec_id = m_fmt->video_codec;
ctx->bit_rate = m_AVIMOV_BPS; //Bits Per Second
ctx->width = m_AVIMOV_WIDTH; //Note Resolution must be a multiple of 2!!
ctx->height = m_AVIMOV_HEIGHT; //Note Resolution must be a multiple of 2!!
ctx->time_base.den = m_AVIMOV_FPS; //Frames per second
ctx->time_base.num = 1;
ctx->gop_size = m_AVIMOV_GOB; // Intra frames per x P frames
ctx->pix_fmt = AV_PIX_FMT_YUV420P;//Do not change this, H264 needs YUV format not RGB
As in previous answers, here is a working example of the FFMPEG library encoding RGB frames to a H264 video:
http://www.imc-store.com.au/Articles.asp?ID=276
An extra thought on your code though:
Have you called the register functions, like below?
avcodec_register_all();
av_register_all();
If you don't call these two functions near the start of your code, your subsequent calls to FFMPEG will fail and you'll most likely seg-fault.
Have a look at the linked example; I tested it on VC++ 2010 and it works perfectly.
