FFMPEG streaming RTP: time base not set

I'm trying to create a small demo to get a feeling for streaming programmatically with ffmpeg. I'm using the code from this question as a basis. I can compile my code, but when I try to run it I always get this error:
[rtp @ 0xbeb480] time base not set
The thing is, I have set the time base parameters. I even tried setting them for the stream (and the codec associated with the stream) as well, even though this should not be necessary as far as I understand it. This is the relevant section in my code:
AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
AVCodecContext* c = avcodec_alloc_context3(codec);
c->pix_fmt = AV_PIX_FMT_YUV420P;
c->flags = CODEC_FLAG_GLOBAL_HEADER;
c->width = WIDTH;
c->height = HEIGHT;
c->time_base.den = FPS;
c->time_base.num = 1;
c->gop_size = FPS;
c->bit_rate = BITRATE;
avcodec_open2(c, codec, NULL);
struct AVStream* stream = avformat_new_stream(avctx, codec);
// TODO: causes an error
avformat_write_header(avctx, NULL);
The error occurs when calling "avformat_write_header" near the end. All calls that can fail (like avcodec_open2) are checked; I removed the checks here to keep the code readable.
Digging through Google and the FFmpeg source code didn't yield any useful results. I think it's something really basic, but I'm stuck. Can anyone help?

You are applying the settings to the wrong codec context.
The streams created by avformat_new_stream() have their own internal codec contexts; the one you created with avcodec_alloc_context3() is unnecessary and has no effect on the workings of avformat_write_header().
To set the variables correctly, set them on the stream's codec context:
AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
struct AVStream* stream = avformat_new_stream(avctx, codec);
stream->codec->pix_fmt = AV_PIX_FMT_YUV420P;
stream->codec->flags = CODEC_FLAG_GLOBAL_HEADER;
stream->codec->width = WIDTH;
stream->codec->height = HEIGHT;
stream->codec->time_base = (AVRational){1,FPS};
stream->codec->gop_size = FPS;
stream->codec->bit_rate = BITRATE;
That solved this particular problem for me. I applied the other answer given here as well, since that's how I have it set, though your way of setting time_base would probably have worked too if you had been talking to the correct codec context.
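For reference, AVStream::codec has since been deprecated. On FFmpeg 3.1 and newer, a minimal sketch of the equivalent setup, assuming the same avctx, WIDTH, HEIGHT, FPS and BITRATE as in the question, is to configure your own encoder context and copy its parameters into the stream:
// Sketch for FFmpeg 3.1+, where AVStream::codec is deprecated: configure
// your own encoder context, open it, then copy its parameters to the stream.
AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
AVCodecContext* c = avcodec_alloc_context3(codec);
c->pix_fmt = AV_PIX_FMT_YUV420P;
c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
c->width = WIDTH;
c->height = HEIGHT;
c->time_base = (AVRational){1, FPS};
c->gop_size = FPS;
c->bit_rate = BITRATE;
avcodec_open2(c, codec, NULL);
struct AVStream* stream = avformat_new_stream(avctx, NULL);
stream->time_base = c->time_base; // the muxer may adjust this in avformat_write_header()
avcodec_parameters_from_context(stream->codecpar, c);
avformat_write_header(avctx, NULL);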

Try:
c->time_base = (AVRational) {1, FPS};

Related

How to change the settings of AVCodecContext after initialization (FFMPEG)

I have a question about libavcodec that I can't find the answer to online. I'm trying to use H.264 to encode frames. The issue I'm having is that the frames I wish to encode have variable widths and heights. I understand that to encode frames in libavcodec, you need to set "width" and "height" on the AVCodecContext struct and then initialize it, as such:
AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
AVCodecContext *context = avcodec_alloc_context3(codec);
context->width = 1920;
//OTHER SETTINGS HERE
//FINALLY...
avcodec_open2(context, codec, NULL);
Let's say that, after I've initialized this context, I need to encode a different frame that now has a width of 900. I can't simply do context->width = 900, because the context has already been initialized with a width of 1920. I could create an entirely new AVCodecContext and delete the previous one with avcodec_close() as follows:
AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
AVCodecContext *context = avcodec_alloc_context3(codec);
context->width = 900;
//OTHER SETTINGS HERE
//FINALLY...
avcodec_open2(context, codec, NULL);
// DO THE ENCODING HERE
avcodec_close(context);
But my program has been crashing unexpectedly when I do this, and I feel like recreating the AVCodecContext every time I need to change a simple width/height setting is inefficient to begin with. Does anyone have any suggestions as to how I can go about doing this? Thank you very much!
That's not a thing. You must reinitialize the encoder, or scale/pad the frames to the same size (see the sketch below).
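If you go the scaling route, here is a minimal sketch with libswscale, assuming YUV420P input frames and fixed encoder dimensions ENC_WIDTH x ENC_HEIGHT (both names are placeholders; error handling omitted):
#include <libswscale/swscale.h>
// Hypothetical sketch: scale every incoming frame to one fixed encoder size
// so the encoder never has to be reinitialized.
struct SwsContext *sws = sws_getContext(
    in_frame->width, in_frame->height, AV_PIX_FMT_YUV420P, // source
    ENC_WIDTH, ENC_HEIGHT, AV_PIX_FMT_YUV420P,             // destination
    SWS_BILINEAR, NULL, NULL, NULL);
AVFrame *scaled = av_frame_alloc();
scaled->format = AV_PIX_FMT_YUV420P;
scaled->width  = ENC_WIDTH;
scaled->height = ENC_HEIGHT;
av_frame_get_buffer(scaled, 0);
sws_scale(sws, (const uint8_t * const *)in_frame->data, in_frame->linesize,
          0, in_frame->height, scaled->data, scaled->linesize);
scaled->pts = in_frame->pts; // carry the timestamp over
// encode 'scaled' instead of 'in_frame', then:
sws_freeContext(sws);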
I had the same problem; I solved it like this:
First I changed the codecContext width and height (they should be even numbers):
while (w % 2 != 0) {
    w--;
}
while (h % 2 != 0) {
    h--;
}
cctx->bit_rate = w * h * 10;
cctx->width = w;
cctx->height = h;
Second, I initialized the codec with the changed codecContext:
codec->init(cctx);
And finally I destroyed and recreated the AVFrame (I couldn't find a way to reinitialize a frame without recreating it; simply changing width and height doesn't work):
if (videoFrame) {
    av_frame_free(&videoFrame); // av_frame_free() also sets videoFrame to NULL
}
if (!videoFrame) {
    videoFrame = av_frame_alloc();
    videoFrame->format = AV_PIX_FMT_YUV420P;
    videoFrame->width = cctx->width;
    videoFrame->height = cctx->height;
    if ((av_frame_get_buffer(videoFrame, 32)) < 0) {
        std::cout << "Failed to allocate picture" << std::endl;
        return;
    }
}

RTMP live stream directly from NVENC encoder

I am trying to create a live RTMP stream containing an animation generated with NVIDIA OptiX. The stream is to be received by nginx + rtmp module and broadcast in MPEG-DASH format. The full chain up to the dash.js player works if the video is first saved to an .flv file and then sent with ffmpeg, without any reformatting, using the command:
ffmpeg -re -i my_video.flv -c:v copy -f flv rtmp://x.x.x.x:1935/dash/test
But I want to stream directly from the code, and with this I am failing... Nginx logs the error "dash: invalid avcc received (2: No such file or directory)". After that it seems to receive the stream correctly (segments are rolling, the dash manifest is there), but the stream cannot be played in the browser.
I can see only one difference in the manifest between the direct stream and the stream from the file: the codecs attribute of the representation is wrong in the direct stream, codecs="avcc1.000000" instead of the "avc1.640028" I get when streaming from the file.
My code opens the stream:
av_register_all();
AVOutputFormat* fmt = av_guess_format("flv", file_name, nullptr);
fmt->video_codec = AV_CODEC_ID_H264;
AVFormatContext* _oc;
avformat_alloc_output_context2(&_oc, fmt, nullptr, "rtmp://x.x.x.x:1935/dash/test");
AVStream* _vs = avformat_new_stream(_oc, nullptr);
_vs->id = 0;
_vs->time_base = AVRational { 1, 25 };
_vs->avg_frame_rate = AVRational{ 25, 1 };
AVCodecParameters *vpar = _vs->codecpar;
vpar->codec_id = fmt->video_codec;
vpar->codec_type = AVMEDIA_TYPE_VIDEO;
vpar->format = AV_PIX_FMT_YUV420P;
vpar->profile = FF_PROFILE_H264_HIGH;
vpar->level = _level;
vpar->width = _width;
vpar->height = _height;
vpar->bit_rate = _avg_bitrate;
avio_open(&_oc->pb, _oc->filename, AVIO_FLAG_WRITE);
avformat_write_header(_oc, nullptr);
Width, height, bitrate, level, and profile I get from the NVENC encoder settings. I also do error checking, omitted here. Then I have a loop writing each encoded packet, with IDR frames etc. all prepared on the fly by NVENC. The loop body is:
auto & pkt_data = _packets[i];
AVPacket pkt = { 0 };
av_init_packet(&pkt);
pkt.pts = av_rescale_q(_n_frames++, AVRational{ 1, 25 }, _vs->time_base);
pkt.duration = av_rescale_q(1, AVRational{ 1, 25 }, _vs->time_base);
pkt.dts = pkt.pts;
pkt.stream_index = _vs->index;
pkt.data = pkt_data.data();
pkt.size = (int)pkt_data.size();
// An Annex B start code followed by an SPS NAL unit (0x67) marks a keyframe
if (!memcmp(pkt_data.data(), "\x00\x00\x00\x01\x67", 5))
{
    pkt.flags |= AV_PKT_FLAG_KEY;
}
av_write_frame(_oc, &pkt);
Obviously ffmpeg is writing the avcc code somewhere... I have no clue where to add this code so the RTMP server can recognize it. Or am I missing something else?
Any hint greatly appreciated, folks!
Thanks to Gyan's comment I was able to solve the issue. Following the AV_CODEC_FLAG_GLOBAL_HEADER flag in the wrapper, one can see how the global header is added, which was missing in my case. You can call the NVENC API function nvEncGetSequenceParams directly, but since I am using the SDK wrapper anyway, this is a bit cleaner.
So I had to attach the header to AVCodecParameters::extradata:
std::vector<uint8_t> payload;
_encoder->GetSequenceParams(payload);
vpar->extradata_size = payload.size();
vpar->extradata = (uint8_t*)av_mallocz(payload.size() + AV_INPUT_BUFFER_PADDING_SIZE);
memcpy(vpar->extradata, payload.data(), payload.size());
_encoder is my instance of NvEncoder from SDK.
The wrapper does the same thing, but via the deprecated struct AVCodecContext.
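For comparison, when the frames are encoded by one of FFmpeg's own encoders instead of NVENC, the global header comes for free. A sketch, assuming an encoder context enc_ctx opened from an AVCodec enc (both names are placeholders):
// With an FFmpeg encoder, AV_CODEC_FLAG_GLOBAL_HEADER asks the encoder to
// place SPS/PPS in the context's extradata instead of in-band; copying the
// parameters then gives the FLV muxer what it needs for the avcC box.
if (_oc->oformat->flags & AVFMT_GLOBALHEADER)
    enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
avcodec_open2(enc_ctx, enc, NULL);                       // fills enc_ctx->extradata
avcodec_parameters_from_context(_vs->codecpar, enc_ctx); // copies it to the stream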

FFmpeg transcoded sound (AAC) stops after half video time

I have a strange problem in my C/C++ FFmpeg transcoder, which takes an input MP4 (varying input codecs) and produces an output MP4 (x264 baseline & AAC LC at a 44100 Hz sample rate, via libfdk_aac):
The resulting mp4 video has fine images (x264), and the audio (AAC LC) works fine as well, but it only plays until exactly half of the video.
The audio is not slowed down, not stretched and doesn't stutter. It just stops right in the middle of the video.
One hint may be that the input file has a sample rate of 22050, and 22050/44100 is 0.5, but I really don't get why this would make the sound just stop after half the time. I'd expect such an error to lead to the sound playing at the wrong speed. Everything works just fine if I don't try to enforce 44100 and instead just use the incoming sample_rate.
Another guess would be that the pts calculation doesn't work. But the audio sounds just fine (until it stops) and I do exactly the same for the video part, where it works flawlessly. "Exactly", as in the same code, but "audio"-variables replaced with "video"-variables.
FFmpeg reports no errors during the whole process. I also flush the decoders/encoders/interleaved writing after all the packet reading from the input is done. It works well for the video, so I doubt there is much wrong with my general approach.
Here are the functions of my code (stripped off the error handling & other class stuff):
AudioCodecContext Setup
outContext->_audioCodec = avcodec_find_encoder(outContext->_audioTargetCodecID);
outContext->_audioStream =
avformat_new_stream(outContext->_formatContext, outContext->_audioCodec);
outContext->_audioCodecContext = outContext->_audioStream->codec;
outContext->_audioCodecContext->channels = 2;
outContext->_audioCodecContext->channel_layout = av_get_default_channel_layout(2);
outContext->_audioCodecContext->sample_rate = 44100;
outContext->_audioCodecContext->sample_fmt = outContext->_audioCodec->sample_fmts[0];
outContext->_audioCodecContext->bit_rate = 128000;
outContext->_audioCodecContext->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
outContext->_audioCodecContext->time_base =
(AVRational){1, outContext->_audioCodecContext->sample_rate};
outContext->_audioStream->time_base = (AVRational){1, outContext->_audioCodecContext->sample_rate};
int retVal = avcodec_open2(outContext->_audioCodecContext, outContext->_audioCodec, NULL);
Resampler Setup
outContext->_audioResamplerContext =
swr_alloc_set_opts( NULL, outContext->_audioCodecContext->channel_layout,
outContext->_audioCodecContext->sample_fmt,
outContext->_audioCodecContext->sample_rate,
_inputContext._audioCodecContext->channel_layout,
_inputContext._audioCodecContext->sample_fmt,
_inputContext._audioCodecContext->sample_rate,
0, NULL);
int retVal = swr_init(outContext->_audioResamplerContext);
Decoding
decodedBytes = avcodec_decode_audio4( _inputContext._audioCodecContext,
_inputContext._audioTempFrame,
&p_gotAudioFrame, &_inputContext._currentPacket);
Converting (only if decoding produced a frame, of course)
int retVal = swr_convert( outContext->_audioResamplerContext,
outContext->_audioConvertedFrame->data,
outContext->_audioConvertedFrame->nb_samples,
(const uint8_t**)_inputContext._audioTempFrame->data,
_inputContext._audioTempFrame->nb_samples);
Encoding (only if decoding produced a frame, of course)
outContext->_audioConvertedFrame->pts =
av_frame_get_best_effort_timestamp(_inputContext._audioTempFrame);
// Init the new packet
av_init_packet(&outContext->_audioPacket);
outContext->_audioPacket.data = NULL;
outContext->_audioPacket.size = 0;
// Encode
int retVal = avcodec_encode_audio2( outContext->_audioCodecContext,
&outContext->_audioPacket,
outContext->_audioConvertedFrame,
&p_gotPacket);
// Set pts/dts time stamps for writing interleaved
av_packet_rescale_ts( &outContext->_audioPacket,
outContext->_audioCodecContext->time_base,
outContext->_audioStream->time_base);
outContext->_audioPacket.stream_index = outContext->_audioStream->index;
Writing (only if encoding produced a packet, of course)
int retVal = av_interleaved_write_frame(outContext->_formatContext, &outContext->_audioPacket);
I am quite out of ideas as to what could cause such behaviour.
So, I finally managed to figure things out myself.
The problem was indeed in the difference of the sample_rate.
You'd assume that a call to swr_convert() would give you all the samples you need for converting the audio frame when called like I did.
Of course, that would be too easy.
Instead, you may need to call swr_convert() multiple times per input frame and buffer its output. Then you grab a single frame's worth of samples from the buffer, and that is what you encode.
Here is my new convertAudioFrame function:
// Calculate number of output samples
int numOutputSamples = av_rescale_rnd(
    swr_get_delay(outContext->_audioResamplerContext, _inputContext._audioCodecContext->sample_rate)
        + _inputContext._audioTempFrame->nb_samples,
    outContext->_audioCodecContext->sample_rate,
    _inputContext._audioCodecContext->sample_rate,
    AV_ROUND_UP);
if (numOutputSamples == 0)
{
    return;
}
uint8_t* tempSamples;
av_samples_alloc( &tempSamples, NULL,
                  outContext->_audioCodecContext->channels, numOutputSamples,
                  outContext->_audioCodecContext->sample_fmt, 0);
int retVal = swr_convert( outContext->_audioResamplerContext,
                          &tempSamples,
                          numOutputSamples,
                          (const uint8_t**)_inputContext._audioTempFrame->data,
                          _inputContext._audioTempFrame->nb_samples);
// Write to audio fifo
if (retVal > 0)
{
    retVal = av_audio_fifo_write(outContext->_audioFifo, (void**)&tempSamples, retVal);
}
av_freep(&tempSamples);
// Get a frame from audio fifo
int samplesAvailable = av_audio_fifo_size(outContext->_audioFifo);
if (samplesAvailable > 0)
{
    retVal = av_audio_fifo_read(outContext->_audioFifo,
                                (void**)outContext->_audioConvertedFrame->data,
                                outContext->_audioCodecContext->frame_size);
    // We got a frame, so also set its pts
    if (retVal > 0)
    {
        p_gotConvertedFrame = 1;
        if (_inputContext._audioTempFrame->pts != AV_NOPTS_VALUE)
        {
            outContext->_audioConvertedFrame->pts = _inputContext._audioTempFrame->pts;
        }
        else if (_inputContext._audioTempFrame->pkt_pts != AV_NOPTS_VALUE)
        {
            outContext->_audioConvertedFrame->pts = _inputContext._audioTempFrame->pkt_pts;
        }
    }
}
I basically call this function until there are no more frames in the audio fifo buffer.
So the audio was only half as long because I only encoded as many frames as I decoded, when I actually needed to encode twice as many frames due to the doubled sample_rate.
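A sketch of that outer drain loop, reusing the names from the snippets above; the running sample counter for pts and the encodeAndWriteAudioFrame() helper are my own assumptions, since copying the input frame's pts can drift once one input frame yields two output frames:
// Hypothetical drain loop: keep pulling encoder-sized frames out of the fifo.
// Deriving pts from a running sample count (in the encoder's {1, sample_rate}
// time base) stays correct even when input and output frame counts differ.
int64_t nextAudioPts = 0; // assumed counter, persisted across calls
while (av_audio_fifo_size(outContext->_audioFifo) >= outContext->_audioCodecContext->frame_size)
{
    av_audio_fifo_read(outContext->_audioFifo,
                       (void**)outContext->_audioConvertedFrame->data,
                       outContext->_audioCodecContext->frame_size);
    outContext->_audioConvertedFrame->pts = nextAudioPts;
    nextAudioPts += outContext->_audioCodecContext->frame_size;
    encodeAndWriteAudioFrame(outContext); // hypothetical helper wrapping the encode/write steps above
}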

Decoder crashes after ffmpeg upgrade

Recently I upgraded ffmpeg from 0.9 to 1.0 (tested on Win7 x64 and on iOS), and now avcodec_decode_video2 segfaults. Long story short: the crash occurs every time the video dimensions change (e.g. from 320x240 to 160x120 or vice versa).
I receive mpeg4 video stream from some proprietary source and decode it like this:
// once, during initialization:
AVCodec *codec_ = avcodec_find_decoder(CODEC_ID_MPEG4);
AVCodecContext *ctx_ = avcodec_alloc_context3(codec_);
avcodec_open2(ctx_, codec_, 0);
AVPacket packet_;
av_init_packet(&packet_);
AVFrame *picture_ = avcodec_alloc_frame();
// on every frame:
int got_picture;
packet_.size = size;
packet_.data = (uint8_t *)buffer;
avcodec_decode_video2(ctx_, picture_, &got_picture, &packet_);
Again, all the above had worked flawlessly until I upgraded to 1.0. Now every time the frame dimensions change, avcodec_decode_video2 crashes. Note that I don't assign width/height in the AVCodecContext, neither in the beginning nor when the stream changes. Could that be the reason?
I'd appreciate any idea!
Update: setting ctx_->width and ctx_->height doesn't help.
Update2: just before the crash I get the following log messages:
mpeg4, level 24: "Found 2 unreleased buffers!".
level 8: "Assertion i < avci->buffer_count failed at libavcodec/utils.c:603"
Update 3: upgrading to 1.1.2 fixed this crash. The decoder is again able to cope with dimension changes on the fly.
You can try to fill AVPacket::side_data. If you change the frame size, the codec receives the new dimensions from it (see the apply_param_change function in libavcodec/utils.c).
This structure can be filled as follows:
int my_ff_add_param_change(AVPacket *pkt, int32_t width, int32_t height)
{
    uint32_t flags = 0;
    int size = 4 * 3;
    uint8_t *data;
    if (!pkt)
        return AVERROR(EINVAL);
    flags = AV_SIDE_DATA_PARAM_CHANGE_DIMENSIONS;
    data = av_packet_new_side_data(pkt, AV_PKT_DATA_PARAM_CHANGE, size);
    if (!data)
        return AVERROR(ENOMEM);
    ((uint32_t*)data)[0] = flags;
    ((uint32_t*)data)[1] = width;
    ((uint32_t*)data)[2] = height;
    return 0;
}
You need to call this function every time the size changes.
I think this feature appeared recently. I didn't know about it until I looked at the new ffmpeg sources.
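A minimal usage sketch, reusing the names from the question; the last_width/last_height bookkeeping is assumed:
// Hypothetical call site: attach the dimension change to the packet
// before handing it to the decoder.
packet_.size = size;
packet_.data = (uint8_t *)buffer;
if (width != last_width || height != last_height) // assumed bookkeeping
{
    my_ff_add_param_change(&packet_, width, height);
    last_width  = width;
    last_height = height;
}
avcodec_decode_video2(ctx_, picture_, &got_picture, &packet_);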
UPD
As you write, the easiest way to solve the problem is to restart the codec: just call avcodec_close / avcodec_open2.
I just ran into the same issue when my frames were changing size on the fly. However, calling avcodec_close/avcodec_open2 is superfluous. A cleaner way is to just reset your AVPacket structure before the call to avcodec_decode_video2. Here is the code:
av_init_packet(&packet_);
The key here is that this method resets the fields of the AVPacket to their defaults (note that it leaves pkt.data and pkt.size untouched, so set those afterwards). Check the docs for more info.
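In the question's per-frame loop that would look roughly like this:
// Reset the packet before each decode call, then point it at the new data.
// av_init_packet() does not touch data/size, so set those after the reset.
av_init_packet(&packet_);
packet_.size = size;
packet_.data = (uint8_t *)buffer;
avcodec_decode_video2(ctx_, picture_, &got_picture, &packet_);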

FFmpeg: bitrate change dynamically

I read the previous thread, and this is the response from NISHAnT:
FFMPEG: Dynamic change of bit_rate for Video
avcodec_init();
avcodec_register_all();
codec = avcodec_find_encoder(CODEC_ID_H263);
c = avcodec_alloc_context();
picture= avcodec_alloc_frame();
c->bit_rate = bitrate;
c->width = w;
c->height = h;
c->time_base= (AVRational){1,framerate};
c->pix_fmt = PIX_FMT_YUV420P;
avcodec_close(c);
av_free(c);
And this is my code:
if (previous_BR != cur_BR) {
    previous_BR = cur_BR;
    AVCodecContext* new_c = av_mallocz(sizeof(AVCodecContext));
    avcodec_copy_context(new_c, ost_table[0]->st->codec);
    avcodec_close(ost_table[0]->st->codec);
    av_free(ost_table[0]->st->codec);
    avcodec_init();
    avcodec_register_all();
    ost_table[0]->enc = avcodec_find_encoder(CODEC_ID_H264);
    new_c = avcodec_alloc_context3(ost_table[0]->enc);
    ost_table[0]->st->codec = new_c;
    AVFrame *picture = avcodec_alloc_frame();
    new_c->bit_rate = cur_BR;
    new_c->width = 352;
    new_c->height = 288;
    int framerate = 30;
    new_c->time_base = (AVRational){1, framerate};
    new_c->pix_fmt = PIX_FMT_YUV420P;
    new_c->codec_type = AVMEDIA_TYPE_VIDEO;
    new_c->codec_id = CODEC_ID_H264;
}
I tried to add my code to transcode(), but ffmpeg exits after it runs through my code.
Is there something wrong with my code?
Or what else should I add?
I put the code after "redo:", so that it will loop back.
Please help!
Thank you.
c is an AVCodecContext structure.
You must first configure FFmpeg for the type of file you are playing. Build it by first configuring the build.sh file in the FFmpeg root directory.
For the type of file, you have to configure the codec (coder/decoder) and the muxer/demuxer.
For example, to play an AVI file you have to enable the AVI demuxer and the MPEG4 codec, as in the sketch below.
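A hedged example of such a minimal configure invocation (these are standard FFmpeg configure flags, but the exact set you need depends on your input files):
./configure --disable-everything \
            --enable-demuxer=avi \
            --enable-decoder=mpeg4 \
            --enable-protocol=file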
