x264-encoded frames into an MP4 container with the ffmpeg API

I'm struggling to understand what is and is not needed to get my already-encoded x264 frames into a video container file using ffmpeg's libavformat API.
My current program gets the x264 frames like this:
while( x264_encoder_delayed_frames( h ) )
{
    printf("Writing delayed frame %u\n", delayed_frame_counter++);
    i_frame_size = x264_encoder_encode( h, &nal, &i_nal, NULL, &pic_out );
    if( i_frame_size < 0 ) {
        printf("Failed to encode a delayed x264 frame.\n");
        return ERROR;
    }
    else if( i_frame_size )
    {
        if( !fwrite(nal->p_payload, i_frame_size, 1, video_file_ptr) ) {
            printf("Failed to write a delayed x264 frame.\n");
            return ERROR;
        }
    }
}
If I use the ffmpeg CLI, I can put these frames into a container using:
ffmpeg -i "raw_frames.h264" -c:v copy -f mp4 "video.mp4"
I would like to do this in my program using the libavformat API, though. I'm a little stuck on the concepts and the order in which each ffmpeg function needs to be called.
So far I have written:
// NB: the filename belongs in the second argument; the first is a short name.
mAVOutputFormat = av_guess_format(NULL, "gen_vid.mp4", NULL);
printf("Guessed format\n");
int ret = avformat_alloc_output_context2(&mAVFormatContext, NULL, NULL, "gen_vid.mp4");
printf("Created context = %d\n", ret);
printf("Format = %s\n", mAVFormatContext->oformat->name);
mAVStream = avformat_new_stream(mAVFormatContext, 0);
if (!mAVStream) {
    printf("Failed allocating output stream\n");
} else {
    printf("Allocated stream.\n");
}
mAVCodecParameters = mAVStream->codecpar;
if (mAVCodecParameters->codec_type != AVMEDIA_TYPE_AUDIO &&
    mAVCodecParameters->codec_type != AVMEDIA_TYPE_VIDEO &&
    mAVCodecParameters->codec_type != AVMEDIA_TYPE_SUBTITLE) {
    printf("Invalid codec?\n");
}
if (!(mAVFormatContext->oformat->flags & AVFMT_NOFILE)) {
    ret = avio_open(&mAVFormatContext->pb, "gen_vid.mp4", AVIO_FLAG_WRITE);
    if (ret < 0) {
        printf("Could not open output file '%s'", "gen_vid.mp4");
    }
}
ret = avformat_write_header(mAVFormatContext, NULL);
if (ret < 0) {
    printf("Error occurred when opening output file\n");
}
This will print out:
Guessed format
Created context = 0
Format = mp4
Allocated stream.
Invalid codec?
[mp4 @ 0x55ffcea2a2c0] Could not find tag for codec none in stream #0, codec not currently supported in container
Error occurred when opening output file
How can I make sure the codec type is set correctly for my video?
Next I need to somehow point my mAVStream to use my x264 frames - advice would be great.
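From reading the muxing example, my current understanding is that for pre-encoded H.264 (no re-encoding) the stream setup should look roughly like the sketch below: fill the stream's codecpar by hand and give the muxer the SPS/PPS as extradata. The width/height/fps values and the use of x264_encoder_headers() are assumptions based on my encoder setup:
// Sketch only: describe the pre-encoded H.264 stream via codecpar.
mAVStream = avformat_new_stream(mAVFormatContext, NULL);
mAVStream->time_base = (AVRational){1, 30}; // assumed 30 fps

AVCodecParameters *par = mAVStream->codecpar;
par->codec_type = AVMEDIA_TYPE_VIDEO;
par->codec_id   = AV_CODEC_ID_H264;
par->width      = width;   // assumed known from the x264 setup
par->height     = height;

// MP4 needs the SPS/PPS out-of-band; x264 can hand them over:
x264_nal_t *headers;
int n_headers;
int size = x264_encoder_headers(h, &headers, &n_headers);
par->extradata = av_mallocz(size + AV_INPUT_BUFFER_PADDING_SIZE);
memcpy(par->extradata, headers[0].p_payload, size); // header payloads are sequential in memory
par->extradata_size = size;
After avformat_write_header(), each encoded frame would then go into an AVPacket (data/size pointing at the NAL payloads, pts/dts in the stream's time_base) and be written with av_interleaved_write_frame() instead of fwrite().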
Update 1:
I've now tried to set the H264 codec, so the codec's metadata is available. I seem to hit two new issues:
1) It cannot find the device and therefore cannot configure the encoder.
2) I get "dimensions not set".
mAVOutputFormat = av_guess_format(NULL, "gen_vid.mp4", NULL);
printf("Guessed format\n");
// MUST allocate the media file format context.
int ret = avformat_alloc_output_context2(&mAVFormatContext, NULL, NULL, "gen_vid.mp4");
printf("Created context = %d\n", ret);
printf("Format = %s\n", mAVFormatContext->oformat->name);
// Even though we have already encoded the H264 frames using x264,
// we still need the codec's meta-data.
const AVCodec *mAVCodec;
mAVCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
if (!mAVCodec) {
    fprintf(stderr, "Codec '%s' not found\n", "H264");
    exit(1);
}
mAVCodecContext = avcodec_alloc_context3(mAVCodec);
if (!mAVCodecContext) {
    fprintf(stderr, "Could not allocate video codec context\n");
    exit(1);
}
printf("Codec context allocated with defaults.\n");
/* put sample parameters */
mAVCodecContext->bit_rate = 400000;
mAVCodecContext->width = width;
mAVCodecContext->height = height;
mAVCodecContext->time_base = (AVRational){1, 30};
mAVCodecContext->framerate = (AVRational){30, 1};
mAVCodecContext->gop_size = 10;
mAVCodecContext->level = 31;
mAVCodecContext->max_b_frames = 1;
mAVCodecContext->pix_fmt = AV_PIX_FMT_NV12;
av_opt_set(mAVCodecContext->priv_data, "preset", "slow", 0);
printf("Set codec parameters.\n");
// Initialize the AVCodecContext to use the given AVCodec.
avcodec_open2(mAVCodecContext, mAVCodec, NULL);
// Add a new stream to a media file. Must be called before
// calling avformat_write_header().
mAVStream = avformat_new_stream(mAVFormatContext, mAVCodec);
if (!mAVStream) {
    printf("Failed allocating output stream\n");
} else {
    printf("Allocated stream.\n");
}
// TODO How should codecpar be set?
mAVCodecParameters = mAVStream->codecpar;
if (mAVCodecParameters->codec_type != AVMEDIA_TYPE_AUDIO &&
    mAVCodecParameters->codec_type != AVMEDIA_TYPE_VIDEO &&
    mAVCodecParameters->codec_type != AVMEDIA_TYPE_SUBTITLE) {
    printf("Invalid codec?\n");
}
if (!(mAVFormatContext->oformat->flags & AVFMT_NOFILE)) {
    ret = avio_open(&mAVFormatContext->pb, "gen_vid.mp4", AVIO_FLAG_WRITE);
    if (ret < 0) {
        printf("Could not open output file '%s'", "gen_vid.mp4");
    }
}
printf("Called avio_open()\n");
// MUST write a header.
ret = avformat_write_header(mAVFormatContext, NULL);
if (ret < 0) {
    printf("Error occurred when opening output file (writing header).\n");
}
Now I am getting this output -
Guessed format
Created context = 0
Format = mp4
Codec context allocated with defaults.
Set codec parameters.
[h264_v4l2m2m @ 0x556460344b40] Could not find a valid device
[h264_v4l2m2m @ 0x556460344b40] can't configure encoder
Allocated stream.
Invalid codec?
Called avio_open()
[mp4 @ 0x5564603442c0] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
[mp4 @ 0x5564603442c0] dimensions not set
Error occurred when opening output file (writing header).
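Reading the deprecation warning and the "dimensions not set" error, I believe the missing step is copying the parameters from mAVCodecContext into the stream's codecpar after avcodec_open2() and avformat_new_stream(); a sketch of what I plan to try:
// Sketch: fill the stream's codecpar from the configured codec context.
ret = avcodec_parameters_from_context(mAVStream->codecpar, mAVCodecContext);
if (ret < 0) {
    printf("Failed to copy codec parameters to the output stream.\n");
}
mAVStream->time_base = mAVCodecContext->time_base;
As for the "Could not find a valid device" lines: on this build avcodec_find_encoder(AV_CODEC_ID_H264) apparently resolved to the h264_v4l2m2m hardware wrapper; avcodec_find_encoder_by_name("libx264") would pin a specific software encoder instead (assuming it is compiled in).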

Related

Encoding of raw frames (D3D11Texture2D) to an rtsp stream using libav*

I have managed to create an RTSP stream using libav* and a DirectX texture (which I am obtaining from the GDI API using the BitBlt method). Here's my approach for creating a live RTSP stream:
Create output context and stream (skipping the checks here)
avformat_alloc_output_context2(&ofmt_ctx, NULL, "rtsp", rtsp_url); //RTSP
vid_codec = avcodec_find_encoder(ofmt_ctx->oformat->video_codec);
vid_stream = avformat_new_stream(ofmt_ctx,vid_codec);
vid_codec_ctx = avcodec_alloc_context3(vid_codec);
Set codec params
codec_ctx->codec_tag = 0;
codec_ctx->codec_id = ofmt_ctx->oformat->video_codec;
//codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
codec_ctx->width = width; codec_ctx->height = height;
codec_ctx->gop_size = 12;
//codec_ctx->gop_size = 40;
//codec_ctx->max_b_frames = 3;
codec_ctx->pix_fmt = target_pix_fmt; // AV_PIX_FMT_YUV420P
codec_ctx->framerate = { stream_fps, 1 };
codec_ctx->time_base = { 1, stream_fps};
if (fctx->oformat->flags & AVFMT_GLOBALHEADER)
{
    codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
Initialize video stream
if (avcodec_parameters_from_context(stream->codecpar, codec_ctx) < 0)
{
    Debug::Error("Could not initialize stream codec parameters!");
    return false;
}
AVDictionary* codec_options = nullptr;
if (codec->id == AV_CODEC_ID_H264) {
    av_dict_set(&codec_options, "profile", "high", 0);
    av_dict_set(&codec_options, "preset", "fast", 0);
    av_dict_set(&codec_options, "tune", "zerolatency", 0);
}
// open video encoder
int ret = avcodec_open2(codec_ctx, codec, &codec_options);
if (ret < 0) {
    Debug::Error("Could not open video encoder: ", avcodec_get_name(codec->id), " error ret: ", AVERROR(ret));
    return false;
}
stream->codecpar->extradata = codec_ctx->extradata;
stream->codecpar->extradata_size = codec_ctx->extradata_size;
Start streaming
// Create new frame and allocate buffer
AVFrame* AllocateFrameBuffer(AVCodecContext* codec_ctx, double width, double height)
{
    AVFrame* frame = av_frame_alloc();
    std::vector<uint8_t> framebuf(av_image_get_buffer_size(codec_ctx->pix_fmt, width, height, 1));
    av_image_fill_arrays(frame->data, frame->linesize, framebuf.data(), codec_ctx->pix_fmt, width, height, 1);
    frame->width = width;
    frame->height = height;
    frame->format = static_cast<int>(codec_ctx->pix_fmt);
    //Debug::Log("framebuf size: ", framebuf.size(), " frame format: ", frame->format);
    return frame;
}
void RtspStream(AVFormatContext* ofmt_ctx, AVStream* vid_stream, AVCodecContext* vid_codec_ctx, char* rtsp_url)
{
    printf("Output stream info:\n");
    av_dump_format(ofmt_ctx, 0, rtsp_url, 1);
    const int width = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetTextureWidth();
    const int height = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetTextureHeight();
    //DirectX BGRA to h264 YUV420p
    SwsContext* conversion_ctx = sws_getContext(width, height, src_pix_fmt,
        vid_stream->codecpar->width, vid_stream->codecpar->height, target_pix_fmt,
        SWS_BICUBIC | SWS_BITEXACT, nullptr, nullptr, nullptr);
    if (!conversion_ctx)
    {
        Debug::Error("Could not initialize sample scaler!");
        return;
    }
    AVFrame* frame = AllocateFrameBuffer(vid_codec_ctx, vid_codec_ctx->width, vid_codec_ctx->height);
    if (!frame) {
        Debug::Error("Could not allocate video frame\n");
        return;
    }
    if (avformat_write_header(ofmt_ctx, NULL) < 0) {
        Debug::Error("Error occurred when writing header");
        return;
    }
    if (av_frame_get_buffer(frame, 0) < 0) {
        Debug::Error("Could not allocate the video frame data\n");
        return;
    }
    int frame_cnt = 0;
    //av start time in microseconds
    int64_t start_time_av = av_gettime();
    AVRational time_base = vid_stream->time_base;
    AVRational time_base_q = { 1, AV_TIME_BASE };
    // frame pixel data info
    int data_size = width * height * 4;
    uint8_t* data = new uint8_t[data_size];
    // AVPacket* pkt = av_packet_alloc();
    while (RtspStreaming::IsStreaming())
    {
        /* make sure the frame data is writable */
        if (av_frame_make_writable(frame) < 0)
        {
            Debug::Error("Can't make frame writable");
            break;
        }
        //get copy/ref of the texture
        //uint8_t* data = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetBuffer();
        if (!WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetPixels(data, 0, 0, width, height))
        {
            Debug::Error("Failed to get frame buffer. ID: ", RtspStreaming::WindowId());
            std::this_thread::sleep_for(std::chrono::seconds(2));
            continue;
        }
        //printf("got pixels data\n");
        // convert BGRA to yuv420 pixel format
        int srcStrides[1] = { 4 * width };
        if (sws_scale(conversion_ctx, &data, srcStrides, 0, height, frame->data, frame->linesize) < 0)
        {
            Debug::Error("Unable to scale d3d11 texture to frame. ", frame_cnt);
            break;
        }
        //Debug::Log("frame pts: ", frame->pts, " time_base:", av_rescale_q(1, vid_codec_ctx->time_base, vid_stream->time_base));
        frame->pts = frame_cnt++;
        //printf("scale conversion done\n");
        //encode to the video stream
        int ret = avcodec_send_frame(vid_codec_ctx, frame);
        if (ret < 0)
        {
            Debug::Error("Error sending frame to codec context! ", frame_cnt);
            break;
        }
        AVPacket* pkt = av_packet_alloc();
        //av_init_packet(pkt);
        ret = avcodec_receive_packet(vid_codec_ctx, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        {
            //av_packet_unref(pkt);
            av_packet_free(&pkt);
            continue;
        }
        else if (ret < 0)
        {
            Debug::Error("Error during receiving packet: ", AVERROR(ret));
            //av_packet_unref(pkt);
            av_packet_free(&pkt);
            break;
        }
        if (pkt->pts == AV_NOPTS_VALUE)
        {
            //Write PTS
            //Duration between 2 frames (us)
            int64_t calc_duration = (double)AV_TIME_BASE / av_q2d(vid_stream->r_frame_rate);
            //Parameters
            pkt->pts = (double)(frame_cnt * calc_duration) / (double)(av_q2d(time_base) * AV_TIME_BASE);
            pkt->dts = pkt->pts;
            pkt->duration = (double)calc_duration / (double)(av_q2d(time_base) * AV_TIME_BASE);
        }
        int64_t pts_time = av_rescale_q(pkt->dts, time_base, time_base_q);
        int64_t now_time = av_gettime() - start_time_av;
        if (pts_time > now_time)
            av_usleep(pts_time - now_time);
        //pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        //pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        //pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
        //pkt->pos = -1;
        //write frame and send
        if (av_interleaved_write_frame(ofmt_ctx, pkt) < 0)
        {
            Debug::Error("Error muxing packet, frame number:", frame_cnt);
            break;
        }
        //Debug::Log("RTSP streaming...");
        //std::this_thread::sleep_for(std::chrono::milliseconds(1000/20));
        //av_packet_unref(pkt);
        av_packet_free(&pkt);
    }
    //av_free_packet(pkt);
    delete[] data;
    /* Write the trailer, if any. The trailer must be written before you
     * close the CodecContexts open when you wrote the header; otherwise
     * av_write_trailer() may try to use memory that was freed on
     * av_codec_close(). */
    av_write_trailer(ofmt_ctx);
    av_frame_unref(frame);
    av_frame_free(&frame);
    printf("streaming thread CLOSED!\n");
}
Now, this allows me to connect to my RTSP server and maintain the connection. However, on the RTSP client side I am getting either a gray frame or a single static frame.
I would appreciate help with the following questions:
1. Why is the stream not working in spite of the continued connection to the server and the updating frames?
2. Video codec. By default the rtsp format uses the Mpeg4 codec; is it possible to use h264? When I manually set it to AV_CODEC_ID_H264 the program fails at avcodec_open2 with a return value of -22.
3. Do I need to create and allocate a new AVFrame and AVPacket for every frame? Or can I just reuse a global variable for this?
4. Do I need to explicitly add some code for real-time streaming? (Like the "-re" flag in ffmpeg.)
It would be great if you could point out some example code for creating a livestream. I have checked the following resources:
https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/encode_video.c
streaming FLV to RTMP with FFMpeg using H264 codec and C++ API to flv.js
https://medium.com/swlh/streaming-video-with-ffmpeg-and-directx-11-7395fcb372c4
Update
While testing I found that I am able to play the stream using ffplay, while it gets stuck in the VLC player. (ffplay log snapshot omitted.)
The basic construct and initialization seem to be okay. Responses to your questions follow.
Why is the stream not working in spite of the continued connection to the server and the updating frames?
If you're getting an error or a broken stream, you might want to check the presentation and decoding timestamps (pts/dts) of your packets.
In your code, I notice that you're taking time_base from the video stream object, which is not guaranteed to be the same as the codec->time_base value and usually varies depending on the active stream.
AVRational time_base = vid_stream->time_base;
AVRational time_base_q = { 1, AV_TIME_BASE };
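A sketch of what I mean, using the names from your code: rescale the encoder's timestamps into the stream's time_base (which the muxer may have adjusted in avformat_write_header()) before writing.
// Sketch: convert pkt timestamps from the encoder's time base to the
// stream's time base, and tag the packet with the stream index.
av_packet_rescale_ts(pkt, vid_codec_ctx->time_base, vid_stream->time_base);
pkt->stream_index = vid_stream->index;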
Video codec. By default the rtsp format uses the Mpeg4 codec; is it possible to use h264?
I don't see why not... RTSP is just a protocol for carrying your packets over the network. So you should be able to use AV_CODEC_ID_H264 for encoding the stream.
Do I need to create and allocate a new AVFrame and AVPacket for every frame? Or can I just reuse a global variable for this?
In libav, during encoding, a single packet carries a single video frame, whereas a single packet can carry multiple audio frames. I should reference this, but I can't seem to find a source at the moment. In any case, the point is that you need to create a new packet every time.
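Related to this, avcodec_send_frame()/avcodec_receive_packet() is a many-to-many API: one frame in may produce zero, one, or several packets out, so the receive side is usually a loop rather than the single avcodec_receive_packet() call in your code. A sketch with the names from your code:
// Sketch: drain every packet the encoder has ready for this frame.
AVPacket* pkt = av_packet_alloc();
while (avcodec_receive_packet(vid_codec_ctx, pkt) == 0)
{
    av_packet_rescale_ts(pkt, vid_codec_ctx->time_base, vid_stream->time_base);
    pkt->stream_index = vid_stream->index;
    av_interleaved_write_frame(ofmt_ctx, pkt); // takes ownership of the packet's data
    av_packet_unref(pkt); // safe no-op after the muxer resets the packet
}
av_packet_free(&pkt);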
Do I need to explicitly add some code for real-time streaming? (Like the "-re" flag in ffmpeg.)
You don't need anything else for real-time streaming, although you might want to limit the number of frame updates you pass to the encoder to save some performance.
For me, the difference between a good capture in ffplay and a bad capture in VLC (for UDP packets) was the pkt_size=xxx attribute (ffmpeg -re -i test.mp4 -f mpegts udp://127.0.0.1:23000?pkt_size=1316) (VLC: open media, network tab, udp://@:23000:pkt_size=1316). So VLC is able to capture only if pkt_size is defined (and equal on both ends).

Replace Bento4 with libav / ffmpeg

We use Bento4 - a really well-designed SDK - to demux mp4 files in .mov containers. Decoding is done by our own codec, so only the raw (intraframe) samples are needed. So far this works pretty straightforwardly:
AP4_ByteStream *input = nullptr;
AP4_Result result = AP4_FileByteStream::Create(filename, AP4_FileByteStream::STREAM_MODE_READ, input);
AP4_File m_file(*input, true);
//
// Read movie tracks, and metadata, find the video track
size_t index = 0;
uint32_t m_width = 0, m_height = 0;
auto item = m_file.GetMovie()->GetTracks().FirstItem();
auto track = item->GetData();
if (track->GetType() == AP4_Track::TYPE_VIDEO)
{
    // track dimensions are 16.16 fixed point
    m_width = (uint32_t)((double)track->GetWidth() / double(1 << 16));
    m_height = (uint32_t)((double)track->GetHeight() / double(1 << 16));
    std::string codec("unknown");
    auto sd = track->GetSampleDescription(0);
    AP4_String c;
    if (AP4_SUCCEEDED(sd->GetCodecString(c)))
    {
        codec = c.GetChars();
    }
    // Find and instantiate the decoder
    AP4_Sample sample;
    AP4_DataBuffer sampleData;
    track->ReadSample(0, sample, sampleData);
}
For several reasons we would prefer replacing Bento4 with libav/ffmpeg (mainly because we already have it in the project and want to reduce dependencies).
How would we (preferably in pseudo-code) replace the Bento4 tasks done above with libav? Please remember that the used codec is not in the ffmpeg library, so we cannot use the standard ffmpeg decoding examples. Opening the media file simply fails. Without a decoder we got no size or any other info so far. What we need would be to:
open the media file
get contained tracks (possibly also audio)
get track size / length info
get track samples by index
It turned out to be very easy:
AVFormatContext* inputFile = avformat_alloc_context();
avformat_open_input(&inputFile, filename, nullptr, nullptr);
avformat_find_stream_info(inputFile, nullptr);
//Get just two streams...First Video & First Audio
int videoStreamIndex = -1, audioStreamIndex = -1;
for (int i = 0; i < inputFile->nb_streams; i++)
{
    if (inputFile->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO && videoStreamIndex == -1)
    {
        videoStreamIndex = i;
    }
    else if (inputFile->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO && audioStreamIndex == -1)
    {
        audioStreamIndex = i;
    }
}
Now test for the correct codec tag
// get codec id
char ct[64] = {0};
static const char* codec_id = "MPAK";
av_get_codec_tag_string( ct, sizeof(ct),inputFile->streams[videoStreamIndex]->codec->codec_tag);
assert(strncmp(ct, codec_id, strlen(codec_id)) == 0);
I did not know that the sizes are set even before a codec is chosen (or even available).
// lookup size
Size2D mediasize(inputFile->streams[videoStreamIndex]->codec->width, inputFile->streams[videoStreamIndex]->codec->height);
Seeking by frame and unpacking (video) is done like this:
AVStream* s = inputFile->streams[videoStreamIndex];
int64_t seek_ts = (int64_t(frame_index) * s->r_frame_rate.den * s->time_base.den) / (int64_t(s->r_frame_rate.num) * s->time_base.num);
av_seek_frame(inputFile, videoStreamIndex, seek_ts, AVSEEK_FLAG_ANY);
AVPacket pkt;
av_read_frame(inputFile, &pkt);
Now the packet contains a frame ready to unpack with our own decoder.
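Note that streams[i]->codec is deprecated on current libav versions; the same lookups work through AVCodecParameters, and av_find_best_stream() does not need a registered decoder unless you ask for one. A sketch of the modern equivalent:
// Sketch: stream discovery and size/tag lookup via codecpar.
int videoStreamIndex = av_find_best_stream(inputFile, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
AVCodecParameters* par = inputFile->streams[videoStreamIndex]->codecpar;
Size2D mediasize(par->width, par->height);
char tag[AV_FOURCC_MAX_STRING_SIZE] = {0};
av_fourcc_make_string(tag, par->codec_tag);
assert(strncmp(tag, codec_id, strlen(codec_id)) == 0);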

Transcoding videos with LibAvFormat for playback on iOS devices

I’m trying to transcode a video on my iOS app using FFMpeg/LibAv.
What I’m trying to accomplish is to transcode a video in order to resize each frame and possibly lower the bitrate in order to save valuable MB in the device.
The resulting video must be playable on all iPhone5+ devices.
After reading the documentation I found out that:
I do not need to encode/decode the audio stream -> I'll copy it as-is to the output file
I need to encode the video using the h264 codec (LibX264) with a profile supported by iOS (baseline profile with level 3.0 - https://trac.ffmpeg.org/wiki/Encode/H.264#Compatibility)
I'm also setting the picture format to YUV planar, since it's the only one supported by iOS
For the sake of testing I'm not using any filter at all (just a dummy/passthrough) and I'm not trying to lower the bitrate; I'm just trying to decode the video stream and encode it again
Most of the code is based on the transcoding.c and filtering.c examples available in the FFMpeg examples directory
FFMpeg-wise what I’m trying to achieve with LibAv is:
ffmpeg -i INPUT.MOV -c:v libx264 -preset ultrafast -profile:v baseline -level 3.0 -c:a copy output.MOV
(the resulting file - which can be found below - is playable on QuickTime if it’s generated by FFMpeg through the command line)
The original video was generated with a regular iPhone using iOS 8.2 but the problem is not device specific or iOS specific, it occurs on all videos generated with LibAv.
Although both resulting files are playable by VideoLan (VLC), the one I generated through LibAv is not playable by QuickTime, even though I can't find anything wrong with it.
As you can see below, I create the video stream with the proper video codec on the call to avformat_new_stream:
AVStream *out_stream; // output stream
AVStream *in_stream; // input stream
AVCodecContext *dec_ctx, *enc_ctx; // codec context for the stream
AVCodec *encoder; // codec used
int ret;
unsigned int i;
ofmt_ctx = NULL;
// Allocate an AVFormatContext for an output format. This will be the file header (similar to avformat_open_input but with zero'ed memory)
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, filename);
if (!ofmt_ctx) {
    av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
    [self errorWith:kErrorCreatingOutputContext and:@"Could not create output context"];
    return AVERROR_UNKNOWN;
}
// we must not use the AVCodecContext from the video stream directly! So we have to use avcodec_copy_context() to copy the context to a new location (after allocating memory for it, of course).
// iterate over all input streams
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
    in_stream = ifmt_ctx->streams[i]; // input stream
    dec_ctx = in_stream->codec; // get the codec context for the decoder
    if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
        // lets use h264
        encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
        if (!encoder) {
            [self errorWith:kErrorCodecNotFound and:@"H264 Codec Not Found"];
            return AVERROR_UNKNOWN;
        }
        out_stream = avformat_new_stream(ofmt_ctx, encoder); // create a new stream with h264 codec
        if (!out_stream) {
            av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
            [self errorWith:kErrorAllocateOutputStream and:@"Failed allocating output stream"];
            return AVERROR_UNKNOWN;
        }
        enc_ctx = out_stream->codec; // pointer to the stream codec context
        /* we transcode to same properties (picture size,
         * sample rate etc.). These properties can be changed for output
         * streams easily using filters */
        if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
            enc_ctx->width = dec_ctx->width;
            enc_ctx->height = dec_ctx->height;
            enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
            enc_ctx->pix_fmt = AV_PIX_FMT_YUV420P;
            enc_ctx->time_base = dec_ctx->time_base;
            av_opt_set(enc_ctx->priv_data, "preset", "ultrafast", 0);
            av_opt_set(enc_ctx->priv_data, "profile", "baseline", 0);
            av_opt_set(enc_ctx->priv_data, "level", "3.0", 0);
        }
        out_stream->time_base = in_stream->time_base;
        AVDictionaryEntry *tag = NULL;
        while ((tag = av_dict_get(in_stream->metadata, "", tag, AV_DICT_IGNORE_SUFFIX))) {
            printf("%s=%s\n", tag->key, tag->value);
            char *k = av_strdup(tag->key); // if your strings are already allocated,
            char *v = av_strdup(tag->value); // you can avoid copying them like this
            av_dict_set(&out_stream->metadata, k, v, 0);
        }
        ret = avcodec_open2(enc_ctx, encoder, NULL);
        if (ret < 0) {
            av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
            [self errorWith:kErrorCantOpenOutputFile and:[NSString stringWithFormat:@"Cannot open video encoder for stream #%u", i]];
            return ret;
        }
    }
    else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {
        // if we cant figure out the stream type, fail
        av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
        [self errorWith:kErrorUnknownStream and:[NSString stringWithFormat:@"Elementary stream #%d is of unknown type, cannot proceed", i]];
        return AVERROR_INVALIDDATA;
    }
    else {
        out_stream = avformat_new_stream(ofmt_ctx, NULL);
        if (!out_stream) {
            av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
            [self errorWith:kErrorAllocateOutputStream and:@"Failed allocating output stream"];
            return AVERROR_UNKNOWN;
        }
        enc_ctx = out_stream->codec;
        /* this stream must be remuxed */
        // copies ifmt_ctx->streams[i]->codec into ofmt_ctx->streams[i]->codec - Copy the settings of the source AVCodecContext into the destination AVCodecContext.
        ret = avcodec_copy_context(ofmt_ctx->streams[i]->codec,
                                   ifmt_ctx->streams[i]->codec);
        if (ret < 0) {
            av_log(NULL, AV_LOG_ERROR, "Copying stream context failed\n");
            [self errorWith:kErrorCopyStreamFailed and:@"Copying stream context failed"];
            return ret;
        }
    }
    // dunno what this is for
    if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
        enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
    // Create and initialize an AVIOContext for accessing the
    // resource indicated by url.
    ret = avio_open(&ofmt_ctx->pb, filename, AVIO_FLAG_WRITE);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
        [self errorWith:kErrorCantOpenOutputFile and:[NSString stringWithFormat:@"Could not open output file '%s'", filename]];
        return ret;
    }
}
/* init muxer, write output file header */
// Allocate the stream private data and write the stream header to an output media file.
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
    av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
    [self errorWith:kErrorOutFileCantWriteHeader and:@"Error occurred when opening output file"];
    return ret;
}
return 0;
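One thing I'm now wondering about after re-reading the stock transcoding.c: there the global-header flag is set on the encoder context before avcodec_open2(), so that libx264 generates the extradata the MOV/MP4 header needs (QuickTime is strict about that). A sketch of that ordering with the names above:
// Sketch: request global headers *before* opening the encoder,
// so the muxer gets valid extradata for the avcC box.
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
    enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
ret = avcodec_open2(enc_ctx, encoder, NULL);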
You can find the files here:
Original final: https://www.dropbox.com/s/2jjs1uy2pu2veyy/IMG_5705.MOV?dl=0
File generated with FFMpeg - https://www.dropbox.com/s/9hfmq3fcifgpfqc/local-ffmpeg.MOV?dl=0
File generated by code - https://www.dropbox.com/s/rttvny39rj7ejpf/generated-by-Ze.MOV?dl=0
Thank you so much,
Ze

FFmpeg muxing to avi

I have a program that successfully shows an h264 stream using SDL: I get an h264 frame, decode it using ffmpeg and draw it using SDL.
I can also write the frames to a file (using fwrite) and play that file through ffplay.
But I want to mux the data into an avi container, and I face some problems in av_write_frame.
Here is my code:
...
/* Initializing the format context - outFormatContext is a member of my class */
AVOutputFormat *outFormat;
outFormat = av_guess_format(NULL, "out.avi", NULL);
outFormat->video_codec = AV_CODEC_ID_H264;
outFormat->audio_codec = AV_CODEC_ID_NONE;
avformat_alloc_output_context2(&outFormatContext, outFormat, NULL, "out.avi");
AVCodec *outCodec;
AVStream *outStream = add_stream(outFormatContext, &outCodec, outFormatContext->oformat->video_codec);
avcodec_open2(outStream->codec, outCodec, NULL);
av_dump_format(outFormatContext, 0, "out.avi", 1);
if (avio_open(&outFormatContext->pb, "out.avi", AVIO_FLAG_WRITE) < 0)
    throw Exception("Couldn't open file");
if (avformat_write_header(outFormatContext, NULL) < 0)
    throw Exception("Couldn't write to file");
// I don't get exceptions here - so there is a 6KB header in out.avi.
...
static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
                            enum AVCodecID codec_id)
{
    AVCodecContext *c;
    AVStream *st;
    /* find the encoder */
    *codec = avcodec_find_encoder(codec_id);
    if (!(*codec))
        throw("Could not find encoder");
    st = avformat_new_stream(oc, *codec);
    if (!st)
        throw("Could not allocate stream");
    st->id = oc->nb_streams - 1;
    c = st->codec;
    c->bit_rate = 400000;
    /* Resolution must be a multiple of two. */
    c->width = 1920;
    c->height = 1080;
    c->pix_fmt = PIX_FMT_YUV420P;
    c->flags = 0;
    c->time_base.num = 1;
    c->time_base.den = 25;
    c->gop_size = 12; /* emit one intra frame every twelve frames at most */
    return st;
}
...
/* Part of the decoding loop. There is an AVPacket packet - an h264 packet. */
int ret = av_write_frame(outFormatContext, &packet); // it returns -22 - Invalid argument
if (avcodec_decode_video2(pCodecCtx, pFrame, &frameDecoded, &packet) < 0)
    return;
if (frameDecoded)
{
    //SDL stuff
}
I also tried to use avcodec_encode_video2 (to encode pFrame back to H264) next to the SDL stuff, but the encoding is not working - I get empty packets :( That is the second problem.
Using av_interleaved_write_frame causes an access violation.
I copied the code for the muxing part from the ffmpeg muxing example (https://www.ffmpeg.org/doxygen/2.1/doc_2examples_2muxing_8c-example.html).
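From that same example, my understanding is that av_write_frame() returning -22 (EINVAL) usually points at packet metadata rather than the payload. A sketch of the fields I probably need to set before writing, with in_time_base standing in for whatever time base my incoming h264 packets use (an assumption, since that depends on the source):
// Sketch: the muxer needs the stream index, and AVI needs valid,
// monotonically increasing dts values.
packet.stream_index = outStream->index;
av_packet_rescale_ts(&packet, in_time_base, outStream->time_base);
int ret = av_write_frame(outFormatContext, &packet);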

Convert YUV frames into RGBA frames with FFMPEG

I would like to develop an application which would be able to convert YUV frames into RGBA frames using the ffmpeg library.
I have begun writing this code:
void Decode::video_encode_example(const char *filename, int codec_id)
{
    AVCodec *codec;
    AVCodecContext *c = NULL;
    int i, ret, x, y, got_output;
    FILE *f;
    AVFrame *frame;
    AVPacket pkt;
    uint8_t endcode[] = { 0, 0, 1, 0xb7 };
    printf("Encode video file %s\n", filename);
    /* find the mpeg1 video encoder */
    codec = avcodec_find_encoder((enum AVCodecID)codec_id);
    if (!codec) {
        fprintf(stderr, "Codec not found\n");
        exit(1);
    }
    c = avcodec_alloc_context3(codec);
    if (!c) {
        fprintf(stderr, "Could not allocate video codec context\n");
        exit(2);
    }
    /* put sample parameters */
    c->bit_rate = 400000;
    /* resolution must be a multiple of two */
    c->width = 352; // previously this was 352x288
    c->height = 288;
    /* frames per second */
    c->time_base = (AVRational){1, 25};
    /* emit one intra frame every ten frames
     * check frame pict_type before passing frame
     * to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
     * then gop_size is ignored and the output of encoder
     * will always be I frame irrespective to gop_size
     */
    c->gop_size = 10;
    c->max_b_frames = 1;
    printf("Before\n");
    c->pix_fmt = PIX_FMT_RGBA; // previously this was AV_PIX_FMT_YUV420P
    printf("After\n");
    if (codec_id == AV_CODEC_ID_H264)
        av_opt_set(c->priv_data, "preset", "slow", 0);
    /* open it */
    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "Could not open codec\n");
        exit(3);
    }
    f = fopen(filename, "wb");
    if (!f) {
        fprintf(stderr, "Could not open %s\n", filename);
        exit(4);
    }
    frame = avcodec_alloc_frame(); // in more recent versions this is av_frame_alloc
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(5);
    }
    frame->format = c->pix_fmt;
    frame->width = c->width;
    frame->height = c->height;
However, each time I run this application, the following error appears in my Linux terminal:
[mpeg2video @ 0x10c7040] Specified pix_fmt is not supported
Could you help me please ?
I'm not sure how you believe your code is relevant to your question: your question suggests you'd like to do a pixel format conversion from YUV to RGB, for which you could e.g. use ffmpeg's libswscale. However, your code creates an MPEG-1/2 encoder object and tries to encode RGB input data into MPEG-1/2. This is not possible; ffmpeg's MPEG-1/2 encoders only support YUV420P. I'm not quite sure what to recommend, other than to figure out whether you want to encode MPEG-1/2 video, in which case your input should be YUV420P, not RGBA, or whether you want to do a pixel format conversion, in which case you should use libswscale.
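For the conversion itself, a minimal libswscale sketch (assuming the input is a filled AVFrame in AV_PIX_FMT_YUV420P; error checks omitted):
#include <libswscale/swscale.h>
#include <libavutil/frame.h>

// Sketch: convert one YUV420P frame into a freshly allocated RGBA frame.
AVFrame *yuv420p_to_rgba(const AVFrame *src)
{
    AVFrame *dst = av_frame_alloc();
    dst->format = AV_PIX_FMT_RGBA;
    dst->width  = src->width;
    dst->height = src->height;
    av_frame_get_buffer(dst, 0);

    struct SwsContext *sws = sws_getContext(
        src->width, src->height, AV_PIX_FMT_YUV420P,
        dst->width, dst->height, AV_PIX_FMT_RGBA,
        SWS_BILINEAR, NULL, NULL, NULL);
    sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
              0, src->height, dst->data, dst->linesize);
    sws_freeContext(sws);
    return dst;
}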
