A long time ago, I implemented a C++ class to create MP4 video files from an array of images. The code works pretty well; nevertheless, I discovered a deprecation warning that I want to get rid of: the "codec" field of the AVStream structure has been deprecated, and I want to replace it.
Here is my current working code:
AVOutputFormat *outputFormat = av_guess_format("ffh264", movieFile.toLocal8Bit().data(), nullptr);
if (!outputFormat)
return false;
enum AVCodecID videoCodecID = outputFormat->video_codec;
AVCodec *videoCodec = avcodec_find_encoder(videoCodecID);
if (!videoCodec)
return false;
AVStream *stream = avformat_new_stream(formatContext, videoCodec);
if (!stream)
return false;
AVCodecContext *videoCodecContext = stream->codec; // <- codec is a deprecated parameter
videoCodecContext->width = videoW;
videoCodecContext->height = videoH;
Now, to replace the "codec" field, the libav developer team recommends using the "codecpar" field (AVCodecParameters) that was added to the AVStream structure. The example they share is this:
if (avcodec_parameters_to_context(videoCodecContext, stream->codecpar) < 0)
return nullptr;
Note: codecpar (AVCodecParameters) is a data structure itself.
Unfortunately, when I try to use that code, I run into a problem: normally, all the information stored in codecpar comes from a video file that was opened previously; in other words, the information already exists. My situation is different because I am creating an MP4 file from scratch, so there is no previous codecpar record to use. I therefore have to create a new AVCodecParameters instance myself and set every field manually.
So far, I have been able to set all the variables of the codecpar structure except for two:
uint8_t * extradata
int extradata_size
Note: currently I can create an MP4 file "successfully" without setting those variables, but the file is incomplete, and when I try to play it using "mplayer" I get this error message:
[extract_extradata @ 0x55b5bb7e45c0] No start code is found.
I researched these two fields, and it seems they store some kind of information related to the codec, which in my case is H264.
So, my specific question is: if I am setting up a codecpar (AVCodecParameters) structure from scratch, how can I set the fields extradata and extradata_size correctly for the H264 codec?
Solution:
This is a basic list of the steps I followed to replace the deprecated stream->codec data structure successfully (a code sketch follows the list):
Initialize AVFormatContext, AVOutputFormat variables (using av_guess_format and avformat_alloc_output_context2)
Open video codec (using avcodec_find_encoder)
Add/Initialize AVStream variable (using avformat_new_stream)
Initialize AVCodecContext variable (using avcodec_alloc_context3)
Customize the AVCodecContext parameters, only if you need to (for example: width, height, bit_rate, etc.)
Add this piece of code, which tells the encoder to place the H264 global headers (SPS/PPS) in extradata rather than in the bitstream; this is what later gives codecpar a valid extradata:
if (formatContext->oformat->flags & AVFMT_GLOBALHEADER)
videoCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
Open AVCodecContext variable (using avcodec_open2)
Copy the parameters from the AVCodecContext into the AVStream's codecpar (using avcodec_parameters_from_context)
From this point, you will be able to create and add frames to your output file.
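Putting those steps together, here is a minimal sketch in code (error checks trimmed; the output file name, frame rate, pixel format, and bit rate are placeholder assumptions to adapt):
AVFormatContext *formatContext = nullptr;
// passing nullptr as format name lets FFmpeg guess the muxer from the file name
avformat_alloc_output_context2(&formatContext, nullptr, nullptr, "output.mp4");
AVCodec *videoCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
AVStream *stream = avformat_new_stream(formatContext, videoCodec);
AVCodecContext *videoCodecContext = avcodec_alloc_context3(videoCodec);
videoCodecContext->width = videoW;
videoCodecContext->height = videoH;
videoCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;   // assumed pixel format
videoCodecContext->time_base = AVRational{1, 25};  // assuming 25 fps
videoCodecContext->bit_rate = 400000;              // placeholder bit rate
stream->time_base = videoCodecContext->time_base;
// MP4 wants the H264 headers (SPS/PPS) in extradata, not in the bitstream
if (formatContext->oformat->flags & AVFMT_GLOBALHEADER)
    videoCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
avcodec_open2(videoCodecContext, videoCodec, nullptr); // fills extradata here
// copies extradata/extradata_size (and everything else) into stream->codecpar
avcodec_parameters_from_context(stream->codecpar, videoCodecContext);
avio_open(&formatContext->pb, "output.mp4", AVIO_FLAG_WRITE);
avformat_write_header(formatContext, nullptr);
// ... encode frames, av_interleaved_write_frame(), av_write_trailer() ...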
PS: The example I used as a reference for this implementation is available in doc/examples/muxing.c
When I am decoding protobuf-encoded data, I use the parseFrom() method that is available in the code generated by protoc.
What I want to know is: is there a way to load protocol buffers data into some kind of generic object and read data from it using field names or tag numbers, without using code generation?
This is available in Avro with GenericRecord. What I want to know is whether there is a similar capability in protobuf too.
Found it
Compile the proto file into a desc file
protoc --descriptor_set_out=point.desc --include_imports point.proto
Load the desc file.
import com.google.protobuf.DescriptorProtos;
import com.google.protobuf.Descriptors;
import com.google.protobuf.DynamicMessage;
import java.io.FileInputStream;
import java.io.InputStream;
InputStream input = new FileInputStream("point.desc");
DescriptorProtos.FileDescriptorSet descriptorSet = DescriptorProtos.FileDescriptorSet.parseFrom(input);
// the descriptor set contains one FileDescriptorProto per compiled .proto file
DescriptorProtos.FileDescriptorProto fileDescriptorProto = descriptorSet.getFile(0);
Descriptors.Descriptor messageDescriptor = Descriptors.FileDescriptor
    .buildFrom(fileDescriptorProto, new Descriptors.FileDescriptor[0])
    .findMessageTypeByName("Point");
Use the loaded descriptor to decode data.
DynamicMessage dynamicMessage = DynamicMessage.parseFrom(messageDescriptor, encodedBytes);
int x = (int) dynamicMessage.getField(messageDescriptor.findFieldByName("x"));
I want to extract RTP extension header data while reading ffmpeg packets using int av_read_frame(AVFormatContext *s, AVPacket *pkt);
But it seems that ffmpeg skips the RTP extension header data while creating the AVPacket data (link to code). ffmpeg builds AVPackets from RTPPacket data, so perhaps there is a way to get the current RTPPacket before or after calling av_read_frame? Or maybe somebody knows another way?
I implemented this feature, but only pushed it today to a fork of the ffmpeg-2.4.2 tag. Here is the commit.
For example, on iOS you can do something like this:
AVPacket _packet; // Get your decoded packet
NSData *extData = nil;
if (_packet.extlen > 0) {
extData = [[NSData alloc] initWithBytes:_packet.ext length:_packet.extlen];
}
As a learning task I am converting my software I use every day to NIO, with the somewhat arbitrary objective of having zero remaining instances of java.io.File.
I have been successful in every case except one: it seems an ImageWriter can only write to a FileImageOutputStream, which requires a java.io.File.
Path path = Paths.get(inputFileName);
InputStream is = Files.newInputStream(path, StandardOpenOption.READ);
BufferedImage bi = ImageIO.read(is);
...
Iterator<ImageWriter> iter = ImageIO.getImageWritersBySuffix("jpg");
ImageWriter writer = iter.next();
ImageWriteParam param = writer.getDefaultWriteParam();
File outputFile = new File(outputFileName);
ImageOutputStream ios = new FileImageOutputStream(outputFile);
IIOImage iioi = new IIOImage(bi, null, null);
writer.setOutput(ios);
writer.write(null, iioi, param);
...
Is there a way to do this with a java.nio.file.Path? The Java 8 API doc for ImageWriter only mentions FileImageOutputStream.
I understand there might only be a symbolic value to doing this, but I was under the impression that NIO is intended to provide a complete alternative to java.io.File.
A RandomAccessFile, constructed with just a String for a filename, can be supplied to the FileImageOutputStream constructor.
This doesn't "use NIO" any more than just using the File in the first place, but it doesn't require File to be used directly.
For direct support of Path (or to really "use NIO"), the FileImageOutputStream (or RandomAccessFile) could be extended, or a new type implementing the ImageOutputStream interface could be created, but ... how much work is it worth?
The intended way to instantiate an ImageInputStream or ImageOutputStream in the javax.imageio API, is through the ImageIO.createImageInputStream() and ImageIO.createImageOutputStream() methods.
You will see that both these methods take an Object as their parameter. Internally, ImageIO uses a service lookup mechanism and delegates the creation to a provider able to create a stream based on the parameter. By default, there are providers for File, RandomAccessFile and InputStream.
But the mechanism is extensible. See the API doc for the javax.imageio.spi package for a starting point. If you like, you can create a provider that takes a java.nio.file.Path and creates a FileImageOutputStream based on it, or alternatively create your own implementation using some fancier NIO backing (i.e. SeekableByteChannel).
Here's source code for a sample provider and stream I created to read images from a byte array, that you could use as a starting point.
(Of course, I have to agree with @user2864740's thoughts on the cost/benefit of doing this, but as you are doing this for the sake of learning, it might make sense.)
I am using Microsoft Media Foundation to encode a H.264 video file.
I am using the SinkWriter to create the video file. The input is a buffer (MFVideoFormat_RGB32) where I draw the frames, and the output is MFVideoFormat_H264.
The encoding works and it creates a video file with my frames in it. But I want to set the quality for that video file. More specifically, I want to set the CODECAPI_AVEncCommonQuality property on the H.264 encoder.
In order to get a handle to the H.264 encoder, I call GetServiceForStream on the SinkWriter. Then I set the CODECAPI_AVEncCommonQuality property.
The problem is that my property change is ignored. As stated in the documentation:
To set this parameter in Windows 7, set the property before calling IMFTransform::SetOutputType. The encoder ignores changes after the output type is set.
The problem is that I don't create the H.264 encoder manually. I set the input and the output type on the SinkWriter, and the SinkWriter creates the H.264 encoder automatically. As soon as it creates the encoder, it calls the IMFTransform::SetOutputType method, and I can't change the CODECAPI_AVEncCommonQuality property anymore. The documentation also says that the property change isn't ignored in Windows 8, but I need this to run on Windows 7.
Do you know how I can change the quality for the encoded file while using SinkWriter on Windows 7?
PS: Someone asked the same question on the msdn forums, and he didn't seem to get an answer.
As the documentation says, you just can't change the CODECAPI_AVEncCommonQuality property after the output type is set, and the SinkWriter sets the output type before you can get your hands on the encoder.
To get around this, I managed to create a class factory and register it with Media Foundation so that the SinkWriter uses it to create the encoder. In my class factory, I create a new H264 encoder and set whatever properties I want before passing it on to the SinkWriter.
I have written in more detail the steps I took to create this class factory on the MSDN forums, here: http://social.msdn.microsoft.com/Forums/en-US/mediafoundationdevelopment/thread/6da521e9-7bb3-4b79-a2b6-b31509224638
That was the only way I could get around my problem on Windows 7.
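As a rough illustration of the idea (not the exact code; the stock encoder CLSID_CMSH264EncoderMFT, the NV12/H264 media subtypes, and the quality value 70 are assumptions to adapt), a local class factory registered via MFTRegisterLocal could look something like this:
#include <mfapi.h>
#include <mftransform.h>
#include <wmcodecdsp.h>  // CLSID_CMSH264EncoderMFT
#include <icodecapi.h>
#include <codecapi.h>
#include <atlbase.h>

class EncoderFactory : public IClassFactory {
    LONG m_ref = 1;
public:
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv) {
        if (riid == IID_IUnknown || riid == IID_IClassFactory) { *ppv = this; AddRef(); return S_OK; }
        *ppv = nullptr; return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef()  { return InterlockedIncrement(&m_ref); }
    STDMETHODIMP_(ULONG) Release() { ULONG r = InterlockedDecrement(&m_ref); if (!r) delete this; return r; }
    STDMETHODIMP CreateInstance(IUnknown *outer, REFIID riid, void **ppv) {
        if (outer) return CLASS_E_NOAGGREGATION;
        CComPtr<IMFTransform> encoder;
        HRESULT hr = CoCreateInstance(CLSID_CMSH264EncoderMFT, nullptr, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&encoder));
        if (FAILED(hr)) return hr;
        CComPtr<ICodecAPI> codecApi;
        hr = encoder->QueryInterface(IID_PPV_ARGS(&codecApi));
        if (FAILED(hr)) return hr;
        // set the properties BEFORE the SinkWriter calls SetOutputType
        VARIANT v; VariantInit(&v); v.vt = VT_UI4;
        v.ulVal = eAVEncCommonRateControlMode_Quality;
        codecApi->SetValue(&CODECAPI_AVEncCommonRateControlMode, &v);
        v.ulVal = 70; // quality, 0-100 (assumed value)
        codecApi->SetValue(&CODECAPI_AVEncCommonQuality, &v);
        return encoder->QueryInterface(riid, ppv);
    }
    STDMETHODIMP LockServer(BOOL) { return S_OK; }
};

// register in the process before creating the SinkWriter:
MFT_REGISTER_TYPE_INFO inType  = { MFMediaType_Video, MFVideoFormat_NV12 };
MFT_REGISTER_TYPE_INFO outType = { MFMediaType_Video, MFVideoFormat_H264 };
MFTRegisterLocal(new EncoderFactory(), MFT_CATEGORY_VIDEO_ENCODER, L"H264 encoder with quality", 0, 1, &inType, 1, &outType);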
CODECAPI_AVEncCommonRateControlMode and CODECAPI_AVEncCommonQuality can be passed to the H.264 encoder via the pEncodingParameters argument of IMFSinkWriter->SetInputMediaType(DWORD streamIndex, IMFMediaType *inputMediaType, IMFAttributes *pEncodingParameters). I suspect other CODECAPI_ values would work as well.
CComPtr<IMFAttributes> pEncAttrs;
ATLENSURE_SUCCEEDED(MFCreateAttributes(&pEncAttrs, 1));
ATLENSURE_SUCCEEDED(pEncAttrs->SetUINT32(CODECAPI_AVEncCommonRateControlMode, eAVEncCommonRateControlMode_Quality));
ATLENSURE_SUCCEEDED(pEncAttrs->SetUINT32(CODECAPI_AVEncCommonQuality, 40));
ATLENSURE_SUCCEEDED(writer->SetInputMediaType(sink_stream, mtSource, pEncAttrs));
// ^^^^^^^^^
I'm using Core Audio/AudioToolbox (Extended Audio File Services) to read audio files on OSX.
For my specific application, I need to find out whether the file I opened successfully using ExtAudioFileOpenURL() is a CAF file.
Unfortunately, I don't see how to do this properly, as I cannot retrieve the AudioFileTypeID from an ExtAudioFileRef.
(When writing such a file, I can define the type by passing the AudioFileTypeID to ExtAudioFileCreateWithURL; what about the reverse?)
As with my other question, it turned out that I have to use the plain AudioFile API (rather than the ExtAudio API):
static bool isCAF(const AudioFileID *file) {
    /* query the container format: is it CAF? */
    UInt32 format = 0;
    UInt32 formatSize = sizeof(format);
    if (noErr != AudioFileGetProperty(*file, kAudioFilePropertyFileFormat, &formatSize, &format))
        return false;
    return (kAudioFileCAFType == format);
}
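For completeness, a minimal usage sketch (the file path is just an example, and error handling is trimmed):
#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

const char *path = "/tmp/test.caf"; // example path
CFURLRef url = CFURLCreateFromFileSystemRepresentation(
    kCFAllocatorDefault, (const UInt8 *)path, strlen(path), false);
AudioFileID file;
if (noErr == AudioFileOpenURL(url, kAudioFileReadPermission, 0, &file)) {
    bool caf = isCAF(&file); /* true only for CAF files */
    AudioFileClose(file);
}
CFRelease(url);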