Context: I have a file called libffmpeg.so that I took from the APK of an Android application that uses FFmpeg to encode and decode files between several codecs. Thus, I take for granted that it was compiled with encoding options enabled and that this .so file contains all the codecs somewhere. The file is compiled for ARM (what we call the ARMEABI profile on Android).
I also have a very complete class with interops to call the FFmpeg API. Whatever the origin of this library, all call responses are good and most entry points exist. If not, I add them or fix deprecated ones.
When I want to create an FFmpeg encoder, the returned encoder is correct.
var thisIsSuccessful = avcodec_find_encoder(myAVCodec.id);
Now, I have a problem with codecs. Let's say that, out of curiosity, I iterate through the list of all the codecs to see which ones I'm able to open with the avcodec_open call ...
AVCodec* res = null;
while ((res = FFmpeg.av_codec_next(res)) != null)
{
    var name = res->longname;
    AVCodec* encoder = FFmpeg.avcodec_find_encoder(res->id);
    if (encoder != null)
    {
        AVCodecContext c = new AVCodecContext();
        /* put sample parameters */
        c.bit_rate = 64000;
        c.sample_rate = 22050;
        c.channels = 1;
        if (FFmpeg.avcodec_open(ref c, encoder) >= 0)
        {
            System.Diagnostics.Debug.WriteLine("[YES] - " + name);
        }
    }
    else
    {
        System.Diagnostics.Debug.WriteLine("[NO ] - " + name);
    }
}
... then only uncompressed codecs are working (YUV, FFmpeg Video 1, etc.).
My hypotheses are these:
An option that was missing when the .so file was compiled.
The avcodec_open call acts depending on the properties of the AVCodecContext I've referenced in the call.
I'm really curious: why is only a minimal set of uncompressed codecs returned?
[EDIT]
@ronald-s-bultje's answer led me to read the AVCodecContext API description, and there are a lot of mandatory fields marked "MUST be set by user" when used on an encoder. Setting a value for these parameters on the AVCodecContext made most of the nice codecs available:
c.time_base = new AVRational (); // Output framerate. Here, 30fps
c.time_base.num = 1;
c.time_base.den = 30;
c.me_method = 1; // Motion-estimation mode on compression -> 1 is none
c.width = 640; // Source width
c.height = 480; // Source height
c.gop_size = 30; // Used by h264. Just here for test purposes.
c.bit_rate = c.width * c.height * 4; // Randomly set to that...
c.pix_fmt = FFmpegSharp.Interop.Util.PixelFormat.PIX_FMT_YUV420P; // Source pixel format
"The avcodec_open call acts depending on the properties of the AVCodecContext I've referenced in the call."
It's basically that. For the video encoders, you didn't even set width/height, so most encoders really can't be expected to do anything useful like that, and are right to error out.
You can set default parameters using e.g. avcodec_get_context_defaults3(), which should go a long way toward getting some useful settings into the AVCodecContext. After that, set the typical ones like width/height/pix_fmt to the ones describing your input format. (If you want to do audio encoding - which is actually surprisingly unclear from your question - you'll need to set some different ones like sample_fmt/sample_rate/channels, but it's the same idea.) Then you should be relatively good to go.
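As a rough sketch of that flow using the plain C API on a reasonably recent FFmpeg (the FFmpegSharp wrappers may differ slightly, and the values here are purely illustrative):

const AVCodec* enc = avcodec_find_encoder(AV_CODEC_ID_MPEG4);  // or whatever id you iterate over
AVCodecContext* ctx = avcodec_alloc_context3(enc);             // allocates and applies the codec's defaults
// (avcodec_get_context_defaults3(ctx, enc) does the same thing for an already-allocated context.)
ctx->width     = 640;                                          // describe your input
ctx->height    = 480;
ctx->pix_fmt   = enc->pix_fmts ? enc->pix_fmts[0] : AV_PIX_FMT_YUV420P;
ctx->time_base = AVRational{ 1, 30 };
ctx->bit_rate  = 400000;
// For an audio encoder you would instead set sample_fmt, sample_rate and channels.
if (avcodec_open2(ctx, enc, NULL) >= 0) {
    // the encoder is usable
}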
Related
I'm trying to create an NV12 resource as source for a video encoder in DX12. While I intend to eventually populate a resource from GPU, what I'm trying to do now is take an ffmpeg AVFrame I already have (in AV_PIX_FMT_YUV420P format) and create a texture in DXGI_FORMAT_NV12 format using that data.
I understand the NV12 format (https://learn.microsoft.com/en-us/windows/win32/medfound/recommended-8-bit-yuv-formats-for-video-rendering#nv12) has U and V interleaved while the AV_PIX_FMT_YUV420P doesn't.
My main question is what the D3D12_RESOURCE_DESC looks like for an NV12 texture - do I tell it I need more than one array/mip level to make it planar? Or do I just give it a single memory address with both planes laid out as per the NV12 format, and it figures out subresources for me based on the format?
I understand that to read the data I define two SRVs, one for Y mapped to the Red channel and a second for U and V, but it's how I initialise it that's confusing me.
Just create the resource as normal, and then when you query the layout description, it will be planar.
D3D12_RESOURCE_DESC desc = {};
desc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
desc.Format = DXGI_FORMAT_NV12;
desc.MipLevels = 1;
desc.DepthOrArraySize = 1;
desc.Width = 1024;
desc.Height = 720;
desc.SampleDesc.Count = 1;
const CD3DX12_HEAP_PROPERTIES defaultHeapProperties(D3D12_HEAP_TYPE_DEFAULT);
ComPtr<ID3D12Resource> res;
HRESULT hr = device->CreateCommittedResource(
&defaultHeapProperties,
D3D12_HEAP_FLAG_NONE,
&desc,
D3D12_RESOURCE_STATE_COMMON,
nullptr,
IID_PPV_ARGS(res.GetAddressOf()));
if (FAILED(hr))
{
// error
}
D3D12_FEATURE_DATA_FORMAT_INFO formatInfo = { DXGI_FORMAT_NV12, 0 };
if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_FORMAT_INFO, &formatInfo, sizeof(formatInfo))))
{
formatInfo = {};
}
D3D12_PLACED_SUBRESOURCE_FOOTPRINT footprint[2] = {};
UINT numRows[2];
UINT64 rowBytes[2], totalBytes;
device->GetCopyableFootprints(&desc, 0, 2, 0, footprint, numRows, rowBytes, &totalBytes);
The formatInfo.PlaneCount is 2, which is why you have to ask for two subresources.
footprint[0].Format is DXGI_FORMAT_R8_TYPELESS with 1024x720 size. The footprint[0].Offset is likely 0.
footprint[1].Format is DXGI_FORMAT_R8G8_TYPELESS with 512x360 size. The footprint[1].Offset is something other than 0.
In Direct3D 12 Video the layouts are very simple to understand. In Direct3D 11 Video, it was all implicitly defined so it was a bit of a mess. That said, DDS files were defined as non-planar data, so you may want to examine how these are handled in DirectXTex.
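As a follow-up sketch (not part of the answer above, just an assumption-laden illustration): one way to get the YUV420P AVFrame into that texture is to copy the Y plane as-is, interleave U and V into the second plane of an upload buffer laid out according to the footprints, and then issue one CopyTextureRegion per plane. Here frame is the AVFrame in AV_PIX_FMT_YUV420P, upload is an assumed upload-heap buffer of at least totalBytes, cmdList an assumed command list, and the texture is assumed to have been transitioned to D3D12_RESOURCE_STATE_COPY_DEST:

uint8_t* mapped = nullptr;
upload->Map(0, nullptr, reinterpret_cast<void**>(&mapped));
// Plane 0: Y, one byte per pixel.
for (UINT y = 0; y < desc.Height; ++y)
{
    memcpy(mapped + footprint[0].Offset + y * footprint[0].Footprint.RowPitch,
           frame->data[0] + y * frame->linesize[0],
           static_cast<size_t>(desc.Width));
}
// Plane 1: interleaved UV, half resolution, two bytes per pixel.
for (UINT y = 0; y < desc.Height / 2; ++y)
{
    uint8_t* dst = mapped + footprint[1].Offset + y * footprint[1].Footprint.RowPitch;
    const uint8_t* u = frame->data[1] + y * frame->linesize[1];
    const uint8_t* v = frame->data[2] + y * frame->linesize[2];
    for (UINT x = 0; x < desc.Width / 2; ++x)
    {
        dst[2 * x]     = u[x];
        dst[2 * x + 1] = v[x];
    }
}
upload->Unmap(0, nullptr);
// One copy per plane: subresource 0 is the Y plane, subresource 1 the UV plane.
for (UINT plane = 0; plane < 2; ++plane)
{
    CD3DX12_TEXTURE_COPY_LOCATION dstLoc(res.Get(), plane);
    CD3DX12_TEXTURE_COPY_LOCATION srcLoc(upload.Get(), footprint[plane]);
    cmdList->CopyTextureRegion(&dstLoc, 0, 0, 0, &srcLoc, nullptr);
}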
Q1) How can I get video file details with macOS APIs?
Q2) How do I assess video quality of an mp4 file?
I need a program to separate a large archive of mp4 files based on the video quality - i.e., clarity, sharpness - roughly, where they'd appear along the TV spectrum of analog -> 720 -> 1080 -> 2/4k. In this case, audio, color levels, file size, CPU/GPU load, etc., are not considerations per se.
Q1) It is easy to find "natural" dimensions with AVPlayer. After a bit more poking around (https://developer.apple.com/documentation/avfoundation/avpartialasyncproperty/3816116-formatdescriptions), I found that my files have "avc1" as the media subtype; I gather that means H.264. I can't locate ways to get more details, like bit rate, with Apple APIs, even though QuickTime Player provides them.
Lots of info is available with ffprobe, so I added it to my program. You too can embed a CLI program that runs in the background inside a macOS application - see the code at the bottom.
Q2) To a video noob, dimensions are the obvious first approximation of video quality ... and codec, but mine have all previously been converted to H.264. Beyond that, I consider the bit rates reported by ffprobe.
For testing, I located two h264 files with same dimensions (1280, 720), bit depth (8), and similar file size, frame rate, duration, amount of motion, color content. To my eye, one of the two looks better, distinctly sharper; that file is smaller and has a lower video bit rate (20-40%), even when normalized for its slightly lower frame rate and duration.
From an information-theory perspective, that doesn't seem possible. I've learned that codecs can provide "quality" optimizations during compression - way past my understanding - but looking at the video stream data, I can't find indicators of anything that would impact quality/sharpness. Nothing in the per-frame and per-packet data from ffprobe stands out.
Are there any tell-tale signs I should look for? Is this a fool's errand?
Here's my Swift hack to run ffprobe inside a macOS application (written with Xcode 13 on macOS 11.6). If you know how to run a Process() that lives in /usr/bin/..., please post - I don't get the entitlements thing. (Aliases/links to the home directory don't work.)
// takes a local fileURL and determines video properties using ffprobe
func runFFProbe(targetURL:URL){
func buildArguments(url:URL) -> [String] {
// for ffprobe introduction,see: https://ottverse.com/ffprobe-comprehensive-tutorial-with-examples/
// and for complete info: https://ffmpeg.org/ffprobe.html
var arguments:[String] = []
// note: don't interpolate URL paths - may have spaces in them
let argString = "-v error -hide_banner -of default=noprint_wrappers=0 -print_format flat -select_streams v:0 -show_entries stream=width,height,bit_rate,codec_name,codec_long_name,profile,codec_tag_string,time_base,avg_frame_rate,r_frame_rate,duration_ts,bits_per_raw_sample,nb_frames "
let _ = argString.split(separator: " ").map{arguments.append(String($0))}
// let _ suppresses compiler warning about unused result of map call
arguments.append(url.path) // spaces in URL path seem to be okay here
return arguments
}
let task = Process()
// task.executableURL = URL(fileURLWithPath: "/usr/local/bin/ffprobe")
// reports "doesn't exist", but really access is blocked by macOS :(
// statically-linked ffprobe is added to the app bundle
// downloadable here - https://evermeet.cx/ffmpeg/#sExtLib-ffprobe
task.executableURL = Bundle.main.url(forResource: "ffprobe", withExtension: nil)
task.arguments = buildArguments(url: targetURL)
let pipe = Pipe()
task.standardOutput = pipe // ffprobe writes console thru standardOutput
// (ffmpeg uses standardError)
let fh = pipe.fileHandleForReading
var cumulativeResults = "" // adds the result from each buffer dump
fh.waitForDataInBackgroundAndNotify() // setup handle for listening
// object must be specified when running multiple simultaneous calls
// otherwise every instance receives messages from all other filehandles too
NotificationCenter.default.addObserver(forName: .NSFileHandleDataAvailable, object: fh, queue: nil) {notif in
let closureFileHandle:FileHandle = notif.object as! FileHandle
// Get the data from the FileHandle
let data:Data = closureFileHandle.availableData
// print("received bytes: \(data.count)\n") // debugging
if data.count > 0 {
// re-arm fh for any addition data
fh.waitForDataInBackgroundAndNotify()
// append new data to the accumulator
let str = String(decoding: data, as: UTF8.self)
cumulativeResults += str
// optionally insert code here for intermediate reporting/parsing
// self.printToTextView(string: str)
}
}
task.terminationHandler = {task -> Void in
DispatchQueue.main.async(execute: {
// run the whole termination on the main queue
if task.terminationReason==Process.TerminationReason.exit {
// roll your own reporting method
self.printToTextView(string: targetURL.lastPathComponent)
self.printToTextView(string: targetURL.fileSizeString) //custom URL extension
self.printToTextView(string: cumulativeResults)
let str = "\nSuccess!\n"
self.printToTextView(string: str)
} else {
print("Task did not terminate properly")
// post an error in UI too
return
}
// successful conversion if this point is reached
}) // end dispatchqueue
} // end termination handler
do { try
task.run()
} catch let error as NSError {
print(error.localizedDescription)
// post in UI too
return
}
} // end runFFProbe()
I'm trying to use the libavcodec library in FFmpeg to decode and then re-encode an H.264 video.
I have the decoding part working (it renders to an SDL window fine), but when I try to re-encode the frames I get bad data in the re-encoded video's samples.
Here is a cut-down code snippet of my encode logic.
EncodeResponse H264Codec::EncodeFrame(AVFrame* pFrame, StreamCodecContainer* pStreamCodecContainer, AVPacket* pPacket)
{
int result = 0;
result = avcodec_send_frame(pStreamCodecContainer->pEncodingCodecContext, pFrame);
if(result < 0)
{
return EncodeResponse::Fail;
}
while (result >= 0)
{
result = avcodec_receive_packet(pStreamCodecContainer->pEncodingCodecContext, pPacket);
// If the encoder needs more frames to create a packet, return and wait for
// this method to be called again when a new frame is present.
// Else check if we have failed to encode for some reason.
// Else a packet has successfully been returned; write it to the file.
if (result == AVERROR(EAGAIN) || result == AVERROR_EOF)
{
// Higher-level logic decodes the next frame from the source
// video, then calls this method again.
return EncodeResponse::SendNextFrame;
}
else if (result < 0)
{
return EncodeResponse::Fail;
}
else
{
// Prepare packet for muxing.
if (pStreamCodecContainer->codecType == AVMEDIA_TYPE_VIDEO)
{
av_packet_rescale_ts(m_pPacket, pStreamCodecContainer->pEncodingCodecContext->time_base,
m_pDecodingFormatContext->streams[pStreamCodecContainer->streamIndex]->time_base);
}
m_pPacket->stream_index = pStreamCodecContainer->streamIndex;
int writeResult = av_interleaved_write_frame(m_pEncodingFormatContext, m_pPacket);
if (writeResult < 0)
{
return EncodeResponse::Fail;
}
av_packet_unref(m_pPacket);
}
}
return EncodeResponse::EncoderEndOfFile;
}
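One thing the snippet above does not show (a hedged sketch, not part of the original code): once the source has no more frames, the encoder still holds the delayed frames discussed below, and they only come out after a null frame is sent to put it into draining mode:

// Illustrative end-of-stream handling; the parameter names are made up for this sketch.
// Sending nullptr switches the encoder to draining mode; avcodec_receive_packet then
// returns the buffered packets until it reports AVERROR_EOF.
void FlushEncoder(AVCodecContext* encCtx, AVFormatContext* outFormatCtx,
                  AVRational streamTimeBase, int streamIndex, AVPacket* pPacket)
{
    avcodec_send_frame(encCtx, nullptr);
    while (avcodec_receive_packet(encCtx, pPacket) == 0)
    {
        av_packet_rescale_ts(pPacket, encCtx->time_base, streamTimeBase);
        pPacket->stream_index = streamIndex;
        av_interleaved_write_frame(outFormatCtx, pPacket);
        av_packet_unref(pPacket);
    }
}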
One strange behaviour I noticed is that I have to send 50+ frames to avcodec_send_frame before I get the first packet from avcodec_receive_packet.
I built a debug build of FFmpeg and, stepping into the code, I noticed that AVERROR(EAGAIN) is returned by avcodec_receive_packet because of the following in x264_encoder_encode in encoder.c:
if( h->frames.i_input <= h->frames.i_delay + 1 - h->i_thread_frames )
{
/* Nothing yet to encode, waiting for filling of buffers */
pic_out->i_type = X264_TYPE_AUTO;
return 0;
}
For some reason my codec context (h) never has any frames. I have spent a long time trying to debug FFmpeg and determine what I'm doing wrong, but I have reached the limit of my video codec knowledge (which is little).
I'm testing this with a video that has no audio to reduce complication.
I have created a cut-down version of my application and provided a self-contained project (with FFmpeg and SDL built as dependencies). Hopefully this can help anyone willing to help me :).
Project Link
https://github.com/maxhap/video-codec
After looking into encoder initialisation, I found that I have to set the AV_CODEC_FLAG_GLOBAL_HEADER flag on the codec context before calling avcodec_open2:
pStreamCodecContainer->pEncodingCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
This change led to the re-encoded moov box looking much healthier (I used MP4Box.js to parse it). However, the video still does not play correctly: the output has grey frames at the start when played in VLC and won't play in other players.
I have since tried creating the encoding context following the sample code, rather than copying my decoding codec parameters. This fixed the bad-data/encoding issue. However, my DTS times are now scaling to huge numbers.
Here is my new codec init
if (pStreamCodecContainer->codecType == AVMEDIA_TYPE_VIDEO)
{
pStreamCodecContainer->pEncodingCodecContext->height = pStreamCodecContainer->pDecodingCodecContext->height;
pStreamCodecContainer->pEncodingCodecContext->width = pStreamCodecContainer->pDecodingCodecContext->width;
pStreamCodecContainer->pEncodingCodecContext->sample_aspect_ratio = pStreamCodecContainer->pDecodingCodecContext->sample_aspect_ratio;
/* take first format from list of supported formats */
if (pStreamCodecContainer->pEncodingCodec->pix_fmts)
{
pStreamCodecContainer->pEncodingCodecContext->pix_fmt = pStreamCodecContainer->pEncodingCodec->pix_fmts[0];
}
else
{
pStreamCodecContainer->pEncodingCodecContext->pix_fmt = pStreamCodecContainer->pDecodingCodecContext->pix_fmt;
}
/* video time_base can be set to whatever is handy and supported by encoder */
pStreamCodecContainer->pEncodingCodecContext->time_base = av_inv_q(pStreamCodecContainer->pDecodingCodecContext->framerate);
pStreamCodecContainer->pEncodingCodecContext->sample_aspect_ratio = pStreamCodecContainer->pDecodingCodecContext->sample_aspect_ratio;
}
else
{
pStreamCodecContainer->pEncodingCodecContext->channel_layout = pStreamCodecContainer->pDecodingCodecContext->channel_layout;
pStreamCodecContainer->pEncodingCodecContext->channels =
av_get_channel_layout_nb_channels(pStreamCodecContainer->pEncodingCodecContext->channel_layout);
/* take first format from list of supported formats */
pStreamCodecContainer->pEncodingCodecContext->sample_fmt = pStreamCodecContainer->pEncodingCodec->sample_fmts[0];
pStreamCodecContainer->pEncodingCodecContext->time_base = AVRational{ 1, pStreamCodecContainer->pEncodingCodecContext->sample_rate };
}
Any ideas why my DTS times are re-scaling incorrectly?
I managed to fix the DTS scaling by using the time_base value directly from the decoding stream.
So
pStreamCodecContainer->pEncodingCodecContext->time_base = m_pDecodingFormatContext->streams[pStreamCodecContainer->streamIndex]->time_base
Instead of
pStreamCodecContainer->pEncodingCodecContext->time_base = av_inv_q(pStreamCodecContainer->pDecodingCodecContext->framerate);
I will create an answer based on all my findings.
To fix the initial problem of a corrupted moov box I had to add the AV_CODEC_FLAG_GLOBAL_HEADER flag to the encoding codec context before calling avcodec_open2.
encCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
The next issue was badly scaled DTS values in the encoded packets, which had the side effect of the final mp4's duration being hundreds of hours long. To fix this I had to change the encoding codec context's time base to that of the decoding format context's stream. This is different from using av_inv_q(framerate) as suggested in the avcodec transcoding example.
encCodecContext->time_base = decCodecFormatContext->streams[streamIndex]->time_base;
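Putting the two fixes together, a minimal sketch of the encoder-context setup (the variable names here are illustrative, not taken from the project):

AVCodecContext* encCtx = avcodec_alloc_context3(encCodec);
encCtx->width   = decCtx->width;
encCtx->height  = decCtx->height;
encCtx->pix_fmt = encCodec->pix_fmts ? encCodec->pix_fmts[0] : decCtx->pix_fmt;
// Fix 2: take the time base from the decoding stream rather than av_inv_q(framerate).
encCtx->time_base = decFormatCtx->streams[streamIndex]->time_base;
// Fix 1: the mp4 muxer wants global headers (written into the moov box) rather than in-band ones.
if (outFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
    encCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
avcodec_open2(encCtx, encCodec, nullptr);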
I am trying to make a custom media sink for video playback in an OpenGL application (without the various WGL_NV_DX_INTEROP, as I am not sure if all my target devices support this).
What I have done so far is write a custom stream sink that accepts RGB32 samples and set up playback with a media session; however, I encountered a problem during initial testing with an mp4 file:
One (or more) of the MFTs in the generated topology keeps failing with the error code MF_E_TRANSFORM_NEED_MORE_INPUT, so my stream sink never receives samples.
After a few samples have been requested, the media session receives the event MF_E_ATTRIBUTENOTFOUND, but I still don't know where it is coming from.
If, however, I configure the stream sink to receive NV12 samples, everything seems to work fine.
My best guess is that the color converter MFT generated by the topology loader needs some more configuration, but I don't know how to do that, considering that I need to keep this entire process independent of the original file type.
I've made a minimal test case that demonstrates the use of a custom video renderer with a classic Media Session.
I use big_buck_bunny_720p_50mb.mp4, and I don't see any problems using the RGB32 format.
Sample code here : https://github.com/mofo7777/Stackoverflow under MinimalSinkRenderer.
EDIT
Your program works well with big_buck_bunny_720p_50mb.mp4. I think that your mp4 file is the problem. Share it, if you can.
I just made a few changes:
Stop on MESessionEnded, and Close on MESessionStopped:
case MediaEventType.MESessionEnded:
Debug.WriteLine("MediaSession:SesssionEndedEvent");
hr = mediaSession.Stop();
break;
case MediaEventType.MESessionClosed:
Debug.WriteLine("MediaSession:SessionClosedEvent");
receiveSessionEvent = false;
break;
case MediaEventType.MESessionStopped:
Debug.WriteLine("MediaSession:SesssionStoppedEvent");
hr = mediaSession.Close();
break;
default:
Debug.WriteLine("MediaSession:Event: " + eventType);
break;
Adding this to wait for the sound, and to check that the samples are OK:
internal HResult ProcessSample(IMFSample s)
{
//Debug.WriteLine("Received sample!");
CurrentFrame++;
if (s != null)
{
long llSampleTime = 0;
HResult hr = s.GetSampleTime(out llSampleTime);
if (hr == HResult.S_OK && ((CurrentFrame % 50) == 0))
{
TimeSpan ts = TimeSpan.FromMilliseconds(llSampleTime / (10000000 / 1000));
Debug.WriteLine("Frame {0} : {1}", CurrentFrame.ToString(), ts.ToString());
}
// Do not call SafeRelease here, it is done by the caller, it is a parameter
//SafeRelease(s);
}
System.Threading.Thread.Sleep(26);
return HResult.S_OK;
}
In
public HResult SetPresentationClock(IMFPresentationClock pPresentationClock)
adding
SafeRelease(PresentationClock);
before
if (pPresentationClock != null)
PresentationClock = pPresentationClock;
In a user-space application, I'm writing v210-formatted video data to a V4L2 loopback device. When I watch the video in VLC or another viewer, I just get clownbarf and claims that the stream is UYVY or something else, not v210. I suspect I need to tell the loopback device something more than what I have, to make the stream appear as v210 to the viewer. Is there one more place/way to tell it that it'll be handling a certain format?
What I do now:
int frame_w, frame_h = ((some sane values))
outputfd = open("/dev/video4", O_RDWR);
// check VIDIOC_QUERYCAPS, ...
struct v4l2_format fmt;
memset(&fmt, 0, sizeof(fmt));
fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
fmt.fmt.pix.width = frame_w;
fmt.fmt.pix.height = frame_h;
fmt.fmt.pix.bytesperline = inbpr; // no padding
fmt.fmt.pix.field = V4L2_FIELD_NONE; // = 1, progressive frames
fmt.fmt.pix.sizeimage = frame_h * fmt.fmt.pix.bytesperline;
fmt.fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
v210width = (((frame_w+47)/48)*48); // round up to mult of 48 px
byte_per_row = (v210width*8)/3;
fmt.fmt.pix.pixelformat = v4l2_fourcc('v', '2', '1', '0'); // 10-bit packed YUV 4:2:2 (v210)
fmt.fmt.pix.width = v210width;
fmt.fmt.pix.bytesperline = byte_per_row ;
ioctl(outputfd, VIDIOC_S_FMT, &fmt);
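// (Hedged suggestion, not in the original code: VIDIOC_S_FMT is allowed to adjust the
// requested format, so re-reading fmt here shows whether the driver really kept v210
// or silently substituted another pixelformat.)
if (fmt.fmt.pix.pixelformat != v4l2_fourcc('v', '2', '1', '0')) {
    fprintf(stderr, "driver replaced pixelformat with %c%c%c%c\n",
            fmt.fmt.pix.pixelformat & 0xff, (fmt.fmt.pix.pixelformat >> 8) & 0xff,
            (fmt.fmt.pix.pixelformat >> 16) & 0xff, (fmt.fmt.pix.pixelformat >> 24) & 0xff);
}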
// later, in some inner loop...
... write stuff to uint8_t buffer[] ...
write(outputfd, buffer, buffersize);
If I write UYVY format, or RGB or others, it can be made to work. Viewers display the video and report the correct format.
This code is based on examples, on reading the V4L2 docs, and on some working in-house code. No one here knows exactly all the things one must do to open and write to a video device.
While there are easily found examples online of how to read video from a V4L2 device, I couldn't find a similar-quality example for writing. If one exists, it may show the missing piece.