fast encoding video codec? - ffmpeg

Can anybody compare popular video codecs by encoding speed? I understand that better compression usually requires more processing time, but it's also possible that some codecs still provide comparably good compression with fast encoding. Are there any comparison links?
Thanks for your help.
[EDIT]: Codecs can be compared by the algorithms they use, regardless of the particular implementation, the hardware, or the video source, something like big-O notation for mathematical algorithms.

When comparing VP8 and x264, VP8 shows 5-25 times lower encoding speed with 20-30% lower quality on average. For example, the x264 High-Speed preset is both faster and higher quality than any of the VP8 presets on average.
It's tough to compare feature sets versus speed/quality.
See a quality comparison here: http://www.compression.ru/video/codec_comparison/h264_2012/
The following paragraphs are from VP9 encoding/decoding performance vs. HEVC/H.264 by Ronald S. Bultje:
x264 is an incredibly well-optimized encoder, and many people still use it. It's not that they don't want better bitrate/quality ratios, but rather, they complain that when they try to switch, it turns out these new codecs have much slower encoders, and when you increase their speed settings (which lowers their quality), the gains go away.
Let's measure that! So, I picked a target bitrate of 4000kbps for each encoder, using otherwise the same settings as earlier, but instead of using the slow presets, I used variable-speed presets (x265/x264: --preset=placebo-ultrafast; libvpx: --cpu-used=0-7).
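To run that kind of sweep on your own material, a minimal sketch could look like the commands below (input.mp4 is a placeholder, and it assumes an ffmpeg build that includes libx264, libx265 and libvpx; the null muxer discards the output so only the encoding work is timed):

    # x264 / x265: speed is controlled by -preset (placebo ... ultrafast)
    time ffmpeg -i input.mp4 -an -c:v libx264 -b:v 4000k -preset veryslow -f null -
    time ffmpeg -i input.mp4 -an -c:v libx264 -b:v 4000k -preset ultrafast -f null -
    time ffmpeg -i input.mp4 -an -c:v libx265 -b:v 4000k -preset ultrafast -f null -
    # libvpx: speed is controlled by -cpu-used (0 = slowest/best, higher = faster)
    time ffmpeg -i input.mp4 -an -c:v libvpx-vp9 -b:v 4000k -cpu-used 4 -f null -

Comparing the reported times at the same bitrate, together with a quality metric of your choice, reproduces the kind of speed/quality curve the quote describes.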

This is one of those topics where your mileage may vary widely. If I were in your position, I'd start off with a bit of research on Wikipedia, and then gather the tools to do some testing and benchmarking. The source video format will probably affect overall encoding speed, so you should test with the kind of video you intend to use on the production system.
Video encoding time can vary widely depending on the hardware used, whether you use an accelerator card, and so on. It's difficult for us to make any hard and fast recommendations without explicit knowledge of your particular setup.
The only way to make decisions like this is to test these things yourself. I've done the same thing when comparing virtualisation tools. It's fun too!
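For the actual timing, ffmpeg's -benchmark flag prints CPU time and peak memory use at the end of a run, so a first test can be as simple as the sketch below (sample.mov stands in for a clip of your production footage):

    ffmpeg -benchmark -i sample.mov -an -c:v libx264 -preset medium -crf 23 -f null -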

Related

ffmpeg libx264: what's the difference between crf, profile and preset in terms of quality (bitrate)

In ffmpeg, while encoding with libx264, I came across crf, profile and preset.
What's the difference between them in terms of bitrate?
And if I am using all three, will they conflict with each other, or which one will take effect?
No, they are independent of each other. CRF is a quality setting: lower is better, but requires more bits. Profile tells the encoder what tools it can expect the decoder to be able to handle (B-frames and CABAC, for example); the more tools, the better the quality at a given bitrate. High is best, but usually does not do much better than main, and is not supported by older decoders. Use main. Presets are created by a human in an attempt to choose good default settings for each tool by trading encoding time for quality: slower is better, but requires more CPU time.
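Put together on the command line, the three options might look like the following sketch (input.mp4/output.mp4 are placeholders, and the values are common defaults rather than a recommendation):

    ffmpeg -i input.mp4 -c:v libx264 -crf 23 -preset slow -profile:v main -c:a aac output.mp4

A lower -crf means higher quality (and more bits), a slower -preset buys better compression at the cost of encoding time, and -profile:v only constrains which H.264 features the encoder may use.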

What FFmpeg performance settings to use for processing videos for the web

I have a few questions regarding the usage of ffmpeg for processing videos for the web. I'm a beginner, so please bear with me (although I have read some docs on the internet).
Performance
First of all, given the fact that FFmpeg utilizes all cores at 100%, what is the actual parallelism efficiency?
Let's assume the following scenario: I have a full-HD video (it doesn't matter which encoder/compression format was used to produce it) and I want to downscale it to various sizes (e.g. 240px, 480px and 720px height) in mp4 format (thus using libx264 for video and aac for audio).
Using ffmpeg, I see that all of my laptop's cores (8) are used at 100%, and I was wondering what could improve the overall performance of the whole processing task. This leads to basically two scenarios. Taking the video above as input, to obtain the three output videos (240px, 480px and 720px heights), we either:
process the input video and obtain one output video at a time, letting all the cores work at 100%; or
produce all output videos in parallel, binding each output video to a single processor core working at 100%.
So the question actually reduces to the parallelism efficiency of the ffmpeg program.
Letting ffmpeg run a task procVideo - which takes one input video and produces one output video (transcoding/downscaling and so on) - on N processor cores doesn't mean it finishes the task N times faster than running the same task bound to a single core. So if the efficiency is below 100%, it's better to run N procVideo tasks in parallel, each bound to a single core, than to process each output video sequentially.
Codecs
Other than the above performance question, the choice of codecs bugs me. I am trying to produce mp4 videos because of the wide support for the format in HTML5 browsers.
So, having a video in any format as input, I want to convert it to mp4. For that I'm using the libx264 codec with aac.
Should I use libx264, x264 or h264 for video encoding/decoding?
Should I use libfdk_aac, libaacplus or aac for audio encoding/decoding to AAC?
Also, I would like to know what the licensing fees are for each of the above codecs, as the online resources on these are quite limited and hard to understand.
If anyone could shed some light on those questions, I would really be grateful! Thanks for your time!
There are a few unrelated questions here.
FFmpeg performance
All that follows is based on my personal experience, and is by no means empirical evidence.
Try as you might, you'll be very hard-pressed to find software that is better optimized for performance than FFmpeg.
Also keep in mind that most of the work in this case will be done by libx264, which is very mature and insanely fast. (Just try to encode an equivalent video to H.265 using ffmpeg and the not-quite-mature-yet x265, and you'll understand what I mean).
So in summary, you can assume that a single encoding is as fast as possible on the machine, and parallelizing will not improve anything.
An alternative solution to test is to ask ffmpeg to encode several files in a single invocation, so that the decoding part of the pipeline is only done once, as explained here: https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs.
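A sketch of that approach for the 240/480/720 case in the question (file names are placeholders); the input is decoded once and each output gets its own scaler and encoder:

    ffmpeg -i input.mp4 \
        -vf scale=-2:240 -c:v libx264 -c:a aac out240.mp4 \
        -vf scale=-2:480 -c:v libx264 -c:a aac out480.mp4 \
        -vf scale=-2:720 -c:v libx264 -c:a aac out720.mp4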
In the end, you should test each case by carefully measuring the total encoding time for each scenario.
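If you also want to measure the core-pinned scenario from the question, one way on Linux is to launch one ffmpeg per output under taskset and time the whole batch (again a sketch with placeholder file names):

    taskset -c 0 ffmpeg -i input.mp4 -vf scale=-2:240 -c:v libx264 -c:a aac out240.mp4 &
    taskset -c 1 ffmpeg -i input.mp4 -vf scale=-2:480 -c:v libx264 -c:a aac out480.mp4 &
    taskset -c 2 ffmpeg -i input.mp4 -vf scale=-2:720 -c:v libx264 -c:a aac out720.mp4 &
    wait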
Codecs
x264 and libx264 are one and the same, the difference being that the latter is used by ffmpeg instead of being a standalone tool.
By "h264", I'm not sure what you mean. In ffmpeg, h264 is only a decoder, while libx264 is the encoder. You don't have much choice there.
About AAC, all essential information is present in this web page: https://trac.ffmpeg.org/wiki/Encode/AAC
So if you can obtain a build of ffmpeg linked against libfdk_aac, this is the safest bet for good quality audio.
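With such a build, a typical web-oriented command might look like this sketch (input.mov is a placeholder; if your ffmpeg was not configured with --enable-libfdk-aac, fall back to the built-in aac encoder):

    ffmpeg -i input.mov -c:v libx264 -crf 23 -preset medium \
        -c:a libfdk_aac -b:a 128k -movflags +faststart output.mp4

The -movflags +faststart option moves the mp4 index to the front of the file, which helps progressive playback in browsers.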
License fees
This is a very sensitive subject. Most people will outright refuse to give you advice, and I'm no exception, because any legal advice implies liability in case of litigation.
To sum things up, see the following URLs:
https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC#Patent_licensing
https://en.wikipedia.org/wiki/Advanced_Audio_Coding#Licensing_and_patents
Some might argue that the difficulty of understanding the information is somehow done on purpose in order to confuse the general public.

Pseudocode (or code) for main compression algorithms

I'm really interested in image and video compression, but it's hard for me to find a good source from which to start implementing the major algorithms.
What I want is just a source of information from which to begin implementing my own codec. I want to implement it from scratch (for example, for JPEG, implement my own Huffman coding, discrete cosine transform, ...). All I need is a little step-by-step guide showing me which steps are involved in each algorithm.
I'm interested mainly in image compression algorithms (for now, JPEG) and video compression algorithms (MPEG-4, M-JPEG, and maybe AVI and MP4).
Can anyone suggest an online source with a little more information than Wikipedia? (I checked it, but the information is not really comprehensive.)
Thank you so much :)
Start with JPEG. You'll need the JPEG standard. It will take a while to go through, but that's the only way to have a shot at writing something compatible. Even then, the standard won't help much with deciding how and how much to quantize the coefficients, which requires experimentation with images.
Once you get that working, then get the H.264 standard and read that.
The ImpulseAdventure site has a fantastic series of articles about the basics of JPEG encoding.
I'm working on an experimental JPEG encoder that's partly designed to be readable and easy to change (rather than obfuscated by performance optimizations).

What is the fastest way to combine audio files on a web server?

Disclaimer: forgive my ignorance of audio/sound processing; my background is web and mobile development, and this is a bespoke requirement for one of my clients!
I have a requirement to concatenate 4 audio files, with a background track playing behind all 4 audio files. The source audio files can be created in any format, or have any treatment applied to them, to improve the processing time, but the output quality is still important. For clarity, the input files could be named as follows (.wav is only an example format):
background.wav
segment-a.wav
segment-b.wav
segment-c.wav
segment-d.wav
And would need to be structured something like this:
[------------------------------background.wav------------------------------]
[--segment-a.wav--][--segment-b.wav--][--segment-c.wav--][--segment-d.wav--]
I have managed to use the SoX tool to achieve the concatenation portion of the above using MP3 files, but on a reasonably fast computer I am getting roughly an hour's worth of concatenated audio per minute of processing, which isn't fast enough for my requirements, and I haven't applied the background sound or any nice-to-haves such as trimming/fading yet.
My questions are:
Is SoX the best/only tool for this kind of operation?
Is there any way to make the process faster without sacrificing (too much) quality?
Would changing the input file format result in improved performance? If so, which format is best?
Any suggestions from this excellent community would be much appreciated!
SoX may not be the best tool, but I doubt you will find anything much better without hand-coding.
I would venture to guess that you are doing pretty well to process that much audio in that time. You might do better, but you'll have to experiment. You are right that probably the main way to improve speed is to change the file format.
MP3 and OGG will probably give you similar performance, so first identify how MP3 compares to uncompressed audio such as WAV or AIFF. If MP3/OGG is better, try different compression ratios and sample rates to see which goes faster. With WAV files, you can try lowering the sample rate (you can do this with MP3/OGG as well). If this is speech, you can probably go as low as 8 kHz, which should speed things up considerably. For music, I would say 32 kHz, but it depends on the requirements. Also, try mono instead of stereo, which should also speed things up.
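As a rough sketch of the whole job in SoX, using the file names from the question (the 8 kHz mono conversion is just the speech-oriented suggestion above; SoX expects the files it mixes to share a sample rate and channel count):

    # downsample/downmix the background and each segment to 8 kHz mono
    for f in background segment-a segment-b segment-c segment-d; do
        sox "$f.wav" -r 8000 -c 1 "$f-small.wav"
    done
    # concatenate the four segments, then mix the result over the background track
    sox segment-a-small.wav segment-b-small.wav segment-c-small.wav segment-d-small.wav speech.wav
    sox -m background-small.wav speech.wav combined.wav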

What is a small and fast real time compression technique, like lz77?

What is the minimum source length (in bytes) for LZ77? Can anyone suggest a small and fast real-time compression technique (preferably with C source)? I need it to store compressed text and retrieve it quickly for excerpt generation in my search engine.
Thanks for all the responses. I'm using the D language for this project, so it's kind of hard to port LZO to D code, so I'm going with either LZ77 or Predictor. Thanks again :)
I long ago had need for a simple, fast compression algorithm, and found Predictor.
While it may not be the best in terms of compression ratio, Predictor is certainly fast (very fast), easy to implement, and has a good worst-case performance. You also don't need a license to implement it, which is goodness.
You can find a description and C source for Predictor in Internet RFC 1978: PPP Predictor Compression Protocol.
The lzo compressor is noted for its smallness and high speed, making it suitable for real-time use. Decompression, which uses almost zero memory, is extremely fast and can even exceed memory-to-memory copy on modern CPUs due to the reduced number of memory reads. lzop is an open-source implementation; versions for several other languages are available.
If you're looking for something more well known, LZMA (the 7-Zip encoder) is about the best general-purpose compressor you'll find in terms of compression ratio: http://www.7-zip.org/sdk.html
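If you just want a quick feel for the speed/ratio trade-off on your own text before committing to a library, timing the command-line front ends on a sample file is an easy sanity check (corpus.txt is a placeholder):

    time lzop -c corpus.txt > corpus.txt.lzo    # LZO: very fast, modest ratio
    time xz -1 -c corpus.txt > corpus.txt.xz    # LZMA at its fastest level: better ratio, slower
    ls -l corpus.txt corpus.txt.lzo corpus.txt.xz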
There's also LZJB:
https://hg.java.net/hg/solaris~on-src/file/tip/usr/src/uts/common/os/compress.c
It's pretty simple, based on LZRW1, and is used as the basic compression algorithm for ZFS.
