I need to merge/mix multiple audio files into one single audio file using the FFmpeg API. I googled a lot but didn't find any useful code samples. Could anyone provide some guidance on how to do this with the FFmpeg APIs?
You can use SoX for this task; see the documentation at http://sox.sourceforge.net/sox.html. FFmpeg is widely used for converting audio from one format to another, while SoX is a well-established tool for mixing or concatenating audio files.
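Neither the question nor the answer shows a concrete invocation, so here is a minimal sketch of the "shell out from C" approach, assuming two WAV inputs with matching sample rates. The file names are placeholders, and this drives the SoX and ffmpeg command-line tools rather than the FFmpeg C API the question asks about.

```c
#include <stdlib.h>

int main(void)
{
    /* SoX: -m (--combine mix) overlays the inputs into one file;
       dropping -m would concatenate them instead. */
    int rc = system("sox -m first.wav second.wav mixed.wav");

    /* The same mix done with the ffmpeg CLI, using the amix filter. */
    if (rc == 0)
        rc = system("ffmpeg -i first.wav -i second.wav "
                    "-filter_complex amix=inputs=2:duration=longest mixed_ffmpeg.wav");

    return rc;
}
```

Doing this purely through the FFmpeg C API means decoding each input with libavcodec, feeding the frames through libavfilter's amix filter, and re-encoding the output, which is considerably more code than the command-line route.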
So we have an ancient compiled program that has been converting AVI files to MPEG for television broadcast. The program is pure sorcery, as the original programmer is long gone, but it has created tens of thousands of MPEG files of a very particular format that our (also ancient) broadcast server uses.
So... the question is whether we can use FFmpeg to first "get the details" of one of those MPEG files, and then use that information to convert future MP4 files to that legacy MPEG format.
In short, we don't know all the intricacies of what the program is or may be doing, and we want to replace it with FFmpeg while being confident that we're getting exactly the same output, which works without a hitch in our fussy broadcast server.
FFmpeg cannot automatically retrieve and store all of the information from an existing file that would be needed to reproduce its characteristics in a new file.
ffprobe or ffmpeg will show you basic stream and metadata information, but that information has to be parsed outside of ffmpeg and a conversion command manually crafted to reproduce those properties. Even then, this is only a start: there may be many aspects, such as flags, headers, and packetization, that ffprobe won't show, and which a fussy consumer expects to be a certain way.
FFmpeg should be able to produce a standard, vanilla file. You mention 'MPEG', but that could refer to an MPEG-1 Program Stream (ISO/IEC 11172-1) or an MPEG-2 Program or Transport Stream (ISO/IEC 13818-1). Transport streams in particular are still widely produced and used, and you should be able to find multiple tools, FLOSS or otherwise, that produce them.
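Since the legacy converter was itself a compiled program, one possible shape for its replacement is a thin C wrapper that shells out to ffmpeg with every parameter pinned explicitly. This is only an illustration: the bitrates, GOP size, and container choice below are placeholders that would have to be matched against what ffprobe reports for the known-good legacy files.

```c
#include <stdlib.h>

int main(void)
{
    /* Placeholder settings; real values must be copied from the legacy files.
       Use -f mpegts for a transport stream or -f vob for an MPEG-2 program
       stream, depending on what the broadcast server actually expects. */
    return system("ffmpeg -i input.mp4 "
                  "-c:v mpeg2video -b:v 8000k -g 15 "
                  "-c:a mp2 -b:a 192k "
                  "-f mpegts output.ts");
}
```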
You can use ffprobe to get information about any AV media file and retain this information for later use:
https://www.ffmpeg.org/ffprobe.html
https://trac.ffmpeg.org/wiki/FFprobeTips
If you have ffmpeg installed on your system, then ffprobe should already be installed as well.
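If you would rather gather that information from your own program instead of parsing ffprobe's output, here is a small illustrative sketch (not from the original answer) using libavformat. av_dump_format() prints roughly the same per-stream summary that ffprobe shows, and the individual fields are available on each stream's codecpar.

```c
#include <libavformat/avformat.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;

    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
        return 1;
    if (avformat_find_stream_info(fmt, NULL) < 0) {
        avformat_close_input(&fmt);
        return 1;
    }

    /* Container name, duration, overall bit rate, and one line per stream. */
    av_dump_format(fmt, 0, argv[1], 0);

    /* Finer-grained properties (codec IDs, dimensions, frame rate, ...) live
       in fmt->streams[i]->codecpar if you need to compare files field by field. */
    avformat_close_input(&fmt);
    return 0;
}
```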
So, assume we have a distribution without proprietary codecs installed.
Let's take Linux Mint, for example. I want to store and play back WAV and Ogg sounds, either using my own software or another developer's software. So far so good, right?
Now imagine the following scenario: for some reason, I want to play back a file that is an MP4, MP3, MPEG, or any other format built on proprietary codecs. Immediately, I will need a codec for those formats.
I read somewhere that Fluendo sells solutions for "legal codec usage" for Linux distros.
Fluendo's URL: http://www.fluendo.com/en/
So here come the questions:
Using VLC and ffmpeg is enough for me to convert a file to Ogg or OGV so that I can play back a song or a video in an open format. They can also play back files made with proprietary codecs. But are VLC and ffmpeg legal to use for playing back such files? For example, are VLC's codecs okay to use for MP4 playback without paying anyone? Is it okay to convert a file from MP4 to OGV?
If not, are there any legal, open-source, and free (as in freedom) codecs around that solve the issue, or does one have to pay the developers of the proprietary codecs for a product in order to be ethically correct?
Note that I am not asking about Windows, since codec licenses are included in the price of the operating system. I am asking exclusively about a free Linux distribution.
Since LordNeckbeard pointed me to the FFmpeg FAQ, which I really can't believe I missed, it became clear to me that there is a problem with using proprietary codecs, so there are some file formats that should be avoided to keep ourselves safe. Otherwise, if someone can afford a license to use them, that is perfectly fine.
So MP3, MP4, MPEG, and some other patented formats are to be avoided if they are not licensed.
ffmpeg can be built to exclude support for such formats, and if you need to add sound or video to your software, Ogg and OGV are nice, efficient formats, as we all know.
Digging a little deeper, I found this too:
https://www.fsf.org/resources/playogg_radiostation.pdf
I want to know if there is a way to convert a regular MP4 to a fragmented MP4 via JavaScript (like MP4Box does). Is it efficient enough (it's not supposed to be a complicated task)? Has anyone written something like this?
To make it harder, can it be done on the fly? Meaning I would not download the whole MP4 from the server, but download it in parts and convert those into fragments compatible with fragmented MP4 and MPEG-DASH. I'm trying to avoid having to keep two different file types to play a video, or having to run MP4Box over my whole library in advance.
Regardless, is it possible to convert H.264-compatible files in different containers (MOV, FLV, etc.) to fragmented MP4 without a server, i.e. do it in the browser with JavaScript somehow?
appreciate the help,
Yug
I am working on something similar (which led me here), but I have no solution so far. However, below is what I've found:
Broadway:
https://github.com/mbebenita/Broadway
The idea is that you write C/C++ code using the FFmpeg source libraries and then use Emscripten to compile that C/C++ code into JavaScript. I haven't started working with this method yet, so I'm not sure whether it will work or not. If you try it, do let me know.
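For what it's worth, the C side of that approach would mostly be a standard libavformat remux loop; the only fragmented-MP4-specific part is asking the mp4 muxer for fragments via its movflags option. The sketch below shows just that piece and assumes the rest of the remuxing code (opening the input, copying stream parameters, writing packets, writing the trailer) is already in place; the function name is mine, not from any library.

```c
#include <libavformat/avformat.h>
#include <libavutil/dict.h>

/* `oc` is an output context created with
   avformat_alloc_output_context2(&oc, NULL, "mp4", filename), with its
   streams already added (avformat_new_stream + avcodec_parameters_copy)
   and its oc->pb opened via avio_open(). */
static int write_fragmented_header(AVFormatContext *oc)
{
    AVDictionary *opts = NULL;
    int ret;

    /* These movflags make the mp4 muxer emit moof/mdat fragments instead of
       one big moov box at the end, which is what MPEG-DASH players expect. */
    av_dict_set(&opts, "movflags",
                "frag_keyframe+empty_moov+default_base_moof", 0);

    ret = avformat_write_header(oc, &opts);
    av_dict_free(&opts);
    return ret;
}
```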
I use Qt and OpenCV to record video, and QAudioInput to record audio into WAV format. I want to combine them into one video file. How can I accomplish this? I have researched a lot but can't seem to find a command that does it.
I use both Windows and Mac.
FYI, this operation seems to be accomplished through the command line in this thread. That approach can be an easy hack, since you can invoke the command from your application using system().
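As a rough illustration of that hack (the file names and codec choices are placeholders, not anything from your setup):

```c
#include <stdlib.h>

int main(void)
{
    /* -c:v copy keeps the recorded video as-is, -c:a aac re-encodes the WAV
       audio, and -shortest stops at the end of the shorter input. */
    return system("ffmpeg -i video.avi -i audio.wav "
                  "-c:v copy -c:a aac -shortest output.mp4");
}
```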
But if you still want to do it programmatically, I suggest you take a look at Dranger's FFmpeg tutorials. They provide eight interesting lessons that show how to do simple things, from taking snapshots of a video, up to more complex tasks like writing a simple video player with audio/video sync.
These tutorials teach how to work independently with audio and video streams, which is what you need to do: read the audio stream from the WAV file and then insert it as the audio stream of a video file.
Maybe not directly related to what you are aiming for, but this answer demonstrates how to use FFmpeg to retrieve the audio stream of one file and play it with SDL, while simultaneously using OpenCV to retrieve video frames and display them in an SDL window.
How can I convert PCM to FLAC using only the FFmpeg API, not the ffmpeg binary? I am looking for a full explanation of the process, not just a solution.
The best way is to look at the source of the ffmpeg binary itself, since it is built on the same FFmpeg APIs. There are also code samples in the source tree under doc/examples. Here's the ffmpeg source code.
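To sketch the shape of the process (closely modeled on the encode_audio example in doc/examples): you pick the FLAC encoder, configure sample format, rate, and channels, open it, then repeatedly fill AVFrames with raw PCM and drain the resulting packets. The snippet below is a condensed, hedged version of that flow; error handling is omitted, the channel-layout calls assume FFmpeg 5.1 or newer, and writing packets straight to a file gives you raw FLAC frames rather than a complete .flac container (for that you would mux through libavformat).

```c
#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>
#include <libavutil/frame.h>

/* Send one frame of PCM (or NULL to flush) and drain all resulting packets. */
static void encode(AVCodecContext *ctx, AVFrame *frame, AVPacket *pkt, FILE *out)
{
    avcodec_send_frame(ctx, frame);
    while (avcodec_receive_packet(ctx, pkt) == 0) {
        /* Raw FLAC frames; a playable .flac file also needs the "fLaC"
           header and STREAMINFO, which the libavformat flac muxer writes. */
        fwrite(pkt->data, 1, pkt->size, out);
        av_packet_unref(pkt);
    }
}

int main(void)
{
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_FLAC);
    AVCodecContext *ctx  = avcodec_alloc_context3(codec);

    ctx->sample_fmt  = AV_SAMPLE_FMT_S16;            /* FLAC takes s16 or s32 */
    ctx->sample_rate = 44100;
    av_channel_layout_default(&ctx->ch_layout, 2);   /* stereo (FFmpeg >= 5.1 API) */

    avcodec_open2(ctx, codec, NULL);

    AVFrame  *frame = av_frame_alloc();
    AVPacket *pkt   = av_packet_alloc();
    frame->nb_samples = ctx->frame_size;             /* set by the encoder on open */
    frame->format     = ctx->sample_fmt;
    av_channel_layout_copy(&frame->ch_layout, &ctx->ch_layout);
    av_frame_get_buffer(frame, 0);

    FILE *out = fopen("out.flac", "wb");

    /* In a real program: loop over your PCM source, copy ctx->frame_size
       interleaved s16 samples into frame->data[0], and call
       encode(ctx, frame, pkt, out) once per filled frame. */

    encode(ctx, NULL, pkt, out);                     /* flush the encoder */

    fclose(out);
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    return 0;
}
```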