Server-side video mixing with ffmpeg

I have a series of video files encoded in MPEG-2 (I can change this encoding), and I have to produce a movie in Flash FLV (this is a requirement; I can't change that encoding).
One destination movie is a compilation of different source video files.
I have a playlist defining the destination movie. For example:
Video file      Position   Offset   Length
little_gnomes   0          0        8.5
fairies         5.23       0.12     12.234
pixies          14         0        9.2
Video file is the name of the file, Position is when the file should start (on the master timeline), Offset is the offset within the source file, and Length is the duration of the video to play. All numbers are in seconds (doubles).
This would result in something like that (final movie timeline):
0--5.23|--8.5|--14|--17.464|--23.2|
little_gnomes **************
fairies *********************
pixies *****************
Where videos overlap, the video added last overrides the earlier one; the audio should be mixed.
The resulting video track would be:
0--5.23|--8.5|--14|--17.464|--23.2|
little_gnomes *******
fairies ***********
pixies *****************
While the resulting audio would be:
0--5.23|--8.5|--14|--17.464|--23.2|
little_gnomes 11111112222222
fairies 222222211112222222222
pixies 22222222221111111
Where 1 or 2 is the number of mixed audio tracks.
There can be a maximum of 3 audio tracks.
I need to write a program which takes the playlist as input and produces the FLV file. I'm open to any solution (it must be free/open source).
An existing tool that can do this would be the simplest option, but I found none. As for building my own solution, I found only ffmpeg; I was able to do basic things with it, but the documentation is terribly lacking.
It can be in any language, and it doesn't have to be fast (if it takes 30 minutes to build a 1-hour movie, that's fine).
The solution will run on OpenSolaris-based x64 servers. If I have to use Linux, that would work too, but Windows is out of the question.
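For what it's worth, the trim/offset/mix logic described above maps fairly directly onto ffmpeg's filter graph. Below is a minimal Python sketch that turns playlist entries into a -filter_complex string; the filter names (trim, setpts, atrim, adelay, amix) are real ffmpeg filters, but the "later clip wins" video overlay step is omitted, and build_filtergraph itself is a made-up helper for illustration:

```python
def build_filtergraph(entries):
    """entries: list of (name, position, offset, length) tuples, seconds."""
    video_chains, audio_chains = [], []
    for i, (_name, position, offset, length) in enumerate(entries):
        # Video: cut [offset, offset+length) out of input i and shift the
        # timestamps so the clip starts at `position` on the master timeline.
        video_chains.append(
            f"[{i}:v]trim=start={offset}:duration={length},"
            f"setpts=PTS-STARTPTS+{position}/TB[v{i}]"
        )
        # Audio: same cut, then delay by `position` (adelay wants ms; a real
        # graph may need one delay value per channel).
        audio_chains.append(
            f"[{i}:a]atrim=start={offset}:duration={length},"
            f"asetpts=PTS-STARTPTS,adelay={int(position * 1000)}[a{i}]"
        )
    n = len(entries)
    # All audio clips are mixed together; the video overlay composition
    # ("last added overrides") is left out here for brevity.
    mix = "".join(f"[a{i}]" for i in range(n)) + f"amix=inputs={n}[aout]"
    return ";".join(video_chains + audio_chains + [mix])

playlist = [("little_gnomes", 0, 0, 8.5),
            ("fairies", 5.23, 0.12, 12.234),
            ("pixies", 14, 0, 9.2)]
graph = build_filtergraph(playlist)
```

The resulting string would be passed to ffmpeg as -filter_complex, with one -i per playlist entry and the FLV output as the target.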

I finally ended up writing my solution from scratch, using the ffmpeg library. It's a lot of boilerplate code, but in the end the logic is not complicated.
I found the MLT framework, which helped me greatly.

Here are two related questions:
Command line video editing tools
https://superuser.com/questions/74028/linux-command-line-tool-for-video-editing
Avisynth sounds as if it might do what you want, but it's Windows-only.
You may very well end up writing your own application using the FFmpeg library. You're right, the documentation could be better... but the tutorial by Stephen Dranger is a good place to start (if you don't know it yet).

Well, if you prefer Java, I've written several similar programs using Xuggler's API.

If your videos / images are already online, you can use the Stupeflix API to create the final videos. You can change the soundtrack, add filters to the video, and much more. Here are the documentation and an online demo: https://developer.stupeflix.com/documentation/

Related

ffmpeg: read the current segmentation file

I'm developing a system that uses ffmpeg to store video from some IP cameras.
I'm using the segment muxer to store a 5-minute video file per camera.
I have a WPF view where I can search historical videos by date. In this case I use ffmpeg's concat feature to generate a video of the desired duration.
All this works very well. My question is: is it possible to concatenate the current segment file? I need, for example, to search from date X up to the current time, but the last file has not been finalized by ffmpeg yet. When I concatenate the files, the last one does not show up because its segment is not finished.
I hope someone can give me some guidance on what I can do.
Some video formats remain playable while they are still being written. That is, you can make a copy of the unfinished segment directly and use it in the merge.
I suggest you use the FLV or TS format for this; MP4 is not suitable, because its index is only written when the file is finalized. Also note that there is a delay between encoding and the data actually being written to disk.
I'm not sure whether a direct copy will leave some broken data at the end of the unfinished segment, but ffmpeg will ignore that part during the merge, so the merged video should be fine.
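A sketch of that idea, assuming TS segments and hypothetical file names: snapshot the still-growing segment, then merge everything with ffmpeg's concat demuxer using stream copy.

```python
import os
import shutil
import tempfile

def concat_segments(finished, unfinished, output):
    """Snapshot the segment ffmpeg is still writing, then build an ffmpeg
    concat-demuxer command that stream-copies all segments into one file."""
    snapshot = unfinished + ".snapshot.ts"
    shutil.copyfile(unfinished, snapshot)  # freeze the growing file
    # The concat demuxer reads a list file with one "file '...'" line each.
    listfile = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
    for seg in list(finished) + [snapshot]:
        listfile.write(f"file '{os.path.abspath(seg)}'\n")
    listfile.close()
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", listfile.name, "-c", "copy", output]
```

The returned list would be run with subprocess.run; -c copy avoids re-encoding, so any trailing broken data in the snapshot is simply dropped during the merge.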

Mixing more than 1000 MP3s sounds bad / Can I use FFmpeg to merge 1000 MP3s?

I am working on a project where everybody has to activate a part of a song. I have about 7000 MP3s, each the same length as the final mix but containing only a small piece of audio. So, for example, you can hear a drum hit at the 15th second and the rest of the MP3 (about 4 min.) is silence.
I use the amix filter to add up all the MP3s, 32 at a time.
In my first test, the first batch of mixed MP3s came out nearly silent (I set the volume on the mix to the number of tracks). The sound quality is also poor after the mix. Can I fix this?
Or do you think this cannot be done with ffmpeg? Do you know an alternative program for it?
Thanks!
B.
If you are using the amix filter, add normalize=0 at the end, before specifying the output file. This makes ffmpeg keep each audio input at its original volume level.
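For example (input names are made up, and the normalize option of amix requires a reasonably recent ffmpeg build):

```python
def amix_command(inputs, output):
    """Build an ffmpeg command that mixes all inputs at their original
    levels: normalize=0 disables amix's default 1/n volume scaling."""
    cmd = ["ffmpeg"]
    for path in inputs:
        cmd += ["-i", path]
    cmd += ["-filter_complex",
            f"amix=inputs={len(inputs)}:normalize=0", output]
    return cmd

cmd = amix_command([f"part{i}.mp3" for i in range(32)], "mix.mp3")
```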

MPEG-DASH not working. MPD validation fails

I am trying to serve video using MPEG-DASH. No success. I have tried the following:
Following the instructions on webproject.org, using FFMPEG, I have created several variants of the original video and the DASH MPD manifest containing the metadata. However, the manifest does not validate using http://dashif.org/conformance.html. This validator itself is quite useless, as it provides unusable info about the error. I found in a post from 2014 that one of the errors generated by FFMPEG is capital letters in some metadata (not critical, but it could have been fixed years ago!). Other errors were detected but not described. No tangible info from any of these other validators either: http://www-itec.uni-klu.ac.at/dash/?page_id=605 (produces rubbish info), https://github.com/Eyevinn/dash-validator-js (throws an exception).
Following the instructions on mozilla.org produces the same non-working result, as the instructions are nearly identical (including the same resolution/bitrate sets), except that Mozilla omits dash.js, which the rest of the internet deems necessary.
This guide on Bitmovin, using x264 and MP4Box, does not work either. Going by the instructions, I have to re-encode the original x264 video twice. The final versions of the videos are in some cases twice the size of their intermediate versions, and the 720p video is actually larger than its 1080p, higher-bitrate counterpart. No need to go further. (Yet this is the only way that actually produced segments...)
I have spent 3 days on the above, read about all there is on the web from the other frustrated adopters, and ran out of options. I would really appreciate some pro tips! Thanks!
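One alternative worth trying: recent ffmpeg builds can write the MPD and the segments themselves via the dash muxer, which sidesteps hand-assembled manifests. A minimal single-rendition sketch (file name, codecs, and bitrate here are illustrative, not a recommendation):

```python
def dash_command(source, out_mpd="manifest.mpd"):
    """Build an ffmpeg invocation that segments one A/V rendition and
    writes the MPD itself via the dash muxer (-f dash)."""
    return ["ffmpeg", "-i", source,
            "-map", "0:v", "-map", "0:a",     # one video + one audio stream
            "-c:v", "libx264", "-b:v", "2000k",
            "-c:a", "aac",
            "-f", "dash", out_mpd]

cmd = dash_command("input.mp4")
```

Multiple renditions would mean repeating the -map/-c:v/-b:v group per variant; playing the result in a browser still needs a player such as dash.js.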

FFMpeg video clipping

I would like to use the ffmpeg APIs (not the command line) to clip videos to a specific range (e.g., given a 1-hour video, create a new video starting at 10 minutes and ending at 30 minutes). Are there any examples of doing this out there?
I have used the apis to stream and record video so I have a bit of background knowledge.
Thanks.
ffmpeg (the command-line tool) is just a frontend to the APIs, with some extras. The whole source of the ffmpeg CLI tool is contained in a single source file, ffmpeg.c. I suggest you take a look at it to see how ffmpeg does it internally.
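While reading ffmpeg.c, it helps to keep the CLI equivalent of the task in mind. A sketch of the 10-to-30-minute cut (the options are standard ffmpeg; the helper function itself is made up for illustration):

```python
def clip_command(source, start_s, end_s, output):
    """Cut [start_s, end_s) from `source`: -ss seeks, -t limits the
    duration, and -c copy avoids re-encoding the streams."""
    return ["ffmpeg", "-ss", str(start_s), "-i", source,
            "-t", str(end_s - start_s), "-c", "copy", output]

cmd = clip_command("movie.mp4", 600, 1800, "clip.mp4")  # minutes 10 to 30
```

The API-level equivalent of this is the seek/read/write loop in ffmpeg.c that the answer above points to.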

How to play multiple mp3/wma files at once?

I need to play multiple sound effects at once in my WP7 app.
I currently have it working with WAV files, which take around 5 MB, instead of 500 KB when encoded as WMA/MP3.
Current part of the code:
Stream stream = TitleContainer.OpenStream(String.Format("/location/{0}.wav", value));
SoundEffect effect = SoundEffect.FromStream(stream);
effect.Play();
This works great in a loop, preparing all effects, and then playing them.
However, I would really like to use mp3/wma/whatever-codec to slim my xap file down.
I tried MediaElement, but it appears you can't use it to play multiple files either. The XNA MediaPlayer also can't be instantiated and, as far as I've seen, can't be made to play multiple files at once.
The only solution I see left is that I somehow decode the mp3 to wav and feed that Stream to SoundEffect.
Any ideas on how to accomplish the multiple playback? Or suggestions on how to decode mp3 to wav?
On the conversion... sorry, but I don't think there's any API currently available for WMA or MP3 decoding.
Also, I don't think there are any MP3, WMA, or Ogg decoder implementations available in pure C# code - all the ones I've seen use DirectShow or P/Invoke - e.g., see C# Audio Library.
I personally do expect audio/video compression/decompression to be available at some point in the near future in the WP7 APIs - but I can't guess when!
For some simple compression you can try things like shipping mono instead of stereo files, or shipping 8 bit rather than 16 bit audio files - these are easy to convert back to 16 bit (with obvious loss of resolution) on the phone.
Using compression like zip might also help for some sound effects... but I wouldn't expect it to be hugely successful.
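The mono/8-bit shrinking suggested above can be done offline with ffmpeg before packaging the .xap. A sketch (file names are illustrative; pcm_u8 is ffmpeg's 8-bit unsigned PCM encoder):

```python
def shrink_wav_command(source, output, rate=22050):
    """Build an ffmpeg command that converts audio to mono 8-bit PCM at
    a lower sample rate, shrinking the WAV shipped in the .xap."""
    return ["ffmpeg", "-i", source,
            "-ac", "1",            # mono instead of stereo
            "-ar", str(rate),      # lower sample rate
            "-acodec", "pcm_u8",   # 8-bit unsigned PCM
            output]

cmd = shrink_wav_command("effect.mp3", "effect.wav")
```

The resulting file stays a plain PCM WAV, so it should still load through SoundEffect.FromStream, at the cost of the resolution loss the answer mentions.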
