MPEG-DASH not working. MPD validation fails - ffmpeg

I am trying to serve video using MPEG-DASH. No success. I have tried the following:
Following the instructions on webproject.org, using FFMPEG, I have created several variants of the original video and the DASH MPD manifest containing the metadata. However, the manifest does not validate at http://dashif.org/conformance.html. The validator itself is of little help, as the information it gives about the errors is unusable. I found in a post from 2014 that one of the errors FFMPEG produces is capital letters in some metadata (not a critical one, but it could have been fixed years ago!). The other errors are detected but not described. No tangible info from the other validators either: http://www-itec.uni-klu.ac.at/dash/?page_id=605 (produces rubbish info), https://github.com/Eyevinn/dash-validator-js (throws an exception).
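For reference, the kind of command those guides arrive at looks roughly like this (a sketch only; the renditions, bitrates and file names are assumptions, while -use_template, -use_timeline and -adaptation_sets are genuine options of ffmpeg's dash muxer):

# two H.264 renditions plus one AAC track, segmented into manifest.mpd + media
ffmpeg -i source.mp4 \
  -map 0:v -map 0:v -map 0:a \
  -c:v libx264 -c:a aac \
  -b:v:0 4500k -s:v:0 1920x1080 \
  -b:v:1 2000k -s:v:1 1280x720 \
  -use_template 1 -use_timeline 1 \
  -adaptation_sets "id=0,streams=v id=1,streams=a" \
  -f dash manifest.mpd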
Following the instructions on mozilla.org produces the same non-working result, as those instructions are nearly identical (including the same resolution/bitrate sets), except that Mozilla omits the use of dash.js, which the rest of the internet deems necessary.
This guide on Bitmovin, using x264 and MP4Box, does not work either. Going by the instructions, I have to re-encode the original x264 video twice. The final versions of the videos are in some cases twice the size of their intermediate versions, and the 720p video is actually larger than its 1080p, higher-bitrate counterpart. No need to go further. (Yet this is the only approach that actually produced segments...)
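For comparison, the MP4Box step from that style of guide boils down to something like this (again a sketch; the file names and the 4-second segment length are assumptions):

# segment pre-encoded MP4s into DASH, cutting only at random access points
MP4Box -dash 4000 -rap -profile dashavc264:onDemand \
  -out manifest.mpd \
  video_1080.mp4 video_720.mp4 audio.mp4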
I have spent 3 days on the above, read just about all there is on the web from the other frustrated adopters, and ran out of options. I would really appreciate some pro tips! Thanks!

Related

Inexperienced with videos and looking for advice for dealing with incorrect avi framerate and possible alternatives

Hi there, I am aiming to record 1 hr videos at 500x375 from a Raspberry Pi (running 64-bit Bullseye) which need to be recorded in such a way that they can survive unexpected program termination or system shutdown.
Currently I am using a bash script utilising libcamera-vid and libav:
libcamera-vid -t $filmDuration --framerate 5 --width 500 --height 375 --nopreview --codec libav --libav-format avi -o "$(date +%Y%m%d_%H%M.avi)" --tuning-file /usr/share/libcamera/ipa/raspberrypi/imx219_noir.json
I initially encoded H.264 as mp4 but found that any interruption of the script would corrupt the file, and I lack the understanding to work around this (though I suspect a method exists). The avi format, on the other hand, seems more robust, so I moved to using it, but I am having a fairly serious issue whereby the file reports the video as running at 600 fps rather than 5.
As far as I can tell this is not actually the case: there has been no loss in video duration, which I would expect if the frames were being condensed. But the machine-learning toolkit (using OpenCV) that these videos are recorded for takes the fps information as part of its analysis, which effectively makes it unable to analyse them.
I am not sure why this is occurring or how to fix it, but any advice would be very welcome, including suggestions for other encoding software, or for ways of recording to mp4 that avoid corruption.
Not resolved as such, but after I opened an issue at the libcamera-apps repo, this behaviour was replicated and confirmed to be unintended.
While a similar issue affecting the mkv format (whose fps was incorrectly reported as 30, according to ffprobe) has been fixed, the issue with avi files incorrectly reporting fps currently has not.
Edit: a new update to libcamera-apps has now fixed the avi issue as well, according to the latest commit.
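Until that fix reaches the packaged builds, one workaround is to pull the video stream out of the mislabelled AVI and re-wrap it at the true rate, with no re-encoding (a sketch; it assumes the AVI really contains H.264 and that the file names are placeholders):

# extract the raw H.264 elementary stream from the AVI
ffmpeg -i capture.avi -c:v copy -f h264 raw.h264
# re-wrap it, telling ffmpeg the real capture rate of 5 fps
ffmpeg -r 5 -i raw.h264 -c:v copy fixed.mp4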

How to make mpv more compatible with ffmpeg filters like minterpolate?

The ffmpeg filter minterpolate (motion interpolation) does not work in mpv.
(The file still plays normally, just without minterpolate applied.)
(I have researched with search engines and throughout the documentation, and troubleshot trying to make use of opengl; I have generally tried everything short of asking for help and learning to understand the source code, and I'm not a programmer...)
--gpu-context=angle --gpu-api=opengl does not make opengl work either. (I'm guessing opengl could help, judging from its use in the documentation.)
Note
To get a full list of available video filters, see --vf=help and
http://ffmpeg.org/ffmpeg-filters.html .
Also, keep in mind that most actual filters are available via the
lavfi wrapper, which gives you access to most of libavfilter's
filters. This includes all filters that have been ported from MPlayer
to libavfilter.
Most builtin filters are deprecated in some ways, unless they're only
available in mpv (such as filters which deal with mpv specifics, or
which are implemented in mpv only).
If a filter is not builtin, the lavfi-bridge will be automatically
tried. This bridge does not support help output, and does not verify
parameters before the filter is actually used. Although the mpv syntax
is rather similar to libavfilter's, it's not the same. (Which means
not everything accepted by vf_lavfi's graph option will be accepted by
--vf.)
You can also prefix the filter name with lavfi- to force the wrapper.
This is helpful if the filter name collides with a deprecated mpv
builtin filter. For example --vf=lavfi-scale=args would use
libavfilter's scale filter over mpv's deprecated builtin one.
I expect mpv to play with minterpolate enabled (one of the several filters mpv can use, listed at http://ffmpeg.org/ffmpeg-filters.html). But this is what happens instead:
Input: "--vf=lavfi=[minterpolate=fps=60000/1001:mi_mode=mci]"
Output:
cplayer: (+) Video --vid=1 (*) (h264 1280x720 29.970fps)
cplayer: (+) Audio --aid=1 (*) (aac 2ch 44100Hz)
vd: Using hardware decoding (d3d11va).
ffmpeg: Impossible to convert between the formats supported by the filter 'mpv_src_in0' and the filter 'auto_scaler_0'
lavfi: failed to configure the filter graph
vf: Disabling filter lavfi.00 because it has failed.
(It is also interesting that --gpu-api=opengl does not work, even though, according to the specification, my (not to brag) HD Graphics 400 Braswell supports OpenGL 4.2... And aresample seems to have no effect either, and with a few of the audio filters selected, playback often neither starts nor outputs errors.)
The problem is that you're using hardware decoding WITHOUT copying the decoded video back to system memory. This means your video filter can't access it. The fix is simple, but that error message makes it very hard to figure out.
To fix this, just pass --hwdec=no. --hwdec=auto-copy also fixes it, but minterpolate in mci mode is so CPU-intensive that there's not much point in also using hardware decoding (for most video sources).
All together:
mpv input.mkv --hwdec=no --vf=lavfi="[minterpolate=fps=60000/1001:mi_mode=mci]"
Explanation: the most efficient hardware decoding modes don't copy the video data back to system memory after decoding, but CPU-based filters need it there. You were asking mpv to do video filtering on frames it couldn't access.
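If you want this setup permanently rather than per invocation, the same two settings can live in mpv.conf (a sketch; the path assumes a typical Linux install):

# ~/.config/mpv/mpv.conf
hwdec=no
vf=lavfi=[minterpolate=fps=60000/1001:mi_mode=mci]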
More details from the mpv docs:
auto-copy selects only modes that copy the video data back to system memory after decoding. This selects modes like vaapi-copy (and so on). If none of these work, hardware decoding is disabled. This mode is usually guaranteed to incur no additional quality loss compared to software decoding (assuming modern codecs and an error free video stream), and will allow CPU processing with video filters. This mode works with all video filters and VOs.
Because these copy the decoded video back to system RAM, they're often less efficient than the direct modes, and may not help too much over software decoding.

How to play multiple mp3/wma files at once?

I need to play multiple sound effects at once in my WP7 app.
I currently have it working with wav files, which take around 5 megabytes, instead of the roughly 500 KB they would take encoded as wma/mp3.
Current part of the code:
Stream stream = TitleContainer.OpenStream(String.Format("/location/{0}.wav", value));
SoundEffect effect = SoundEffect.FromStream(stream);
effect.Play();
This works great in a loop, preparing all effects, and then playing them.
However, I would really like to use mp3/wma/whatever-codec to slim my xap file down.
I tried to use MediaElement, but it appears you can't use that to play multiple files either. The XNA MediaPlayer can't be instantiated, and as far as I have experienced it can't be made to play multiple files at once.
The only solution I see left is that I somehow decode the mp3 to wav and feed that Stream to SoundEffect.
Any ideas on how to accomplish the multiple playback? Or suggestions on how to decode mp3 to wav?
On the conversion... sorry, but I don't think there's any API currently available for WMA or MP3 decoding.
Also, I don't think there are any implementations of MP3, WMA or Ogg decoders available in pure C# code; all the ones I've seen use DirectShow or P/Invoke (e.g. see C# Audio Library).
I personally do expect audio/video compression/decompression to be available at some point in the near future in the WP7 APIs - but I can't guess when!
For some simple compression you can try things like shipping mono instead of stereo files, or shipping 8 bit rather than 16 bit audio files - these are easy to convert back to 16 bit (with obvious loss of resolution) on the phone.
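If you go the mono/8-bit route, producing the slimmed-down assets at build time is a one-liner with a tool like ffmpeg (a sketch; the sample rate and file names are assumptions, and SoundEffect.FromStream accepts 8-bit or 16-bit mono/stereo PCM):

# downmix to mono, resample to 22.05 kHz, store as unsigned 8-bit PCM
ffmpeg -i effect.mp3 -ac 1 -ar 22050 -acodec pcm_u8 effect.wav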
Using compression like zip might also help for some sound effects... but I wouldn't expect it to be hugely successful.

movie atom problem in mp4 conversion

In our project, we convert any given video file into an mp4 file, which works fine when we publish it via our site.
But when we publish the stream link in our iTunes RSS and try to download and play the files in iTunes or QuickTime, we get an error about the movie atom for some of the movies, and those don't play once downloaded to the local machine.
After some research, we found that the problem is in the framerate value; to be more specific, it is related to 32-bit vs. 64-bit value differences, and (as far as we have found) the conversion should be done with the following formula:
newFrameRate = (int(oldFrameRate) + 1) * (1000/1001)
We tried to read the framerate value with ffmpeg and MovieInfo, but the results were always different and not accurate.
What's your suggestion to solve this issue?
Tolga
I found one useful way to solve this problem and wanted to report back.
I installed MP4Box, and used
mp4box -frag 1000
(the fragment length is in milliseconds), which solves all the moov-atom related problems.
I tried other values for the fragmentation, but with larger values the second half of the movie loses its video track and turns white.
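For reference, another common fix for moov-atom placement problems is to remux with ffmpeg and move the atom to the front of the file (a sketch, assuming a plain remux is acceptable in your pipeline):

# copy the streams untouched, rewriting the container with the moov atom first
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4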
FYI,
Tolga

Server side video mixing

I have a series of video files encoded in MPEG-2 (I can change this encoding), and I have to produce a movie in Flash flv (this is a requirement, I can't change that encoding).
One destination movie is a compilation of different source video files.
I have a playlist defining the destination movie. For example:
Video file      Position   Offset   Length
little_gnomes   0          0        8.5
fairies         5.23       0.12     12.234
pixies          14         0        9.2
Video file is the name of the file, position is when the file should start (on the master timeline), offset is the offset within the source file, and length is the length of the video to play. The numbers are seconds (as doubles).
This would result in something like that (final movie timeline):
0--5.23|--8.5|--14|--17.464|--23.2|
little_gnomes **************
fairies *********************
pixies *****************
Where videos overlap, the last video added overrides the earlier one, and the audio should be mixed.
The resulting video track would be:
0--5.23|--8.5|--14|--17.464|--23.2|
little_gnomes *******
fairies ***********
pixies *****************
While the resulting audio would be:
0--5.23|--8.5|--14|--17.464|--23.2|
little_gnomes 11111112222222
fairies 222222211112222222222
pixies 22222222221111111
Where 1 or 2 is the number of audio tracks mixed at that point.
There can be a maximum of 3 audio tracks.
I need to write a program which takes the playlist as input and produce the flv file. I'm open to any solution (must be free/open source).
An existing tool that can do this would be the simplest option, but I found none. As for making my own solution, I found only FFmpeg; I was able to do basic things with it, but the documentation is terribly lacking.
It can be any language, it doesn't have to be super fast (if it takes 30 minutes to build a 1h movie it's fine).
The solution will run on opensolaris based x64 servers. If I have to use linux, this would work too. But windows is out of the question.
I finally ended up writing my own solution from scratch, using the FFmpeg libraries. It's a lot of boilerplate code, but in the end the logic is not complicated.
I found the MLT framework which helped me greatly.
Here are two related questions:
Command line video editing tools
https://superuser.com/questions/74028/linux-command-line-tool-for-video-editing
Avisynth sounds as if it might do what you want, but it's Windows-only.
You may very well end up writing your own application using the FFmpeg library. You're right, the documentation could be better... but the tutorial by Stephen Dranger is a good place to start (if you don't know it yet).
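That said, a reasonably recent ffmpeg can get close to the example playlist from the command line alone, in a single filtergraph (a sketch under assumptions: a 640x480/25fps canvas, stereo audio, and source files named as in the playlist; trim/overlay place the clips on the timeline, adelay/amix do the audio mix):

# black base the length of the full timeline; clips are trimmed, shifted and
# drawn over it in playlist order, so a later clip wins during an overlap
ffmpeg -i little_gnomes.mpg -i fairies.mpg -i pixies.mpg -filter_complex \
"color=black:s=640x480:r=25:d=23.2[base]; \
 [0:v]trim=0:8.5,setpts=PTS-STARTPTS[v0]; \
 [1:v]trim=0.12:12.354,setpts=PTS-STARTPTS+5.23/TB[v1]; \
 [2:v]trim=0:9.2,setpts=PTS-STARTPTS+14/TB[v2]; \
 [base][v0]overlay=eof_action=pass[t0]; \
 [t0][v1]overlay=eof_action=pass:enable='between(t,5.23,17.464)'[t1]; \
 [t1][v2]overlay=eof_action=pass:enable='gte(t,14)'[v]; \
 [0:a]atrim=0:8.5,asetpts=PTS-STARTPTS[a0]; \
 [1:a]atrim=0.12:12.354,asetpts=PTS-STARTPTS,adelay=5230|5230[a1]; \
 [2:a]atrim=0:9.2,asetpts=PTS-STARTPTS,adelay=14000|14000[a2]; \
 [a0][a1][a2]amix=inputs=3[a]" \
-map "[v]" -map "[a]" output.flv

Note that amix scales its inputs down to avoid clipping, so the levels may need to be brought back up afterwards with a volume filter.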
Well, if you prefer Java, I've written several similar programs using Xuggler's API.
If your videos/images are already online, you may use the Stupeflix API to create the final videos. You can change the soundtrack, add filters to the video, and much more. Here are the documentation and an online demo: https://developer.stupeflix.com/documentation/ .
