In our project, we convert any given video file into an MP4 file, which works fine when we publish it via our site.
But when we publish the stream link in our iTunes RSS feed and try to download and play the files in iTunes or QuickTime, we get an error on the movie atom (moov) in some of the movies, and those don't play once they are downloaded to the local machine.
After some research, we found that the problem is in the frame rate value; more specifically, it is related to 32-bit vs. 64-bit value differences. As far as we have found, the conversion should be done with the following formula:
newFrameRate = (int(oldFrameRate) + 1) * (1000 / 1001)
For example, a nominal rate of 29 becomes (29 + 1) * 1000 / 1001 ≈ 29.97 fps, the NTSC rate (note that 1000 / 1001 has to be computed in floating point, or it truncates to zero).
We tried to determine the frame rate value with ffmpeg and movieinfo, but the results were always different and not accurate.
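For what it is worth, ffprobe can report the frame rate as an exact rational rather than a rounded decimal, which may help pin down the 1000/1001 discrepancy (a quick check; input.mp4 is a placeholder):

ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate,avg_frame_rate -of default=noprint_wrappers=1 input.mp4

A stream at the NTSC rate should show up here as 30000/1001 rather than 29.97 or 30.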
What's your suggestion to solve this issue?
Tolga
I found one useful way to solve this problem and wanted to report it.
I installed MP4Box, and used
mp4box -frag 1000
which solves all the moov-atom related problems.
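For anyone trying the same thing, a complete invocation would look roughly like this (file names are placeholders; the -frag value is the fragment length in milliseconds):

MP4Box -frag 1000 -out fragmented.mp4 input.mp4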
I tried other values for fragmentation, but with larger values the second half of the movie loses its video track and turns white.
FYI,
Tolga
Hi there, I am aiming to record 1-hour videos (at 500x375) from a Raspberry Pi (running 64-bit Bullseye), which need to be recorded in such a way that they can endure unexpected program termination or system shutdown.
Currently I am using a bash script utilising libcamera-vid and libav:
libcamera-vid -t $filmDuration --framerate 5 --width 500 --height 375 --nopreview --codec libav --libav-format avi -o "$(date +%Y%m%d_%H%M.avi)" --tuning-file /usr/share/libcamera/ipa/raspberrypi/imx219_noir.json
I initially encoded H.264 into MP4 but found that any interruption of the script would corrupt the file, and I lack the understanding to work around this (though I suspect a method exists). The AVI format, on the other hand, seems more robust, so I moved to using it, but I am having a fairly serious issue whereby the file appears to think the video is running at 600 fps rather than 5.
As far as I can tell this is not the case, and there has been no loss in video duration as I would expect if frames were being condensed, but the machine learning toolkit (utilising OpenCV) these videos are recorded for takes the fps information as part of its novel video analysis, effectively making it unable to analyse them.
I am not sure why exactly this is occurring or how to fix it, but any advice would be very welcome, including suggestions for other encoding software or for ways of recording to MP4 that avoid corruption.
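One approach that is often suggested for interruption-tolerant MP4 recording is to pipe the raw H.264 stream into ffmpeg and write a fragmented MP4, so the index is not written only once at the end of the recording. A rough, untested sketch along those lines (the ffmpeg flags are assumptions, not something from the script above):

# pipe raw H.264 from libcamera-vid into ffmpeg and mux it as a fragmented MP4
libcamera-vid -t $filmDuration --framerate 5 --width 500 --height 375 --nopreview --codec h264 \
  --tuning-file /usr/share/libcamera/ipa/raspberrypi/imx219_noir.json -o - | \
  ffmpeg -framerate 5 -f h264 -i - -c copy -movflags +frag_keyframe+empty_moov "$(date +%Y%m%d_%H%M).mp4"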
Not resolved as such, but after opening an issue at the libcamera-apps repo this behaviour has been replicated and confirmed to be unintended.
While a similar issue that was affecting the MKV format incorrectly reporting its fps (as 30, according to ffprobe) has been fixed, the issue with AVI files incorrectly reporting fps has not been fixed yet.
Edit: a new update to libcamera-apps has now fixed the AVI issue as well, according to the latest commit.
I have a video in fragmented form which is an output of an Azure Media Services Live Event (Smooth Streaming).
I'm trying to concatenate the segments to get a single MP4 file; however, I've run into an A/V sync problem: no matter what I do (time-shifting, speeding up, slowing down, using FFmpeg filters), the audio delay keeps drifting. To get the output MP4 file, I tried concatenating the segments of the video and audio streams (both at the OS file level and with FFmpeg) and then muxing them with FFmpeg.
I've tried everything I found on the web and I always end up with exactly the same result. What's important: when I play the source from the manifest file, it's all good. That made me skim through the manifest once again, and I realized there's a CodecPrivateData value which I'm not using anywhere in the process. What is it? Could it somehow help solve my problem?
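For reference, the final mux step described above can be expressed like this (a sketch with placeholder file names; it does not by itself fix the drifting delay):

ffmpeg -i video_concat.mp4 -i audio_concat.mp4 -c copy -map 0:v:0 -map 1:a:0 muxed.mp4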
Mystery solved: the manifest file contains the list of stream discontinuities, which need to be taken into account when concatenating the streams.
I'm developing a system using ffmpeg to store video from some IP cameras.
I'm using the segmentation command to store a five-minute video per camera.
I have a WPF view where I can search historical videos by date. In this case I use the ffmpeg concat command to generate a video of the desired duration.
All of this works excellently. My question is: is it possible to concatenate the current file of the segmentation? For example, I need to make a search from date X to the current time, but the last file has not been generated yet by ffmpeg, so when I concatenate the files, the last one does not show up because its segment is not finished.
I hope someone can give me some guidance on what I can do.
Some video formats remain playable while they are still being written. That is, you can make a copy of the unfinished segment directly and use it in the merge.
I suggest you use the FLV or TS format for this; MP4 does not support it. Also note that there is a delay between encoding and the data actually being written to disk.
I'm not sure whether a direct copy will cause some data problems at the end of the segment file, but ffmpeg will ignore that part of the data during the merge, so the merged video should be fine.
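A rough sketch of the whole flow under those assumptions (TS segments; the camera URL and file names are placeholders):

# record 5-minute MPEG-TS segments with timestamped names
ffmpeg -rtsp_transport tcp -i rtsp://camera/stream -c copy -f segment -segment_time 300 -reset_timestamps 1 -strftime 1 cam1_%Y%m%d_%H%M%S.ts

# copy the still-growing segment, then merge everything with the concat demuxer
cp cam1_20240101_1200.ts cam1_partial.ts
printf "file '%s'\n" cam1_20240101_1150.ts cam1_20240101_1155.ts cam1_partial.ts > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy merged.ts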
I am trying to serve video using MPEG-DASH. No success. I have tried the following:
Following the instructions on webproject.org, using FFmpeg, I have created several variants of the original video and the DASH MPD manifest containing the metadata. However, the manifest does not validate at http://dashif.org/conformance.html. The validator itself is quite useless, as it provides unusable information about the errors. I found in a post from 2014 that one of the errors generated by FFmpeg is capital letters in some metadata (not a critical one, but it could have been fixed years ago!). Other errors are detected but not described. No tangible information from any of the other validators either: http://www-itec.uni-klu.ac.at/dash/?page_id=605 (produces rubbish output), https://github.com/Eyevinn/dash-validator-js (throws an exception).
Following the instructions on mozilla.org produces the same non-working result, as the instructions are nearly identical (including the same resolution/bitrate sets), except that Mozilla omits the use of dash.js, which the rest of the internet deems necessary.
This guide on Bitmovin, using x264 and MP4Box, does not work either. Going by the instructions, I have to re-encode the original x264 video twice. The final versions of the videos are in some cases twice the size of their intermediate versions, and the 720p video is actually larger than its 1080p, higher-bitrate counterpart. No need to go further. (Yet, this is the only way that actually produced segments.)
I have spent 3 days on the above, read about all there is on the web from other frustrated adopters, and have run out of options. I would really appreciate some pro tips! Thanks!
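For the record, the MP4Box step from guides of this kind boils down to something like the following once the renditions exist (placeholder file names; the -dash value is the segment duration in milliseconds):

MP4Box -dash 4000 -rap -frag-rap -profile dashavc264:onDemand -out stream.mpd \
  video_1080p.mp4#video video_720p.mp4#video audio.mp4#audio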
I have a series of video files encoded in MPEG-2 (I can change this encoding), and I have to produce a movie in Flash FLV (this is a requirement; I can't change that encoding).
One destination movie is a compilation of different source video files.
I have a playlist defining the destination movie. For example:
Video file      Position   Offset   Length
little_gnomes   0          0        8.5
fairies         5.23       0.12     12.234
pixies          14         0        9.2
Video file is the name of the file, position is when the file should start (on the master timeline), offset is the offset within the video file, and length is the length of the video to play. The numbers are seconds (doubles).
This would result in something like the following (final movie timeline):
0--5.23|--8.5|--14|--17.464|--23.2|
little_gnomes **************
fairies *********************
pixies *****************
Where videos overlap, the last video added overrides the earlier one, while the audio should be mixed.
The resulting video track would be:
0--5.23|--8.5|--14|--17.464|--23.2|
little_gnomes *******
fairies ***********
pixies *****************
While the resulting audio would be:
0--5.23|--8.5|--14|--17.464|--23.2|
little_gnomes 11111112222222
fairies 222222211112222222222
pixies 22222222221111111
Where 1 or 2 is the number of mixed audio tracks.
There can be a maximum of 3 audio tracks.
I need to write a program which takes the playlist as input and produces the FLV file. I'm open to any solution (it must be free/open source).
An existing tool that can do this would be the simplest option, but I have found none. As for making my own solution, I found only ffmpeg; I was able to do basic things with it, but the documentation is terribly lacking.
It can be in any language, and it doesn't have to be super fast (if it takes 30 minutes to build a 1-hour movie, that's fine).
The solution will run on OpenSolaris-based x64 servers. If I have to use Linux, that would work too, but Windows is out of the question.
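For illustration, the overlap and mixing in the example above could be expressed with ffmpeg's overlay/adelay/amix filters along these lines (an untested sketch; the resolution, frame rate, and codecs are placeholders, and the audio delays assume stereo sources):

# black base covering the full 23.2 s timeline; each clip is shifted to its
# position, later clips overlay earlier ones, and all audio tracks are mixed
ffmpeg \
  -ss 0    -t 8.5    -i little_gnomes.mpg \
  -ss 0.12 -t 12.234 -i fairies.mpg \
  -ss 0    -t 9.2    -i pixies.mpg \
  -filter_complex "\
color=c=black:s=720x576:r=25:d=23.2[base];\
[0:v]setpts=PTS-STARTPTS[v0];\
[1:v]setpts=PTS-STARTPTS+5.23/TB[v1];\
[2:v]setpts=PTS-STARTPTS+14/TB[v2];\
[base][v0]overlay=eof_action=pass[t1];\
[t1][v1]overlay=eof_action=pass:enable='gte(t,5.23)'[t2];\
[t2][v2]overlay=eof_action=pass:enable='gte(t,14)'[vout];\
[1:a]adelay=5230|5230[a1];\
[2:a]adelay=14000|14000[a2];\
[0:a][a1][a2]amix=inputs=3:duration=longest[aout]" \
  -map "[vout]" -map "[aout]" -c:v flv -c:a libmp3lame -ar 44100 output.flv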
I finally ended up writing my solution from scratch, using the ffmpeg libraries. It's a lot of boilerplate code, but in the end the logic is not complicated.
I found the MLT framework which helped me greatly.
Here are two related questions:
Command line video editing tools
https://superuser.com/questions/74028/linux-command-line-tool-for-video-editing
Avisynth sounds as if it might do what you want, but it's Windows-only.
You may very well end up writing your own application using the FFmpeg library. You're right, the documentation could be better... but the tutorial by Stephen Dranger is a good place to start (if you don't know it yet).
Well, if you prefer Java, I've written several similar programs using Xuggler's API.
If your videos/images are already online, you may use the Stupeflix API to create the final videos. You can change the soundtrack, add filters to the video, and much more. Here are the documentation and an online demo: https://developer.stupeflix.com/documentation/