I am using ffmpeg to save live streams from an .m3u8 URL. I regularly see the following message, and when it appears the output video freezes.
skipping 5 segments ahead, expired from playlists
How can I tell ffmpeg to just write the frames and ignore that they are expired? I would rather see a choppy video than have it just freeze.
Old question, but WTH... maybe it helps somebody.
If I understand the situation correctly, that message means that ffmpeg is skipping the download of 5 entire chunks ("segments").
It's not about "expired frames", but "chunks that I have scheduled for download, but are no longer published on the playlists".
5 chunks/segments may be several seconds long, and not just 5 frames. That's why you see a freeze.
Perhaps you could try using some filter on the input so that the output muxer can fill the segment gap. Take a look at the overlay filter, for example: https://ffmpeg.org/ffmpeg-filters.html#Examples-82
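For what it's worth, here is a hedged sketch of the same idea using the fps filter rather than overlay (the stream URL and target frame rate are placeholders, not from the question): forcing a constant output frame rate makes ffmpeg duplicate the last decoded frame across the missing segments instead of leaving a hole in the timestamps.

# stream URL and frame rate are assumptions; re-encoding is required for the filter to take effect
ffmpeg -i https://example.com/live/stream.m3u8 -vf fps=30 -c:v libx264 -c:a aac recording.mp4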
Hi there, I am aiming to record 1 hr videos (at 500x375) from a Raspberry Pi (running 64-bit Bullseye), which need to be recorded in such a way that they can survive unexpected program termination or system shutdown.
Currently I am using a bash script utilising libcamera-vid and libav:
libcamera-vid -t $filmDuration --framerate 5 --width 500 --height 375 --nopreview --codec libav --libav-format avi -o "$(date +%Y%m%d_%H%M.avi)" --tuning-file /usr/share/libcamera/ipa/raspberrypi/imx219_noir.json
I initially encoded H.264 as mp4 but found that any interruption of the script would corrupt the file, and I lack the understanding to work around this (though I suspect a method exists). The avi format, on the other hand, seems more robust, so I moved to it, but I am having a fairly serious issue whereby the file appears to think the video is running at 600 fps rather than 5.
As far as I can tell this is not actually the case, and there has been no loss in video duration, which I would expect if the frames were being condensed. However, the machine learning toolkit (using OpenCV) that these videos are recorded for takes the fps information as part of its novel video analysis, effectively making it unable to analyse them.
I am not sure why exactly this is occurring or how to fix it, but any advice would be very welcome, including suggestions for other encoding software or for ways of recording to mp4 that avoid corruption.
Not resolved as such, but after opening an issue at the libcamera-apps repo, this behaviour has been replicated and confirmed to be unintended.
While a similar issue affecting the mkv format, which incorrectly reported its fps (as 30, according to ffprobe), has been fixed, the issue with avi files incorrectly reporting fps currently has not.
Edit: a new update to libcamera-apps has now fixed the avi issue as well, according to the latest commit.
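On the original mp4-corruption point: a hedged sketch of one possible workaround (an untested assumption, not something from the issue thread) is to pipe raw H.264 from libcamera-vid into ffmpeg and write fragmented MP4, which remains playable if the process is killed mid-recording. Resolution, frame rate and tuning file follow the original command; the rest is assumed.

# '-o -' sends raw H.264 to stdout; fragmented MP4 keeps the file readable after an abrupt stop
libcamera-vid -t $filmDuration --framerate 5 --width 500 --height 375 --nopreview --codec h264 --tuning-file /usr/share/libcamera/ipa/raspberrypi/imx219_noir.json -o - | ffmpeg -framerate 5 -i - -c copy -movflags +frag_keyframe+empty_moov "$(date +%Y%m%d_%H%M.mp4)"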
I'm using the 'vlc/ffmpeg' package to grab the screen and convert it to an H.264 file.
The problem arises when the host is heavily loaded. I need to maintain correct time stamps and I use 5 fps (a relatively low frame rate). Yet sometimes the resulting file jumps a few seconds forward, apparently due to frame loss.
I can deal with the frame loss, it's OK, but I need to duplicate lost frames to maintain correct timing.
My command line:
vlc.exe screen:// -I dummy --verbose=2 --one-instance :screen-fps=5 :screen-caching=10000 :sout=#transcode{venc=x264{preset=ultrafast,tune=zerolatency},vcodec=h264,fps=5,vb=3000,width=1024,height=576,acodec=none}:file{dst="C:\tmp\output.mp4"}
What should I add or configure to preserve proper time stamps and clip duration?
Many thanks for your help.
OK, I found that adding the 'copyts' option does exactly what I need.
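For anyone trying this with ffmpeg directly rather than through VLC: -copyts is an output option, so it goes after the input. A minimal hedged sketch (the file names are placeholders, and whether this alone is enough will depend on your capture setup):

# placeholder file names; -copyts keeps the original timestamps instead of regenerating them
ffmpeg -i capture.mp4 -copyts -c:v libx264 output.mp4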
The original video from the 7160 Capture card displayed fine in the Honestech HD DVR software that is included.
However, when the card was captured using ffmpeg and published out, this error occurred after ffmpeg had been running for a while:
real-time buffer [7160 HD Capture] video input too full or near too full ...
I have already set -rtbufsize 2000M, which is nearly the maximum that is allowed and cannot be increased further.
Please tell me how to resolve this bug or give me an example that can be used without producing it. Thank you very much. You do not need the code that I used, because almost any code, even the simplest I tried, produced this error after running for a while. The published video also lags and loses frames.
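For reference, -rtbufsize is an input option for the dshow device, so it has to appear before -i. A hedged sketch of the kind of command this message typically comes from (the device name is taken from the error text; the encoder settings and output URL are placeholders):

rem device name from the error message; encoder settings and RTMP URL are assumptions
ffmpeg -f dshow -rtbufsize 2000M -i video="7160 HD Capture" -c:v libx264 -preset ultrafast -f flv rtmp://example.com/live/stream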
I am streaming short videos (4 or 5 seconds) encoded in H264 at 15 fps in VGA quality from different clients to a server using RTMP, which produces an FLV file. I need to analyse the frames from the video as images as soon as possible, so I need the frames to be written as PNG images as they are received.
Currently I use Wowza to receive the streams, and I have tried using the transcoder API to access the individual frames and write them to PNGs. This partially works, but there is about a second of delay before the transcoder starts processing, and when the stream ends Wowza flushes its buffers, causing the last second not to get transcoded, which means I can lose the last 25% of the video frames. I have tried to find a workaround, but Wowza say that it is not possible to prevent the buffer getting flushed. It is also not the ideal solution because there is a 1 second delay before I start getting frames, and I have to re-encode the video when using the transcoder, which is computationally expensive and unnecessary for my needs.
I have also tried piping a video in real-time to FFmpeg and getting it to produce the PNG images but unfortunately it waits until it receives the entire video before producing the PNG frames.
How can I extract all of the frames from the stream as close to real-time as possible? I don’t mind what language or technology is used as long as it can run on a Linux server. I would be happy to use FFmpeg if I can find a way to get it to write the images while it is still receiving the video or even Wowza if I can find a way not to lose frames and not to re-encode.
Thanks for any help or suggestions.
Since you linked this question from the red5 user list, I'll add my two cents. You may certainly grab the video frames on the server side, but the issue you'll run into is transcoding from h.264 into PNG. The easiest way would be to use ffmpeg / avconv after getting the VideoData object. Here is a post that gives some details about getting the VideoData: http://red5.5842.n7.nabble.com/Snapshot-Image-from-VideoData-td44603.html
Another option is on the player side using one of Dan Rossi's FlowPlayer plugins: http://flowplayer.electroteque.org/snapshot
I finally found a way to do this with FFmpeg. The trick was to disable audio, use a different flv meta data analyser and to reduce the duration that FFmpeg waits for before processing. My FFmpeg command now starts like this:
ffmpeg -an -flv_metadata 1 -analyzeduration 1 ...
This starts producing frames within a second of receiving input from a pipe, so it writes the streamed frames pretty close to real-time.
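For completeness, a hedged sketch of what a full command along those lines might look like (the pipe input and the output filename pattern are assumptions, not the poster's exact command):

# pipe:0 reads the FLV stream from stdin; frames are written out as numbered PNGs as they arrive
ffmpeg -an -flv_metadata 1 -analyzeduration 1 -f flv -i pipe:0 -f image2 frame_%05d.png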
I hope someone can give me a pointer. I have a php script that runs the command below to record a live radio mp3 stream, creating hour-long mp3 recordings. It works very well for my purpose. The only issue is that occasionally no recording is made; as far as I can tell, it's because the stream has dropped out and ffmpeg just aborts.
/usr/local/bin/ffmpeg -i http://www.mystream.com:8000/radiostream.mp3 -t 60:00 -acodec copy /var/www/mydomain/audio/".$recorded_audio_title;
So my question: is there any way to tell ffmpeg to continuously record for the 60:00 minutes and produce a recording even if there are dropouts? I'd be happy with the odd bit of silence provided it completes the recording.
I hope this makes sense, and I'd appreciate even a pointer to an FFmpeg option or flag. Having Googled, I haven't seen anything that would fit the bill.
Many thanks in advance
rob
Assuming you need to record files of exactly 60 minutes, padding the dropped stream time with silence:
FFmpeg doesn't have such an explicit option, but you can simulate it. Prepare a 60-minute mp3 of silence. Record your stream. When the recording finishes, check its duration. If it's shorter than 60 minutes, join your recording with the silence file and specify that the final duration should be 60 minutes (a sketch follows after the edit below). Joining is described here.
EDIT:
To continue recording, you just need to check the duration of your previous recording and, if it's too short, run the same FFmpeg command again with a different file name, then join the two files. Loop this until you have a 60-minute file.
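A hedged sketch of the padding step (the sample rate, channel layout and file names are assumptions): the first command generates an hour of silent mp3, the second appends it to a short recording and cuts the result at exactly 60 minutes.

# assumed 44.1 kHz stereo silence, encoded once and reused
ffmpeg -f lavfi -i anullsrc=r=44100:cl=stereo -t 3600 -c:a libmp3lame silence.mp3
# placeholder file names; -t 3600 trims the joined file to exactly one hour
ffmpeg -i "concat:short_recording.mp3|silence.mp3" -t 3600 -acodec copy padded_recording.mp3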
Try looking at ffmpeg's segment option to split your audio recording into 60-minute chunks: ffmpeg documentation
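A hedged sketch of that approach (the stream URL is copied from the question; the output pattern is a placeholder): the segment muxer starts a new file every hour while -acodec copy keeps the original mp3 data untouched.

# output pattern is an assumption; the %03d counter is required by the segment muxer
ffmpeg -i http://www.mystream.com:8000/radiostream.mp3 -f segment -segment_time 3600 -acodec copy /var/www/mydomain/audio/recording_%03d.mp3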