ffmpeg error: av_interleaved_write_frame(): No space left on device

I am recording the screen using ffmpeg with the following command:
ffmpeg -f alsa -f x11grab -i :0.0+0,0 -framerate 30 -crf 30 -video_size 400x400 output.mp4
When disk space runs low, ffmpeg throws an av_interleaved_write_frame(): No space left on device error, and when I open the recorded file I get the error This file contains no playable streams.
Is it possible to make the video file playable?

Not with MP4, no. MP4 requires the file to be closed at the end so the moov box can be written out. This box contains information required to play the file, and due to its structure it cannot be written until the very end. You can use a different container, such as MKV or FLV, whose initialization structures can be written and updated mid-stream.
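A sketch of that safer workflow, reusing the capture settings from the question (device names and sizes may need adjusting for your setup):

```shell
# Record to Matroska, which stays readable even if writing is cut short:
ffmpeg -f x11grab -framerate 30 -video_size 400x400 -i :0.0+0,0 -crf 30 output.mkv

# Once recording has finished cleanly, remux losslessly to MP4 if needed:
ffmpeg -i output.mkv -c copy output.mp4
```

The remux step copies the streams without re-encoding, so it is fast and lossless.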

In your case, yes. Recently I also ended up with a similar non-playable file due to low disk space. Try this free tool to fix it up:
https://www.videohelp.com/software/recover-mp4-to-h264
Usage:
recover_mp4.exe reference_file.mp4 --analyze
recover_mp4.exe broken.mp4 repaired.h264 [audio.aac | audio.wav | audio.mp3]

Related

How to include a stream in ffmpeg output? (Output file #0 does not contain any stream)

This question has been asked many times and I have tried most of the proposed solutions to no avail.
With this command:
ffmpeg -loop 1 -i dummy.jpg -t 10 -pix_fmt yuv420p output.mp4
I can create a "dummy" .mp4 file that lasts 10 seconds with a fixed image (dummy.jpg).
This mp4 file plays fine in "QuickTime Player".
However, running ffmpeg -v error output.mp4 -f null outputs:
Output file #0 does not contain any stream
The Web Audio API (AudioContext.decodeAudioData) can't decode this file and the cause is probably the error reported by ffmpeg.
How can I include a stream in output.mp4?
Cheers!
Your first command is fine. The second command is missing the input flag -i and is also missing an output name, which is required syntax when using the null muxer:
ffmpeg -v error -i output.mp4 -f null -
The - is just a placeholder: no file named - is actually being written.
I can't comment on the Web Audio API, since I'm not familiar with it, but note that the file has no audio stream, if that makes a difference.
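If the missing audio stream is indeed what decodeAudioData is choking on, one option (a sketch, not tested against the Web Audio API) is to mux in a silent track with ffmpeg's lavfi anullsrc source:

```shell
# Same slideshow command as in the question, plus a silent stereo AAC track:
ffmpeg -loop 1 -i dummy.jpg -f lavfi -i anullsrc=r=44100:cl=stereo \
       -t 10 -pix_fmt yuv420p -c:a aac -shortest output.mp4
```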

Downloading a Few Seconds of Audio from a Youtube Video with ffmpeg and youtube-dl Results in [youtube]: No such file or directory error

Below is the command that I wrote:
ffmpeg -ss 60 -t 10 -i $(youtube-dl -f 140 https://www.youtube.com/watch?v=kDCk3hLIVXo) output.mp3
This is supposed to get 10 seconds of audio starting at the one minute mark of the video and write it to output.mp3. If I run the youtube-dl command separately, and then the ffmpeg command with the entire video audio as input, it works. But, I do not want to download the entire video as well as create a new file with only a few seconds of audio.
In its current state, I am getting [youtube]: No such file or directory errors. Does anyone know how I can fix this and keep it in one line?
The issue is that the output returned from youtube-dl is several lines of information, so ffmpeg doesn't know how to deal with it properly.
You'll want to pass only the actual media URL without any other information; youtube-dl's -g option does this, and a tool like awk or sed can help turn it into an argument. In addition, an encoding step is needed at the end so the audio stream gets encoded into the output file (libmp3lame -> MP3).
Example:
ffmpeg -ss 60 -t 10 $(youtube-dl -f 140 -g https://www.youtube.com/watch?v=kDCk3hLIVXo | \
sed "s/.*/-i &/") -c:v copy -codec:a libmp3lame output.mp3
This command should produce an MP3 audio file starting 60 seconds into the video and 10 seconds in duration.
Result:
output.mp3: Audio file with ID3 version 2.4.0, contains: MPEG ADTS, layer III, v1, 64 kbps, 44.1 kHz, Stereo
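To see what the sed expression in the command above does in isolation: it prefixes each line of youtube-dl -g output (a bare URL) with "-i " so the substitution result can be spliced into the ffmpeg command. The URL below is a hypothetical stand-in:

```shell
# Turn a bare URL line into an ffmpeg input argument, as the answer's
# command substitution does:
url="https://example.com/stream.m4a"
args=$(printf '%s\n' "$url" | sed "s/.*/-i &/")
echo "$args"
```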

Using ffmpeg to convert an SEC file

I need to convert an SEC file into any video format that I can share and/or upload to Youtube. MP4, etc.
I'm a complete newbie at all things terminal. I've tried:
ffmpeg -i video.sec video.mp4
ffmpeg -i video.sec -bsf:v h264_mp4toannexb -c:v copy video.avi
ffmpeg -i video.sec -b 256k -vcodec h264 -acodec aac video.mp4
I don't understand what any of these mean, they're just examples I found online. However, whatever I try returns this error:
Invalid data found when processing input
Any thoughts? Thanks!
I had to add the following option so it would skip the SEC's custom header.
-skip_initial_bytes 48
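Putting that option together with a full command (a sketch; 48 bytes matched this particular SEC header, other recorders may use a different header size):

```shell
# Skip the proprietary 48-byte header, read the raw H.264 stream,
# and re-encode into a shareable MP4:
ffmpeg -skip_initial_bytes 48 -f h264 -i video.sec -c:v libx264 video.mp4
```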
I know this is old, but I was trying to figure this out as well, and what ended up finally working for me was this command:
./ffmpeg -f h264 -i INPUT.sec -filter:v "setpts=4*PTS" OUTPUT.avi
The -f h264 was the part I was missing, and the -filter:v "setpts=4*PTS" part slows playback back down to the original speed. You can also change the .avi at the end to whichever format works best for you.
I hope this helps someone out :)
OK, just to clear up some recent threads…
The Samsung DVR used here was an SRD-440. RB kindly sent me a file to test: a .BU file with an associated .db2 file. This was a bit of a surprise, as in all older Samsung DVRs the .bu files can only be played back on the DVR. I mentioned this here: https://spreadys.wordpress.com/2014/07/21/ifsec-samsung-exports/
It appears that Samsung have caught on, and the BU file is now playable due to it being an H264/AVC stream conforming to a standard profile. I have updated the IFSEC post mentioned above to highlight this change.
Back to RB’s stream and the challenge was to get these files viewable in WMV format. They were all field based, at 704×288.
The speed of playback is controlled by the Samsung software, using the .db2 file. As such, the metadata and timing information in the video stream was wrong. This caused speed issues and then quality issues when attempting to correct this.
As a result, I found it necessary to force an input rate and generate a new Presentation Time Stamp BEFORE the input file.
The following FFmpeg string did the job…
ffmpeg -r 12 -fflags genpts -i FILE.bu -vf scale=704:528 -sws_flags lanczos -q:v 2 FILE.wmv
Remember, this is for preview – analysis would be completed differently due to the scaling, the interpolation method, and the WMV compression!
As it's likely that RB may have quite a few .bu files in a folder, I placed this into a batch file to transcode the whole lot within a few minutes… more on batch files coming in a new post soon!
https://spreadys.wordpress.com/2014/07/21/ifsec-samsung-exports/
or
ffmpeg -i (name of file).sec (name of final file).mp4
ffmpeg -i (name of file).sec -filter:v "setpts=3.3*PTS" (name of final_file).mp4

ffmpeg command is throwing an error

I am new to ffmpeg.
I am trying to merge two video sources:
1. An iBall USB camera
2. A screen-capture utility named UScreenCapture
I am using the command below at the Windows command prompt.
ffmpeg -f dshow -i video="iBall Face2Face Webcam C12.0" -f dshow -i video="UScreenCapture" -r 25 -vcodec mpeg4 -q 12 -f mpegts test.ts
This command captures only from the UScreenCapture source.
While grabbing frames from the camera it gives an error saying:
real-time buffer 90% full! frame dropped!
real-time buffer 121% full! frame dropped!
Can any one provide me the solution for this issue?
Looks like you need the ffmpeg -map option:
"Designate one or more input streams as a source for the output file"
FFMPEG "-map" Documentation
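A sketch of what that could look like for the command in the question, explicitly mapping the video stream from each dshow input into the output (device names are taken from the question; -map 0:v selects input 0's video, -map 1:v input 1's):

```shell
ffmpeg -f dshow -i video="iBall Face2Face Webcam C12.0" \
       -f dshow -i video="UScreenCapture" \
       -map 0:v -map 1:v -r 25 -vcodec mpeg4 -q 12 -f mpegts test.ts
```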

Using FFmpeg to continuously stream video files to an RTMP server

ffmpeg handles RTMP streaming as input or output, and it's working well.
I want to stream some videos (a dynamic playlist managed by a Python script) to an RTMP server. I'm currently doing something quite simple: streaming my videos one by one with FFmpeg to the RTMP server, but this causes a connection break every time a video ends, and the stream only becomes ready again when the next video begins.
I would like to stream those videos continuously, without any connection breaks, so the stream can be viewed correctly.
I use this command to stream my videos one by one to the server
ffmpeg -re -y -i myvideo.mp4 -vcodec libx264 -b:v 600k -r 25 -s 640x360 \
-filter:v yadif -ab 64k -ac 1 -ar 44100 -f flv \
"rtmp://mystreamingserver/app/streamName"
I looked for workarounds on the internet for many days, and I found some people talking about using a named pipe as input to ffmpeg. I tried it, and it didn't work well, since ffmpeg not only closes the RTMP stream when a new video comes but also exits.
Is there any way to do this (stream a dynamic playlist of videos with ffmpeg to an RTMP server without connection breaks)?
Update (as I can't delete the accepted answer): the proper solution is to implement a custom demuxer, similar to the concat one. There's currently no other clean way. You have to get your hands dirty and code!
Below is an ugly hack. This is a very bad way to do it, just don't!
The solution uses the concat demuxer and assumes all your source media files use the same codec. The example is based on MPEG-TS but the same can be done for RTMP.
Make a playlist file holding a huge list of entry points for your dynamic playlist, with the following format:
file 'item_1.ts'
file 'item_2.ts'
file 'item_3.ts'
[...]
file 'item_[ENOUGH_FOR_A_LIFETIME].ts'
These files are just placeholders.
Make a script that keeps track of your current playlist index and creates symbolic links on the fly for current_index + 1:
ln -s /path/to/what/to/play/next.ts item_1.ts
ln -s /path/to/what/to/play/next.ts item_2.ts
ln -s /path/to/what/to/play/next.ts item_3.ts
[...]
Start playing
ffmpeg -f concat -i playlist.txt -c copy -f mpegts udp://<ip>:<port>
Get chased and called names by an angry system administrator
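The playlist-generation and symlink steps above can be sketched as a small script (file names are hypothetical, and a plain touched file stands in for real media):

```shell
#!/bin/sh
# 1. Generate a playlist with "enough for a lifetime" placeholder entries.
: > playlist.txt
i=1
while [ "$i" -le 100 ]; do
    printf "file 'item_%d.ts'\n" "$i" >> playlist.txt
    i=$((i + 1))
done

# 2. When the controlling script decides what plays next, it points the
#    next placeholder at the real media file via a symlink.
touch next_clip.ts                    # stands in for a real transport stream
ln -sf "$PWD/next_clip.ts" item_1.ts
```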
You need to create two playlist files, and at the end of each file specify a link to the other file.
list_1.txt
ffconcat version 1.0
file 'item_1.mp4'
file 'list_2.txt'
list_2.txt
ffconcat version 1.0
file 'item_2.mp4'
file 'list_1.txt'
Now all you need is to dynamically change the contents of the next playlist file.
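Rewriting the "other" list while the current one plays can be sketched like this (pick_next_track is a hypothetical stand-in for whatever playlist logic chooses the next clip):

```shell
#!/bin/sh
# Hypothetical playlist logic: returns the next file to play.
pick_next_track() { echo "item_3.mp4"; }

# Regenerate list_2.txt so it plays the chosen clip and then hands
# control back to list_1.txt, keeping the ping-pong going.
next=$(pick_next_track)
cat > list_2.txt <<EOF
ffconcat version 1.0
file '$next'
file 'list_1.txt'
EOF
```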
You can pipe your loop to a buffer, and from this buffer you pipe to your streaming instance.
In shell it would look like:
#!/bin/bash
for i in *.mp4; do
ffmpeg -hide_banner -nostats -i "$i" -c:v mpeg2video \
[proper settings] -f mpegts -
done | mbuffer -q -c -m 20000k | ffmpeg -hide_banner \
-nostats -re -fflags +igndts \
-thread_queue_size 512 -i pipe:0 -fflags +genpts \
[proper codec setting] -f flv rtmp://127.0.0.1/live/stream
Of course you can use any kind of loop, also looping through a playlist.
I found that mpeg2video is a bit more stable than x264 for the input stream.
I don't know why, but a minimum of two threads for the mpeg compression works better.
The input encoding needs to be faster than the output frame rate, so we get new input fast enough.
Because of the non-continuous timestamps, we have to skip them and generate new ones in the output.
The buffer size needs to be big enough for the loop to have enough time to get the new clip.
Here is a Rust-based solution which uses this technique: ffplayout
It uses a JSON playlist format. The playlist is dynamic, in that you can always edit the current playlist and change tracks or add new ones.
Very late answer, but I recently ran into the exact same issue as the original poster.
I solved this problem by using OBS and the OBS websockets plugin.
First, set up your RTMP streaming app as you have it now, but stream to a LOCAL RTMP server.
Then have OBS load this stream as a VLC source layer, with the local RTMP stream as the source.
Then (in your app), using the OBS websockets plugin, have your VLC source switch to a static black video or PNG file when the video ends, and switch back to the RTMP stream once the next video starts. This will prevent the RTMP stream from stopping when a video ends. OBS will go black during the short transition, but the final OBS RTMP output will never stop.
There is surely a way to do this by manually setting up an intermediate RTMP server that pushes to a final RTMP server, but I find using OBS easier, with little overhead.
I hope this helps others; this solution has been working incredibly well for me.