I am new to ffmpeg. I am trying to merge two video sources; the bullets below provide more detail:
1. An iBall USB camera
2. A screen-capture utility named UScreenCapture
I am using the following command at the Windows command prompt:
ffmpeg -f dshow -i video="iBall Face2Face Webcam C12.0" -f dshow -i video="UScreenCapture" -r 25 -vcodec mpeg4 -q 12 -f mpegts test.ts
This command captures only from the UScreenCapture source. While grabbing frames from the camera, it gives me errors such as:
real-time buffer 90% full! frame dropped!
real-time buffer 121% full! frame dropped!
Can anyone provide a solution for this issue?
It looks like you need the ffmpeg -map option, which lets you "designate one or more input streams as a source for the output file".
See the FFmpeg "-map" documentation.
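A minimal sketch of how -map could combine the two inputs into a single output picture; the device names are taken from the question, hstack assumes both inputs have the same height (add a scale filter otherwise), and the larger -rtbufsize is a guess at easing the "real-time buffer full" warnings:

```shell
rem Hypothetical sketch: stack camera and screen side by side, then map the
rem filtered stream [v] to the output (hstack needs equal frame heights).
ffmpeg -f dshow -rtbufsize 512M -i video="iBall Face2Face Webcam C12.0" ^
       -f dshow -rtbufsize 512M -i video="UScreenCapture" ^
       -filter_complex "[0:v][1:v]hstack=inputs=2[v]" ^
       -map "[v]" -r 25 -c:v mpeg4 -q:v 12 -f mpegts test.ts
```

If you instead want both streams in one file as separate tracks rather than one merged picture, two plain `-map 0:v -map 1:v` options would do that without the filter.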
Before posting I searched and found similar questions on Stack Overflow (some are listed below), but none helped me towards a solution, hence this post. What differs from many posts I have seen so far is the duration each image is shown within the movie file.
A camera captures one image every 30 seconds. I need to stream them, preferably via HLS, so I wrap two images in an MP4 and then convert the MP4 to MPEG-TS. Each MP4 and TS file plays fine individually (each contains two images, each image transitions after 30 seconds, and each movie file is 1 minute long).
When I reference the two TS files in an M3U8 playlist, only the first TS file gets played. Can anyone advise why it stops and how I can get it to play all the TS files I expect to create, not just the first one? Besides my ffmpeg commands, I also include my VLC log file (though I expect to stream to Firefox/Chrome clients). I am using ffmpeg 4.2.2-static installed on an AWS EC2 instance with AMI2 Linux.
I have four JPGs named image11.jpg, image12.jpg, image21.jpg, image22.jpg. The images look near-identical; only the timestamp in the top left changes.
The following command creates 1.mp4 using image11.jpg and image12.jpg, each image displayed for 30 seconds, for a total duration of 1 minute. It plays as expected.
ffmpeg -y -framerate 1/30 -f image2 -i image1%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 1.mp4
I then convert 1.mp4 to an MPEG-TS file, creating 1.ts. It plays as expected.
ffmpeg -y -i 1.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 1.ts
I repeat the above steps, but specific to image21.jpg and image22.jpg, creating 2.mp4 and 2.ts:
ffmpeg -y -framerate 1/30 -f image2 -i image2%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 2.mp4
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 2.ts
Thus I now have 1.mp4, 1.ts, 2.mp4, 2.ts, and all four play individually just fine.
Using ffprobe I can confirm their duration is 60 seconds, for example:
ffprobe -i 1.ts -v quiet -show_entries format=duration -hide_banner -print_format json
My m3u8 playlist follows:
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-TARGETDURATION:60.000
#EXTINF:60.0000,
1.ts
#EXTINF:60.000,
2.ts
#EXT-X-ENDLIST
Can anyone advise where I am going wrong?
VLC Error Log (though I expect to play via web browser)
I have researched the process using these (and other pages) as a guide:
How to create a video from images with ffmpeg
convert from jpg to mp4 by ffmpeg
ffmpeg examples page
FFMPEG An Intermediate Guide/image sequence
How to use FFmpeg to convert images to video
Take a look at the start_pts/start_time in the ffprobe -show_streams output, my guess is that they all start at zero/near-zero which will cause playback to fail after your first segment.
You can still produce them independently but you will want to use something like -output_ts_offset to correctly set the timestamps for subsequent segments.
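As a sketch of that idea (not from the original answer): with 60-second segments, segment N needs its timestamps shifted by (N-1)*60 seconds. The loop below only computes and prints the offsets; the commented ffmpeg line shows where -output_ts_offset would go when re-creating each .ts from the question's MP4s.

```shell
# Each segment is 60 s long, so segment N starts (N-1)*60 s into the stream.
SEG_DUR=60
for n in 1 2; do
  offset=$(( (n - 1) * SEG_DUR ))
  echo "${n}.ts -> -output_ts_offset ${offset}"
  # ffmpeg -y -i ${n}.mp4 -c:v libx264 -bsf:v h264_mp4toannexb \
  #        -output_ts_offset ${offset} -f mpegts ${n}.ts
done
```

With the offsets applied, the second segment's start_pts should no longer sit at zero, which is what the ffprobe -show_streams check above is looking for.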
The following solution works well for me. I have tested it uninterrupted for more than two hours and believe it ticks all my boxes. (Edited because I forgot the all-important -re flag.)
ffmpeg loops continuously, reading test.jpg and streaming it to my RTMP server. When my camera posts an image every 30 seconds, I copy the new image on top of the existing test.jpg, which in effect changes what is streamed out.
Note the command below is all one line; I have added line breaks to assist reading. The order of the parameters is important: -loop and -fflags +genpts, for example, must appear before the -i parameter.
ffmpeg
-re
-loop 1
-fflags +genpts
-framerate 1/30
-i test.jpg
-c:v libx264
-vf fps=25
-pix_fmt yuvj420p
-crf 30
-f fifo -attempt_recovery 1 -recovery_wait_time 1
-f flv rtmp://localhost:5555/video/test
Some arguments explained:
-re means read the input at its native (real-time) rate
-loop 1 (1 turns the loop on, 0 off)
-fflags +genpts is something I only half understand. PTS, I believe, is the start/end time of the segment, and without this flag the PTS is reset to zero with every new image. Using this argument means I avoid EXT-X-DISCONTINUITY when a new image is served.
-framerate 1/30 means one frame every 30 seconds
-i test.jpg is my image 'placeholder'. As new images are received via a separate script, it overwrites this image. Combined with -loop, this means the ffmpeg output will pick up the new image.
-c:v libx264 selects H.264 video encoding
-vf fps=25 Removing this, or using a different value, resulted in my output stream not showing each image for 30 seconds.
-pix_fmt yuvj420p (sometimes I have seen yuv420p referenced, but that did not work in my environment). I believe there are different JPEG colour ranges, and this switch ensures I can process a wider choice.
-crf 30 sets the constant rate factor. (Note that lower CRF values mean higher quality: 0 is lossless and 51 is worst, so 30 favours smaller output over image quality.)
-f fifo -attempt_recovery 1 -recovery_wait_time 1 -f flv rtmp://localhost:5555/video/test is part of the magic that goes with -loop. I believe it keeps the connection open to my stream server and reduces the risk of DISCONTINUITY in the playlist.
I hope this helps someone going forward.
The following links helped nudge me forward and I share as it might help others to improve upon my solution
Creating a video from a single image for a specific duration in ffmpeg
How can I loop one frame with ffmpeg? All the other frames should point to the first with no changes, maybe like a recursion
Display images on video at specific framerate with loop using FFmpeg
Loop image ffmpeg HLS
https://trac.ffmpeg.org/wiki/Slideshow
https://superuser.com/questions/1699893/generate-ts-stream-from-image-file
https://ffmpeg.org/ffmpeg-formats.html#Examples-3
https://trac.ffmpeg.org/wiki/StreamingGuide
I am recording a stream from a camera using different approaches. I tried to record the camera stream using ffmpeg. I looked at this question: https://stackoverflow.com/a/26464175/5128696 and used the following command:
ffmpeg -i http://user:password@192.168.0.101/video.cgi?resolution=vga -c copy -map 0 -f segment -segment_time 300 -segment_format mp4 "outfile%03d.mp4"
It worked, but the file size peaked at 45 MB for a 2-minute video.
Then I used VLC to record the stream. The resulting file for a fragment of the same length was about 2 MB.
How do I optimize the recorded file size? My camera gives a 640x480 video stream without audio, and 45 MB is too much.
-c copy enables stream copy mode. Remove this option to re-encode:
ffmpeg -i http://user:password@192.168.0.101/video.cgi?resolution=vga -map 0 -f segment -segment_time 300 "outfile%03d.mp4"
This will use the default settings for the encoder named libx264. See FFmpeg Wiki: H.264 for more options.
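For example (illustrative values, not part of the original answer), you could make the encoder settings explicit rather than relying on the defaults; -crf controls quality (lower means higher quality) and -preset trades encoding speed for compression efficiency:

```shell
# Re-encode the 640x480 stream instead of stream-copying it.
# CRF 28 and "veryfast" are illustrative starting points, not tuned values.
ffmpeg -i "http://user:password@192.168.0.101/video.cgi?resolution=vga" \
  -map 0 -c:v libx264 -crf 28 -preset veryfast \
  -f segment -segment_time 300 "outfile%03d.mp4"
```

Raising -crf shrinks the files further at the cost of visible quality; adjusting it per camera is usually a matter of trying a few values.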
I am recording my screen using ffmpeg with the following command:
ffmpeg -f x11grab -framerate 30 -video_size 400x400 -i :0.0+0,0 -crf 30 output.mp4
When the disk runs low on space, ffmpeg throws an av_interleaved_write_frame(): No space left on device error, and when opening the recorded file I get the error This file contains no playable streams.
Is it possible to make the video file playable?
Not with MP4, no. MP4 requires the file be closed at the end to write out the moov box. This box contains information required to play the file, and it cannot be written until the very end due to its structure. You can use a different container like MKV or FLV, which use a different initialization structure and can be written and updated within the stream.
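A sketch of that workaround using the same x11grab setup as the question (the display, size, and filenames are the asker's): record to Matroska, which stays playable up to the point of failure, then losslessly remux to MP4 once the recording has finished cleanly.

```shell
# Record to MKV instead of MP4; an interrupted MKV is still playable.
ffmpeg -f x11grab -framerate 30 -video_size 400x400 -i :0.0+0,0 -crf 30 output.mkv

# After a clean stop, remux without re-encoding if MP4 is really needed.
ffmpeg -i output.mkv -c copy output.mp4
```

The remux step is fast and lossless because -c copy rewrites only the container, not the compressed video.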
In your case, yes. Recently I also ended up with a similar non-playable file due to low disk space. Try this free tool to fix it up:
https://www.videohelp.com/software/recover-mp4-to-h264
Usage:
recover_mp4.exe reference_file.mp4 --analyze
recover_mp4.exe broken.mp4 repaired.h264 [audio.aac | audio.wav | audio.mp3]
I want to capture a thumbnail every 1 second from a TV card (TV signal) using ffmpeg on Windows.
First of all, to record live video from the TV card, I tried the following:
ffmpeg -f dshow -i video="SKYTV HD USB Maxx Video Capture" -r 20 -threads 0 D://test.mkv
But it didn't work. The error message is:
"[dshow#000000000034d920] Could not run filter
video=SKYTV HD USB Maxx Video Capture: Input/output error"
I use the device called 'SKYTV HD USB Maxx Video Capture' to get the TV signal (TV card).
(People usually suggest "ffmpeg -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg", but I don't think that works on Windows; this is the error message I got: "Unknown input format: 'video4linux2'".)
What should I do to record live video and get a thumbnail every 1 second from the TV card (TV signal) using ffmpeg on Windows?
Please help!
First, be sure that the video label you use is really the label returned by:
ffmpeg -list_devices true -f dshow -i dummy
More info here
Another solution is to use the old "Video for Windows" (VFW) interface.
To try that, list your devices with:
ffmpeg -y -f vfwcap -i list
And use your device number as the value of the -i option:
ffmpeg -y -f vfwcap -r 25 -i 0 out.mp4
And if you are finally able to record your stream, there are different options, but in your case everything is clearly described here:
ffmpeg -y -f vfwcap -r 25 -i 0 -f image2 -vf fps=fps=1 out%d.jpg
I'm using FFmpeg to capture a live stream from a DirectShow device (here a VGA2USB adapter).
I need to generate a snapshot on scene change, which I manage to do with the following command line:
ffmpeg -v verbose -r 20 -f dshow -rtbufsize 2000M -i "video=VGA2USB V2U115452" -s 1024x768 -pix_fmt yuv420p -filter:v yadif=2:0:0 -vcodec mjpeg -muxdelay 0.1 -f image2 -vf select='gt(scene\,0.1)' -vsync vfr "c:\tmp\image%3d.jpg"
This command line generates the snapshots, but they are "delayed". I mean, when a scene change is detected by the filter, the previous snapshot is written to the jpg file and the current one stays in a "buffer" (or wherever it is).
If I try to generate a snapshot every 5 seconds (with the -vf fps=fps=1/5 option), the first snapshot is written to the hard disk at the 5th second.
How can I force FFmpeg to write the snapshot immediately rather than waiting for the next one?
Thanks for any help you can provide.