I'm trying to record video/audio from a VIDBOX device using ffmpeg. Using Windows 10 and ffmpeg version N-86129-g1e8daf3, I can see and hear the video/audio fine when I execute
ffplay -f dshow -i video="VIDBOX NW07":audio="Microphone (VIDBOX NW07)"
but I only record a black screen (with the correct audio) when I execute
ffmpeg -f dshow -i video="VIDBOX NW07":audio="Microphone (VIDBOX NW07)" -c:v libx264 out.mp4
What could be causing this to work in ffplay but not ffmpeg?
You are probably testing the output in a generic player. Add -pix_fmt yuv420p to force ffmpeg's output to a standard pixel format that all players can show.
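For example, adding it to your original command:
ffmpeg -f dshow -i video="VIDBOX NW07":audio="Microphone (VIDBOX NW07)" -c:v libx264 -pix_fmt yuv420p out.mp4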
Related
I'm working on a video editing project and I'm using ffmpeg for video rendering.
To preview the video before rendering I want to show it to the user, and I thought of using ffplay:
ffplay -i C:/Users/thota/OneDrive/Desktop/VET/test.mp4 -filter:a "volume="1.0",atempo="1.0"" -vf "transpose=2,transpose=2,setpts=1/"1.0"*PTS,scale="1280*720"" -aspect 16:9 D:/videos.mp4output.mp4
but this is giving an error:
Failed to set value 'volume=1.0,atempo=1.0' for option 'filter:a': Option not found
ffplay only recognizes -vf/-af for video and audio filters respectively.
-af "volume=1.0,atempo=1.0"
I have an ffmpeg command that converts a video into a seamless loop. But now I want to crop the video as well in the same command.
Is there a way to crop the video to 720x720 in the same command?
--Seamless Command--
ffmpeg -i video.mp4 -filter_complex "[0:v]split[body][pre];[pre]trim=duration=1,format=yuva420p,fade=d=1:alpha=1,setpts=PTS+(28/TB)[jt];[body]trim=1,setpts=PTS-STARTPTS[main];[main][jt]overlay" -c:v libx264 -strict experimental out.mp4
Change [0:v]split[body][pre] to [0:v]crop=720:720,split[body][pre]
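With that change, the full command becomes:
ffmpeg -i video.mp4 -filter_complex "[0:v]crop=720:720,split[body][pre];[pre]trim=duration=1,format=yuva420p,fade=d=1:alpha=1,setpts=PTS+(28/TB)[jt];[body]trim=1,setpts=PTS-STARTPTS[main];[main][jt]overlay" -c:v libx264 -strict experimental out.mp4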
I'm trying to extract the alpha channel from ProRes (mov) as greyscale into a separate mp4 file (to emulate video with transparency on an HTML page later).
ffmpeg -i in.mov -hide_banner -f mp4 -vcodec libx264 -vf alphaextract,format=yuv420p out.mp4
but I don't get a filled alpha channel, only a sort of border of it. I'm pretty sure the original file is OK (I tried with different files), and encoding it to webm showed correct transparency.
What I get from ffmpeg: (screenshot)
What the original file looks like: (screenshot)
It's a bug; patched in git master.
A workaround for older versions is:
ffmpeg -i in.mov -vf format=yuva444p16le,alphaextract,format=yuv420p -c:v libx264 out.mp4
I want an asymmetrical side-by-side video with resolution 1920x1080. The first video has a bitrate of 1 Mb/s and the second video has a bitrate of 500 kb/s. Both videos have the same resolution, 1920x1080, and are encoded with H.265 in an MP4 container.
I used this ffmpeg command:
ffmpeg -i leftvideo.mp4 -i rightvideo.mp4 -filter_complex "[0:v] scale=iw/2:ih, pad=2*iw:ih [left]; [1:v] scale=iw/2:ih [right]; [left][right] overlay=main_w/2:0 [out]" -map [out] -c:v libx265 output.mp4
It works well, but I want to keep the original video quality; I don't want to re-encode.
Is it possible to change the resolution of the two videos (to 960x1080) and pack them together into an MP4 container?
EDIT: or is there another method?
Using ffmpeg
You are required to re-encode if you want to use filters in ffmpeg, but if you want to "keep the quality" you can use a lossless output:
ffmpeg -i left.mp4 -i right.mp4 -filter_complex \
"[0:v]scale=iw/2:ih[l];[1:v]scale=iw/2:ih[r];[l][r]hstack" \
-c:v libx264 -qp 0 output.mp4
The resulting file size may be huge. If this is not acceptable you can try a "visually lossless" output by changing -qp 0 to -crf 18.
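That is:
ffmpeg -i left.mp4 -i right.mp4 -filter_complex "[0:v]scale=iw/2:ih[l];[1:v]scale=iw/2:ih[r];[l][r]hstack" -c:v libx264 -crf 18 output.mp4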
You did not provide full details about your inputs, and did not mention audio, so I assumed you are not concerned with the audio.
You did not provide the complete console output from your command so I assumed your ffmpeg is new enough to use the hstack filter.
Using ffplay
Another option is to just use your player to play side-by-side and not even deal with re-encoding. Example using ffplay.
ffplay -f lavfi "movie=left.mp4,scale=iw/2:ih[v0];movie=right.mp4,scale=iw/2:ih[v1];[v0][v1]hstack"
I am trying to create a video output from multiple video cameras.
Following the example given here Presenting more than 2 videos using FFmpeg
and other similar examples.
but I'm getting the error
Output pad "default" for the filter "src" of type "buffer" not connected to any destination
when I run
ffmpeg -i /dev/video1 -i /dev/video0 -filter_complex "[0:0]pad=iw*2:ih[a];[a][1:0]overlay=w[b];[b][2:0]overlay=w:h" -shortest output.mp4
I'm not really sure what this means or how to fix it.
Any help would be greatly appreciated!
Thanks.
When using the "padding" option, you have to specify which is the size of the output image and where you want to put the input image
[0:0]pad=iw*2:ih:0:0
Tested under Windows 7 with files of the same size:
ffmpeg -i out.avi -i out.avi -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[a][1:0]overlay=w" -shortest output.mp4
And with webcam capture (vfwcap) plus a still picture (as I have only one webcam). BTW, you can see how to scale one of the sources to fit the target (in case your sources have different resolutions):
ffmpeg -y -f vfwcap -r 10 -i 0 -loop 1 -i photo.jpg -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[1:0]scale=640:480[b];[a][b]overlay=w" -shortest output.mp4
Under Linux:
ffmpeg -i /dev/video1 -i /dev/video0 -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[a][1:0]overlay=w" -shortest output.mp4
If it doesn't work, test a simple recording of video 1, then of video 0, and check their properties (type, resolution, fps):
ffmpeg -i /dev/video1 -shortest output1.mp4
ffmpeg -i output1.mp4
If you still have issues, update your question with the ffmpeg console output (as text) for the video 1 and video 0 captures, and also for the overlay call.