I am trying to record my desktop using a pipe, but ffmpeg fails.
On Windows:
ffmpeg -filter_complex ddagrab=output_idx=0:framerate=5,hwdownload,format=bgra -c:v libx264 -crf 18 -y pipe:1 | cat > test.mp4
On macOS:
ffmpeg -f avfoundation -framerate 5 -capture_cursor 1 pipe:1 | cat > output.mkv
However, on Windows, this command works:
ffmpeg -f gdigrab -i desktop -f mpegts pipe:1 | cat > out.mp4
It turned out that adding the parameter -f mpegts solves the problem: when the output is a pipe, ffmpeg cannot infer the container format from a file extension, so it must be given explicitly.
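For illustration, here are the two failing commands with an explicit container format added (a sketch, not a tested recipe: the avfoundation input -i "1" is an assumed screen-device index, matroska is chosen as one muxer that tolerates non-seekable output, and whatever the target file is named, the bytes written are MPEG-TS or Matroska, not MP4):
ffmpeg -filter_complex ddagrab=output_idx=0:framerate=5,hwdownload,format=bgra -c:v libx264 -crf 18 -f mpegts -y pipe:1 | cat > test.ts
ffmpeg -f avfoundation -framerate 5 -capture_cursor 1 -i "1" -f matroska pipe:1 | cat > output.mkv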
I found a couple of threads on Stack Overflow that show how to create a dummy MOV file for the picture:
ffmpeg -f lavfi -i color=c=black:s=640x480 -c:v prores_ks -profile:v 3 -tune stillimage -pix_fmt yuv422p10 -t 10 output_proreshq.mov
The above creates a 10-second, picture-only file.
And this:
ffmpeg -f lavfi -i anullsrc=channel_layout=5.1:sample_rate=48000 -t 10 output.wav
creates a six-channel WAV file.
I haven't been able to figure out how to combine these two commands to create a file with a blank picture and six channels of blank audio. Can someone show me how to get this done? Thanks!
The combined command is:
ffmpeg -f lavfi -i color=c=black:s=640x480 -f lavfi -i anullsrc=channel_layout=5.1:sample_rate=48000 -c:v prores_ks -profile:v 3 -pix_fmt yuv422p10 -c:a pcm_s16le -t 10 output_proreshq.mov
-tune stillimage is for libx264 only and is being ignored in your command, so I removed it.
As for your duration queries, all of the simplified examples below will have a 10-second duration:
ffmpeg -f lavfi -i color -f lavfi -i anullsrc -t 10 output.mov
ffmpeg -f lavfi -i color=d=10 -f lavfi -i anullsrc -shortest output.mov
ffmpeg -f lavfi -i color -t 10 -f lavfi -i anullsrc -shortest output.mov
ffmpeg -t 10 -f lavfi -i color -t 10 -f lavfi -i anullsrc output.mov
ffmpeg -f lavfi -i color,trim=duration=10 -f lavfi -i anullsrc,atrim=duration=10 output.mov
ffmpeg -f lavfi -i color,trim=duration=10 -f lavfi -i anullsrc -shortest output.mov
Use whatever method you prefer.
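If you want to double-check the resulting duration, something like this works (assuming the ffprobe tool that ships alongside ffmpeg):
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1 output.mov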
I am trying this:
ffmpeg -v verbose -re -y -i syncTest.mp4 -af azmq,volume=1 \
-c:v copy -c:a aac ./output.mp4
Then I invoke:
echo 'Parsed_volume_1 volume 0' | ./zmqsend
It works: the audio is muted until I invoke it again with volume 1.
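For completeness, the unmute command mirrors the mute one (assuming the single azmq/volume chain above, so the filter instance is still named Parsed_volume_1):
echo 'Parsed_volume_1 volume 1' | ./zmqsend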
But with:
ffmpeg -v verbose -re -y -i syncTest.mp4 -af \
azmq,adelay=delays=0S:all=1 -c:v copy -c:a aac ./output.mp4
and then running something like:
echo Parsed_adelay_1 delays 20000S | ./zmqsend
echo Parsed_adelay_1 all 1 | ./zmqsend
it does not work; it prints:
78 Function not implemented
(78 is apparently AVERROR(ENOSYS), suggesting that adelay, unlike volume, simply does not implement runtime commands.) Is there really no way to do this?
I have two commands, as listed below.
Add an intro image to a video:
ffmpeg -y -loop 1 -framerate 10 -t 3 -i intro.png -i video.mp4 -filter_complex "[0:0] [1:0] concat=n=2:v=1:a=0" -c:v libx264 -crf 23 videoWithIntro.mp4
Add a watermark to a video:
ffmpeg -y -i video.mp4 -i watermark_color.png -filter_complex "overlay=x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2" videoWithWatermark.mp4
I was wondering, is it possible to combine these into one command?
Use:
ffmpeg -y -loop 1 -framerate 10 -t 3 -i intro.png -i video.mp4 -i watermark_color.png -filter_complex "[0][1]concat=n=2:v=1:a=0[v];[v][2]overlay=x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2" videoWithWatermark.mp4
I assume your videos don't have audio; if they do, use:
ffmpeg -y -loop 1 -framerate 10 -t 3 -i intro.png -i video.mp4 -i watermark_color.png -f lavfi -t 3 -i anullsrc -filter_complex "[0][1]concat=n=2:v=1:a=0[v];[v][2]overlay=x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2;[3][1]concat=n=2:v=0:a=1" videoWithWatermark.mp4
The final command to get this working correctly is as follows (note the setsar=1 on each branch before concat; the concat filter requires its segments to share the same sample aspect ratio):
ffmpeg -y -loop 1 -framerate 25 -t 3 -i 1920x1080_intro.png -i DSC_0002.MOV -i watermark_color.png -report -an -filter_complex "[1][2]overlay=x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2,setsar=1[v];[0]setsar=1[pre];[pre][v]concat=n=2:v=1:a=0" ../testing/videoWithIntroAndWatermark.mp4
I use the following command to pipe the FFmpeg output to two ffplay instances, but it doesn't work.
ffmpeg -ss 5 -t 10 -i input.avi -force_key_frames 00:00:00.000 -tune zerolatency -s 1920x1080 -r 25 -f mpegts output.ts -f avi -vcodec copy -an - | ffplay -i - -f mpeg2video - | ffplay -i -
How can I pipe the FFmpeg output to two (or more) ffplay instances?
I saw this page, but it doesn't work for ffplay (it is for Linux, but my OS is Windows).
Please help me. Thanks!
There's a Tee-Object cmdlet (alias tee) in PowerShell, but it duplicates output to a file or variable rather than to other processes, so the process substitution >(...) below needs a bash-like shell (e.g. Git Bash or WSL on Windows). You can try:
ffmpeg -re -i [...] -f mpegts - | tee >(ffplay -) | ffplay -
An alternative is to output to a multicast port on the local subnetwork:
ffmpeg -re -i [...] -f mpegts udp://224.0.0.1:10000
You can then connect as many clients as you require on the same address/port:
ffplay udp://224.0.0.1:10000
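If playback over multicast stutters, one optional tweak (not required for the basic setup) is to pin the UDP payload to a whole number of 188-byte TS packets:
ffmpeg -re -i [...] -f mpegts "udp://224.0.0.1:10000?pkt_size=1316"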
To force ffmpeg to read, decode, and scale only once, and so bring CPU usage down, I put those steps in one ffmpeg process and piped the result into another ffmpeg process that performed the encoding.
This improved the overall processing time by 15–20%.
INPUT_STREAM="ffmpeg -i $INPUT_FILE -vf scale=720:-1,crop=720:400 -threads auto -f yuv4mpegpipe -"
$INPUT_STREAM | ffmpeg -y -f yuv4mpegpipe -i - \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto out-250.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto out-500.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto out-700.mp4
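As an aside, expanding a pipeline stage from an unquoted shell variable relies on word splitting and breaks on paths containing spaces; invoking the first ffmpeg directly is more robust (a sketch of the same pipeline, with the placeholder option variables kept as above):
ffmpeg -i "$INPUT_FILE" -vf scale=720:-1,crop=720:400 -threads auto -f yuv4mpegpipe - \
  | ffmpeg -y -f yuv4mpegpipe -i - \
      $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto out-250.mp4 \
      $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto out-500.mp4 \
      $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto out-700.mp4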
I then tried the syntax below:
ffmpeg -i c:\sample.mp4 -threads auto -f yuv4mpegpipe - | ffmpeg -y -f yuv4mpegpipe -i -vcodec libx264 -b:v 250k -threads auto c:\out-250.mp4 -vcodec libx264 -b:v 260k -threads auto c:\out-260.mp4
… but this error appears:
At least one output file must be specified
But I have specified the output file, which is C:\out-260.mp4. Still, it doesn't work.
What's wrong?
ffmpeg -y -f yuv4mpegpipe -i -vcodec …
You didn't specify any input file. To read from stdin, use -:
ffmpeg -y -f yuv4mpegpipe -i - -vcodec …
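Applied to the full command from the question, that gives (same paths and options as in the question, only the missing - added):
ffmpeg -i c:\sample.mp4 -threads auto -f yuv4mpegpipe - | ffmpeg -y -f yuv4mpegpipe -i - -vcodec libx264 -b:v 250k -threads auto c:\out-250.mp4 -vcodec libx264 -b:v 260k -threads auto c:\out-260.mp4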