ffmpeg not stopping when input is dshow - windows

We were recording a video by specifying a named pipe as input for video frames, like this:
ffmpeg -r 30 -f rawvideo -pix_fmt bgra -s 640x480 -i namedPipe [... output options] out.mp4
It works well, and FFmpeg stops once the named pipe is closed, as is desired.
However, we also want to record live audio from DirectShow, like this:
ffmpeg -r 30 -f rawvideo -pix_fmt bgra -s 640x480 -i namedPipe -f dshow -i audio=virtual-audio-capturer [... output options] out.mp4
This also works, but now the process no longer stops once we close the named pipe for the video frames.
My guess is that ffmpeg still receives audio input and thus just keeps running.
How can I change the FFmpeg command so that it stops once the video frames stop coming?

Add -shortest, i.e. [... output options] -shortest out.mp4
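-shortest tells FFmpeg to finish encoding when the shortest input stream ends, so the process exits as soon as the video pipe closes, even though the dshow audio keeps arriving. Applied to the command above (a sketch; your real output options go where indicated):
ffmpeg -r 30 -f rawvideo -pix_fmt bgra -s 640x480 -i namedPipe -f dshow -i audio=virtual-audio-capturer [... output options] -shortest out.mp4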

Related

combine raw video and audio buffer and play using named pipes by ffplay

I have one video buffer and one audio buffer. I want to combine these buffers and play them with ffplay as a single entity. Currently I am using this command, which obviously doesn't work ...
ffplay -f rawvideo -pixel_format bgr24 -video_size 1280x720 -vf "transpose=2,transpose=2" -i \.\pipe\VirtualVideoPipe -f s32le -channels 2 -sample_rate 44100 -i \.\pipe\VirtualAudioPipe
The error message says ...
Argument '\.\pipe\VirtualAudioPipe' provided as input filename, but '\.\pipe\VirtualVideoPipe' was already specified.
What should the command be for combining the two named-pipe video and audio sources and playing them as one?
Kindly help ...
ffplay -f rawvideo -pixel_format bgr24 -video_size 1280x720 -vf "transpose=2,transpose=2" -i \\.\pipe\VirtualVideoPipe | ffplay -f s32le -channels 1 -sample_rate 44100 -i \\.\pipe\VirtualAudioPipe
With this command I am able to play the streams, but they open in different windows ...
This isn't what I want ...
I want to combine them into a single window ...
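Since ffplay only accepts a single input, one possible approach (an untested sketch, reusing the pipe/NUT technique that appears in the answers below) is to let ffmpeg read both named pipes, mux them into one stream, and pipe that to a single ffplay window:
ffmpeg -f rawvideo -pixel_format bgr24 -video_size 1280x720 -i \\.\pipe\VirtualVideoPipe -f s32le -channels 2 -sample_rate 44100 -i \\.\pipe\VirtualAudioPipe -vf "transpose=2,transpose=2" -c:v rawvideo -c:a pcm_s32le -f nut pipe: | ffplay -i pipe: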

update ffmpeg filter without interrupting rtmp stream

I am using ffmpeg to read an RTMP stream, add a filter such as a blur box, and create a different RTMP stream.
The command looks, for example, like this:
ffmpeg -i <rtmp_source_url> -filter_complex "split=2[a][b];[a]crop=w=300:h=300:x=0:y=0[c];[c]boxblur=luma_radius=10:luma_power=1[blur];[b][blur]overlay=x=0:y=0[output]" -map [output] -acodec aac -vcodec libx264 -tune zerolatency -f flv <rtmp_output_url>
where rtmp_source_url is where the camera/drone sends its feed and rtmp_output_url carries the resulting video with the blur box.
The blur box needs to move, either because the target moved or because the camera did.
I want to do this without interrupting the output stream.
I am using fluent-ffmpeg to create the ffmpeg process, while a different part of the program computes where the blur box should be.
Thanks for your help and time!
Consider using a pipe to split up the processing.
See here - https://ffmpeg.org/ffmpeg-protocols.html#pipe
The accepted syntax is:
pipe:[number]
number is the number corresponding to the file descriptor of the pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If number is not specified, by default the stdout file descriptor will be used for writing, stdin for reading.
For example to read from stdin with ffmpeg:
cat test.wav | ffmpeg -i pipe:0
# ...this is the same as...
cat test.wav | ffmpeg -i pipe:
For writing to stdout with ffmpeg:
ffmpeg -i test.wav -f avi pipe:1 | cat > test.avi
# ...this is the same as...
ffmpeg -i test.wav -f avi pipe: | cat > test.avi
For example, you read an RTMP stream, add a filter such as a blur box, and create a different RTMP stream. So the first step is to separate the incoming and outgoing streams:
ffmpeg -i <rtmp_source_url> -s 1920x1080 -f rawvideo pipe: | ffmpeg -s 1920x1080 -f rawvideo -y -i pipe: -filter_complex "split=2[a][b];[a]crop=w=300:h=300:x=0:y=0[c];[c]boxblur=luma_radius=10:luma_power=1[blur];[b][blur]overlay=x=0:y=0[output]" -map [output] -acodec aac -vcodec libx264 -tune zerolatency -f flv <rtmp_output_url>
I do not know what criteria you use to vary the blur box, but now you can process the incoming frames in the second ffmpeg. Also, I used 1920x1080 as the video size; replace it with the actual size.
For the first iteration, do not worry about the audio and just do your blur operation: since we are feeding rawvideo, the audio is ignored in this example.
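If restarting the second ffmpeg is also undesirable, a different technique worth knowing about is FFmpeg's sendcmd filter, which can change the parameters of a running filter (the crop and overlay filters both accept x/y commands). A rough, untested sketch, assuming a hypothetical commands file moves.cmd and named filter instances:
ffmpeg -s 1920x1080 -f rawvideo -i pipe: -filter_complex "sendcmd=f=moves.cmd,split=2[a][b];[a]crop@box=w=300:h=300:x=0:y=0[c];[c]boxblur=luma_radius=10:luma_power=1[blur];[b][blur]overlay@pos=x=0:y=0[output]" -map [output] -acodec aac -vcodec libx264 -tune zerolatency -f flv <rtmp_output_url>
where moves.cmd holds timestamped position updates, for example:
# at t=5s, move the blur box to (200,150)
5.0 crop@box x 200, crop@box y 150, overlay@pos x 200, overlay@pos y 150;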

Extracting frames from video while recording using ffmpeg

I am using ffmpeg to record a video using a Raspberry Pi with its camera module.
I would like to run an image classifier at a regular interval, for which I need to extract a frame from the stream.
This is the command I currently use for recording:
$ ffmpeg -f video4linux2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video0 -vcodec copy -an test.h264
In other threads this command is recommended:
ffmpeg -i file.mpg -r 1/1 $filename%03d.bmp
I don't think this is intended for files that are still being appended to, and I get the error "Cannot use -sseof, duration of test.h264 not known".
Is there any way that ffmpeg allows this?
I don't have a Raspberry Pi set up with a camera at the moment to test with, but you should be able to simply append a second output stream to your original command, as follows to get, say, 1 frame/second of BMP images:
ffmpeg -f video4linux2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video0 -vcodec copy -an test.h264 -r 1 frame-%03d.bmp
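If the classifier only ever needs the most recent frame, a variation worth trying (a sketch, using the image2 muxer's -update flag) is to keep overwriting a single file instead of writing a numbered sequence:
ffmpeg -f video4linux2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video0 -vcodec copy -an test.h264 -r 1 -update 1 latest.bmp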

ffmpeg how to record and preview at the same time

I want to capture video+audio from a DirectShow device like a webcam and stream it to an RTMP server. That part is no problem. But I also want to be able to see a preview of it. After a lot of searching, someone said to pipe the output to ffplay using the tee muxer, but I couldn't make it work. Here is my command for streaming to the RTMP server; how should I change it?
ffmpeg -rtbufsize 8196k -framerate 25 -f dshow -i video="Microsoft® LifeCam Studio(TM)":audio="Desktop Microphone (Microsoft® LifeCam Studio(TM))" -vcodec libx264 -acodec aac -strict -2 -b:v 1024k -b:a 128k -ar 48000 -s 720x576 -f flv "rtmp://ip-address-of-my-server/live/out"
Here is the final command I used, and it works.
ffmpeg -rtbufsize 8196k -framerate 25 -f dshow -i video="Microsoft® LifeCam Studio(TM)":audio="Desktop Microphone (Microsoft® LifeCam Studio(TM))" -vcodec libx264 -acodec aac -strict -2 -f tee -map 0:v -map 0:a "[f=flv]rtmp://ip-address-and-path|[f=nut]pipe:" | ffplay pipe:
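For reference, the tee muxer in that command duplicates the encoded streams: the [f=flv] output goes to the RTMP URL, while the [f=nut] output goes to stdout (pipe:), which the piped ffplay reads and displays as the preview.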
The core command for those running ffmpeg on a Unix-compatible system (e.g. macOS, BSD, and GNU/Linux) is really quite simple: redirect, or "pipe", one of the outputs of ffmpeg to ffplay. The main problem here is that ffmpeg cannot autodetect the media format (or container) if the output doesn't have a recognizable file extension such as .avi or .mkv.
Therefore you should specify the format with the option -f. You can list the available choices for option -f with the ffmpeg -formats command.
In the following GNU/Linux command example, we record from an input source named /dev/video0 (possibly a webcam). The input source can also be a regular file.
ffmpeg -i /dev/video0 -f matroska - filename.mkv | ffplay -i -
A less ambiguous way of writing this for non-Unix users would be to use the special output specifier pipe.
ffmpeg -i /dev/video0 -f matroska pipe:1 filename.mkv | ffplay -i pipe:0
The above commands should be enough to produce a preview. But to make sure that you get the video and audio quality you want, you also need to specify, among other things, the audio and video codecs.
ffmpeg -i /dev/video0 -c:v copy -c:a copy -f matroska - filename.mkv | ffplay -i -
If you choose a slow codec like AV1, you'd still get a preview, but one that stutters.
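To keep the preview smooth while still compressing, one option (a sketch, assuming libx264 is available in your build) is to use a fast encoder preset:
ffmpeg -i /dev/video0 -c:v libx264 -preset ultrafast -tune zerolatency -c:a aac -f matroska - filename.mkv | ffplay -i -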

FFmpeg input duration?

With FFmpeg you have the option -t, which sets the duration of the output. However, I do not see a way to limit the duration of the input. Take this command:
ffmpeg -i video.mp4 -c copy -t 60 out.mp4
This simply creates a 60 second clip of the original video. However if I wanted to clip the audio while keeping the full video stream, FFmpeg does not seem to have an option for this.
I have tried simply clipping the audio first, then combining the clipped audio with the video file, but this causes video/audio sync issues for me.
‘-aframes number (output)’
Set the number of audio frames to record. This is an alias for -frames:a.
§ Audio Options
ffmpeg -i video.mp4 -c copy -aframes 100 out.mp4
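Note that on reasonably recent FFmpeg builds, -t is also accepted as an input option (placed before -i), limiting how much is read from that input. A sketch using the same file twice, taking the full video from the first input and only 60 seconds of audio from the second:
ffmpeg -i video.mp4 -t 60 -i video.mp4 -map 0:v -map 1:a -c copy out.mp4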
Use the "-itsoffset" option.
This makes the first 10 seconds mute.
ffmpeg -i video.mp4 -vn -acodec copy -ss 10.0 out_audio.mp4
ffmpeg -itsoffset 10.0 -i out_audio.mp4 -i video.mp4 -vcodec copy -acodec copy out.mp4
