I am using Windows.
I recorded the camera, microphone, and system sounds each separately with ffmpeg.
ffmpeg -f dshow -i video="USB2.0 PC CAMERA" output.mkv
The command above records the camera.
ffmpeg -f dshow -i audio="#device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{5B4DB0B5-B645-4AFA-930D-4710AAF753DB}" output.wav
The command above records the microphone.
ffmpeg -f dshow -i audio="#device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{ADECEC1D-C3CC-4BAE-8516-752251B8B63F}" output.mkv
And the command above records the system audio.
I mixed the system audio with the microphone like this:
ffmpeg -f dshow -i audio="#device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{5B4DB0B5-B645-4AFA-930D-4710AAF753DB}" -f dshow -i audio="#device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{ADECEC1D-C3CC-4BAE-8516-752251B8B63F}" -filter_complex amerge=inputs=2 stream.mp3
But there is still an issue with the volume levels. How do I adjust the volume level
for each input or for the output file?
You can add the volume filter:
ffmpeg -f dshow -i audio="#device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{5B4DB0B5-B645-4AFA-930D-4710AAF753DB}" -f dshow -i audio="#device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{ADECEC1D-C3CC-4BAE-8516-752251B8B63F}" -filter_complex "[0:a]volume=0.3[a0];[1:a]volume=0.5[a1];[a0][a1]amerge=inputs=2" -ac 1 stream.mp3
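The 0.3 and 0.5 above are linear multipliers (1.0 leaves the level unchanged). The volume filter also accepts decibel values, so an equivalent form would be the following (same devices as above; the -10dB/-6dB figures are only placeholders to tune by ear):
ffmpeg -f dshow -i audio="#device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{5B4DB0B5-B645-4AFA-930D-4710AAF753DB}" -f dshow -i audio="#device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{ADECEC1D-C3CC-4BAE-8516-752251B8B63F}" -filter_complex "[0:a]volume=-10dB[a0];[1:a]volume=-6dB[a1];[a0][a1]amerge=inputs=2" -ac 1 stream.mp3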
I am facing an error when I try to record both audio sources (mic and speaker); I need a command that records video and audio (both mic and speaker) together.
I tried the command below:
ffmpeg -f dshow -i audio="Headset Microphone (Plantronics Blackwire 3225 Series)" -f dshow -i audio="virtual-audio-capturer" -f gdigrab -framerate 10 -video_size 1920x1080 -draw_mouse 1 -i desktop -map 2 -map 0 -map 1 screen.avi
but it's not working; it throws an error.
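Since the error message isn't shown, here is only a sketch of one common approach: mix the two dshow audio inputs with amix and map the gdigrab capture as the video stream (the device names are taken from the question; the container and codec choices are assumptions):
ffmpeg -f dshow -i audio="Headset Microphone (Plantronics Blackwire 3225 Series)" -f dshow -i audio="virtual-audio-capturer" -f gdigrab -framerate 10 -draw_mouse 1 -i desktop -filter_complex "[0:a][1:a]amix=inputs=2[a]" -map 2:v -map "[a]" -c:v libx264 -preset ultrafast -c:a aac screen.mp4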
I need to overlay audio files at specific times onto an existing silence.mp3. Something like this:
[----[...audio1...]----------[...audio2...]---------------]
I've tried the following but it doesn't work:
ffmpeg -y -i silence.mp3 -itsoffset 4 -i audio1.mp3 -itsoffset 30 -i audio2.mp3 -c:a copy final.mp3
Any help would be appreciated. Thank you.
There are several methods.
adelay, amix
Use the adelay and amix filters:
ffmpeg -i audio1.mp3 -i audio2.mp3 -filter_complex "[0]adelay=4s:all=1[0a];[1]adelay=30s:all=1[1a];[0a][1a]amix=inputs=2[a]" -map "[a]" output.mp3
Note that the amix filter will reduce the volume of the output to prevent clipping. Follow up with the dynaudnorm or volume filters if desired.
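For example, to compensate for the roughly halved level of a two-input amix, append a volume filter after it (the factor 2 below is just the naive compensation for two inputs; adjust to taste):
ffmpeg -i audio1.mp3 -i audio2.mp3 -filter_complex "[0]adelay=4s:all=1[0a];[1]adelay=30s:all=1[1a];[0a][1a]amix=inputs=2,volume=2[a]" -map "[a]" output.mp3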
adelay, concat filter
Or use the adelay and concat filters. This assumes audio1.mp3 is 10 seconds long, and that both inputs have the same sample rate and channel layout:
ffmpeg -i audio1.mp3 -i audio2.mp3 -filter_complex "[0]adelay=4s:all=1[0a];[1]adelay=16s:all=1[1a];[0a][1a]concat=n=2:v=0:a=1[a]" -map "[a]" output.mp3
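If you don't know the length of audio1.mp3, query it with ffprobe and set the second delay to the desired start time minus the first delay minus audio1's length (30 - 4 - 10 = 16 in this example):
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 audio1.mp3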
anullsrc, concat demuxer
Or generate silent files as spacers with the anullsrc filter:
ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -t 4 4.mp3
ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -t 16 16.mp3
Create input.txt:
file '4.mp3'
file 'audio1.mp3'
file '16.mp3'
file 'audio2.mp3'
Then use the concat demuxer:
ffmpeg -f concat -i input.txt -c copy output.mp3
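Note: if input.txt lists absolute paths instead of the relative names shown above, the concat demuxer rejects them by default; add -safe 0 in that case:
ffmpeg -f concat -safe 0 -i input.txt -c copy output.mp3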
I am trying to use FFmpeg to splice a few videos and output one combined video.
I managed to combine all the video streams with this command:
ffmpeg.exe -i 1.mov -i 2.mov -filter_complex "[0:v]scale=1920:1080[v0];[1:v]scale=1920:1080[v1];[v0][v1] concat=n=2:v=1[v]" -map "[v]" out.mp4
Also, I can add a dummy audio track to a video with this command:
ffmpeg.exe -i 1.mov -f lavfi -i aevalsrc=0 -shortest out.mov
Both commands above work perfectly; however, 2.mov has an audio stream while 1.mov does not.
Is there any method that adds a dummy audio track to 1.mov and then combines the video and audio streams from 1.mov and 2.mov in one go, so that the combined output plays sound during the 2.mov clip?
Use
ffmpeg.exe -i 1.mov -i 2.mov -f lavfi -t 1 -i anullsrc -filter_complex "[0:v]scale=1920:1080[v0];[1:v]scale=1920:1080[v1];[v0][2:a][v1][1:a] concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" out.mp4
-f lavfi -t 1 -i anullsrc adds a silent 1 second audio input, which is used as a counterpart to the video input from 1.mov. The concat filter will pad the audio to match the video duration of 1.mov.
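If the dummy audio should match the properties of the audio in 2.mov, set them explicitly on anullsrc (the stereo/44100 Hz values below are assumptions; check your file with ffprobe first):
ffmpeg.exe -i 1.mov -i 2.mov -f lavfi -t 1 -i anullsrc=channel_layout=stereo:sample_rate=44100 -filter_complex "[0:v]scale=1920:1080[v0];[1:v]scale=1920:1080[v1];[v0][2:a][v1][1:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" out.mp4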
Using FFmpeg, I am trying to record a video that takes input from two cameras. In the output video I want the camera feeds side by side, so I have used hstack:
ffmpeg -rtbufsize 200M -f dshow -i video="Integrated Webcam" -f dshow -i video="USB2.0 Camera" -filter_complex "[0:v][1:v]hstack=inputs=2[v];" -map "[v]" -f flv test.flv
But I am getting an error.
Use
ffmpeg -rtbufsize 200M -f dshow -i video="Integrated Webcam" -rtbufsize 200M -f dshow -i video="USB2.0 Camera" -filter_complex "[0:v][1:v]hstack=inputs=2[v]" -map "[v]" -f flv test.flv
rtbufsize is an input option and has to be applied to each input for which it is intended.
The final filter in a filter_complex should not be terminated with a delimiter (, or ;).
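Also note that hstack requires both inputs to have the same height. If the two cameras deliver different resolutions (not shown in the question, so this is an assumption), scale them to a common height first:
ffmpeg -rtbufsize 200M -f dshow -i video="Integrated Webcam" -rtbufsize 200M -f dshow -i video="USB2.0 Camera" -filter_complex "[0:v]scale=-2:720[left];[1:v]scale=-2:720[right];[left][right]hstack=inputs=2[v]" -map "[v]" -f flv test.flv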
I want to capture video and audio from a DirectShow device such as a webcam and stream it to an RTMP server. That part is no problem. The problem is that I also want to see a preview of the stream. After a lot of searching, someone suggested piping the output to ffplay using the tee muxer, but I couldn't make it work. Here is my command for streaming to the RTMP server; how should I change it?
ffmpeg -rtbufsize 8196k -framerate 25 -f dshow -i video="Microsoft® LifeCam Studio(TM)":audio="Desktop Microphone (Microsoft® LifeCam Studio(TM))" -vcodec libx264 -acodec aac -strict -2 -b:v 1024k -b:a 128k -ar 48000 -s 720x576 -f flv "rtmp://ip-address-of-my-server/live/out"
Here is the final command I used, and it works.
ffmpeg -rtbufsize 8196k -framerate 25 -f dshow -i video="Microsoft® LifeCam Studio(TM)":audio="Desktop Microphone (Microsoft® LifeCam Studio(TM))" -vcodec libx264 -acodec aac -strict -2 -f tee -map 0:v -map 0:a "[f=flv]rtmp://ip-address-and-path|[f=nut]pipe:" | ffplay pipe:
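If a closed preview window should not stop the RTMP stream, the tee muxer can mark the pipe output as non-fatal with onfail=ignore; a variant of the same command:
ffmpeg -rtbufsize 8196k -framerate 25 -f dshow -i video="Microsoft® LifeCam Studio(TM)":audio="Desktop Microphone (Microsoft® LifeCam Studio(TM))" -vcodec libx264 -acodec aac -strict -2 -f tee -map 0:v -map 0:a "[f=flv]rtmp://ip-address-of-my-server/live/out|[f=nut:onfail=ignore]pipe:" | ffplay pipe: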
The core command for those running ffmpeg on a Unix-compatible system (e.g. macOS, BSD, and GNU/Linux) is really quite simple: redirect, or "pipe", one of the outputs of ffmpeg to ffplay. The main problem here is that ffmpeg cannot autodetect the media format (or container) if the output doesn't have a recognizable file extension such as .avi or .mkv.
Therefore you should specify the format with the option -f. You can list the available choices for option -f with the ffmpeg -formats command.
In the following GNU/Linux command example, we record from an input source named /dev/video0 (possibly a webcam). The input source can also be a regular file.
ffmpeg -i /dev/video0 -f matroska - filename.mkv | ffplay -i -
A less ambiguous way of writing this for non-Unix users would be to use the special output specifier pipe.
ffmpeg -i /dev/video0 -f matroska pipe:1 filename.mkv | ffplay -i pipe:0
The above commands should be enough to produce a preview. But to make sure that you get the video and audio quality you want, you also need to specify, among other things, the audio and video codecs.
ffmpeg -i /dev/video0 -c:v copy -c:a copy -f matroska - filename.mkv | ffplay -i -
If you choose a slow codec like Google's AV1, you'd still get a preview, but one that stutters.
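Conversely, a fast encoder keeps the preview smooth. For example (the encoder, preset, and tune below are illustrative choices, not requirements):
ffmpeg -i /dev/video0 -c:v libx264 -preset ultrafast -tune zerolatency -f matroska pipe:1 | ffplay -i pipe:0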