Using arecord -L, I am able to identify my Logitech USB webcam as:
hw:CARD=U0x46d0x821,DEV=0
    USB Device 0x46d:0x821, USB Audio
    Direct hardware device without any conversions
plughw:CARD=U0x46d0x821,DEV=0
    USB Device 0x46d:0x821, USB Audio
    Hardware device with all software conversions
When I go into /dev/snd/by-id, the webcam is described as:
usb-046d_0821_6813BFD0-00 -> ../controlC1
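(As a side note, the ../controlC1 target means the kernel registered the webcam as ALSA card 1, so a numeric device name such as plughw:1,0 should refer to the same hardware. A sketch of extracting the card number from the symlink; the target is hard-coded here for illustration:)

```shell
#!/bin/sh
# Symlink target as shown in /dev/snd/by-id, hard-coded for illustration;
# in practice: target=$(readlink /dev/snd/by-id/usb-046d_0821_6813BFD0-00)
target='../controlC1'

# The digits after "controlC" are the ALSA card number
card=${target#../controlC}
echo "plughw:$card,0"
```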
I know that the command to use a sound device in ffmpeg is
ffmpeg -f alsa -i $ALSA_DEVICE_NAME..
I have tried
ffmpeg -f alsa -i "hw:CARD=U0x46d0x821,DEV=0"
and
ffmpeg -f alsa -i "plughw:CARD=U0x46d0x821,DEV=0"
and in both cases I receive the same error message:
ALSA lib pcm.c:2208:(snd_pcm_open_noupdate) Unknown PCM hw=CARD=U0x46d0x821,DEV=0
[alsa # 0x9c96580] cannot open audio device hw=CARD=U0x46d0x821,DEV=0 (No such file or directory)
hw:CARD=U0x46d0x821,DEV=0: Input/output error
I have also tried:
ffmpeg -f alsa -i "usb-046d_0821_6813BFD0-00"
and
ffmpeg -f alsa -i "usb-046d_0821_6813BFD0-00,DEV=0"
and still received error messages.
Could you please help me formulate the correct format of the command?
I have finally been able to use the sound portion of the webcam under ffmpeg. The correct way to do it is NOT to enclose the hardware value in quotes:
ffmpeg -f alsa -i plughw:CARD=U0x46d0x821,DEV=0
instead of:
ffmpeg -f alsa -i "plughw:CARD=U0x46d0x821,DEV=0"
I hope this helps someone else.
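As a related convenience, the plughw device string can be pulled out of arecord -L output instead of being typed by hand. A sketch; the sample output from the question is hard-coded here for illustration, and in practice you would capture it from arecord itself:

```shell
#!/bin/sh
# Sample `arecord -L` output, hard-coded for illustration;
# in practice: sample=$(arecord -L)
sample='hw:CARD=U0x46d0x821,DEV=0
    USB Device 0x46d:0x821, USB Audio
    Direct hardware device without any conversions
plughw:CARD=U0x46d0x821,DEV=0
    USB Device 0x46d:0x821, USB Audio
    Hardware device with all software conversions'

# Device names start in column 1; descriptions are indented.
# Pick the first plughw entry (its software conversions handle
# sample-rate and format mismatches automatically).
dev=$(printf '%s\n' "$sample" | awk '/^plughw:/ { print; exit }')
echo "$dev"
```

The result can then be passed unquoted, as in the answer above: ffmpeg -f alsa -i $dev out.wav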
Maybe this works:
ffmpeg -f alsa -r 16000 -i hw:2,0 -f video4linux2 -s 800x600 -i /dev/video0 -r 30 -f avi -vcodec mpeg4 -vtag xvid -sameq -acodec libmp3lame -ab 96k output.avi
(Note: -sameq has since been removed from ffmpeg; on current builds replace it with a quality option such as -q:v 2.)
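For reference, the hw:2,0 in that command means ALSA card 2, device 0; the right card number for a given machine can be read from /proc/asound/cards. A sketch, with sample file contents (the card list is assumed for illustration) hard-coded:

```shell
#!/bin/sh
# Sample /proc/asound/cards contents, hard-coded for illustration;
# in practice: cards=$(cat /proc/asound/cards)
cards=' 0 [PCH            ]: HDA-Intel - HDA Intel PCH
 2 [U0x46d0x821    ]: USB-Audio - USB Device 0x46d:0x821'

# The first field of the matching line is the card number
card=$(printf '%s\n' "$cards" | awk '/U0x46d0x821/ { print $1; exit }')
echo "hw:$card,0"
```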
I have a sound card (Behringer UMC202HD) connected to a Windows 10 computer by USB cable. I am able to receive audio from the input device with the following ffmpeg command:
ffmpeg -f dshow -i audio="IN 1-2 (BEHRINGER UMC 202HD 192k)" -map_channel 0.0.0 -c:a pcm_s24le first_channel.wav -map_channel 0.0.1 -c:a pcm_s24le second_channel.wav
But I can't send audio to the sound card's output with ffmpeg. Is there any way to do this, and if so, how can I do it?
A Linux version (pseudo-command) of what I'm trying to do on Windows:
ffmpeg -i my_input.wav -f alsa alsa.behringer_out
I couldn't find a way to do it with ffmpeg.exe, but I found a simple way with ffplay:
Set the system's output to the sound card in the Windows sound settings, turn on the mono audio option, and simply run this command to send the audio to the output card's channel 1:
ffplay -i input.mp4 -af pan="stereo|c1=c1" -nodisp
For both channel 0 and channel 1:
ffplay -i input.mp4 -af pan="stereo|c0=c0|c1=c1" -nodisp
I want to capture video+audio from a DirectShow device such as a webcam and stream it to an RTMP server. That part is no problem. But the problem is that I also want to be able to see a preview of it. After a lot of searching, someone said to pipe the input to ffplay using the tee muxer, but I couldn't make it work. Here is my command for streaming to the RTMP server; how should I change it?
ffmpeg -rtbufsize 8196k -framerate 25 -f dshow -i video="Microsoft® LifeCam Studio(TM)":audio="Desktop Microphone (Microsoft® LifeCam Studio(TM))" -vcodec libx264 -acodec aac -strict -2 -b:v 1024k -b:a 128k -ar 48000 -s 720x576 -f flv "rtmp://ip-address-of-my-server/live/out"
Here is the final command I used, and it works:
ffmpeg -rtbufsize 8196k -framerate 25 -f dshow -i video="Microsoft® LifeCam Studio(TM)":audio="Desktop Microphone (Microsoft® LifeCam Studio(TM))" -vcodec libx264 -acodec aac -strict -2 -f tee -map 0:v -map 0:a "[f=flv]rtmp://ip-address-and-path|[f=nut]pipe:" | ffplay pipe:
The core command for those running ffmpeg on a Unix-compatible system (e.g. macOS, BSD and GNU/Linux) is really quite simple: redirect, or "pipe", one of the outputs of ffmpeg to ffplay. The main problem here is that ffmpeg cannot autodetect the media format (or container) if the output doesn't have a recognizable file extension such as .avi or .mkv.
Therefore you should specify the format with the option -f. You can list the available choices for option -f with the ffmpeg -formats command.
In the following GNU/Linux command example, we record from an input source named /dev/video0 (possibly a webcam). The input source can also be a regular file.
ffmpeg -i /dev/video0 -f matroska - filename.mkv | ffplay -i -
A less ambiguous way of writing this for non-Unix users would be to use the special output specifier pipe.
ffmpeg -i /dev/video0 -f matroska pipe:1 filename.mkv | ffplay -i pipe:0
The above commands should be enough to produce a preview. But to make sure that you get the video and audio quality you want, you also need to specify, among other things, the audio and video codecs.
ffmpeg -i /dev/video0 -c:v copy -c:a copy -f matroska - filename.mkv | ffplay -i -
If you choose a slow codec like Google's AV1, you'd still get a preview, but one that stutters.
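The plumbing itself can be seen with ordinary shell tools standing in for ffmpeg and ffplay: one process writes the stream, tee saves a copy to a file (playing the role of ffmpeg's second output, filename.mkv above), and a reader consumes the pipe. A sketch, with printf and cat as stand-ins:

```shell
#!/bin/sh
# printf stands in for ffmpeg's stdout output; tee keeps a file copy
# (like the second output, filename.mkv); cat stands in for ffplay
# reading from its stdin.
tmpfile=$(mktemp)
got=$(printf 'stream-bytes\n' | tee "$tmpfile" | cat)
saved=$(cat "$tmpfile")
rm -f "$tmpfile"
echo "$got"
```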
I am looking for a way to record the audio output (speakers) using Windows ffmpeg.
I need to do this WITHOUT installing any extra dshow filters and without having the StereoMix input enabled (since this is not available on many computers).
I have read in the ffmpeg documentation that the -map would allow redirecting an audio output so that ffmpeg sees it as an audio input but I can't find any example of how to do that.
In Linux I managed to do it like this:
ffmpeg -f pulse -ac 2 -ar 44100 -i alsa_output.pci-0000_00_1f.4.analog-stereo.monitor -f pulse -ac 2 -ar 44100 -i alsa_input.pci-0000_00_1f.4.analog-stereo -filter_complex amix=inputs=2 test.mp4
However, I can't find a similar way to do it on Windows or macOS.
So in short: is it possible with Windows ffmpeg to record audio from the speakers without extra dshow filters (out of the box)? The same question goes for macOS.
Thanks!
C:\DIR>choco install ffmpeg
C:\DIR>ffmpeg -list_devices true -f dshow -i dummy
[...]
[dshow # 00000000005d1140] DirectShow audio devices
[dshow # 00000000005d1140] "Microphone Array (Realtek High "
[dshow # 00000000005d1140] Alternative name "#device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\Microphone Array (Realtek High "
[...]
C:\DIR>ffmpeg -f dshow -i audio="Microphone Array (Realtek High " NAME.wav
[...]
[press 'q' to quit]
[...]
To play the file, I figured I needed to apply a work-around for a bug in the Windows SDL output device configuration:
https://trac.ffmpeg.org/ticket/6891
C:\DIR>#rem https://trac.ffmpeg.org/ticket/6891
C:\DIR>#rem set SDL_AUDIODRIVER=directsound or winmm
C:\DIR>set SDL_AUDIODRIVER=winmm
C:\DIR>ffplay -i NAME.wav
It's fun to watch ffplay's real-time spectrogram.
C:\DIR>ffplay -f dshow -i audio="Microphone Array (Realtek High "
[...]
[press 'm' to mute the echo]
[...]
[press 'q' to quit]
[...]
I saw other ways of playing the audio file using the Windows API from Python or its media player.
C:\DIR>type winsound-play.py
import sys, winsound
winsound.PlaySound(sys.argv[1], winsound.SND_FILENAME)
C:\DIR>c:\Python27\python winsound-play.py NAME.wav
C:\DIR>explorer NAME.wav
C:\DIR>"%ProgramFiles%\Windows Media Player\wmplayer.exe" /task NowPlaying %CD%\NAME.wav
I am very new to ffmpeg and just read some examples on how to open a video file and decode its stream.
But is it possible to open a webcam's stream, something like:
http://192.168.1.173:80/live/0/mjpeg.jpg?x.mjpeg
Is there any examples/tutorials on this?
I need to use ffmpeg as decoder to decode the stream in my own Qt based program.
Nyaruko,
First check if your webcam is supported... Do
ffmpeg -y -f vfwcap -i list
Next, for encoding:
ffmpeg -y -f vfwcap -r 25 -i 0 out.mp4
This site has helpful info:
http://www.area536.com/projects/streaming-video/
Best of Luck.
This works for live video streaming:
ffplay -f dshow -video_size 1280x720 -i video0
The other option using ffmpeg is:
ffmpeg -f dshow -video_size 1280x720 -i video0 -f sdl2 -
Both of the above solutions are provided by FFmpeg.
I have a problem capturing video with avconv.
I am using these commands, video0 in shell 0 and video1 in shell 1:
avconv -f video4linux2 -i /dev/video0 video0.avi
avconv -f video4linux2 -i /dev/video1 video1.avi
But when I start the second recording, I get the message:
/dev/video1: No space left on device
Question: is there any possibility of recording two videos simultaneously?
Other:
The first capture of video0.avi works perfectly, but if I interrupt it with Ctrl+C and try to execute the same command, the video is not captured.
This message is displayed in the shell:
uvcvideo: Failed to resubmit video URB (-27)
Is the process still running?
Removing the webcam and reconnecting it makes it work fine the first time.
I ran into the same issue - in my case I resolved it by connecting the webcams to separate USB2 buses. I still cannot make 2 USB webcams work simultaneously on the same bus. I have also found that I must run ffmpeg (now avconv) as root in order to capture and encode both sound and video from both cams simultaneously.
Also, I run this from a bash script, and found I must background one avconv command to run both simultaneously. The script looks like this:
nohup avconv -f video4linux2 -s 640x360 -r 30 -i /dev/video0 -f alsa -ac 2 -i hw:1,0 -acodec libmp3lame -ab 96k -async 1 stream1.mp4 &
P1=$!
avconv -f video4linux2 -s 640x360 -r 30 -i /dev/video1 -f alsa -ac 2 -i hw:2,0 -acodec libmp3lame -ab 96k -async 1 stream2.mp4
kill $P1