I have a sound card (Behringer UMC202HD) connected to a Windows 10 computer by USB cable. I am able to receive audio from the input device with the following ffmpeg command:
ffmpeg -f dshow -i audio="IN 1-2 (BEHRINGER UMC 202HD 192k)" -map_channel 0.0.0 -c:a pcm_s24le first_channel.wav -map_channel 0.0.1 -c:a pcm_s24le second_channel.wav
But I can't send audio to the sound card's output with ffmpeg. Is there a way to do this, and if so, how?
Linux version (pseudo-command) of what I'm trying to do on Windows:
ffmpeg -i my_input.wav -f alsa alsa.behringer_out
I couldn't find a way to do it with ffmpeg.exe, but I found a simple way with ffplay:
Set the system's output device to the sound card in the Windows sound settings and turn on the mono audio option, then simply run this command to send audio to the card's channel 1:
ffplay -i input.mp4 -af pan="stereo|c1=c1" -nodisp
For both channel 0 and channel 1:
ffplay -i input.mp4 -af pan="stereo|c0=c0|c1=c1" -nodisp
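To make the pan mappings concrete, here is a tiny Python sketch (an illustration of the channel arithmetic only, not how ffmpeg is implemented) of what pan="stereo|c1=c1" does to interleaved stereo samples: output channel 1 copies input channel 1, and the unmapped output channel 0 stays silent.

```python
def pan_stereo_c1_only(interleaved):
    """Mimic ffmpeg's pan="stereo|c1=c1" on interleaved stereo samples:
    output c1 copies input c1; the unmapped output c0 is silent."""
    out = []
    for i in range(0, len(interleaved), 2):
        right = interleaved[i + 1]
        out.extend([0, right])  # c0 unmapped -> silence, c1 = c1
    return out

# Two interleaved stereo frames: (L, R), (L, R)
print(pan_stereo_c1_only([100, -200, 300, -400]))  # [0, -200, 0, -400]
```

With Windows' mono audio option enabled, the system then downmixes this to the single channel you hear.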
I'm trying to stream .wav audio files via RTP multicast. I'm using the following command:
ffmpeg -re -i Melody_file.wav -f rtp rtp://224.0.1.211:5001
It successfully initiates the stream. However, the audio comes out very choppy. Any ideas how I can make the audio stream clean? I do not need any video at all.
Here are some examples expanding on the useful comments from #Ralf and #Ahmed about setting asetnsamples and aresample, and also those mentioned in the Snom wiki. Basically, one can get smoother multicast transmission/playback using these approaches for G.711/mulaw audio:
ffmpeg -re -i Melody_file.wav -filter_complex 'aresample=8000,asetnsamples=n=160' -acodec pcm_mulaw -ac 1 -f rtp rtp://224.0.1.211:5001
Or, using the higher-quality G.722 audio codec:
ffmpeg -re -i Melody_file.wav -filter_complex 'aresample=16000,asetnsamples=n=160' -acodec g722 -ac 1 -f rtp rtp://224.0.1.211:5001
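The asetnsamples values line up with common RTP packetization: 160 samples at 8 kHz is the standard 20 ms G.711 packet, and 160 samples at 16 kHz is a 10 ms packet. A quick sanity check of that arithmetic:

```python
def packet_ms(samples_per_packet, sample_rate_hz):
    """Duration of one RTP audio packet in milliseconds."""
    return 1000.0 * samples_per_packet / sample_rate_hz

print(packet_ms(160, 8000))   # G.711/mulaw chain above -> 20.0 ms
print(packet_ms(160, 16000))  # G.722 chain above -> 10.0 ms
```

Evenly sized, small packets like these are what keeps the receiver's jitter buffer fed and the playback smooth.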
I have a Wi-Fi camera that uses the RTSP/ONVIF protocol, and after reading the FFmpeg docs and some threads on Google I am trying to broadcast the stream to YouTube. So I started a broadcast on YouTube and executed this ffmpeg command on my computer:
ffmpeg -f lavfi -i anullsrc -rtsp_transport udp -i rtsp://200.193.21.176:6002/onvif1 -tune zerolatency -vcodec libx264 -t 12:00:00 -pix_fmt + -c:v copy -c:a aac -strict experimental -f flv rtmp://x.rtmp.youtube.com/live2/private_key
The command above looks like it's correct, since it constantly produces progress output.
The problem is that YouTube still says I am offline. Why?
Try replacing the first part with ffmpeg -re -i somefile.mp4; that way you will know whether the problem is with your camera or not.
ffmpeg and VLC are very similar and even use the same code for codecs, but they handle RTSP differently. Try just ffmpeg -i rtsp://200.193.21.176:6002/onvif1 and nothing more as the source.
I am looking for a way to record the audio output (speakers) using Windows ffmpeg.
I need to do this WITHOUT installing any extra dshow filters and without having the Stereo Mix input enabled (since it is not available on many computers).
I have read in the ffmpeg documentation that -map would allow redirecting an audio output so that ffmpeg sees it as an audio input, but I can't find any example of how to do that.
In Linux I managed to do it like this:
ffmpeg -f pulse -ac 2 -ar 44100 -i alsa_output.pci-0000_00_1f.4.analog-stereo.monitor -f pulse -ac 2 -ar 44100 -i alsa_input.pci-0000_00_1f.4.analog-stereo -filter_complex amix=inputs=2 test.mp4
However, I can't find a similar way to do it on Windows or macOS.
So in short, is it possible with Windows ffmpeg to record audio from the speakers without extra dshow filters (out of the box)? The same question goes for macOS.
Thanks!
C:\DIR>choco install ffmpeg
C:\DIR>ffmpeg -list_devices true -f dshow -i dummy
[...]
[dshow @ 00000000005d1140] DirectShow audio devices
[dshow @ 00000000005d1140] "Microphone Array (Realtek High "
[dshow @ 00000000005d1140] Alternative name "#device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\Microphone Array (Realtek High "
[...]
C:\DIR>ffmpeg -f dshow -i audio="Microphone Array (Realtek High " NAME.wav
[...]
[press 'q' to quit]
[...]
To play the file, I figured I needed to apply a workaround for a bug in the Windows SDL output device configuration:
https://trac.ffmpeg.org/ticket/6891
C:\DIR>rem https://trac.ffmpeg.org/ticket/6891
C:\DIR>rem set SDL_AUDIODRIVER=directsound or winmm
C:\DIR>set SDL_AUDIODRIVER=winmm
C:\DIR>ffplay -i NAME.wav
It's fun to watch ffplay's real-time spectrogram.
C:\DIR>ffplay -f dshow -i audio="Microphone Array (Realtek High "
[...]
[press 'm' to mute the echo]
[...]
[press 'q' to quit]
[...]
There are other ways of playing the audio file, using the Windows API from Python or Windows' own media players.
C:\DIR>type winsound-play.py
import sys, winsound
winsound.PlaySound(sys.argv[1], winsound.SND_FILENAME)
C:\DIR>c:\Python27\python winsound-play.py NAME.wav
C:\DIR>explorer NAME.wav
C:\DIR>"%ProgramFiles%\Windows Media Player\wmplayer.exe" /task NowPlaying %CD%\NAME.wav
Using arecord -L, I am able to identify my Logitech USB webcam as:
hw:CARD=U0x46d0x821,DEV=0
USB Device 0x46d:0x821, USB Audio
Direct hardware device without any conversions
plughw:CARD=U0x46d0x821,DEV=0
USB Device 0x46d:0x821, USB Audio
Hardware device with all software conversions
When I go into /dev/snd/by-id, the webcam is described as:
usb-046d_0821_6813BFD0-00 -> ../controlC1
I know that the command to use a sound device in ffmpeg is
ffmpeg -f alsa -i $ALSA_DEVICE_NAME..
I have tried
ffmpeg -f alsa -i "hw:CARD=U0x46d0x821,DEV=0"
and
ffmpeg -f alsa -i "plughw:CARD=U0x46d0x821,DEV=0"
and in both cases I receive the same error message:
ALSA lib pcm.c:2208:(snd_pcm_open_noupdate) Unknown PCM hw=CARD=U0x46d0x821,DEV=0
[alsa # 0x9c96580] cannot open audio device hw=CARD=U0x46d0x821,DEV=0 (No such file or directory)
hw:CARD=U0x46d0x821,DEV=0: Input/output error
I have also tried:
ffmpeg -f alsa -i "usb-046d_0821_6813BFD0-00"
and
ffmpeg -f alsa -i "usb-046d_0821_6813BFD0-00,DEV=0"
and still received an error message.
Could you please help me formulate the correct command?
I have finally been able to use the sound portion of the webcam under ffmpeg. The fix is NOT to enclose the hardware value in quotes:
ffmpeg -f alsa -i plughw:CARD=U0x46d0x821,DEV=0
instead of:
ffmpeg -f alsa -i "plughw:CARD=U0x46d0x821,DEV=0"
I hope this helps someone else.
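When in doubt about the exact device string, it can help to pull the specifier names out of `arecord -L` output programmatically: the specifiers are the non-indented lines, and everything indented under them is a description. A minimal sketch (the listing text is the example from the question above):

```python
ARECORD_L = """\
hw:CARD=U0x46d0x821,DEV=0
    USB Device 0x46d:0x821, USB Audio
    Direct hardware device without any conversions
plughw:CARD=U0x46d0x821,DEV=0
    USB Device 0x46d:0x821, USB Audio
    Hardware device with all software conversions
"""

def alsa_devices(listing):
    """Return the device specifiers (non-indented lines) from `arecord -L` output."""
    return [line for line in listing.splitlines() if line and not line[0].isspace()]

print(alsa_devices(ARECORD_L))
# ['hw:CARD=U0x46d0x821,DEV=0', 'plughw:CARD=U0x46d0x821,DEV=0']
```

Each returned string can then be passed as-is (unquoted) to ffmpeg's -i option.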
Maybe this works:
ffmpeg -f alsa -ar 16000 -i hw:2,0 -f video4linux2 -s 800x600 -i /dev/video0 -r 30 -f avi -vcodec mpeg4 -vtag xvid -sameq -acodec libmp3lame -ab 96k output.avi
I'm trying to stream a .ts file containing H.264 and AAC as an RTP stream to an Android device.
I tried:
.\ffmpeg -fflags +genpts -re -i 1.ts -vcodec copy -an -f rtp rtp://127.0.0.1:10000 -vn -acodec copy -f rtp rtp://127.0.0.1:20000 -newaudio
FFmpeg displays what should be in your SDP file; I copied this into an SDP file and tried playing it with VLC and ffplay. VLC plays the audio but just gives errors about bad NAL unit types for the video. ffplay doesn't play anything.
My best guess is that the FFmpeg H.264 RTP implementation is broken, or at least that it doesn't work in video pass-through mode (i.e. with -vcodec copy).
I need a fix for FFmpeg or an alternative simple open-source solution. I don't want to install FFmpeg on my Android client.
Thanks.
Have you tried VLC? I once used VLC for streaming. You can have a look here.
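As a side note on the SDP step above: before feeding the copied session description to VLC or ffplay, it can be worth checking what payload types it actually declares. A minimal Python sketch (the SDP text here is a made-up example, not ffmpeg's actual output for this stream):

```python
SDP = """\
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 127.0.0.1
m=video 10000 RTP/AVP 96
a=rtpmap:96 H264/90000
m=audio 20000 RTP/AVP 97
a=rtpmap:97 MPEG4-GENERIC/44100/2
"""

def rtpmaps(sdp):
    """Collect payload-type -> codec mappings from a=rtpmap lines."""
    result = {}
    for line in sdp.splitlines():
        if line.startswith("a=rtpmap:"):
            payload, codec = line[len("a=rtpmap:"):].split(" ", 1)
            result[int(payload)] = codec
    return result

print(rtpmaps(SDP))  # {96: 'H264/90000', 97: 'MPEG4-GENERIC/44100/2'}
```

If the player's errors mention a payload type that is missing from these mappings, the SDP was copied incompletely.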