With ffmpeg I can capture from a device, transcode the audio/video, and stream it to ffserver.
How can I capture and stream with ffmpeg while showing locally what is captured?
Up to now I've been using VLC to capture and stream to localhost, then ffmpeg to get that stream, transcode it again, and stream to ffserver.
I'd like to do this using ffmpeg only.
Thank you.
Option A: Use ffmpeg with multiple outputs and a separate player:
output 1: copy source without transcoding and pipe it or send it to a local port
output 2: transcode and send to server
Example using ffplay
ffmpeg -f x11grab [grab parameters] -i :0.0 \
[transcode parameters] -f [transcode output] \
-f rawvideo - | ffplay -f rawvideo [grab parameters] -i -
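As a concrete sketch of Option A, assuming a 1280x720 desktop grabbed at 25 fps and a hypothetical RTMP ingest URL standing in for the ffserver feed (swap in your own transcode parameters and server); the second output stays rawvideo so ffplay can show it without decoding:
ffmpeg -f x11grab -framerate 25 -video_size 1280x720 -i :0.0 \
-c:v libx264 -preset veryfast -f flv rtmp://example.com/live/stream \
-f rawvideo -pix_fmt yuv420p - \
| ffplay -f rawvideo -pixel_format yuv420p -video_size 1280x720 -framerate 25 -i -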
Option B: ffmpeg only with OpenGL and an SDL window (requires SDL and --enable-opengl)
ffmpeg -f x11grab [grab parameters] -i :0.0 \
[transcode parameters] -f [transcode output] \
-f opengl "Window title"
You can also use tee separately, which for me was less error prone (I couldn't get aergistal's solution to work):
cat file | tee >(program_1) [...] >(program_n) | destination
In this case:
ffmpeg -i rtsp://url -codec:a aac -b:a 192k -codec:v copy -f mpegts - | \
tee >(ffplay -f mpegts -i -) | \
ffmpeg -y -f mpegts -i - -c copy /path/to/file.mp4
(Tested with ffmpeg v:3.2.11 [current in Debian stable])
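If process substitution (the >( ) syntax) is not available in your shell, ffmpeg's built-in tee muxer can do a similar split in a single process. A minimal sketch, assuming the same RTSP source and output path as above (the tee muxer needs explicit stream mapping, hence -map 0):
ffmpeg -i rtsp://url -map 0 -codec:a aac -b:a 192k -codec:v copy \
-f tee "[f=mpegts]pipe:1|[f=mp4]/path/to/file.mp4" | ffplay -f mpegts -i -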
Related
I am trying to record my desktop through a pipe, but ffmpeg fails.
On Windows:
ffmpeg -filter_complex ddagrab=output_idx=0:framerate=5,hwdownload,format=bgra -c:v libx264 -crf 18 -y pipe:1 | cat > test.mp4
On macOS:
ffmpeg -f avfoundation -framerate 5 -capture_cursor 1 pipe:1 | cat > output.mkv
However, on Windows, this command works:
ffmpeg -f gdigrab -i desktop -f mpegts pipe:1 | cat > out.mp4
It turned out that adding the -f mpegts parameter solved the problem.
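The same fix should apply to the first two commands: when the output is a pipe, ffmpeg cannot infer the container from a file extension, so -f must be given explicitly. A sketch under that assumption (MPEG-TS chosen because plain MP4 writes its index at the end and is awkward to pipe; the avfoundation command also needs an -i, where "1" is a hypothetical screen index, list yours with -list_devices true):
ffmpeg -filter_complex ddagrab=output_idx=0:framerate=5,hwdownload,format=bgra -c:v libx264 -crf 18 -f mpegts -y pipe:1 | cat > test.ts
ffmpeg -f avfoundation -framerate 5 -capture_cursor 1 -i "1" -f mpegts pipe:1 | cat > output.ts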
I am trying to convert a MIDI file to MP3 using fluidsynth and ffmpeg on Windows 10.
fluidsynth -a alsa -T raw -F - "FluidR3Mono_GM.sf3" simple.mid | ffmpeg -ab 192k -f s32le -i simple.mp3
The audio bit rate options -ab 192k and -b:a 192k both produce an error:
You are applying an input option to an output file or vice versa.
Is there an option to specify the bit rate in the above command?
Taken from Convert midi to mp3
Option placement matters with ffmpeg. You're attempting to apply an output option to the input.
ffmpeg [input options] input [output options] output
Corrected command:
fluidsynth -T raw -F - sound_font_file.sf3 input.mid | ffmpeg -y -f s32le -i - -b:a 192k output.mp3
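Note that raw -f s32le input has no header, so ffmpeg assumes a sample rate and channel count; if the result plays at the wrong pitch or speed, declare them on the input side. The 44.1 kHz stereo values below are an assumption matching fluidsynth's usual defaults:
fluidsynth -T raw -F - sound_font_file.sf3 input.mid | ffmpeg -y -f s32le -ar 44100 -ac 2 -i - -b:a 192k output.mp3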
For more info about MP3 encoding with ffmpeg, see FFmpeg Wiki: MP3.
Use timidity and ffmpeg
sudo apt-get install timidity
sudo apt-get install ffmpeg
If you have the file honorthyfather.mid you can choose one of the following.
For MIDI to MP3:
timidity honorthyfather.mid -Ow -o - | ffmpeg -i - -acodec libmp3lame -ab 320k honorthyfather.mp3
For higher quality, use WAV:
timidity honorthyfather.mid -Ow -o - | ffmpeg -i - -acodec pcm_s16le honorthyfather.wav
For the same quality as WAV but a smaller size, use FLAC:
timidity honorthyfather.mid -Ow -o - | ffmpeg -i - -acodec flac honorthyfather.flac
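If you prefer VBR MP3 to a fixed 320k bitrate, libmp3lame's quality scale works too; -q:a 0 below is simply its highest VBR setting, not part of the original answer:
timidity honorthyfather.mid -Ow -o - | ffmpeg -i - -acodec libmp3lame -q:a 0 honorthyfather.mp3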
I am trying to send a YouTube live stream to a UDP destination using youtube-dl and ffmpeg with the command below:
youtube-dl -f best --buffer-size 2M -o - "https://www.youtube.com/watch?v=tkUvWJiTf9A" | ffmpeg -re -f mp4 -i pipe:0 -codec copy -f mpegts udp://192.168.1.107:1234?pkt_size=1316
But it's not working; it just downloads the TS segments of that live stream.
When I try it with a regular YouTube video it works fine with the command below:
youtube-dl -f best --buffer-size 2M -o - "https://www.youtube.com/watch?v=snDI6AaL04g" | ffmpeg -re -f mp4 -i pipe:0 -codec copy -f mpegts udp://192.168.1.107:1234?pkt_size=1316
Any help or suggestion appreciated.
I got it solved using the command below, with Streamlink and ffmpeg. Sharing it so anyone who needs it can refer to it.
streamlink --hls-segment-threads 10 --ringbuffer-size 10M https://www.youtube.com/watch?v=NMre6IAAAiU 140p,worst --stdout | ffmpeg -i pipe:0 -codec copy -bsf:v h264_mp4toannexb -f mpegts udp://192.168.2.7:1234?pkt_size=1316
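The 140p,worst selection there picks a low-quality stream; the same pipeline should also work with a higher-quality selection such as best (the URL and UDP destination are just the ones from the command above):
streamlink --hls-segment-threads 10 --ringbuffer-size 10M https://www.youtube.com/watch?v=NMre6IAAAiU best --stdout | ffmpeg -i pipe:0 -codec copy -bsf:v h264_mp4toannexb -f mpegts udp://192.168.2.7:1234?pkt_size=1316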
I am trying this:
ffmpeg -v verbose -re -y -i syncTest.mp4 -af azmq,volume=1 \
-c:v copy -c:a aac ./output.mp4
then invoke
echo 'Parsed_volume_1 volume 0' | ./zmqsend
It works; audio is muted until I invoke it again with 1.
But with
ffmpeg -v verbose -re -y -i syncTest.mp4 -af \
azmq,adelay=delays=0S:all=1 -c:v copy -c:a aac ./output.mp4
and then doing something like
echo Parsed_adelay_1 delays 20000S | ./zmqsend
echo Parsed_adelay_1 all 1 | ./zmqsend
it does not work; it prints:
78 Function not implemented
Is there really no way to do it?
I use the following command to pipe the FFmpeg output to 2 ffplay instances, but it doesn't work.
ffmpeg -ss 5 -t 10 -i input.avi -force_key_frames 00:00:00.000 -tune zerolatency -s 1920x1080 -r 25 -f mpegts output.ts -f avi -vcodec copy -an - | ffplay -i - -f mpeg2video - | ffplay -i -
How can I pipe the FFmpeg output to 2 (or more) ffplay instances?
I saw this page but it doesn't work for ffplay (it is for Linux but my OS is Windows).
Please help me
Thanks
There's some kind of Tee-Object (alias tee) in PowerShell but I'm not sure if it's similar to the one on Linux. You can try:
ffmpeg -re -i [...] -f mpegts - | tee >(ffplay -) | ffplay -
An alternative is to output to a multicast port on the local subnetwork:
ffmpeg -re -i [...] -f mpegts udp://224.0.0.1:10000
You can then connect as many clients as you require on the same address/port:
ffplay udp://224.0.0.1:10000
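If multicast playback stutters or drops packets, the receiving buffer can be enlarged through the UDP protocol options; the figures below are assumptions to tune rather than recommended values:
ffplay "udp://224.0.0.1:10000?fifo_size=100000&overrun_nonfatal=1"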