Pipe input into ffmpeg stdin

I am trying to use ffmpeg to decode audio data. Loading from a file works, but I would like to avoid files, because that would mean writing a temporary one. Instead, I'd like to pipe in the data (which I've previously loaded) using stdin.
Is this possible?
e.g.,
Manually load mp3 file
Pipe it into the spawned ffmpeg process
Get raw output
(it should work with ffprobe and ffplay also)

ffmpeg has a special pipe flag that instructs the program to consume stdin.
Note that the input format almost always needs to be defined explicitly, since ffmpeg cannot guess it from a file extension when reading from a pipe.
example (output is in PCM signed 16-bit little-endian format):
cat file.mp3 | ffmpeg -f mp3 -i pipe: -c:a pcm_s16le -f s16le pipe:
pipe docs are here
supported audio types are here
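The same pattern works from a program that spawns ffmpeg, which is what the question asks for. A minimal Python sketch; it assumes ffmpeg is on the PATH, and `build_decode_cmd`/`decode_bytes` are hypothetical helper names:

```python
import subprocess

def build_decode_cmd(in_fmt="mp3", out_fmt="s16le", codec="pcm_s16le"):
    """Argument list for decoding stdin to raw PCM on stdout.
    The input format must be given explicitly, because ffmpeg
    cannot guess it when reading from a pipe."""
    return ["ffmpeg", "-f", in_fmt, "-i", "pipe:",
            "-c:a", codec, "-f", out_fmt, "pipe:"]

def decode_bytes(audio_bytes):
    """Feed previously loaded MP3 data to ffmpeg's stdin and
    return the raw PCM it writes to stdout."""
    result = subprocess.run(build_decode_cmd(), input=audio_bytes,
                            capture_output=True, check=True)
    return result.stdout
```

The same approach works with ffprobe (read its stdout instead) and ffplay.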

- is the same as pipe:
I couldn't find where it's documented, and I don't have the patience to check the source, but according to my tests with ffmpeg 4.2.4, - appears to be exactly the same as pipe:. That is, pipe: does what you usually expect from - in other Linux utilities, as mentioned in the documentation of the pipe protocol:
If number is not specified, by default the stdout file descriptor will be used for writing, stdin for reading.
So for example you could rewrite the command from https://stackoverflow.com/a/45902691/895245
ffmpeg -f mp3 -i pipe: -c:a pcm_s16le -f s16le pipe: < file.mp3
a bit more simply as:
ffmpeg -f mp3 -i - -c:a pcm_s16le -f s16le - < file.mp3
Related: What does "dash" - mean as ffmpeg output filename

I don't have enough reputation to add a comment, so...
MrBar's example
ffmpeg -i file.mp3 -c:a pcm_s16le -f s16le pipe: | ffmpeg -f mp3 -i pipe: -c:a pcm_s16le -f s16le encoded.mp3
should read,
ffmpeg -i file.mp3 -c:a pcm_s16le -f s16le pipe: | ffmpeg -f s16le -i pipe: -f mp3 encoded.mp3

Since you have to set the incoming stream's properties, and you may not feel like it, here's an alternative that I've used: a named pipe (fifo), not the pipe: protocol mentioned above.
Here is an example using wget as the stream source, but you can use anything: cat, nc, you name it:
$ mkfifo tmp.mp4 # or any other format
$ wget -O tmp.mp4 https://link.mp4 &
$ ffmpeg -i tmp.mp4 YourFileHere.mp4
Finally you may want to delete the pipe - you remove it like a normal file:
$ rm tmp.mp4
Voila!

Related

update ffmpeg filter without interrupting rtmp stream

I am using ffmpeg to read an rtmp stream, add a filter such as a blur box, and create a different rtmp stream.
The command, for example, looks like:
ffmpeg -i <rtmp_source_url> -filter_complex "split=2[a][b];[a]crop=w=300:h=300:x=0:y=0[c];[c]boxblur=luma_radius=10:luma_power=1[blur];[b][blur]overlay=x=0:y=0[output]" -map [output] -acodec aac -vcodec libx264 -tune zerolatency -f flv <rtmp_output_url>
where rtmp_source_url is where the camera/drone is sending the feed and rtmp_output_url is the resulting video with the blur box.
The blur box needs to move, either because the target moved or the camera did.
I want to do so without interrupting the output stream.
I am using fluent-ffmpeg to create the ffmpeg process while a different part of the program compute where the blur box shall be.
Thanks for your help and time!
Consider using a pipe to split up the processing.
See here - https://ffmpeg.org/ffmpeg-protocols.html#pipe
The accepted syntax is:
pipe:[number]
number is the number corresponding to the file descriptor of the pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If number is not specified, by default the stdout file descriptor will be used for writing, stdin for reading.
For example to read from stdin with ffmpeg:
cat test.wav | ffmpeg -i pipe:0
# ...this is the same as...
cat test.wav | ffmpeg -i pipe:
For writing to stdout with ffmpeg:
ffmpeg -i test.wav -f avi pipe:1 | cat > test.avi
# ...this is the same as...
ffmpeg -i test.wav -f avi pipe: | cat > test.avi
For example, you read an rtmp stream, add a filter such as a blur box, and create a different rtmp stream. So, the first step is to separate the incoming and outgoing streams -
ffmpeg -i <rtmp_source_url> -s 1920x1080 -f rawvideo pipe: | ffmpeg -s 1920x1080 -f rawvideo -y -i pipe: -filter_complex "split=2[a][b];[a]crop=w=300:h=300:x=0:y=0[c];[c]boxblur=luma_radius=10:luma_power=1[blur];[b][blur]overlay=x=0:y=0[output]" -map [output] -acodec aac -vcodec libx264 -tune zerolatency -f flv <rtmp_output_url>
I do not know what criteria you have to vary the blur box, but now you can process the incoming frame in the second ffmpeg. Also, I used 1920x1080 as the video size - you can replace it with the actual size.
For the first iteration, do not worry about the audio; just do your blur operation. As we are feeding rawvideo in this example, the audio will be ignored.
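If you are wiring this up from code rather than a shell (the question mentions spawning ffmpeg via fluent-ffmpeg), the two processes can be connected explicitly. A hedged Python sketch, assuming ffmpeg is on the PATH; the URLs and helper names are placeholders:

```python
import subprocess

SIZE = "1920x1080"  # replace with the actual stream resolution

def decoder_cmd(src_url):
    # First process: decode the rtmp source to raw frames on stdout.
    return ["ffmpeg", "-i", src_url, "-s", SIZE, "-f", "rawvideo", "pipe:"]

def encoder_cmd(dst_url, filtergraph):
    # Second process: read raw frames from stdin, filter, re-encode to flv.
    # The filtergraph is expected to label its final pad [output].
    return ["ffmpeg", "-s", SIZE, "-f", "rawvideo", "-y", "-i", "pipe:",
            "-filter_complex", filtergraph, "-map", "[output]",
            "-acodec", "aac", "-vcodec", "libx264",
            "-tune", "zerolatency", "-f", "flv", dst_url]

def run_pipeline(src_url, dst_url, filtergraph):
    dec = subprocess.Popen(decoder_cmd(src_url), stdout=subprocess.PIPE)
    enc = subprocess.Popen(encoder_cmd(dst_url, filtergraph), stdin=dec.stdout)
    dec.stdout.close()  # the encoder now owns the read end of the pipe
    return enc.wait()
```

Restarting only the second process with a new filtergraph leaves the first one, and the source connection, untouched.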

Ffmpeg change audio file bitrate and pass the output to pipe

I used to change the bitrate of audio files by using
ffmpeg -i input.mp3 -ab 96k output.mp3
and it works perfectly. Now I want to pass the output to a pipe in ffmpeg and perform some other task. I took this documentation as a reference and modified the above ffmpeg command into
ffmpeg -i input.mp3 -ab 96k pipe:1 | aws s3 cp - s3://mybucket/output.mp3
But this doesn't work.
It only works if I use the pipe as below:
ffmpeg -i input.mp3 -f mp3 pipe:1 | aws s3 cp - s3://mybucket/output.mp3
But this doesn't change the bitrate of the audio. Can anyone please help me change the bitrate and pass the output to a pipe?
You have to specify the output format manually. When outputting to file, ffmpeg guesses format based on extension, which can't be done when piping.
Use
ffmpeg -i input.mp3 -ab 96k -f mp3 pipe:1 | aws s3 cp - s3://mybucket/output.mp3

How to specify file type in ffmpeg (-f) for both the input and output?

I'm using ffmpeg to convert the stdin (pipe:0) to stdout (pipe:1).
My input format is "s16le" and my output format is "wav".
How do I specify the two different formats in an ffmpeg command?
I'm also using two different frequencies (-ar), input 44100Hz and output 22050Hz, how do I specify the two different frequencies in an ffmpeg command?
In FFmpeg, the parameters come before the input/output, for that specific input/output.
In your case, your command would look something like:
ffmpeg -sample_rate 44100 -f s16le -i - -ar 22050 -acodec pcm_s16le -f wav -
In this case, -sample_rate 44100 and -f s16le apply to the input, since they come before the input.
-ar 22050, -acodec pcm_s16le, and -f wav apply to the output, since they come after the input but before the output. (Note that -codec copy cannot be used here: a stream copy cannot be resampled, so the audio has to be re-encoded for -ar 22050 to take effect.)
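The placement rule is mechanical: each option binds to the next -i (for inputs) or to the output file that follows it. A small hypothetical Python helper, `ffmpeg_args`, to illustrate how such a command line is assembled:

```python
def ffmpeg_args(input_opts, input_url, output_opts, output_url):
    """Options before -i bind to that input; options between the
    input and an output file bind to that output."""
    return ["ffmpeg", *input_opts, "-i", input_url, *output_opts, output_url]

cmd = ffmpeg_args(
    ["-sample_rate", "44100", "-f", "s16le"],               # input options
    "-",                                                    # read stdin
    ["-ar", "22050", "-acodec", "pcm_s16le", "-f", "wav"],  # output options
    "-",                                                    # write stdout
)
```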

Run ffmpeg multiple commands

I'm using this ffmpeg command to convert mp3 to wav:
ffmpeg -i audio.mp3 -acodec libmp3lame -ab 64k -ar 16000 audio.wav
and this command to create a waveform from audio.wav:
wav2png --foreground-color=ffb400aa --background-color=2e4562ff -o example4.png audio.wav
I would love to know how to run these commands one after another. For example, when the conversion from .mp3 to .wav is done, then run the wav2png command.
Thank You!
You have several options here:
Option 1: Use &&
In Bash you can use an and list to concatenate commands. Each command will be executed one after the other. The and list will terminate when a command fails, or when all commands have been successfully executed.
ffmpeg -i audio.mp3 audio.wav && wav2png -o output.png audio.wav
Using -acodec libmp3lame when outputting to WAV makes no sense, so I removed that.
WAV ignores bitrate options, so I removed -ab.
Do you really need to change the audio rate (-ar)? Removed.
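The and-list behaviour maps to check=True if you drive the tools from a script instead. A hedged Python sketch (`run_all` is a made-up helper; it assumes both programs are on the PATH when you actually call it):

```python
import subprocess

def run_all(commands):
    """Run commands one after another, stopping at the first failure,
    like a Bash `cmd1 && cmd2` and-list."""
    for cmd in commands:
        subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
```

For example: run_all([["ffmpeg", "-i", "audio.mp3", "audio.wav"], ["wav2png", "-o", "output.png", "audio.wav"]]).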
Option 2: Pipe from ffmpeg to wav2png
Instead of making a temporary WAV file you can pipe the output from ffmpeg directly to wav2png:
ffmpeg -i audio.mp3 -f wav - | wav2png -o output.png /dev/stdin
Option 3: Just use ffmpeg
Saving the best for last, you can try the showwavespic filter.
ffmpeg -i music.wav -filter_complex showwavespic=s=640x320 -frames:v 1 showwaves.png
If you want to make a video of the wave form, then try showwaves.
You can see a colored example at Generating a waveform using ffmpeg.

ffmpeg fails with: Unable to find a suitable output format for 'pipe:'

I am trying to record my desktop and save it as videos but ffmpeg fails.
Here is the terminal output:
$ ffmpeg -f alsa -i pulse -r 30 -s 1366x768 -f x11grab -i :0.0 -vcodec libx264 - preset ultrafast -crf 0 -y screencast.mp4
...
Unable to find a suitable output format for 'pipe:'
typo
Use -preset, not - preset (notice the space). ffmpeg uses - to indicate a pipe, so your typo is being interpreted as a piped output.
pipe requires the -f option
For users who get the same error but actually want to output to a pipe: you have to tell ffmpeg which muxer the pipe should use.
Do this with the -f output option. Examples: -f mpegts, -f nut, -f wav, -f matroska. Which one to use depends on your video/audio formats and your particular use case.
You can see a list of muxers with ffmpeg -muxers (not all muxers can be used with pipe).
