My problem comes from having two different streams for video playback that I have to mux in real time, in memory: one for video and another for audio.
My goal is to create a proxy that can mux two different webm streams from their URLs while supporting range requests (which requires knowing the encoded file size). Would this be possible?
This is how I mux the audio and video streams manually using ffmpeg:
ffmpeg -i video.webm -i audio.webm -c copy output.webm
But this requires me to download the video fully before processing it, which I unfortunately don't want to do.
Thanks in advance!
If you are looking for this to work in Go, you can look into
github.com/at-wat/ebml-go/webm
It provides a BlockWriter interface for writing a webm file from buffers; see the test files in the repo for how to use it:
https://github.com/at-wat/ebml-go
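A minimal sketch of what writing two tracks with this package looks like, modeled on the library's README; the codec IDs, track settings, and the commented-out frame source are assumptions here, not something from the question:

package main

import (
	"log"
	"os"

	"github.com/at-wat/ebml-go/webm"
)

func main() {
	out, err := os.Create("output.webm")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// One BlockWriteCloser per track is returned, in the order given.
	ws, err := webm.NewSimpleBlockWriter(out,
		[]webm.TrackEntry{
			{
				Name:        "Video",
				TrackNumber: 1,
				TrackUID:    12345,
				CodecID:     "V_VP8", // assumed codec
				TrackType:   1,
				Video:       &webm.Video{PixelWidth: 320, PixelHeight: 240},
			},
			{
				Name:        "Audio",
				TrackNumber: 2,
				TrackUID:    54321,
				CodecID:     "A_OPUS", // assumed codec
				TrackType:   2,
				Audio:       &webm.Audio{SamplingFrequency: 48000.0, Channels: 2},
			},
		})
	if err != nil {
		log.Fatal(err)
	}
	defer ws[0].Close()
	defer ws[1].Close()

	// Feed demuxed frames from your two source streams here, e.g.:
	//   ws[0].Write(isKeyframe, timestampMs, videoFrame)
	//   ws[1].Write(true, timestampMs, audioFrame)
}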
Check out ffmpeg pipes.
Also, since you have tagged go, I'm assuming you will use os/exec, in which case also check out Cmd.ExtraFiles. This lets you use additional pipes (files) beyond the standard descriptors 0, 1, and 2.
So let's say you have a stream for video and one for audio piping to 3 and 4 respectively. The ffmpeg bit of your command becomes:
ffmpeg -i pipe:3 -i pipe:4 -c copy output.webm
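A minimal sketch of that wiring in Go, assuming the two sources are plain HTTP URLs and the muxed result should go to stdout (the URLs and the output handling are assumptions):

package main

import (
	"io"
	"log"
	"net/http"
	"os"
	"os/exec"
)

// pipeFromURL copies an HTTP response body into the write end of an
// os.Pipe and returns the read end for ffmpeg to consume.
func pipeFromURL(url string) (*os.File, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	r, w, err := os.Pipe()
	if err != nil {
		resp.Body.Close()
		return nil, err
	}
	go func() {
		defer resp.Body.Close()
		defer w.Close()
		io.Copy(w, resp.Body)
	}()
	return r, nil
}

func main() {
	video, err := pipeFromURL("https://example.com/video.webm") // hypothetical URL
	if err != nil {
		log.Fatal(err)
	}
	audio, err := pipeFromURL("https://example.com/audio.webm") // hypothetical URL
	if err != nil {
		log.Fatal(err)
	}

	cmd := exec.Command("ffmpeg",
		"-i", "pipe:3", // ExtraFiles[0]
		"-i", "pipe:4", // ExtraFiles[1]
		"-c", "copy",
		"-f", "webm",
		"pipe:1", // muxed output on stdout
	)
	// ExtraFiles entry i becomes file descriptor 3+i in the child process.
	cmd.ExtraFiles = []*os.File{video, audio}
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}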
I am trying to set up an RTSP stream that can be accessed from an application. I have been experimenting with ffmpeg to realize that. I have succeeded insofar as I was able to stream from ffmpeg to ffplay, but I could not load the stream in VLC, for example. Here are the calls I made from two different shells on the same machine:
ffmpeg.exe -y -loop 1 -r 24 -i test_1.jpg -vcodec libx264 -tune stillimage -f rtsp rtsp://127.0.0.1:1234/stream.sdp
ffplay.exe -rtsp_flags listen rtsp://127.0.0.1:1234/stream.sdp
Can anybody explain what I would have to do to load the stream as a network stream in VLC? Any help is appreciated.
I have done this before, and I'm not sure what was wrong with the RTSP output of ffmpeg. What I can say right now is: please consider using the Live555 library for any streaming scenario, because the ffmpeg code for the RTP muxer is not good and is buggy. ffmpeg has another solution for a streaming server, called ffserver, which prepares an ffmpeg pipe for VLC or another third-party application, and that is badly written and buggy too. The libav group (the other fork of the libav* libraries) never used the ffserver code, and I'm not sure they have any plan to adopt ffserver as their solution; they have ffplay (avplay), ffmpeg (avconv), and ffprobe, but no ffserver.
If you want to use Live555, which is really easy, just go to their website (www.live555.com), download the source code, and build the MediaServer application (it is in the 'MediaServer' folder). If you read the code's documentation, I'm sure you won't have any problems. It's a basic RTSP server that streams any (supported) file accessible on your HDD via an rtsp URL on your server.
If you have any problems with the code, just comment here, and I can help you more with Live555.
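For reference, a rough sketch of getting the Live555 media server built and running; this assumes a Linux build, and the genMakefiles config name varies by platform:

$ wget http://www.live555.com/liveMedia/public/live555-latest.tar.gz
$ tar xzf live555-latest.tar.gz
$ cd live
$ ./genMakefiles linux
$ make
$ cd mediaServer && ./live555MediaServer

The server announces the rtsp:// base URL it is serving at startup, and supported files in its working directory become streamable under that URL.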
How can I transfer metadata using FFmpeg or other command-line tools?
I'm trying to encode video/audio, and since the files already contain metadata, I obviously want to preserve it in my new files.
By the way, since I'm using MediaMonkey as my main player, there is also some custom metadata; that is the part that won't transfer.
For video, the output file is mp4/mkv (using x264).
For audio, the output file is m4a (using neroAac).
Thank You!
PS: Which container is best for neroAac and x264? I can't seem to edit mkv metadata (when I remove a file from the MediaMonkey playlist, the tags are all gone), though mp4 is fine; and I can't seem to play bare AAC, although it's fine when muxed into video.
Copy all custom and global metadata tag information using the following command:
ffmpeg -i <inputfile> -movflags use_metadata_tags -c copy <outputfile>
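For example, with hypothetical filenames, and -c copy so the streams themselves are untouched:

ffmpeg -i input.mp4 -movflags use_metadata_tags -c copy output.mp4

Note that -movflags is an option of the mov/mp4 muxer family, so use_metadata_tags applies when the output is mp4/mov/m4a.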
Recently I had a task to use ffmpeg as a transcoding as well as a streaming tool. The task was to convert a file from a given format to MP4 and immediately stream it by capturing it from stdout. So far so good. The streaming works well with the native player on Android tablets as well as with the VLC player. The issue is with the Flash player. It gives the following error:
NetStream.Play.FileStructureInvalid : Adobe Flash cannot import files that have invalid file structures.
The ffmpeg flags used are:
$ ffmpeg -loglevel quiet -i somefile.avi -vbsf h264_mp4toannexb -vcodec libx264 \
-acodec aac -f MP4 -movflags frag_keyframe+empty_moov -re - 2>&1
As noted in the docs for -movflags
The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4 file has all the metadata about all packets stored in one location (written at the end of the file, it can be moved to the start for better playback using the qt-faststart tool). A fragmented file consists of a number of fragments, where packets and metadata about these packets are stored together. Writing a fragmented file has the advantage that the file is decodable even if the writing is interrupted (while a normal MOV/MP4 is undecodable if it is not properly finished), and it requires less memory when writing very long files (since writing normal MOV/MP4 files stores info about every single packet in memory until the file is closed). The downside is that it is less compatible with other applications.
Either switch to a flash player that can handle fragmented MP4 files, or use a different container format that supports streaming better.
Also, -re is an input-only option, so it would make more sense to specify it before the input, instead of before the output.
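For instance, a sketch of the same pipeline with an FLV container, which Flash players handle natively (the flags are adapted from the question, with -re moved before the input):

$ ffmpeg -re -loglevel quiet -i somefile.avi -vcodec libx264 -acodec aac -f flv - 2>&1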
I am trying to use avs2yuv to pipe avs output to ffmpeg for further conversion.
My video file is called "sample.avi" (No sound, just video)
My audio file is called "sample.wav"
My avs file is called sample.avs, and looks like one of these:
V = AviSource("sample.avi")
A = WavSource("sample.wav")
AudioDub(V, A)
or
V = DirectShowSource("sample.avi")
A = DirectShowSource("sample.wav")
AudioDub(V, A)
Here is how I pipe:
avs2yuv sample.avs - | ffmpeg -y -f yuv4mpegpipe -i - output.mp4
Now here is the PROBLEM: no matter what files I try as input, there is NO SOUND in my output. I do not understand what I am doing wrong and why my audio does not make it to the output. If anyone has experience with AviSynth and avs2yuv, your help would be GREATLY appreciated.
Thank you!
I would try playing your avs file with ffplay to check that the script itself is OK.
You can also try building graphs with GraphEdit and referencing them like this:
A = DirectShowSource("sample_audio.grf", video=false)
V = DirectShowSource("sample_video.grf", audio=false)
AudioDub(V ,A)
With DirectShowSource you can add several parameters like fps, frame count, etc.; sometimes that helps.
Good Luck
As per this link:
Avs2YUV is a command-line program, intended for use under Wine, to
interface between Avisynth and Linux-based video tools.
avs2yuv.exe only handles the video stream, which it outputs in a YUV color-space. It is that simple: the audio stream is ignored.
Here are some ways to process both the audio and video streams of a .avs. These methods work on Linux using Wine, and of course also work on Windows:
Encode in Avidemux via AvsProxy (AvsProxy ships with Avidemux)
Use VirtualDub as the encoder GUI
Otherwise, encode the audio separately, then mux it with the video in a separate step (for example, as sketched below).
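As a sketch of that last option, the muxing can even happen in the same ffmpeg invocation that consumes the avs2yuv pipe, pulling the audio straight from the original wav (filenames taken from the question):

avs2yuv sample.avs - | ffmpeg -y -f yuv4mpegpipe -i - -i sample.wav -c:v libx264 -c:a aac output.mp4

ffmpeg's default stream selection picks the video from the first input and the audio from the second here, since each input carries only one stream.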
I believe avs2pipe can handle both the video and audio streams from a .avs, but I haven't tried it yet. Here is a link to some info about avs2pipe.
Summary: Using avs2yuv mainly makes sense in a Linux/Unix environment.
Try makeAVIS.exe from the ffdshow package:
wine makeavis.exe -p -i example.avs -a output.wav
I have a situation where I need to pull a stream from one Wowza media server and publish it to a Red5 or Flash Media Server instance with FFmpeg. Is there a command to do this? I'm essentially looking for something like this:
while [ true ]; do
ffmpeg -i rtmp://localhost:2000/vod/streamName.flv rtmp://localhost:1935/live/streamName
done
Is this currently possible with FFmpeg? I remember reading about something like this, but I can't remember exactly how to do it.
Yes. An example (pulling from a local server, publishing to a local server):
$ ffmpeg -analyzeduration 0 -i "rtmp://localhost/live/b live=1" -f flv rtmp://localhost:1936/live/c
-analyzeduration 0 is there to make the stream start faster. You can also add other parameters in there to reencode, etc., if desired.
Try a command of this form:
$ ffmpeg -i "[InputSourceAddress]" -f [OutputFileFormat] "[OutputSourceAddress]"
The input source address can be an rtmp URL, or rtsp/m3u8/etc.
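For example, with hypothetical server addresses, copying the streams without reencoding:

$ ffmpeg -i "rtmp://source.example.com/live/streamName" -c copy -f flv "rtmp://dest.example.com/live/streamName"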