Piping avs to ffmpeg using avs2yuv - ffmpeg

I am trying to use avs2yuv to pipe avs output to ffmpeg for further conversion.
My video file is called "sample.avi" (No sound, just video)
My audio file is called "sample.wav"
My avs file is called sample.avs, and looks like one of these:
V = AviSource("sample.avi")
A = WavSource("sample.wav")
AudioDub(V, A)
or
V = DirectShowSource("sample.avi")
A = DirectShowSource("sample.wav")
AudioDub(V, A)
Here is how I pipe:
avs2yuv sample.avs - | ffmpeg -y -f yuv4mpegpipe -i - output.mp4
Now here is the PROBLEM: no matter what files I try as input, there is NO SOUND in my output. I do not understand what I am doing wrong and why my audio does not make it to the output. If anyone has experience with avisynth and avs2yuv, your help would be GREATLY appreciated.
Thank you!

I would try to play your avs file with ffplay in order to check that the script itself works.
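For example (assuming your ffplay build was compiled with AviSynth support):
ffplay sample.avs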
You can also try building graphs with GraphEdit and referencing them like this:
A = DirectShowSource("sample_audio.grf", video=false)
V = DirectShowSource("sample_video.grf", audio=false)
AudioDub(V, A)
With DirectShowSource you can add several parameters like fps, frame count, etc. Sometimes it helps.
Good Luck

As per this link:
Avs2YUV is a command-line program, intended for use under Wine, to
interface between Avisynth and Linux-based video tools.
avs2yuv.exe only handles the video stream, which it outputs in a YUV color-space. It is that simple: the audio stream is ignored.
Here are some ways to process both audio and video streams from a .avs file. These methods work in Linux using Wine, and of course also work in Windows:
Encode in Avidemux via AvsProxy (AvsProxy ships with Avidemux)
Use VirtualDub as the encoder GUI
Otherwise, encode the audio separately, then mux it with the video in a separate step (see the sketch below).
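A minimal sketch of that last approach, using the pipeline from the question (the codec choices are just an example):
avs2yuv sample.avs - | ffmpeg -y -f yuv4mpegpipe -i - -c:v libx264 -an video_only.mp4
ffmpeg -y -i video_only.mp4 -i sample.wav -c:v copy -c:a aac output.mp4
The first command encodes only the video stream coming out of the script; the second muxes in the WAV track, encoding just the audio.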
I believe avs2pipe can handle both video and audio streams from a .avs, but I haven't tried it yet. Here is a link to some info about avs2pipe
Summary: Using avs2yuv mainly makes sense in a Linux/Unix environment.

Try makeAVIS.exe from the ffdshow package:
wine makeavis.exe -p -i example.avs -a output.wav

Related

Realtime Muxing of videos

My problem basically comes from having two different streams for video playback that I have to mux in real time in memory: one for video and another for audio.
My goal is to create a proxy that can mux two different webm streams from their URLs while supporting range requests (which requires knowing the encoded file size). Would this be possible?
This is how I mux the audio and video streams manually using ffmpeg:
ffmpeg -i video.webm -i audio.webm -c copy output.webm
But this requires me to fully download the video before processing it, which unfortunately I don't want to do.
Thanks in advance!
If you are looking for this to work in Go, you can look into
github.com/at-wat/ebml-go/webm
This provides a BlockWriter interface for writing to a webm file using buffers; you can look at the test files to see how to use it:
https://github.com/at-wat/ebml-go
Check out ffmpeg pipes.
Also, since you have tagged go, I'm assuming you will use os/exec, in which case also check out Cmd.ExtraFiles. This lets you use additional pipes (files) beyond the standard 0, 1, and 2.
So let's say you have a stream for video and one for audio, piping to file descriptors 3 and 4 respectively. The ffmpeg bit of your command becomes:
ffmpeg -i pipe:3 -i pipe:4 -c copy output.webm
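A minimal sketch of that wiring in Go; the file readers in main are just stand-ins for your real network streams:

package main

import (
	"io"
	"os"
	"os/exec"
)

// muxStreams feeds two readers to ffmpeg on fds 3 and 4 and stream-copies them into out.
func muxStreams(video, audio io.Reader, out string) error {
	videoR, videoW, err := os.Pipe()
	if err != nil {
		return err
	}
	audioR, audioW, err := os.Pipe()
	if err != nil {
		return err
	}

	cmd := exec.Command("ffmpeg", "-y", "-i", "pipe:3", "-i", "pipe:4", "-c", "copy", out)
	// ExtraFiles[0] becomes fd 3 in the child, ExtraFiles[1] becomes fd 4.
	cmd.ExtraFiles = []*os.File{videoR, audioR}
	cmd.Stderr = os.Stderr

	if err := cmd.Start(); err != nil {
		return err
	}
	// The parent no longer needs the read ends; the child holds its own copies.
	videoR.Close()
	audioR.Close()

	// Pump each stream into its pipe and close it so ffmpeg sees EOF.
	go func() { io.Copy(videoW, video); videoW.Close() }()
	go func() { io.Copy(audioW, audio); audioW.Close() }()

	return cmd.Wait()
}

func main() {
	v, _ := os.Open("video.webm") // stand-in for a network stream
	a, _ := os.Open("audio.webm")
	defer v.Close()
	defer a.Close()
	if err := muxStreams(v, a, "output.webm"); err != nil {
		panic(err)
	}
}

Note that per the os/exec docs, Cmd.ExtraFiles does not work on Windows.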

Setting up rtsp stream on Windows

I am trying to set up an rtsp stream that can be accessed from an application. I have been experimenting with ffmpeg to realize that. I have succeeded as far as streaming from ffmpeg to ffplay, but I could not load the stream in VLC, for example. Here are the calls that I made from two different shells on the same machine:
ffmpeg.exe -y -loop 1 -r 24 -i test_1.jpg -vcodec libx264 -tune stillimage -f rtsp rtsp://127.0.0.1:1234/stream.sdp
ffplay.exe -rtsp_flags listen rtsp://127.0.0.1:1234/stream.sdp
Can anybody explain to me what I would have to do to load the stream as a network stream using vlc? Any help is appreciated.
I have done this before, and I'm not sure what was wrong with the rtsp output of ffmpeg. But what I can say right now is: please consider using the Live555 library for any streaming scenario, because the ffmpeg code (for the rtp muxer) is not good and is buggy. ffmpeg has another solution for a streaming server, called ffserver, which prepares an ffmpeg pipe for VLC or another third-party application, and that is badly written and buggy too. The libav group (another fork of the libav* libraries) never used the ffserver code, and I'm not sure they have any plan to adopt ffserver as their solution; they have ffplay (avplay), ffmpeg (avconv), and ffprobe, but no ffserver.
If you want to use Live555, which is really easy, you just have to go to their website (www.live555.com), download the source code, and build the MediaServer application (it is in the 'mediaServer' folder). If you read the code's documentation, I'm sure you will not have any problem. It's a basic rtsp server for streaming any (supported) file on your HDD via an rtsp URL of your server.
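For reference, a rough sketch of the Unix-style build (on Windows you would use the project files shipped with the source instead):
wget http://www.live555.com/liveMedia/public/live555-latest.tar.gz
tar xzf live555-latest.tar.gz
cd live
./genMakefiles linux
make
./mediaServer/live555MediaServer
Once the server is running, open rtsp://<server-ip>/<filename> as a network stream in VLC.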
If you have any problem with the code, just comment here so I can help you more with Live555.

Transfer custom (all) metadata using ffmpeg

How can I transfer metadata using FFmpeg or other command-line tools?
I'm trying to encode video/audio, and since the source files already have metadata inside, I obviously want to preserve it in my new files.
By the way, since I'm using MediaMonkey as my main player, there is also some custom metadata; this is the part that won't transfer.
For video, the output file uses mp4/mkv (encoded with x264).
For audio, the output file uses m4a (encoded with neroAac).
Thank You!
P.S. Which container is best for neroAac and x264? I can't seem to edit mkv metadata (when I remove files from the MediaMonkey playlist, the tags are all gone), though mp4 is fine; and I can't seem to play raw AAC, although it's fine when muxed into video.
Copy all custom and global metadata tag information using the following command:
ffmpeg -i <inputfile> -movflags use_metadata_tags -c copy <outputfile>
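To verify that the tags made it across, you can dump the output's global metadata with ffprobe (output.mp4 here stands in for your actual output file):
ffprobe -v quiet -show_format output.mp4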

DirectShow Capture Source and FFMPEG

I have an AJA Capture card. The drivers installed with the card include a DirectShow filter; popping that filter into GraphEdit shows its video and audio pins.
If I run the ffmpeg command
ffmpeg -f dshow -list_options true -i video="AJA Capture Source"
I see
[dshow @ 0034eec0] DirectShow video device options
[dshow @ 0034eec0] Pin "Video"
[dshow @ 0034eec0] pixel_format=yuyv422 min s=720x486 fps=27.2604 max s=1024x486 fps=29.985
...
[dshow @ 0034eec0] Pin "Audio 1-2"
[dshow @ 0034eec0] Pin "Line21"
video=AJA Capture Source: Immediate exit requested
So I see the Video and Audio pins I need. But when I try to run an ffmpeg command to capture both, I can only figure out how to do the video part. How do I hook into that audio pin? It seems all the examples and documentation point to using a separate audio device, and say nothing about hooking into the pins. I'm running it from a batch file for now, like this, using ^ to break the lines:
ffmpeg.exe ^
-y ^
-rtbufsize 100M ^
-f dshow ^
-i video="AJA Capture Source" ^
-t 00:00:10 ^
-aspect 16:9 ^
-c:v libx264 ^
"C:\VCS_AUD_SAMPLE.mp4"
Again, the command above will get me some beautiful video, but I can't figure out the audio part. Is this even supported in ffmpeg or am I going to have to modify the ffmpeg dshow code?
I am the developer of this filter.
Actually the same device is used for both audio and video streams; moreover, the data for both streams are the result of one function call. The division into separate audio and video filters in other cards (DeckLink, for example) is artificial, since they must be internally connected. A possible reason for the division is an attempt to simplify the graph. However, this can lead to other problems (using streams from different devices).
Why ffmpeg can't work with pins of the same filter is not clear to me. That is a problem for the ffmpeg developers.
Regarding single-instance access: a very old version of the AJA Capture Source filter is being used. A more recent version of the filter allows you to create multiple instances simultaneously (but only one instance may be in the "Play" state). Please check the AJA site to download the latest versions of the filters. If you would like to try the latest beta versions of the AJA filters, please write to me at support#avobjects.com
So after tracing through the FFmpeg source code, it was deemed that it could not hook up to multiple pins on a dshow source; so instead of modifying the FFmpeg source, we piped the AJA source pins through two virtual capture sources to achieve the desired result.
OK, support for this was (hopefully) added recently in FFmpeg dshow; you can specify ffmpeg -f dshow -i video="AJA Capture Source":audio="AJA Capture Source" now and it works.
There are even new parameters for selecting which pin you want to use, if you need them. https://www.ffmpeg.org/ffmpeg-devices.html#dshow
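Applied to the batch file from the question, that would look something like this (the -c:a aac choice is just an example; pick whatever audio codec you need):
ffmpeg.exe ^
-y ^
-rtbufsize 100M ^
-f dshow ^
-i video="AJA Capture Source":audio="AJA Capture Source" ^
-t 00:00:10 ^
-aspect 16:9 ^
-c:v libx264 ^
-c:a aac ^
"C:\VCS_AUD_SAMPLE.mp4"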
If it doesn't work for somebody/anybody please let me know rogerdpack#gmail.com or comment here.
From http://ffmpeg.org/trac/ffmpeg/wiki/DirectShow:
Note that "The input string is in the format video=<video device name>:audio=<audio device name>".
So try
ffmpeg.exe -f dshow -i "video=AJA Capture Source:audio=audio source name"

Live transcoding and streaming of MP4 works in Android but fails in Flash player with NetStream.Play.FileStructureInvalid error

Recently I had a task to use ffmpeg as a transcoding as well as a streaming tool. The task was to convert a file from a given format to MP4 and immediately stream it, by capturing it from stdout. So far so good. The streaming works well with the native player of Android tabs as well as the VLC player. The issue is with the Flash player. It gives the following error:
NetStream.Play.FileStructureInvalid : Adobe Flash cannot import files that have invalid file structures.
The ffmpeg flags used are:
$ ffmpeg -loglevel quiet -i somefile.avi -vbsf h264_mp4toannexb -vcodec libx264 \
-acodec aac -f MP4 -movflags frag_keyframe+empty_moov -re - 2>&1
As noted in the docs for -movflags
The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4 file has all the metadata about all packets stored in one location (written at the end of the file, it can be moved to the start for better playback using the qt-faststart tool). A fragmented file consists of a number of fragments, where packets and metadata about these packets are stored together. Writing a fragmented file has the advantage that the file is decodable even if the writing is interrupted (while a normal MOV/MP4 is undecodable if it is not properly finished), and it requires less memory when writing very long files (since writing normal MOV/MP4 files stores info about every single packet in memory until the file is closed). The downside is that it is less compatible with other applications.
Either switch to a flash player that can handle fragmented MP4 files, or use a different container format that supports streaming better.
Also, -re is an input-only option, so it would make more sense to specify it before the input, instead of before the output.
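For example, a sketch of the same transcode written to FLV, which Flash players handle natively when streamed (same flags as in the question, with -re moved before the input and the now-unnecessary bitstream filter dropped):
ffmpeg -loglevel quiet -re -i somefile.avi -vcodec libx264 -acodec aac -f flv - 2>&1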