DirectShow Capture Source and FFMPEG - ffmpeg

I have an AJA capture card. The drivers installed with the card include a DirectShow filter. If I pop the filter into GraphEdit I see a single source filter with separate Video, Audio 1-2, and Line21 output pins.
and if I run the ffmpeg command
ffmpeg -f dshow -list_options true -i video="AJA Capture Source"
I see
[dshow @ 0034eec0] DirectShow video device options
[dshow @ 0034eec0] Pin "Video"
[dshow @ 0034eec0] pixel_format=yuyv422 min s=720x486 fps=27.2604 max s=1024x486 fps=29.985
...
[dshow # 0034eec0] Pin "Audio 1-2"
[dshow # 0034eec0] Pin "Line21"
video=AJA Capture Source: Immediate exit requested
So I see the Video and Audio pins I need. But when I try to run an ffmpeg command to capture both, I can only figure out how to do the video part. How do I hook into that audio pin? All the examples and documentation seem to point to using a separate audio device, and say nothing about hooking into the pins. I'm running it from a batch file for now, like this, using ^ to break the lines:
ffmpeg.exe ^
-y ^
-rtbufsize 100M ^
-f dshow ^
-i video="AJA Capture Source" ^
-t 00:00:10 ^
-aspect 16:9 ^
-c:v libx264 ^
"C:\VCS_AUD_SAMPLE.mp4"
Again, the command above will get me some beautiful video, but I can't figure out the audio part. Is this even supported in ffmpeg or am I going to have to modify the ffmpeg dshow code?

I am the developer of this filter.
Actually, the same device is used for both the audio and the video streams; moreover, the data for both streams comes from a single function call. Splitting the device into separate audio and video filters, as other cards do (DeckLink, for example), is artificial, since they must be internally connected anyway. A possible reason for the split is an attempt to simplify the graph, but it can lead to other problems (such as using streams from different devices).
Why ffmpeg cannot work with pins of the same filter is not clear to me. That is a problem for the ffmpeg developers.
About the single-instance access: a very old version of the AJA Capture Source filter is being used. More recent versions of the filter allow you to create multiple instances simultaneously (but only one instance may be in the "Play" state). Please check the AJA site to download the latest versions of the filters. If you would like to try the latest beta versions of the AJA filters, please write to me at support#avobjects.com

So after tracing through the FFmpeg source code, we concluded that it could not hook up to multiple pins on a dshow source, so instead of modifying the FFmpeg source we piped the AJA source pins through two virtual capture sources to achieve the desired result.

OK, support for this was (hopefully) added recently in FFmpeg dshow; you can now specify ffmpeg -f dshow -i video="AJA Capture Source":audio="AJA Capture Source" and it should work.
There are even new parameters for selecting which pin you want to use, if you need them. https://www.ffmpeg.org/ffmpeg-devices.html#dshow
If it doesn't work for somebody/anybody please let me know rogerdpack#gmail.com or comment here.

From http://ffmpeg.org/trac/ffmpeg/wiki/DirectShow
Also note that "The input string is in the format video=<video device name>:audio=<audio device name>".
So try
ffmpeg.exe -f dshow -i "video=AJA Capture Source:audio=audio source name"

Related

Realtime Muxing of videos

My problem is basically that I have two different streams for video playback and need to mux them in real time, in memory: one for video and another for audio.
My goal is to create a proxy which can mux 2 different webm streams from their URLs, while supporting range requests (requires knowing the encoded file size). Would this be possible?
This is how I mux the audio and video streams manually using ffmpeg:
ffmpeg -i video.webm -i audio.webm -c copy output.webm
But this requires me to download the video fully before processing it, which unfortunately I don't want to do.
Thanks in advance!
If you are looking for this to work in Go, you can look into
github.com/at-wat/ebml-go/webm
This provides a BlockWriter interface for writing to a webm file using buffers; see the test file to learn how to use it:
https://github.com/at-wat/ebml-go
Check out ffmpeg pipes.
Also, since you have tagged go, I'm assuming you will use os/exec, in which case also check out Cmd.ExtraFiles. This lets you use additional pipes (files) beyond just the standard 0, 1, and 2.
So let's say you have a stream for video and one for audio piping to 3 and 4 respectively. The ffmpeg bit of your command becomes:
ffmpeg -i pipe:3 -i pipe:4 -c copy output.webm
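To try that ffmpeg invocation on its own from a shell, you can attach the two inputs to descriptors 3 and 4 yourself; a minimal sketch, assuming local video.webm and audio.webm files stand in for the real streams:
exec 3< video.webm 4< audio.webm
ffmpeg -i pipe:3 -i pipe:4 -c copy output.webm
exec 3<&- 4<&-
In the Go program, Cmd.ExtraFiles supplies the files in the same positions: the first entry becomes descriptor 3, the second becomes descriptor 4.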

FFMPEG DASH - Live Streaming a Sequence of MP3 Clips

I am attempting to create an online radio application using FFMPEG - an audio-only DASH stream.
I have a directory of mp3 clips (all of the same bitrate and sample size) which I am encoding to the AAC format and outputting to a mpd.
This is the current command I am working with to stream a single mp3 file:
ffmpeg -re -i <input>.mp3 -c:a aac -use_timeline 1 -use_template 1 -window_size 5 -f dash <out>.mpd
(The input and output paths have been substituted with <input>.mp3 and <output>.mpd in this snippet.)
I am running a web server and have made the mpd accessible on it. I am testing the stream using VLC player at the moment.
The problem:
Well, the command works, but it will only work for one clip at a time. Once the next command is run, immediately following the completion of the first, VLC player halts and I need to refresh the player to continue.
I'm aiming for an uninterrupted stream wherein the clips play in sequence.
I imagine the problem is that a new mpd is being created with no reference to the previous one, and what I ought to be doing is appending segments to the existing mpd - but I don't know how to do that using FFMPEG.
The question: Is there such a command to append segments to a previously existing mpd file in FFMPEG? or am I coming at this problem all wrong? Perhaps I should be using FFMPEG to format the clips into these segments, but then adjusting the mpd file manually.
Any help or suggestions would be very much appreciated!
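One direction worth exploring, sketched here only as a starting point: drive all the clips through a single ffmpeg process so that one MPD is written and segments keep getting appended to it, for example via the concat demuxer. This assumes the clips really do share codec parameters and are listed in a playlist.txt made of "file 'clipN.mp3'" lines:
ffmpeg -re -f concat -safe 0 -i playlist.txt -c:a aac -use_timeline 1 -use_template 1 -window_size 5 -f dash <out>.mpd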

ffmpeg progress is freezing frames when scene change

I'm capturing data from IP camera with RTSP protocol with ffmpeg with command:
ffmpeg -rtsp_transport tcp -progress /media/kamip/stats.txt -i rtsp://192.168.1.220:554/live/h264/ch0
-c:v copy -c:a copy -strict 1 -map 0 -f segment -strftime 1
-segment_time 1800 /media/kamip/cam_%d_%m_%Y_%H_%M_%S.mkv
I'm using this for 5 cameras. One is different type and it is in different location.
Because ffmpeg does not support reconnecting, I write status to the /media/kamip/stats.txt file. In another script I parse this output, and every 30 seconds I check whether the frame number has changed; if it has, everything is fine, and if not, I restart the command above.
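(For reference, the watchdog described above can be as simple as the sketch below; the 30-second interval comes from the description, while the grep pattern and the restart_capture placeholder are assumptions rather than the actual script.)
while true; do
  last=$(grep '^frame=' /media/kamip/stats.txt | tail -n 1)
  sleep 30
  now=$(grep '^frame=' /media/kamip/stats.txt | tail -n 1)
  if [ "$now" = "$last" ]; then
    # frame counter has not advanced in 30 s: relaunch the ffmpeg command above
    restart_capture   # placeholder for however the capture is actually restarted
  fi
done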
The problem occurs only at night. When it is quite dark and lights suddenly come on, for example when a car is parking, /media/kamip/stats.txt keeps showing the same frame number, so my script treats this as a lost connection (video freeze).
I tried the "-strict 1" option and I think it helps (one false alarm per day instead of ten per day), so I think this may be related to ffmpeg, not to the camera/video source, especially because the video is fine even while the frame number reported by ffmpeg stays the same. Also, VLC does not have this kind of problem (but I cannot currently use it for this camera).
I found that ffmpeg has a built-in scene change detector, but shouldn't that only apply when encoding video (I'm using the "copy" option for both audio and video)?
I'm thinking about a different way of monitoring the capture, but "-progress" in ffmpeg should work fine (and it has been working fine for the other cameras for a few years).
I also do not see any errors. When I encoded one cut segment with "-loglevel debug" I saw only information like the following:
[libx264 @ 0x25d77a0] scene cut at 174 Icost:2049115 Pcost:2006553 ratio:0.0208 bias:0.1387 gop:54 (imb:3186 pmb:168)
ffmpeg is the latest version:
ffmpeg version 3.3.3-1ubuntu1~16.04.york0 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
Any help will be appreciated.

Live transcoding and streaming of MP4 works in Android but fails in Flash player with NetStream.Play.FileStructureInvalid error

Recently I had a task to use ffmpeg as both a transcoding and a streaming tool. The task was to convert a file from a given format to MP4 and immediately stream it, by capturing it from stdout. So far so good. The streaming works well with the native player on Android tablets as well as with the VLC player. The issue is with the Flash player. It gives the following error:
NetStream.Play.FileStructureInvalid : Adobe Flash cannot import files that have invalid file structures.
ffmpeg flags used are
$ ffmpeg -loglevel quiet -i somefile.avi -vbsf h264_mp4toannexb -vcodec libx264 \
-acodec aac -f MP4 -movflags frag_keyframe+empty_moov -re - 2>&1
As noted in the docs for -movflags
The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4 file has all the metadata about all packets stored in one location (written at the end of the file, it can be moved to the start for better playback using the qt-faststart tool). A fragmented file consists of a number of fragments, where packets and metadata about these packets are stored together. Writing a fragmented file has the advantage that the file is decodable even if the writing is interrupted (while a normal MOV/MP4 is undecodable if it is not properly finished), and it requires less memory when writing very long files (since writing normal MOV/MP4 files stores info about every single packet in memory until the file is closed). The downside is that it is less compatible with other applications.
Either switch to a flash player that can handle fragmented MP4 files, or use a different container format that supports streaming better.
Also, -re is an input-only option, so it would make more sense to specify it before the input, instead of before the output.
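Putting those two suggestions together, something along these lines might be a starting point if the Flash player has to stay (only a sketch: FLV is just one example of a container that streams to Flash more gracefully, and the codec choices mirror the original command):
ffmpeg -loglevel quiet -re -i somefile.avi -c:v libx264 -c:a aac -f flv - 2>&1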

Piping avs to ffmpeg using avs2yuv

I am trying to use avs2yuv to pipe avs output to ffmpeg for further conversion.
My video file is called "sample.avi" (No sound, just video)
My audio file is called "sample.wav"
My avs file(s) is called sample.avs, and looks like this:
V = AviSource("sample.avi")
A = WavSource("sample.wav")
AudioDub(V ,A)
or
V = DirectShowSource("sample.avi")
A = DirectShowSource("sample.wav")
AudioDub(V ,A)
Here is how I pipe:
avs2yuv sample.avs - | ffmpeg -y -f yuv4mpegpipe -i - output.mp4
Now here is the PROBLEM: No matter what files I try as an input, there is NO SOUND in my output. I do not understand what I am doing wrong, and why my audio does not make it to the output. If anyone has experience with avisynth and avs2yuv, your help would be GREATLY appreciated.
Thank you!
I would try to play your avs file with ffplay in order to check that the avs file itself is OK.
You can also try building graphs with GraphEdit and doing something like this:
A = DirectShowSource("sample_audio.grf", video=false)
V = DirectShowSource("sample_video.grf", audio=false)
AudioDub(V ,A)
With DirectShowSource you can add several parameters like fps, frame count, etc.; sometimes that helps.
Good Luck
As per this link:
Avs2YUV is a command-line program, intended for use under Wine, to
interface between Avisynth and Linux-based video tools.
avs2yuv.exe only handles the video stream, which it outputs in a YUV color-space. It is that simple: the audio stream is ignored.
Here are some ways to process both the audio and video streams in a .avs. These methods work in Linux using wine, and of course also work in Windows:
Encode in Avidemux via AvsProxy (AvsProxy ships with Avidemux)
Use VirtualDub as the encoder GUI
Otherwise, encode the audio separately, then mux in the video in a separate step (sketched below).
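A minimal sketch of that last approach, reusing the sample files from the question (the libx264/aac choices are just examples): encode the video from the avs first, then mux in the wav afterwards.
avs2yuv sample.avs - | ffmpeg -y -f yuv4mpegpipe -i - -c:v libx264 video_only.mp4
ffmpeg -y -i video_only.mp4 -i sample.wav -c:v copy -c:a aac output.mp4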
I believe avs2pipe can handle both video and audio streams from a .avs, but I haven't tried it yet. Here is a link to some info about avs2pipe.
Summary: Using avs2yuv mainly makes sense in a Linux/Unix environment.
Try makeAVIS.exe from the ffdshow package:
wine makeavis.exe -p -i example.avs -a output.wav
