DirectShow WAV file source does not produce any sound when graph runs - windows

We have a DirectShow application where we capture video input from USB, multiplex with audio from a WAV file (backing music), overlay audio and video effects, compress and write to an MP4 file.
Originally we were using an audio input source (microphone) and mixing our backing music and sound effects over the top, but the decision was made not to capture live audio, so I thought it would make more sense to use the backing-music WAV file itself as the audio source.
Here is the filter graph we have:
backing.wav is a simple WAV file (stored locally), and was added to the graph using IGraphBuilder::AddSourceFilter.
The problem is that when the graph is run, no audio samples are delivered from the WAV file. The video part of the graph runs as normal, but it's as if the audio part of the graph simply isn't running.
If I stop the graph in GraphEdit, add the Default DirectSound Device audio renderer and hook that up in place of the AAC Encoder filter and then run the graph again, the audio plays as you would expect.
Additionally, if backing.wav is replaced with an audio capture source like a microphone, audio data flows through as normal.
Does anyone have any ideas why the above graph, using a WAV file as the audio source, would fail to produce any audio samples?

I suspect the title misidentifies the problem. Nothing shown proves that audio is not being produced. It is likely produced equally well with the DirectSound Renderer and with the AAC Encoder; specifically, the data is probably reaching the output pin of the Mixing Transform Filter (is this your filter? You should be able to trace its flow and see media samples passing through).
With the information given, I would say it's likely that the custom AAC encoder does not like the feed and either drops data or switches to an error state. You should be able to debug this further by inserting a Sample Grabber (or similar) filter¹ before the AAC encoder and tracing the media samples, then comparing them to data from another source. The encoder might be sensitive to small details such as media sample duration or the discontinuity flag on the first sample streamed.
¹ With GraphStudioNext (which has made GraphEdit largely obsolete) you can use the internal Analyzer Filter and review the media sample flow interactively via the filter's property page.
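The per-sample checks suggested above can be sketched platform-independently. The struct below is a hypothetical stand-in for the fields you would read off an IMediaSample while tracing (GetTime, IsDiscontinuity), not a real DirectShow type:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical stand-in for the IMediaSample fields relevant to the trace;
// times are REFERENCE_TIME values in 100 ns units.
struct SampleTrace {
    int64_t start;
    int64_t stop;
    bool discontinuity;  // the first streamed sample usually carries this
};

// Flag the small details an encoder may be sensitive to: non-positive
// durations, timestamps going backwards, and a missing discontinuity
// flag on the very first sample.
std::vector<std::string> audit(const std::vector<SampleTrace>& samples) {
    std::vector<std::string> issues;
    int64_t prevStop = -1;
    for (size_t i = 0; i < samples.size(); ++i) {
        const SampleTrace& s = samples[i];
        if (s.stop <= s.start)
            issues.push_back("sample " + std::to_string(i) + ": non-positive duration");
        if (prevStop >= 0 && s.start < prevStop)
            issues.push_back("sample " + std::to_string(i) + ": timestamp goes backwards");
        if (i == 0 && !s.discontinuity)
            issues.push_back("sample 0: missing discontinuity flag");
        prevStop = s.stop;
    }
    return issues;
}
```

Running the audit over traces captured from the WAV source and from a working microphone source should make any difference in timing or flags stand out.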

Related

Is it possible to set up ffmpeg as a repeater?

I am using the PyLivestream library to stream files to YouTube. The problem is that once each video finishes, the screen goes blank for a second until the next video starts, because the library simply creates an ffmpeg command and runs it directly in a subprocess for each media file.
Is it possible to configure an instance of ffmpeg that is always streaming to the destination? It could just be a blank screen or an image. It would also have an input, so I can point PyLivestream at the repeater.
This way the repeater would create one long uninterrupted stream, but I could still use PyLivestream to stream the individual files.

Are the values in the avcC box in .mp4 video files affected only by the FFmpeg version?

I am studying source identification of video files, especially those from smartphones.
I have learned that the values in the avcC box in .mp4 video files hold the encoding options (H.264) that a decoder must know when processing the encoded stream.
I guess most smartphones use a customized FFmpeg to encode the raw stream. I want to know whether the values in the avcC box are affected only by the version of FFmpeg (assuming a non-customized version is used).
I haven't delved into this, but I think libavcodec.so in FFmpeg fills in the values in the avcC box during encoding (is this right?).
So what I want to ask is: if two different smartphones use the same libavcodec.so (even if other .so files, the .apk used for recording, etc. are different) and a video file of the same resolution is filmed on each smartphone, will the values in the avcC box be the same?
I think this question may be equivalent to asking "are the values in the avcC box affected by other FFmpeg libraries or by other layers of the overall Android framework?"
++ One more question: is there any case where two videos of the same resolution from the same smartphone have different values in the avcC box? (I am thinking of differences in encoding options originating from low-battery mode, execution conditions of other apps, etc., and whether any core developer has customized FFmpeg for that.)
It would be a great help if anyone could let me know the answer!
The avcC box contains the out-of-band extradata for the AVC stream. It stores far more than just the resolution: profile, level, entropy coding mode, color space information, etc. This is a standard; ffmpeg just implements that standard. iPhones, for example, produce perfectly valid .mp4 files and do not use libav*/ffmpeg. See exactly what is in the avcC box here: Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream
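For reference, the fixed-layout header of the avcC payload (the AVCDecoderConfigurationRecord from ISO/IEC 14496-15) can be inspected in a few lines; the byte values in the usage note below are a made-up example, not taken from a real file:

```cpp
#include <cstddef>
#include <cstdint>

// First five bytes of an AVCDecoderConfigurationRecord (the avcC payload).
struct AvcCHeader {
    uint8_t configurationVersion;  // always 1
    uint8_t profile;               // e.g. 66 = Baseline, 77 = Main, 100 = High
    uint8_t profileCompatibility;  // constraint_set flags
    uint8_t level;                 // e.g. 31 -> Level 3.1
    uint8_t nalLengthSize;         // (byte 4 & 0x03) + 1, usually 4
};

bool parseAvcC(const uint8_t* data, size_t size, AvcCHeader& out) {
    if (size < 5 || data[0] != 1) return false;  // version must be 1
    out.configurationVersion = data[0];
    out.profile = data[1];
    out.profileCompatibility = data[2];
    out.level = data[3];
    out.nalLengthSize = static_cast<uint8_t>((data[4] & 0x03) + 1);
    return true;
}
```

Note that the entropy coding mode mentioned above lives inside the PPS NAL units that follow these fixed bytes, so two files can share this five-byte header and still differ in the parameter sets.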

Media Foundation video decoding

I'm using Media Foundation and the IMFSampleGrabberSinkCallback to play back video files and render them to a texture. I am able to get video samples in the IMFSampleGrabberSinkCallback::OnProcessSample method, but those samples are compressed: I get far fewer samples than I have pixels in my render target. According to this, the media session should load any decoder that is needed (if available), but that does not seem to be the case. Even if I create the decoder and add it to the topology myself, the video samples are still compressed. Is there anything in particular I am missing here?
Thanks.

avformat_write_header produces invalid header (resulting MPG broken)

I am rendering a video file from input pictures that come from a 3D engine at runtime (I don't pass an actual picture file, just RGB memory).
This works perfectly when outputting MP4 using CODEC_ID_H264 as video codec.
But when I want to create an MPG file using CODEC_ID_MPEG2VIDEO, the resulting file is simply broken. No player can play it correctly, and when I then concatenate that MPG with another MPG file and convert the result to MP4 in a further step, the resulting .mp4 file contains both videos, but many frames from the original MPG video (and only video! Sound works fine) are simply skipped.
At first I thought the MPG -> MP4 conversion was the problem, but then I noticed that the initial MPG coming from the video render engine is already broken, which points to broken headers. I'm not sure whether it is the system headers or the sequence headers that are broken, though.
Or if it could be something totally different.
If you want to have a look, here is the file:
http://www.file-upload.net/download-7093306/broken.mpg.html
Again, the exact same muxing code works perfectly fine when directly creating an MP4 from the video render engine, so I'm pretty sure the input data, swscale(), etc. is correct. The only difference is that CODEC_ID_H264 is used and some additional variables (like qmin, qmax, etc.) are set, which are all specific to H264 so should not have an impact.
Also, neither avformat_write_header nor av_write_trailer report an error.
As an additional info, when viewing the codec data of the MPG in VLC player, it is not able to show the FPS, resolution and format (should show 640x360, 30 fps and 4:2:0 YUV).
I am using a rather new (2-3 months old, maybe) FFmpeg version, which I compiled from sources with MinGW.
Any ideas on how to resolve this would be welcome. Currently, I am out of those :)
Alright, the problem was not avformat_write_header, but that I did not set the PTS value of each written video packet to AV_NOPTS_VALUE.
Once I set it for each video packet, everything works fine.
I had assumed AV_NOPTS_VALUE was the default, as I never needed to set any special PTS value before.
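A minimal sketch of that fix. The constant and struct below mirror FFmpeg's AV_NOPTS_VALUE (defined as 0x8000000000000000, i.e. INT64_MIN) and the relevant AVPacket field, so the example is self-contained; real code would include <libavformat/avformat.h> and set the field on an actual AVPacket before av_interleaved_write_frame:

```cpp
#include <cstdint>

// Mirrors FFmpeg's AV_NOPTS_VALUE so this sketch compiles stand-alone.
constexpr int64_t kAvNoPtsValue = INT64_MIN;

// Stand-in for the AVPacket field that matters here: leaving pts at an
// arbitrary value (instead of "unset") is what broke the MPEG-PS mux above.
struct VideoPacket {
    int64_t pts = 0;
};

// The fix from the post: before writing each video packet, mark its PTS
// as "not set" so the muxer derives the timestamp itself.
void markPtsUnset(VideoPacket& pkt) {
    pkt.pts = kAvNoPtsValue;
}
```

This only illustrates the single assignment the poster describes; whether the muxer should instead receive real, monotonically increasing timestamps depends on the encoder setup.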

How to get the MJPEG stream from a NC541 camera?

I have an NC541 IP camera, which supposedly does have an MJPEG stream, as the manual says "The video is compressed by MJPEG", but I cannot find a way to get that stream from the camera. It seems to want to work only with the built-in program, while I need the raw MJPEG stream instead.
Any ideas? Thanks!
I don't have this camera, but on many cameras you can simply right-click the video window in your browser, select Properties, and it will show the URL of the raw stream. If this is a multi-codec camera, you may or may not get the MJPEG stream, depending on which codec is selected for the camera's home page. This often works for me.