Generate valid ASF file for WMAsfReader - ffmpeg

My legacy software breaks after migrating it to Windows 10, since the WMV Encoder 9 SDK is no longer supported.
I've tried other approaches, and I can generate the ASF file I need using FFmpeg.
I only need to mux audio and video into an ASF container, and this command does it:
ffmpeg -y -i audio.mp3 -i video.asf -vcodec copy -acodec copy output.asf
It works well, and the file can be played using VLC or Windows Media Player.
But it can't be played by DirectShow: I get an ASF_E_INVALIDHEADER error when I set it as the source of WMAsfReader. Any idea how I can generate a valid ASF file for the WMAsfReader?
Thanks!

You might be unable to use the legacy SDK, but the current Windows APIs for producing ASF files (DirectShow and Media Foundation) are in good standing on Windows 10:
DirectShow: WM ASF Writer Filter
Media Foundation: ASF Support in Media Foundation, ASF Media Sinks
Content produced by these APIs should be acceptable to WMAsfReader. FFmpeg has always generated "almost good" output, which was acceptable for a long time. However, the format-consistency checks in OS components have grown stricter over time, and once in a while FFmpeg output is no longer considered valid.
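Before switching APIs entirely, one thing worth trying is letting FFmpeg re-encode into the codecs ASF was designed around (WMV2/WMA2) rather than stream-copying MP3 into the container. This is only a sketch, with no guarantee the result passes WMAsfReader's stricter header checks; the lavfi test sources below stand in for your real audio.mp3 and video.asf inputs:

```shell
# Re-encode into WMV2 video and WMA2 audio, which the ASF muxer handles
# more canonically than copied MP3 streams. Synthetic inputs for illustration;
# substitute your real files for the two lavfi sources.
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
       -f lavfi -i sine=frequency=440:duration=1 \
       -c:v wmv2 -c:a wmav2 -f asf output.asf
```

If the re-encoded file still triggers ASF_E_INVALIDHEADER, the WM ASF Writer filter mentioned above is the reliable fallback, since it writes headers in exactly the form Microsoft's reader expects.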

Related

Issue with playing Ant Media Server VODs on macOS after recording on Windows

In Ant Media Server, after recording a stream on Windows using the API, the VOD plays fine on Windows. But when playing the same VOD on macOS with QuickTime Player v10.5, the video freezes after a few seconds while the audio continues.
VOD playback with QuickTime Player is fine for recordings made on macOS.
How can I overcome this, and is it expected behaviour?
TL;DR:
Transcode the video with ffmpeg after recording, or add at least one adaptive bitrate on the Ant Media Server side.
This is a known issue in QuickTime Player. The same problem exists with Safari on macOS/iOS. Let me explain the cause of the problem and offer a solution.
Problem:
The resolution may change during a WebRTC session according to network conditions, so the resolution of the recording switches to a lower or higher one mid-stream.
Most players and browsers can handle that. Safari and QuickTime Player, on the other hand, cannot handle resolution changes, and the problem you mention appears.
Solution:
Transcoding the stream to a single fixed resolution with ffmpeg, or using adaptive bitrate on the server side, resolves this issue. A typical ffmpeg command is sufficient:
ffmpeg -i INPUT.mp4 OUTPUT.mp4
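Since the root cause is the mid-stream resolution switch, it can also help to pin the output to one explicit resolution rather than relying on defaults. A sketch, with an arbitrary 1280x720 target and a lavfi test source standing in for the real recording:

```shell
# Force every frame to one resolution so QuickTime/Safari never see a switch.
# Replace the lavfi test source with your actual VOD file; 1280x720 and the
# libx264/yuv420p choices are assumptions, not Ant Media requirements.
ffmpeg -y -f lavfi -i testsrc=duration=1:size=640x360:rate=25 \
       -vf scale=1280:720 -c:v libx264 -pix_fmt yuv420p fixed720.mp4
```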
A. Oguz antmedia.io

Create fake webcam with ffmpeg on Windows?

ffmpeg has all kinds of options for recording video from a webcam, transcoding video, and sending video up to streaming servers. Is there a way to loop over a file and make it look like a webcam?
I found this for Linux:
https://gist.github.com/zburgermeiszter/42b651a11f17578d9787
I've searched around a lot trying to find something for Windows, but have not found anything yet.
No, that's not part of FFmpeg so you'll need to create this "virtual video device" yourself. See e.g. How to create virtual webcam in Windows 10?.

FFmpeg hardware acceleration on Raspberry PI

I am building a program that uses ffmpeg to stream webcam content over the internet. I would like to know if it is possible to use the GPU for the streaming part on the Raspberry Pi Model 3. If yes, how could I implement this with ffmpeg?
You'll need some additional configure options:
--enable-mmal – Enable Broadcom Multi-Media Abstraction Layer (Raspberry Pi) via MMAL. For hardware decoding of H.264, VC-1, MPEG-2, MPEG-4. As a dependency you'll need the linux-raspberrypi-headers (Arch Linux) or linux-headers-*-raspi2 (Ubuntu) package which provides the required header file mmal.h.
--enable-omx-rpi – Enable OpenMAX IL code for Raspberry Pi. For hardware encoding of H.264 (encoder is named h264_omx) and MPEG-4 (mpeg4_omx). As a dependency you'll need the libomxil-bellagio (Arch Linux) or libomxil-bellagio-dev (Ubuntu) package which provides the required header file OMX_Core.h.
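Putting the two options together, a build might look like the following. This is a sketch under the assumption that the headers mentioned above are already installed; any further configure flags your build needs are omitted:

```shell
# In the FFmpeg source tree, enable the Raspberry Pi hardware paths
# (requires mmal.h and OMX_Core.h from the packages noted above):
./configure --enable-mmal --enable-omx-rpi
make -j4
sudo make install
```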
For Arch Linux users:
Copy the PKGBUILD file for the ffmpeg package (perhaps via the ABS if you prefer). Add the two new configure options shown above, and add the two mentioned packages to the depends line. Compile/install with the makepkg command.
Disclaimer: I don't have one of these devices to test any of this. Most of this info was acquired from the FFmpeg configure file.
The ffmpeg package from apt now comes with the hardware codecs enabled, so you can just install it using:
sudo apt install ffmpeg
There are a few hardware-enabled codecs on the Pi, depending on which model you've got. Here's an excerpt from this detailed post/thread on the Raspberry Pi Forum:
Pi0-3 have hardware accelerated decode for H264, MPEG4, H263, and through optional codec licences for MPEG2 and VC1.
Pi4 has the same hardware accelerated decode for H264, but not the other codecs. It also has a separate block for HEVC.
There are a few APIs (v4l2m2m, VAAPI, OMX, MMAL, ...) for accessing the hardware codecs, but the main one is now h264_v4l2m2m, based on the Video Memory-To-Memory Interface; there is also the older OMX-based h264_omx, among others. For the full list of codecs for encode and decode, run:
ffmpeg -codecs
Note: If you have changed the gpu_mem setting in /boot/config.txt, it needs to be greater than 16; otherwise you will get an error with all hardware codecs.
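To see which of these encoders your particular build actually exposes, you can query ffmpeg directly. The hardware transcode line in the comment is an assumption for a Pi whose build includes h264_v4l2m2m, not something verified here:

```shell
# List the H.264 encoders this ffmpeg build offers; hardware ones such as
# h264_v4l2m2m or h264_omx only show up on builds compiled with them enabled.
ffmpeg -hide_banner -encoders | grep -i 264 || echo "no H.264 encoder found"
# A typical hardware transcode on the Pi would then be something like:
#   ffmpeg -i input.mp4 -c:v h264_v4l2m2m -b:v 2M output.mp4
```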

FFmpeg support for RTMP broadcasts from OS X with audio

I've successfully gotten ffmpeg to stream live video from the built-in webcam on my macbook pro to my rtmp server but I cannot figure out how to get it to also send audio from the built-in microphone.
I've tried both the qtkit device and avfoundation. It appears that neither supports an audio stream.
Does ffmpeg support audio capture on a mac?
All of the examples I can find only show audio capture working with the DirectShow device.
It turns out this isn't supported at this time. With the help of some of the ffmpeg-devel folks I was able to get it working with a patch. I've applied the patch to a fork, available here:
https://github.com/realrunner/FFmpeg
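Note that this answer predates current releases: modern FFmpeg builds ship avfoundation audio capture out of the box, with the input given as "video_index:audio_index". The device indices and RTMP URL below are placeholders for your setup, and the runnable part substitutes lavfi test sources since no capture devices are assumed here:

```shell
# On macOS, list capture devices first:
#   ffmpeg -f avfoundation -list_devices true -i ""
# Then capture video device 0 plus audio device 0 and publish via RTMP
# (indices and URL are placeholders):
#   ffmpeg -f avfoundation -framerate 30 -i "0:0" \
#          -c:v libx264 -preset veryfast -c:a aac \
#          -f flv rtmp://example.com/live/stream
# The same encode/mux path, illustrated with synthetic sources:
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=30 \
       -f lavfi -i sine=frequency=440:duration=1 \
       -c:v libx264 -preset veryfast -c:a aac -f flv out.flv
```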

FFmpeg streaming using H.264 (with audio) - Red5 media server (Ubuntu OS)

I'm trying to stream my webcam with FFmpeg to my Red5 server using RTMP. I've done this successfully using FLV format with the following line:
ffmpeg -f video4linux2 -i /dev/video0 -f flv rtmp://localhost/live/livestream
I'm new to FFmpeg and live streaming, and I've tried to stream using H.264/MPEG-4, but my knowledge of the FFmpeg options is a bit limited (I found them here: http://man.cx/ffmpeg%281%29).
So, my questions would be:
How can I use H.264/MPEG-4 to stream to my Red5 server?
What are the options to stream audio as well?
And one final issue is:
I'm seeing a delay of about 5 seconds when I play the content with JWPlayer in Mozilla Firefox (on Ubuntu). Can you please help me solve this problem? Any suggestions why this might be?
Many thanks
There is no need to use ffmpeg for streaming H.264/MPEG-4 files, because Red5 has built-in support for this. Using ffmpeg will only put unnecessary load on your CPU. Red5 recognizes the file type automatically; you only have to specify the mp4 file in your JWPlayer.
As for the delay: as far as I know, JWPlayer has a buffer of 3 seconds by default. You can try to lower this (the property is bufferlength or something like that). JWPlayer may also have a "live" property for streaming with minimal delay, but I am not sure about that. Removing ffmpeg will probably speed up the process as well.
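If you do still want FFmpeg to encode a live webcam feed as H.264 with audio (rather than serving pre-recorded MP4s through Red5), the usual shape of the command is below. The ALSA device name, preset choices, and server URL are assumptions, and the runnable part swaps in lavfi test sources in place of real capture devices:

```shell
# Live capture variant (Linux; device names and URL are placeholders):
#   ffmpeg -f video4linux2 -i /dev/video0 -f alsa -i default \
#          -c:v libx264 -preset veryfast -tune zerolatency -c:a aac \
#          -f flv rtmp://localhost/live/livestream
# Same H.264 + AAC encode and FLV mux path with synthetic sources:
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
       -f lavfi -i sine=frequency=440:duration=1 \
       -c:v libx264 -preset veryfast -c:a aac -f flv stream.flv
```

The -tune zerolatency option in the capture variant is aimed at the latency question: it disables encoder lookahead/buffering that would otherwise add to the player-side delay.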
