In my Android app, I would like to process audio on the fly from an OGG file by extracting the audio samples, processing them, and sending them to the audio output.
I know how to handle the last two steps using the Android NDK, but I don't know how to extract the audio samples to get them into an array of floats or shorts.
I tried to get this code working, which apparently can extract raw audio samples on the fly.
The problem is that I can't manage to add FFmpeg to my project. I tried many tutorials (like this one), but it seems pretty difficult since I work on Windows. After a while, I found Prebuilt FFmpeg for Android, which seems interesting since it's available for the armeabi-v7a, arm64-v8a, x86 and x86_64 architectures, but again, I don't understand how to add it to my project.
I also took a look at libogg, libvorbis and vorbisfile, but I have no idea how to add them to my project either.
So, does anyone have a working example on how to extract audio samples from an OGG file on the fly?
Thanks for your help.
Here is how I finally managed to proceed:
Here is a very detailed tutorial that clearly explains how to compile FFmpeg for Android (please take a close look at the comments): https://www.youtube.com/watch?v=2OGbamEjYhc
Then, here is a good tutorial that explains how to integrate the FFmpeg library into an Android project (starting from "Integration of pre-built C/C++ libraries to an Android project"): https://proandroiddev.com/android-ndk-how-to-integrate-pre-built-libraries-in-case-of-the-ffmpeg-7ff24551a0f
Here is how I finally managed to extract audio samples from an audio file using FFMpeg: https://gist.github.com/mregnauld/2538d98308ad57eb75cfcd36aab5099a
For step 2, you'll just need to make some minor updates so you can target the following ABIs:
armeabi-v7a
arm64-v8a
x86
x86_64
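If your project uses the standard Gradle NDK setup, the ABI list above typically ends up in the module's build.gradle via abiFilters. A minimal sketch (the surrounding block names are the standard Gradle DSL, but check them against your project's setup):

android {
    defaultConfig {
        ndk {
            // Package only the ABIs for which FFmpeg was actually built
            abiFilters 'armeabi-v7a', 'arm64-v8a', 'x86', 'x86_64'
        }
    }
}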
Hope this helps.
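For reference, FFmpeg typically hands decoded audio back as interleaved signed 16-bit PCM (AV_SAMPLE_FMT_S16) or planar floats, depending on the codec, and the gist above converts that into a float array. Here is a rough sketch of just that conversion step in isolation (no FFmpeg calls; the function name is my own):

```cpp
#include <cstdint>
#include <vector>

// Convert interleaved signed 16-bit PCM samples to normalized floats
// in [-1.0, 1.0). This mirrors what you would do with the bytes a
// decoder hands back in AV_SAMPLE_FMT_S16 before further processing.
std::vector<float> s16_to_float(const std::vector<int16_t>& pcm) {
    std::vector<float> out;
    out.reserve(pcm.size());
    for (int16_t s : pcm) {
        out.push_back(static_cast<float>(s) / 32768.0f);
    }
    return out;
}
```

With planar float output (AV_SAMPLE_FMT_FLTP) no conversion is needed at all; each channel's plane is already an array of floats.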
I want to generate a supercut of a video with a max duration of 30 seconds. I am new to Laravel; could anyone help me with this?
PHP does not provide any built-in functionality for video manipulation, so using a third-party library is required.
The PHP-FFMpeg library is a video/audio manipulation library. It covers many common video-editing tasks, including taking shots of particular frames.
Notice that you need to have the FFMpeg binary on your machine, as the documentation says:
This library requires a working FFMpeg install. You will need both FFMpeg and FFProbe binaries to use it. Be sure that these binaries can be located with system PATH to get the benefit of the binary detection, otherwise you should have to explicitly give the binaries path on load.
After configuration is complete, you can take frame shots easily:
// Assumes the FFMpeg/FFProbe binaries are on your PATH;
// the input path is a placeholder
$ffmpeg = FFMpeg\FFMpeg::create();
$video = $ffmpeg->open('/path/to/video.mp4');

// Extract one frame every 10 seconds into the destination folder
$video
    ->filters()
    ->extractMultipleFrames(FFMpeg\Filters\Video\ExtractMultipleFramesFilter::FRAMERATE_EVERY_10SEC, '/path/to/destination/folder/')
    ->synchronize();

$video
    ->save(new FFMpeg\Format\Video\X264(), '/path/to/new/file');
Good Luck
I am using Plugin.MediaManager NuGet package to provide cross-platform video player for my app. However, it does not support playing RTSP video streams. Is there any other library that supports this?
I have looked around and the most common ones are platform-specific libraries such as KXmovie and Managed Media Aggregation but I am a little intimidated by the thought of having to port and/or recompile them.
The best case is if there is a Xamarin.Forms compatible NuGet package available. Failing that, an iOS library that requires binding, but not recompiling. As a last resort, something that needs to be compiled and linking manually, but works out of the box.
OK so the resounding conclusion is that one does not exist with Xamarin bindings. I will start with this project on GitHub and see if I can compile and generate the bindings myself.
A bit late, but there is now: LibVLCSharp supports RTSP (and much more).
I am going to build PJSIP on Windows 7, and I am almost ready to compile the project, but one of the steps shown on this page confuses me: link
Here is what it says about SDL2:
SDL sources comes with VS project settings, under VisualC sub-directory
So, what should I do according to that description?
Another question: should I build SDL2 from source?
You should build SDL only if you want to compile PJSIP with video enabled.
It is used for rendering only. If you want, you can build it from source.
Another option for rendering is DirectShow, but it has been broken for ages.
I developed a renderer driver for PJSIP that renders to a bitmap, and then I render this bitmap in C#. It's easier and works fast.
Also, for video you will need FFmpeg, at least for the codecs.
My opinion: start without video. It's not easy to get working, and you may find you don't need it at all.
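To follow the "start without video" advice concretely: PJSIP's video support is switched on and off by a single macro in your local config_site.h, so you can leave SDL and FFmpeg out of the build entirely at first. A minimal sketch (the macro is PJSIP's documented switch, but verify the exact path and default against your PJSIP version):

/* pjlib/include/pj/config_site.h -- local build configuration */

/* Build without video: no SDL and no FFmpeg needed. */
#define PJMEDIA_HAS_VIDEO 0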
Anyone know of a vorbis decoder library that can be used on Windows Phone 7?
The lack of native code interop makes reusing any of the native code implementations difficult (impossible?), but if there are tricks to do that, I'm open to them as well.
There is a managed implementation for Mono called csvorbis. It includes a sample which outputs a WAV file, and this didn't need many changes to work with XNA's SoundEffect class. I decoded a whole track at once, which took a few seconds in the emulator, so you may need to stream it using DynamicSoundEffect for better results. The mooncodecs folder has a codec for the desktop version based on csvorbis which may be worth a look as well.
Ogg Vorbis is not a supported codec on Windows Phone 7 and the platform supports no way of adding support for custom codecs.
The options available are to write your own decoder/converter in managed code or to convert the original source files.
I suspect the second option will be easier.
I am writing a video using OpenCV on a Linux machine, and I want to read the same video using OpenCV on a Windows machine. I am not able to do this using the standard codecs provided in OpenCV.
Can anybody suggest how I can read/write videos across the two platforms?
The OpenCV Wiki directly addresses this issue. See http://opencv.willowgarage.com/wiki/VideoCodecs and specifically the heading "Compatibility list."
Unfortunately, the only codecs supported on all three platforms (Linux, Windows & OS X) are 'DIB', 'I420' and 'IYUV', which are all uncompressed video codecs and thus make for really huge file sizes.
The wiki also lists some codecs to try that may work on any two platforms but not on all three.
If you decide to use uncompressed video files, you can convert them to something with a smaller file size once they are on your Windows machine, using a program like VirtualDub.
Edit: FYI, on Windows I have OpenCV output Motion-JPEG, and then I use VirtualDub in direct stream copy mode to re-save the file, which corrects a bug with the movie's index. These M-JPEG video files then play by default on Mac and Windows.
If I am trying to read video into OpenCV, I often first convert my video to Cinepak (using VirtualDub, QuickTime, etc.) and then feed it into OpenCV. I use Cinepak because, for some reason, Cinepak encoders seem more prevalent than MJPEG encoders.
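The codec names in the compatibility list ('DIB', 'I420', 'MJPG', ...) are FOURCC codes: four ASCII characters packed into a 32-bit integer, which is the value OpenCV's VideoWriter expects. A small sketch of the packing itself (plain C++, no OpenCV dependency; the function name is my own):

```cpp
#include <cstdint>

// Pack four characters into a little-endian FOURCC code, the same
// value cv::VideoWriter::fourcc() produces from the same characters.
constexpr uint32_t fourcc(char a, char b, char c, char d) {
    return static_cast<uint32_t>(static_cast<unsigned char>(a))
         | static_cast<uint32_t>(static_cast<unsigned char>(b)) << 8
         | static_cast<uint32_t>(static_cast<unsigned char>(c)) << 16
         | static_cast<uint32_t>(static_cast<unsigned char>(d)) << 24;
}
```

So writing Motion-JPEG, for example, means passing the packed value of 'M','J','P','G' to the writer; the codec still has to be installed on the machine for it to work.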
I don't think the problem is with OpenCV; I think it is with the codecs, as you mentioned. I also don't think OpenCV comes with codecs, so double-check that you have the proper codecs installed under Windows.
Did you look at the documentation on video codecs?