How to build pjsip on Windows

I am going to build PJSIP on Windows 7, and I am almost ready to compile the project, but one of the steps shown on this page confused me: link
It says the following about SDL2:
SDL sources come with VS project settings, under the VisualC sub-directory
What exactly should I do, given that description?
Another question: should I build SDL2 from source?

You should build SDL only if you want to compile PJSIP with video enabled.
It is used for rendering only. If you want, you can build it from source.
Another option for rendering is DirectShow, but it has been broken for ages.
I developed a renderer driver for PJSIP that renders to a bitmap, and then I render this bitmap in C#. It's easier and works fast.
For video you will also need FFmpeg, at least for the codecs.
My opinion: start without video. It's not easy to make it work, and you might not need it at all.
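If you follow that advice and start without video, the switch lives in PJSIP's `config_site.h` (create the file if it does not exist). A minimal sketch, assuming the standard source layout:

```c
/* pjlib/include/pj/config_site.h */

/* Build without video support first; flip this to 1 later,
 * once SDL2 (for rendering) and FFmpeg (for codecs) are in place. */
#define PJMEDIA_HAS_VIDEO   0
```

With video disabled, the SDL2 step from the build guide can be skipped entirely.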

Related

How do I play an MP3 file using Lazarus on macOS

I'd like to be able to play an MP3 file programmatically, using Lazarus on macOS.
Lazarus 2.0 (fpc 3.0.4) on macOS is working great for me, but one thing I cannot manage to do is to play an MP3 file programmatically.
I managed to compile and run the OALSoundManager demo project, but only WAV files can be played that way.
I spent several hours following various leads from the freepascal forum, but I still could not manage to do the basic play operations:
Load an MP3 file
Start playing it.
Get the current playing position (e.g. during OnTimer).
Be notified when it stops.
I'm OK with using any common library. Of course, the fewer dependencies, the better.
Once I can play the file I can figure out the rest, but it would be great if the example also showed:
Start playing from a given time position
Pause/Restart
Thank you very much for any help!
You might be able to use Castle Engine and OpenAL.
You can install Castle Engine from within Lazarus. In the main menu, under "Package" -> "Online Package Manager", you will be able to filter and install "castle".
Then you should be able to open the example project:
https://github.com/castle-engine/castle-engine/blob/master/examples/audio/alplay.lpr
Good luck!

Extract raw audio frames from OGG music file with Android NDK

In my Android app, I would like to be able to process audio on the fly from an OGG file by extracting audio samples, process them and redirect them to the audio output.
I know how to make the last 2 steps using Android NDK, but I don't know how to extract audio samples to get them in an array of floats or shorts.
I tried to get this code working, which apparently can extract raw audio samples on the fly.
The problem is: I can't manage to add FFmpeg to my project. I tried many tutorials (like this one), but it seems pretty difficult since I work on Windows. After a while, I found a prebuilt FFmpeg for Android, which seems interesting since it's available for the armeabi-v7a, arm64-v8a, x86 and x86_64 architectures, but again, I don't understand how to add it to my project.
I also took a look at libogg, libvorbis and vorbisfile, but I have no idea how to add them in my project.
So, does anyone have a working example on how to extract audio samples from an OGG file on the fly?
Thanks for your help.
Here is how I finally managed to proceed:
Here is a very detailed tutorial that explains clearly how to compile FFMpeg for Android (please take a close look at the comments): https://www.youtube.com/watch?v=2OGbamEjYhc
Then, here is a good tutorial that explains how to implement the FFMpeg library in an Android project (starting from "Integration of pre-built C/C++ libraries to an Android project"): https://proandroiddev.com/android-ndk-how-to-integrate-pre-built-libraries-in-case-of-the-ffmpeg-7ff24551a0f
Here is how I finally managed to extract audio samples from an audio file using FFMpeg: https://gist.github.com/mregnauld/2538d98308ad57eb75cfcd36aab5099a
For step 2, you'll just have to make minor updates so you can target the following ABIs:
armeabi-v7a
arm64-v8a
x86
x86_64
Hope this helps.
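For reference, the decode loop that a gist like the one above typically implements can be sketched as follows. This is a simplified sketch, not the gist's exact code: it assumes FFmpeg 4.x headers are available, collapses most error handling, and leaves the sample format as whatever the codec emits (often planar float for Vorbis).

```c
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Decode every audio frame of an OGG file (or any container FFmpeg
 * knows) and hand the raw samples to a callback. */
static int decode_audio(const char *path,
                        void (*on_frame)(const AVFrame *frame))
{
    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
        return -1;
    if (avformat_find_stream_info(fmt, NULL) < 0)
        return -1;

    int stream = av_find_best_stream(fmt, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);
    if (stream < 0)
        return -1;

    const AVCodec *codec =
        avcodec_find_decoder(fmt->streams[stream]->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(ctx, fmt->streams[stream]->codecpar);
    if (avcodec_open2(ctx, codec, NULL) < 0)
        return -1;

    AVPacket *pkt = av_packet_alloc();
    AVFrame  *frm = av_frame_alloc();

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == stream &&
            avcodec_send_packet(ctx, pkt) == 0) {
            while (avcodec_receive_frame(ctx, frm) == 0)
                on_frame(frm);   /* frm->data[ch] holds the raw samples */
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frm);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}
```

Inside `on_frame` you can copy `frm->nb_samples` samples per channel into your float or short array, converting from the codec's native format if needed (e.g. with libswresample).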

Run time AR image target creation for Unity

Is there an SDK that supports run time image target creation for Unity?
I have tried User Defined Target from Vuforia but it does not give me the option to save the image for future use.
8th Wall SDK has image target support. You can provide an image at runtime by simply providing the RGB pixels to the engine's configure call.
8th Wall calls into your phone's available native support. So if you have ARKit, it uses ARKit image detection. If you have ARCore, it will call into ARCore (ARCore 1.2 has image detection support, but it just got released, so it will take the 8th Wall team a bit of time to roll out support for it). The nice thing is that you write your code once and it will just work.
Note: I worked on this product as my day job and was involved with this specific feature. Feel free to add comment to this answer if you would like more info.
Creating your own with tools like OpenCV is an option, and relatively easy. You can do this using either packages like OpenCV for Unity on the Unity Asset Store or EmguCV (paid).
This requires more work and a bit more understanding of computer vision, but there's useful tutorials available and I believe both packages provide examples that would cover what you're after.

How do I use OGG with SDL_Mixer?

I can't seem to get SDL_Mixer to initialize with OGG support enabled. I know that I must link with libogg, libvorbis and libvorbisfile but it still won't work. I have .dylibs, .frameworks and .as of these three libraries and I've tried them all.
I'm copying the dylibs/frameworks into the Frameworks folder of the app package in the build phases tab.
I have Runpath Search Paths set to #executable_path/../Frameworks in the build settings tab.
But Mix_Init(MIX_INIT_OGG) keeps returning the error OGG support not available.
I'm using the latest Homebrew versions of all of the mentioned libraries. I'm not sure what else to try.
I have a finished game with 300MB of wavs as the music.
Update
I've managed to mix some Objective-C with C++ and get some sound playing with AVAudioPlayer, but it's horrendous code. I'm having to cast to void * to make sure my music player class is compatible with my C++ code base. The garbage collector is so annoying. All it does is get in your way. You have to fight it with bridge casts.
I’d really like to use SDL_Mixer or a simple C library.
I got the Objective-C music player fully implemented, but I just can't stand Objective-C. I found a brilliant library called stb_vorbis. I've used a few of the other stb libraries, so I knew this one would be great. Playing the audio data from this library is actually pretty easy. All I really had to do was call stb_vorbis_get_samples_short_interleaved inside SDL2's audio callback. I didn't have to do any mixing because I only need to play music. The implementation was so simple that it actually worked perfectly the first time I ran it! This is because stb_vorbis and SDL2 are both C libraries, so they just make sense (unlike Objective-C, which should be burned).
TL;DR
If you just want to play music and either can’t use SDL_Mixer or don’t want to use SDL_Mixer then you can just feed the data from stb_vorbis into SDL2’s audio callback.
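The approach described above can be sketched roughly as follows. This is a minimal sketch, not the answerer's actual code: it assumes stereo 16-bit output, that `stb_vorbis.c` is compiled into the project, and that a file named `music.ogg` exists; error handling is omitted.

```c
/* Music playback without SDL_mixer: stb_vorbis decodes, SDL2 plays. */
#include <SDL.h>
#include "stb_vorbis.c"

static void audio_callback(void *userdata, Uint8 *stream, int len)
{
    stb_vorbis *v = (stb_vorbis *)userdata;
    short *out = (short *)stream;
    int want = len / (int)sizeof(short);

    /* Returns the number of samples per channel; 0 means end of file. */
    int got = stb_vorbis_get_samples_short_interleaved(v, 2, out, want);
    if (got * 2 < want)   /* zero-fill the tail once the file runs out */
        SDL_memset(out + got * 2, 0,
                   (size_t)(want - got * 2) * sizeof(short));
}

int main(void)
{
    SDL_Init(SDL_INIT_AUDIO);

    int err = 0;
    stb_vorbis *v = stb_vorbis_open_filename("music.ogg", &err, NULL);
    stb_vorbis_info info = stb_vorbis_get_info(v);

    SDL_AudioSpec want = {0};
    want.freq     = (int)info.sample_rate;
    want.format   = AUDIO_S16SYS;   /* matches the short samples */
    want.channels = 2;
    want.samples  = 4096;
    want.callback = audio_callback;
    want.userdata = v;

    SDL_AudioDeviceID dev = SDL_OpenAudioDevice(NULL, 0, &want, NULL, 0);
    SDL_PauseAudioDevice(dev, 0);   /* unpause: start the callback */
    SDL_Delay(10000);               /* play for ~10 s, then quit */

    SDL_CloseAudioDevice(dev);
    stb_vorbis_close(v);
    SDL_Quit();
    return 0;
}
```

Since there is only one music stream, the callback does no mixing at all; it just refills SDL's buffer with freshly decoded samples.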

Xamarin.Forms video streaming library that supports RTSP video feeds

I am using Plugin.MediaManager NuGet package to provide cross-platform video player for my app. However, it does not support playing RTSP video streams. Is there any other library that supports this?
I have looked around and the most common ones are platform-specific libraries such as KXmovie and Managed Media Aggregation but I am a little intimidated by the thought of having to port and/or recompile them.
The best case is if there is a Xamarin.Forms-compatible NuGet package available. Failing that, an iOS library that requires binding, but not recompiling. As a last resort, something that needs to be compiled and linked manually, but works out of the box.
OK so the resounding conclusion is that one does not exist with Xamarin bindings. I will start with this project on GitHub and see if I can compile and generate the bindings myself.
A bit late, but there is one now. LibVLCSharp supports RTSP (and much more).
