So I'm porting an app from Windows to Mac, and part of the app deals with creating movie files. On Windows, there's a group of functions like ICOpen and ICConfigure, which tell the video compression driver to open a configuration dialog for the selected codec. Is there anything like that for QuickTime on the Mac?
You will need to use straight QuickTime C calls for this, but the function you want is called MovieExportDoUserDialog.
See the QTMovieExportSettings article on CocoaDev for the full details.
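For reference, the call sequence is roughly the following. This is a minimal sketch with error handling omitted; it assumes you already have an open Movie and want the default movie exporter:

#include <QuickTime/QuickTime.h>

// Sketch: open a movie export component and show its settings dialog,
// roughly the counterpart of ICConfigure on Windows.
static void ShowExportSettingsDialog(Movie movie)
{
    MovieExportComponent exporter =
        OpenDefaultComponent(MovieExportType, kQTFileTypeMovie);
    Boolean canceled = false;

    MovieExportDoUserDialog(exporter, movie, NULL /* all tracks */,
                            0, GetMovieDuration(movie), &canceled);

    if (!canceled) {
        // The chosen settings live in the component; they can be read back
        // with MovieExportGetSettingsAsAtomContainer and reused later.
    }
    CloseComponent(exporter);
}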
I am using LibVLCSharp (Xamarin bindings to the libvlc library) for video playback. I am adding a feature to transcode / compress downloaded videos to the local filesystem. Normally I'd use mobile-ffmpeg for this, but since VLC internally uses ffmpeg with a different build configuration, ffmpeg transcoding does not work: on Android it crashes the app, and on iOS it returns an 'Invalid data found when processing input' error when attempting to convert. The code works fine when LibVLCSharp is uninstalled and there is only one instance of the ffmpeg library in use.
So, I'm looking for video compression / transcode options for Xamarin.iOS.
I've looked into the docs for LibVLCSharp, and it sounds like transcoding is possible, but they're somewhat vague on how to do it:
https://github.com/videolan/libvlcsharp/blob/3.x/docs/how_do_I_do_X.md
How do I do transcoding?
Pretty similarly to how you would do it from the CLI. Read https://wiki.videolan.org/Transcode/ and try media options with Media.AddOption.
In my case, I'd like to transcode non-interactively / in the background of the app. There is a 'dummy' option in the command line interface for doing this, so I was wondering how we'd do this from libvlc / libvlcsharp.
So far, my code from these docs looks like this:
using (var vlc = new LibVLC())
{
    Media media = new Media(vlc, source, FromType.FromLocation);
    media.AddOption($":sout=#transcode{{vcodec=h264,venc=x264{{crf=40}},vb=2048,scale=auto,acodec=mp4a,ab=96,channels=2,samplerate=44100}}:std{{access=file,mux=ts,dst={destination}}}");
    media.AddOption("dummy");
    // How do I start the transcoding non-interactively?
}
As answered on the libvlc Discord server: when you are transcoding a video, it is not displayed at all, so you don't need to create a VideoView; a MediaPlayer alone is enough.
The rest is finding the correct transcoding options, and you already found the correct wiki page for that.
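In LibVLCSharp terms that means constructing a MediaPlayer from the Media and calling Play(); the MediaPlayer.EndReached event tells you when the transcode is done. For reference, the same sequence against libvlc's raw C API (which LibVLCSharp binds) looks roughly like this - a minimal sketch with the sout chain abbreviated and error handling omitted:

#include <unistd.h>
#include <vlc/vlc.h>

int main(void)
{
    libvlc_instance_t *vlc = libvlc_new(0, NULL);
    libvlc_media_t *media =
        libvlc_media_new_location(vlc, "file:///path/to/source.mp4");

    // Same kind of sout chain as in the question, abbreviated here.
    libvlc_media_add_option(media,
        ":sout=#transcode{vcodec=h264,acodec=mp4a}"
        ":std{access=file,mux=ts,dst=/path/to/out.ts}");

    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_player_play(player);  // starts transcoding; nothing is rendered

    // Poll until the job finishes (an event callback works just as well).
    while (libvlc_media_player_get_state(player) != libvlc_Ended)
        sleep(1);

    libvlc_media_player_release(player);
    libvlc_media_release(media);
    libvlc_release(vlc);
    return 0;
}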
My first attempt was to get the window title from the process object using:
System.Diagnostics.Process.GetProcessesByName("vlc")[0].MainWindowTitle
This works as long as the file being played does not have the "title" metadata set; if it does, the window title is not the filename but whatever was set as the title.
This has to be done on a GUI (client) instance of VLC separate from my application, so the HTTP and other interfaces VLC provides are out of the question, and I believe that means using libvlc is also out of the question (I don't want to embed VLC in my app).
Is this possible? Does VLC provide no way to interact with it when running in its normal client mode? Are there any other tricks I'm not thinking of, like finding out through Windows which files it is accessing, or something along those lines? Given a list of files VLC is reading from, I could figure out which one is a video file... Anything like this with Windows diagnostics or similar?
Thank you
I need to do some system-wide audio processing in my app.
I have installed Soundflower and selected it as my default output device in order to get the system audio. I know that Soundflower merely copies the mix buffer to a ThruBuffer and passes it to the apps so they can get it in their AudioDeviceIOProc callback.
What I don't understand is how to route the audio back to the built-in output device after I've done the audio processing. With the Soundflower device set as the default, I get silence when I try to route the audio to the default output unit. Maybe what I need is to create a multi-output device in my program, but I'm not sure how to do that.
You can create a multi-output device on OS X - they're called "aggregate devices". You can create one manually in the Audio MIDI Setup app and use that device in your app, or create it programmatically from your app.
If you do it in-app, example code seems to be rare. I cribbed the info I needed from this blog post.
NB: the post is very old; I had to go to the Internet Archive's Wayback Machine to find it.
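If you go the programmatic route, the relevant Core Audio call is AudioHardwareCreateAggregateDevice (OS X 10.9+). A minimal sketch in C - the sub-device UIDs below are placeholders, so query kAudioDevicePropertyDeviceUID on the real devices instead of hard-coding them, and note that error handling is omitted:

#include <CoreAudio/CoreAudio.h>

static AudioObjectID CreateAggregateDevice(void)
{
    // One {kAudioSubDeviceUIDKey: <device UID>} dictionary per sub-device.
    const void *sfKeys[] = { CFSTR(kAudioSubDeviceUIDKey) };
    const void *sfVals[] = { CFSTR("SoundflowerEngine:0") };    // placeholder UID
    CFDictionaryRef soundflower = CFDictionaryCreate(NULL, sfKeys, sfVals, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    const void *biKeys[] = { CFSTR(kAudioSubDeviceUIDKey) };
    const void *biVals[] = { CFSTR("BuiltInSpeakerDevice") };   // placeholder UID
    CFDictionaryRef builtIn = CFDictionaryCreate(NULL, biKeys, biVals, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    const void *subDevices[] = { soundflower, builtIn };
    CFArrayRef subList = CFArrayCreate(NULL, subDevices, 2, &kCFTypeArrayCallBacks);

    int one = 1;
    CFNumberRef isPrivate = CFNumberCreate(NULL, kCFNumberIntType, &one);

    const void *keys[] = {
        CFSTR(kAudioAggregateDeviceNameKey),
        CFSTR(kAudioAggregateDeviceUIDKey),
        CFSTR(kAudioAggregateDeviceSubDeviceListKey),
        CFSTR(kAudioAggregateDeviceIsPrivateKey),   // hide it from other apps
    };
    const void *vals[] = { CFSTR("My Aggregate"),
                           CFSTR("com.example.myaggregate"),
                           subList, isPrivate };
    CFDictionaryRef desc = CFDictionaryCreate(NULL, keys, vals, 4,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    AudioObjectID aggregate = kAudioObjectUnknown;
    AudioHardwareCreateAggregateDevice(desc, &aggregate);

    CFRelease(desc); CFRelease(isPrivate); CFRelease(subList);
    CFRelease(builtIn); CFRelease(soundflower);
    return aggregate;   // remove later with AudioHardwareDestroyAggregateDevice
}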
I am looking for different solutions for capturing the video stream from a monitor screen and sending it to a video-streaming server to broadcast on the web. It must happen live.
I'd rather not use external services like Procaster for broadcasting.
OS: Windows.
It would be great to hear the ideas and experience people have in accomplishing that.
Thanks all.
Recently I built a Go project called ScreenStreamer. It is a tool that streams the current active window or the full screen (on Linux or Windows) to another device, such as a phone or another PC, as MJPEG over HTTP or FLV over RTMP. It is close to real time (delay < 100 ms) and works on Windows and Linux.
After building it, you can run it as:
# enter the project root directory
cd ./src/ScreenStreamer
# run it
./mjpeg or .\mjpeg.exe
# use a web browser or other video player, open http://host:port/mjpeg
./rtmp or .\rtmp.exe
# use a video player, open rtmp://host:port/live/screen
The Windows SDK includes the Push Source Filters Sample, which in turn contains the CPushSourceDesktop filter/class.
CPushSourceDesktop: Copy of current desktop image (GDI only)
It captures the desktop image and pushes it into a DirectShow pipeline. From there on you can process it using a video compression codec and stream it to a remote location. A decent screen-image compression codec is included with the Windows Media subsystem; network streaming will have to be a custom or third-party component. Alternatively, it is possible to make the capture class a virtual camera and have Windows Media Encoder broadcast it (or it may already have a similar feature built in).
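The GDI capture at the heart of that filter is small; stripped of the DirectShow plumbing, it is essentially a BitBlt from the screen DC once per frame. A minimal sketch, with error handling omitted:

#include <windows.h>

// Sketch: grab one frame of the desktop into a GDI bitmap.
HBITMAP CaptureDesktopFrame(void)
{
    HDC screen = GetDC(NULL);                    // DC for the whole screen
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);

    HDC mem = CreateCompatibleDC(screen);
    HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
    HBITMAP old = (HBITMAP)SelectObject(mem, bmp);

    // Copy the current desktop contents; CAPTUREBLT includes layered windows.
    BitBlt(mem, 0, 0, w, h, screen, 0, 0, SRCCOPY | CAPTUREBLT);

    SelectObject(mem, old);
    DeleteDC(mem);
    ReleaseDC(NULL, screen);
    return bmp;   // caller pushes the pixels downstream, then DeleteObject()s it
}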
Alternatively, you can check the VNC (or one of its clones) source code and see how it hooks Windows and captures image updates, then compresses them and makes them available to remote applications.
Note that you will have to specifically capture non-GDI images (such as those coming from video/gaming applications, which use hardware acceleration and non-RGB surfaces).
I am trying to make a simple application which will record the sound a user makes, say on clicking a Record button, and will play it back to him/her, say on clicking a Play button.
Can anyone suggest an appropriate way to do this?
Thanks,
Miraaj
You can use QuickTime Kit's capture APIs to record a movie of the audio, and QTMovie (from the same framework) to convert it to a more conventional format for audio files and to play back both the intermediate file and the converted file.
There used to be a QuickTime Kit Programming Guide, but it didn't cover capturing and is now gone from developer.apple.com. You should file a bug against the docs.
This answer will work in a Cocoa (Mac) app. If you meant to ask about the iPhone, you should re-tag your question, as the solution will be completely different for a Cocoa app vs. a Cocoa Touch (iPhone) app.
I used DirectSound to create an entire internet-phone application a few years ago. Your question is far simpler; you won't have to deal with the circular buffer as critically. DirectSound is pretty mainstream, so you can find a lot of help with it in forums, and it's free!
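The DirectSound capture setup itself is quite small. A minimal sketch in C, assuming 16-bit mono PCM at 44.1 kHz; playback is omitted (you would Lock/Unlock the capture buffer to read the samples and copy them into a regular IDirectSoundBuffer to play them back):

#define DIRECTSOUND_VERSION 0x0800
#include <windows.h>
#include <dsound.h>

int main(void)
{
    LPDIRECTSOUNDCAPTURE8 capture = NULL;
    LPDIRECTSOUNDCAPTUREBUFFER buffer = NULL;

    // Open the default capture device.
    if (FAILED(DirectSoundCaptureCreate8(NULL, &capture, NULL)))
        return 1;

    // 16-bit mono PCM at 44.1 kHz.
    WAVEFORMATEX wfx = { WAVE_FORMAT_PCM, 1, 44100, 88200, 2, 16, 0 };

    DSCBUFFERDESC desc = {0};
    desc.dwSize = sizeof(desc);
    desc.dwBufferBytes = wfx.nAvgBytesPerSec * 5;   // room for ~5 seconds
    desc.lpwfxFormat = &wfx;

    if (FAILED(IDirectSoundCapture_CreateCaptureBuffer(capture, &desc, &buffer, NULL)))
        return 1;

    IDirectSoundCaptureBuffer_Start(buffer, DSCBSTART_LOOPING);
    Sleep(5000);                                    // record for five seconds
    IDirectSoundCaptureBuffer_Stop(buffer);

    IDirectSoundCaptureBuffer_Release(buffer);
    IDirectSoundCapture_Release(capture);
    return 0;
}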