I've seen answers that say things like "60fps is the maximum, there's no point going higher". But my application is to display a 120fps left-right time-alternating stereoscopic signal to my DLP-Link 3D projector.
I know this is possible, because I did manage to play a 120fps test video at that speed using AVPlayerView. Now I want to convert side-by-side videos to that output signal format. To that end, I'm using AVPlayerLayer with a Core Video display link (CVDisplayLink).
Q: How can I achieve 120fps refresh rate with a custom subclass of NSView on Mac?
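For reference, here is a bare-bones sketch of the kind of setup I mean: a CVDisplayLink bound to the window's display, driving redraws of a custom NSView. The class name and the callback body are placeholders, not my actual rendering code; the link only fires at 120Hz if the display is actually running in a 120Hz mode.

// Sketch: a CVDisplayLink driving redraws of a custom NSView.
#import <Cocoa/Cocoa.h>
#import <CoreVideo/CoreVideo.h>

@interface MyStereoView : NSView   // placeholder class name
@end

@implementation MyStereoView {
    CVDisplayLinkRef _displayLink;
}

static CVReturn DisplayLinkCallback(CVDisplayLinkRef displayLink,
                                    const CVTimeStamp *inNow,
                                    const CVTimeStamp *inOutputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *context)
{
    // Called on a high-priority Core Video thread; hop to the main thread to mark
    // the view dirty (or render straight into a layer/IOSurface here instead).
    MyStereoView *view = (__bridge MyStereoView *)context;
    dispatch_async(dispatch_get_main_queue(), ^{
        [view setNeedsDisplay:YES];
    });
    return kCVReturnSuccess;
}

- (void)viewDidMoveToWindow
{
    [super viewDidMoveToWindow];
    if (_displayLink) {
        CVDisplayLinkStop(_displayLink);
        CVDisplayLinkRelease(_displayLink);
        _displayLink = NULL;
    }
    if (!self.window) return;

    // Bind the link to the display this window is on (important for the 120Hz projector).
    CGDirectDisplayID displayID =
        [self.window.screen.deviceDescription[@"NSScreenNumber"] unsignedIntValue];
    CVDisplayLinkCreateWithCGDisplay(displayID, &_displayLink);
    CVDisplayLinkSetOutputCallback(_displayLink, DisplayLinkCallback, (__bridge void *)self);
    CVDisplayLinkStart(_displayLink);
    // (Stopping and releasing the link in dealloc is omitted for brevity.)
}
@end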
Related
I'd like to render the live image data on a GL surface (as shown in various Project Tango samples), and at the same time record (encode) it via a MediaCodec.
(On an Android Lollipop device, I've accomplished that using the camera2 interface and multiple surface targets, which works fine, but thus far Tango is pre-Lollipop...)
From other answers, it appears that you have to use the C API to access the image data.
The C API provides two camera frame functions -- TangoService_connectTextureId() and TangoService_connectOnFrameAvailable(). However, the documentation states "Use either TangoService_connectTextureId() or TangoService_connectOnFrameAvailable() but not both."
Why not both?
How do I best render and retrieve the image data?
The Pythagoras release now allows simultaneous use of the color frame and color texture callbacks. That said, you want to use connectOnFrameAvailable if you want to process the image; you'd end up doing extra, unnecessary work if you tried to peel the pixels back out of the texture.
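For reference, a minimal sketch of hooking up the frame callback through the C API. The call and the TangoImageBuffer fields are as I recall them from tango_client_api.h, so treat this as an outline rather than drop-in code, and check it against your SDK version:

// Sketch only: register a color-camera callback via the Tango C API and copy the
// pixels out so they can be handed to a MediaCodec encoder elsewhere.
#include <string.h>
#include "tango_client_api.h"   // assumed header name from the Tango NDK

static void onFrameAvailable(void *context, TangoCameraId id,
                             const TangoImageBuffer *buffer) {
    // buffer->format is typically a YUV layout; width/height/stride describe the
    // luma plane. Copy (or convert) here and hand the copy to your encoder thread --
    // don't do heavy work inside this callback.
    size_t luma_bytes = (size_t)buffer->stride * buffer->height;
    (void)luma_bytes;  // e.g. memcpy into a ring buffer owned by the encoder
}

int register_color_callback(void) {
    // NULL context here; pass a pointer to your own state if you need one.
    return TangoService_connectOnFrameAvailable(TANGO_CAMERA_COLOR, NULL,
                                                onFrameAvailable) == TANGO_SUCCESS;
}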
What is the most performant way to get text, via -[NSAttributedString drawAtPoint:], into an RGBA32 CVPixelBufferRef?
Just to clarify my objective...
I'm being handed CVPixelBufferRef objects at 60fps via a CVDisplayLink whilst a movie is playing. These are getting wrapped in CMSampleBuffers for output. I am using Apple's "AVGreenScreenPlayer" sample code as my base to work from.
I have an NSAttributedString object which represents a string (e.g. @"ABC"). I want to draw this onto a small background (possibly) and then draw the resulting text-with-background into the CVPixelBufferRef, in a corner of the video that is playing.
While a CIFilter would likely be the most performant approach, I need access to a video frame containing the video+overlay result as a CVPixelBuffer or vImage buffer.
This is for OS X 10.10.3, Objective-C.
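The approach I'm currently considering looks roughly like this (sketch only; it assumes the pixel buffers are kCVPixelFormatType_32BGRA, so the bitmap info would need adjusting for other RGBA32 layouts):

// Sketch: wrap the CVPixelBuffer's memory in a CGBitmapContext and draw the
// attributed string straight into it. Assumes kCVPixelFormatType_32BGRA buffers.
#import <Cocoa/Cocoa.h>
#import <CoreVideo/CoreVideo.h>

static void DrawOverlayIntoPixelBuffer(CVPixelBufferRef pixelBuffer,
                                       NSAttributedString *text)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef cgContext = CGBitmapContextCreate(
        CVPixelBufferGetBaseAddress(pixelBuffer),
        CVPixelBufferGetWidth(pixelBuffer),
        CVPixelBufferGetHeight(pixelBuffer),
        8,
        CVPixelBufferGetBytesPerRow(pixelBuffer),
        colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); // BGRA layout

    // Route AppKit string drawing into that context.
    NSGraphicsContext *nsContext =
        [NSGraphicsContext graphicsContextWithCGContext:cgContext flipped:NO];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:nsContext];

    // Optional background behind the text, then the text itself, in a corner.
    NSRect badge = NSMakeRect(20, 20, text.size.width + 10, text.size.height + 10);
    [[NSColor colorWithCalibratedWhite:0.0 alpha:0.5] setFill];
    NSRectFillUsingOperation(badge, NSCompositeSourceOver);
    [text drawAtPoint:NSMakePoint(badge.origin.x + 5, badge.origin.y + 5)];

    [NSGraphicsContext restoreGraphicsState];
    CGContextRelease(cgContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}

Whether locking the base address and spinning up a CGBitmapContext on every frame at 60fps is acceptable is exactly the performance question.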
Is it possible to use a UISlider for something other than audio volume adjustment? I'm thinking of creating a button that plays a sound when you press it. If the user moves the slider to 0.3, a different sound is played; likewise if the slider value is 0.6. The slider would just indicate the intensity at which the sound is played (each value maps to a different audio file). Or does anyone have another idea for how to make something like that happen?
I'd appreciate any advice.
Or would a Stepper be a better way to go?
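To illustrate the kind of thing I mean, a rough sketch (the file names and the 0.3/0.6 thresholds are just placeholders):

// Sketch: when the button is tapped, pick an audio file based on the slider value.
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface IntensityViewController : UIViewController
@property (nonatomic, strong) UISlider *slider;
@property (nonatomic, strong) AVAudioPlayer *player;
@end

@implementation IntensityViewController

- (IBAction)playTapped:(id)sender
{
    NSString *name;
    if (self.slider.value < 0.3f)      name = @"soft";    // placeholder file names
    else if (self.slider.value < 0.6f) name = @"medium";
    else                               name = @"loud";

    NSURL *url = [[NSBundle mainBundle] URLForResource:name withExtension:@"caf"];
    self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
    [self.player play];
}

@end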
I have a subclass of CAOpenGLLayer that is using, manipulating and displaying frames from an AVPlayer. I want to add playback controls to this layer, but I need to forward events from the playback controls to the AVPlayer. I have found a few examples of how to add familiar playback controls, like AVPlayerView, but all of them require me to pass in an actual movie file when what I want is just the interface that I can write custom delegates for. Any ideas?
There are currently no built-in, ready-to-use playback controls that you can attach your own targets and actions to. You'll need to roll your own UI if you are rendering things yourself.
If you are feeling sneaky though, you might be able to add your layer (in a view) to the AVPlayerView, remove the output that AVPlayerView adds from your player item, add your own video output, and let the AVPlayerLayer take control of the playback.
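If you go the roll-your-own route, the wiring itself is simple: plain AppKit controls whose actions drive the AVPlayer that feeds your layer. A sketch (class and selector names are placeholders):

// Sketch of the roll-your-own route: custom controls driving the AVPlayer directly.
#import <Cocoa/Cocoa.h>
#import <AVFoundation/AVFoundation.h>

@interface PlaybackControls : NSObject
@property (nonatomic, strong) AVPlayer *player;
@end

@implementation PlaybackControls

- (IBAction)togglePlayPause:(id)sender
{
    if (self.player.rate == 0.0) {
        [self.player play];
    } else {
        [self.player pause];
    }
}

- (IBAction)scrubberMoved:(NSSlider *)sender   // slider configured with min 0, max 1
{
    CMTime duration = self.player.currentItem.duration;
    if (CMTIME_IS_NUMERIC(duration)) {
        CMTime target = CMTimeMultiplyByFloat64(duration, sender.doubleValue);
        [self.player seekToTime:target
                toleranceBefore:kCMTimeZero
                 toleranceAfter:kCMTimeZero];
    }
}

@end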
I am creating an HTTP Live Streaming Client for Mac that will control video playback on a large screen. My goal is to have a control UI on the main screen, and full screen video on the secondary screen.
Using AVFoundation, I have successfully been able to open the stream and control all aspects of it from my control UI, and I am now attempting to duplicate the video on a secondary screen. This is proving more difficult than I imagined...
On the control screen, I have an AVPlayerLayer that is displaying the video content from an AVPlayer. My goal was to create another AVPlayerLayer, and send it the same player so that both players are playing the same video at the same time in two different views. However, that is not working.
Digging deeper, I found this in the AVFoundation docs:
You can create arbitrary numbers of player layers with the same AVPlayer object. Only the most-recently-created player layer will actually display the video content on-screen.
This is actually useless to me, because I need the video showing correctly in both views.
I can create a new instance of AVPlayerItem from the same AVAsset, then create a new AVPlayer and add it to a new AVPlayerLayer and have video show up, but they are no longer in sync because they are two different players generating two different audio streams playing different parts of the same stream.
Does anyone have any suggestions on how to get the same AVPlayer content into two different views? Perhaps some sort of CALayer mirroring trick?
AVSynchronizedLayer may help. I'm using it differently (to synchronize two different media objects rather than the same one), but in principle it should be possible to load the same item twice and then use an AVSynchronizedLayer to keep them synced.
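For what it's worth, creating one is a one-liner. Note, though, that AVSynchronizedLayer only slaves its sublayers' Core Animation timing to a player item's timebase; it doesn't render video itself, so whether it covers the two-view case is something you'd have to verify. A sketch:

// Sketch: an AVSynchronizedLayer ties its sublayers' animation timing to a player
// item's timebase. It does not display video; player layers (or your own rendering)
// are still needed for the pictures.
#import <AVFoundation/AVFoundation.h>
#import <QuartzCore/QuartzCore.h>

CALayer *MakeSyncedOverlay(AVPlayerItem *item, CALayer *animatedOverlay)
{
    AVSynchronizedLayer *syncLayer =
        [AVSynchronizedLayer synchronizedLayerWithPlayerItem:item];
    [syncLayer addSublayer:animatedOverlay];  // animations now follow item time
    return syncLayer;
}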
I see that this topic is very old, but I think an answer will still be helpful. You wrote that
I have an AVPlayerLayer that is displaying the video content from an AVPlayer. My goal was to create another AVPlayerLayer, and send it the same player so that both players are playing the same video at the same time in two different views. However, that is not working.
But it's working. I just tried it in my project. Here's my layer initialization code:
// First layer, attached to the shared player
AVPlayerLayer *playerLayer = [AVPlayerLayer new];
[playerLayer setPlayer:_testPlayer];
playerLayer.frame = CGRectMake(0, 0, _videoView.frame.size.width, _videoView.frame.size.height);
playerLayer.contentsGravity = kCAGravityResizeAspect;
playerLayer.videoGravity = AVLayerVideoGravityResizeAspect;
_defaultTransform = playerLayer.affineTransform;
[_videoView.layer insertSublayer:playerLayer atIndex:0];

// Second layer, driven by the very same AVPlayer instance
AVPlayerLayer *testLayer_1 = [AVPlayerLayer playerLayerWithPlayer:_testPlayer];
testLayer_1.frame = CGRectMake(100, 100, 200, 200);
testLayer_1.contentsGravity = kCAGravityResizeAspect;
testLayer_1.videoGravity = AVLayerVideoGravityResizeAspect;
[_videoView.layer insertSublayer:testLayer_1 atIndex:1];
And here's what I got: two AVPlayerLayers playing the same AVPlayerItem in perfect sync.
Apple's docs now state this:
You can create arbitrary numbers of player layers with the same AVPlayer object, but you should limit the number of layers you create to avoid impacting playback performance.
link to docs
This does indeed work in my app as well.