I have a subclass of CAOpenGLLayer that pulls, manipulates and displays frames from an AVPlayer. I want to add playback controls to this layer, but I need to forward events from the playback controls to the AVPlayer. I have found a few examples of how to add the familiar playback controls, like AVPlayerView, but all of them require me to pass in an actual movie file, when what I want is just the interface that I can write my own delegates for. Any ideas?
There are currently no built-in, ready-to-use playback controls that you can attach your own targets and actions to. You'll need to roll your own UI if you are rendering things yourself.
If you are feeling sneaky, though, you might be able to add your layer (hosted in a view) to the AVPlayerView, remove the output that AVPlayerView adds from your player item, add your own video output, and let the AVPlayerLayer take control of the playback.
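If you want to experiment with that route, a rough, untested sketch might look something like this; player, playerItem, customVideoView and containerView are placeholders for objects you already have, and whether AVPlayerView tolerates having its output swapped out is not guaranteed:
#import <AVFoundation/AVFoundation.h>
#import <AVKit/AVKit.h>

// Let AVPlayerView supply the familiar controls while your CAOpenGLLayer keeps rendering.
AVPlayerView *playerView = [[AVPlayerView alloc] initWithFrame:containerView.bounds];
playerView.controlsStyle = AVPlayerViewControlsStyleFloating;
playerView.player = player;   // the standard controls now drive your AVPlayer
[containerView addSubview:playerView];

// Drop whatever outputs are attached to the item and install your own,
// so your layer is the one receiving the pixel buffers.
for (AVPlayerItemOutput *output in [playerItem.outputs copy]) {
    [playerItem removeOutput:output];
}
AVPlayerItemVideoOutput *videoOutput = [[AVPlayerItemVideoOutput alloc]
    initWithPixelBufferAttributes:@{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) }];
[playerItem addOutput:videoOutput];

// Place the view hosting your CAOpenGLLayer over AVPlayerView's own video area.
[playerView addSubview:customVideoView];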
I've seen answers that say things like "60fps is maximum, no point to go higher". But my application needs to display a 120fps left-right time-alternating stereoscopic signal to my DLP-Link 3D projector.
I know this is possible, because I did manage to play a 120fps test video at that speed using AVPlayerView. Now I want to convert Side-By-Side videos to that output signal format. To that end, I'm using AVPlayerLayer with a CoreVideo display link (CVDisplayLink).
Q: How can I achieve a 120fps refresh rate with a custom subclass of NSView on the Mac?
The following is how I load a video (in the actual code, the variables are member variables of the player class). I do not want the video to be played right away, which is why I use prepareMedia(). When the application is ready to play the video, I call player.play().
However, my player view (I add the EmbeddedMediaPlayerComponent to a JPanel, which is set as the ContentPane of a JFrame) still shows the old video after running the code below with a new "videoPath" value. The player view shows the new video only after I call player.play().
EmbeddedMediaPlayerComponent mediaPlayerComponent = new EmbeddedMediaPlayerComponent();
MediaPlayer player = mediaPlayerComponent.getMediaPlayer();
player.prepareMedia(videoPath);
Is there any way I can get the player to show the new video image (or at least remove the old video image) without starting to play it? I tried calling methods such as repaint() on mediaPlayerComponent and stop() on player, inside overridden MediaPlayerEventAdapter methods such as mediaFreed(), but nothing I have tried so far works.
It is a feature of VLC/LibVLC that the final frame of the video is displayed when the video ends, so you have to find a workaround.
A good solution is to use a CardLayout with two views: one for the media player component (or the Canvas used for the video surface), and another that is simply a blank (black) JPanel.
The idea then is to listen for the video starting/stopping/finishing and show the appropriate view in your card layout.
If you add a MediaPlayerEventListener and implement the playing, stopped, finished and error events, you should cover all cases.
For example: in the "playing" event you switch your card layout to show the video view, in the "stopped", "finished" and "error" events you switch your card layout to show the blank view.
The view doesn't have to be black of course, you can do whatever you want like show an image.
Also note that the media player events will NOT be delivered on the Swing Event Dispatch Thread, so you will need to use SwingUtilities#invokeLater to properly switch your view.
If you look at SoundCloud you'll notice that when you play a song it plays in the main content of the page as well as in a 'footer' player. I'm trying to achieve something similar with jPlayer or SoundManager. The main content of my pages is ajaxed, while the footer stays consistent to support the site's continuous player.
But my question is: how do you play music from the persistent footer player while animating the main content player and having seek functions on both?
With jPlayer, I assume you would need to listen for $.jPlayer.event.play, $.jPlayer.event.pause, $.jPlayer.event.ended and $.jPlayer.event.seeked in your main player and update the secondary interface accordingly.
You may need to create that secondary interface yourself (play/pause buttons, seek bar, etc.) and add event handlers for it that pass the corresponding parameters to the main player.
I am creating an HTTP Live Streaming Client for Mac that will control video playback on a large screen. My goal is to have a control UI on the main screen, and full screen video on the secondary screen.
Using AVFoundation, I have successfully been able to open the stream and control all aspects of it from my control UI, and I am now attempting to duplicate the video on a secondary screen. This is proving more difficult than I imagined...
On the control screen, I have an AVPlayerLayer that is displaying the video content from an AVPlayer. My goal was to create another AVPlayerLayer, and send it the same player so that both players are playing the same video at the same time in two different views. However, that is not working.
Digging deeper, I found this in the AVFoundation docs:
You can create arbitrary numbers of player layers with the same AVPlayer object. Only the most-recently-created player layer will actually display the video content on-screen.
This is actually useless to me, because I need the video showing correctly in both views.
I can create a new instance of AVPlayerItem from the same AVAsset, then create a new AVPlayer and add it to a new AVPlayerLayer and have video show up, but they are no longer in sync because they are two different players generating two different audio streams playing different parts of the same stream.
Does anyone have any suggestions on how to get the same AVPlayer content into two different views? Perhaps some sort of CALayer mirroring trick?
AVSynchronizedLayer may help. I'm using it differently (to synchronize two different media objects rather than the same one), but in principle it should be possible to load the same item twice and then use an AVSynchronizedLayer to keep them synced.
I see that this topic is very old, but I think it could still be helpful. You wrote that:
I have an AVPlayerLayer that is displaying the video content from an AVPlayer. My goal was to create another AVPlayerLayer, and send it the same player so that both players are playing the same video at the same time in two different views. However, that is not working.
But it does work. I just tried it in my project. Here's my layer initialization code:
AVPlayerLayer *playerLayer = [AVPlayerLayer new];
[playerLayer setPlayer:_testPlayer];
playerLayer.frame = CGRectMake(0, 0, _videoView.frame.size.width, _videoView.frame.size.height);
playerLayer.contentsGravity = kCAGravityResizeAspect;
playerLayer.videoGravity = AVLayerVideoGravityResizeAspect;
_defaultTransform = playerLayer.affineTransform;
[_videoView.layer insertSublayer:playerLayer atIndex:0];
AVPlayerLayer *testLayer_1 = [AVPlayerLayer playerLayerWithPlayer:_testPlayer];
testLayer_1.frame = CGRectMake(100, 100, 200, 200);
testLayer_1.contentsGravity = kCAGravityResizeAspect;
testLayer_1.videoGravity = AVLayerVideoGravityResizeAspect;
[_videoView.layer insertSublayer:testLayer_1 atIndex:1];
And here's what I got: two AVPlayerLayers playing the same AVPlayerItem in perfect sync.
Apple's docs now state this:
You can create arbitrary numbers of player layers with the same AVPlayer object, but you should limit the number of layers you create to avoid impacting playback performance.
link to docs
This does indeed work in my app as well.
I am trying to monitor a stream of video output from a FireWire camera. I have created an Interface Builder interface with buttons and an NSImageView. While image monitoring is occurring within an endless loop, I want to:
change some camera parameters on the fly (gain, gamma, etc.)
tell the monitoring to stop so I can save an image to a file (set a flag that stops the while loop)
Using the button features, I have been unable to loop the video frame monitor while still looking for a button press (much like using the keypressed feature from C). Two options present themselves:
Initiate a new run loop (for which I cannot get an autoreleasepool to function ...)
Initiate an NSOperation - how do I do this in a way which allows me to connect with an Xcode button push?
The documentation is very obtuse about the creation of such objects. If I create an NSOperation as per the examples I've found, there seems to be no way to communicate with it from an object created in Interface Builder. When I create an NSRunLoop, I get an object leak error, and I can find no example of how to create an autorelease pool that actually responds to the run loop I've created. Never mind that I haven't even attempted to choose which objects get sampled by the secondary run loop ...
Because Objective-C is (obviously!) not my native tongue, I am looking for solutions with baby steps, sorry to say ...
Thanks in advance
I've needed to do almost exactly the same as you, only with a continuous video display from the FireWire camera. In my case, I used the libdc1394 library to perform the frame capture and camera property adjustment for our FireWire cameras. I know you can also do this using some of the Carbon QuickTime functions, but I found libdc1394 to be a little easier to understand.
For the video capture loop, I tried a number of different approaches, from a separate thread that polls the camera and has locks around shared resources, to using one NSOperationQueue for interaction with the camera, and finally settled on using a CVDisplayLink to poll the camera in a way that matches the refresh rate of the screen.
The CVDisplayLink is configured using the following code:
CGDirectDisplayID displayID = CGMainDisplayID();
CVReturn error = kCVReturnSuccess;
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error)
{
    NSLog(@"DisplayLink created with error:%d", error);
    displayLink = NULL;
}
CVDisplayLinkSetOutputCallback(displayLink, renderCallback, self);
and it calls the following function to trigger the retrieval of a new camera frame:
static CVReturn renderCallback(CVDisplayLinkRef displayLink,
                               const CVTimeStamp *inNow,
                               const CVTimeStamp *inOutputTime,
                               CVOptionFlags flagsIn,
                               CVOptionFlags *flagsOut,
                               void *displayLinkContext)
{
    return [(SPVideoView *)displayLinkContext renderTime:inOutputTime];
}
The CVDisplayLink is started and stopped using the following:
- (void)startRequestingFrames;
{
    CVDisplayLinkStart(displayLink);
}

- (void)stopRequestingFrames;
{
    CVDisplayLinkStop(displayLink);
}
Rather than using a lock on the FireWire camera communications, whenever I need to adjust the exposure, gain, etc., I change the corresponding instance variables and set the appropriate bits within a flag variable to indicate which settings to change. On the next retrieval of a frame, the callback method from the CVDisplayLink changes the appropriate settings on the camera to match the locally stored instance variables and clears that flag.
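To make that pattern a bit more concrete, here is a small hypothetical sketch of the bookkeeping; the flag names, the ivars and the libdc1394 calls in the comments are illustrative, not the code from this answer:
#import <libkern/OSAtomic.h>

// Sketch of the "set ivar + dirty bit, apply on next frame" handoff.
// Assumes ivars: volatile uint32_t settingsDirtyFlags; uint32_t pendingGain, pendingExposure;
enum {
    SPCameraSettingGain     = 1 << 0,
    SPCameraSettingExposure = 1 << 1,
};

// Called from the UI on the main thread: remember the value, mark it dirty.
- (void)setPendingGain:(uint32_t)gain
{
    pendingGain = gain;
    OSAtomicOr32Barrier(SPCameraSettingGain, &settingsDirtyFlags);
}

// Called from the CVDisplayLink callback just before grabbing a frame:
// push any dirty settings to the camera, then clear those bits.
- (void)applyPendingCameraSettings
{
    uint32_t dirty = settingsDirtyFlags;
    if (dirty & SPCameraSettingGain) {
        // e.g. dc1394_feature_set_value(camera, DC1394_FEATURE_GAIN, pendingGain);
    }
    if (dirty & SPCameraSettingExposure) {
        // e.g. dc1394_feature_set_value(camera, DC1394_FEATURE_EXPOSURE, pendingExposure);
    }
    OSAtomicAnd32Barrier(~dirty, &settingsDirtyFlags);
}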
Display to the screen is handled through an NSOpenGLView (CAOpenGLLayer introduced too many visual artifacts when updating at this rate, and its update callbacks ran on the main thread). Apple has some extensions you can use to provide these frames as textures using DMA for better performance.
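The extensions referred to are, as far as I know, GL_APPLE_client_storage and GL_APPLE_texture_range; a bare-bones sketch of that upload path (cameraTexture, frameBuffer, width and height stand in for your own state) looks roughly like this:
#import <OpenGL/gl.h>
#import <OpenGL/glext.h>

// DMA-friendly texture streaming using Apple's client-storage / texture-range hints.
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, cameraTexture);

// Tell the driver it may pull the pixels straight from our buffer (no extra copy).
glTextureRangeAPPLE(GL_TEXTURE_RECTANGLE_ARB, width * height * 4, frameBuffer);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_STORAGE_HINT_APPLE, GL_STORAGE_SHARED_APPLE);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);

// Re-specify the texture from the camera buffer each frame; with the hints above
// this is close to a straight DMA transfer.
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA, width, height, 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, frameBuffer);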
Unfortunately, nothing that I've described here is introductory-level stuff. I have about 2,000 lines of code for these camera-handling functions in our software and this took a long time to puzzle out. If Apple could add the manual camera settings adjustments to the QTKit Capture APIs, I could remove almost all of this.
If all you're trying to do is see/grab the output of a connected camera, the answer is probably neither.
Use QTKit's QTCaptureView. Problem solved. Want to grab a frame? Also no problem. Don't try to roll your own - QTKit's stuff is optimized and part of the OS. I'm pretty sure you can affect camera properties as you wanted, but if not, plan B should work.
Plan B: Use a scheduled, recurring NSTimer to ask QTKit to grab a frame every so often ("how" linked above) and apply your image manipulations to the frame (maybe with Core Image) before displaying it in your NSImageView.
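For what it's worth, the basic QTCaptureView wiring for plan A is only a handful of lines. This is a from-memory sketch with error handling trimmed; captureView is assumed to be a QTCaptureView outlet in your nib:
#import <QTKit/QTKit.h>

// Minimal QTKit preview: session -> default video device -> QTCaptureView.
NSError *error = nil;
QTCaptureSession *session = [[QTCaptureSession alloc] init];

QTCaptureDevice *device = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
[device open:&error];

QTCaptureDeviceInput *input = [[QTCaptureDeviceInput alloc] initWithDevice:device];
[session addInput:input error:&error];

[captureView setCaptureSession:session];   // captureView: the QTCaptureView in your nib
[session startRunning];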