Displaying AVPlayer content on two views simultaneously - calayer

I am creating an HTTP Live Streaming Client for Mac that will control video playback on a large screen. My goal is to have a control UI on the main screen, and full screen video on the secondary screen.
Using AVFoundation, I have successfully been able to open the stream and control all aspects of it from my control UI, and I am now attempting to duplicate the video on a secondary screen. This is proving more difficult than I imagined...
On the control screen, I have an AVPlayerLayer that is displaying the video content from an AVPlayer. My goal was to create another AVPlayerLayer, and send it the same player so that both players are playing the same video at the same time in two different views. However, that is not working.
Digging deeper, I found this in the AVFoundation docs:
You can create arbitrary numbers of player layers with the same AVPlayer object. Only the most-recently-created player layer will actually display the video content on-screen.
This is actually useless to me, because I need the video showing correctly in both views.
I can create a new AVPlayerItem from the same AVAsset, then create a new AVPlayer, attach it to a new AVPlayerLayer, and get video to show up, but the two are no longer in sync, because they are two different players generating two different audio streams and playing different parts of the same stream.
Does anyone have any suggestions on how to get the same AVPlayer content into two different views? Perhaps some sort of CALayer mirroring trick?

AVSynchronizedLayer may help. I'm using it differently (to synchronize two different media objects rather than the same one), but in principle it should be possible to load the same item twice and then use an AVSynchronizedLayer to keep them synced.
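For reference, here is a minimal, untested sketch of what AVSynchronizedLayer actually ties together: the Core Animation timing of its sublayers and an AVPlayerItem's timeline. The names playerItem and containerView are placeholders, not from the question.
AVSynchronizedLayer *syncLayer = [AVSynchronizedLayer synchronizedLayerWithPlayerItem:playerItem];
syncLayer.frame = containerView.bounds;

CALayer *overlay = [CALayer layer];               // anything animated on this sublayer...
overlay.frame = CGRectMake(0, 0, 100, 100);
overlay.backgroundColor = [NSColor whiteColor].CGColor;
[syncLayer addSublayer:overlay];                  // ...now runs on the player item's timeline

CABasicAnimation *slide = [CABasicAnimation animationWithKeyPath:@"position.x"];
slide.fromValue = @0;
slide.toValue = @500;
slide.beginTime = AVCoreAnimationBeginTimeAtZero; // "time zero of the item", not "now"
slide.duration = 10.0;
slide.removedOnCompletion = NO;
[overlay addAnimation:slide forKey:@"slide"];

[containerView.layer addSublayer:syncLayer];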

I see that this topic is quite old, but I think it could still be helpful. You wrote that
I have an AVPlayerLayer that is displaying the video content from an AVPlayer. My goal was to create another AVPlayerLayer, and send it the same player so that both players are playing the same video at the same time in two different views. However, that is not working.
But it does work. I just tried it in my project. Here's my layer initialization code:
AVPlayerLayer *playerLayer = [AVPlayerLayer new];
[playerLayer setPlayer:_testPlayer];
playerLayer.frame = CGRectMake(0, 0, _videoView.frame.size.width, _videoView.frame.size.height);
playerLayer.contentsGravity = kCAGravityResizeAspect;
playerLayer.videoGravity = AVLayerVideoGravityResizeAspect;
_defaultTransform = playerLayer.affineTransform;
[_videoView.layer insertSublayer:playerLayer atIndex:0];
AVPlayerLayer *testLayer_1 = [AVPlayerLayer playerLayerWithPlayer:_testPlayer];
testLayer_1.frame = CGRectMake(100, 100, 200, 200);
testLayer_1.contentsGravity = kCAGravityResizeAspect;
testLayer_1.videoGravity = AVLayerVideoGravityResizeAspect;
[_videoView.layer insertSublayer:testLayer_1 atIndex:1];
And here's the result: two AVPlayerLayers playing the same AVPlayerItem in perfect sync.

Apple's docs now state this:
You can create arbitrary numbers of player layers with the same AVPlayer object, but you should limit the number of layers you create to avoid impacting playback performance.
This does indeed work in my app as well.
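Given that, the original control-screen / secondary-screen setup can be sketched roughly as below (untested). Here _player is the AVPlayer already driving the control UI's layer, and the borderless full-screen window is just one way to get a layer-backed view onto the second display.
// Rough, untested sketch: mirror the existing _player onto a borderless,
// full-screen window on the secondary display. Assumes AVFoundation and
// QuartzCore are imported and _player already drives the control-screen layer.
NSArray *screens = [NSScreen screens];
if ([screens count] > 1) {
    NSScreen *secondary = [screens objectAtIndex:1];
    NSRect contentRect = NSMakeRect(0, 0, NSWidth(secondary.frame), NSHeight(secondary.frame));

    NSWindow *videoWindow = [[NSWindow alloc] initWithContentRect:contentRect
                                                        styleMask:NSBorderlessWindowMask
                                                          backing:NSBackingStoreBuffered
                                                            defer:NO
                                                           screen:secondary];

    NSView *videoView = [[NSView alloc] initWithFrame:contentRect];
    videoView.wantsLayer = YES;                      // layer-backed so it can host an AVPlayerLayer

    AVPlayerLayer *mirrorLayer = [AVPlayerLayer playerLayerWithPlayer:_player]; // same player as the control UI
    mirrorLayer.frame = videoView.bounds;
    mirrorLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    mirrorLayer.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;
    [videoView.layer addSublayer:mirrorLayer];

    [videoWindow setContentView:videoView];
    [videoWindow orderFront:nil];
}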

Related

Qt5 Separating a video widget

I am trying to find a way to create a video widget and separate it into two parts, in order to process stereo videos:
The first one would play a part of the video;
The second one would play the other part of the video.
I currently do not know where to start. I am searching around the Qt Multimedia module, but I do not know how to achieve this behavior.
Does anyone have an idea?
I was also thinking of building two video widgets and running them in two threads, but they would have to be perfectly synchronized. The idea was to split the video into two with ffmpeg and assign each part to a video widget. However, I do not think that would be easy to achieve (every frame would have to stay in sync).
Thanks for your answers.
If your stereo video data is encoded in some special format that needs decoding at the codec/container level, I think that the QtMultimedia stuff in Qt is too basic for this kind of use case, as it does not allow tuning into "one stream" of a multi-stream transport container.
However, if you have alternating scan-lines, alternating frames, or a "side-by-side" or "over-and-under" image per frame encoded in a "normal" video stream, then all you have to do is intercept the frames as they are being decoded, split each frame into two QImages, and display them.
That is definitely doable!
However, depending on your video source and even the platform, you might want to select different methods. For example, if you are using a QCamera as the source of your video, you could use the QVideoProbe or QViewFinder approaches. Interestingly, the availability of those methods on different platforms varies, so definitely figure that out first.
If you are decoding video using QMediaPlayer, QVideoProbe will probably be the way to go.
For an introduction to how you can grab frames using the different methods, please look at some of the examples from the official documentation on the subject.
Here is a short example of using the QVideoProbe approach:
videoProbe = new QVideoProbe(this);
// Here, myVideoSource is a camera or other media object compatible with QVideoProbe
if (videoProbe->setSource(myVideoSource)) {
    // Probing succeeded, videoProbe->isValid() should be true.
    connect(videoProbe, SIGNAL(videoFrameProbed(QVideoFrame)),
            this, SLOT(processIndividualFrame(QVideoFrame)));
}
// Cameras need to be started. Do whatever your video source requires to start here
myVideoSource->start();
// [...]
// This is the slot where the magic happens (separating each single frame from video into two `QImage`s and posting the result to two `QLabel`s for example):
void processIndividualFrame(const QVideoFrame &frame)
{
    QVideoFrame cloneFrame(frame);
    cloneFrame.map(QAbstractVideoBuffer::ReadOnly);
    const QImage image(cloneFrame.bits(),
                       cloneFrame.width(),
                       cloneFrame.height(),
                       QVideoFrame::imageFormatFromPixelFormat(cloneFrame.pixelFormat()));

    QSize sz = image.size();
    const int w = sz.width();
    const int h2 = sz.height() / 2;

    // Assumes "over-and-under" placement of stereo data for simplicity.
    // If you instead need access to individual scanlines, have a look at QImage::scanLine().
    QImage leftImage = image.copy(0, 0, w, h2);
    QImage rightImage = image.copy(0, h2, w, h2);

    // Unmap only after copying, since the QImage above wraps the mapped buffer without copying it
    cloneFrame.unmap();

    // Assumes you have a UI set up with labels named as below, and with sizing / layout set up correctly
    ui->myLeftEyeLabel->setPixmap(QPixmap::fromImage(leftImage));
    ui->myRightEyeLabel->setPixmap(QPixmap::fromImage(rightImage));
    // Should play back rather smoothly since both labels are updated at effectively the same time
}
I hope this was useful.
BIG FAT WARNING: Only parts of this code have been tested or even compiled!

VLCJ EmbeddedMediaPlayerComponent player still shows old video image after preparing a new video

The following is the way I load a video (in the actual code, the variables are member variables of the player class). I do not want the video to be played right away which is the reason I use prepareMedia(). When the application is ready to play the video, I call player.play().
However, my player view (I add EmbeddedMediaPlayerComponent to JPanel which is set as ContentPane of a JFrame) still shows the old video after running the following code with a new "videoPath" value. The player view shows the new video only after I call player.play().
EmbeddedMediaPlayerComponent mediaPlayerComponent = new EmbeddedMediaPlayerComponent();
MediaPlayer player = mediaPlayerComponent.getMediaPlayer();
player.prepareMedia(videoPath);
Is there any way I can get the player to show the new video image (or at least remove the old video image) without starting to play it? I tried calling methods such as repaint() from mediaPlayerComponent and stop() from player, and overriding MediaPlayerEventAdapter methods such as mediaFreed(), but nothing I have tried so far works.
It is a feature of VLC/LibVLC that the final frame of the video is displayed when the video ends, so you have to find a workaround.
A good solution is to use a CardLayout with two views, one for the media player component (or the Canvas used for the video surface) and another view simply with a blank (black) JPanel.
The idea then is to listen for the video starting/stopping/finishing and show the appropriate view in your card layout.
If you add a MediaPlayerEventListener and implement the playing, stopped, finished and error events you should cover all cases.
For example: in the "playing" event you switch your card layout to show the video view, in the "stopped", "finished" and "error" events you switch your card layout to show the blank view.
The view doesn't have to be black of course, you can do whatever you want like show an image.
Also note that the media player events will NOT be delivered on the Swing Event Dispatch Thread, so you will need to use SwingUtilities#invokeLater to properly switch your view.

Cocoa video playback control delegates

I have a subclass of CAOpenGLLayer that is using, manipulating and displaying frames from an AVPlayer. I want to add playback controls to this layer, but I need to forward events from the playback controls to the AVPlayer. I have found a few examples of how to add familiar playback controls, like AVPlayerView, but all of them require me to pass in an actual movie file when what I want is just the interface that I can write custom delegates for. Any ideas?
There are currently no built-in, ready-to-use playback controls that you can attach your own targets and actions to. You'll need to roll your own UI if you are rendering things by yourself.
If you are feeling sneaky though, you might be able to add your layer (in a view) to the AVPlayerView, remove the output that AVPlayerView adds from your player item, add your own video output, and let the AVPlayerLayer take control of the playback.
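If you do roll your own controls, the forwarding is mostly just target/action methods that talk to the AVPlayer. A hedged sketch (untested; _player, the action names, and the 0.0-1.0 slider range are assumptions):
- (IBAction)togglePlayPause:(id)sender
{
    if (_player.rate == 0.0) {
        [_player play];    // resume (or start) playback
    } else {
        [_player pause];
    }
}

- (IBAction)seekSliderChanged:(NSSlider *)sender
{
    CMTime duration = _player.currentItem.duration;
    if (CMTIME_IS_NUMERIC(duration)) {
        // Map the slider's 0.0-1.0 value onto the item's duration and seek there
        double seconds = sender.doubleValue * CMTimeGetSeconds(duration);
        [_player seekToTime:CMTimeMakeWithSeconds(seconds, NSEC_PER_SEC)];
    }
}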

Should I use NSOperation or NSRunLoop?

I am trying to monitor a stream of video output from a FireWire camera. I have created an Interface Builder interface with buttons and an NSImageView. While image monitoring is occurring within an endless loop, I want to:
change some camera parameters on the fly (gain, gamma, etc.)
tell the monitoring to stop so I can save an image to a file (set a flag that stops the while loop)
Using the button features, I have been unable to loop the video frame monitor while still watching for a button press (much like polling for a keypress in C). Two options present themselves:
Initiate a new run loop (for which I cannot get an autoreleasepool to function ...)
Initiate an NSOperation - how do I do this in a way which allows me to connect with an Xcode button push?
The documentation is very obtuse about the creation of such objects. If I create an NSOperation as per the examples I've found, there seems to be no way to communicate with it from an object in Interface Builder. When I create an NSRunLoop, I get an object leak error, and I can find no example of how to create an autorelease pool that actually responds to the run loop I've created. Never mind that I haven't even attempted to choose which objects get sampled by the secondary run loop ...
Because Objective C is (obviously!) not my native tongue, I am looking for solutions with baby steps, sorry to say ...
Thanks in advance
I've needed to do almost exactly the same as you, only with a continuous video display from the FireWire camera. In my case, I used the libdc1394 library to perform the frame capture and camera property adjustment for our FireWire cameras. I know you can also do this using some of the Carbon QuickTime functions, but I found libdc1394 to be a little easier to understand.
For the video capture loop, I tried a number of different approaches, from a separate thread that polls the camera and has locks around shared resources, to using one NSOperationQueue for interaction with the camera, and finally settled on using a CVDisplayLink to poll the camera in a way that matches the refresh rate of the screen.
The CVDisplayLink is configured using the following code:
CGDirectDisplayID displayID = CGMainDisplayID();
CVReturn error = kCVReturnSuccess;
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error)
{
    NSLog(@"DisplayLink created with error: %d", error);
    displayLink = NULL;
}
CVDisplayLinkSetOutputCallback(displayLink, renderCallback, self);
and it calls the following function to trigger the retrieval of a new camera frame:
static CVReturn renderCallback(CVDisplayLinkRef displayLink,
                               const CVTimeStamp *inNow,
                               const CVTimeStamp *inOutputTime,
                               CVOptionFlags flagsIn,
                               CVOptionFlags *flagsOut,
                               void *displayLinkContext)
{
    return [(SPVideoView *)displayLinkContext renderTime:inOutputTime];
}
The CVDisplayLink is started and stopped using the following:
- (void)startRequestingFrames
{
    CVDisplayLinkStart(displayLink);
}

- (void)stopRequestingFrames
{
    CVDisplayLinkStop(displayLink);
}
Rather than using a lock on the FireWire camera communications, whenever I need to adjust the exposure, gain, etc. I change corresponding instance variables and set the appropriate bits within a flag variable to indicate which settings to change. On the next retrieval of a frame, the callback method from the CVDisplayLink changes the appropriate settings on the camera to match the locally stored instance variables and clears that flag.
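That flag-bit pattern might look roughly like the following sketch (hypothetical names, not the original code). It assumes <libkern/OSAtomic.h> is included and instance variables such as volatile uint32_t _pendingSettings; and double _pendingGain; exist.
enum {
    kPendingGainChange  = 1 << 0,
    kPendingGammaChange = 1 << 1,
};

// UI thread: remember the new value and mark the corresponding bit as pending.
- (void)setGain:(double)gain
{
    _pendingGain = gain;
    OSAtomicOr32Barrier(kPendingGainChange, &_pendingSettings);
}

// Called from the CVDisplayLink callback before grabbing the next frame:
// apply whatever is pending on the camera, then clear those bits.
- (void)applyPendingCameraSettings
{
    if (_pendingSettings & kPendingGainChange) {
        // e.g. dc1394_feature_set_value(camera, DC1394_FEATURE_GAIN, ...) or equivalent
        OSAtomicAnd32Barrier((uint32_t)~kPendingGainChange, &_pendingSettings);
    }
    // ...same pattern for gamma, exposure, and the rest
}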
Display to the screen is handled through an NSOpenGLView (CAOpenGLLayer introduced too many visual artifacts when updating at this rate, and its update callbacks ran on the main thread). Apple has some extensions you can use to provide these frames as textures using DMA for better performance.
Unfortunately, nothing that I've described here is introductory-level stuff. I have about 2,000 lines of code for these camera-handling functions in our software and this took a long time to puzzle out. If Apple could add the manual camera settings adjustments to the QTKit Capture APIs, I could remove almost all of this.
If all you're trying to do is see/grab the output of a connected camera, the answer is probably neither.
Use QTKit's QTCaptureView. Problem solved. Want to grab a frame? Also no problem. Don't try to roll your own - QTKit's stuff is optimized and part of the OS. I'm pretty sure you can affect camera properties as you wanted but if not, plan B should work.
Plan B: Use a scheduled, recurring NSTimer to ask QTKit to grab a frame every so often, and apply your image manipulations to the frame (maybe with Core Image) before displaying it in your NSImageView.
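A rough sketch of that plan (untested; frameTimer, imageView, and -currentCameraFrame are hypothetical names, with -currentCameraFrame standing in for however you grab the latest frame from QTKit):
- (void)startMonitoring
{
    // Poll roughly 30 times per second; adjust to taste
    self.frameTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 30.0
                                                       target:self
                                                     selector:@selector(updatePreview:)
                                                     userInfo:nil
                                                      repeats:YES];
}

- (void)updatePreview:(NSTimer *)timer
{
    NSImage *frame = [self currentCameraFrame];   // hypothetical: the most recent captured frame
    // ...apply Core Image or other processing here before display...
    [self.imageView setImage:frame];
}

- (void)stopMonitoring
{
    [self.frameTimer invalidate];                 // stop polling, e.g. before saving an image to disk
    self.frameTimer = nil;
}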

Why is movie jumpy when I play simultaneously in 2 Cocoa views?

I have a Cocoa app that has two views. Both of these views are a subclass of QTMovieView. I want to play the same movie in both views (one view is a smaller preview of the larger view). Right now I'm doing:
QTMovie *movie = [[QTMovie alloc] initWithFile:path error:nil];
[largeView setMovie:movie];
[smallView setMovie:movie];
When I do this the movie is jumpy, it doesn't play smoothly. If I just set the movie to one or the other it seems to play just fine. I've tried multiple movies and they all do the same thing. Any ideas? Is there a better way to do this?
You are probably overtaxing a resource - CPU and/or disk I/O are your most likely culprits.
If you open two Quicktime windows can you play the movies at the same time without stuttering? Is your CPU maxed out (especially on a single-core machine)? Can you stream from a second source and fix the stuttering? You might try playing one of the movies off a USB hard drive or flash drive if you don't have two hard drives.
I'm not a QuickTime or Objective-C programmer, but I'd start by looking at mirroring the content instead of opening two movie instances. Maybe you can capture a section of the screen, shrink the contents, and dump it to a smaller preview window.
