I have set up a Windows Media Foundation (WMF) session (built an IMFTopology object with a source pointing to a webcam and a standard EVR for screen output), assigned it to an IMFMediaSession and started a preview. All of this works great.
Now I stop the session (waiting for the actual stop), change the source's resolution (setting an appropriate IMFMediaType via its IMFMediaTypeHandler) and then build a new topology with that source and a newly created IMFActivate object for the EVR. I also resize the output window to match the new frame size.
When I start that new session there's no image (or the image is garbled, or cut off at the bottom, depending on the change in resolution). It is almost as if the new topology were trying to reuse the previously set-up EVR, and that reuse is not working correctly.
I tried setting the new media type on the EVR when generating a new one, tried forcing the new window size on the EVR (via a call to SetWindowPos()), tried getting the output node by its previously assigned stream ID and setting its preferred input format... Nothing worked; I get the same black (or garbled) image when I start playback.
The only time the "new" session plays correctly is when I switch back to the original source format. Then it continues as if nothing had happened.
Why is that? How do I fix this?
Not providing the source code as there's no easy way to just provide the relevant parts. Generally my code closely follows the sample from MSDN's article on creating a Media Session for playing back a file.
According to Microsoft's documentation, the IMFMediaSession manages starting and stopping the source, so I'm relying on that when changing the source's video format (otherwise the application fails).
If you want to build a genuinely new topology, you need to release all of the Media Foundation objects first (source, sink, topology, and so on).
If you try to reuse them instead, things get complicated.
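As a rough sketch (using MSDN's SafeRelease helper and placeholder member names such as m_pSession, m_pSource, m_pTopology and m_pSinkActivate), the teardown before building the new topology could look roughly like this:

HRESULT TearDownOldPreview()
{
    HRESULT hr = S_OK;
    if (m_pSession)
    {
        hr = m_pSession->Close();   // asynchronous
        // ... wait for the MESessionClosed event before continuing ...
    }
    if (m_pSource)
    {
        m_pSource->Shutdown();      // shut down the webcam source completely
    }
    if (m_pSession)
    {
        m_pSession->Shutdown();
    }
    SafeRelease(&m_pTopology);      // topology, EVR activate object, and so on
    SafeRelease(&m_pSinkActivate);
    SafeRelease(&m_pSource);
    SafeRelease(&m_pSession);
    return hr;
}

After that, recreate the source at the new resolution, create a fresh IMFActivate for the EVR, build a new IMFTopology, call MFCreateMediaSession() again and hand the new topology to the new session with SetTopology().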
I'm getting a double image from the depth sensor, both in the Explorer and via the Unity SDK. The object (in this instance my hand) is about 40-100 cm away, which should be within spec. Does anybody know if this is "normal" behavior? See the screenshot here (build KOT49H.160920, and yes, the cameras are clean :)
The point cloud data callback hands you a shared buffer.
You have to copy that buffer before processing it, because it will be overwritten with new depth data before the callback is invoked again.
It seems to me that you are processing the buffer in place. That leads to corrupt data: you end up rendering a mix of old and new depth frames.
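For illustration only (the real callback signature depends on your SDK, so the names below are made up), the copy step looks like this:

#include <stdlib.h>
#include <string.h>

static float  *g_pointCloudCopy = NULL;   /* our own copy of the latest cloud */
static size_t  g_copyCapacity   = 0;      /* capacity of the copy, in floats  */

/* Hypothetical point cloud callback: copy the shared buffer immediately,
   then do all processing and rendering from the copy. */
void OnPointCloud(const float *sharedBuffer, size_t numFloats)
{
    if (numFloats > g_copyCapacity)
    {
        free(g_pointCloudCopy);
        g_pointCloudCopy = malloc(numFloats * sizeof(float));
        if (g_pointCloudCopy == NULL)
        {
            g_copyCapacity = 0;
            return;
        }
        g_copyCapacity = numFloats;
    }
    memcpy(g_pointCloudCopy, sharedBuffer, numFloats * sizeof(float));
    /* ...process or hand g_pointCloudCopy to the renderer from here... */
}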
I am using wxWidgets to design a GUI on Windows. The requirement is: if the user has modified the frame size, I have to store the modified size and use it for the next session. I am able to store the size, but I still get the old size, not the modified one, in the next session. My window has several children (check boxes, text controls, labels). These controls are placed in a panel using sizers. Every time, the best size is queried and recalculated and SetClientSize(size) is called. Is this the reason why the modified size is not reflected?
First, don't save and restore the frame size yourself; use wxPersistentTLW, which does it for you. See the overview for more information and the "widgets" sample for an example of using it to preserve the frame geometry.
Second, the layout mechanism in wxWidgets is completely deterministic, so restoring the same frame size as during the last run should definitely result in the same positions and sizes being used for the children. If this isn't the case (I'm not really sure, as you don't actually say what the problem is), the most likely explanation is that your size saving/restoring code doesn't work correctly, so simply getting rid of it and using the built-in support should fix the problem (whatever it is).
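A minimal sketch of the built-in approach (the MyApp/MyFrame classes and the "MainFrame" name are placeholders) looks like this:

#include <wx/wx.h>
#include <wx/persist/toplevel.h>

bool MyApp::OnInit()
{
    MyFrame *frame = new MyFrame(NULL, wxID_ANY, "My Application");

    // Restores the geometry saved during the previous run (if any) and
    // arranges for the current geometry to be saved when the frame is
    // destroyed. Call it before Show() so the restored size takes effect.
    wxPersistentRegisterAndRestore(frame, "MainFrame");

    frame->Show();
    return true;
}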
I have a custom UISlider and use it to change the currentPlaybackTime of an MPMoviePlayerController object.
The problem is that when I scrub at a fast rate using the slider, it doesn't respond as quickly as I would like.
Is there a better way to get a fast, interactive scrubber on the iPad? I'm targeting OS 3.2 and later.
Well, there are two issues, and only one of them is under your direct control.
Multimedia content is commonly compressed using some kind of delta compression, so quick, exact seeking is not a trivial task. Since that is common to most content and you cannot change it directly, you will have to live with it.
The only way to increase seek responsiveness on the content side (when encoding) is to reduce the GOP size, that is, to have fewer P-frames between the I-frames.
When using a slider or a similar control, you could handle manual changes indirectly instead of directly tying the control to the current playback position. Run a timer-based job that, whenever the slider/scrubber has been moved, adjusts the playback position towards the new value. While the player is seeking, prevent the scrubber from receiving feedback from the current playback position, and allow it again once the player is back in the playing state. That way the user does not directly experience the clunky seek feedback.
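A rough sketch of that indirect approach (the player, scrubber, pendingSeekTime and seekPending properties are placeholders, not part of any Apple API) might look like this:

// Slider action: only record where the user wants to go.
- (void)scrubberMoved:(UISlider *)sender
{
    self.pendingSeekTime = sender.value * self.player.duration;
    self.seekPending = YES;
}

// Fired by a scheduled, recurring NSTimer (e.g. every 0.25 s).
- (void)seekTimerFired:(NSTimer *)timer
{
    MPMoviePlaybackState state = self.player.playbackState;

    // Apply the pending seek only when the player isn't already busy seeking.
    if (self.seekPending &&
        state != MPMoviePlaybackStateSeekingForward &&
        state != MPMoviePlaybackStateSeekingBackward)
    {
        self.player.currentPlaybackTime = self.pendingSeekTime;
        self.seekPending = NO;
    }

    // Feed the playback position back into the scrubber only while playing,
    // so it doesn't jump around during seeks.
    if (!self.seekPending && state == MPMoviePlaybackStatePlaying)
    {
        self.scrubber.value = self.player.currentPlaybackTime / self.player.duration;
    }
}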
I am trying to monitor a stream of video output from a FireWire camera. I have created an Interface Builder interface with buttons and an NSImageView. While image monitoring is occurring within an endless loop, I want to:
change some camera parameters on the fly (gain, gamma, etc.)
tell the monitoring to stop so I can save an image to a file (set a flag that stops the while loop)
Using buttons, I have been unable to keep the video-frame monitoring loop running while still watching for a button press (much like polling for a keypress in C). Two options present themselves:
Initiate a new run loop (for which I cannot get an autoreleasepool to function ...)
Initiate an NSOperation - how do I do this in a way that lets me connect it to a button press set up in Interface Builder?
The documentation is very obtuse about the creation of such objects. If I create an NSOperation as per the examples I've found, there seems to be no way to communicate with it from an object created in Interface Builder. When I create an NSRunLoop, I get an object leak error, and I can find no example of how to create an autorelease pool that actually responds to the run loop I've created. Never mind that I haven't even attempted to choose which objects get sampled by the secondary run loop ...
Because Objective C is (obviously!) not my native tongue, I am looking for solutions with baby steps, sorry to say ...
Thanks in advance
I've needed to do almost exactly the same as you, only with a continuous video display from the FireWire camera. In my case, I used the libdc1394 library to perform the frame capture and camera property adjustment for our FireWire cameras. I know you can also do this using some of the Carbon Quicktime functions, but I found libdc1394 to be a little easier to understand.
For the video capture loop, I tried a number of different approaches, from a separate thread that polls the camera and has locks around shared resources, to using one NSOperationQueue for interaction with the camera, and finally settled on using a CVDisplayLink to poll the camera in a way that matches the refresh rate of the screen.
The CVDisplayLink is configured using the following code:
CGDirectDisplayID displayID = CGMainDisplayID();
CVReturn error = kCVReturnSuccess;
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error)
{
    NSLog(@"DisplayLink created with error: %d", error);
    displayLink = NULL;
}
CVDisplayLinkSetOutputCallback(displayLink, renderCallback, self);
and it calls the following function to trigger the retrieval of a new camera frame:
static CVReturn renderCallback(CVDisplayLinkRef displayLink,
                               const CVTimeStamp *inNow,
                               const CVTimeStamp *inOutputTime,
                               CVOptionFlags flagsIn,
                               CVOptionFlags *flagsOut,
                               void *displayLinkContext)
{
    return [(SPVideoView *)displayLinkContext renderTime:inOutputTime];
}
The CVDisplayLink is started and stopped using the following:
- (void)startRequestingFrames
{
    CVDisplayLinkStart(displayLink);
}

- (void)stopRequestingFrames
{
    CVDisplayLinkStop(displayLink);
}
Rather than using a lock around the FireWire camera communications, whenever I need to adjust the exposure, gain, etc., I change the corresponding instance variables and set the appropriate bits within a flag variable to indicate which settings to change. On the next retrieval of a frame, the callback method from the CVDisplayLink changes the appropriate settings on the camera to match the locally stored instance variables and clears that flag.
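A stripped-down sketch of that flag mechanism (the gain/exposure/settingsToChange ivars and the libdc1394 calls in the comments are placeholders for whatever your camera layer uses) might look like this:

#import <libkern/OSAtomic.h>

enum {
    kSettingGain     = 1 << 0,
    kSettingExposure = 1 << 1,
};

// Called from the UI: remember the new value and mark it as dirty.
- (void)setGain:(uint32_t)newGain
{
    gain = newGain;
    OSAtomicOr32Barrier(kSettingGain, &settingsToChange);
}

// Called from the CVDisplayLink callback before grabbing the next frame.
- (void)applyPendingCameraSettings
{
    uint32_t pending = settingsToChange;
    if (pending & kSettingGain)
    {
        // e.g. dc1394_feature_set_value(camera, DC1394_FEATURE_GAIN, gain);
    }
    if (pending & kSettingExposure)
    {
        // e.g. dc1394_feature_set_value(camera, DC1394_FEATURE_EXPOSURE, exposure);
    }
    OSAtomicAnd32Barrier(~pending, &settingsToChange);
}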
Display to the screen is handled through an NSOpenGLView (CAOpenGLLayer introduced too many visual artifacts when updating at this rate, and its update callbacks ran on the main thread). Apple has some extensions you can use to provide these frames as textures using DMA for better performance.
Unfortunately, nothing that I've described here is introductory-level stuff. I have about 2,000 lines of code for these camera-handling functions in our software and this took a long time to puzzle out. If Apple could add the manual camera settings adjustments to the QTKit Capture APIs, I could remove almost all of this.
If all you're trying to do is see/grab the output of a connected camera, the answer is probably neither.
Use QTKit's QTCaptureView. Problem solved. Want to grab a frame? Also no problem. Don't try to roll your own - QTKit's stuff is optimized and part of the OS. I'm pretty sure you can affect camera properties as you wanted but if not, plan B should work.
Plan B: use a scheduled, recurring NSTimer to ask QTKit for a frame every so often ("how" linked above) and apply your image manipulations to the frame (maybe with Core Image) before displaying it in your NSImageView.
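One way to realize that plan, sketched very roughly here (it assumes a QTCaptureSession already wired to a QTCaptureDecompressedVideoOutput whose delegate is this object, and latestFrame is a placeholder ivar):

// Delegate method of QTCaptureDecompressedVideoOutput: just keep the newest frame.
- (void)captureOutput:(QTCaptureOutput *)captureOutput
  didOutputVideoFrame:(CVImageBufferRef)videoFrame
     withSampleBuffer:(QTSampleBuffer *)sampleBuffer
       fromConnection:(QTCaptureConnection *)connection
{
    CVImageBufferRef previousFrame;
    @synchronized (self)
    {
        previousFrame = latestFrame;
        latestFrame = CVBufferRetain(videoFrame);
    }
    CVBufferRelease(previousFrame);
}

// Fired by a scheduled, recurring NSTimer on the main thread.
- (void)grabTimerFired:(NSTimer *)timer
{
    @synchronized (self)
    {
        if (latestFrame != NULL)
        {
            CIImage *frame = [CIImage imageWithCVImageBuffer:latestFrame];
            // ...apply Core Image filters to `frame` here, then convert the
            //    result to an NSImage and hand it to the NSImageView...
        }
    }
}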
I would like to write a program that can mirror a portion of the main display into a new window. Ideally, this new window could then be displayed on an external monitor. I have seen a utility for a flight sim that does this on a PC (a multifunction display extractor).
Click here for a screenshot of the program (MFD Extractor)
This would be a live window, i.e. a constantly updated video display, not just a static graphic.
I have looked at screen magnifiers and VNC clients for ideas, but I think I need to write something from scratch. I have tried to do some reading on OS X programming, but where do I start in terms of gaining access to the display? I somehow need to extract the graphics from a particular program. Is it best to go near the final output stage (the individual pixels sent to the display) or somewhere nearer the window management stage?
Any ideas or pointers would be much appreciated. I just need somewhere to start from.
Regards,
There are a few ways to do this:
Quartz Display Services will let you get access to the video memory for a screen.
Quartz Window Services (a.k.a. CGWindow) will let you create an image of everything that lies below a window. If you create a borderless, transparent, empty, high-level window whose frame occupies an entire screen, everything below it will be everything on that screen. (Of course, you could create a smaller window in order to copy a section of the screen.)
There's also a way to do it using OpenGL that I never fully understood. That technique is demonstrated by a couple of code samples, OpenGLScreenSnapshot and OpenGLCaptureToMovie. It's more or less obsoleted by CGWindow, though.
Each of those will get you an image that you can then show or write to a file or something.
To show an image, use NSImageView or IKImageView. If you want to magnify it, IKImageView has a zoomFactor property, but if you want nearest-neighbor scaling (like Pixie, DigitalColor Meter, or xScope), I think you'll need to write a custom view for that (but even that isn't all that hard).
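If you go the CGWindow route, a minimal sketch of grabbing one section of the screen looks like this (the rectangle and window ID passed in are placeholders for whatever your capture window uses):

#import <ApplicationServices/ApplicationServices.h>

// Returns an image of everything on screen below `windowID` within `screenRect`
// (pass your borderless, transparent window's ID to exclude it from the capture).
CGImageRef CaptureScreenRegionBelowWindow(CGWindowID windowID, CGRect screenRect)
{
    return CGWindowListCreateImage(screenRect,
                                   kCGWindowListOptionOnScreenBelowWindow,
                                   windowID,
                                   kCGWindowImageDefault);
}

You can then wrap the returned CGImageRef in an NSImage (for example via -[NSImage initWithCGImage:size:]) for the view, and re-capture on a timer or CVDisplayLink to keep the mirrored window live.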