How to capture the backbuffer that has been rendered in another thread - Windows

I'm making a function that captures the screen within a game.
I implemented it like this:
void somecapture()
{
    IDirect3DSurface9 *backbuffer = NULL, *surface = NULL;
    D3DLOCKED_RECT lockedRect;
    // Grab the current backbuffer of the default swap chain.
    d3dDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backbuffer);
    // Copy it into a CPU-readable system-memory surface.
    d3dDevice->CreateOffscreenPlainSurface(width, height, format, D3DPOOL_SYSTEMMEM, &surface, NULL);
    d3dDevice->GetRenderTargetData(backbuffer, surface);
    surface->LockRect(&lockedRect, NULL, D3DLOCK_READONLY);
    // ... loop over lockedRect.pBits, memcpy ...
    surface->UnlockRect();
    surface->Release();
    backbuffer->Release();
}
The function works fine, but there is a problem.
The function is called from a sub-thread, not from the main thread.
The captured images are often in an incompletely rendered state.
I tried GetFrontBufferData, but since I only want to capture the game screen, it didn't work for me: windows overlaying the game were captured as well.
I would like to know how to correctly capture a screen that is being rendered on another thread.
Or is there a way to get the screenshot without a performance hit on the main thread?
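One way to avoid the half-rendered frames (a minimal sketch, not from the original thread; captureRequested and EndFrame are hypothetical names) is to let the render thread perform the copy only when a frame is complete, with the sub-thread merely requesting a capture:
#include <atomic>

std::atomic<bool> captureRequested{false};

// Render thread (main thread), once per frame, after all drawing is done:
void EndFrame()
{
    if (captureRequested.exchange(false))
        somecapture(); // the backbuffer now holds a fully rendered frame
    d3dDevice->Present(NULL, NULL, NULL, NULL);
}

// Sub-thread: just flag the request; the render thread does the copy.
void RequestCapture()
{
    captureRequested = true;
}
This also sidesteps the fact that a D3D9 device is not thread-safe unless it was created with D3DCREATE_MULTITHREADED. To keep the main-thread cost down, you can do only the GetRenderTargetData copy here and leave the LockRect/memcpy work to the sub-thread.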

Related

Rendering to CAMetalLayer from dedicated render thread / loop

In Windows World, a dedicated render thread would loop something similar to this:
void RenderThread()
{
while (!quit)
{
UpdateStates();
RenderToDirect3D();
// Can either present with no synchronisation,
// or synchronise after 1-4 vertical blanks.
// See docs for IDXGISwapChain::Present
PresentToSwapChain();
}
}
What is the equivalent in Cocoa with CAMetalLayer? All the examples deal with updates being done on the main thread, either using MTKView (with its internal timer) or using CADisplayLink in the iOS examples.
I want to be in control of the whole render loop, rather than just receiving a callback at some non-specified interval (and ideally blocking for V-Sync if it's enabled).
At some level, you're going to be throttled by the availability of drawables. A CAMetalLayer has a fixed pool of drawables available, and calling nextDrawable will block the current thread until a drawable becomes available. This doesn't imply you have to call nextDrawable at the top of your render loop, though.
If you want to draw on your own schedule without getting blocked waiting on a drawable, render to an off-screen renderbuffer (i.e., a MTLTexture with dimensions matching your drawable size), and then blit from the most-recently-drawn texture to a drawable's texture and present on whatever cadence you prefer. This can be useful for getting frame timings, but every frame you draw and then don't display is wasted work. It also increases the risk of judder.
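For example, the copy into the drawable might look like this (a sketch; commandBuffer and offscreenTexture are assumed to already exist, and the off-screen texture must match the drawable's pixel format for a blit):
id<CAMetalDrawable> drawable = [layer nextDrawable];
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
// Copy the most recently rendered off-screen frame into the drawable.
[blit copyFromTexture:offscreenTexture
          sourceSlice:0
          sourceLevel:0
         sourceOrigin:MTLOriginMake(0, 0, 0)
           sourceSize:MTLSizeMake(offscreenTexture.width, offscreenTexture.height, 1)
            toTexture:drawable.texture
     destinationSlice:0
     destinationLevel:0
    destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit endEncoding];
[commandBuffer presentDrawable:drawable];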
Your options are limited when it comes to getting callbacks that match the v-sync cadence. Your best bet is almost certainly a CVDisplayLink scheduled in the default and tracking run loop modes, though this has caveats.
You could use something like a counting semaphore in concert with a display link if you want to free-run without getting too far ahead.
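A sketch of that throttling (the loop mirrors the pseudocode from the question; RenderToMetal and PresentToLayer are stand-ins):
#include <dispatch/dispatch.h>

static dispatch_semaphore_t frameSemaphore; // dispatch_semaphore_create(3) at startup

void RenderThread(void)
{
    while (!quit) {
        // Block until fewer than 3 frames are in flight.
        dispatch_semaphore_wait(frameSemaphore, DISPATCH_TIME_FOREVER);
        UpdateStates();
        RenderToMetal();
        PresentToLayer();
        // Elsewhere: call dispatch_semaphore_signal(frameSemaphore) from the
        // display link callback, or from each command buffer's
        // addCompletedHandler: block.
    }
}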
If your application is able to maintain a real-time framerate, you'll normally be rendering a frame or two ahead of what's going on the glass, so you don't want to literally block on v-sync; you just want to inform the window server that you'd like presentation to match v-sync. On macOS, you do this by setting the layer's displaySyncEnabled to true (the default). Turning this off may cause tearing on certain displays.
At the point where you want to render to screen, you obtain the drawable from the layer by calling nextDrawable. You obtain the drawable's texture from its texture property. You use that texture to set up the render target (color attachment) of a MTLRenderPassDescriptor. For example:
id<CAMetalDrawable> drawable = [layer nextDrawable];
id<MTLTexture> texture = drawable.texture;
MTLRenderPassDescriptor *desc = [MTLRenderPassDescriptor renderPassDescriptor];
desc.colorAttachments[0].texture = texture;
From here, it's pretty similar to what you do in an MTKView's drawRect: method. You create a command buffer (if you don't already have one), create a render command encoder using the descriptor, encode drawing commands, end encoding, tell the command buffer to present the drawable (using a -presentDrawable:... method), and commit the command buffer. Whatever was drawn to the drawable's texture is what will end up on-screen when it's presented.
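In code, the rest of that sequence might look like this (a sketch continuing the snippet above; commandQueue is assumed to exist already):
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
id<MTLRenderCommandEncoder> encoder =
    [commandBuffer renderCommandEncoderWithDescriptor:desc];
// ... encode your draw calls here ...
[encoder endEncoding];
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];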
I agree with Warren that you probably don't really want to sync your loop with the display refresh. You want parallelism. You want the CPU to be working on the next frame while the GPU is rendering the most current frame (and the display is showing the last frame).
The fact that there's a limit on how many drawables may be in flight at once and that nextDrawable will block waiting for one will prevent your render loop from getting too far ahead. (You'll probably use some other synchronization before that, like for managing a small pool of buffers.) If you want only double-buffering and not triple-buffering, you can set the layer's maximumDrawableCount to 2 instead of its default value of 3.
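For example (a one-liner; the property is settable on recent macOS and iOS versions):
layer.maximumDrawableCount = 2; // double-buffering; the default is 3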

How to overlay TWO images and save them as ONE image? (Flash CS5)

I am a total noob with Flash, and I am no programmer, just good with Photoshop (image design).
Here is my problem. I found a simple drawing application and modified it (only the interface, not the code).
It provides a 'save' button that saves the drawing (drawn on a MovieClip) to disk. I then modified it and put another layer, a Graphic, on top of the MovieClip. But when I try to save, it only saves the MovieClip as a .png image. What I want is to save the MovieClip along with the Graphic layered on top of it as one .png image. How can I do that?
Maybe it will be more helpful if I provide the code for the 'save' button:
/* Save */
private function export():void
{
    var bmd:BitmapData = new BitmapData(600, 290);
    bmd.draw(board);
    var ba:ByteArray = PNGEncoder.encode(bmd);
    // ... (the rest of the function was cut off here)
}
private function completeHandler(event:Event):void
{
    var loader:URLLoader = URLLoader(event.target);
    trace("completeHandler: " + loader.data);
}
private function saveSuccessful(e:Event):void
{
    saveDialog = new SaveDialog();
    addChild(saveDialog);
    saveDialog.closeBtn.addEventListener(MouseEvent.MOUSE_UP, closeSaveDialog);
}
private function closeSaveDialog(e:MouseEvent):void
{
    removeChild(saveDialog);
}
private function save(e:MouseEvent):void
{
    export();
}
Check the code. There should be a construct fileReference.save(someOtherName) somewhere, probably with a different name for fileReference, but it will be declared nearby as FileReference = new FileReference(). Then track that someOtherName above; it should be the output of PNGEncoder.encode() of yet another variable, which should be of type BitmapData. Find out what is drawn onto that BitmapData; there will be a line bitmapData.draw(someMovieClip). Find out whether that someMovieClip is only the layer that's drawn upon in your program. You can add a similar line right after that one to draw your shape (you should have its name so you can reference it in code); this will draw your Graphic over the thing that you draw.
In case everything you draw fits within a single screen, just take a screenshot of your application in progress, load it into Photoshop and have fun with your graphics correctly on top of whatever it saves. Or, use an existing saved image as a background layer, place a screenshot as the foreground, clear the areas that are not your graphics and have some more fun.
EDIT: Okay, there it is: you have an export() function (which is incomplete in your copy-pasting, BTW), with all the relevant parts I've mentioned. There is a draw() call, a PNGEncoder.encode() call, and a BitmapData object. You should add another line of code after the first draw() call with something like this:
bmd.draw(yourGraphic);
yourGraphic is the name of the graphic you have manually added above the MovieClip, the one you can edit in its properties on stage. That should do it.
REPLY: I have put bmd.draw(topLayer); under the first draw() call, but when I publish a preview it says "Access of undefined property topLayer". I checked its properties; it mentions 'Instance of: topLayer' and it is a Graphic.

NSWindow Flip Animation - Like iWork

I'm attempting to implement window-flipping identical to that in iWork -
https://dl.dropbox.com/u/2338382/Window%20Flipping.mov
However, I can't quite seem to find a straightforward way of doing this. Some tutorials suggest sticking snapshot images of both sides of the window in a bigger, transparent window and animating those. This might work, but seems a bit hacky, and the sample code is always bloated. Other tutorials suggest using private APIs, and since this app may be MAS-bound, I'd like to avoid that.
How should I go about implementing this? Does anyone have any hints?
NSWindow+Flipping
I've rewritten the ancient code linked below into NSWindow+Flipping. You can grab these source files from my misc. Cocoa collection on GitHub, PCSnippets.
You can achieve this using the CoreGraphics framework (note that the CGS* transition functions below are private APIs). Take a look at this:
- (void) flipWithDuration: (float) duration forwards: (BOOL) forwards
{
CGSTransitionSpec spec;
CGSTransitionHandle transitionHandle;
CGSConnection cid = CGSDefaultConnection;
spec.type = CGSFlip;
spec.option = 0x80 | (forwards ? 2 : 1);
spec.wid = [self windowNumber];
spec.backColor = nil;
transitionHandle = -1;
CGSNewTransition (cid, &spec, &transitionHandle);
CGSInvokeTransition (cid, transitionHandle, duration);
[[NSRunLoop currentRunLoop] runUntilDate:
[NSDate dateWithTimeIntervalSinceNow: duration]];
CGSReleaseTransition (cid, transitionHandle);
}
You can download sample project: here. More info here.
UPDATE:
Take a look at this project. It's actually what you need.
About this project:
This category on NSWindow allows you to switch one window for another, using the "flip" animation popularized by Dashboard widgets. This was a nice excuse to learn something about CoreImage and how to use it in Cocoa. The demo app shows how to use it. Scroll to the end to see what's new in this version!
Basically, all you need to do is something like:
[someWindow flipToShowWindow:someOtherWindow forward:YES];
However, this code makes some assumptions:
— someWindow (the initial window) is already visible on-screen.
— someOtherWindow (the final window) is not already visible on-screen.
— Both windows can be resized to the same size, and aren't too large or complicated — the latter conditions being less important the faster your CPU/video card is.
— The windows won't go away while the animation is running.
— The user won't try to click on the animated window or do something while the animation is running.
The implementation is quite straightforward. I move the final window to the same position and size as the initial window. I then position a larger transparent window so it covers that frame. I render both window contents into CIImages, hide both windows, and start the animation. Each frame of the animation renders a perspective-distorted image into the transparent window. When the animation is done, I show the final window. Some tricks are used to make this faster; for instance, the flipping window is set up only once, and the final window is hidden by setting its alpha to 0.0, not by ordering it out and later ordering it back in again.
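For the "render both window contents into CIImages" step, one public way to snapshot a window looks like this (a sketch; the linked project may do it differently):
// Snapshot a window's contents into a CIImage for use as an animation texture.
CGImageRef snapshot = CGWindowListCreateImage(CGRectNull,
                                              kCGWindowListOptionIncludingWindow,
                                              (CGWindowID)[window windowNumber],
                                              kCGWindowImageBoundsIgnoreFraming);
CIImage *image = [CIImage imageWithCGImage:snapshot];
CGImageRelease(snapshot);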
The main bottleneck is the CoreImage filter, and the first frame always takes much longer to render — 4 or 6 times what it takes for the remaining frames. I suppose this time is spent with setup and downloading to the video card. So I calculate the time this takes and draw a second frame at a stage where the rotation begins to show. The animation begins at this point, but, if those first two frames took too long, I stretch the duration to make sure that at least 5 more frames will get rendered. This will happen with slow hardware or large windows. At the end, I don't render the last frame at all and swap the final window in instead.

Should I use NSOperation or NSRunLoop?

I am trying to monitor a stream of video output from a FireWire camera. I have created an Interface Builder interface with buttons and an NSImageView. While image monitoring is occurring within an endless loop, I want to:
change some camera parameters on the fly (gain, gamma, etc.)
tell the monitoring to stop so I can save an image to a file (set a flag that stops the while loop)
Using the button actions, I have been unable to loop the video-frame monitoring while still watching for a button press (much like using a keypressed-style check in C). Two options present themselves:
Initiate a new run loop (for which I cannot get an autoreleasepool to function ...)
Initiate an NSOperation - how do I do this in a way which allows me to connect with an Xcode button push?
The documentation is very obtuse about the creation of such objects. If I create an NSOperation as per the examples I've found, there seems to be no way to communicate with it through an object from Interface Builder. When I create an NSRunLoop, I get an object-leak error, and I can find no example of how to create an autorelease pool that actually responds to the run loop I've created. Never mind that I haven't even attempted to choose which objects get sampled by the secondary run loop ...
Because Objective-C is (obviously!) not my native tongue, I am looking for solutions with baby steps, sorry to say ...
Thanks in advance
I've needed to do almost exactly the same as you, only with a continuous video display from the FireWire camera. In my case, I used the libdc1394 library to perform the frame capture and camera property adjustment for our FireWire cameras. I know you can also do this using some of the Carbon QuickTime functions, but I found libdc1394 to be a little easier to understand.
For the video capture loop, I tried a number of different approaches, from a separate thread that polls the camera and has locks around shared resources, to using one NSOperationQueue for interaction with the camera, and finally settled on using a CVDisplayLink to poll the camera in a way that matches the refresh rate of the screen.
The CVDisplayLink is configured using the following code:
CGDirectDisplayID displayID = CGMainDisplayID();
CVReturn error = kCVReturnSuccess;
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error)
{
NSLog(@"DisplayLink created with error:%d", error);
displayLink = NULL;
}
CVDisplayLinkSetOutputCallback(displayLink, renderCallback, self);
and it calls the following function to trigger the retrieval of a new camera frame:
static CVReturn renderCallback(CVDisplayLinkRef displayLink,
const CVTimeStamp *inNow,
const CVTimeStamp *inOutputTime,
CVOptionFlags flagsIn,
CVOptionFlags *flagsOut,
void *displayLinkContext)
{
return [(SPVideoView *)displayLinkContext renderTime:inOutputTime];
}
The CVDisplayLink is started and stopped using the following:
- (void)startRequestingFrames
{
    CVDisplayLinkStart(displayLink);
}

- (void)stopRequestingFrames
{
    CVDisplayLinkStop(displayLink);
}
Rather than using a lock on the FireWire camera communications, whenever I need to adjust the exposure, gain, etc. I change corresponding instance variables and set the appropriate bits within a flag variable to indicate which settings to change. On the next retrieval of a frame, the callback method from the CVDisplayLink changes the appropriate settings on the camera to match the locally stored instance variables and clears that flag.
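A minimal sketch of that flag-based handoff (all names hypothetical; OSAtomic calls used for the cross-thread bit twiddling):
#import <libkern/OSAtomic.h>

enum {
    kChangeGain     = 1 << 0,
    kChangeExposure = 1 << 1,
};
static volatile uint32_t settingsToChange = 0;
static volatile float pendingGain = 0.0f;

// UI thread: stash the new value and mark it dirty.
void SetGain(float gain)
{
    pendingGain = gain;
    OSAtomicOr32Barrier(kChangeGain, &settingsToChange);
}

// Capture callback: apply any pending changes, then clear their flags.
void ApplyPendingSettings(void)
{
    if (settingsToChange & kChangeGain) {
        // push pendingGain to the camera (e.g. via libdc1394) here
        OSAtomicAnd32Barrier(~kChangeGain, &settingsToChange);
    }
}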
Display to the screen is handled through an NSOpenGLView (CAOpenGLLayer introduced too many visual artifacts when updating at this rate, and its update callbacks ran on the main thread). Apple has some extensions you can use to provide these frames as textures using DMA for better performance.
Unfortunately, nothing that I've described here is introductory-level stuff. I have about 2,000 lines of code for these camera-handling functions in our software and this took a long time to puzzle out. If Apple could add the manual camera settings adjustments to the QTKit Capture APIs, I could remove almost all of this.
If all you're trying to do is see/grab the output of a connected camera, the answer is probably neither.
Use QTKit's QTCaptureView. Problem solved. Want to grab a frame? Also no problem. Don't try to roll your own - QTKit's stuff is optimized and part of the OS. I'm pretty sure you can affect camera properties as you wanted but if not, plan B should work.
Plan B: use a scheduled, recurring NSTimer to ask QTKit to grab a frame every so often ("how" linked above) and apply your image manipulations to the frame (maybe with Core Image) before displaying it in your NSImageView.
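The plan-B timer can be as simple as this (a sketch; captureTimer and grabFrame: are whatever ivar and method you use to fetch and process a frame):
// Ask for a new frame ten times a second on the main run loop.
captureTimer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                target:self
                                              selector:@selector(grabFrame:)
                                              userInfo:nil
                                               repeats:YES];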

OpenGL texture loading issue

This is a very vague problem, so please feel free to clarify anything about this project.
I'm working on a very large application, and recently a very perplexing bug has cropped up regarding texturing. Some of the textures we load are being loaded - I've stepped through the code, and it runs - but all OpenGL renders for those textures is a weird pink/white striped pattern.
What would you suggest to even begin debugging this situation?
The project is multithreaded, but a mutex makes sure all OpenGL calls are not interrupted by anything else.
Some textures are being loaded, some are not. They're all loaded in the exact same way.
I've made sure that all textures exist
The "pink/white" textures are definitely loaded in memory - they become visible shortly after I load any other texture into OpenGL.
I'm perplexed, and have no idea what else could be wrong. Is there an OpenGL command that can be called after glTexImage that would force the texture to become usable?
Edit:
It's not that the commands are failing; it's mainly a timing issue. The pink/white textures show up for a while, until more textures are loaded. It's almost as if the textures are queued up, and the queue just pauses at some point.
Next Edit: I got the GLIntercept log working correctly, and this is what it output (before the entire program crashed):
http://freetexthost.com/1kdkksabdg
Next Edit: I know for a fact the textures are loaded in OpenGL memory, but for some reason they're not being rendered in the program itself.
If your texture is colored incorrectly, you're most likely loading the RGB channels in the wrong order. Make sure that in your glTexImage2D call you're using the right enums for your image format: the number of components must be correct, and the format argument must match the byte order of your pixel data.
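For example, for tightly packed 24-bit BGR data (common when loading Windows bitmaps; GL_BGR requires OpenGL 1.2 or later), a correct upload might look like this, with width, height and pixels assumed:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are not 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_BGR, GL_UNSIGNED_BYTE, pixels);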
Although probably not related to your textures showing up wrong, OpenGL doesn't support multithreaded draws so make sure you're not doing any drawing work on a different thread than the one that owns the context.
Edit: Do you have a reference renderer so you can verify the image pixels are being loaded as expected? I would strongly recommend writing a small routine to load then immediately save the pixels to a file so you can be sure that you're getting the right texture results.
Check your texture coordinates. If they are set wrong, you can see just one or two texels mapped across entire primitives. Remember, OpenGL is a state machine. Check whether you're changing the texture-coordinate state at the wrong time: you may be setting the texture coordinates at a later point in your code, so that by the time you get back to redrawing these elements, the state left over from elsewhere gets applied to your geometry.
If it is merely a timing issue where the texture-loading OpenGL calls aren't executed in time, and your threading code is correct, try adding a call to glFlush() after loading the textures. glFlush() forces submission of all pending OpenGL commands (glFinish() goes further and blocks until they have completed).
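For example, at the end of the loader thread's work (loadAllTextures is a stand-in for your own upload code):
loadAllTextures(); // your glTexImage2D calls
glFlush();         // force submission of the queued upload commands
// Use glFinish() instead if you must guarantee the uploads have completed
// before signalling the render thread that the textures are ready.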
When you say:
The project is multithreaded, but a mutex makes sure all OpenGL calls are not interrupted by anything else.
This doesn't sound like strong enough protection to me: remember that OpenGL is a state machine with a large amount of internal state. You need to make sure the OpenGL state is what you expect it to be when you are making your calls, and that certain sequences of calls don't get interrupted by calls from other threads.
I'm no expert on OpenGL thread-safety, but this seems to me where your problem might lie.
Check the size and compression of those images you use as textures. I think OpenGL texture sizes have to be a power of 2 (at least on older hardware without the non-power-of-two extensions) ...
You can't load a texture on one thread and use it on a different thread, or you will see a beautiful white texture. To make this possible, you must move the OpenGL context between the threads before using any OpenGL function.
If you are using GLIntercept to check your code, ensure to enable:
ThreadChecking = True;
in the gliConfig.ini file.
Viewing the log, it seems that quite a few OpenGL calls are being made outside the main context.
It is possible to load textures in another thread without getting a white texture. The problem is that, once you have initialized the OpenGL window, the OpenGL context is "bound" to that thread. You have to deactivate the context in the main thread while you're loading textures, and before you start loading them, you have to activate the context in the loading thread.
You can use this class:
Context.h:
#ifndef CONTEXT_H
#define CONTEXT_H
#include <windows.h> // for HGLRC, HDC and the wgl* functions
#include <GL/glut.h>
class Context
{
public:
static Context* getInstance();
void bind();
void unbind();
private:
Context();
Context(const Context&);
~Context();
static Context *instance;
HGLRC hglrc;
HDC hdc;
class Guard
{
public:
~Guard()
{
if (Context::instance != 0) {
delete Context::instance;
}
}
};
friend class Guard;
};
#endif
Context.cpp:
#include "Context.h"
Context* Context::getInstance()
{
static Guard guard;
if(Context::instance == 0) {
Context::instance = new Context();
}
return Context::instance;
}
void Context::bind()
{
wglMakeCurrent(this->hdc, this->hglrc);
}
void Context::unbind()
{
wglMakeCurrent(NULL, NULL);
}
Context::Context()
{
    // Capture whatever context and DC are current on the thread that
    // creates the singleton (i.e. the thread that created the window).
    this->hglrc = wglGetCurrentContext();
    this->hdc = wglGetCurrentDC();
}
Context::~Context()
{
}
Context *Context::instance = 0;
And that's what you have to do:
// Written by the loader thread, read by the GL thread.
volatile int state = 0;

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    // Create the window.
    glutCreateWindow(TITLE);
    // Set your loop function.
    glutDisplayFunc(&loop);
    // Initialize the singleton for the 1st time.
    Context::getInstance()->bind();
    glutMainLoop();
    return 0;
}

void loop()
{
    if (state == 0) {
        // Hand the context over to the loader thread.
        Context::getInstance()->unbind();
        startThread(&run); // placeholder for your platform's thread API
        state = 3;         // fall into the else branch until loading is done
    } else if (state == 1) {
        // Rebind the context to the main thread (just once).
        Context::getInstance()->bind();
        state = 2;
    } else if (state == 2) {
        // Draw your textures, lines, etc.
    } else {
        // Draw something (but no textures) while loading is in progress.
    }
}

void run()
{
    // Loader thread: take the context, upload the textures, hand it back.
    Context::getInstance()->bind();
    // Load textures...
    Context::getInstance()->unbind();
    state = 1;
}
