Can an OGRE engine render into any window? - windows

I'm making a small plugin-style graphics engine interface that uses OGRE internally. The idea is that a person writing a program on Windows or Linux would be able to use my plugin for any graphics rendering they need to do.
In fact, there's already a Windows app that uses GDI and D3D calls to do its drawing, and I need to modify it so that it uses OGRE for the drawing instead.
What puzzles me is that the app is written in VC++ and so has Windows-style menus and a client area for drawing. Since OGRE creates its own window for rendering, can I pass OGRE a handle to the client area of the app's window and have OGRE do all its drawing in that client area?
I'm new to Windows programming and under a bit of a time constraint, so had to ask here.

Maybe this can help:
Ogre::String winHandle;
#ifdef WIN32
    // Windows: pass the native window handle (HWND) of the parent widget
    winHandle += Ogre::StringConverter::toString((unsigned long)(this->parentWidget()->winId()));
#else
    // X11: the handle is "display:screen:window"
    QX11Info info = x11Info();
    winHandle  = Ogre::StringConverter::toString((unsigned long)(info.display()));
    winHandle += ":";
    winHandle += Ogre::StringConverter::toString((unsigned int)(info.screen()));
    winHandle += ":";
    winHandle += Ogre::StringConverter::toString((unsigned long)(this->parentWidget()->winId()));
#endif

Ogre::NameValuePairList params;
params["parentWindowHandle"] = winHandle;
mOgreWindow = mOgreRoot->createRenderWindow("QOgreWidget_RenderWindow",
                                            this->width(),
                                            this->height(),
                                            false,
                                            &params);
QX11Info is a Qt class used to get the native handle.
The handle is inserted into an Ogre::NameValuePairList under the name "parentWindowHandle" and then passed as an argument to Ogre::Root::createRenderWindow(). I tried this code with Qt and it worked. If it doesn't work, try externalWindowHandle as the parameter name instead.
source: http://www.ogre3d.org/tikiwiki/QtOgre
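If you're not using Qt, the same idea works with a raw Win32 HWND. A minimal sketch, assuming mRoot is an initialized Ogre::Root, hClientWnd is the HWND of the client area you want OGRE to draw into, and width/height were measured from that area (switch the key to "parentWindowHandle" if your OGRE version expects that instead):

// Hedged sketch: render into an existing Win32 window instead of letting
// OGRE create its own. mRoot, hClientWnd, width and height are assumed
// to already exist in your application.
Ogre::NameValuePairList params;
params["externalWindowHandle"] = Ogre::StringConverter::toString((size_t)hClientWnd);

Ogre::RenderWindow* renderWindow =
    mRoot->createRenderWindow("EmbeddedOgreWindow",  // arbitrary window name
                              width, height,
                              false,                 // not fullscreen
                              &params);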

Related

CGDisplayCreateImageForRect: how to ignore a specific NSWindow

I'm looking to make a sample "lens" app which shows what is visible under the current mouse location. I've used CGDisplayCreateImageForRect to grab the portion of the screen under the mouse.
Now I'd like to attach a transparent window at the mouse location and show this lens directly under the mouse position; however, what's under that location is... my own transparent window with the zoomed result... oops!
Is there a way to exclude a particular window from the snapshot, or another method to get the image at the mouse position while ignoring a specific window?
You can't do it with that function. You can use the CGWindowList API to do it: either CGWindowListCreateImage() or CGWindowListCreateImageFromArray(). These let you specify criteria to select the windows to include or an explicit list of windows.
It's not clearly documented how to obtain the window ID of one of your own windows. The supported way is probably to query information about all on-screen windows using CGWindowListCopyWindowInfo() and then use the properties to identify yours. That said, I believe that the NSWindow property windowNumber does in fact correspond to the Core Graphics window ID.
Ken Thomases pointed me in the right direction. The function I used to include all on-screen windows while excluding my single window is CGWindowListCreateImageFromArray().
The code below is a small example:
// Get the window ID of the window we want to exclude (our own lens window)
CGWindowID windowIDToExclude = (CGWindowID)[myNSWindow windowNumber];

// Get the list of on-screen windows and copy it into a mutable array
CFArrayRef onScreenWindows = CGWindowListCreate(kCGWindowListOptionOnScreenOnly, kCGNullWindowID);
CFMutableArrayRef finalList = CFArrayCreateMutableCopy(NULL, 0, onScreenWindows);

// Remove our own window from the list
for (long i = CFArrayGetCount(finalList) - 1; i >= 0; i--) {
    CGWindowID window = (CGWindowID)(uintptr_t)CFArrayGetValueAtIndex(finalList, i);
    if (window == windowIDToExclude)
        CFArrayRemoveValueAtIndex(finalList, i);
}

// Get the composite image of the remaining windows
CGImageRef ref = CGWindowListCreateImageFromArray(myRectToGrab, finalList, kCGWindowImageDefault);

CFRelease(onScreenWindows);
CFRelease(finalList);

Is it possible to embed one application in another application in Windows?

I'm writing a Windows application in Visual C++ 2008 and I want to embed the calculator (calc.exe) that comes with Windows in it. Does anyone know if that is possible, and if it is, can you give me hints on how I could achieve that?
Yes, it's possible to embed calc in your own application, but it will still run in its own process space. There may also be some restrictions imposed by UAC, depending on how calc is launched. All you need to do is change the parent of the main calc window and change its style to WS_CHILD.
void EmbedCalc(HWND hWnd)
{
    HWND calcHwnd = FindWindow(L"CalcFrame", NULL);
    if(calcHwnd != NULL)
    {
        // Change the parent so the calc window belongs to our app's main window
        SetParent(calcHwnd, hWnd);

        // Update the style so the calc window is embedded in our main window
        SetWindowLong(calcHwnd, GWL_STYLE,
                      GetWindowLong(calcHwnd, GWL_STYLE) | WS_CHILD);

        // We need to update the position as well since changing the parent
        // does not adjust it automatically.
        SetWindowPos(calcHwnd, NULL, 0, 0, 0, 0, SWP_NOSIZE | SWP_NOZORDER);
    }
}
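For EmbedCalc to find anything, calc.exe has to be running already. A hedged sketch of one way to launch it and wait for its window before reparenting; note that the "CalcFrame" class name matches the Windows 7-era calculator, and other Windows versions use different class names:

#include <windows.h>

void LaunchAndEmbedCalc(HWND hWnd)
{
    STARTUPINFO si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };

    // Start calc.exe in its own process.
    wchar_t cmdLine[] = L"calc.exe";
    if(!CreateProcess(NULL, cmdLine, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return;

    // Give the process time to create its message queue and main window.
    WaitForInputIdle(pi.hProcess, 5000);
    for(int i = 0; i < 50 && FindWindow(L"CalcFrame", NULL) == NULL; ++i)
        Sleep(100);

    EmbedCalc(hWnd);  // the function from the answer above

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
}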
Microsoft has various technologies to support embedding, most famously OLE which is a COM-based technology. This is, for example, how you can embed an Excel spreadsheet in your application. However, I'm fairly certain that calc does not implement any of the required interfaces for that to happen.
So that only leaves you with hacky solutions, like trying to launch it yourself and playing games with the window hierarchy, or presenting it to users and then copying the results out through the clipboard, etc. This is all technically possible, but not a good idea. In fact, it's probably more difficult than just writing your own calculator app... depending on what you want to enable users to do. If you explain why you want to do this, someone may have better solutions to propose.

Multiple OpenGL windows on Windows, sharing context

I'm trying to set up multiple OpenGL (3.3) windows in the same program. I've created 2 windows, with the second one sharing the context of the first (using hglrc[i] = wglCreateContextAttribsARB(hdc[n_windows], hglrc[0], ctxattribs), while the first one passes 0 instead of hglrc[0]), and a simple render loop like:
for(unsigned i = 0; i < n_windows; ++i)
{
    wglMakeCurrent(hdc[i], hglrc[i]);
    glClearColor((float)rand() / RAND_MAX, (float)rand() / RAND_MAX, (float)rand() / RAND_MAX, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    SwapBuffers(hdc[i]);
}
However, only one window renders at a time; when I move a window to another screen, the window that wasn't rendering starts rendering, and the other one stops.
This is the first time I've tried to open several OpenGL windows with a shared context in the same application, so I might be doing something wrong. The code works perfectly with one window, and my old faithful gDEBugger doesn't show any errors. Any idea what I might be doing wrong?
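For reference, the shared-context creation described above typically looks something like this hedged sketch (assuming each HDC already has a pixel format set, a temporary legacy context was current when the ARB entry point was fetched, and <GL/wglext.h> provides the function pointer typedef and WGL_CONTEXT_* constants):

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

// hdc[], hglrc[] and n_windows are assumed to exist as in the question above.
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
    (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

const int ctxattribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    0
};

// The first context shares with nothing; each later one shares with the first.
hglrc[0] = wglCreateContextAttribsARB(hdc[0], 0, ctxattribs);
for (unsigned i = 1; i < n_windows; ++i)
    hglrc[i] = wglCreateContextAttribsARB(hdc[i], hglrc[0], ctxattribs);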

Is it possible to have a QWidget without a display?

I have a console-only Win32 application (which can be launched as a Windows service) where I want to create QWidget objects and interact with them.
These QWidgets will not be rendered locally (because there is no display), but I need access to painting operations (to create screenshots of them, for example) and I need to intercept mouse and keyboard events.
The aim of the service is to provide a display of the widgets to a remote application, so the image, mouse and keyboard are meant to be redirected over the network.
Is it possible to have QWidget objects in a console application? What is the preferred method to intercept painting, mouse and keyboard operations in that case?
If this is possible, is it still possible with QAxWidget objects?
Have a peek at Qt for Embedded Linux. Qt is designed so you can do this, but it is non-trivial.
I do suspect you're not on the right track, though, if you have a console-mode service that needs a keyboard, mouse and a graphical UI. The need to interact with a user tells me that it should not be a service, and the need for a mouse suggests that it shouldn't be a console app either.
You can create a QApplication without a GUI using one of the provided constructors:
QApplication::QApplication(int&, char**, bool GuiEnabled)
In order to do GUI operations you'll still need a GUI available. For example, on Linux it will still require that X be running and available. I believe there are certain restrictions on what can and can't be done, but I can't find the blog post on http://labs.qt.nokia.com that provides the details.
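A hedged sketch of what this might look like with Qt 4's three-argument constructor, painting a widget into a QImage with QWidget::render() instead of showing it (passing false as the third argument would disable widget creation entirely, so GUI support is left enabled here, and on Linux an X server still has to be reachable):

#include <QApplication>
#include <QImage>
#include <QPainter>
#include <QPushButton>

int main(int argc, char** argv)
{
    // Keep GUI support enabled; widgets still need a display (an X server on
    // Linux) even though nothing is ever shown on screen.
    QApplication app(argc, argv, true);

    QPushButton button("Hello from a hidden widget");
    button.resize(200, 50);

    // Paint the widget into an offscreen image instead of showing it.
    QImage image(button.size(), QImage::Format_ARGB32);
    image.fill(0);
    QPainter painter(&image);
    button.render(&painter);
    painter.end();

    image.save("widget-snapshot.png");
    return 0;
}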
At the moment I'm trying to do something similar myself. The approach I've taken is to create a subclass of QGraphicsScene with a sceneChanged slot (hooked up to the scene's changed() signal, which fires whenever the scene content changes). Then it goes as follows (pseudocode):
QApplication app;
MyGraphicsScene* mgs = new MyGraphicsScene();
MyWidget* mw = new MyWidget();
mgs->addWidget(mw);
Now each time a change happens, your sceneChanged will be invoked. There you can get a snapshot of the scene as a QImage. In my case I move the pixel data into a texture and render it as an overlay in my game:
void QEOverlay::sceneChanged(QList<QRectF> list)
{
    // Loop through all screen areas that changed
    foreach(QRectF rect, list)
    {
        // Clamp the rectangle to whole numbers and fit it within the screen
        if(rect.x() < 0) rect.setX(0);
        if(rect.y() < 0) rect.setY(0);
        if(rect.right() > _width) rect.setRight(_width);
        if(rect.bottom() > _height) rect.setBottom(_height);
        rect = QRectF(Round(rect.x()), Round(rect.y()), Round(rect.width()), Round(rect.height()));

        // Create an image, create a QPainter, fill the image with transparent
        // color, and then render a part of the scene to it
        QImage image(rect.width(), rect.height(), QImage::Format_ARGB32);
        QPainter p(&image);
        image.fill(0);
        render(&p, image.rect(), rect);

        // Copy the pixel data into the overlay texture
        if(tex.lock())
        {
            for (u32 y = 0; y < image.height(); y++)
            {
                for (u32 x = 0; x < image.width(); x++)
                {
                    QRgb pix = image.pixel(x, y);
                    tex.color(x + rect.left(), y + rect.top(), Color(qRed(pix), qGreen(pix), qBlue(pix), qAlpha(pix)));
                }
            }
            tex.unlock();
        }
    }
}
There is an issue with this approach: you still need to redirect keyboard and mouse input events to your subclass. That hasn't worked out very well for me; there are certain issues, like a mouse click not focusing a QLineEdit or elements in a QWebView.
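For what it's worth, a hedged sketch of the kind of event redirection meant here: building QGraphicsSceneMouseEvent / QKeyEvent objects from your own input source (e.g. the remote mouse and keyboard) and sending them to the scene with QApplication::sendEvent(). The coordinates, button and key values below are placeholders:

#include <QApplication>
#include <QGraphicsScene>
#include <QGraphicsSceneMouseEvent>
#include <QKeyEvent>

// Forward a left mouse press at scene coordinates (x, y) to the scene.
void forwardMousePress(QGraphicsScene* scene, qreal x, qreal y)
{
    QGraphicsSceneMouseEvent event(QEvent::GraphicsSceneMousePress);
    event.setScenePos(QPointF(x, y));
    event.setButton(Qt::LeftButton);
    event.setButtons(Qt::LeftButton);
    event.setModifiers(Qt::NoModifier);
    QApplication::sendEvent(scene, &event);
}

// Forward a key press (e.g. a character typed remotely) to the scene.
void forwardKeyPress(QGraphicsScene* scene, int key, const QString& text)
{
    QKeyEvent event(QEvent::KeyPress, key, Qt::NoModifier, text);
    QApplication::sendEvent(scene, &event);
}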

Adding Windows Form to a DirectX application?

I'm working on a DirectX application and was wondering how I could add a regular window to it, one with text boxes, command buttons and so on.
For a window to exist, it needs a message pump.
A Win32 message pump typically looks something like this and was the heart of Win32 programming:
MSG msg;
while(GetMessage(&msg, hwnd, 0, 0))
{
    TranslateMessage(&msg);
    DispatchMessage(&msg);
    // do stuff
}
Today, the C# language tends to abstract the message pump away but you can still get to it.
protected override void WndProc(ref Message m)
{
    base.WndProc(ref m);
    // do stuff
}
You need a call to Application.Run to launch a Windows Forms window with an active message pump. See:
http://msdn.microsoft.com/en-us/library/ms157900.aspx
Hey Ed: this might be what you're looking for:
http://www.directxtutorial.com/Tutorial9/A-Win32/dx9A3.aspx. It talks about creating a window for DirectX.
As far as I know, you would have to create a parent window to hold both the "window" that the DirectX image is rendered to and the regular window with the controls.
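A hedged sketch of that layout with plain Win32 calls: one top-level parent, a child window reserved for the DirectX swap chain, and a couple of standard controls next to it (the "MyAppWindow" class name, sizes, and the omitted WndProc / D3D device creation are placeholders):

#include <windows.h>

// Assumes RegisterClass has been called for "MyAppWindow" with a suitable WndProc.
void CreateMainLayout(HINSTANCE hInstance, HWND* outParent, HWND* outRenderArea)
{
    // Top-level parent window
    HWND parent = CreateWindowEx(0, L"MyAppWindow", L"DirectX + Controls",
                                 WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                                 CW_USEDEFAULT, CW_USEDEFAULT, 1024, 600,
                                 NULL, NULL, hInstance, NULL);

    // Child window that DirectX renders into (pass this HWND to the swap chain)
    HWND renderArea = CreateWindowEx(0, L"STATIC", NULL,
                                     WS_CHILD | WS_VISIBLE,
                                     0, 0, 768, 600,
                                     parent, NULL, hInstance, NULL);

    // Regular controls hosted in the same parent, next to the render area
    CreateWindowEx(0, L"BUTTON", L"Do something",
                   WS_CHILD | WS_VISIBLE | BS_PUSHBUTTON,
                   780, 20, 220, 30, parent, NULL, hInstance, NULL);
    CreateWindowEx(0, L"EDIT", L"",
                   WS_CHILD | WS_VISIBLE | WS_BORDER | ES_AUTOHSCROLL,
                   780, 60, 220, 24, parent, NULL, hInstance, NULL);

    *outParent = parent;
    *outRenderArea = renderArea;
}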
