Is it possible to have a QWidget without a display? - windows

I have a console-only Win32 application (which can be launched as a Windows service) in which I want to create QWidget objects and interact with them.
These QWidgets will not be rendered locally (because there is no display), but I need access to painting operations (to create screenshots of them, for example) and I need to intercept mouse and keyboard events.
The aim of the service is to provide a display of the widget to a remote application, so the image, mouse, and keyboard are meant to be redirected over the network.
Is it possible to have QWidget objects in a console application? What is the preferred method of intercepting painting, mouse, and keyboard operations in that case?
If this is possible, is it still possible with QAxWidget objects?

Have a peek at Qt for Embedded Linux. Qt is designed so you can do this, but it is non-trivial.
I do suspect you're not on the right track, though, if you have a console-mode service that needs a keyboard, a mouse, and a graphical UI. The need to interact with a user tells me that it should not be a service, and the need for a mouse suggests that it shouldn't be a console app either.

You can create a QApplication without a GUI using one of the provided constructors:
QApplication::QApplication(int &argc, char **argv, bool GuiEnabled)
In order to do GUI operations you'll still need a GUI available. For example, on Linux it will still require that X be running and available. I believe there are certain restrictions on what can and can't happen, but I can't find the blog post on http://labs.qt.nokia.com that provides the details.

At the moment I am trying to do something similar myself. The approach I have taken is to create a subclass of QGraphicsScene and override the QGraphicsScene::sceneChanged event. Then it goes as follows (pseudocode):
QApplication app;
MyGraphicsScene* mgs = new MyGraphicsScene();
MyWidget* mw = new MyWidget();
mgs->addWidget(mw);
Now each time a change happens, your sceneChanged will be invoked. There you can get a snapshot of the scene as a QImage. In my case I move the pixel data to a texture and render it as an overlay in my game:
void QEOverlay::sceneChanged(QList<QRectF> list)
{
    // Loop through all screen areas that changed
    foreach (QRectF rect, list)
    {
        // Clamp the rectangle to the screen and round it to whole pixels
        if (rect.x() < 0) rect.setX(0);
        if (rect.y() < 0) rect.setY(0);
        if (rect.right() > _width) rect.setRight(_width);
        if (rect.bottom() > _height) rect.setBottom(_height);
        rect = QRectF(Round(rect.x()), Round(rect.y()), Round(rect.width()), Round(rect.height()));

        // Create an image, fill it with transparent color, and render
        // the changed part of the scene into it
        QImage image(rect.width(), rect.height(), QImage::Format_ARGB32);
        QPainter p(&image);
        image.fill(0);
        render(&p, image.rect(), rect);

        // Copy the pixels into the texture at the rectangle's position
        if (tex.lock())
        {
            for (u32 y = 0; y < image.height(); y++)
            {
                for (u32 x = 0; x < image.width(); x++)
                {
                    QRgb pix = image.pixel(x, y);
                    tex.color(x + rect.left(), y + rect.top(),
                              Color(qRed(pix), qGreen(pix), qBlue(pix), qAlpha(pix)));
                }
            }
            tex.unlock();
        }
    }
}
There is an issue with this approach: you still need to redirect keyboard and mouse input events to your subclass. That does not work out very well for me; there are certain issues, such as mouse clicks not focusing a QLineEdit or elements in a QWebView.

Related

Is there a way to return all Widgets from a parent Window with X11/Xt?

I'm writing an application using X11, Xt, and Motif directly from C. I want the ability to list all the child widgets of a parent Window. Is there a way to do this?
I found the following snippet to recursively parse a Motif widget tree here, but I only have an Xlib Window struct, so I want to be able to get the child Widgets of that Window, then pass that Widget to something akin to that recursive tree parser.
My current code looks something like this:
int main() {
    Display* display;
    int screen_num = 0;
    display = XOpenDisplay(NULL);
    Window window = XRootWindow(display, screen_num);
    dumpWidgetTree((Widget)window);
    return 0;
}
I tried simply casting Window to Widget, but that just caused a segfault, as expected.
You can get most of the widgets from the window tree. You can use XtWindowToWidget to translate a window ID to a widget. This approach only works for widgets in the current app, and it cannot access windowless widgets (a.k.a. Gadgets).

Detect if compositor is running

I want my UI to change design depending on whether the screen is composited (and thus supports certain effects) or not. Is it possible to
1. Reliably query whether the X server is running a compositing window manager?
2. Get notified when compositing is switched on/off?
Solution:
To elaborate on Andrey Sidorov's correct answer for people not so familiar with the X11 API, this is the code for detecting a EWMH-compliant compositor:
int has_compositor(Display *dpy, int screen) {
    char prop_name[20];
    snprintf(prop_name, 20, "_NET_WM_CM_S%d", screen);
    Atom prop_atom = XInternAtom(dpy, prop_name, False);
    return XGetSelectionOwner(dpy, prop_atom) != None;
}
EWMH-compliant compositors must acquire ownership of a selection named _NET_WM_CM_Sn, where n is the screen number.
To track the compositor you'll need to check whether the _NET_WM_CM_S0 selection is owned by anyone (assuming you are on screen 0) using XGetSelectionOwner. If it is not owned, acquire ownership yourself and monitor SelectionClear events to detect when a compositor is started.
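The only string handling involved is building the per-screen selection name; that step can be sketched and checked on its own, with no X connection required (the helper name is illustrative, the X11 calls themselves are as in the snippet above):

```c
#include <stdio.h>

/* Build the EWMH compositor selection name for a given screen,
   e.g. "_NET_WM_CM_S0" for screen 0, "_NET_WM_CM_S1" for screen 1.
   Returns the number of characters written (excluding the NUL),
   or a negative value on encoding error, as snprintf does. */
static int compositor_selection_name(char *buf, size_t len, int screen) {
    return snprintf(buf, len, "_NET_WM_CM_S%d", screen);
}
```

The resulting string is what gets interned with XInternAtom and passed to XGetSelectionOwner.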

CGDisplayCreateImageForRect: how to ignore a specific NSWindow

I'm looking to make a sample "lens" app which shows what is visible under the current mouse location. I've used CGDisplayCreateImageForRect to get a portion of the screen under the mouse location.
Now I would like to attach a transparent window at the same location as the mouse and show this lens directly under the mouse position; however, under this location there is... my transparent window with the resulting zoom... oops!
Is there a way to exclude a particular window from the snapshot, or another method to get the current image at the mouse position while ignoring something behind it?
You can't do it with that function. You can use the CGWindowList API to do it: either CGWindowListCreateImage() or CGWindowListCreateImageFromArray(). These let you specify criteria to select the windows to include or an explicit list of windows.
It's not clearly documented how to obtain the window ID of one of your own windows. The supported way is probably to query information about all on-screen windows using CGWindowListCopyWindowInfo() and then use the properties to identify yours. That said, I believe that the NSWindow property windowNumber does in fact correspond to the Core Graphics window ID.
Ken Thomases pointed me in the right direction. The function I used to include all windows and exclude my single one is CGWindowListCreateImageFromArray().
The code below is a small example:
// Get the on-screen windows and remove my own from the list
CGWindowID windowIDToExclude = (CGWindowID)[myNSWindow windowNumber];
CFArrayRef onScreenWindows = CGWindowListCreate(kCGWindowListOptionOnScreenOnly, kCGNullWindowID);
CFMutableArrayRef finalList = CFArrayCreateMutableCopy(NULL, 0, onScreenWindows);
for (long i = CFArrayGetCount(finalList) - 1; i >= 0; i--) {
    CGWindowID window = (CGWindowID)(uintptr_t)CFArrayGetValueAtIndex(finalList, i);
    if (window == windowIDToExclude)
        CFArrayRemoveValueAtIndex(finalList, i);
}
// Get the composite image
CGImageRef ref = CGWindowListCreateImageFromArray(myRectToGrab, finalList, kCGWindowListOptionAll);

Simulated MouseEvent not working properly OSX

Back in 2010, Pierre asked this question (his accepted answer doesn't work for me).
I'm having the same problem: I am able to successfully move the mouse around (and off!?!) the screen programmatically from my Cocoa application; however, bringing the mouse to the location of my Dock doesn't show it (and some other applications aren't registering the mouse-moved events, e.g. games that remove the mouse).
The method I am using is this:
void PostMouseEvent(CGMouseButton button, CGEventType type, const CGPoint point)
{
    CGEventRef theEvent = CGEventCreateMouseEvent(NULL, type, point, button);
    CGEventSetType(theEvent, type);
    CGEventPost(kCGSessionEventTap, theEvent);
    CFRelease(theEvent);
}
And then when I want to move the mouse I run:
PostMouseEvent(0, kCGEventMouseMoved, mouseLocation);
Note that this code DOES generate mouseover events for things such as links.
Now that it's 2013, is it possible to fix this issue?
Thanks for your time!
I would both warp the cursor and generate the mouse-move event. I know from experience, for example, that warping the cursor, while it doesn't generate an event itself, modifies the subsequent mouse move event to include the moved distance in its mouse delta. I don't know if your synthesized move event will include the proper delta values on its own.
OK, so evidently Mac OS X needs the mouse to be exactly at the edge of the screen for the Dock to show!
Because I keep my Dock on the left side of the screen (due to many programs keeping vital buttons at the bottom of their windows), all I had to do was say
if (mouseLocation.x < 0)
{
    mouseLocation.x = 0;
}
And it worked!
I am also using Ken Thomases' idea to warp the cursor as well.
(This answer is marked correct as it allows me to show the Dock; however, there are still some applications that are not responding to mouse input.)

Mac OS X: CGGetLastMouseDelta and moving the mouse programmatically

I'm developing an extension to MATLAB's PsychToolbox that allows for better control of the mouse during psychophysical experiments (specifically, preventing the screen boundaries from limiting drag operations... it should feel like you can move the mouse "infinitely" in all directions). Since MATLAB doesn't support the creation of additional threads (and that would be needlessly complicated for this situation anyway), I can't make use of either the Carbon or Cocoa event managers.
CGGetLastMouseDelta is almost perfect for what I need (it gets me the amount the mouse has moved "since the last mouse movement event received by the application", ignoring screen boundaries); however, there is one slight problem. When moving the mouse programmatically (using either CGWarpMouseCursorPosition or CGDisplayMoveCursorToPoint), no events are generated. Therefore, CGGetLastMouseDelta doesn't seem to be aware that the mouse has moved at all. In other words, if I move the mouse 50 pixels over and 50 pixels down programmatically, CGGetLastMouseDelta returns (0, 0) afterwards for the mouse delta. This is undesirable behavior in my context and requires ugly workarounds. I've tried moving the mouse by posting events through the event system, as follows (this is a "mexFunction", MATLAB's way of calling C code):
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) {
    CGEventRef event;
    CGPoint offset;
    CGPoint currentLocation;
    CGPoint newLocation;

    if (nrhs != 2)
        mexErrMsgTxt("The global x and y coordinates (and only those) must be supplied.");

    event = CGEventCreate(NULL);
    currentLocation = CGEventGetLocation(event);
    CFRelease(event);

    offset = CGPointMake((CGFloat) mxGetScalar(prhs[0]), (CGFloat) mxGetScalar(prhs[1]));
    newLocation = CGPointMake(currentLocation.x + offset.x, currentLocation.y + offset.y);

    event = CGEventCreateMouseEvent(NULL, kCGEventMouseMoved, newLocation, kCGMouseButtonLeft);
    CGEventPost(kCGHIDEventTap, event);
    CFRelease(event);
}
This happily moves the mouse, but doesn't seem to change the behavior of CGGetLastMouseDelta at all. Does anybody know the exact specifications of what CGGetLastMouseDelta returns (and when)? Apple's documentation on this stuff (the Quartz reference) is, as usual, close to useless (or at least lacking in necessary details).
Thanks!
A good idea might be to use CGAssociateMouseAndMouseCursorPosition(0) to disconnect mouse movement from the cursor. Then you don't get the problem with screen boundaries at all.
Option (1): Generate your own event which specifies that you caused the mouse to move.
Option (2): Call your mouse-moved event handler function from the "I moved the mouse" routine.
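Option (2) amounts to plain bookkeeping: keep your own delta accumulator, feed it the programmatic moves that CGGetLastMouseDelta ignores, and combine them with the system-reported delta when you read it. The types and function names below are illustrative, not Quartz API; only the merging logic is the point:

```c
/* Illustrative accumulator that merges deltas reported by the system
   with programmatic moves the system does not report. */
typedef struct { double dx, dy; } MouseDelta;

static MouseDelta pending = {0.0, 0.0};

/* Call this whenever you warp the cursor or post a synthetic move,
   right next to the CGWarpMouseCursorPosition / CGEventPost call. */
static void note_programmatic_move(double dx, double dy) {
    pending.dx += dx;
    pending.dy += dy;
}

/* Combine the system-reported delta (e.g. from CGGetLastMouseDelta)
   with any pending programmatic movement, then reset the latter. */
static MouseDelta take_total_delta(double sys_dx, double sys_dy) {
    MouseDelta total = { sys_dx + pending.dx, sys_dy + pending.dy };
    pending.dx = 0.0;
    pending.dy = 0.0;
    return total;
}
```

With this in place, the mex function would report take_total_delta(...) instead of the raw CGGetLastMouseDelta values, so a programmatic 50-pixel move no longer disappears from the delta stream.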
