Win32 API - Ctrl A + Ctrl C

uint x = 0x00000001; // wParam: MK_LBUTTON, the left button is down
uint y = 0x00FF00FD; // lParam: packed client coordinates, x = 0x00FD (low word), y = 0x00FF (high word)
NativeMethods.PostMessage(hwnd, NativeMethods.WM_LBUTTONDOWN, x, y);
NativeMethods.PostMessage(hwnd, NativeMethods.WM_LBUTTONUP, x, y);
Using the above statements I'm able to click in a selected area of an external application's window. Now I need to send Ctrl+A followed by Ctrl+C to that window.
Can you please tell me how to do this using the Win32 API?

Wouldn't you be better served by sending a WM_GETTEXT to get the actual text in the window? Applications that rely on mouse/keyboard emulation are brittle at best.
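For illustration, a minimal C++ sketch of that approach; ReadWindowText is a made-up helper name, and it assumes the target is a standard control whose text is exposed through WM_GETTEXT (SendMessage marshals this message across processes for the standard window classes):

#include <windows.h>
#include <string>

// Read the window's text directly instead of simulating Ctrl+A / Ctrl+C.
std::wstring ReadWindowText(HWND hwnd)
{
    // WM_GETTEXTLENGTH returns the length in characters, excluding the terminator.
    LRESULT len = SendMessageW(hwnd, WM_GETTEXTLENGTH, 0, 0);
    std::wstring text(static_cast<size_t>(len) + 1, L'\0');

    // WM_GETTEXT copies at most wParam characters, including the terminator,
    // and returns the number of characters actually copied.
    LRESULT copied = SendMessageW(hwnd, WM_GETTEXT,
                                  static_cast<WPARAM>(text.size()),
                                  reinterpret_cast<LPARAM>(&text[0]));
    text.resize(static_cast<size_t>(copied));
    return text;
}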

Differentiate between Ctrl+Click and Right Click in Qt

I currently receive mouse events through a MouseArea in QML. I can distinguish between left, right, and middle mouse presses. On Mac, holding Control and left-clicking is interpreted as a Qt.RightButton press. Is there any way to distinguish between a standard right click and a (Ctrl + Left) right click? I need to do this to handle a custom UI interaction (I understand it's an anti-pattern).
Note: In particular, I need to distinguish between a Ctrl + Left Click and a Ctrl + Right Click. As a result, simply detecting that Control is pressed while a right-click event is received isn't good enough.
function button( mouse )
{
    if( mouse.button == Qt.LeftButton )
    {
        console.log("left button is pressed");
        return 0;
    } else if( mouse.button == Qt.MidButton )
    {
        console.log("mid button");
        return 1;
    } else if ( mouse.button == Qt.RightButton )
    {
        console.log("right button pressed");
        return 2;
    }
}
On a Mac, holding Control and left-clicking to produce a right click is a system setting and a choice the user has made. It has nothing to do with Qt, and attempting to detect it breaks your application: the user depends on Ctrl+Left acting as Right everywhere, and there is no reason for you to change that, just as there is no reason to prevent the user from, say, swapping the left and right buttons globally (independently of, and unknown to, your application).
The mode of thinking you need is that Ctrl+Left is simply not available on a default Mac system. And indeed, why would it be? If a cross-platform application uses Ctrl+Left/Right on Windows, I'd fully expect it to use Cmd+Left/Right on Mac, and would be quite confused if it did something else. Ctrl on Windows/Unix becomes Cmd on Mac: that's what everyone expects, and that's how essentially everything works on Mac. That's what you want, and then there's no problem.
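Conveniently, Qt already performs that swap for you: Qt::ControlModifier reports the Command key on macOS (and Qt::MetaModifier reports the Control key), so one portable check covers both platforms. A minimal widget-side sketch, where MyView is a hypothetical QWidget subclass:

#include <QMouseEvent>
#include <QWidget>

class MyView : public QWidget
{
protected:
    void mousePressEvent(QMouseEvent *event) override
    {
        // Qt::ControlModifier is Ctrl on Windows/Linux and Cmd on macOS,
        // so this single test gives the expected shortcut everywhere.
        if (event->button() == Qt::LeftButton
            && (event->modifiers() & Qt::ControlModifier))
        {
            // handle Ctrl+Left (Cmd+Left on a Mac) here
        }
    }
};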
Resolved (in a way...): this turned out to be a bug in Qt 5.12 that has already been fixed upstream.

Stick C app to all desktops

I wrote a simple resource-display program that shows some stats, like the amount of free RAM, in a very small window. I want it to be visible on every desktop when I switch between them. How can I achieve that?
UPDATE:
Thanks to n.m. I am on the right track (hopefully); here is what I have so far:
unsigned int ints[2];
ints[0] = 0xFFFFFFFF;
ints[1] = 2;
XChangeProperty(d, w, XInternAtom(d, "_NET_WM_DESKTOP", 1),
                XA_ATOM,
                32,
                PropModeReplace,
                (unsigned char*)ints,
                2);
It compiles, but it does not do anything, i.e. the window is still only visible on the desktop where it was originally started. What's wrong with my code?
X11 and Xlib by themselves have no notion of desktops or of switching between them; that's all in your window manager. Usually a window informs the WM about its needs through window properties.
Modern Freedesktop-compliant window managers use the _NET_WM_DESKTOP property. Set it to 0xFFFFFFFF (the EWMH value meaning "all desktops") before mapping the window.
Edit: the correct incantation is
unsigned long prop = 0xFFFFFFFF; // note long! even if long is 64 bit
XChangeProperty(d, w, XInternAtom(d, "_NET_WM_DESKTOP", 1),
                XA_CARDINAL,     // note CARDINAL, not ATOM
                32,
                PropModeReplace,
                (unsigned char*)&prop,
                1);              // note 1
XMapWindow(d, w);                // map after changing the property
You can use the xprop command line utility to verify that the property is set correctly.
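For example, something along these lines (the window title is of course a placeholder) should print _NET_WM_DESKTOP(CARDINAL) = 4294967295, i.e. 0xFFFFFFFF, once the property is in place:

xprop -name "my window title" _NET_WM_DESKTOP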

How do I allow the Leap Motion to control my cursor in an OS X application?

So I have a game for Mac OS X built in cocos2D.
I'm using gestures to simulate keyboard commands to control my character and it works really well.
I submitted my game to the AirSpace store and it got rejected on the grounds that the Leap should be used to control my menus as well, which is fair enough, I guess.
Thing is, for the life of me I cannot figure out how this is done. There are no examples out there that show me how to implement it, and nothing in the SDK example makes it clear either.
Does anyone have any examples they'd care to share? I only need it to hijack my cursor and allow a click when held over a button. I really didn't think something so complex would be needed simply for basic menu navigation.
If this is a Mac-only game you should have access to the Quartz event API. This is the easiest way to generate mouse events in OS X...
I would recommend simply tracking the palm (hand) position and moving the cursor based on that.
This is how I do my palm tracking:
float handX = ([[hand palmPosition] x]);
float handY = (-[[hand palmPosition] y] + 150);
The "+ 150" is the number of millimetres above the Leap device, to use as the '0' location. From there you can move the cursor based on the hand offset from 0.
The function I use to move the cursor (using Quartz):
- (void)mouseEventWithType:(CGEventType)type loc:(CGPoint)loc deltaX:(float)dX deltaY:(float)dY
{
    CGEventRef theEvent = CGEventCreateMouseEvent(NULL, type, loc, kCGMouseButtonLeft);
    CGEventSetIntegerValueField(theEvent, kCGMouseEventDeltaX, dX);
    CGEventSetIntegerValueField(theEvent, kCGMouseEventDeltaY, dY);
    CGEventPost(kCGHIDEventTap, theEvent);
    CFRelease(theEvent);
}
and an example function call:
[self mouseEventWithType:kCGEventMouseMoved loc:CGPointMake(newLocX, newLocY) deltaX:dX deltaY:dY];
This call will move the mouse. Basically you just pass the new location of the mouse and the corresponding deltas relative to the last cursor position.
I can provide more examples, such as getting the mouse location, clicking the mouse, or even a full mouse-moving program...
EDIT 1:
To handle click and drag with Quartz, you can call the same function as above, only passing in kCGEventLeftMouseDown.
The catch is that in order to drag, you cannot use kCGEventMouseMoved; you must instead pass kCGEventLeftMouseDragged while the drag is happening.
Once the drag is done you must pass kCGEventLeftMouseUp.
To do a single click (no drag) you simply send mouse down and then mouse up right after, without any drag...
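Putting that whole sequence together, here is a minimal sketch using the Quartz C API directly; the helper names are made up:

#include <ApplicationServices/ApplicationServices.h>

// Post one left-button mouse event of the given type at the given location.
static void postMouseEvent(CGEventType type, CGPoint loc)
{
    CGEventRef e = CGEventCreateMouseEvent(NULL, type, loc, kCGMouseButtonLeft);
    CGEventPost(kCGHIDEventTap, e);
    CFRelease(e);
}

// Drag: button down at the start, dragged moves (not kCGEventMouseMoved!),
// then button up at the end.
static void dragFromTo(CGPoint start, CGPoint end)
{
    postMouseEvent(kCGEventLeftMouseDown, start);
    postMouseEvent(kCGEventLeftMouseDragged, end);
    postMouseEvent(kCGEventLeftMouseUp, end);
}

// Single click: down followed immediately by up, with no drag in between.
static void clickAt(CGPoint p)
{
    postMouseEvent(kCGEventLeftMouseDown, p);
    postMouseEvent(kCGEventLeftMouseUp, p);
}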

What does ( WM_NCLBUTTONDOWN ) do?

I was studying an example of how to make a round-shaped form in Visual Basic 6, and I stopped at this code:
Public Const WM_NCLBUTTONDOWN = &HA1
I only know that this is a message to Windows, declared as a Const...
What I want to know is:
What is &HA1?
What does Const WM_NCLBUTTONDOWN do? What message does it send to Windows?
Anything else about it.
Thanks!
You are working with messages that Windows sends to a window to tell your code that something interesting happened. You'll find this constant used in the WndProc() method of a form, the method that runs when Windows sends a message.
The WM_NCLBUTTONDOWN message is one of those messages. WM = Window Message. NC = Non Client, the part of the window that's not the client area, the borders and the title bar. L = Left button, you can figure out BUTTONDOWN.
These messages are declared in a Windows SDK file. You'll have it on your machine, the VS2008 version of that file is located in C:\Program Files\Microsoft SDKs\Windows\v6.0A\Include\WinUser.h. Open it with a text editor or VS to see what's inside. Search for the message identifier to find this line:
#define WM_NCLBUTTONDOWN 0x00A1
The Windows SDK was written to work with C programs. #define is equivalent to Const in VB.NET. The 0x prefix means 'hexadecimal' in the C language, just like &H does in VB.NET. The Windows calculator is helpful for converting hexadecimal values to decimal and back; use View + Programmer. You'll see the reason &H is used in a VB.NET program: these constants started out in hexadecimal in the original declaration. But Private Const WM_NCLBUTTONDOWN = 161 will work just as well (10 x 16 + 1 = 161).
So within WndProc() you'd use a Select Case or If statement to detect the message, and you can do something special when the user clicks the left mouse button on the window's title bar. If you ignore it, MyBase.WndProc(m) runs and the normal thing happens: Windows starts a modal loop that lets the user move the window. It is actually very rare to want to stop or alter that behavior; users are pretty fond of the default, since all windows in Windows behave that way. The only message whose behavior you'd typically want to customize is WM_NCHITTEST, which is very useful for giving a borderless window border-like behavior. But that's another story.
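Expressed as a plain Win32 window procedure in C++, the same idea looks roughly like this (a sketch; the VB.NET WndProc override corresponds to this function):

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_NCLBUTTONDOWN:
        // wParam carries a hit-test code identifying the non-client part
        // that was clicked (HTCAPTION is the title bar); lParam packs the
        // screen coordinates of the cursor.
        if (wParam == HTCAPTION)
        {
            // do something special on a title-bar click here; falling
            // through to DefWindowProc keeps the normal move-window loop
        }
        break;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}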
That's a hexadecimal integer literal.
It declares a constant; by itself it doesn't actually do anything.
The WM_NCLBUTTONDOWN message is posted when the user presses the left mouse button while the cursor is within the nonclient area of a window. The message is posted to the window that contains the cursor.
&HA1 means the hexadecimal number A1, i.e. 161 (though you'll usually see Windows message constants written in hex). More commonly you'll see this as 0xA1 (or 0x00A1), since that's how the hex number would be written in C or C++ (the Windows API was originally written for C).
You won't be sending WM_NCLBUTTONDOWN to Windows; it's the other way around. Windows will be sending you WM_NCLBUTTONDOWN.
If you want to know what WM_NCLBUTTONDOWN is for, the documentation is just a Web-search away.

Is it possible to have a QWidget without a display?

I have a console-only win32 application (which can be launched as a windows service) where I want to create QWidget objects and interact with them.
These QWidgets will not be rendered locally (because there is no display), but I need access to their painting operations (to create screenshots of them, for example) and I need to intercept mouse and keyboard operations.
The aim of the service is to provide a display of the widget to a remote application. So the image, mouse and keyboard are meant to be redirected over the network.
Is it possible to have QWidget objects in a console application? What is the preferred method to intercept painting, mouse and keyboard operations in that case?
If this is possible, is it still possible with QAxWidget objects?
Have a peek at Qt for Embedded Linux. Qt is designed so you can do this, but it is non-trivial.
I do suspect you're not on the right track, though, if you have a console-mode service that needs a keyboard, a mouse, and a graphical UI. The need to interact with a user tells me that it should not be a service, and the need for a mouse suggests that it shouldn't be a console app either.
You can create a QApplication without a GUI using one of the provided constructors:
QApplication::QApplication(int&, char**, bool GuiEnabled)
In order to do GUI operations you'll still need a GUI available. For example, on Linux it will still require that X be running and available. I believe there are certain restrictions on what can and can't happen, but I can't find the blog post on http://labs.qt.nokia.com that provides the details.
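As long as such a display is reachable, grabbing a widget's pixels without ever showing it is straightforward with QWidget::render; a minimal sketch (the widget choice and file name are arbitrary):

#include <QApplication>
#include <QImage>
#include <QPainter>
#include <QPushButton>

int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    QPushButton button("Hello");
    button.resize(200, 50);

    // Paint the widget into an offscreen image; the widget is never shown.
    QImage image(button.size(), QImage::Format_ARGB32);
    image.fill(Qt::transparent);
    QPainter painter(&image);
    button.render(&painter);
    image.save("button.png");
    return 0;
}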
At the moment I am trying to do something similar myself. The approach I have taken is to create a subclass of QGraphicsScene and override the QGraphicsScene::sceneChanged event. Then it goes as follows (pseudocode):
QApplication app;
MyGraphicsScene* mgs = new MyGraphicsScene();
MyWidget* mw = new MyWidget();
mgs->addWidget(mw);
Now, each time a change happens, your sceneChanged will be invoked. There you can get a snapshot of the scene as a QImage. In my case I move the pixel data to a texture and render it as an overlay of my game:
void QEOverlay::sceneChanged(QList<QRectF> list)
{
    // loop through all screen areas that changed
    foreach(QRectF rect, list)
    {
        // clamp the rectangle to whole numbers that fit within the screen
        if(rect.x() < 0) rect.setX(0);
        if(rect.y() < 0) rect.setY(0);
        if(rect.right() > _width) rect.setRight(_width);
        if(rect.bottom() > _height) rect.setBottom(_height);
        rect = QRectF(Round(rect.x()), Round(rect.y()), Round(rect.width()), Round(rect.height()));

        // create an image, fill it with transparent color, and render
        // the changed part of the scene into it
        QImage image(rect.width(), rect.height(), QImage::Format_ARGB32);
        image.fill(0);
        QPainter p(&image);
        render(&p, image.rect(), rect);

        // copy the pixels into the overlay texture
        if(tex.lock())
        {
            for (u32 y = 0; y < image.height(); y++)
            {
                for (u32 x = 0; x < image.width(); x++)
                {
                    QRgb pix = image.pixel(x, y);
                    tex.color(x + rect.left(), y + rect.top(),
                              Color(qRed(pix), qGreen(pix), qBlue(pix), qAlpha(pix)));
                }
            }
            tex.unlock();
        }
    }
}
There is an issue with this approach: you still need to redirect keyboard and mouse input events to your subclass. That does not work out very well for me; there are certain problems, like mouse clicks not focusing a QLineEdit or elements in a QWebView.
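For what it's worth, a sketch of that redirection step: synthesize a QMouseEvent and hand it to Qt's event system with QApplication::sendEvent (sendClick is a made-up helper, and this is exactly where the focus problems mentioned above tend to surface):

#include <QApplication>
#include <QMouseEvent>
#include <QWidget>

// Forward a synthesized left click to a widget at a widget-local position.
void sendClick(QWidget* target, const QPoint& pos)
{
    QMouseEvent press(QEvent::MouseButtonPress, pos,
                      Qt::LeftButton, Qt::LeftButton, Qt::NoModifier);
    QApplication::sendEvent(target, &press);

    QMouseEvent release(QEvent::MouseButtonRelease, pos,
                        Qt::LeftButton, Qt::NoButton, Qt::NoModifier);
    QApplication::sendEvent(target, &release);
}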
