Detect if a click is real or emulated - Windows

I wonder how I can detect whether a click was made by the user with a real mouse versus a click made by a bot (emulated).
Most of my research suggests that it is impossible, but I have seen many games that successfully block emulated clicks while having no impact on the real ones.
I can't seem to find any tutorials or articles on how to do this.
Based on my research, emulated clicks will most likely be produced with either the SendInput or the SendMessage function. Both functions are exported by User32.dll.
So is it possible (or safe) to obtain the call stack of the event and block the event if I find User32.dll in the stack? How can I do that in Unity?
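From what I've found so far, clicks synthesized with SendInput do at least show up in a native low-level mouse hook with the LLMHF_INJECTED flag set. A minimal sketch of that (native C++, which would have to live in a native plugin rather than in Unity's C# code) is below; note it would not catch SendMessage-based clicks at all, since those go straight to the window procedure, and legitimate tools such as remote-desktop software also inject input:

```cpp
// Minimal Win32 sketch: a low-level mouse hook that inspects the LLMHF_INJECTED
// flag set on input synthesized via SendInput. Clicks delivered with
// SendMessage/PostMessage bypass the input queue and never reach this hook.
#include <windows.h>
#include <cstdio>

static HHOOK g_hook = nullptr;

LRESULT CALLBACK LowLevelMouseProc(int nCode, WPARAM wParam, LPARAM lParam) {
    if (nCode == HC_ACTION && (wParam == WM_LBUTTONDOWN || wParam == WM_RBUTTONDOWN)) {
        const MSLLHOOKSTRUCT* info = reinterpret_cast<MSLLHOOKSTRUCT*>(lParam);
        if (info->flags & LLMHF_INJECTED) {
            std::printf("Injected click at (%ld, %ld) - blocking\n", info->pt.x, info->pt.y);
            return 1;  // non-zero return swallows the event
        }
    }
    return CallNextHookEx(g_hook, nCode, wParam, lParam);
}

int main() {
    g_hook = SetWindowsHookExW(WH_MOUSE_LL, LowLevelMouseProc, GetModuleHandleW(nullptr), 0);
    // A low-level hook requires a message loop on the installing thread.
    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    UnhookWindowsHookEx(g_hook);
    return 0;
}
```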

Pattern matching is good enough to filter out really simple click bots. Those typically click in precisely the same location, and the interval between clicks is regular to within a tenth of a second. I'd start there, and then develop more advanced heuristics as you learn how cheaters actually interact with your game client. As others have said, it depends on your game.
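As a rough illustration of that heuristic, here is a sketch in plain C++ (the window size and thresholds are made up and would need tuning against real player data). It flags a run of clicks that land on exactly the same pixel with intervals regular to within a tenth of a second:

```cpp
#include <algorithm>
#include <deque>

struct Click { int x; int y; double timeSeconds; };

// Illustrative heuristic only: thresholds are invented, not tuned.
class ClickBotDetector {
public:
    // Returns true if the recent click history looks scripted.
    bool OnClick(const Click& c) {
        history_.push_back(c);
        if (history_.size() > kWindow) history_.pop_front();
        if (history_.size() < kWindow) return false;

        bool samePosition = true;
        double minInterval = 1e9, maxInterval = 0.0;
        for (size_t i = 1; i < history_.size(); ++i) {
            if (history_[i].x != history_[0].x || history_[i].y != history_[0].y)
                samePosition = false;
            double dt = history_[i].timeSeconds - history_[i - 1].timeSeconds;
            minInterval = std::min(minInterval, dt);
            maxInterval = std::max(maxInterval, dt);
        }
        // Same spot, and intervals regular to within 1/10th of a second.
        return samePosition && (maxInterval - minInterval) < 0.1;
    }

private:
    static const size_t kWindow = 10;  // number of recent clicks to consider
    std::deque<Click> history_;
};
```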

Related

Why does the browser zoom have to be at 100% for TestComplete and Telerik?

I was wondering why this limitation exists for both tools. I understand that it's needed to correctly identify web page elements and their position on the page. My question is: what underlying functionality creates this limitation?
I should also say that I am asking because I see that TestComplete can identify objects by their name, so why can't it use that instead?
TestComplete gets access to objects on a web page via the browsers' internal APIs. These APIs return all information about an object, including its position on the page, without taking the zoom level into account. I suppose TestComplete could try to recalculate an object's coordinates, but I doubt it could do so in exactly the same way the browser does, and there would always be some difference.
TestComplete needs an object's coordinates in order to work with the object because of the way it operates: it simulates user actions over the application. So, to click a button, TestComplete moves the mouse pointer to the corresponding point on the screen and invokes a mouse-click event. This differs from the approach used by some other tools (e.g. Selenium), which simply trigger objects' native events rather than simulating a human user's mouse/keyboard activity.
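To illustrate why the zoom level matters for that coordinate-based approach, here is a purely hypothetical sketch (the names and formula are mine, not TestComplete's): the position reported by the browser API is in unzoomed page coordinates, while the point that actually has to be clicked is scaled by the zoom factor, and the real browser math additionally involves scroll offsets, borders and rounding, which is exactly where discrepancies would creep in.

```cpp
#include <cstdio>

struct Point { double x; double y; };

// Hypothetical recalculation, for illustration only.
Point PageToScreen(Point elementOnPage, Point viewportOriginOnScreen, double zoomFactor) {
    return { viewportOriginOnScreen.x + elementOnPage.x * zoomFactor,
             viewportOriginOnScreen.y + elementOnPage.y * zoomFactor };
}

int main() {
    Point reported = {200.0, 150.0};   // position returned by the browser API (zoom ignored)
    Point viewport = {10.0, 120.0};    // top-left of the page area on screen
    Point at100 = PageToScreen(reported, viewport, 1.00);
    Point at125 = PageToScreen(reported, viewport, 1.25);
    std::printf("click target at 100%%: (%.0f, %.0f)\n", at100.x, at100.y);
    std::printf("click target at 125%%: (%.0f, %.0f)\n", at125.x, at125.y);
    // At 125% zoom the unscaled coordinates would miss the element by about 50 x 38 pixels.
    return 0;
}
```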

Generate and post Multitouch-Events in OS X to control the mac using an external camera

I am currently working on a research project for my university. The goal is to control a Mac using the Microsoft Kinect camera. Another student is writing the Kinect driver (the camera will be mounted somewhere on the ceiling or on the wall behind the Mac, and the driver outputs the position of all fingers on the Mac's screen).
It is my responsibility to take those finger positions and react to them. The goal is to use a single finger to control the mouse, and to react to multiple fingers in the very same way as if they were on the trackpad.
I thought this was going to be easy and straightforward, but it's not. It is actually very easy to control the mouse cursor using one finger (using CGEvent), but unfortunately there is no public API for creating and posting multitouch gestures to the system.
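For reference, the single-pointer part really is just a few lines of Quartz Event Services code. A minimal sketch (the coordinates are placeholders standing in for whatever the Kinect driver reports):

```cpp
// Compile as C/C++ with: -framework ApplicationServices
#include <ApplicationServices/ApplicationServices.h>

// Move the system cursor to (x, y) in global display coordinates.
static void MoveCursorTo(CGFloat x, CGFloat y) {
    CGEventRef move = CGEventCreateMouseEvent(NULL, kCGEventMouseMoved,
                                              CGPointMake(x, y), kCGMouseButtonLeft);
    CGEventPost(kCGHIDEventTap, move);
    CFRelease(move);
}

// Post a left-button click at (x, y).
static void ClickAt(CGFloat x, CGFloat y) {
    CGPoint p = CGPointMake(x, y);
    CGEventRef down = CGEventCreateMouseEvent(NULL, kCGEventLeftMouseDown, p, kCGMouseButtonLeft);
    CGEventRef up   = CGEventCreateMouseEvent(NULL, kCGEventLeftMouseUp,   p, kCGMouseButtonLeft);
    CGEventPost(kCGHIDEventTap, down);
    CGEventPost(kCGHIDEventTap, up);
    CFRelease(down);
    CFRelease(up);
}

int main() {
    MoveCursorTo(400, 300);   // e.g. the position of the single tracked finger
    ClickAt(400, 300);
    return 0;
}
```

The hard part, as described above, is that there is no comparable public API for multitouch gestures.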
I've done a lot of research, including catching all CGEvents using an event tap at the lowest possible position and trying to disassemble them, but no real progress so far.
Then I stumbled over this and realized that even the lowest position for an event tap is not deep enough:
Extending Functionality of Magic Mouse: Do I Need a kext?
If I understand it correctly, the built-in trackpad (as well as the Magic Mouse and the Magic Trackpad) communicates through a kernel extension (kext) with the private MultitouchSupport framework, which in turn generates the incoming data and posts it to the OS in some way.
So I would need to use private APIs from the MultitouchSupport.framework to do the very same thing the trackpad does, right?
Or would I need to write a kernel extension myself?
And if I need to use the MultitouchSupport-framework:
How can I disassemble it to get at the private APIs? (I know about class-dump, but that only works on Objective-C frameworks, which this framework is not.)
Many thanks for any response!
NexD.
"The goal is to use one single finger to control the mouse and react on multiple fingers the very same way" here if I understand what you are trying to do is you try to track fingers from Kinect. But the thing is Kinect captures only major body joints. But you can do this with other third party libraries I guess. Here is a sample project I saw. But its for windows. Just try to get the big picture there http://channel9.msdn.com/coding4fun/kinect/Finger-Tracking-with-Kinect-SDK-and-the-Kinect-for-XBox-360-Device

Window docking advice for Mac

I'm from a Windows programming background when it comes to writing tools, but have been programming using Carbon and Cocoa for the past year. I have introduced myself to the Mac by, I admit it, hiding from UI programming. I've basically been wrapping my OpenGL code in a view, then staying in my comfort zone using my platform-agnostic OpenGL C++ code as usual.
However, now I want to start porting one of my more sophisticated applications to Mac OS.
Typically I use the standard Visual Studio dockable MDI approach, which is excellent, but very Windows-like. From using a Mac primarily now for a while, I don't tend to see this sort of method used for Mac UIs. Even Xcode doesn't support the idea of drag and drop/dockable views, unfortunately. I see docked views with splitter panels, but that's about it.
The closest thing I've seen to the Visual Studio approach is Photoshop CS4, which is pretty nice.
So what is the general consensus on this? Is there a more Mac-like way of achieving the same thing that I haven't seen? If not, I'm happy to write a window manager in Cocoa myself, so that I can finally delve in and learn what looks like an excellent API.
Note, I don't want to use Qt or any other cross-platform libraries. The whole point is that I want to make the Mac app look like a Mac app and leave the Windows app looking like a Windows app. I always find that cross-platform libraries tend to lose this effect, and when I see a native Mac UI, with fancy Cocoa transitions and animations, I always smile. It's also a good excuse for me to learn Cocoa.
That being said, if there is an Open Source Cocoa library to do this, I'd love to know about it! I'd love to see how someone else achieves this, and would help smooth the Cocoa learning curve.
Cheers,
Shane
UPDATE: I forgot to mention a critical point. I support plugins, which can have their own UI to display various plugin-specific information. I don't know which plugins will be loaded, and if I don't support docking, I don't know where their UI will live. I'd love to hear people's thoughts on this, specifically: how do I support a plugin view architecture if the UI can't change? Where do I put the plugin views?
Coming from a Windows background, you feel the need to have docking windows, but is it really essential to the app? Apple's philosophy (in my opinion) is that the designer knows better than the user how things should look and work. For example, iTunes is a pretty sophisticated app, but it doesn't let you change the UI around, change the skin, etc., because Apple wants to keep it consistent. They offer the full view, the mini player, and a handful of different viewing options, but they don't let you pull the source list off into a separate window, or dock it in other positions. They think it should be on the left, so there it stays...
You said you "want to make a Mac app look like a Mac app", and as you pointed out, Mac apps don't tend to have docking windows. Therefore, implementing your own docking windows is probably a step in the wrong direction ;)
+1 to Ken's answer.
From a user perspective, unless it's integral to the app like it is in Adobe CS or Eclipse, I want everything as concise as possible and all the different options and displays out of my way so I can focus on the document.
I think you will find with Mac users that those who have the "user skill" to make use of rearrangeable panels will in most cases opt for hotkey bindings instead, and those who don't have that level of "skill" you're just going to confuse.
I would recommend keeping it as simple as possible.
One thing that's common among many Mac apps is the ability to hide all the chrome and focus on your content. That's the point behind the "tic tac" toolbar control in the top right corner of many windows. A serious weakness of many docking UIs is that they expect you to have the window take up most of the screen, because the docked panels can obscure content. Even if docked panels are collapsable, the space left by them is often just wasted and filled with white space. So, if you build a docking panel into your interface, you should expect it to be visible most of the time. For example, iTunes' source list is clearly designed to be visible all the time, but you can double-click a playlist to open it in a new window.
To get used to the range of Mac controls, I'd suggest you try doing some serious work with some apps that don't have a cross-platform UI; for example, the iWork apps, Interface Builder or Preview. Take note of where controls appear and why—in toolbars, in bottom bars, in inspectors, in source lists/sidebars, in panels such as IB's Library or the Font and Color panels, in contextual HUDs. Don't forget the menu bar either. Get an idea of the feel of controls—their responsiveness, modality, sizing, grouping and consistency. Try to develop some taste—not everything is perfect; just try iCal if you want to have something to make fun of.
Note that there's no "one size fits all" for controls, which can be an issue with docking UIs. It's important to think about workflow: how commonly used the control would be, whether you can replace it with direct manipulation, whether a visible indication of its state is necessary, whether it's operable from the keyboard and mouse where appropriate, and so forth. Figure out how the control's placement and behavior lets the user work more efficiently.
As a simple example of good versus bad control placement and behavior in otherwise-decent applications, compare image masking in OmniGraffle and Keynote. In OmniGraffle, this uses the Image inspector, where you have to first click an unlabeled button ("Natural size") in order to enable the appropriate controls, then adjust size and position in a low-fidelity fashion with an image thumbnail or by typing percentages into fields. Trying to resize the frame directly behaves in a bizarre and counterintuitive fashion.
In Keynote, masking starts with a sensibly named menu item or toolbar item and uses a HUD which pops up the instant you click on a masked image and allows for direct manipulation, including a sensible display of the extent of the image you're masking. While you're dragging a masked image around, it even follows the guides. Advanced users can ignore the HUD entirely, just double-clicking the image to toggle mask editing and using the handles for sizing. It should be easy to see, with a few caveats (e.g. the state of "Edit Mask" mode should be visible in the HUD rather than just from the image; the outer border of the image you're masking should be used more effectively), that Keynote is substantially better at this, in part because it doesn't use an inspector.
That said, if you do have a huge number of options and the standard tabbed inspector layout doesn't work for you, check out the Omni Group's OmniInspector framework. Try to use it for good, and hopefully you'll figure out how to obsess over UI as much as you do over graphics now :-)
(running in slow motion, reaching out in panic) Nnnnnoooooooo!!!!!
:-) Seriously, as I mentioned in reply to Ken's excellent answer, trying to force a "Windowsism" on an OS X UI is definitely a bad idea. In my opinion, the biggest problem with Windows UI is third-party developers inventing new and inconsistent ways of presenting UI, rather than being consistent and following established conventions. To a Mac user, that's the sign of a terrible application. It's that way for a reason.
I encourage you to rethink your app's UI implementation from the ground up with the Mac OS in mind. If you've done your job well, the architecture and model (sans platform-specific implementation) should translate clearly to any platform.
In terms of UI, you've been using a Mac for a year, so you should have a pretty good idea of "the norm". If you have doubts, it's best to post a question specifically detailing what you need to present and your thoughts on how you might do it (or asking how if you have no idea).
Just don't whack your app with the ugly stick by forcing it to behave as if it were running in Windows when it's clearly not. That's the kiss of death for an app to Mac users.

How do you create a second taskbar to use on multiple monitors?

I recently got myself a second monitor and I have been looking at software which offers the possibility to extend the taskbar to the second monitor. Software such as UltraMon and MultiMon offers this possibility.
I'd be interested to know what method they use to replicate the taskbar. More precisely:
Is the second taskbar completely generated and managed by the software or is it some sort of extension/modification of how Windows behave?
How are the additional buttons added to each window's title bar? Is there some sort of templating system similar to what Stardock does?
How can you replicate the taskbar feel?
How can you remove open software icons from the main taskbar in order to move them to the software's taskbar?
Would creating a second Start button actually involve some sort of image of said button, with the software making possible calls to the Windows API? (By "possible", I mean I have no idea whether such calls exist.)
Finally, I'd be interested to know what field of knowledge is required to program such software.
I'd be glad to receive any pointers to articles or information that would lead to answers. If you have in depth knowledge that you'd gladly share, I'd really appreciate it.
Thanks to all for your replies.
They completely re-create the experience. DisplayFusion uses the Desktop Window Manager API to capture live thumbnails. Scott Hanselman has a very good rundown on just how close they got and where they're different.
I would imagine there is a lot of ugly code required to get it as close as they've gotten it.
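For the live-thumbnail part specifically, the DWM thumbnail API boils down to something like this minimal sketch (hwndMyTaskbar and hwndSource are placeholders for your replacement taskbar window and the window being previewed):

```cpp
// Minimal sketch of the DWM thumbnail API (link against dwmapi.lib).
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

// Register a live preview of hwndSource inside hwndMyTaskbar and make it visible.
HTHUMBNAIL ShowLivePreview(HWND hwndMyTaskbar, HWND hwndSource, RECT destinationRect) {
    HTHUMBNAIL thumb = nullptr;
    if (FAILED(DwmRegisterThumbnail(hwndMyTaskbar, hwndSource, &thumb)))
        return nullptr;

    DWM_THUMBNAIL_PROPERTIES props = {};
    props.dwFlags = DWM_TNP_RECTDESTINATION | DWM_TNP_VISIBLE;
    props.rcDestination = destinationRect;   // where to draw it inside hwndMyTaskbar
    props.fVisible = TRUE;
    DwmUpdateThumbnailProperties(thumb, &props);
    return thumb;                            // later: DwmUnregisterThumbnail(thumb)
}
```

That only covers the previews, of course; the window enumeration, button rendering and Start menu replication are all separate work.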

GUI Design - Multiple forms vs Simulated MDI (Tabs) vs PageControl

Which of the following styles do you prefer?
An application which opens new forms to perform tasks
An application which keeps the various "forms" in different tabs
An application which is based on a PageControl and shows you the right tab depending on what you want to do.
Something else
Also, do you have any good links on GUI design?
From a programmer's point of view, the PageControl solution quickly gets out of hand: possibly too much code and certainly too many components on one form. (Originally this question was tagged Delphi, so I'll go from there.)
From a user's point of view, the "opens new window" paradigm is often confusing. We tend to think that we are able to multitask and handle many open windows and tasks, but we are not (like computers, we task-switch at a cost in time, and we also lose accuracy).
Obviously this really depends on the type of application. But I would tend toward the paradigm that Chrome and Firefox show in their latest incarnations:
keep the various forms in different tabs
let the user detach a tab into its own form (dock and undock via drag & drop)
add a good way of navigation
I also implement something like an SDI as the main screen of an application. Look at something like "Outlook style": navigation, a list of objects, and object details in different panes, plus some additional panes like a cockpit. Then open a new window/form for certain tasks (some modal, some non-modal), but short-lived: after the email is written, it is sent and the window closes. But I still have, if I am capable of handling it, the possibility to work on multiple emails at a time.
Look at the problem. If it has a dashboard character, go with "Outlook style" or similar. If the users are a wide-spread, heterogeneous, non-computer-savvy crowd, use SDI or forms on tabs. If you write for programmers, you might go for multiple forms, just because we tend to think that we can handle it. And it works well with multiple screens (hopefully).
MDI is the worst choice possible, in my opinion. There's nothing I hate more than having to resize a bunch of windows, or tile them or whatever.
Tabs are bad, too, especially if you have more than one row of them (or if you have one row but still have more tabs than will fit, and have to use some funky scrollbar or "more" button with them).
I would rather see the programmer think about the problem and just show me what I need to see based on what I'm doing as a user. Implementing the different user interfaces in your programs as user controls (as opposed to discrete forms) and then showing them or hiding them based on the current context is the way to go.
The tabbed form is a good idea if you use a frame for each tab's content. This keeps you out of the trouble of putting too much code in one single form unit. Try to do the same as Google Chrome. I personally create a menu whose options are actually frames that load only when the user asks for them, so there will never be many tabs visible unless the user needs them all open.