How to hook a specific API on Windows with SetWindowsHookEx?

I am trying to hook an API (say, MessageBox()) in other processes (I may not know the process ID) on Windows. I know that I have to use the SetWindowsHookEx() function, but I still have three questions:
1) Can the SetWindowsHookEx() function make the hook global, i.e., not limited to the current process? (When other applications call this API, is it hooked?)
2) If I want to replace the to-be-hooked API with my own function, how should I do it?
3) I read many materials, and I found the term "hook procedure" or "hook function". How should I understand this? Currently, I take it to mean the function that I will use to replace the API (say again, MessageBox).

This is not what SetWindowsHookEx is for. SetWindowsHookEx is for hooking into Windows messages, not APIs (for example, if you want to know when a window changes size or gets created).
Hooking API calls is more complicated and messier. There is no built-in way to do it; you usually want to find a library to help you, such as Detours.
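As a rough sketch (assuming the Microsoft Detours library), here is what hooking MessageBoxW looks like once your code is already running inside the target process, typically from an injected DLL; the injection step itself is the messy part and is not shown:
#include <windows.h>
#include <detours.h>

// Pointer to the real MessageBoxW; Detours redirects it to a trampoline.
static int (WINAPI *TrueMessageBoxW)(HWND, LPCWSTR, LPCWSTR, UINT) = MessageBoxW;

// The replacement ("hook function"): runs instead of MessageBoxW and can still
// call the original through the trampoline pointer.
static int WINAPI HookedMessageBoxW(HWND hWnd, LPCWSTR text, LPCWSTR caption, UINT type)
{
    return TrueMessageBoxW(hWnd, L"Hooked!", caption, type);
}

BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
{
    if (reason == DLL_PROCESS_ATTACH) {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)TrueMessageBoxW, HookedMessageBoxW);
        DetourTransactionCommit();
    } else if (reason == DLL_PROCESS_DETACH) {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourDetach(&(PVOID&)TrueMessageBoxW, HookedMessageBoxW);
        DetourTransactionCommit();
    }
    return TRUE;
}
HookedMessageBoxW here is the "hook function" in the sense of question 3: it replaces the API but keeps access to the original.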

You can use Deviare API Hook for this. With this library you can hook any API in about 10 lines of code, even from .NET.
The difference from Detours is that you don't have to write the code that gets inserted into each process. You can hook all the processes you want just by attaching to them; then you receive the calls in your own process.

Related

Can the Pebble timeline be used via Pebble.js?

It's unclear whether the Timeline features are supported when using only the Pebble.js approach (i.e., no C code). Can anyone comment?
Yes, you can use Pebble.js to call regular PebbleKitJS functions (e.g., Pebble.getTimelineToken).
As for pushing pins to the timeline: pins are pushed via the timeline Web API, which means you can use the ajax function to make a request to the API from within Pebble.js.

Getting a notification when a local file is accessed in Windows

I'd like to get notified when a specific file gets accessed (AFAIK, for user-land code this most generally means CreateFile() / NtCreateFile()).
I already know about FileSystemWatcher, which does something similar within the .NET environment, but I'm working in plain C + WinAPI.
As for the type of notification, raising a specified event would be perfect, but a callback that gets called would also work.
See the FindFirstChangeNotification function in the WinAPI and the related links.
Alternatively, when the functionality of that function is not enough, you can use a file system filter driver (write your own or use our CallbackFilter product).
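A minimal plain C + WinAPI sketch of that approach (the directory path is hypothetical). Note that change notifications report directory-level changes such as writes, size changes, and renames, but cannot tell you that a file was merely opened or read; that is where a filter driver becomes necessary:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE hChange = FindFirstChangeNotificationW(
        L"C:\\watched-dir",                 /* directory containing the file of interest */
        FALSE,                              /* do not watch subdirectories               */
        FILE_NOTIFY_CHANGE_LAST_WRITE | FILE_NOTIFY_CHANGE_FILE_NAME);
    if (hChange == INVALID_HANDLE_VALUE) {
        printf("FindFirstChangeNotification failed: %lu\n", GetLastError());
        return 1;
    }

    for (;;) {
        if (WaitForSingleObject(hChange, INFINITE) != WAIT_OBJECT_0)
            break;
        printf("something in the directory changed\n");   /* check the file's timestamp/size here */
        if (!FindNextChangeNotification(hChange))          /* re-arm for the next notification     */
            break;
    }

    FindCloseChangeNotification(hChange);
    return 0;
}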

AutoUnlock a Windows User Session

Recently, I have been working on a CredentialProvider in order to automatically unlock a Windows Vista (or more recent) user session; the trigger can be any event, so let's say the end of a timer.
For that I read some useful articles on the subject and on the change from GINA to this new architecture: http://msdn.microsoft.com/en-us/magazine/cc163489.aspx.
Like everyone in the process of creating a custom CredentialProvider, I think, I didn't start from scratch but from the sample code provided by Microsoft, and then I tried to change the behaviour (things like logging) in the different functions.
So in the end I can use the custom CredentialProvider and enter the SetUsageScenario method, but I still cannot reach the SetSerialization or GetSerialization method. From what I've understood of the technical documentation on CredentialProviders (also provided by Microsoft), these two methods should be called automatically. Is there something I missed?
Also, my original idea was to get an authentication package using Kerberos in order to perform an implicit user authentication. I got this idea from other SO and MSDN threads like
Is this approach the right one?
Thank you very much for your time answering my questions. Any clarifications are welcome, even if they don't directly resolve my problems :-)
First of all, you need to set the autologon flag to true in your implementations of the ICredentialProviderCredential::SetSelected(BOOL *pbAutoLogon) and ICredentialProvider::GetCredentialCount methods.
Next, you need to call ICredentialProviderEvents::CredentialsChanged when your timer fires.
LogonUI will then re-enumerate your credentials, and because autologon is set to true it will call your GetSerialization() method.
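A sketch of those pieces in C++, assuming the structure of Microsoft's credential provider sample (the class names, the _pcpEvents / _upAdviseContext / _fTimerFired members, and the OnTimer hook are placeholders; the rest of the COM plumbing and error handling is omitted):
// Credential: report autologon when this credential is selected.
HRESULT MyCredential::SetSelected(BOOL* pbAutoLogon)
{
    *pbAutoLogon = TRUE;  // tells LogonUI to call GetSerialization without waiting for the user
    return S_OK;
}

// Provider: expose one credential and mark it for automatic logon.
HRESULT MyProvider::GetCredentialCount(DWORD* pdwCount, DWORD* pdwDefault, BOOL* pbAutoLogonWithDefault)
{
    *pdwCount = 1;
    *pdwDefault = 0;                          // our single credential is the default
    *pbAutoLogonWithDefault = _fTimerFired;   // only auto-logon once the trigger has occurred
    return S_OK;
}

// Called when the timer fires: ask LogonUI to re-enumerate the credentials,
// which (with autologon requested) leads to GetSerialization being called.
void MyProvider::OnTimer()
{
    _fTimerFired = TRUE;
    if (_pcpEvents)
        _pcpEvents->CredentialsChanged(_upAdviseContext);
}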
SetSerialization and GetSerialization are called on your provider by LogonUI. After the user enters a username/password and presses ENTER, LogonUI calls GetSerialization and passes, as one of its four parameters, a pointer that will eventually point to a CREDENTIAL_PROVIDER_CREDENTIAL_SERIALIZATION structure created and filled by you; that structure is then sent from LogonUI to Winlogon to perform the authentication. I don't know how to make LogonUI call GetSerialization from your credential provider code, and as far as I know you can't call GetSerialization on your own: where would you pass your filled CREDENTIAL_PROVIDER_CREDENTIAL_SERIALIZATION structure if no one requested it, given that only LogonUI can pass it to Winlogon?
There is a document called "Credential Provider Technical Reference" where you can read some details about credential providers. In the Shell samples folder there is a strange folder called "Autologon"; maybe it will help you! Good luck!

How to integrate WinRT asynchronous tasks into existing synchronous libraries?

We have a long-established, heavily multiplatform codebase that is currently being ported to WinRT. One of the challenges we're facing is how to handle WinRT's asynchronous style.
For example, we are unsure how to handle WinRT's async file operations. Unsurprisingly, our codebase's API is synchronous. A typical example is our File::Open function which attempts to open a file and return with success or failure. How can we call WinRT functions and yet keep the behavior of our functions the same?
Note that we are unfortunately constrained by legacy: we cannot simply go and change the API to become asynchronous.
Thanks!
I assume you wish to reimplement the library to support WinRT apps while not changing the definitions of the APIs, so that existing applications remain compatible.
I think that if you don't include the await keyword when calling an async method, you will not get an async operation; it should execute synchronously. But that really doesn't work if the method returns a value (in my experience).
I've been using this code to make a file operation synchronous:
IAsyncOperation<string> contentAsync = FileIO.ReadTextAsync(file); // start the async read
contentAsync.AsTask().Wait();                                      // block until it completes
string content = contentAsync.GetResults();                        // fetch the result
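If the ported codebase is native C++ rather than C#, a roughly equivalent sketch in C++/WinRT (assuming that projection is available to you) blocks on the operation with .get():
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Storage.h>
#include <string>

// Blocks the calling thread until the async read completes. Don't call this
// on the UI/STA thread, or it can deadlock (see the caveats below).
std::wstring ReadAllTextBlocking(winrt::Windows::Storage::StorageFile const& file)
{
    winrt::hstring text = winrt::Windows::Storage::FileIO::ReadTextAsync(file).get();
    return std::wstring(text.c_str(), text.size());
}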
If you want to share your code with a platform that does not support async/await, you are probably better off having a different API for the old platform and the new one, with switches like
#if SILVERLIGHT
#elif NETFX_CORE
#elif WPF
#endif
Eventually the async APIs are likely to show up on the older platforms, and where they don't you could wrap the non-async calls in Tasks to make them async. Forcing an async method to work synchronously is bound to bite you rather quickly: your WinRT app might become unresponsive for a few seconds, for example, and get killed by the OS, or you could get deadlocks by waiting for tasks to complete while blocking the thread they try to complete on.

How can I write an automated unit test of a GUI in Xcode?

I want to write a unit test of just the GUI part of my Cocoa application.
In the textbook unit test, there's a test framework and test case that calls the unit under test. All the code below that unit is mocked out. So, both the input and the output are controlled and monitored; only the code in the unit under test is tested.
I want to do the same thing where the unit under test is my GUI:
1) Set up some kind of framework where I can write code that will manipulate and inspect GUI controls.
2) Connect my GUI controls to mocks of my actual code, not to the real instances.
3) Run the test, which manipulates the controls and then checks the mock object to see whether the correct methods were called with the correct parameters and checks the GUI to see whether the responses from the mock object causes the correct changes in the widgets.
Anyone doing this? If so, how? Any ideas on how I could do this?
Thanks,
Pat
(Edit) To give a very specific example, I want to:
1) Write a test case that will select the menu item 'MyMenu' -> 'MyItem'. In this test case, I want to check to see that the method [AppDelegate doMyItem] gets called precisely once and that no other methods in AppDelegate get called.
2) Generate a mock object of AppDelegate. (I know how to do this)
3) Somehow (handwaving here) link my application so that a mock instance of AppDelegate is linked in instead of the real one.
4) Run the test. Watch it fail because 1) I haven't created MyMenu yet. 2) I haven't created MyItem yet. 3) I haven't done the IB work to connect MyItem to [AppDelegate doMyItem], or 4) because I haven't written the 'doMyItem' method yet.
5) Fix the above four issues (one at a time if I'm feeling really pedantic that day).
6) Run the test again and watch it succeed.
Does this make the question clear?
Two principles, two links:
Make the view as dumb as possible, with the passive view pattern: this makes the GUI easier to test.
Trust but verify: trust the Cocoa implementation of buttons, menus, etc., but verify that targets and actions are correctly connected and that bindings are as expected.
Here are a couple of popular ways of doing this in general (should work with most if not all cocoa compatible languages).
1 - Create a callback interface. One of the inputs when creating your GUI elements is an implementation of this interface. When there's a user interaction, the GUI element calls an update function on that interface. Have a real implementation and a test implementation (see the sketch after this list).
2 - Use event-handlers. Register all of your GUI elements with one or more event-handlers, and have the GUI generate events on user interaction. Have an event handler interface with two implementations, again one for real use and one for testing.
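A minimal, framework-agnostic sketch of approach 1 in C++ (the names IItemActions, RealActions, MockActions, and MenuItem are invented for illustration; in a Cocoa app the interface would typically be an Objective-C protocol and the mock would come from a mocking library such as OCMock):
#include <cassert>
#include <string>
#include <vector>

struct IItemActions {                    // the callback interface the GUI talks to
    virtual ~IItemActions() = default;
    virtual void doMyItem() = 0;
};

struct RealActions : IItemActions {      // production implementation
    void doMyItem() override { /* real work */ }
};

struct MockActions : IItemActions {      // test implementation: just records calls
    std::vector<std::string> calls;
    void doMyItem() override { calls.push_back("doMyItem"); }
};

struct MenuItem {                        // the GUI element holds the interface, not a concrete class
    explicit MenuItem(IItemActions* actions) : actions_(actions) {}
    void select() { actions_->doMyItem(); }   // what happens on user interaction
private:
    IItemActions* actions_;
};

int main() {
    MockActions mock;
    MenuItem item(&mock);
    item.select();                                                  // simulate the user picking the item
    assert(mock.calls.size() == 1 && mock.calls[0] == "doMyItem");  // exactly one call, correct method
    return 0;
}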
Edit: whoops, missed requirement #1. I've never done this with OSX-specific controls, but in general there are two approaches.
1 - Create a script or app that generates user-like input. This has the drawback that it's not easy to actually inspect the GUI; you instead need to write good test cases to make sure that everything that should be there is, and nothing extra is.
2 - Create an interface with a test implementation that replaces the rendering and interface layer. This is easier with libraries like SDL or directFB and less so with things like the OSX API, Win32 API, etc.
Edit: responding to edit in question.
In the case of your example, using a separate testing app and event handlers, here's how it'd look:
Your test application is a simple app or script that starts up your GUI and then generates mouse/keyboard events based on input files. As I've said, I've never done this on OSX (only QNX). With any luck you'll be able to generate mouse and keyboard events with the API, but you'll have to ask someone else whether it's possible.
So create an input for your test-case. The test app will parse this to know what to do. It may be simple XML like this:
<testcase name="blah"><mouseevent x="120" y="175" type="click"/></testcase>
or whatever the mouse sequence may actually be.
When your script executes that command, it will click the mouse on that button. Your event handler will pick up on this. But now you should be running your app with a --test flag or some such, so that it's actually using the test event handler. Instead of doing whatever your app normally does, the test event handler can do some custom action. For instance, it may do some of the normal actions (you still need the GUI to respond) and then send a message (via socket, pipe, whatever) to your test app.
Your test app will pick up this message and compare it to what it expects to see. So now maybe your testcase XML looks like this:
<testcase name="blah">
<mouseevent x="120" y="175" type="click"/>
<response>doMyItem() called</response>
</testcase>
If the response generated from the event handler is different, then the test case has failed. You can print out the actual response to help in debugging.
Have you looked into the accessibility framework? It should let one application inspect the UI of another application and generate user-like interaction events.
Accessibility Overview
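For what it's worth, a hedged sketch of what driving another app through the Accessibility C API can look like (the target pid is hypothetical, the attribute/action strings are passed literally rather than via the usual kAX* constants, the tree walk to find "MyMenu" -> "MyItem" is elided, and the calling process must be trusted for assistive access):
#include <ApplicationServices/ApplicationServices.h>
#include <sys/types.h>
#include <cstdio>

int main()
{
    pid_t targetPid = 12345;  // hypothetical: pid of the app under test
    AXUIElementRef app = AXUIElementCreateApplication(targetPid);

    CFTypeRef menuBar = nullptr;
    if (AXUIElementCopyAttributeValue(app, CFSTR("AXMenuBar"), &menuBar) == kAXErrorSuccess) {
        // ...walk the AXChildren hierarchy to locate "MyMenu" -> "MyItem", then
        // press it the way a user would:
        // AXUIElementPerformAction(myItemElement, CFSTR("AXPress"));
        std::printf("found the menu bar\n");
        CFRelease(menuBar);
    }
    CFRelease(app);
    return 0;
}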
