How do you make an application "Open" for TestComplete

How do you make an application "Open" for TestComplete?
How would you send a string to the TestComplete log? What about an image?
Describe the levels of visibility of an application under test to TestComplete.

Almost all types of applications are "Open" to TestComplete without any special preparation. Even when some preparation is required, it is described in detail in the really great product documentation. In the common case, TestComplete can "see" almost all internal visual and even non-visual objects, together with their native properties and methods.
As for sending a message or an image to the test log, use the corresponding method of the Log object: Log.Message for text and Log.Picture for images.

If your application is not natively "Open", look at the available plugins that expose access to its objects.

Related

Common asserts in any automation project

Can anyone briefly explain the common asserts to consider in any automation project, whether it is an in-house or public web application? For example, I am presently using Selenium (Java) to automate an eCommerce web application. As this is my first website to automate, I am running out of ideas for what I can verify, except the few I know, mentioned below:
1. Verify each page title
2. Verify a button, text, link, image, custom text, etc.
Apart from these, is there anything else I can verify? Please feel free to correct my question, and if you have worked on various automation projects, which areas did you add asserts to, to verify or validate something on a webpage?
Basically, you do automation to decrease the execution time of regression cycles by automating the test cases related to the functionality of the application. So, first develop test cases using test design techniques like ECP, BVA, etc.
Each test case must have an assertion on the expected result or functionality (otherwise it would not be a test case).
This assertion can be anything, for example:
whether login is successful after entering valid credentials,
whether an error message is shown after entering wrong credentials, etc.
Selenium helps us automate web interactions (navigation, clicks, entering text, etc.) and does not perform any assertions for you.
Assertions are provided by frameworks like JUnit and TestNG (in Java) through their assertion classes. There is also built-in support in some programming languages, such as the assert keyword in Python and Java (http://docs.oracle.com/javase/7/docs/technotes/guides/language/assert.html).
So, the things you mentioned in your question as common assertions (verify each page title, etc.) are just web interactions; by themselves they do not decide whether a test is PASS or FAIL. It is you who defines the criteria for PASS/FAIL.
For example, take a test case for a successful login.
Here, you automate web interactions such as navigating to the login page, entering credentials, and clicking the Submit button.
Then, to validate whether you logged in successfully, you look for a web element that appears on the logged-in user's home page (for example, a "welcome user" message). In automation, you locate that "welcome user" text as a web element and then use the assertions provided by the framework to check that the expected message is present on the page, like this:
Assertions.assertEquals(expected_message, actual_message); // just an example
If expected_message and actual_message are the same, the method does not throw an exception, and the framework marks the test case as PASS.
If they are not the same, assertEquals throws an AssertionError, and the framework marks the test case as FAIL.
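To make the login example above concrete, here is a minimal sketch of such a test using Selenium WebDriver and JUnit 5 in Java. The URL, locators, credentials, and expected text are invented for illustration; substitute your application's real values.

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginTest {

    @Test
    public void successfulLoginShowsWelcomeMessage() {
        WebDriver driver = new ChromeDriver();
        try {
            // Web interactions: navigate, enter credentials, submit.
            driver.get("https://example.com/login");            // hypothetical URL
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // Assertion: this call is what makes the test PASS or FAIL.
            String actual_message = driver.findElement(By.id("welcome")).getText();
            Assertions.assertEquals("Welcome testuser", actual_message);
        } finally {
            driver.quit();
        }
    }
}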

How to implement global VB6 error handler?

The global VB6 error handler product referred to in the following link claims to "install a small callback hook into the VBE6 debugger":
http://www.everythingaccess.com/simplyvba/globalerrorhandler/howitworks.htm
I would like to implement this product myself because I would like more control over what it is doing. How is the above product likely to be achieving what it does?
The product you are looking at is a COM component. From the documentation that is available on the web site, it sounds like the COM component implements particular component classes. The first thing to do, if you already have the product, would be to fire up SysInternals procmon, run regsvr32 on the DLL, and figure out what component classes are implemented from the registry entries that are created. Once you know this, MSDN may be able to tell you what interfaces correspond to those component classes.
Microsoft developed a framework called Active Scripting that allows you to host a script engine and inject debugging capabilities. If one assumes that VB6 produces an exe that ties into that framework, you might be able to do:
Create a COM component that implements IApplicationDebugger
Implement IApplicationDebugger::onHandleBreakPoint to be able to respond to errors in the VB code
Read MSDN KB Q222966 to find out how to call back to VB from onHandleBreakPoint
It looks like the product injects the ErrEx class using IActiveScript::AddNamedItem. To provide the same behaviour, implement IActiveScriptSite::GetItemInfo on the same COM component to return a pointer to an instance of (and the associated TypeInfo for) a COM component that implements the same interface as ErrEx. In your implementation of ErrEx.EnableGlobalErrorHandler you would do the following:
CoCreateInstance inproc Process Debug Manager
Cast reference to IRemoteDebugApplication
Register an instance of your IApplicationDebugger component using IRemoteDebugApplication::ConnectDebugger
I glossed over calling IActiveScript::AddNamedItem because I have no idea how you get a pointer to IActiveScript from a running process. Also, I don't know if creating a new instance of the Process Debug Manager will work, or if you somehow have to hook into an existing instance.
I apologize for the confusing explanation, missing information, and glossing over large parts of the process, but this is going waaay back...
You will want to read the Active Scripting APIs article at MSDN.

Disable Images, ActiveX Etc in VB6 WebBrowser control using DLCTL_NO_

Like the title says, I want to disable images and ActiveX controls in the VB6 WebBrowser control using DLCTL_NO_RUNACTIVEXCTLS and DLCTL_NO_DLACTIVEXCTLS.
Microsoft talks about it here: http://msdn.microsoft.com/en-us/library/aa741313.aspx
But I don't see any way to access IDispatch::Invoke from the VB6 application.
Any help would be greatly appreciated.
I do not think VB6 lets you add the ambient properties. Try hosting the ActiveX control in another container, e.g. an ActiveX host written by yourself (but I do not know how much time you want to invest in declaring VB-friendly OLE interfaces and implementing them), or use another ActiveX control like the one at http://www.codeproject.com/KB/atl/vbmhwb.aspx instead.
You don't access IDispatch::Invoke in VB6; you just write your method and IDispatch is automagically implemented:
Public Function DlControl() As Long
    ' Return the combination of DLCTL_* flags the control should use
    DlControl = DLCTL_NO_DLACTIVEXCTLS Or ...
End Function
Then open Tools -> Procedure Attributes, select the DlControl function, open Advanced, and set the Procedure ID to -5512 (DISPID_AMBIENT_DLCONTROL). That's the first part.
The second part is to set the client site to your custom implementation of IOleClientSite. You'll need a custom typelib; try Edanmo's OLELIB for declarations of these interfaces.
Here is a Delphi sample of how to hook up your implementation of IOleClientSite. Apparently you'll also have to call OnAmbientPropertyChange at some point.

How can I write an automated unit test of a GUI in Xcode?

I want to write a unit test of just the GUI part of my Cocoa application.
In the textbook unit test, there's a test framework and test case that calls the unit under test. All the code below that unit is mocked out. So, both the input and the output are controlled and monitored; only the code in the unit under test is tested.
I want to do the same thing where the unit under test is my GUI:
1) Set up some kind of framework where I can write code that will manipulate and inspect GUI controls.
2) Connect my GUI controls to mocks of my actual code, not to the real instances.
3) Run the test, which manipulates the controls and then checks the mock object to see whether the correct methods were called with the correct parameters and checks the GUI to see whether the responses from the mock object causes the correct changes in the widgets.
Anyone doing this? If so, how? Any ideas on how I could do this?
Thanks,
Pat
(Edit) To give a very specific example, I want to:
1) Write a test case that will select the menu item 'MyMenu' -> 'MyItem'. In this test case, I want to check to see that the method [AppDelegate doMyItem] gets called precisely once and that no other methods in AppDelegate get called.
2) Generate a mock object of AppDelegate. (I know how to do this)
3) Somehow (handwaving here) link my application so that a mock instance of AppDelegate is linked in instead of the real one.
4) Run the test. Watch it fail because 1) I haven't created MyMenu yet. 2) I haven't created MyItem yet. 3) I haven't done the IB work to connect MyItem to [AppDelegate doMyItem], or 4) because I haven't written the 'doMyItem' method yet.
5) Fix the above four issues (one at a time if I'm feeling really pedantic that day).
6) Run the test again and watch it succeed.
Does this make the question clear?
Two principles:
Make the view as dumb as possible, with the passive view pattern: this makes the GUI easier to test.
Trust but verify: trust the Cocoa implementation of buttons, menus, and so on, but verify that targets and actions are correctly connected and that bindings are as expected.
Here are a couple of popular ways of doing this in general (they should work with most, if not all, Cocoa-compatible languages).
1 - Create a callback interface. One of the inputs when creating your GUI elements is an implementation of this interface. When there's a user interaction, the GUI element calls an update function on that interface. Have a real implementation and a test implementation (see the sketch after this list).
2 - Use event-handlers. Register all of your GUI elements with one or more event-handlers, and have the GUI generate events on user interaction. Have an event handler interface with two implementations, again one for real use and one for testing.
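To make the callback-interface idea from approach 1 concrete, here is a minimal sketch in Java (the interface and class names are invented for illustration; in a Cocoa application the same shape applies to your controller or delegate objects):

// Callback interface: the GUI only knows about this abstraction.
interface LoginCallback {
    void loginRequested(String user, String password);
}

// Real implementation used by the running application.
class RealLoginHandler implements LoginCallback {
    @Override
    public void loginRequested(String user, String password) {
        // ... perform the actual authentication ...
    }
}

// Test implementation that simply records what the GUI did.
class RecordingLoginHandler implements LoginCallback {
    String lastUser;
    String lastPassword;
    int calls;

    @Override
    public void loginRequested(String user, String password) {
        lastUser = user;
        lastPassword = password;
        calls++;
    }
}

In a test, the GUI (or a simulated click) is wired to the recording implementation, and the test then asserts on what was recorded, e.g. that loginRequested was called exactly once with the expected values.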
Edit: whoops, missed requirement #1. I've never done this with OS X-specific controls, but in general there are two approaches.
1 - Create a script or app that generates user-like input. This has the drawback that it is not easy to actually inspect the GUI; you instead need to write good test cases to make sure that everything that should be there is, and nothing extra is.
2 - Create an interface with a test implementation that replaces the rendering and interface layer. This is easier with libraries like SDL or DirectFB and less so with things like the OS X API, Win32 API, etc.
Edit: responding to edit in question.
In the case of your example, using a separate testing app and event handlers, here's how it would look:
Your test application is a simple app or script that starts up your GUI and then generates mouse/keyboard events based on input files. As I've said, I've never done this on OS X (only QNX). With any luck you'll be able to generate mouse and keyboard events with the API, but you'll have to ask someone else whether that's possible.
So create an input for your test-case. The test app will parse this to know what to do. It may be simple XML like this:
<testcase name="blah"><mouseevent x="120" y="175" type="click"/></testcase>
or whatever the mouse sequence may actually be.
When your script executes that command it will click the mouse on that button. Your event handler will pick up on this. But now you should be running your app with a --test flag or somesuch so that it's actually using the test event handler. Instead of doing whatever your app normally does, the test event handler can do some custom action. For instance it may do some of the normal actions (you still need the GUI to respond) and then send a message (via socket, pipe, whatever) to your test app.
Your test app will pick up this message and compare it to what it expects to see. So now maybe your testcase XML looks like this:
<testcase name="blah">
<mouseevent x="120" y="175" type="click"/>
<response>doMyItem() called</response>
</testcase>
If the response generated from the event handler is different, then the test case has failed. You can print out the actual response to help in debugging.
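As a rough sketch of the driver side of this scheme, here is how a Java test runner might parse a testcase file in the XML format above and compare the expected <response> with whatever the test event handler actually reported. The class name and the receiveResponseFromApp stub are invented; how the message actually arrives (socket, pipe, file) is up to your implementation.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class TestCaseRunner {

    public static void main(String[] args) throws Exception {
        // Parse the testcase XML given on the command line, e.g. testcase.xml.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File(args[0]));

        Element testcase = doc.getDocumentElement();           // <testcase name="blah">
        String name = testcase.getAttribute("name");
        String expected = testcase.getElementsByTagName("response")
                .item(0).getTextContent().trim();              // e.g. "doMyItem() called"

        // Placeholder: in a real driver this would read from the socket or pipe
        // that the test event handler writes to after the mouse events are replayed.
        String actual = receiveResponseFromApp();

        if (expected.equals(actual)) {
            System.out.println("PASS: " + name);
        } else {
            System.out.println("FAIL: " + name
                    + " expected=\"" + expected + "\" actual=\"" + actual + "\"");
        }
    }

    private static String receiveResponseFromApp() {
        return "doMyItem() called"; // stub so the sketch runs standalone
    }
}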
Have you looked into the accessibility framework? It should let one application inspect the UI of another application and generate user-like interaction events.
Accessibility Overview

Session 0 Isolation

Vista introduces a new security restriction that prevents Session 0 from accessing hardware like the video card, and the user no longer logs into Session 0. I know this means that I cannot show the user a GUI, but does that also mean I can't show one at all? The way my code is set up right now, it would be more work to make it command line only; if I can use my existing code and just programmatically manage the GUI, it would take a lot less code.
Is this possible?
The article from MSDN says this:
• A service attempts to create a user interface (UI), such as a dialog box, in Session 0. Because the user is not running in Session 0, he or she never sees the UI and therefore cannot provide the input that the service is looking for. The service appears to stop functioning because it is waiting for a user response that does not occur.
Which makes me think it is possible to have an automated UI, but someone told me that you couldn't use SendKeys with a service because it was disabled in Session 0.
EDIT: I don't actually need to show the user the GUI
You can create one; it just doesn't show up on the user's desktop.
There is a little notification in the taskbar about there being a GUI window and a way to switch to it.
Anyway, there actually is a TerminalServices API command to switch active session that you could call if you really needed it to show up.
You can write a separate process which provides the UI for your service process. The communication between your UI and service process can be done in various ways (search the web for "inter process communication" or "IPC").
Your service can have a GUI. It's simply that no human will ever see it. As the MSDN quote suggests, a service can display a dialog box. The call to MessageBox won't fail; it just won't ever return — there won't be anyone to press its buttons.
I'm not sure what you mean by wanting to "manage the GUI." Do you actually mean pretending to send input to the controls, as with SendInput? I see no reason that it wouldn't be possible; you'd be injecting input into your own program's queue, after all, and SendInput's Vista-specific warnings don't say anything about that. But I think you'd be making things much more complicated than they need to be. Revisit the idea to alter your program to have no UI at all. (That's not the same as having a console program. Consoles are UI.)
Instead of simulating the mouse messages necessary to click a button, for instance, eliminate the middle-man and simply call directly the function that the button-click event would have called.
