Coded UI Test is slow waiting for UI thread - visual-studio

I've added Coded UI Tests to my ASP.NET MVC solution in Visual Studio 2013. I was dismayed to see how slowly the tests run; each page just sits there for up to a minute or more before the test machinery wakes up and starts filling in the form fields.
After some experimentation (including turning off SmartMatch), I've discovered that simply calling
Playback.PlaybackSettings.WaitForReadyLevel = WaitForReadyLevel.Disabled;
solves the problem. But, as expected, the test frequently fails because the UI thread isn't ready for the test machinery to interact with the controls on the form.
Calling
Playback.PlaybackSettings.WaitForReadyLevel = WaitForReadyLevel.UIThreadOnly;
makes the test run reliably, if slowly.
Any thoughts or suggestions? Does anyone have insight into the magic baked into the WaitForReady machinery? Are there any other WaitForReady-related settings I can fiddle with besides WaitForReadyLevel?

After a bit of experimentation, I've worked out what appears to be a combination of settings that allows my Coded UI Tests to reliably run at full speed -- faster than I could interact with the website by hand.
Note: The relevant "documentation" (if you call a blog "documentation") can be found here:
Playback configuration settings
Retrying failed playback actions.
The trick requires several modifications to the default playback settings:
Setting WaitForReadyLevel = WaitForReadyLevel.Disabled allows the test to run at full speed. But it also disables the (slow!) magic that waits until it's safe to interact with controls on the page.
Setting a MaximumRetryCount and attaching an error handler deals with most of the errors that result from disabling the "wait for ready" magic. Because I've baked a 1 second Sleep into the retry logic, this value is effectively the number of seconds I'm willing to wait for the page to load and become responsive.
Apparently, failure to find the control under test is not one of the errors handled by the error handler/retry mechanism. If the new page takes more than a few seconds to load, and the test is looking for a control that doesn't exist until the new page loads, the search fails and the test fails with it. Setting ShouldSearchFailFast = false solves that problem by giving the page the full search timeout in which to load.
Setting DelayBetweenActions = 500 appears to work around a problem that I see occasionally where the UI misses a button click that occurs immediately after a page has loaded. The test machinery seems to think that the button was clicked, but the web page doesn't respond to it.
The "documentation" says that the default search timeout is 3 minutes, but it's actually something greater than 10 minutes, so I explicitly set SearchTimeout to 1 second (1000 ms).
To keep all of the code in one place, I've created a class that contains code used by all of the tests. MyCodedUITests.StartTest() is called by the [TestInitialize] method in each of my test classes.
This code really should be executed only once for all of the tests (rather than once per test), but I couldn't figure out a way to get the Playback.PlaybackSettings calls to work in the [AssemblyInitialization] or [ClassInitialization] routines.
/// <summary> A class containing Coded UI Tests. </summary>
[CodedUITest]
public class UI_Tests
{
    /// <summary> Common initialization for all of the tests in this class. </summary>
    [TestInitialize]
    public void TestInit()
    {
        // Call a common routine to set up the test
        MyCodedUITests.StartTest();
    }

    /// <summary> Some test. </summary>
    [TestMethod]
    public void SomeTest()
    {
        this.UIMap.Assert_HomePageElements();
        this.UIMap.Recorded_DoSomething();
        this.UIMap.Assert_FinalPageElements();
    }
}

/// <summary> Coded UI Test support routines. </summary>
class MyCodedUITests
{
    /// <summary> Test startup. </summary>
    public static void StartTest()
    {
        // Configure the playback engine
        Playback.PlaybackSettings.WaitForReadyLevel = WaitForReadyLevel.Disabled;
        Playback.PlaybackSettings.MaximumRetryCount = 10;
        Playback.PlaybackSettings.ShouldSearchFailFast = false;
        Playback.PlaybackSettings.DelayBetweenActions = 500;
        Playback.PlaybackSettings.SearchTimeout = 1000;

        // Add the error handler
        Playback.PlaybackError -= Playback_PlaybackError; // Remove the handler if it's already added
        Playback.PlaybackError += Playback_PlaybackError; // Ta dah...
    }

    /// <summary> PlaybackError event handler. </summary>
    private static void Playback_PlaybackError(object sender, PlaybackErrorEventArgs e)
    {
        // Wait a second
        System.Threading.Thread.Sleep(1000);

        // Retry the failed test operation
        e.Result = PlaybackErrorOptions.Retry;
    }
}

Coded UI searches for controls on the screen, and that search is quite fast if it succeeds. However, if the search fails, Coded UI tries again using a "smart match" method, and that can be slow. The basic way to avoid falling back to smart matching is to remove or simplify search properties that may change from run to run.
This Microsoft blog gives lots of explanation of what happens and how to fix it. The example therein shows a speedup from 30 seconds to 8 seconds by changing a search string from
Name EqualsTo “Sales order‬ (‎‪1‬ - ‎‪ceu‬)‎‪ - ‎‪‪‪Sales order‬: ‎‪SO-101375‬‬, ‎‪‪Forest Wholesales”
to
Name Contains “Sales order‬ (‎‪1‬ - ‎‪ceu‬)‎‪ - ‎‪‪‪Sales order‬: ‎‪SO”
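In a Coded UI Test you can apply the same idea yourself by loosening the generated search configuration. A minimal sketch, assuming a WinWindow whose title ends in a changing order number; the variable name and title text below are illustrative, not from the blog:

// Use a Contains match on a stable prefix so the first search pass succeeds
// and Coded UI never falls back to SmartMatch.
var salesOrderWindow = new WinWindow();
salesOrderWindow.SearchProperties.Add(
    WinWindow.PropertyNames.Name,
    "Sales order (1 - ceu) - Sales order: SO",   // stable prefix only
    PropertyExpressionOperator.Contains);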

It looks like this string was captured from a Microsoft Dynamics tool. Check the length of the string captured from the Inspect tool; you will find hidden characters in text such as order‬ (‎‪1‬ - ‎‪ceu‬)‎‪. Alternatively, move the cursor from "(" to ")" with the right arrow key; you will notice that the cursor sometimes doesn't move when you press the key, which reveals the hidden characters.
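If you suspect hidden characters, a quick way to confirm is to dump every character's code point and compare the reported length with what you see on screen. A minimal sketch; capturedName is a placeholder for the string taken from the Inspect tool:

string capturedName = "Sales order (1 - ceu)";  // placeholder: paste the captured string here
foreach (char c in capturedName)
{
    // Invisible marks such as U+200E (LEFT-TO-RIGHT MARK) or U+202A show up here
    // even though they don't render in the UI.
    Console.WriteLine("U+{0:X4}  {1}", (int)c, c);
}
Console.WriteLine("Length: " + capturedName.Length);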

Related

ValidateCacheOutput function is never called while programmatically invalidating cached page

I have a problem with programmatically invalidating a cached page.
I coded a page that publishes an RSS feed, and the page is cached for a preset time interval, say 3 minutes. However, whenever a new UMM message is raised and saved to the database, the page should be re-cached.
For this task, I used the solution illustrated in the MS document about "Programmatically Invalidating Cached Pages". I coded the following function:
public static void ValidateCacheOutput(HttpContext context, Object data, ref HttpValidationStatus status)
{
    if (((bool)context.Application["IsNewUMMRaised"]) == true)
        status = HttpValidationStatus.Invalid;
    else
        status = HttpValidationStatus.Valid;
}
and added the following code to the start of the Page_Load function.
Response.Cache.AddValidationCallback(new HttpCacheValidateHandler(ValidateCacheOutput), null);
The following code is executed when a UMM is saved to the database.
Application["IsNewUMMRaised"] = true;
The problem is that the validation callback above is never called when the web page is accessed; the page is only re-cached at the preset time interval, even when the application variable IsNewUMMRaised is set to true. I wonder why it works this way, and how my code should be modified so that the callback is called when the page is accessed.
Sorry, the problem turned out to be caused by the Response.End() call in the Page_Load function. After that line was removed, the validation callback is called properly when the page is accessed.
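For reference, a minimal sketch of the working Page_Load; WriteRssFeed is a hypothetical helper standing in for the original feed-rendering code:

protected void Page_Load(object sender, EventArgs e)
{
    // Register the validation callback so the output cache re-checks this page on every request.
    Response.Cache.AddValidationCallback(new HttpCacheValidateHandler(ValidateCacheOutput), null);

    // Render the feed, but do not call Response.End() afterwards; with Response.End()
    // in place the callback was never invoked.
    WriteRssFeed();  // hypothetical helper that writes the RSS markup to the response
}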

Firefox Extension: responding to an http-on-modify-request observed in the parent with a message to the child frame responsible for the load

I'm trying to enhance an existing Firefox extension which relies on nsIContentPolicy to detect and abort certain network loads (in order to block the resulting UI action, i.e. tab navigation), and then handle loading that resource internally. Under rare circumstances, only after handling the load, it turns out we shouldn't have interrupted the load at all, so we flag it to be ignored and restart it.
Under e10s/multi-process, that means the parent (where the content policy is running) must send a message to the child (handling the UI for the content) to restart the load. Today, that's done by:
function findMessageManager(aContext) {
  // With e10s off, context is a <browser> with a direct reference to
  // the docshell loaded therein.
  var docShell = aContext && aContext.docShell;
  if (!docShell) {
    // But with e10s on, context is a content window and we have to work hard
    // to find the docshell, from which we can find the message manager.
    docShell = aContext
        .QueryInterface(Ci.nsIInterfaceRequestor)
        .getInterface(Ci.nsIWebNavigation)
        .QueryInterface(Ci.nsIDocShellTreeItem).rootTreeItem;
  }
  try {
    return docShell
        .QueryInterface(Ci.nsIInterfaceRequestor)
        .getInterface(Ci.nsIContentFrameMessageManager);
  } catch (e) {
    return null;
  }
}
Which is crazy complex, because e10s is crazy complex. But it works; it generates some object in the parent, upon which I can call .sendAsyncMessage(), and then the addMessageListener() handler in my frame/child script receives it, and does what it needs to do.
I'd like to switch from nsIContentPolicy to http-on-modify-request as it presents more information for making a better determination (block and handle this load?) earlier. Inside that observer I can do:
var browser = httpChannel
    .notificationCallbacks.getInterface(Ci.nsILoadContext)
    .topFrameElement;
Which gives me an object which has a .messageManager which is some kind of message manager, and which has a .sendAsyncMessage() method. But when I use that .sendAsyncMessage(), the message disappears, never to be observed by the child.
Context: https://github.com/greasemonkey/greasemonkey/issues/2280
This should work in principle, although the docshell tree traversal may do different things in e10s and non-e10s, so you have to be careful there. In e10s, rootTreeItem -> nsIContentFrameMessageManager should give you the MM equivalent of a frame script, and topFrameElement.frameLoader.messageManager should give you the <browser>'s MM, which is pretty much its parent-side counterpart.
Potential sources of confusion:
e10s on vs. off
process MM vs. frame MM hierarchy
listening in the wrong frame for the message (registering in all frames might help for debugging purposes)
This is the function I use to find the content message manager:
var gCFMM = null;  // cached content frame message manager

function contentMMFromContentWindow_Method2(aContentWindow) {
  if (!gCFMM) {
    gCFMM = aContentWindow.QueryInterface(Ci.nsIInterfaceRequestor)
                          .getInterface(Ci.nsIDocShell)
                          .QueryInterface(Ci.nsIInterfaceRequestor)
                          .getInterface(Ci.nsIContentFrameMessageManager);
  }
  return gCFMM;
}
So maybe get the content window that triggered that request, and then use this function.

Navision 5 and COM Interop

While trying to understand how Navision 5 can communicate with an external application through COM interop, I found the following example:
http://msdn.microsoft.com/en-us/library/aa973247.aspx
The second case implemented is exactly what I want to do. I tested the code (with minor modifications: adding [ComVisible(true)] attributes to the events interface and class), and with these modifications it worked as stated in the example.
However, I cannot understand why we do not get an exception when COMTimer.Elapsed is invoked through the following:
protected virtual void OnElapsed(EventArgs e)
{
    Elapsed();
}
Who is hooked to this event? The only "hook" I can see is mTimer.Elapsed += new ElapsedEventHandler(mTimer_Elapsed);, and that refers to the Elapsed event of mTimer.
Normally, Elapsed would be null in the OnElapsed function.
I would appreciate your help. Thanks in advance.
Interesting problem.
The WithEvents property on the automation variable creates the handler and attaches it to the Elapsed delegate, so the delegate is not null, hence no exception.
However, when WithEvents is No and Timer.Start() is invoked, as you rightly say, no exception bubbles up, even though (in theory) the Elapsed delegate is null.
The simple explanation would be that NAV attaches an empty delegate regardless of the WithEvents property. Supporting that theory: if you put code in the Timer::Elapsed() trigger, turn WithEvents off, and then bring it back, the code is still there (i.e. the trigger still exists unchanged), which makes me lean towards the conclusion that the delegate always exists (i.e. an empty delegate).
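To illustrate the "empty delegate" idea in C# terms, here is a minimal sketch of the pattern; the delegate name ElapsedHandler is illustrative, not NAV's or the sample's actual type:

// A parameterless delegate matching how Elapsed() is invoked in OnElapsed above.
public delegate void ElapsedHandler();

// Initializing the event with an empty anonymous delegate means invoking it
// never throws a NullReferenceException, even with no external subscribers.
public event ElapsedHandler Elapsed = delegate { };

protected virtual void OnElapsed(EventArgs e)
{
    Elapsed();  // the invocation list always contains at least the empty delegate
}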
But of course it's NAV so it couldn't be that simple.
I created a test codeunit from the above MSDN example, but made a small change to the automation:
/// <summary>
/// Whenever the internal timer elapses, an event must be fired
/// </summary>
private void mTimer_Elapsed(object sender, ElapsedEventArgs e)
{
    OnElapsed(EventArgs.Empty);
    throw null;
}
This, in theory, should throw a NullReferenceException whenever mTimer_Elapsed is invoked; nothing, however, bubbles up in NAV. I went a bit further and changed this:
/// <summary>
/// Invoke the Changed event; called whenever the internal timer elapses
/// </summary>
protected virtual void OnElapsed(EventArgs e)
{
    throw new InvalidCastException("test");
    //Elapsed();
}
Again, nothing happens in NAV.
Note that both changes behave as expected if the COM timer is consumed from within a .NET project. This makes me think that NAV interop must be capturing exceptions from the automation and handling them internally.
I would, however, post that question on Mibuso; someone there will probably know better.

Usage of relative=up in selenium

Can anyone explain the usage of
selenium.selectFrame("relative=up");
sample code:
selenium.selectFrame("frame");
String Error_MSG_1 = selenium.getText("//div");
selenium.selectFrame("relative=up"); -----> here if I remove this
statement it throws an exceptions
if (selenium.isTextPresent("error message")) {
assertEquals("","");
}
//Close error pop-up
selenium.click(Close_popup);
If your web application uses iframes, then often while testing, say, a text string, you can clearly see it displayed in the browser, yet on playback the Selenium script may fail. This is because the script may not have the right iframe in context. selenium.selectFrame(...) is used to set the frame in which the assertion/verification is to be performed.
Specifically, selenium.selectFrame("relative=up") moves up one iframe level. In a related manner, you can use selenium.selectFrame("relative=top") to select the top-level frame.

Eclipse RCP: Display.getDefault().asyncExec still blocking my GUI

I have a simple viewPart offering some text fields to enter parameters for a selenium test. After filling out these fields the user may start the test which approx. needs 30-45 minutes to run. I want my GUI to be alive during this test giving users the chance to do other things. I need a progress monitor.
I tried to put the Selenium test in a job that uses Display.getDefault().asyncExec to run it, but my GUI freezes after a few seconds, showing the busy indicator. The Selenium test does not update any view other than the progress monitor.
Is there another way to ensure that the job won't block my GUI?
Best,
Mirco
Everything executed in (a)syncExec runs on the display thread and therefore blocks your UI until it returns. I suggest you use Eclipse Jobs; this will use the progress indicator that the workbench already offers out of the box.
I would suggest splitting your code into the code that updates the UI and the code that does the rest of the work. Execute all of it in a separate thread, and whenever you need to read from or push something to the UI, use Display.getDefault().asyncExec.
Thread thread = new Thread("Testing") {
    // some shared members
    public void run() {
        someBusiness();
        // or use syncExec if you need your thread
        // to wait for the action to finish
        Display.getDefault().asyncExec(new Runnable() {
            @Override
            public void run() {
                // UI stuff here
                // data retrieval
                // values setting
                // actions triggering
                // but no business logic
            }
        });
        someBusiness();
    }
};
thread.start();
