How to close a program via TestComplete after a failed keyword test

So let's say I am doing a TestComplete keyword test. If something fails in it, the test stops. Actually, what I have found out is that if I have 8 checkpoints and the 4th one fails, the rest will always fail after it, so I get a "test execution was interrupted" error. That's fine, but it doesn't finish out the test and close the application. The reason this is an issue is that any tests after it will fail because the application is still left open. I could rewrite these tests so that the application is already open when they start, but is there a way to kill an application after your tests fail? If the tests pass, the application is closed.

You need to organize your tests with test items. In this case, you create at least 3 test items: the first one starts the application, the second performs the test, and the third closes the application. If an error occurs during execution of the second test item, its execution is ended and TestComplete runs the third, finalization test item.
Information on test items can be found in the Tests and Test Items help topic. Please note that you need to specify the Test Item value in the Stop on error column for the needed test item (the second one in the above example). Information on this and other columns can be found there as well. The column is hidden by default, so you need to add it: right-click the header of the test items list, select Field Chooser, and drag the needed column from the Field Chooser dialog to the header.
Find more information on this solution in Stopping Tests on Errors and Exceptions.
An alternative solution is to use the OnLogError or OnStopTest event handlers. A description of how to handle standard TestComplete events can be found in the Creating Event Handlers for TestComplete Events help topic.
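As a rough illustration, an OnLogError handler along these lines (JavaScript shown; the process name "MyApp" is a placeholder for your tested application) could forcibly close the application as soon as an error is posted to the log:

// Attach this to the OnLogError event of an Events control.
function GeneralEvents_OnLogError(Sender, LogParams)
{
  var proc = Sys.WaitProcess("MyApp", 0); // placeholder process name
  if (proc.Exists)
    proc.Terminate(); // kill the app so the following tests start clean
}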

Perhaps I'm oversimplifying, but could it be the setting for the test playback? Please check the following page and let me know if it helps: http://support.smartbear.com/viewarticle/28751/.
If that doesn't work feel free to repost in the SmartBear Forum: http://community.smartbear.com/
The support team is monitoring the forum and I'm sure they'll be happy to help.

Related

Cypress test fails but works when tested manually

I am trying to test the ability to retrieve results from a database. When running a test that logs in, navigates to the search tab and searches for a word, I am getting a 401. If I go to the website, use the same login information and do the exact same steps, it works perfectly. Here are the last few steps of the test:
cy.contains('Search').click(); //open tab
cy.contains('Search by').click();
cy.contains('Name').click();
cy.get('.search-field').type('Jane');
cy.get('.search-btn').click();
There's a drop-down menu for what you want to search by, a text field for the search word, and a search button. I can't share all of the code, but I can see from the video that logging in and all the following steps are performed as they're supposed to be. What kind of things can cause a Cypress test to return different results as opposed to manually performing the actions? I added cy.wait(2000) calls in between the steps but it made no difference.
Maybe there is an event that needs to be triggered before the page is able to search.
Take a look at the element you are typing into in the devtools, under the Event Listeners tab.
For example, the StackOverflow search box has an event for s-popover:show listed there - if testing that you would .trigger('s-popover:show') to fire that event and display the instruction tooltip.
So try something like this
cy.get('.search-field')
.type('Jane')
.trigger('change') // or .trigger('input')
Cypress clears localStorage between tests. In this case that deleted the authorization token, which is why I was getting the 401. I installed this package: https://www.npmjs.com/package/cypress-localstorage-commands
After the login I use
cy.saveLocalStorage();
And before making the search
cy.restoreLocalStorage();
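For context, this is roughly how those commands fit into a spec file; cy.login() is a hypothetical custom command standing in for the actual login steps, and the package has to be imported once in the support file:

import "cypress-localstorage-commands"; // in cypress/support (done once)

before(() => {
  cy.login();             // hypothetical command wrapping the login steps
  cy.saveLocalStorage();  // snapshot localStorage, including the auth token
});

beforeEach(() => {
  cy.restoreLocalStorage(); // re-apply the snapshot before every test
});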

How to create a Checkpoint in UFT

It's strange that I have to ask such a simple question.
I started automating with UFT and I suppose the correct way to check if for instance my login has worked is to add a checkpoint on the next page.
But how do I do that?
All the info I get from Google is on how to add an already existing checkpoint to my page. But I don't have any.
Here is how I go about automating:
I manually add the relevant objects to the object repository
I create parameters for my action
I create the code that does the steps on the page
one action per page seems to be fine for me
But in the Object Repository of UFT 14.53, there is no button to add a Checkpoint.
A workaround for me would be to just add another object and check its existence, forgetting about checkpoints. Until I hopefully get an answer here, I will try to do just that.
In UFT there are typically two ways to verify that things are working as expected.
Flow (implicit) - In order to verify that progress in the application is successful (e.g. login), one usually just keeps working with the app, assuming that if the previous step failed, the objects needed for the next steps won't exist and the test will fail due to ObjectNotFound errors.
State (explicit) - In order to see that objects have a specific state, checkpoints are usually used. Checkpoints are typically added during a record session; I'm not sure if there's a way to add them directly to the repository. An alternative to checkpoints, which works better with keyword-driven testing (no recording), is to use the built-in CheckProperty method; a short sketch follows.
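A minimal VBScript sketch of the CheckProperty approach; the repository object names and the expected text here are placeholders, not taken from the question:

' CheckProperty waits up to the timeout, logs a pass/fail step in the
' run results, and returns a Boolean you can branch on.
If Not Browser("MyApp").Page("Home").WebElement("WelcomeLabel") _
       .CheckProperty("innertext", "Welcome, Jane", 10000) Then
    ExitAction  ' login evidently failed, so stop this action
End If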

Common asserts in any automation project

Can anyone briefly explain what the common asserts to consider in any automation project are, whether it's an in-house or public web application? For example, presently I am using Selenium (Java) to automate an eCommerce web application. As this is my first website to automate, I am running out of ideas for what I can verify, except the few I know, mentioned below:
1. Verify each page title
2. Verify a button, text, link, image, custom text, etc.
Apart from these, is there anything else I can verify? Please feel free to correct my question, and if you have worked on various automation projects, which areas did you add asserts to, to verify or validate something on a webpage?
Basically, you do automation to decrease the execution time of regression cycles by automating the test cases related to the functionality of the application. So, first develop test cases using test design techniques like ECP, BVA, etc.
Each test case must have an assertion on the expected result or functionality (otherwise it won't be called a test case).
This assertion can be anything, like:
Whether login is successful after giving valid credentials
Whether an error message is shown after entering wrong credentials, etc.
Selenium helps us automate web interactions (navigation, clicks, entering text, etc.) and doesn't perform any assertions for you.
Assertions are provided by frameworks like JUnit and TestNG (in Java) via their assertion classes. There is also built-in support in programming languages, like the assert keyword in Python and Java (http://docs.oracle.com/javase/7/docs/technotes/guides/language/assert.html).
So, the things you mentioned in your question as common assertions (verify each page title, etc.) are just web interactions. They don't decide whether a test is PASS or FAIL. It is you who defines the criteria for whether a test is PASS/FAIL.
For example, there is a test case related to successful login.
Here, you automate web interactions like navigating to the login page, entering credentials, and clicking the Submit button.
Then, to validate whether you successfully logged in or not, in a normal scenario you would look for a web element on the logged-in user's home page (like a "welcome user" message). In automation, you locate the "welcome user" text as a web element, then use the assertions provided by the framework to assert that the expected message is present on the page, like
Assertions.assertEquals(expected_message, actual_message); // just an example (JUnit 5)
If expected_message and actual_message are the same, the method doesn't throw any exception, and the framework marks the test case as PASS.
If expected_message and actual_message are NOT the same, assertEquals raises an AssertionError, and the framework marks the test case as FAIL.
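Putting it together, here is a minimal, self-contained sketch of such a login test; the URL, locators and messages are placeholders, and JUnit 5 with ChromeDriver is assumed:

import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class LoginTest {
    @Test
    public void successfulLoginShowsWelcomeMessage() {
        WebDriver driver = new ChromeDriver();
        try {
            // Web interactions: navigate, type, click (no verification yet)
            driver.get("https://example.com/login");                  // placeholder URL
            driver.findElement(By.id("username")).sendKeys("jane");   // placeholder locators
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // The assertion is what turns the interactions into a test
            String actualMessage = driver.findElement(By.id("greeting")).getText();
            assertEquals("welcome jane", actualMessage);
        } finally {
            driver.quit();
        }
    }
}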

Simple Coded UI login test on remote server

I used Microsoft Test Manager to create a test for a log on page (not locally hosted) and recorded this. All steps were executed without errors.
Then I've created a Coded UI test project in Visual Studio 2013.
I added a Coded UI Test with the existing recording.
I ran the test and I got the message below:
"FailedToPerformActionOnHiddenControlException: Cannot perform 'SetProperty of Text with value " on the username field.
I received this message when using the existing recording, but also if I record using the recording function in Visual Studio.
Does anyone have experience with Coded UI tests who could give me an example of how to set this up?
If I understand correctly, you're basically trying to have Coded UI input a value into an HtmlInput control that's a text box on a web page. The error message "FailedToPerformActionOnHiddenControlException: Cannot perform 'SetProperty of Text with value'" normally means that the value you're trying to submit can't be taken by the control, either because the value isn't accepted (for example, numeric values in a name field) or because it's more characters than the field can accept. It could also be that the recording isn't defining the object properly, and the property Text isn't an option on the control that was found.
So, confirm these three things:
1. The control you're trying to send the value to is actually an HtmlInput. You can't set the Text property on, for example, an `HtmlDiv`.
2. The value you're submitting is a valid value for your control (you're not trying to submit numbers, for example, into a field that won't accept them).
3. The value you're submitting is within the length limit for the field.
If I were a betting man, I would say that you probably don't have the object defined properly. Take a look at the SearchProperties of the control in question first and make sure that it matches the HTML on the page itself.
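As a hedged illustration, defining the control by hand in code looks roughly like this (the URL and id are placeholders, not taken from your recording):

using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;

// Launch the browser and describe the username field explicitly.
var browser = BrowserWindow.Launch(new Uri("https://example.com/login")); // placeholder URL
var userName = new HtmlEdit(browser);
userName.SearchProperties[HtmlEdit.PropertyNames.Id] = "username";        // placeholder id
userName.Text = "jane"; // the same "SetProperty of Text" action that was failing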
Do I need to set up a standard environment for testing on another server?
I develop the Coded UI test project in a workspace on a development environment, but I need the tests to be run on an acceptance environment (the recorded tests are executed on the acceptance environment).
I read it's easy to set up this environment, but what is best practice?
Is it better to execute it on the same environment?
Here's a code snippet:
[TestMethod]
public void TestLogonToAccount()
{
    // To generate code for this test, select "Generate Code for Coded UI Test"
    // from the shortcut menu and select one of the menu items.
    this.UIMap.Enterusername();
    this.UIMap.Enterpassword();
    this.UIMap.Clickonlogin();
    this.UIMap.ClickonCentral();
    this.UIMap.Searchforemailaddress();
    this.UIMap.Clickonlogin1();
}
I forgot an important part regarding the login recording test in MTM.
I just tried to play the steps again and got this error:
Playback of the selected sections of the action recording could not be completed. The playback failed to find the control with the given search properties. Additional Details: TechnologyName: 'MSAA' Name: '' ClassName: 'MozillaWindowClass' ControlType: 'Window'

Adding a dialog child workflow to an on-demand workflow

I was wondering if anybody ran into the same issue as I am facing now.
What I'm trying to do is have a workflow that checks the condition of a field (an option set) on a form. If the field has option 1, 2 or 3, it creates a new record with certain shared attributes; otherwise it starts a child workflow. The child workflow is a "Dialog" process, not a "Workflow" process, and it informs the user that the record was not created and why. For some reason I cannot select the dialog from the dropdown list of available child workflows...
Both the parent workflow and the "dialog" workflow process are based on the same entity.
If anybody has any ideas on how I could debug this or any clues in general I would greatly appreciate your feedback.
Thanks for taking the time to read this post!
It is not possible to call a dialog from a workflow (see here).
Workflows are generally triggered by events.
Imagine the ramifications - which user would receive the dialog, and what if no one was logged in?
One option is to drive everything with JavaScript
Trigger on change of the option set
Create the records
Start the workflow
Start the dialog
See the section under the heading "Opening a Dialog Process by Using a URL" on MSDN here
Use the URL rather than showModalDialog or showModelessDialog.
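A hedged sketch of that pattern (the dialog id is a placeholder GUID, and Xrm.Page is the form context available in CRM form script):

// Build the run-dialog URL documented on MSDN and open it in a new window.
function openRecordDialog(dialogId, entityName, recordId) {
    var url = Xrm.Page.context.getClientUrl() +
        "/cs/dialog/rundialog.aspx?DialogId=" + encodeURIComponent(dialogId) +
        "&EntityName=" + encodeURIComponent(entityName) +
        "&ObjectId=" + encodeURIComponent(recordId);
    window.open(url, "_blank", "width=615,height=480");
}

// Example: call from the option set's OnChange handler.
openRecordDialog("{00000000-0000-0000-0000-000000000000}", "account",
                 Xrm.Page.data.entity.getId());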
What might work even better is to call an Action from JavaScript. The Action can run synchronously and create all records, start child workflows and dialogs.
A synchronous workflow can stop an event and return an error message to the user, but cannot return success messages - it looks like this will not meet your requirements, but Gareth Tucker has an example here.
