Is it possible to do conditional steps in Microsoft Test Manager?

I am trying to use MTM to test the website I develop. The problem is that there are three different possible things that could happen after entering your username and password and clicking the login button.
1. Redirected to the register security questions page.
2. Redirected to the answer security questions page.
3. Redirected to the homepage.
I would like to set up the test case to handle these different possibilities. Is that possible?

If you absolutely need to handle the test in one test case, you could put all of the test steps for the different outcomes into the test case as multi-line test steps and simply skip the steps that don't apply while testing.
The alternative is you could put the first part of the test into "shared steps" and create separate test cases based on the different possibilities.
More on shared steps:
https://msdn.microsoft.com/en-us/library/dd286655.aspx

Related

Ruby - Program navigator module

I have a Ruby program that uses a webdriver (Watir) to walk a page and perform tests alongside a BDD suite called RSpec.
I'm trying to optimize it for a slow server by improving its ability to navigate efficiently. Thus far it has been creating a new browser session for each test package, then closing it afterwards. This is very inefficient because it hits the login page again for every instance.
Of course, I don't want to hard-code navigation instructions into the tests because adding new spec files may change the order they are executed in, and not every page of the webapp has the main navigation bar, so navigation may need to change based on the page the last spec left the browser on.
I need some kind of master library or module that will take what page the program is at and what page it wants to go to, then bring the browser to that page so it can begin testing. What is the best way to do this?
I'm not fantastically experienced so I'd love input from more seasoned developers. Should I have each page be a class? Should I just stick with closing browsers after each test packet? Should I manually code brute-force methods (gotoPage1FromPage2)?
Okay, that last one was a joke. Seriously though, what is the best way to do this?
You are exactly correct about the difficulties of maintaining state in your tests. Shutting down the browser between tests is the best way to make sure you always know the state of the browser at the start of each test. Sauce Labs goes so far as to spin up a new virtual machine for each of the tests they run. Ideally you decrease total test time by running multiple tests in parallel.
I'm not certain I know what you mean by "test package" or how many times that means you are starting a new browser and logging in, but... another thing to consider investigating is whether you can set a cookie or use OAuth to log in without having to use the navigation. I've worked at places that allowed admin logins for their staging environments by passing a parameter in a URL.
Your tests should be clear in their intention, which typically means your Page Object implementation does not know about what comes before or after the actions you are taking. You should be able to look at the RSpec code and reproduce exactly what it is testing. Abstracting methods for taking you from one place to another magically in the background is not a good idea.
Best practice used to be having methods from one Page Object return new Page Objects. So users could write methods like this in their tests: LoginPage.new.login.view_account.edit_address. Many of us have been bitten by this approach. Plus it isn't as easy to read as doing something like this:
LoginPage.new.login
HomePage.new.view_account
AccountPage.new.edit_address
This doesn't prevent you from using #visit methods as needed to navigate between Page Objects.
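A minimal sketch of this explicit-page-object style (the class and method names are hypothetical, and a stub stands in for the real Watir browser so the example is self-contained):

```ruby
# Stub browser that records navigation; a real test would use Watir.
class FakeBrowser
  attr_reader :log

  def initialize
    @log = []
  end

  def goto(url)
    @log << url
  end
end

# Each page gets its own class; #visit navigates explicitly rather than
# relying on the previous page object to return the next one.
class Page
  def initialize(browser)
    @browser = browser
  end

  def visit
    @browser.goto(self.class::URL)
    self
  end
end

class LoginPage < Page
  URL = "/login"
  def login(user, _password)
    @browser.log << "login:#{user}"
  end
end

class AccountPage < Page
  URL = "/account"
  def edit_address
    @browser.log << "edit_address"
  end
end

browser = FakeBrowser.new
LoginPage.new(browser).visit.login("alice", "secret")
AccountPage.new(browser).visit.edit_address
```

Because each line of the test names the page it acts on, the spec reads as the exact sequence of pages the user visits, which is the readability benefit the answer describes.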

Same test scenario sometimes not working when doing multiple recordings in JMeter

I have recorded a script for the Atlassian Confluence system. The purpose of this recording is to perform a load test on Confluence. The scenarios below are recorded:
Login
Browse a Space
Create a wiki page
Edit a wiki page
Commenting on page
Logout
I modified the script, and those scenarios worked fine when I ran it. Then, when I recorded the script again and made the same modifications, the edit action no longer worked as before. I have tried the page-editing action in multiple environments; sometimes it works, but not every time. Why is this happening?
The information you have shared is very high level. Please share more detail about what kind of error you get when it fails. You may be able to trace down the problem and solution by looking into the following points:
Is your login request successful (valid login session)?
Are you using an HTTP Cookie Manager?
Have you used assertions to verify valid/invalid responses?
Have you used a Debug Sampler and a View Results Tree listener?
Please use the above to narrow down your problem and then share it over here.

How to pass through the CAPTCHA while testing an application with an automation tool like QTP

How can I avoid (or find an alternative to) the CAPTCHA while testing an application with any tool?
I am testing an application with QTP. It has a CAPTCHA on the login screen. Since the CAPTCHA is an image, the tool is unable to read it for repeated iterations. Is there any way to pass through the CAPTCHA?
The whole point of CAPTCHA1 is to make sure a real human is facing the computer so if QTP could solve a general CAPTCHA it would mean that the whole concept of CAPTCHAs is flawed.
On a case-by-case basis there may be a solution (perhaps involving Insight), but you would have to share more information to get a meaningful answer.
The best course of action would probably be to get R&D to provide a non-CAPTCHA protected way to enter the application during testing (and make sure this is not present in the production servers).
1 Completely Automated Public Turing test to tell Computers and Humans Apart
CAPTCHA objects are designed to prevent automation by ensuring that a human, not a computer, is interacting with the application. With this in mind, QuickTest Professional (QTP) / Unified Functional Testing (UFT) does not have a method to capture the text from the object. You will most likely need to test that portion of the application manually. Here are a couple of suggestions you can consider:
If possible, limit the CAPTCHA control (during the testing phase) to only a few words/letter combinations. Then use QTP/UFT to cycle through these defined words/combinations. Again, this limitation on the control would only need to be done in the testing phase.
If possible, ask your developers to add a method which will capture the characters used in the CAPTCHA control at runtime. Then, have QTP/UFT call that method, retrieve the text, and enter it into the field as needed. Once again, this method would only need to be in place while testing the application.
If possible, ask your developers to add a flag that will allow you to bypass the control during the testing phase.
Depending on the settings used within the CAPTCHA control, you may be able to use another application (for example, OCR software) to read the text from the image and return that text to QTP/UFT. Once QTP/UFT has the text, it can be entered into the field.
If you are testing an application for which you can also access the database, you can take the generated CAPTCHA from the database and store it in a variable, then use the stored variable to fill in the CAPTCHA field.
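As a rough sketch of the database approach (the table and column names are hypothetical, and an in-memory hash stands in for the real query so the example runs on its own):

```ruby
# Stand-in for the application's test database; in reality you would
# query whichever table stores the generated CAPTCHA per session.
TEST_DB = { "captcha_challenges" => { "session-123" => "X7QF2" } }

# Look up the CAPTCHA answer for the current session. A real helper
# might run something like:
#   SELECT answer FROM captcha_challenges WHERE session_id = ?
def captcha_for(session_id)
  TEST_DB["captcha_challenges"].fetch(session_id)
end

answer = captcha_for("session-123")
# The automation tool would then type `answer` into the CAPTCHA field.
```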
There is a simpler way to handle a CAPTCHA on a webpage in QTP/UFT: dynamic execution of data, as used in the parametrization technique.

Ways to programmatically check if a website is up and functioning as expected

I know this is an open-ended question, but hopefully it will get some good answers before the thread is locked...
I'm wondering what methods there are to programmatically check (language agnostic) if a website is online from a client perspective (assume you can't make changes to the site/server, but you can rely on certain behaviours of the site.)
The result of each method could stack to provide a measure of certainty that the site is up/down - that is, a method does not have to provide a definite indication if the site is up/down on its own.
Some common tests just to check 'upness' may be:
Ping the site (which in the case of shared hosting isn't very indicative)
Send an HTTP HEAD/GET request and check the status
Others I can think of to check that the site is up and functioning:
Check that you received a well-formed HTML response, i.e. an opening <html> through a closing </html> tag; if the site is experiencing trouble it may spit out an error and exit without writing the rest of the page (not all that reliable, though, because the site may handle most errors in a better way)
Check certain content is or is not on the page, i.e. perhaps there is some content that is always present on your pages, or always present in the case of an error
Can anybody think of any other methods that could be used to help determine if a site is in fact up/down and functioning/not functioning correctly from within a program?
If your GET request on a page that displays info from a database comes back with status 200 and the matching keywords are found, you can be pretty certain that your site is up and running.
And you don't really need to write your own script to do that. There are free services such as GotSiteMonitor, Pingdom and UptimeRobot that allow you to monitor your site.
Base your set of tests on the unit-testing principle. Unit testing is normally used in programming to test classes, modules or other artefacts after changes have been made. You can use any of the available frameworks, so you don't have to reinvent the wheel. You must describe (implement) the tests to be run; in your case, a typical test should request a URL inside the page and then do some evaluations like:
call result (for example, the return code of the curl execution)
HTTP return code
HTTP headers
response MIME type
response size
response content (tested against a regular expression)
This way you can add, remove and modify single tests without having to care about the framework once you are up. You can also chain tests, performing a login in one test and virtually clicking a button in a subsequent test.
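The per-response evaluations listed above could be sketched as a single check function. The thresholds and the keyword regex are illustrative, and the response is modeled as a plain struct so the example needs no network call:

```ruby
# A fetched response reduced to the fields the checks need.
Response = Struct.new(:status, :content_type, :body)

# Run each evaluation from the list. Every check is independent, so the
# individual results could also feed a confidence score rather than a
# hard pass/fail.
def checks_for(response, keyword: /Welcome/, min_size: 500)
  {
    status_ok:   response.status == 200,
    mime_ok:     response.content_type == "text/html",
    size_ok:     response.body.bytesize >= min_size,
    html_closed: response.body.include?("</html>"),
    keyword_ok:  response.body.match?(keyword)
  }
end

def site_up?(response)
  checks_for(response).values.all?
end
```

In practice the Response fields would be filled in from a real HTTP client such as Net::HTTP or a curl invocation.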
There are also tools to handle such test runs automatically including visualization of results, statistics and the like.
OK, it sounds like you want to test and monitor your website from a customer-experience perspective rather than purely establishing whether a server is up (using ping, for example). An effective way to replicate the customer experience is to run tests against the site using one of the headless browser testing tools (PhantomJS is a great choice), as they render the page fully (including images, CSS, JS etc.), giving you a real page-load time. These tools also allow you to make assertions on all aspects of the HTML content and HTTP response.
Pingdom recently started offering a (paid-for) service to perform exactly these types of checks alongside its existing monitoring solution. The demo is worth looking at; their interface for writing the actual tests is very nice.

Selenium testing of a GWT wizard

I am doing Selenium testing against a GWT wizard application. As a wizard, it has multiple steps: once the user finishes one step and clicks Next, it moves on to the next step. As a GWT application, all steps are refreshed in the same page.
Now I need to use Selenium RC (Java client) to write tests against that GWT wizard, and I have two questions:
1. Each time I start the wizard it requires the user to log in first. How can I skip that login step and test the wizard directly?
2. Since all steps are held on the same page, how can I separate the tests, say one test method for each step, without putting them all in one big method?
Thanks.
I would suggest using Selenium 2/WebDriver. Selenium 2 has the concept of page objects, which allow you to create test objects that map to different pages within your app. I assume that you are enabling ensureDebugId in your GWT app (which allows you to access elements based on a predictable DOM id). The combination of debug ids and Selenium 2 will allow you to quickly create a clean test representation of your pages and then let your unit tests simply drive the pages to where you need them. The last bit of advice I would give for Selenium 2 and GWT is to make sure that your page objects are created via AjaxElementLocatorFactory.
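One way to structure both answers, sketched with a stub in place of the real wizard (the wizard API shown is hypothetical): log in once, typically by injecting a session cookie or token, then give each wizard step its own test method that asserts which step it starts on before clicking Next.

```ruby
# Stub wizard that tracks the current step; a real test would drive
# the browser through Selenium instead.
class FakeWizard
  attr_reader :step, :events

  def initialize
    @step = 0
    @events = []
  end

  # Skip the login form, e.g. by setting a session cookie up front.
  def login_with(token)
    @events << "login:#{token}"
    @step = 1
  end

  def click_next
    @events << "leave-step-#{@step}"
    @step += 1
  end
end

# One test method per step; each asserts its precondition, so a failure
# is reported against the step that broke, not one giant method.
def test_step_one(wizard)
  raise "expected step 1" unless wizard.step == 1
  wizard.click_next
end

def test_step_two(wizard)
  raise "expected step 2" unless wizard.step == 2
  wizard.click_next
end

wizard = FakeWizard.new
wizard.login_with("session-token")
test_step_one(wizard)
test_step_two(wizard)
```

Running the step methods in order against one shared session mirrors the wizard's flow while keeping each step's assertions separate.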
