Test Manager: changed a shared step and all tests want to be re-recorded - microsoft-test-manager

In Microsoft Test Manager, I have changed a shared step, and now all the test cases that used that shared step show the message "The action recording cannot be loaded for...", which is basically what happens if you modify a step in a test case. Is there a work-around so I don't have to re-record all the steps (I only changed the shared step)?

No, not really. In the future, it is best practice to record action recordings for shared steps separately from the action recording for the test case that uses them. You can record an action recording for just the shared step under the Organize > Shared Steps Manager tab.

Related

JMeter CLI stops tests after some time. Any ideas?

When I run JMeter from the Windows CLI, the tests stop or get stuck after some random amount of time. I can press Ctrl+C (once) just to refresh the run, but some requests are lost during the time the run was stuck.
Take a look at the jmeter.log file; normally it should be possible to figure out what's wrong from the messages there. If you don't see any suspicious entries, you can increase JMeter's logging verbosity by changing values in the log4j2.xml file or via the -L command-line parameter (see the command sketch after these tips).
Take a thread dump and see what exactly the threads are doing when they're "stuck".
If you're using HTTP Request samplers, be aware that JMeter will wait for the response forever; if the application fails to respond at all, your test will never end, so you need to set reasonable timeouts.
Make sure to follow the JMeter Best Practices.
Take a look at resource consumption (CPU, RAM, etc.); if your machine is overloaded and cannot generate the required load, you will need to switch to distributed testing.
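As a minimal sketch of the first two tips, assuming a non-GUI run of a plan named test.jmx (a placeholder name), you can raise logging verbosity from the command line and take a thread dump of the running JVM with the standard JDK tools:

```sh
# Non-GUI run: -t names the plan, -l the results file, -j the run log;
# -L raises verbosity for a logger category (here JMeter's HTTP protocol classes)
jmeter -n -t test.jmx -l results.jtl -j jmeter.log -Ljmeter.protocol.http=DEBUG

# While the run appears stuck: find the JMeter JVM's pid, then dump its threads
jps -l
jstack <jmeter-pid> > threaddump.txt
```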
There are several approaches to debugging a JMeter test, which can be combined into a general, systematic approach capable of diagnosing most problems.
The first thing I would suggest is running the test within the JMeter GUI to visualize the test execution. For this you may want to add a View Results Tree listener, which will provide you with real-time results for each request generated.
Another way to monitor your test execution in real time within the JMeter GUI is the Log Viewer, found under the Options menu. If any exceptions are encountered during your test execution, you will see detailed output in this window.
Beyond this, JMeter writes output files which are often very useful in debugging your load tests. Both the .log file and the .jtl file provide a time-stamped history of every action your test performs. From there you can likely track down the offending request or error if your test unexpectedly hangs.
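For reference, a typical non-GUI invocation that produces both of those files, plus the HTML dashboard report JMeter can generate since version 3.0, looks roughly like this (file and directory names are placeholders):

```sh
# -l writes the sample results (.jtl), -j the run log (.log);
# -e/-o additionally generate the HTML dashboard into an empty directory
jmeter -n -t plan.jmx -l results.jtl -j run.log -e -o report
```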
If you do decide to move your test into the cloud using a service that hosts your test, you may be able to gather more information through that platform. Here is a comprehensive example of how to debug JMeter load tests that covers the above approaches as well as more advanced concepts. A cloud load-testing provider can give your test additional network and machine resources beyond what your local machine offers, which helps if the problem is related to a performance bottleneck.

Recorded Robo Test fails to execute any actions

I've recorded a simple login Robo Test to be executed by App Crawler. I've provided the script to App Crawler, and I see in the logs where it loads it and tries to execute it. However, it always fails at the first action, saying it cannot find the element.
I see on the screen where it tries to start executing the actions, but it immediately reports that it executed zero actions and then falls back to the pre-canned scripts.
The most common reason for such cases is that your app looks/behaves differently during the recording and replay phases. In particular:
Your app might be built with one app package id for the debug APK (which is used for Roboscript recording) and a different one for the release APK (which you use to perform a Robo crawl with the recorded Roboscript).
Your app might show different dialogs or have a somewhat different screen setup during the recording and replay phases (e.g., due to different environments and/or versions).
You either need to ensure a consistent app look/behavior, or modify the recorded Roboscript to remove attributes that differ between the recording and replay phases (e.g., resource ids that use the app package id as a prefix, or contextDescriptors for parent elements).
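For illustration, a recorded Roboscript action is JSON along these lines (the ids and values below are invented for the example). If recording was done against a debug build packaged as com.example.app.debug, the resourceId prefix will not match a release build packaged as com.example.app, and the element lookup fails:

```json
{
  "eventType": "VIEW_CLICKED",
  "elementDescriptors": [
    {
      "className": "android.widget.Button",
      "resourceId": "com.example.app.debug:id/login_button"
    }
  ]
}
```

Editing the resourceId prefix to match the APK actually being crawled, or removing the over-specific descriptor entirely, is the kind of adjustment described above.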

JMeter - Execute multiple test plans from single console

I am new to JMeter. I have created several test plans. Is it possible to combine different test plans in a single '.jmx' file so that the user can see all the different test plans in one console? Beyond that, can the user pick and choose more than one test plan and run them? The test plans may not be collaborating with each other; they are completely isolated test plans. The idea is that the user can view and execute them from one console.
First of all, look into the Adding and Removing Elements chapter of the Building a Test Plan article:
Adding elements to a test plan can be done by right-clicking on an element in the tree, and choosing a new element from the "add" list. Alternatively, elements can be loaded from file and added by choosing the "merge" or "open" option.
You can also store multiple .jmx scripts as Test Fragments and add them to the "main" script via the Include Controller and/or Module Controller.
Check out How to Manage Large JMeter Scripts With JMeter Test Fragments article for more information.
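If the plans really must stay separate, one pragmatic alternative (a sketch; the .jmx names are placeholders) is to run the selected plans back-to-back from a single console in non-GUI mode:

```sh
# Run each selected plan in turn, writing a per-plan results file
for plan in login.jmx search.jmx checkout.jmx; do
  jmeter -n -t "$plan" -l "results-${plan%.jmx}.jtl"
done
```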
Instead of maintaining multiple test plans, I would suggest making multiple Thread Groups in one test plan, so whoever is going to use a specific group can enable it and run it.
I am not sure you can add multiple test plans in one JMX file; as the answer above says, you can create multiple Thread Groups in one test plan.
You can even create one Thread Group with JDBC Requests to test a database and another with HTTP Requests for API tests.
Add different headers and listeners in each Thread Group as needed.
If you want an aggregate report across all Thread Groups, you can add a listener by right-clicking on the Test Plan.

How to auto-open an entity created in a workflow

I have run into a situation where I need to open a newly created quote at the end of a workflow. I have a feeling this is going to require me to create a very simple custom workflow that uses "window.open", but I would like to avoid this if anyone has a better idea.
So I need to open a newly created quote as soon as it is created in a workflow. Does anyone have any good ideas on how to do this?
Workflows are asynchronous; they run on the server (as opposed to the client) and do not run in real time. E.g., a workflow that is triggered by the creation of a record will run on the server some time after the record is created (depending on system load etc. it could be a second or two, or it could be half an hour later; if you have stopped the CRM Async service on the server, it might well never run).
Because they run on the server the user has no interaction with them. As a result you can't open a window, as there's no user involved to open a window for...
What you probably want to do is make use of Dialogs (introduced in CRM 2011). You won't be able to use window.open(), but as long as you've got a recent update rollup installed on the server, you can present the user with a hyperlink to most CRM records.
Setup of Dialogs is much the same as for Workflows, and they use the same mechanics under the hood. The difference is that they're synchronous (i.e., run in real time) and client-side. There's some detail on TechNet: http://technet.microsoft.com/en-us/library/gg334463.aspx
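For reference, CRM 2011 record forms are URL-addressable, so the hyperlink presented from the Dialog can follow the documented pattern below (server, organization and GUID are placeholders):

```
http://<crmserver>/<orgname>/main.aspx?etn=quote&pagetype=entityrecord&id=%7B00000000-0000-0000-0000-000000000000%7D
```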

Client-Side API for TFS / MTM TestRunner?

Does anyone know for a fact whether or not Microsoft Test Manager for TFS 2010 throws any client-side events like OnFinished/OnSaved, etc.? I am asking because our business process requires a certain minimum amount of information in each test run/result to be provided prior to closing the test run (e.g., in case of a failed step, a reason and/or defect id has to be provided in the affected step's comment field).
Post-processing / report-driven checks etc. basically mean the tester can make 'errors' and we'll have to re-test the whole test case, instead of having a prompt process check which allows fixing the test case immediately.
Possibly; by developing a Custom Diagnostic Data Adapter you could implement this.
Using perhaps the TestCaseEnd event, you can get to the TestElement via the TestCaseEndEventArgs and do your processing.
