How to do a bulk update of fields for a test suite in TestLink?

There is a test suite already created in TestLink with test summaries and test steps. I have to update a few fields in each test case. How can I update them in bulk instead of editing each test case one by one?

You can bulk-update some of the fields, such as Status and Importance, by selecting the test suite, clicking the Settings icon, and then choosing the Table view.

Related

Jmeter: How to bulk check/uncheck "Generate parent sample"

You create and test your script locally in JMeter, so that later you can upload it to BlazeMeter to run.
The test contains hundreds of Transaction Controllers, each representing a page, created by the script recorder.
The problem is, to run it locally and get the Transaction Controller summary by page, not by every request, you have to check "Generate parent sample".
But this has to be disabled when you upload to BlazeMeter.
How do people handle this?
It doesn't seem possible to bulk-edit these or use a variable; every Transaction Controller (representing a page) has to be manually edited every time.
Any suggestions?
JMeter .jmx scripts are basically XML files, so you can open your script with the text/XML editor of your choice and look for the following line:
<boolProp name="TransactionController.parent">false</boolProp>
It means that the Transaction Controller doesn't generate the parent sample. If you replace it with:
<boolProp name="TransactionController.parent">true</boolProp>
it will generate the parent sample (and vice versa).
You can use the Taurus tool, which can bulk-change an arbitrary JMeter test element property to the desired value. In your case, disabling the "parent" mode of the Transaction Controllers would be something like:

execution:
- scenario:
    script: test.jmx
    modifications:
      set-prop:
        "Transaction Controller>TransactionController.parent": false

Is there a way to see if associated entities have been deleted in MS CRM?

We have a feature in our system whose unit tests have been passing for the past year.
Since yesterday they have been failing, and on looking into them, it seems the unit tests should never have passed, because the associated entities are just not there.
My question is whether there is an audit trail for when someone deletes an entity connected to another entity?
When you navigate to Settings - Auditing - Audit Summary View, you will be able to see all audit entries, including Delete actions, for all entities across the CRM system.
Click Enable/Disable Filters, then apply whatever filters you want (Event, Entity, Date, etc.) to drill down and see the desired data.

VS add on Test scribe - modifying report

I'm new to Microsoft Test Manager, but have found a way to generate test plans and do testing relatively efficiently. Now I'm trying to learn how to use Test Scribe.
I need the reports to contain passed tests, together with test steps. Right now I'm able to create reports from "plan" with test steps included, and reports from "test" with success/no success included - but how do I get both? I sort of want the data from the "test plan summary" report, containing "step successful" / "step not successful".
Does anyone know how to do that?
You need to make some changes in the template of the generated summary report file.
Follow the URL here

Select list for QUnit modules in test runner bar?

I recall having seen at some point screen shots of a select list of QUnit test modules in the test runner toolbar of QUnit. My impression was that selecting one of the modules in the select list would cause that module's tests to be run.
Question: Does such a feature actually exist OOB for QUnit? I know one can set filter via the URL but I would like a more "discoverable" option.
Thanks!
The select list only appears if you have defined more than one module in your test suite.
Also, make sure your test suite is ready before QUnit initializes itself, which happens when the page finishes loading (the onload event). If you happen to define your test suite after this, you have to call the (undocumented) QUnit.load() method to notify QUnit that your test suite has been defined.
Demo: http://jsfiddle.net/brianpeiris/98fc8/show/
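For illustration, a sketch of a suite that would produce the select list; the module and test names are made up, and this is meant to run inside a QUnit HTML harness in the browser, not standalone:

```javascript
// Two modules, so the module select list shows up in the toolbar.
QUnit.module("math");
QUnit.test("addition works", function (assert) {
  assert.equal(1 + 1, 2);
});

QUnit.module("strings");
QUnit.test("concatenation works", function (assert) {
  assert.equal("a" + "b", "ab");
});

// Only needed if this file was loaded after the page's onload event:
QUnit.load();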

How to close a program via TestComplete after a failed keyword test

So let's say I am running a TestComplete keyword test. If something fails in it, the test stops. Actually, what I have found out is that if I have 8 checkpoints and the 4th one fails, the rest will always fail after it, so I get a "test execution was interrupted" error. That's fine, but it doesn't finish out the test and close the application. The reason this is an issue is that any tests after it will fail because the application is still left open. I could rewrite these tests so that the application is open when they start, but is there a way to kill an application after your tests fail? If the tests pass, the application is closed.
You need to organize your tests with test items. In this case, you create at least three test items: the first starts the application, the second performs the test, and the third closes the application. If an error occurs during execution of the second test item, its execution is ended and TestComplete runs the third, finalization test item.
Information on test items can be found in the Tests and Test Items help topic. Please note that you need to specify the Test Item value in the Stop on error column for the needed test item (the second one in the above example). Information on this and other columns can be found here. The column is hidden by default and you need to add it: right-click the header of the test items list and select Field Chooser. After this, drag the needed column to the header from the Field Chooser dialog.
Find more information on this solution in Stopping Tests on Errors and Exceptions.
An alternative solution is to use the OnLogError or OnStopTest event handlers. A description of how to handle standard TestComplete events can be found in the Creating Event Handlers for TestComplete Events help topic.
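The event-handler route can be sketched in JScript. This is a rough outline only, not TestComplete's exact generated code, and "MyApp" is a placeholder process name for the application under test; the function would be attached to the GeneralEvents OnLogError event in the project's event control:

```javascript
// TestComplete JScript sketch: close the tested app whenever an error
// is posted to the log, so later tests start from a clean state.
// "MyApp" is a placeholder process name; replace with your application's.
function GeneralEvents_OnLogError(Sender, LogParams)
{
  var proc = Sys.WaitProcess("MyApp", 2000); // wait up to 2 s for the process
  if (proc.Exists)
    proc.Terminate(); // force-close it before the next test item runs
}
```

This runs inside the TestComplete runtime (Sys is a TestComplete object), so it cannot be executed outside that environment.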
Perhaps I'm oversimplifying, but could it be the setting for the test playback? Please check the following page and let me know if it helps: http://support.smartbear.com/viewarticle/28751/.
If that doesn't work feel free to repost in the SmartBear Forum: http://community.smartbear.com/
The support team is monitoring the forum and I'm sure they'll be happy to help.
