Is there a way to "Test" current file in Xcode4?

Is there a way to "Test" current file in Xcode4?
That is, if you are writing your 10th unit test, rather than choosing Test and having every unit test run, can you trigger only the tests in the unit test file you are currently in?

Pretty old post, but Xcode evolved since.
In Xcode 6, go to Product > Perform Action.
You'll see some other Test options, like Testing the current function.
The shortcut is Cmd-alt-ctrl-U. Pretty difficult with a single hand if you're not a pianist.

It looks like Xcode 4 is heading that way, but it doesn't work well:
Edit scheme
Go to Test section
Expand to show checkboxes.
Unfortunately, you have to click individual checkboxes; there is no way to select the entire group, or even individual suites, and turn them off with a single click. Ideally, it would be nice to turn off everything, then reenable only the tests you want.
As a workaround, you can set up a test target containing the infrastructure to run any test, but not containing any tests. Call it something like "Ad-hoc Tests". Then take the test suite you want and temporarily add it to the target. Use the technique described above to turn off all tests in your main test target.

How to add breakpoints to .feature files

As a developer
I want to put breakpoint(s) in feature files
So that I can debug a feature/scenario/step
Have any of you implemented this functionality with Behave or Cucumber?
You cannot put breakpoints in .feature files, because they are plain text files. Instead, you can put breakpoints inside the step definitions which implement your BDD steps.
Example
When I click button "save"
Then saved page opened
You need to go inside that step's definition, where you will see something like this:
public void iClickButton(String buttonName) {
    // Set a breakpoint on the line below and run the scenario in debug mode
    getButtonByName(buttonName).click();
}
You can put a breakpoint inside the iClickButton method and debug it.
That is how you can debug execution in BDD style.
I can't say I've ever been able to put breakpoints in feature files. Instead, I put them inside the step files so that when a step runs, you can verify it did its job. It's a lot of switching back and forth between feature and step files, but it works.
I just want to point out that in 2018 feature files are used by a lot more than Behave or Cucumber (e.g. Codeception for PHP). "Gherkin" is supposed to be a business-readable DSL, and the concept of a "breakpoint" really doesn't apply in that domain. I'd say it's definitionally impossible to put a breakpoint in a Gherkin feature file. You could certainly try if you want, go for it, but there isn't a standard way of doing it, and it's possible you'd be confusing things in a large organization or team.
First of all, you need to put the breakpoint in the step definition, not in the feature file.
Right now you are probably running it with a Maven command like:
mvn clean install
But you need to run it using JUnit or TestNG.
Put a breakpoint in your step definition and run the project as a JUnit/TestNG run in debug mode.
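For Cucumber-JVM, a minimal JUnit 4 runner class along these lines is enough to launch the features from the IDE's debugger (this is only a sketch: the class name, feature path, and glue package are placeholders, it assumes the cucumber-junit dependency is on the classpath, and the import packages differ slightly between Cucumber versions):
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

// Debug-run this class from the IDE; execution will stop at breakpoints
// placed inside the step definition methods.
@RunWith(Cucumber.class)
@CucumberOptions(
    features = "src/test/resources/features", // location of the .feature files (placeholder)
    glue = "com.example.steps"                // package containing the step definitions (placeholder)
)
public class RunCucumberTest {
}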
OR
If you still need to run it using Maven, you can try the following parameters on the Maven command (maven.surefire.debug makes Surefire wait for a remote debugger to attach):
mvn -Dmaven.surefire.debug -DforkCount=0 test

MSTest: .testsettings is not always deploying files

We have a solution that contains a series of projects used for our tests. One of those projects contains a set of files that are needed for the tests. Those files are schemas that will get validated against every time an API route is called. Tests, naturally, call one or more API routes.
The solution has a .testsettings file. This file has deployment enabled, and it specifies that these schemas need to be deployed. In addition, every schema file is set to Copy Always. Also, the .testsettings file is in the solution, under Solution Items.
The problem is that the .testsettings file is only occasionally respected. Sometimes the files are copied; sometimes they are not. When they don't copy, we can do the following to fix it:
Go to the Test -> Test Settings menu and choose Select Test Settings File
Select the .testsettings file in the solution
Rebuild the solution
Rerun the tests
This usually works at least once. But inevitably, it stops working and the files aren't deployed again.
Note that when you go to the Test -> Test Settings menu, our current .testsettings file is always already checked. So choosing a new .testsettings file just means choosing the one that the UI already says is chosen.
We thought of going the DeploymentItem route, but this is impractical for two reasons, both surrounding code maintenance.
From what I can tell, DeploymentItem can only be placed on individual tests. With literally hundreds of tests, we'd be sprinkling it everywhere. It'd become a code maintenance nightmare. I thought of placing it on the global TestInitialize method, but that would just re-copy the files every time a test is run, which seems unnecessary. Not to mention that I'd have to put literally dozens of DeploymentItem attributes on the method and we'd need to keep that up-to-date every time a new schema is added.
Related to that, adding new schemas means altering existing tests where necessary. Again, we have hundreds of tests.
A far better solution would be to have the files copied over once, and then have the code look in the communal pool of schemas whenever it needs one.
I also looked at replacing .testsettings with .runsettings, but it doesn't seem to have a DeploymentEnabled node in the XML; that deployment option appears to be specific to .testsettings.
Does anyone have a solution to this, or does anyone know if it's a known bug? Schema validation happens behind the scenes -- individual test authors don't have to explicitly call it -- and it doesn't fail the test if it doesn't occur (we don't always have schemas available yet for every API call, so we don't want to fail the test if that's the case). As a result, it's often difficult to immediately ascertain whether or not validation happened. Because of that, we sometimes get false passes on tests where the schema actually broke, all because the .testsettings file didn't actually deploy our files like it's set to.
So I found out the problem: apparently this issue was fixed in Visual Studio 2015 Update 3. We were using Update 2. Once we got the new update, this problem went away.

Xcode 6.3.2 runs all the tests instead of just the one I selected (KIF)

This question is similar to: XCode run all the tests (even the disabled ones)
But different in that I'm not disabling any tests. I'm just pressing the single test icon next to a test function or test case:
A friend of mine is running KIF in a Swift project just like I am and has no problem with this. I'm guessing it's something to do with my setup:
I have a main xcworkspace file that contains my main target, a unit tests target, and an automated tests target (which contains the KIF tests). The workspace also has the pods project, using frameworks. That's it. Here's my scheme setup:
I've experienced this issue in all released versions of Xcode 6.
Edit
I found a workaround for the time being.
You have to modify each of your test classes (white space changes are fine). This will trigger Xcode to index those files and to recognize the tests and test cases and generate symbols & icons for them in the test navigator. (It's recommended to delete derived data first to remove any "ghost tests".)
If you don't do this for each test case class, all the unrecognized test case classes will always run, even if you only select one test to run.
Once you force Xcode to recognize all the test classes, you can successfully run a single test. (or a single test case, if you choose that instead.)
I also noticed while trying to fix this issue that the symbols and indexing for the default UnitTests target works fine. So there's something wrong with either a) having a second test target or b) my second test target meta info is corrupt or c) I set my second test target up incorrectly.
The test icon next to the class is not for a single test.
Unlike the method test icon (which only tests itself), the class test icon runs all the tests in the test class.
Delete your project's Derived Data. You can do this by closing the project, opening Window->Projects, then selecting your project. Click the Delete button to the right of the Derived Data path.

Visual Studio Ordered Tests - Different Projects

I have created a number of (separate) CodedUI projects within Visual Studio 2013, in order to test the basic functions of a website.
Test-cases are created as separate projects, as I expect some (or all?) of them to change over time, and thus to ensure 'modularity' of capture for ease of subsequent maintenance.
Now, I see I can easily create an Ordered Test within each project, which will allow the same test-case to be run and re-run as many times as I wish; But, it's not obvious to me how I can create an Ordered Test whereby I can add different test-cases created as different projects. Certainly, not directly.
Is it possible?
Also, can I rename the Ordered Test list and save it to a separate folder where I can store differing Ordered Tests to test functionality, as I wish?
Ideally, I'd like to create an Ordered Test external to any specific project, so I can go into any project I wish and add whatever tests I wish, as the test-environment is always the same.
You should have created a single project for your application. To ensure 'modularity', Coded UI gives you the option of creating different UI Maps within the same project. Each UI Map will represent a module of your application. This approach will give you easy maintenance, and it will also help you create ordered tests which contain test cases from different UI Maps.
For more details, please see this link:
https://msdn.microsoft.com/en-us/library/ff398056.aspx
Thanks
Yes, I sort of see that. And I guess it's easy enough to move the code to become separate 'solutions' within a 'project'.
However, I want to work with TFS server too, so will look at the MTM route as well.
But it may be that I need my captured CodedUI to be 'solutions' within a single project too - though I really want my modules to be 'stand-alone' projects for safe-keeping.
Will investigate further.

How do I run (unit) tests in different folders/projects separately in Visual Studio?

I need some advice as to how I can easily separate test runs for unit tests and integration tests in Visual Studio. Often, or always, I structure the solution with separate projects for unit tests and integration tests. The unit tests are run very frequently, while the integration tests are naturally run only when the context is correctly aligned.
My goal is to somehow be able to configure which tests (or test folders) to run when I use a keyboard shortcut. The tests should preferably be run by a graphical test runner (ReSharper's). So, for example:
Alt+1 runs the tests in project BLL.Test,
Alt+2 runs the tests in project DAL.Tests,
Alt+3 runs them both (i.e. all the tests in the [Tests] folder), and
Alt+4 runs the tests in folder [Tests.Integration].
TestDriven.net has an option of running just the tests in the selected folder or project by right-clicking it and selecting Run Test(s). Being able to do this, but via a keyboard command and with a graphical test runner, would be awesome.
Currently I use VS2008, ReSharper 4 and NUnit. But advice for a setup in general is of course also appreciated.
I actually found kind of a solution for this on my own by using a keyboard command bound to a macro. The macro was recorded from the menu Tools > Macros > Record TemporaryMacro. While recording I selected my [Tests] folder and ran ReSharper's UnitTest.ContextRun. This resulted in the following macro:
Sub TemporaryMacro()
    ' Focus the Solution Explorer window
    DTE.Windows.Item(Constants.vsWindowKindSolutionExplorer).Activate
    ' Select the [Tests] folder in the solution tree
    DTE.ActiveWindow.Object.GetItem("TestUnitTest\Tests").Select(vsUISelectionType.vsUISelectionTypeSelect)
    ' Run ReSharper's context test run on the current selection
    DTE.ExecuteCommand("ReSharper.UnitTest_ContextRun")
End Sub
which was then bound to its own keyboard command in Tools > Options > Environment > Keyboard.
However, what would be even more awesome is a more general solution where I can configure exactly which projects/folders/classes to run and when, for example by means of an XML file. This could then easily be checked in to version control and distributed to everyone who works on the project.
This is a bit of a fiddly solution, but you could configure an external tool for each group of tests you want to run. I'm not sure if you'll be able to launch the ReSharper test runner this way, but you can run the console version of NUnit. Once you have those tools set up, you can assign keyboard shortcuts to the commands "Tools.ExternalCommand1", "Tools.ExternalCommand2", etc.
This won't really scale very well, and it's awkward to change - but it will give you keyboard shortcuts for running your tests. It does feel like there should be a much simpler way of doing this.
You can use a VS macro to parse the XML file and then call nunit.exe with the /fixture command line argument to specify which classes to run, or generate a selection save file and run NUnit using that.
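For example, a console invocation along these lines runs a single test class (the fixture and assembly names here are only placeholders for your own):
nunit-console.exe /fixture:BLL.Test.SomeFixture BLL.Test.dll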
I have never used this, but maybe it could help:
http://www.codeplex.com/VS2008UnitTestGUI
"Project Description
This project is about running all unit test inside multiple .NET Unit tests assembly coded with Visual Studio 2008."
