Visual Studio 2017 Live Testing exclusions

I'm looking at the Live Unit Testing feature in the new Visual Studio (I'm using NUnit).
There is an "exclude" option for unit tests, to indicate that specific tests should not be run (maybe they are integration tests, or slow tests, or whatever).
Where does this information get stored? I don't see any indication in the csproj or anywhere else that a test should not be included in Live Testing. Shouldn't there be some information file somewhere that I can check into source control so the rest of my team doesn't have to manually specify which tests should not be run by live testing?

Include/exclude is a user-level feature, so the selection is persisted as a personal preference rather than in the project file. It is extremely useful when you want to run a specific set of tests for a particular edit session, or to persist your own personal preferences. To prevent tests from running and to persist that information in source control, you could do something like the following:
using System;
using System.Diagnostics.CodeAnalysis;
using System.Linq;
using Xunit;

[ExcludeFromCodeCoverage]
public class SkipLiveFactAttribute : FactAttribute
{
    // True when the Live Unit Testing runtime is loaded into the test process.
    private static bool s_lutRuntimeLoaded = AppDomain.CurrentDomain.GetAssemblies()
        .Any(a => a.GetName().Name == "Microsoft.CodeAnalysis.LiveUnitTesting.Runtime");

    // A non-empty Skip string tells xUnit to skip the test.
    public override string Skip => s_lutRuntimeLoaded ? "Test excluded from Live Unit Testing" : "";
}

public class Class1
{
    [SkipLiveFact]
    public void F()
    {
        Assert.True(true);
    }
}

You can now use the following attributes to specify in source code that you want to exclude targeted test methods from Live Unit Testing:
For xUnit: [Trait("Category", "SkipWhenLiveUnitTesting")]
For NUnit: [Category("SkipWhenLiveUnitTesting")]
For MSTest: [TestCategory("SkipWhenLiveUnitTesting")]
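For example, with NUnit (which the question uses), a minimal sketch (class and method names are illustrative):

using NUnit.Framework;

public class SlowTests
{
    // Live Unit Testing detects this category and skips the test;
    // it still runs in normal Test Explorer or CI runs.
    [Test]
    [Category("SkipWhenLiveUnitTesting")]
    public void LongRunningIntegrationTest()
    {
        Assert.That(1 + 1, Is.EqualTo(2));
    }
}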
Official docs here

Related

Unbound breakpoints when debugging in Blazor WebAssembly when using certain attributes/classes

I'm developing a modular Blazor application (5.0.2) using VS 2019 (16.8.4), which is structured as follows:
- a "Main" solution, which consists of
  - an RCL
  - a Wasm project to start up the application
- several "Sub" solutions, which reference the Main RCL (base components, etc.) and consist of
  - .NET 5 libraries (models, web-service access, etc.)
  - an RCL with components, referencing the .NET 5 libraries (via project reference)
All projects have a post-build event that copies the DLL and PDB files to a certain path, e.g. D:\TMP.
The SubSolution references the MainRCL library via this path.
The Main Wasm project also references the SubRCL library via this path (for adding services at startup in Program.cs).
The MainRCL does not have a reference to the SubRCL (components are rendered via reflection/BuildRenderTree() according to a configurable UI definition; see the sketch below).
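A minimal sketch of what that reflection-based rendering looks like (DynamicHost and ComponentTypeName are illustrative names, not from the actual solution):

using System;
using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Rendering;

public class DynamicHost : ComponentBase
{
    // Hypothetical: the type name comes from the configurable UI definition.
    [Parameter] public string ComponentTypeName { get; set; }

    protected override void BuildRenderTree(RenderTreeBuilder builder)
    {
        builder.OpenComponent(0, Type.GetType(ComponentTypeName));
        builder.CloseComponent();
    }
}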
Debugging the Main Solution worked perfectly (IIS Express/Application Debugging).
Then I tried to debug the SubModules: I started debugging from the MainSolution and opened files from the SubModules projects in the same VS instance.
For some libraries debugging worked, but not for the SubRCL ("Unbound Breakpoint"). I was then able to reproduce the (very strange) issue with sample solutions:
The "MainRCL" provides 2 Attributes:
using System;

[AttributeUsage(AttributeTargets.Class)]
public sealed class TestNoEnumAttribute : Attribute
{
    public string Name { get; set; }
    public string Mode { get; set; }

    public TestNoEnumAttribute(string name, string mode)
    {
        Name = name;
        Mode = mode;
    }
}

[AttributeUsage(AttributeTargets.Class)]
public sealed class TestEnumAttribute : Attribute
{
    public string Name { get; set; }
    public EventExecutionMode Mode { get; set; }

    public TestEnumAttribute(string name, EventExecutionMode mode)
    {
        Name = name;
        Mode = mode;
    }
}

public enum EventExecutionMode
{
    AutomaticAll = 0,
    ManualConfiguration = 2
}
The SubRCL uses these attributes on a test class:
[TestNoEnum("Test", "EventExecutionMode.ManualConfiguration")]
//[TestEnum("Test", EventExecutionMode.ManualConfiguration)]
public class Module1Test
{
public int IncreaseNum(int num)
{
var x = new Part1();
var part1Num = x.DoStuff(num);
var newNum = part1Num + 1;
return newNum;
}
}
The class "Part1()" which is called, is located at another library of the SubSolution
The breakpoint at the "DoStuff()" method in Part1 class is always hit (in separate .net5 library).
The breakpoint at the "IncreaseNum()" method is only called when the [TestEnum] attribute is NOT used.
As soon as the [TestEnum] attribute is used, there is an "Unbound Breapoint"; the breakpoint in "DoStuff()" method in another library is still hit.
Then I tried to "add existing project" to SubSolution and added the MainWasm project and started debugging directly from SubSolution -> same behavior.
Is there anything I oversee (e.g. regarding DLL-references or PDB file copy)?
This is already my second approach of trying to debug these modular-structured solutions - first I tried to debug via IIS (How to debug Blazor Webassembly on IIS with VS by attaching to Chrome?), but this was also not successful.
Found out there is an issue with debugging when using attributes with enum parameters:
https://github.com/dotnet/aspnetcore/issues/25380
-> I replaced the enum parameters and debugging is working fine now - I haven't received any feedback yet on when this will be fixed.
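In short, the workaround keeps the attribute parameter as a string and parses it back to the enum when needed. A sketch (the attribute name here is illustrative, following the samples above):

using System;

[AttributeUsage(AttributeTargets.Class)]
public sealed class TestStringModeAttribute : Attribute
{
    public string Name { get; set; }
    public EventExecutionMode Mode { get; set; }

    // The mode arrives as a plain string and is parsed back to the enum,
    // avoiding the enum-typed attribute argument that breaks breakpoint binding.
    public TestStringModeAttribute(string name, string mode)
    {
        Name = name;
        Mode = (EventExecutionMode)Enum.Parse(typeof(EventExecutionMode), mode);
    }
}

Usage would then look like [TestStringMode("Test", "ManualConfiguration")].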
I had the same issue with my Blazor WASM app not being debuggable in VS due to an 'Unbound breakpoint'. I have multiple projects running under the same solution, and while the debugging initially worked for the WASM project, it stopped after a while.
Eventually I found a workaround: wait until all projects have loaded, then disable the 'unbound' breakpoint and re-enable it. It then worked as expected.
It's not an ideal solution (especially if you have multiple breakpoints while troubleshooting), but it is workable.
I had this problem in .NET 6 and Visual Studio 2022.
I had made a code-behind file, component.razor.cs, but I also had code in the .razor file itself. Moving the code to the code-behind file solved the issue and enabled the breakpoints.
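For reference, the code-behind pattern looks like this (Counter is an illustrative component name):

// Counter.razor.cs - code-behind for Counter.razor; the markup stays in the .razor file.
// Breakpoints set here bound reliably, while code in the .razor file did not.
public partial class Counter
{
    private int currentCount;

    private void IncrementCount()
    {
        currentCount++;
    }
}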

UFT API-test execution via VBScript

I am trying to run my API test via a VBScript file based on the Automation Object Model. I am able to launch, open and run my GUI tests, but for API tests I get the error "cannot open test", code 800A03EE.
I have read somewhere that my test case is probably corrupted, so I saved the test as a new one, but it still doesn't work.
Following is my VBScript:
testPath = "absolute address to my API-test folder"
Set objUFTapp = CreateObject("QuickTest.Application")
objUFTapp.Launch
objUFTapp.Visible = TRUE
objUFTapp.Open testPath, TRUE '------> throws the error
Set pDefColl = qtApp.Test.ParameterDefinitions
Set rtParams = pDefColl.GetParameters()
Set rtParam = rtParams.Item("param1")
rtParam.Value = "value1"
objUFTapp.Test.Run uftResultsOpt,True, rtParams
objUFTapp.Test.Close
objUFTapp.Quit
For some unknown reason, I was also facing a similar issue.
As a workaround, I created one GUI test from which I called the API test like this:
RunAPITest "API_Test_Name"
To do so:
1. Create a new GUI test
2. Go to Design -> Call to existing API test
3. Provide the path to your API test in "Test path"
4. Select <Entire Test> for "Call to"
5. Pass any input or output parameters from this screen
6. Click OK
Now you can use your own VBScript to call this GUI test, which in turn runs your desired API test.
I know it's not a good idea to do so, but it will get the job done.
At UFT installation time, we can opt for an additional automation tool, LeanFT.
As the main feature of LeanFT, we get the test environment right next to our development environment, either in Java (Eclipse) or C#/.NET (Visual Studio). We are also provided with an object identification tool (GUI Spy), which makes it possible to develop GUI tests not in VBScript anymore but in one of the most powerful modern languages (Java or C#). With this very short summary, let's have a look at how we can actually execute API tests outside of the UFT IDE.
After a successful installation of LeanFT, we can create a LeanFT project in Eclipse or Visual Studio.
C# code:

using System.Collections.Generic;
using System.Windows.Forms;
using HP.LFT.SDK;
using HP.LFT.SDK.APITesting.UFT;
// ...

[TestMethod]
public void TestMethod1()
{
    Dictionary<string, object> InputParameters = new Dictionary<string, object>();
    InputParameters.Add("environment", "TEST");

    // Run the UFT API test by path, passing the input parameters
    APITestResult ExecutionResult = APITestRunner.Run("UFT Test Path", InputParameters);
    MessageBox.Show(ExecutionResult.Status.ToString());
    // ...
}
For sure, the above code is just to give you an insight, although it works fine. For better diagnostics, we can take advantage of other libraries like "HP.LFT.Verifications" for checking the results.
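For instance, instead of showing a message box, the run status could be asserted so the wrapper test fails when the API test fails (a sketch, assuming the status stringifies to "Passed"):

// Hypothetical follow-up inside TestMethod1, using MSTest's Assert:
Assert.AreEqual("Passed", ExecutionResult.Status.ToString());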
Important: You cannot use UFT and LeanFT at the same time as your runtime engine!

How can a SonarPlugin query its settings?

I'm currently developing a SonarQube plugin and want to ask whether there is a way to query the settings from the sonar-project.properties file at run time.
More specifically, in the sonar-project.properties file you can set the analysis mode to analysis, preview or incremental, e.g. sonar.analysis.mode=analysis.
Because the preview and incremental modes run into an error, I want to disable the plugin when one of these two modes is specified.
I know that there is the sonar.preview.excludePlugins setting for excluding plugins; however, I cannot use it. In other words, I have to figure out at run time which mode is set.
Can someone give me a hint? I haven't found a way to query the settings from the sonar-project.properties file.
Plugins cannot auto-disable themselves through the standard plugin exclusion properties.
The solution is that your plugin extensions, for instance sensors, read the properties through the component org.sonar.api.config.Settings and then continue or stop execution accordingly. Basically:
import org.sonar.api.batch.Sensor;
import org.sonar.api.batch.SensorContext;
import org.sonar.api.config.Settings;
import org.sonar.api.resources.Project;

public class MySensor implements Sensor {
  private final Settings settings;

  public MySensor(Settings settings) {
    this.settings = settings;
  }

  public void analyse(Project module, SensorContext context) {
    // Stop execution in the preview and incremental modes, which break this plugin
    String mode = settings.getString("sonar.analysis.mode");
    if ("preview".equals(mode) || "incremental".equals(mode)) {
      return;
    }
    // else continue...
  }
}

Automatic versioning of web site projects in TeamCity

At my current client we have some legacy ASP.NET web site projects. I am in the process of introducing automatic versioning for our builds and was wondering how best to do this with web site projects in TeamCity.
I am currently using TeamCity's %build.number% variable (set through the project build template) as the authoritative version number for a build. For any .NET project that produces assemblies it's hassle-free to use the "AssemblyInfo Patcher" build feature in TeamCity, but this does not work for web site projects since they do not produce assemblies.
So, any suggestions? I am already using PowerShell and psake in my builds, so creating scripts that use %build.number% is not a problem; it is more a question of how to inject this into the web site project in a "nice" manner.
I attempted several solutions but ended up using the version number of a dependent assembly that is set by TeamCity during the build. I added a class to that assembly; it looks something like this:
using System;
using System.Reflection;
using log4net; // log4net is assumed for the ILog/LogManager types

public class VersionUtils
{
    private readonly ILog _logger;

    public VersionUtils()
    {
        _logger = LogManager.GetLogger("VersionLogger");
    }

    public string GetWebAppVersion()
    {
        string version = "unknown version";
        var assembly = Assembly.GetAssembly(typeof(VersionUtils));
        try
        {
            // The assembly version is patched by TeamCity during the build
            version = assembly.GetName().Version.ToString();
        }
        catch (Exception ex)
        {
            _logger.Warn("Could not find or read version number from " + assembly.GetName(), ex);
        }
        return version;
    }
}
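The web site can then surface the number wherever needed, for example (lblVersion is an illustrative control name):

// In a page of the web site project, e.g. in Page_Load:
lblVersion.Text = "Build " + new VersionUtils().GetWebAppVersion();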

How to speed up Azure deployment from Visual Studio 2010

I have a Visual Studio 2010 solution with an Azure Service and an ASP.NET MVC 3 project that serves as a Web Role for the Azure service. No other roles are attached to the service.
Every deployment to the Azure staging (or production, for that matter) environment takes up to 20 minutes to complete, from the moment I click Publish in Visual Studio until all instances (2) are started.
As you can imagine, this makes it a PITA to publish often, or to quick-fix some bugs. Is there a way to speed the process up? Would it be faster to upload the package to Blob storage and upgrade from there? How would I go about achieving that?
I feel the online docs on Azure leave a lot to be desired, particularly when it comes to troubleshooting.
Thanks.
One idea for reducing the need (and frequency) for redeploying is to move static content into blob storage, external to the package. For instance, move your css and javascript to blob storage, along with images. Once this is done, you'd only have to recompile / redeploy for .NET code changes. You can upload updated css, at any time, to blob storage. If you want to test this in staging first, you could always have a staging vs. production container name for your static content and store that container name in a config setting.
This doesn't change the deployment time when you do need to redeploy, but at least you can reduce how often you go through that process...
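A sketch of the container-per-environment idea (the setting name StaticContentContainer and the storage account URL are assumptions for illustration):

// Hypothetical: resolve the blob container for static assets from service configuration,
// so staging and production can point at different containers.
string container = RoleEnvironment.GetConfigurationSettingValue("StaticContentContainer");
string cssUrl = string.Format("https://myaccount.blob.core.windows.net/{0}/site.css", container);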
You should enable Web Deploy in your Azure project. It works this way:
1. Create an RDP account (don't forget, you need to upload a certificate with its private key so that Azure can decipher the password). That is hidden in the Deploy dialog box for your Azure deployment project.
2. Enable Web Deployment - same place.
Once you've published the app that way, right-click on the web application (not the Azure deployment project) and select Publish. The pop-up has everything defined except the password; enter that as well and you'll upload your changes to Azure in a matter of seconds.
CAVEAT: this is meant for single-instance web apps, and is definitely not the way to go for a production upgrade strategy; the Blob storage answer already mentioned is the best option in that case.
Pierre
My solution to this problem is to push a new package only when I am changing code in the RoleEntryPoint or in the Service Definition. Since Azure 1.3 you have the ability to use Remote Desktop Connection. Using RDC, I compile my code locally and use copy/paste to place it on the Azure server in the appropriate directory. Once the production code is running correctly, I can then push the fully tested version to staging and do a VIP swap. This limits the number of times I actually have to deploy a package.
You actually have quite a long window in which you can keep modifying your code in Azure before you have to publish a new package. The new package is only really needed for those cases where Azure has to shut down/restart your role instance.
It's a nice idea to try uploading your package to Blob storage first, but unfortunately this is what Visual Studio does for you behind the scenes anyway. As has been pointed out elsewhere, most of the deployment time is spent not on the upload itself, but on stopping and starting all of your update domains.
If you're just running this site in a development environment, then the only way I know to speed it up is to run just one instance. If this is the live environment, then... sorry, I think you're out of luck.
So that I don't have to deploy to the cloud to test minor changes, what I've found works quite well is to engineer the site so that it runs in local IIS just like any other MVC site.
The biggest barrier to this working are the settings you have in the cloud config. The way we get around this is to make a copy of all of the settings in your cloud config and put them in the appSettings of your web.config. Then, rather than using RoleEnvironment.GetConfigurationSettingValue(), create a wrapper class that you call instead. This wrapper class checks RoleEnvironment.IsAvailable to see if it is running in the Azure fabric; if it is, it calls the usual config function above, and if not, it calls WebConfigurationManager.AppSettings[].
There are a few other things you'll want to do around getting the config-setting change events, which hopefully you can figure out from the code below:
using System;
using System.Linq;
using System.Web.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

public class SmartConfigurationManager
{
    private static bool _addConfigChangeEvents;
    private static string _configName;
    private static Func<string, bool> _configSetter;

    public static bool AddConfigChangeEvents
    {
        get { return _addConfigChangeEvents; }
        set
        {
            _addConfigChangeEvents = value;
            if (value)
            {
                RoleEnvironment.Changing += RoleEnvironmentChanging;
            }
            else
            {
                RoleEnvironment.Changing -= RoleEnvironmentChanging;
            }
        }
    }

    // Reads a setting from the Azure config when running in the fabric,
    // otherwise falls back to web.config appSettings.
    public static string Setting(string configName)
    {
        if (RoleEnvironment.IsAvailable)
        {
            return RoleEnvironment.GetConfigurationSettingValue(configName);
        }
        return WebConfigurationManager.AppSettings[configName];
    }

    public static Action<string, Func<string, bool>> GetConfigurationSettingPublisher()
    {
        if (RoleEnvironment.IsAvailable)
        {
            return AzureSettingsGet;
        }
        return WebAppSettingsGet;
    }

    public static void WebAppSettingsGet(string configName, Func<string, bool> configSetter)
    {
        configSetter(WebConfigurationManager.AppSettings[configName]);
    }

    public static void AzureSettingsGet(string configName, Func<string, bool> configSetter)
    {
        // We have to store these to be used in the RoleEnvironment Changed handler
        _configName = configName;
        _configSetter = configSetter;

        // Provide the configSetter with the initial value
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));

        if (AddConfigChangeEvents)
        {
            RoleEnvironment.Changed += RoleEnvironmentChanged;
        }
    }

    private static void RoleEnvironmentChanged(object anotherSender, RoleEnvironmentChangedEventArgs arg)
    {
        if (arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>().Any(change => change.ConfigurationSettingName == _configName))
        {
            if (_configSetter(RoleEnvironment.GetConfigurationSettingValue(_configName)))
            {
                RoleEnvironment.RequestRecycle();
            }
        }
    }

    private static void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
    {
        // If a configuration setting is changing
        if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
        {
            // Set e.Cancel to true to restart this role instance
            e.Cancel = true;
        }
    }
}
The uploading itself takes a bit more than a minute most of the time; it's the starting up of the instances that takes most of the time.
What you can do is deploy your fixes to staging first (note that it costs money, so don't leave it there for too long). Swapping from staging to production only takes a couple of seconds. So while your application is still running, you can upload the patched version, let your testers test it on staging, and when they give the go, simply swap it to production.
I haven't tested your possible alternative approach of uploading to Blob storage first, but I think that's overhead, as it doesn't speed up starting the instances.
