Where is the [Duration] attribute in MbUnit, for timing tests? - performance

Is it gone from the latest version?
I want to see that a method doesn't deteriorate in its performance.
I forgot the technical term for it.

Have a look at the [Timeout] attribute. It can decorate test methods or test fixtures. The default value (when not specified) is 10 minutes.
[TestFixture]
public class Fixture
{
    [Test, Timeout(60)] // timeout at 60 seconds
    public void Method()
    {
    }
}

Related

Call a Scheduled Async bean method in a running application

I've got an Async method which is scheduled to run once a day:
@Component
public class MyClass {

    // myCronProperty set to "0 25 20 * * *" in application.properties
    @Scheduled(cron = "${myCronProperty}")
    @Async
    @Override
    public void doDailyTask() {
        // Do work here
    }
}
Is there a way of triggering doDailyTask() for testing purposes when the application is already running, perhaps by doing something clever with Groovy and reflection?
I figure I can always tweak the cron property to 1 minute in the future in my application.properties file, and then restart the application - but just wondered if there was a smarter method?
You should be able to simply inject the component into another class, for example a @RestController, and invoke the doDailyTask() method on it there.
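For example, a minimal sketch (the controller name and the /internal/run-daily-task mapping are made up for illustration; MyClass is the scheduled component from the question):

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DailyTaskTriggerController {

    private final MyClass myClass;

    // Spring injects the same singleton bean that the scheduler uses
    public DailyTaskTriggerController(MyClass myClass) {
        this.myClass = myClass;
    }

    // POST to this endpoint to run the daily task on demand
    @PostMapping("/internal/run-daily-task")
    public void runDailyTask() {
        myClass.doDailyTask();
    }
}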

How to call a method automatically every second using Spring Boot?

In a Spring Boot project, is there a way to have a method called automatically every second?
And can I add a REST API in the same project to set the calling interval?
Here comes an example.
The greeting method will be executed every 5 seconds, and it can also be called by visiting the /hello endpoint.
@SpringBootApplication
@EnableScheduling
@RestController
public class So47301079 {

    public static void main(String[] args) {
        SpringApplication.run(So47301079.class, args);
    }

    @Scheduled(fixedRate = 5000)
    @GetMapping(value = "/hello")
    public void greeting() {
        System.out.println("Hello!!!");
    }
}
Hope this helps you!
You can use a cron expression like @Scheduled(cron = "*/5 * * * * *"). That way you also have control over minutes, hours, and days. Have a look at this video for the different possible ways to use cron expressions.
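For instance, a cron-based variant of the greeting method above might look like this (a sketch; it fires on every 5-second boundary rather than 5 seconds after the previous run):

// Fires at seconds 0, 5, 10, ... of every minute
@Scheduled(cron = "*/5 * * * * *")
public void greeting() {
    System.out.println("Hello!!!");
}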

Annotation to log the execution time of each test case in JUnit 5

I have a requirement to log the execution time of each test case. I don't want to use a custom implementation; I want to know whether there is an annotation available for this in JUnit 5. Btw, I know about Stopwatch and the JUnit 4 @Timeout.
After looking through the JUnit 5 documentation, I found this sample:
TimingExtension
This snippet is from their docs:
@ExtendWith(TimingExtension.class)
class TimingExtensionTests {

    @Test
    void sleep20ms() throws Exception {
        Thread.sleep(20);
    }

    @Test
    void sleep50ms() throws Exception {
        Thread.sleep(50);
    }
}
EDIT: Source code for TimingExtension
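For reference, a rough sketch of what that extension does (adapted from memory of the JUnit 5 user guide example, so check the linked source for the exact version): it stores the start time before each test and logs the elapsed time afterwards.

import java.lang.reflect.Method;
import java.util.logging.Logger;

import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.ExtensionContext.Namespace;
import org.junit.jupiter.api.extension.ExtensionContext.Store;

public class TimingExtension implements BeforeTestExecutionCallback, AfterTestExecutionCallback {

    private static final Logger logger = Logger.getLogger(TimingExtension.class.getName());
    private static final String START_TIME = "start time";

    @Override
    public void beforeTestExecution(ExtensionContext context) {
        // remember when this test method started
        getStore(context).put(START_TIME, System.currentTimeMillis());
    }

    @Override
    public void afterTestExecution(ExtensionContext context) {
        Method testMethod = context.getRequiredTestMethod();
        long startTime = getStore(context).remove(START_TIME, long.class);
        long duration = System.currentTimeMillis() - startTime;
        logger.info(() -> String.format("Method [%s] took %s ms.", testMethod.getName(), duration));
    }

    private Store getStore(ExtensionContext context) {
        // one store entry per test method
        return context.getStore(Namespace.create(getClass(), context.getRequiredTestMethod()));
    }
}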
I wrote a BenchmarkExtension that you can apply with @Benchmark. You can use it as follows:
@Benchmark
class BenchmarkTest {

    @Test
    void notBenchmarked() {
        assertTrue(true);
    }

    @Test
    @Benchmark
    void benchmarked() throws InterruptedException {
        Thread.sleep(100);
        assertTrue(true);
    }
}
When applied to the class, you will get a single message with the total run time of all the test methods in that class. When applied to a single test method, you will get a message for that test only.
I hope it will eventually find its way into JUnit Pioneer.
JUnit 5 also has a @Timeout annotation.
There are also assertTimeout and assertTimeoutPreemptively (see the documentation).
Finally, the JUnit 4 Stopwatch functionality is not available in JUnit 5. Use the TimingExtension as already mentioned. However, this extension is not (yet?) part of the distribution (see issue 343).
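A quick sketch of both options (class and test names are illustrative; note that these enforce a limit rather than log the elapsed time):

import static org.junit.jupiter.api.Assertions.assertTimeout;

import java.time.Duration;
import java.util.concurrent.TimeUnit;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Timeout;

class TimeoutExamplesTest {

    // Fails the test if it runs for longer than 500 ms (requires JUnit 5.5+)
    @Test
    @Timeout(value = 500, unit = TimeUnit.MILLISECONDS)
    void finishesQuickly() throws InterruptedException {
        Thread.sleep(20);
    }

    // Fails if the lambda takes longer than 100 ms
    @Test
    void staysUnderOneHundredMillis() {
        assertTimeout(Duration.ofMillis(100), () -> Thread.sleep(20));
    }
}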
If you are using SonarQube, you can find the duration of each test, and the total of all tests, sorted by time.
There is nothing special that needs to be done to get the execution time of a test case in JUnit 5 (Jupiter).
The time is right there in the XML report.
On the 'testcase' element there is a 'time' attribute, which gives the execution time of the test case. Below is an example; the times are in seconds.
<testcase name="checkZKConnectivityWithAuth"
classname="test.zk.ZookeeperConnectionProviderTests"
time="7.177"/>
<testcase name="checkConfigRetrieval"
classname="test.zk.ZookeeperConnectionProviderTests"
time="0.213"/>
<testcase name="checkConfigInsertion"
classname="test.zk.ZookeeperConnectionProviderTests"
time="0.255"/>

MvvmCross: testing different view models fails when running together

I've come across an interesting error. I have two test files for my Xamarin mobile application, both testing view models:
public class TestFirstViewModel : MvxIoCSupportingTest
{
    public void AdditionalSetup()
    {
        // Register services and dependencies here.
    }

    [Fact]
    public void TestMethod1()
    {
        // Successful test code here.
    }
}
That's in one file. In another file, I have:
public class TestSecondViewModel : MvxIoCSupportingTest
{
    public void AdditionalSetup()
    {
        // Register services and dependencies here, slightly different from the first.
    }

    [Fact]
    public void TestMethod2()
    {
        // Successful test code here.
    }
}
When I run these files individually (I'm using xunit), they work just fine. However, when I run them together, I get the following error on one of the test cases:
Result Message: Cirrious.CrossCore.Exceptions.MvxException : You cannot create more than one instance of MvxSingleton
Result StackTrace:
at Cirrious.CrossCore.Core.MvxSingleton`1..ctor()
at Cirrious.CrossCore.IoC.MvxSimpleIoCContainer..ctor(IMvxIocOptions options)
at Cirrious.CrossCore.IoC.MvxSimpleIoCContainer.Initialize(IMvxIocOptions options)
at Cirrious.MvvmCross.Test.Core.MvxIoCSupportingTest.ClearAll()
at Cirrious.MvvmCross.Test.Core.MvxIoCSupportingTest.Setup()
at Project.Test.TestFirstViewModel.TestMethod1() in ...
Can anyone tell me what's going on here?
The issue stems from xUnit's parallelization without the option to do proper tear-down. You can disable parallelization in the AssemblyInfo.cs file in your test project by adding:
[assembly: CollectionBehavior(DisableTestParallelization = true)]
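If your test project has no AssemblyInfo.cs, any .cs file in the project can carry the assembly-level attribute; a minimal sketch of such a file:

using Xunit;

// Assembly-level switch: xUnit will run all test collections on a single thread
[assembly: CollectionBehavior(DisableTestParallelization = true)]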
I ended up solving this by changing testing frameworks. I had different IoC singleton initializations because, well, they're different test cases and needed different inputs/mocks. Instead of xUnit, I resorted to NUnit, where the cache clearing is much better defined: xUnit doesn't exactly believe in setup and tear-down, which made a test environment like this more difficult.
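As a rough sketch of the NUnit shape (fixture and method names are illustrative; Setup() and ClearAll() are the MvxIoCSupportingTest helpers visible in the stack trace above, and if your version of that base class already carries NUnit's [SetUp] attribute the explicit wrapper is unnecessary):

using NUnit.Framework;

[TestFixture]
public class FirstViewModelTests : MvxIoCSupportingTest
{
    // NUnit runs this before every test; Setup() tears down and re-creates the
    // MvvmCross IoC container, so each fixture gets its own registrations.
    // NUnit also runs tests sequentially unless you opt in with [Parallelizable],
    // which avoids the MvxSingleton collision seen under xUnit.
    [SetUp]
    public void SetUpIoC()
    {
        Setup();
    }

    [Test]
    public void TestMethod1()
    {
        // test code here
    }
}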
I fixed the issue by using the Collection attribute.
[Collection("ViewModels")]
public class ViewModelATest : BaseViewModelTest {
    ...
}

[Collection("ViewModels")]
public class ViewModelBTest : BaseViewModelTest {
    ...
}
The base view model test class has the mock dispatcher and performs the singleton registrations in the AdditionalSetup method.
Each of my tests calls ClearAll() at the beginning.
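A sketch of that arrangement (names other than the MvvmCross members are made up; AdditionalSetup is assumed to be the overridable hook on MvxIoCSupportingTest, and the exact registration calls depend on your MvvmCross version):

using Xunit;

public abstract class BaseViewModelTest : MvxIoCSupportingTest
{
    protected override void AdditionalSetup()
    {
        // register the mock dispatcher and the singletons shared by the view model tests here
    }
}

[Collection("ViewModels")]   // same collection name keeps these fixtures off parallel threads
public class ViewModelATest : BaseViewModelTest
{
    [Fact]
    public void SomeBehaviour()
    {
        ClearAll();   // resets the IoC container (and, depending on the version, re-runs AdditionalSetup)
        // test code here
    }
}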
I had some success setting things up in a constructor and adding this check:
public PaymentRepositoryTests()
{
    if (MvxSingletonCache.Instance == null)
    {
        Setup();
    }

    // other registrations.
}
I also implemented the IDisposable interface:
public void Dispose()
{
    ClearAll();
}
But to be honest, I'm not sure how much impact that had.
It works OK with xUnit.
Copy MvxIoCSupportingTest and MvxTest into your xUnit PCL project.
Modify MvxTest to remove the attributes and use a simple constructor:
public class MvxTest : MvxIoCSupportingTest
{
    protected MockMvxViewDispatcher MockDispatcher { get; private set; }

    public MvxTest()
    {
        Setup();
    }
    ...
And in each of your tests, derive from IClassFixture:
public class TestRadiosApi : IClassFixture<MvxTest>
{
    [Fact]
    public async Task TestToken()
    {
        ...
xUnit will create the MvxTest class only once for all tests.

Forcing MSTest to use a single thread

Given this test fixture:
[TestClass]
public class MSTestThreads
{
    [TestMethod]
    public void Test1()
    {
        Trace.WriteLine(Thread.CurrentThread.ManagedThreadId);
    }

    [TestMethod]
    public void Test2()
    {
        Trace.WriteLine(Thread.CurrentThread.ManagedThreadId);
    }
}
Running the tests with MSTest through Visual Studio or the command line prints two different thread numbers (yet they run sequentially anyway).
Is there a way to force MSTest to run them using a single thread?
I solved this problem with locking:
public static class IntegrationTestsSynchronization
{
    public static readonly object LockObject = new object();
}

[TestClass]
public class ATestCaseClass
{
    [TestInitialize]
    public void TestInitialize()
    {
        Monitor.Enter(IntegrationTestsSynchronization.LockObject);
    }

    [TestCleanup]
    public void TestCleanup()
    {
        Monitor.Exit(IntegrationTestsSynchronization.LockObject);
    }

    // test methods
}
// possibly other test cases
This can of course be extracted to a base test class and reused.
I fought for endless hours to make MSTest run in single-threaded mode on a large project that made heavy use of NHibernate and its non-thread-safe (not a problem, it just isn't) ISession.
We ended up spending more time writing code to support the multi-threaded nature of MSTest because, to the best of my and my team's knowledge, it is not possible to run MSTest in single-threaded mode.
You can derive your test class from:
public class LinearTest
{
    private static readonly object SyncRoot = new object();

    [TestInitialize]
    public void Initialize()
    {
        Monitor.Enter(SyncRoot);
    }

    [TestCleanup]
    public void Cleanup()
    {
        Monitor.Exit(SyncRoot);
    }
}
We try hard to keep our tests isolated from each other. Many of them achieve this by setting up the state of a database, then restoring it afterwards. Although tests mostly set up different data, with some 10,000 in a run there is a fair chance of a collision unless the author of a test takes care to ensure its initial data is unique (i.e. doesn't use the same primary keys as another test doing something similar). This is, frankly, unmanageable, and we do get occasional test failures that pass the second time around. I am fairly sure this is caused by collisions that would be avoided by running tests strictly sequentially.
The way to make an MSTest method run in single-threaded mode:
NuGet:
install-package MSTest.TestAdapter
install-package MSTest.TestFramework
In your test source, on those methods that need to run while no other tests are running:
[TestMethod]
[DoNotParallelize]
public void myTest()
{
    // ...
}
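If I remember correctly, MSTest V2 also accepts this attribute at assembly level, which forces the whole test assembly onto one thread; treat this as an assumption and check the MSTest docs for your adapter/framework version:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Assumption: assembly-level DoNotParallelize disables parallel execution
// for every test in this assembly (MSTest V2).
[assembly: DoNotParallelize]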
Whilst it is a cop-out answer, I would actually encourage you to make your code thread-safe. The behaviour of MSTest is to ensure isolation, as Richard has pointed out. By encountering problems with your unit tests, you are proving that there could be some problems in the future.
You can ignore them, use NUnit, or deal with them and continue to use MSTest.
I tried a bit of a different approach, because the underlying problem is the names of the pipes. So I made a fake pipe, PipeDummy, derived from the one I use in the program, and named the pipe after the test's name.
[TestClass]
public class PipeCommunicationContractTests {
    private PipeDummy pipe;

    /// <summary>
    /// Gets or sets the test context which provides
    /// information about and functionality for the current test run.
    /// </summary>
    public TestContext TestContext { get; set; }

    [TestInitialize]
    public void TestInitialize() {
        pipe = new PipeDummy(TestContext.TestName);
        pipe.Start();
    }

    [TestCleanup]
    public void TestCleanup() {
        pipe.Stop();
        pipe = null;
    }

    ...

    [TestMethod]
    public void CallXxOnPipeExpectResult() {
        var result = pipe.Xx();
        Assert.AreEqual("Result", result);
    }
}
It appears to be a bit faster, since we can run on multiple cores and threads...
