Batch testing showing false negatives in LUIS UI - azure-language-understanding

I have been working on improving my bot's accuracy using the batch testing panel. I have about 1000 tests that I run across 9 different intents. A week ago everything was working just fine; I had gotten them all passing and was seeing 100% across the board.
When I got to work on Monday, I ran the batch testing panel (without any changes) and saw all sorts of failures. It showed that 6/9 batches failed, with some having massive failures, e.g. only around 100/300 passing.
At first I freaked out, because I was going to take this to prod and couldn't figure out what had happened.
However, when I clicked on "see results", the next screen showed that all tests were in fact still passing. It's just the first panel that shows the wrong information about what is passing/failing.
It seems like there is some sort of bug here. I checked another model I had, and it showed the same false failure reporting.

Related

Tests stall for no obvious reason

I have been happily unit testing in Laravel 9 for the last few days and am gradually getting the hang of how to write tests for various circumstances. I ran many tests and was learning a lot when suddenly, for no obvious reason, the php artisan test command stopped working properly. Starting at about 11am today, the very first test in the very first test case executes successfully and then the whole thing just stalls: no other tests are attempted. There are no error messages and waiting prolonged periods doesn't result in the tests restarting on their own.
I've tried pressing CTRL-C to break out of the test and then starting again. Same result. I've tried closing the terminal window, opening another and running php artisan test again. I've tried using a different terminal window outside of VS Code altogether. Same result. I've tried closing and re-opening VS Code. Same result. I've even rebooted the computer (I'm running Windows 10) but am getting exactly the same result. I just can't get beyond the first test in the first test case.
I can't think of anything else to try. The second test case has been run successfully many times so I can't believe it is the problem. Can anyone suggest what the problem might be and how I can get past it?
I'm using PHPUnit 9.x.
UPDATE
As per Matias's suggestion, I ran php artisan test -vvv on my test suite. (I also renamed all of the files in the Unit test directory so that they wouldn't run, except for the two ExampleTest test cases that Laravel generates, one for feature testing and one for unit testing.) The very long stack trace pointed to the tests being unable to connect to my local MySQL database. That made sense, because I hadn't restarted XAMPP after the reboot. (Then again, I hadn't seen the urgency, because none of the code I was testing used the database; it dealt with strings, arrays and collections.) I restarted XAMPP, including Apache and MySQL. Now all tests in the ExampleTest test cases completed. I renamed my Helper2Txxx.php back to Helper2Test.php and assumed it would run smoothly now. Unfortunately, that didn't happen: Helper2Test.php didn't run at all; the testing stopped as soon as ExampleTest had finished.
I commented out all but the most trivial test in Helper2Test and it ran successfully. I uncommented a few tests at a time until I found the one that made the whole test cycle stop, and that (unsurprisingly) turned out to be the last one I had written. Further investigation revealed that a small function I'd written to help with the testing was running into a memory error:
Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 264257536 bytes) in C:\Laravel\laragigs\tests\Unit\Helper2Test.php on line 154
It looks very much like I have to fix up my small function, but I won't be able to do that until tomorrow sometime. My working theory is that this memory error is the big problem that somehow breaks artisan test; it also somehow kept the successful tests earlier in Helper2Test from displaying, so that I had no idea where/why the test command was failing.
I'll update here as I learn more. I think I will be able to fix this problem myself but I'll leave the problem open in case I turn out to be wrong.
UPDATE 2
Everything is working now. I reworked my small function to accomplish the same thing via explode() instead of strtok() and it now works perfectly with no memory issues.
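For anyone hitting the same wall: the poster's PHP helper is not shown, so the shape of the fix can only be sketched. Below is a Python analogue (function names are mine); PHP's strtok() hands back one token at a time while keeping scan state, and PHP's explode() is the one-shot equivalent of Python's str.split().

```python
# Illustrative sketch only -- the poster's actual PHP code is not shown.

def tokens_stateful(text: str, sep: str):
    """strtok-style: yield one token at a time, tracking scan position."""
    start = 0
    while True:
        end = text.find(sep, start)
        if end == -1:
            yield text[start:]   # final token (rest of the string)
            return
        yield text[start:end]
        start = end + len(sep)

def tokens_oneshot(text: str, sep: str):
    """explode-style: split the whole string in a single call."""
    return text.split(sep)

# Both approaches produce the same token list for well-formed input.
assert list(tokens_stateful("a,b,c", ",")) == tokens_oneshot("a,b,c", ",")
```

Which one exhausts memory depends on how the surrounding loop uses the tokens; the point of the rework was simply that the one-shot split sidestepped whatever the stateful tokenizer was accumulating.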
Thank you, Matias, for reminding me that I can get diagnostics by using the -vvv option! That was the critical step in figuring out what was wrong.

New Visual Studio installation, tests not running in Test Explorer

This question is very similar to other questions, some of which literally have the text "tests not running in Test Explorer" in the title. But my context is a bit different: in those questions, there was a fair bit of investigation into what might be wrong with the tests themselves. I am fairly confident nothing is wrong with the tests in this case.
I am one of hundreds of developers working on a project, and this project has a large bank of automated tests (though perhaps not as large as it ought to be :-P). Everybody is frequently running tests, and triggers fire when pull requests are made and merged to automatically run them then too. Tests were working fine for me as well. But, I have just been given a new laptop with better hardware specs, and I am trying to get it set up. On the new laptop, the project builds just fine (and noticeably faster :-) ), but the automated tests just don't run. I can't figure out why, and I'm looking for suggestions about what to check in this context -- given that there are hundreds of places where the exact same code is working perfectly, I really don't think the tests or test projects themselves are at fault here.
I have observed that the build output, apparently randomly, sometimes does not contain the test adapter files:
Microsoft.VisualStudio.TestPlatform.MSTest.TestAdapter.dll
Microsoft.VisualStudio.TestPlatform.MSTestAdapter.PlatformServices.dll
Microsoft.VisualStudio.TestPlatform.MSTestAdapter.PlatformServices.Interface.dll
xunit.runner.visualstudio.testadapter.dll
If these files are missing, then VSTest.Console.exe also cannot run the tests. But usually rebuilding the project results in the files magically appearing, and then VSTest.Console.exe works just fine.
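On a fresh machine, one thing worth ruling out is the NuGet restore itself: the adapter DLLs listed above are delivered by the MSTest.TestAdapter and xunit.runner.visualstudio packages, so if restore is flaky on the new laptop the files would come and go exactly as described. A typical SDK-style test project references them along these lines (versions are illustrative; match whatever the solution pins):

```xml
<ItemGroup>
  <!-- Versions here are placeholders, not taken from the question. -->
  <PackageReference Include="MSTest.TestAdapter" Version="2.2.10" />
  <PackageReference Include="MSTest.TestFramework" Version="2.2.10" />
  <PackageReference Include="xunit.runner.visualstudio" Version="2.4.5" />
</ItemGroup>
```

If these references are present and restore succeeds, the adapter DLLs should land in the build output deterministically.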
I haven't been able to ascertain a reason why the adapter files are sometimes put into the build output and sometimes not, and in either case, the Test Explorer within Visual Studio always fails to run the tests -- it discovers the tests just fine, puts several thousand items into the forest of trees, but when told to run tests, it just sits there for a minute or two and then returns to idle state with no output at all in the "Tests" output window.
This is a brand new installation of Visual Studio Enterprise 2019 Preview, the exact same version that is on my old laptop, but on the old laptop the tests run fine. What should I do? I don't know what to check next. :-(
Well, I am thoroughly confused. I tried installing new features, I tried checking for system updates, I rebooted multiple times, and tests did not work. So, finally, I decided to make a cut-down minimal test project and see if I could observe any differences in Process Monitor between the two computers. I made a project with two tiny tests, one with NUnit and one with xUnit, and ... they worked. So, I opened up the big project again and hit Run Tests, and ... they worked. I am completely stumped, and the only advice I can offer to anyone finding this question with a similar problem is, just keep trying.

Why does my Xcode bot trigger twice?

I've been working on using Xcode server to build my app, and have been running into some snags. The most recent involves Bots running over-zealously. I'll commit and push one change to one file, and two builds get triggered, separated by a minute or two. This also happens if I click the "Integrate Now" button, or if I make changes to the bot, with "Integrate immediately" unchecked.
Since my build takes a while to run, this is a pretty big problem, especially when I'm trying to iterate on Bot configuration.
Is anyone aware of what process triggers builds, or how I can troubleshoot this type of failure in general? It seems like there are multiple daemons listening for the signal to trigger the build, or something like that.
Since it may be a bug in the Xcode beta, I submitted a radar (rdar://20456212).
I had the same problem. I changed the bot so that it does not do a clean for each integration, and now it only does one build per commit. My guess is that the clean process and the download of the code were taking so long that the bot was triggered again before they had completed. So now I clean once a day, and I only get a double build on the first build of the day. Hope this helps.

TeamCity test failure statistics

I have a TeamCity build set up which does nothing but run integration tests. Sadly, the tests are a tad unreliable. Most of them work fine, but a few intermittently fail from time to time.
I would dearly love to be able to get a graph of the most common test failures. Basically I want to know which tests fail most often.
I know that TC can show me pass/fail statistics for any single test. But I'm not going to go click on all 400+ tests to find out which ones fail most often!
If it's not possible to make TC show me this information, is there some interface that will enable me to download the data so I can process it myself?
You can get a count of frequently failing tests from TeamCity by following the steps detailed at the link below:
Navigate: Projects -> (select project) -> Current Problems (tab) -> View tests failed within the last 120 hrs (link at the right side of the page)
http://confluence.jetbrains.com/display/TCD7/Viewing+Tests+and+Configuration+Problems#ViewingTestsandConfigurationProblems-ViewingTestsFailedwithinLast120Hours
I actually found a better way than the accepted answer, for version 9 at least. On a failed build's "Overview" page, open the small drop-down menu next to a failing test and click the test history button. You will then be presented with that test's pass/fail history.
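Beyond the UI views above, the data can also be pulled programmatically and aggregated yourself, which answers the "download the data" part of the question. TeamCity exposes test runs through its REST API's testOccurrences endpoint; the sketch below is hedged (server URL, build type ID, token, and the exact locator string are placeholders to adapt), but the local aggregation works on any decoded response:

```python
import json
from collections import Counter
from urllib.request import Request, urlopen

def top_failing_tests(occurrences, n=10):
    """Tally failures per test name from decoded testOccurrence entries."""
    counts = Counter(o["name"] for o in occurrences
                     if o.get("status") == "FAILURE")
    return counts.most_common(n)

def fetch_failures(server, build_type, token, count=1000):
    # Placeholder server/build_type/token; locator syntax should be
    # checked against your TeamCity version's REST API documentation.
    url = (f"{server}/app/rest/testOccurrences"
           f"?locator=buildType:{build_type},status:FAILURE,count:{count}")
    req = Request(url, headers={"Authorization": f"Bearer {token}",
                                "Accept": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)["testOccurrence"]

# Example with canned data (no server needed):
sample = [{"name": "LoginTest", "status": "FAILURE"},
          {"name": "LoginTest", "status": "FAILURE"},
          {"name": "CartTest", "status": "FAILURE"},
          {"name": "SmokeTest", "status": "SUCCESS"}]
print(top_failing_tests(sample))  # [('LoginTest', 2), ('CartTest', 1)]
```

Run it periodically and the most frequently failing tests float to the top of the counter, with no clicking through 400+ test pages.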

Microsoft Coded UI Test takes extremely long time to refresh page with cascading dropdowns

Unfortunately I cannot post any source code but I've seen this behavior on multiple websites.
I have a page that contains 3 drop-downs. The first drop-down populates the second, and the second populates the third.
Each drop-down initiates a post-back. While regularly navigating the site, it works extremely fast, without a hitch. However, with the Coded UI test, it takes an extremely long time to load the page after making each selection. I'm guessing that since the page posts back, there's some work the test must do before continuing.
Has anyone experienced similar issues?
I think this behavior occurs because Coded UI tests collect a lot of data from the PC they are executed on.
I suggest disabling all Data and Diagnostics collection by editing the TestSettings file.
Then, if your tests run faster, try to find a balance between the data you want to collect and test performance.
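To make the suggestion concrete, a minimal sketch of a .testsettings file with no data collectors enabled is shown below. This is an assumption-laden fragment, not a file from the question; the element names follow the Visual Studio test settings schema, so verify against a file generated by the Test Settings editor before relying on it:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Sketch only: empty DataCollectors means no diagnostics are gathered. -->
<TestSettings name="NoDiagnostics"
              xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Execution>
    <AgentRule name="LocalMachineDefaultRole">
      <DataCollectors />
    </AgentRule>
  </Execution>
</TestSettings>
```

The same effect can usually be achieved through the Test Settings editor UI by unchecking everything on the Data and Diagnostics page.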
