Is there a YAML test suite?

A comprehensive test suite would be a valuable tool to have, especially when evaluating all of the variant parsers out there. Does such a beast exist?
In a perfect world, I imagine it would have different sections for different versions of the YAML spec...

There is currently one in the making, see here.
We also generate a result matrix for the YAML implementations that we know of and that are written in a language we know, so that we can implement adapter code to validate the test suite against them.
Full disclosure: I am the author of NimYAML and some of the test cases and adapters.
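The answer doesn't show what such adapter code looks like. Purely as an illustration (not taken from the actual test suite or its adapters), a minimal JUnit sketch along those lines could feed a YAML snippet to a parser and compare the loaded structure against the expected one; SnakeYAML and the sample document here are my own choices for the example:

```java
import static org.junit.Assert.assertEquals;

import java.util.Map;

import org.junit.Test;
import org.yaml.snakeyaml.Yaml;

// Illustrative only: feed a YAML document to one implementation (SnakeYAML here)
// and compare the result against the expected native structure. Real test-suite
// adapters work case-by-case over the whole suite rather than on a single snippet.
public class YamlAdapterSketchTest {

    @Test
    public void parsesSimpleMapping() {
        String input = "key: value\nlist:\n  - 1\n  - 2\n";

        @SuppressWarnings("unchecked")
        Map<String, Object> doc = (Map<String, Object>) new Yaml().load(input);

        assertEquals("value", doc.get("key"));
        assertEquals(java.util.Arrays.asList(1, 2), doc.get("list"));
    }
}
```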

Related

Testing a DSL / Grammar Written With Xtext

I've been set the task of writing a grammar / DSL with Xtext. It seems reasonably simple. I've been asked to incrementally add rules to the grammar as per a specification. I want to be able to test each new rule that I create.
I have Spring Tool Suite Version: 3.9.2.RELEASE, Build Id: 201712210947, Platform: Eclipse Oxygen.2 (4.7.2).
I've seen textboxes in web browsers that show syntax highlighting based on the validity of the input as per the grammar rules. Also, I'm aware that unit testing is a possibility.
What's the simplest way, minimal fuss, of verifying that the grammar works as expected?
Writing a bunch of unit tests using ParseHelper and ValidationTestHelper should be the most sustainable way to do this.
e.g. https://github.com/eclipse/xtext-eclipse/blob/master/org.eclipse.xtext.xtext.ui.examples/projects/domainmodel/org.eclipse.xtext.example.domainmodel.tests/src/org/eclipse/xtext/example/domainmodel/tests/ValidationTests.xtend
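As a rough sketch of what such a test can look like (written in plain Java rather than Xtend; MyDslInjectorProvider, Model and the parsed snippet are placeholders for whatever Xtext generates for your grammar, and older Xtext versions use the org.eclipse.xtext.junit4 packages instead of org.eclipse.xtext.testing):

```java
import com.google.inject.Inject;

import org.eclipse.xtext.testing.InjectWith;
import org.eclipse.xtext.testing.XtextRunner;
import org.eclipse.xtext.testing.util.ParseHelper;
import org.eclipse.xtext.testing.validation.ValidationTestHelper;
import org.junit.Test;
import org.junit.runner.RunWith;

// Sketch of a grammar unit test. "MyDslInjectorProvider" and "Model" stand in
// for the injector provider and root model type Xtext generates for your DSL.
@RunWith(XtextRunner.class)
@InjectWith(MyDslInjectorProvider.class)
public class MyDslParsingTest {

    @Inject private ParseHelper<Model> parseHelper;
    @Inject private ValidationTestHelper validationHelper;

    @Test
    public void parsesWithoutErrors() throws Exception {
        // Replace with a snippet that should be valid in your grammar.
        Model model = parseHelper.parse("entity Person { name : String }");

        // Fails the test if the parser or any validator reports an error.
        validationHelper.assertNoErrors(model);
    }
}
```

Each new rule you add to the grammar gets one or more such tests: valid snippets asserted error-free, invalid ones asserted to produce the expected error.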
The answer depends on what you mean by 'verifying that the grammar works as expected'. As Christian noted, unit tests using ParseHelper and ValidationTestHelper will tell you if your grammar is being parsed into the correct model and generating the appropriate errors otherwise. If, however, you want to test things like content assist, syntax highlighting and such, you have to go a little further and write some tests using AbstractEditorTest, AbstractOutlineTest, etc. in the ui.tests package for your Xtext project. After looking at a lot of tutorials I finally went ahead and bought Lorenzo Bettini's book:
Implementing Domain-Specific Languages with Xtext and Xtend - Second Edition
It was a great help in understanding Xtext, and how to test a grammar as you create it.

Hudson/Jenkins source code metrics?

Are there any useful plugins for source code metrics for Hudson/Jenkins?
I'm looking for total lines of code, total number of tests, classes, etc. with graphing.
Does anything like this exist?
Are you using Java? If so, SONAR should certainly be your first port of call. It does a lot on its own and also wraps up all the major Java analysis tools.
Out of the box, you'll get metrics on:
Potential Architectural & Design issues
Unit test coverage (uses Cobertura)
Lines of code / packages / classes, etc.
Potential bugs
Code duplication
Adherence to code formatting standards
(plus many more)
It allows you to traverse from the high level analysis through to the source code it relates to. It will be easier if you're using Maven for your build though...
There is a Hudson plugin. And it's free.
Try CCCC (http://sourceforge.net/projects/cccc/). It does code counting, module counting (classes), etc. for C and C++, and the plugin also graphs it for you.
Incidentally, what language are you looking at?
There's also CLOC (Count Lines of Code), which will tell you how many lines of each language you have, although I can't seem to find a link for it.
You don't specify which language you are using, but Redsolo's awesome blog post Guide to building .NET projects using Hudson shows you how to use FxCop and NUnit on Hudson to give some of what you are looking for. The Violations plugin used there also supports Simian, CPD, PMD and PyLint.

Test Automation Framework

I was wondering what would be a good UI to specify test cases.
Currently we use macros with Excel to specify our test cases, generate XML from them, and export that to the script generator.
Excel is good and really flexible and allows testers to enter their test cases very quickly.
However, the generated XML is sometimes not well formed, and the system has a huge learning curve.
I want to change the UI from excel to something else that would allow testers to enter test cases quickly and provide flexibility.
A nice TDD tool is SLIM/FitNesse. It is a wiki system which lets you enter special tables and/or commands that trigger test methods. These test methods can be written in Java and .NET (other languages might be supported). Also there are various plug-ins for doing DB testing or Selenium web tests. Here is a first tutorial video.
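To give a feel for how the wiki tables map onto code, here is a sketch of a SLIM decision-table fixture in Java, following the classic FitNesse division example; the class, column and method names are illustrative, not tied to any particular project:

```java
// Sketch of a SLIM decision-table fixture. A wiki table like
//
//   |Division               |
//   |numerator|denominator|quotient?|
//   |10       |2          |5.0      |
//
// maps its input columns to the setters below and its "quotient?" column to
// the quotient() method, so testers edit tables while developers supply fixtures.
public class Division {

    private double numerator;
    private double denominator;

    public void setNumerator(double numerator) {
        this.numerator = numerator;
    }

    public void setDenominator(double denominator) {
        this.denominator = denominator;
    }

    public double quotient() {
        return numerator / denominator;
    }
}
```

Testers get a table-based UI that is nearly as quick as Excel, while the fixture code keeps the generated test data well formed.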
I've used TestLink for this sort of task. It's an open-source PHP project.
You might check out Fitnesse, which does a similar thing. http://fitnesse.org/

Code and unit tests in two different languages?

I've recently started writing unit tests for PL/SQL code in Ruby.
Are there any other language combinations where you write your code and unit tests in two completely different languages?
A common combination is code in Java and tests in Groovy, which is particularly interesting because Groovy is built "on top of" Java; for example, Groovy even uses the same testing framework as Java.
We write Groovy tests for our Java application, mainly because we want to learn and gain experience with other programming languages.
I've seen unit tests written in Ruby for a C library wrapped with SWIG.
The main advantage over the same unit tests written in C is the interactive Ruby interpreter (irb), which makes exploratory testing possible.
A few years ago we used Python to test C++ code, using Boost to export the classes.
The unit tests were written in Python.
The interesting part of this architecture is that we were able to access live objects from a Python console, because the logic was expressed in Python while C++ was used to build the low-level classes.
If you're adding a new language to an existing project it's perfectly reasonable to do functional/acceptance tests in the existing language.
When we first adopted Ruby on Rails, we still used JUnit and HtmlUnit to test the web front-end and did assertions directly against the database backend.
If you're still learning how to use a new piece of infrastructure it makes sense to keep using a testing method that you can trust whilst you do the transition.
We did eventually start using test/unit in Ruby, and Selenium, but it was useful to have a transition period where we relied on our existing Java-based tests...
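For context, a minimal sketch of the kind of JUnit + HtmlUnit check described above might look like this (HtmlUnit 2.x package names; the URL and the expected title are placeholders for whatever the application under test serves):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

// Illustrative only: a JUnit test driving a locally running web front-end with
// HtmlUnit, independent of the language the application itself is written in.
public class FrontEndSmokeTest {

    @Test
    public void homePageRenders() throws Exception {
        try (WebClient client = new WebClient()) {
            // Placeholder URL for the app under test.
            HtmlPage page = client.getPage("http://localhost:3000/");

            // Placeholder expectation: the page title we assume the app renders.
            assertEquals("Welcome", page.getTitleText());
        }
    }
}
```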

NUnit best practice

Environment: (C# WinForms application in Visual Studio Professional 2008)
I've been digging around a little for guidance on NUnit best practices. As a solo programmer working in a relatively isolated environment I'm hoping that collective wisdom here can help me.
Scott White has a few good starting points here but I'm not sure I totally agree with everything he's said -- particularly point 2. My instincts tell me that the closer a test is to the code being tested the more likely you are to get complete test coverage. In the comments to Scott's blog posting is a remark that just testing the public interface is considered best practice by some, but I would argue the test framework is not a typical class consumer.
What can you recommend as best practices for NUnit?
If by point 2, you mean the "bin folder per solution" -- I can see your point. Personally, I would simply add the reference to each test project. If, on the other hand, you really mean (1b) "don't put your tests in the same assembly as your code", I heartily agree with him and disagree with you. Your tests should be distinct from your production code in order to enhance code clarity and organization. Keeping your test classes separate helps the next programmer understand them more easily. If you need access to internals in your tests -- and you might, since internal methods are "public" to the assembly -- you can use the InternalsVisibleTo construct in the AssemblyInfo.cs file.
I, too, would recommend that, in general, it is sufficient to unit test only the public interface of the code. Done properly (using TDD), the private methods of your code will simply be refactorings of previous public code and will have sufficient test coverage through the public methods. Of course, this is a guideline not a law so there will be times that you might want to test a private method. In those instances, you can create an accessor and use reflection to invoke the private method.
Another recommendation I would make is to use unit testing and code coverage in tandem. Lack of coverage is a useful heuristic for spotting where more testing may be needed. This isn't to say that you need 100% coverage -- some code may be simple enough not to warrant a unit test (automatic properties, for instance) and may never be touched by your existing tests.
There were a couple of issues that I had with the article. Probably the biggest is the lack of abstraction away from the database for unit tests. There probably are some integration tests that need to go against the db -- perhaps when testing trigger or constraint functionality if you can't convince yourself of their correctness otherwise. In general, though, I'm of the opinion that you should implement your data access as interfaces, then mock out the actual implementations in your unit tests so that there is no need to actually connect to the database. I find that my tests run faster, and thus I run them more often when I do this. Building up a "fake" database interface might take a little while but can be reused as long as you stick with the same design pattern for your data access.
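The answer is about C# and NUnit, but the pattern it describes is language-agnostic. To match the other examples on this page, here is a minimal sketch in Java with JUnit and a hand-rolled in-memory fake; every name in it is invented for illustration:

```java
import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

// Sketch of the pattern described above: production code depends on a
// data-access interface, and the unit test substitutes an in-memory fake,
// so no database connection is needed and the tests stay fast.
public class CustomerServiceTest {

    // The interface the production code depends on.
    interface CustomerRepository {
        String findNameById(int id);
    }

    // The class under test; it never knows whether the repository is real or fake.
    static class CustomerService {
        private final CustomerRepository repository;

        CustomerService(CustomerRepository repository) {
            this.repository = repository;
        }

        String greet(int customerId) {
            return "Hello, " + repository.findNameById(customerId);
        }
    }

    // Hand-rolled fake backed by a map instead of a database.
    static class InMemoryCustomerRepository implements CustomerRepository {
        private final Map<Integer, String> names = new HashMap<>();

        void add(int id, String name) {
            names.put(id, name);
        }

        @Override
        public String findNameById(int id) {
            return names.get(id);
        }
    }

    @Test
    public void greetsCustomerByName() {
        InMemoryCustomerRepository fake = new InMemoryCustomerRepository();
        fake.add(42, "Ada");

        CustomerService service = new CustomerService(fake);

        assertEquals("Hello, Ada", service.greet(42));
    }
}
```

The fake can be reused across tests as long as the data access sticks to the same interface, which is what makes the up-front effort pay off.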
Lastly, I would recommend using NUnit with TestDriven.Net -- a very useful plugin whether you're doing NUnit or MSTest. It makes it very handy to run or debug tests from a right-click context menu.
"My instincts tell me that the closer a test is to the code being tested the more likely you are to get complete test coverage. In the comments to Scott's blog posting is a remark that just testing the public interface is considered best practice by some, but I would argue the test framework is not a typical class consumer."
If your code cannot be tested using only public entry points, then you have a design problem. You should read more about TDD and SOLID principles (especially single responsibility principle and dependency inversion). Then you will understand that following TDD approach will help you write more testable, flexible and maintainable code, without the need for using such "hacks" as testing classes' private parts.
I also highly recommend reading Google's guide to testability by Miško Hevery, it has plenty of code samples which cover these topics.
I'm in a fairly similar situation, and this question describes what I do: keep your source close and your unit tests closer. There weren't too many others enamoured with my approach, but it works perfectly for me.
