How to detect untested Ruby files?

I recently started working on a large Rails application. SimpleCov says test coverage is above 90%. Very good.
However, now and again I discover files that are not even loaded by the test suite. These files are actually used in production, but for some reason nobody cared enough to write even the simplest test for them. As a result they are not counted in the coverage metrics.
This worries me, since it means there is an unknown amount of code that is likely to break in production without us noticing.
Am I the only one to have this problem? Is there a well-known solution? Can we have coverage metrics for files not loaded?

The contributors added a new config option, track_files, exactly for this purpose. For a Rails project the value could look like this:
track_files '{app,lib}/**/*.rb'
More details here: https://github.com/colszowka/simplecov/pull/422
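Here is a minimal sketch of how that is wired up (assuming RSpec and a standard Rails layout; adjust the glob to your project). It has to run at the very top of spec_helper.rb, before any application code is required:

require 'simplecov'
SimpleCov.start 'rails' do
  # Count every Ruby file under app/ and lib/ toward coverage,
  # including files the test suite never loads (they show up at 0%).
  track_files '{app,lib}/**/*.rb'
end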

I ended up adding this to my environments/test.rb:
config.eager_load = true
config.eager_load_paths += ["#{config.root}/lib"]
However, adding lib/ can have downsides, such as loading generators. This post does a good job of explaining the pros and cons of each approach.
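One way to limit those downsides (a sketch, not an established convention: the COVERAGE environment variable is my own) is to eager load only when you are actually collecting coverage:

# config/environments/test.rb
config.eager_load = ENV["COVERAGE"].present?
config.eager_load_paths += ["#{config.root}/lib"] if ENV["COVERAGE"].present?

Then run COVERAGE=1 bundle exec rspec when you want the full coverage picture, and keep the faster lazy-loading behaviour for everyday test runs.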

Related

Sonarqube 6.2 - multilanguage setup, show coverage per language

We have a pretty old code base where, at the moment, everything (frontend/backend) is handled in one project. To improve our quality we set up a multilanguage project, so now, instead of analyzing just Java, we also analyse SCSS, HTML, JS, XML, ...
So far everything is running smoothly and working as expected. I am just curious whether there is a way to show "coverage per language"? We have a lot of Java tests but no JavaScript tests, and it would be pretty neat to have an overview of how well tested the different languages are!
There is also some business value related to this! Since coverage is not separated into integration and unit tests, the JavaScript files are now factored into the overall coverage as well -> which we can justify easily, but we lose some comparability :D
This is not available natively within SonarQube. If it's really important to you, you'll need to use the web services to pull the data and do the calculations externally.

Sharing Specflow Feature Files with Multiple Applications

My goal is to be able to write core tests that I can use within a unit testing framework as well as in UI testing with Selenium.
For a simple test like:
Scenario: Add two numbers
Given I have entered 50 into the calculator
And I have entered 70 into the calculator
When I press add
Then the result should be 120
I would create unit tests to prove that my core API passes, as well as a Selenium test to prove that my UI is doing the correct thing.
I briefly tried to find anyone doing something similar through Google, but couldn't find any examples. So I guess my question is, has anyone here done anything similar?
One approach I had thought of was simply adding the feature files to a common project or directory and using "Add Existing Item" > "Add As Link" in the solution.
Update: Adding feature files to a common directory and adding them as links appears to be working great. The feature bindings regenerate for each project the feature file is included in, so I can run unit tests in one and Selenium UI tests in the other.
First, let's start with why you might want to do this. It's laziness of the good kind.
The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful, and document what you wrote so you don't have to answer so many questions about it. Hence, the first great virtue of a programmer. Also hence, this book. See also impatience and hubris. (p.609)
Larry Wall, Programming Perl
Except it isn't, because we aren't going to reduce our overall energy expenditure.
When you are using SpecFlow, the easy part to keep up to date is the plain text. You will find yourself refactoring the [Binding]s again and again, but the scenarios tend to be quite easy to work with, and need very little revision once they have been agreed.
In addition, the [Binding]s are global. Load them in from any assembly and they are available to the SpecFlow runner. With respect to what you are trying, this actually makes things harder, as you need to put in effort to keep the UI bindings from being mixed up with the non-UI bindings.
Also consider the way that SpecFlow actually runs the tests from feature files. It's a two-stage process.
When you save the .feature file the SpecFlow VS plugin generates a .feature.cs file.
When you run your test engine (e.g. NUnit) it ignores the plain text and uses compiled code from .feature.cs
So if you start using linked .features I have no idea if the SpecFlow plugin will generate .feature.cs for both instances of the file. (If you try this please let us know)
Second, let's consider the features themselves. I think you will constantly find yourself compromising your tests to make them fit the other place they are used. Already in the example you have given, the wording assumes a screen ("I press add"). If you are working with just the core API then there won't be a screen, so do we change this to fit better in a non-UI scenario?
Finally, you have another thing to consider: just how useful will your tests be? If you already have a test that exercises the core API, then what will it mean to run the same test via Selenium? All you will really test is the UI layer. In my current employment we have a great number of regression tests that perform this very kind of testing, running up a client that connects to a server and manipulating the UI to get the desired scenarios enacted. These are the most fragile tests we have, due to their scale. They constantly break, and we basically have to check our entire codebase to find the line that broke them. Often something like 10-100 of them break for just a one-line change. If these tests weren't so important to the regression cycle, the effort of maintaining them would just be too much.
In my own personal projects I tend to remove these tests completely; instead, with UIs, I avoid testing the View layer. With WPF MVVM, I execute Commands and test for results in ViewModels. If somebody then decides the TextBox should be a ComboBox, or that it will work better in mauve, then my testing is isolated.
In short, there is a reason you can't find anything about this on Google :-)
In general (see http://martinfowler.com/bliki/TestPyramid.html), one should limit the number of automated tests that test the UI directly, and prefer tests that start at the presentation layer (just below the view layer), or below.
SpecFlow is agnostic; the tests can be implemented using e.g. Selenium at the UI layer or just MSTest or NUnit at any of the layers below.
However, having said that, I appreciate that you will have situations where you are doing ATDD and want to implement SpecFlow scenarios to match each of the acceptance criteria. Some of the criteria will be perfectly fine to test at a lower architectural level, but one or two of them may be specific to the GUI, for example testing login and ensuring that the user is redirected to the home page after a successful login. If using Angular2 or React routing (see https://en.wikipedia.org/wiki/Single-page_application), that redirect is likely done in the GUI layer itself.
I don't have a perfect answer yet, but as a certified SpecFlow trainer, I have a vested interest in this! The way I am currently leaning is to use a complementary tool like CucumberJS for the front-end-specific tests (such as testing React router redirects) and SpecFlow for tests at the lower architectural layers. Our front-end uses Node.js/Express and our backend is .NET Core. The idea is that the front-end tests mostly exercise the front-end in isolation, with mocked-out AJAX calls to the backend (see sinonjs), and the back-end tests use EF Core with the in-memory option (see docs.efproject.net/en/latest/providers/in-memory/), so the tests all run fast.
Of course, you still need a few tests that actually go all the way through, but those are different; we should call those integration tests. I do not believe that acceptance tests need to be integration tests. That way, you have a suite of acceptance tests from doing ATDD, plus a relatively small set of integration tests that test all the way front-to-back. The integration tests run more slowly and require more maintenance, so you separate them out into a different part of the CI/CD build chain.
I hope this makes sense. It is not so much solving the problem as avoiding the problem.

Generate tests skeleton when creating a gem

Is there a tool for creating a Ruby gem which automatically generates not only the project file structure but also the test file structure (with default Ruby assert tests or RSpec)?
You may want to take a look at hoe, which allows you to write your own project templates in ERB, amongst other goodies. See http://docs.seattlerb.org/hoe/
Personally I have found, despite looking for something like this initially, that bundle gem <gemname> plus a quick reference to an earlier project is not all that much work in practice, considering the number of gems I have written (about 7, though only one is published). Boris Stitnicky's comment rings true for me as well: understanding why a particular structure works, and building it from scratch at least a couple of times, is worth the time invested in gaining Ruby knowledge.
However, if my day job project involved creating many in-house gems, I'd probably be using hoe, or a similar tool to get them started consistently.
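For reference, recent versions of Bundler can also generate the test skeleton for you (a sketch; the exact flags depend on your Bundler version, and my_gem is a placeholder name):

bundle gem my_gem --test=rspec
# generates, among other files:
#   my_gem/my_gem.gemspec
#   my_gem/lib/my_gem.rb
#   my_gem/spec/spec_helper.rb
#   my_gem/spec/my_gem_spec.rb
# use --test=minitest for a Minitest skeleton instead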

What does regression test mean?

Could anyone explain the word regression test in an understandable way?
A regression test is a test that is performed to make sure that previously working functionality still works after changes elsewhere in the system. The Wikipedia article is pretty good at explaining what it is.
Your unit tests are automatically regression tests, and that's one of their biggest advantages. Once those tests are written, they will be run in the future whenever you add new functionality or change existing functionality. You don't need to explicitly write regression tests.
Notwithstanding the old joke, "Congress" is not the opposite of "progress;" "regress" is. For your code to regress is for it to "move backward," typically meaning that some bad behavior it once had, which you fixed, has come back. A "regression" is the return of a bug (although there can be other interpretations). A regression test, therefore, is a test that validates that you have fixed the bug, and one that you run periodically to ensure that your fix is still in place, still working.
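To illustrate that last point with a hypothetical Ruby/Minitest sketch (the class and issue number are invented for the example): once you fix a bug, you pin the fix in place with a test that fails if the bug ever comes back.

require "minitest/autorun"

# Hypothetical class that once had a bug: parsing "0" returned nil.
class PriceParser
  def parse(text)
    Integer(text, 10)
  rescue ArgumentError, TypeError
    nil
  end
end

class PriceParserRegressionTest < Minitest::Test
  # Regression test for (hypothetical) issue #123: "0" used to be
  # treated as missing input. Keeping this test ensures the fix
  # stays in place as the code changes around it.
  def test_zero_is_parsed_not_dropped
    assert_equal 0, PriceParser.new.parse("0")
  end
end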
The word regression, as coined by Francis Galton, means
Regression: the act of going back
That is, regression testing is the practice of checking that a change or bug fix hasn't impacted the existing functionality of the system. The intent of regression testing is to ensure that a change, such as a bug fix, does not result in another fault being uncovered in the application.
Regression Testing is required when
there is a change in requirements and code is modified according to the requirement
a new feature is added to the software
defects are fixed
a performance issue is fixed
Regression testing can be done both manually and in an automated way.
These are some tools for the automation approach:
QTP
AdventNet QEngine
Regression Tester
vTest
Watir
Selenium
actiWate
Rational Functional Tester
SilkTest
During a regression test, testers run through your application testing features that were known to work in the previous build.
They look specifically for parts of the application that may not have been directly modified, but depend on (and could have residual bugs from) code that was modified.
Bugs like these (caused by changes in code the feature depends on, even though the feature itself was working before) are known as regressions (because the feature was working properly and now has a bug... and has therefore regressed).
Regression testing is a part of the testing activity that can start after modifications have been made, to check the reliability of each software release. It is essentially an impact analysis: checking whether the change affects critical areas of the software.
1. Do unit tests.
2. Do integration tests.
3. After (1) and (2) pass, do the regression test.
In simple terms, the regression test is to repeat steps (1) and (2) again.
Regression testing is basically performed after retesting is complete. The main purpose of regression testing is to check the impact of a modification: whether our application is still stable.
It's necessary to perform regression testing because sometimes, after retesting or while fixing a bug, the developer fixes the bug but misses something in other code or in dependent code.
http://en.wikipedia.org/wiki/Regression_testing
Basically, test the code you've updated to make sure you haven't introduced new bugs and that the functionality still works as before.
Regression test: if there are any changes, deletions, modifications, updates or additions in my application, I have to know that my application still works as it was working before.
A regression test is a type of software testing where we try to cover or check the area around an original bug fix.
The functionality around the bug fix should not get changed or altered due to the fix. Issues found in this process are called regression issues.
In simple terms, a regression test is a test to make sure that the functionality of a system still works after a new code change has been introduced. It doesn't really have to be a thorough test of the whole functionality (as in functional testing), only of the areas that are considered to be impacted by the introduced code changes.
A regression test is a test which enables us to find introduced bugs by testing selected areas of the software. An introduced bug is a bug caused by new changes made by the developer.
The key to regression testing is doing it effectively, by wisely choosing the areas that might be impacted by the changes, since most of the time we can't test all the functionality due to time constraints. 'Effective' here means being able to find bugs in a relatively short period of time.
Regression testing means testing your software/website repeatedly. The main reason for it is to make sure there aren't any new bugs introduced.
Typically, regression tests will be automated, to reduce the cost of rerunning them. The more high-value test cases you can construct, the better. Play-and-record regression testing platforms are one way to do this.
Definition: regression testing is a type of software testing that confirms that a recent program or code change has not adversely affected existing features.
Regression testing is re-testing to make sure that a modification made to a program does not affect other functionality.
Regression testing is nothing but a full or partial selection of already-executed test cases which are re-executed to ensure existing functionality works fine.
We can do regression testing at all levels of testing: unit, integration and system.
Need for Regression Testing
Regression testing checks that:
1. Common code was changed correctly.
2. Version control is correct.
3. Bugs were fixed properly.
4. Bugs were fixed completely.
5. Performance issues were fixed.
6. When requirements changed, the code was modified according to the requirement.
7. The new feature was added to the software correctly.
I like this definition of regression testing:
[regression testing] tells you if a previously written and tested code broke after you’ve added an update or a fix
[...] it helps you notice if you’ve unknowingly introduced bugs to your software while adding new code. New bugs of this kind are called regressions.
Basically, a regression is returning to a state where your application has bugs.
Regression testing is an activity performed to ensure that the different functionalities of the system are still working as expected and that newly added functionality did not break any of the existing ones.
Secondly, you generally cover this with automated tests, manual testing, or both. It could be a combination of unit/API/UI tests that are run on a daily basis. Regression testing can be performed in various phases of the SDLC; it all depends on the context.
Hopefully this gives an idea of what regression testing is.

What is the ruby test tool called that 'breaks' your code to see how tight your tests are?

A wee while ago I ended up on a page which hosted several Ruby tools with 'crazy' names like 'mangler' or 'executor' or something. The tool's job was to modify your production code (at runtime) in order to prove that your tests were precise.
Unfortunately I would now like to find that tool again, but can't remember what it was called. Any ideas?
I think you're thinking about Heckle, which flips your code to make sure your tests are accurate. Here:
http://seattlerb.rubyforge.org/heckle/
Maybe you're thinking of the Flay project and related modules:
http://ruby.sadi.st/Ruby_Sadist.html
You can also try my mutant. It's AST-based and currently runs under MRI and RBX in > 2.0 mode. It only has a killer for rspec3, but others are possible too.
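For anyone unfamiliar with the idea behind these tools, here is a hand-rolled illustration (hypothetical code, not the actual Heckle or mutant API). A mutation tool makes a small change to your code and reruns your tests; if the suite still passes, the surviving mutant points at a test gap:

require "minitest/autorun"

# Hypothetical method under test.
def discount?(total)
  total > 100
end

class DiscountTest < Minitest::Test
  def test_large_order_gets_discount
    assert discount?(150)
  end
  # A mutation tool would rewrite `total > 100` as `total >= 100`
  # and rerun this suite. No test pins down total == 100, so the
  # mutant survives, revealing an untested boundary.
end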
