Ways to test an access modifier - move-lang

good morning, guys.
Is there a way to test an access modifier? E.g. ensure that create_block is public(fun)? AFAIK that's not possible in a unit test (as the 'expected failure' test itself would not compile). Is this something we could use the prover for?

The access modifier is set statically by the compiler, so, as you pointed out, you could have a unit test to ensure that it is public, but there's not necessarily a way to test that it's not public.
I think this is pretty similar to other languages: it's just a compile-time check.
https://move-book.com/syntax-basics/function.html#function-visibility
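The same situation is easy to see in other compiled languages. Here is a minimal C# sketch (hypothetical names, purely for the analogy): visibility is resolved by the compiler, so a call to a non-public member is a compile error, not a runtime failure you could mark as "expected" in a test.

public class Ledger
{
    public void CreateBlock() { /* ... */ }      // public: tests can call it
    private void Reconcile() { /* ... */ }       // implementation detail
}

public class LedgerVisibilityCheck
{
    public void CreateBlockIsCallable()
    {
        new Ledger().CreateBlock();              // compiles and runs, which shows it is public
        // new Ledger().Reconcile();             // would not compile, so there is no way to write
                                                 // an "expected failure" test around it
    }
}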

Related

TDD vs Defensive Programming

Uncle Bob says:
"Defensive programming, in non-public APIs, is a smell, and a symptom, of teams that don't do TDD."
I am wondering how TDD can prevent an (internal) function from being used in an unintended way? I think TDD can't prevent it. It merely shows that the function is used correctly, because a calling function is covered by its passing unit tests.
When developing a new feature that uses the (undefensive) function, this feature is also developed with TDD. So unintended use of the function will fail the new feature's tests.
So using TDD to drive new features will force you to correctly use (internal) functions.
Do you think that is what is meant by Uncle Bob's tweet?
So using TDD to drive new features will force you to correctly use (internal) functions.
Exactly. But keep in mind the subtle "gap" here: you should use TDD to write (unit) tests that test the contract of your public methods. You do not care about the implementation of these methods - that is all internal implementation detail.
Therefore: if your "new" code uses an existing method in an unintended way you are "told" because an exception is thrown or you receive an unexpected result.
That is what I mean by "gap": you see, the above describes a black box testing approach. You have a public method X, and you verify its public contract. Compare that to white box testing where you write tests to cover all paths taken within X. When doing that, you could notice: "ok to test that one condition in my internal method, I would have to drive this special data".
But as said - I think you should go for black box testing - white box tests might break easily when refactoring internal methods.
And there is an additional dimension here: keep in mind that, ideally, you change code in order to implement new features, and adding those features takes place by writing new classes and methods. This means that your new code has no chance of using private internal methods, because you are within a new class. In other words: when you regularly run into situations where your internal methods are used in many different ways, you are probably doing something wrong.
The ideal path is: you implement a new requirement by creating a set of new classes. Later on, you have to add other requirements - by writing more classes.
In that ideal path - there is no need for defensive programming within internal methods. Because you exactly understand each use case for such internal methods!
Thus, the conclusion is: avoid defensive programming in internal methods. Make sure that your public APIs check all pre-conditions, so they fail (as fast as possible) if there is a problem. Try to avoid these internal consistency checks - as they tend to bloat your code - and rest assured: in 5 weeks or 5 months you will not remember if you really needed that check, or if it is just "defensive".
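A rough C# sketch of that advice (hypothetical names, not taken from any of the posts here): the public entry point validates its preconditions and fails fast, while the private helper simply trusts the guarantees established at the boundary.

using System;
using System.Collections.Generic;

public class OrderService
{
    // Public boundary: check preconditions and fail as fast as possible.
    public decimal PlaceOrder(IList<decimal> itemPrices)
    {
        if (itemPrices == null)
            throw new ArgumentNullException(nameof(itemPrices));
        if (itemPrices.Count == 0)
            throw new ArgumentException("An order needs at least one item.", nameof(itemPrices));

        return Total(itemPrices);
    }

    // Internal helper: no defensive re-checks; it relies on the
    // contract already enforced at the public boundary.
    private static decimal Total(IList<decimal> itemPrices)
    {
        decimal sum = 0m;
        foreach (var price in itemPrices)
            sum += price;
        return sum;
    }
}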
One way to answer this is to look at what else Uncle Bob has had to say on the topic. For example:
In a system with meager code coverage, few tests, and lots of tangled legacy code, defensive programming should be the rule.
In a system born of TDD, with 90+% coverage and highly reliable, well-maintained unit tests, defensive programming should be the exception.
From this, we can infer his main argument -- if the defensive checks are actually providing a benefit, then that is a hint that we are missing some constraints. If we are missing some constraints, and all the tests are passing, then we must also be missing some tests.
Or, to express the same idea in a slightly different way -- the constraints implied by the defensive patterns in your implementation belong closer to the boundary (ie, in the public API).
If there are constraints, for example, to limit what data is allowed to pass through the boundary, then there should be tests to ensure that the boundary actually implements the constraints.
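For instance, sticking with the hypothetical OrderService sketched above, NUnit boundary tests that pin those constraints down might look like this:

using System;
using System.Collections.Generic;
using NUnit.Framework;

public class OrderServiceBoundaryTests
{
    [Test]
    public void PlaceOrderRejectsNull()
    {
        var service = new OrderService();
        Assert.Throws<ArgumentNullException>(() => service.PlaceOrder(null));
    }

    [Test]
    public void PlaceOrderRejectsEmptyOrders()
    {
        var service = new OrderService();
        Assert.Throws<ArgumentException>(() => service.PlaceOrder(new List<decimal>()));
    }
}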
When you use TDD properly, you cover all the possible cases and assert that your public functions, which call the private ones, respond as expected not only for the happy scenario but for all the different possible scenarios. When you use defensive programming in your private methods, you are actually getting yourself ready for these (different possible) scenarios mentioned above.
I personally do not think defensive programming is bad even if it is in private methods. However, based on my description above, I see it as unnecessary double effort, and it also undermines the point of TDD, because you are handling these special cases in your application by complicating the code instead of writing it in a way that is proof against them.

TDD, How to write tests that will compile even if objects don't exist

I'm using VS 2012, but that's not really important.
What is important is that I'm trying to do some TDD by writing all my tests first and then creating the code.
However, the app will not compile because none of my objects or methods exist.
Now, to my mind, I should be able to create ALL my tests but still run my app so I can debug etc. The tests shouldn't prevent compilation because objects and methods are missing.
I thought the whole point of it was that as you develop your tests you can begin to see duplications etc so that you can refactor before writing a single line of code.
So the question is, is there a way to do this or am I doing this wrong?
EDIT
I am using VS2012 and C#
I see a small problem with
writing all my tests first and then creating the code.
You don't need to write ALL your tests first; you just need one: make it fail, make it pass, and repeat. That means at any point you should ideally have one failing test.
A compile failure counts as a failed test in that sense. So the next step is to make it pass, i.e. add stubs or return default values to make it compile. The test would then be red; then work at getting it to green.
Test Driven Development is about very small iterations. You don't define all your tests up front. You create one test based on one fraction of one requirement. Then you implement the code to pass that test. Once it's passing, you work on another fraction of a requirement.
The idea is that trying to do all the design up front (whether it be creating detailed class diagrams or creating a bunch of tests) means that you will find it too expensive to change a weakness in your design, so you won't improve your code.
Here's an example. Let's say you decide to use inheritance to relate two objects, but when you started implementing the objects, you found that made testing them tough. You discover it would be much easier to test each object individually, and relate them via containment instead. What is happening is the tests are driving your design in a more loosely coupled direction. This is a very good outcome of TDD - you are using tests to improve the design.
If you had written all your tests in advance assuming your design decision of inheritance was a good choice, you would either throw away a lot of work, or you would say "it's too tough to make a change like that now, so I'll just live with this sub-optimal design instead."
You can certainly create business-rule-related acceptance tests in advance. Those are called behavior tests (part of Behavior Driven Development, or BDD) and they are good to test features of the software from the user's point of view. But those are NOT unit tests. Unit tests are for testing code from the developer's point of view. Creating the unit tests in advance defeats the purpose of TDD, because it will make testing harder, it will prevent you from improving your code, and will often lead to rebellion and failure of the practice. That's why it's important to do it right.
What is important is that I'm trying to do some TDD by writing all my tests first and then creating the code.
The problem is that "writing all my tests first" is most emphatically not "do[ing] some TDD". Test driven development consists of lots of small repetitions of the "red-green-refactor" cycle:
Add a unit test to the test suite, run it and watch it fail (red)
Add just enough code to the system under test to make all the tests pass (green)
Improve the design of the system under test (typically by removing duplication) while keeping all the tests passing (refactor)
If you code an entire huge test suite up front, you'll spend forever trying to get to the "green" (all tests passing) state.
However, the app will not compile because none of my objects or methods exist.
That's typical of any compiled language; it's not a TDD issue per se. All it means is that, in order to watch the new test fail, you may have to write a minimal stub for whatever feature you're currently working on to make the compiler happy.
For example, I might write this test (using NUnit):
[Test]
public void DefaultGreetingIsHelloWorld()
{
    WorldGreeter target = new WorldGreeter();
    string expected = "Hello, world!";
    string actual = target.Greet();
    Assert.AreEqual(expected, actual);
}
And I'd have to then write this much code to get the app to compile and the test to fail:
public class WorldGreeter
{
    public string Greet()
    {
        return String.Empty;
    }
}
Once I've gotten the solution to build and I've seen the one failing test, I can add the code to make the first test pass:
public string Greet()
{
    return "Hello, world!";
}
Once all tests pass, I can look through the system under test and see what could be done to improve the design. However, it's essential to the TDD discipline to go through both the "red" and "green" steps before playing around with refactoring.
I thought the whole point of it was that as you develop your tests you can begin to see duplications etc so that you can refactor before writing a single line of code.
Martin Fowler defines refactoring as "a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior" (emphasis added). If you haven't written a single line of code, there's nothing to refactor.
So the question is, is there a way to do this or am I doing this wrong?
If you're looking to do TDD, then, yes, I fear you are doing this wrong. You may well be able to deliver great code doing what you're doing, but it isn't TDD; whether or not that's a problem is for you to decide for yourself.
You should be able to create your empty class with stub functions, no?
class Bongo;   // forward declaration so the stub signature compiles

class Whatever {
public:
    char *foo( const char *name ) { return nullptr; }   // stub: placeholder return value
    int can_wibble( Bongo *myBongo ) { return 0; }       // stub: placeholder return value
};
Then you can compile.
No. It's about coding just enough to verify the implementation of the required use cases.
You can define your test cases early, but to code them you iteratively write a test and have it fail, then write some code that makes it pass.
Then rinse and repeat until all your test cases are covered.
Edit to address comment.
As you build out the code, that's where your design decisions and faults are identified. Extreme programming lends itself to being able to change code without worry, as the test base protects your requirements. Your intentions are good, but the reality is that you'll refactor/redesign test cases as you discover design issues and flaws while building out the code and test base.
However, IMHO, in the general case a test that doesn't compile is effectively a failing meta-test that needs to be corrected before moving on. It's telling you to write some code!
Use mocks. From Wikipedia: mock objects are simulated objects that mimic the behavior of real objects in controlled ways. A programmer typically creates a mock object to test the behavior of some other object, in much the same way that a car designer uses a crash test dummy to simulate the dynamic behavior of a human in vehicle impacts.
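A minimal hand-rolled sketch in C# (hypothetical names, no mocking framework): the class under test depends on an interface, so the test compiles and runs against a fake even though no real implementation of the collaborator exists yet.

using NUnit.Framework;

// The real mail sender may not exist yet; the interface is enough to compile against.
public interface IMailSender
{
    void Send(string to, string body);
}

// Hand-rolled mock: it just records what it was asked to do.
public class MockMailSender : IMailSender
{
    public string LastRecipient;
    public void Send(string to, string body) { LastRecipient = to; }
}

public class WelcomeMailerTests
{
    [Test]
    public void SendsWelcomeMailToNewUser()
    {
        var sender = new MockMailSender();
        var mailer = new WelcomeMailer(sender);
        mailer.Welcome("ada@example.com");
        Assert.AreEqual("ada@example.com", sender.LastRecipient);
    }
}

// Just enough production code to make the test compile and pass.
public class WelcomeMailer
{
    private readonly IMailSender _sender;
    public WelcomeMailer(IMailSender sender) { _sender = sender; }
    public void Welcome(string address) { _sender.Send(address, "Welcome!"); }
}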
I found this approach, using dynamic for objects that don't exist yet:
https://coderwall.com/p/0uzmdw

Is it Acceptable to use Global Variables for Debug code

I have a very well thought out object oriented structure to a large project that I am working on. However, in areas of my code I would like to toggle debug sections on and off through a set of variables located in one easy to access area. My question is whether this is a good practice or if I should implement an even more convoluted passing scheme to pass debug parameters.
You should probably take a good look at the System.Diagnostics.Debug class and how it is implemented using the Conditional attribute.
Build something like that. Ease of use is nothing against the complexity of being certain you turned it all off.
And of course C# doesn't have global variables anyway.
You should use the Debug class, which has numerous methods to handle debugging that are removed when built in release mode. Conditional methods would probably help you as well.
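A small sketch of both suggestions (the class and method names here are made up): calls to System.Diagnostics.Debug and to any method marked with [Conditional("DEBUG")] are removed by the compiler when DEBUG is not defined, so nothing has to be toggled off by hand in Release builds.

using System.Diagnostics;

public static class DebugLog
{
    // Calls to this method are compiled away unless DEBUG is defined,
    // the same mechanism Debug.WriteLine itself relies on.
    [Conditional("DEBUG")]
    public static void Dump(string area, string message)
    {
        Debug.WriteLine("[" + area + "] " + message);
    }
}

public class PaymentProcessor
{
    public void Process(decimal amount)
    {
        DebugLog.Dump("payments", "processing " + amount);  // disappears in Release builds
        // ... real work ...
    }
}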

Can I ensure all tests contain an assertion in test/unit?

With test/unit, and minitest, is it possible to fail any test that doesn't contain an assertion, or would monkey-patching be required (for example, checking if the assertion count increased after each test was executed)?
Background: I shouldn't write unit tests without assertions - at a minimum, I should use assert_nothing_raised if I'm smoke testing to indicate that I'm smoke testing.
Usually I write tests that fail first, but I'm writing some regression tests. Alternatively, I could supply an incorrect expected value to see if the test is comparing the expected and actual value.
To ensure that unit tests actually verify anything, a technique called mutation testing is used.
For Ruby, you can take a look at Mutant.
As PK's link points out too, the presence of assertions in itself doesn't mean the unit test is meaningful and correct. I believe there is no automatic replacement for careful thinking and awareness.
Ensuring the tests fail first is a good practice, which should be made into a habit.
Apart from the things you mention, I often set wrong values in asserts in new tests, to check that the test really runs and fails. (Then I fix it of course :-) This is less obtrusive than editing the production code.
I don't really think that forcing the test to fail without an assert is really helpful. Having an assert in a test is not a goal in itself - the goal is to write useful tests.
The missing assert is just an indication that the test may not be useful. The interesting question is: will the test fail if something breaks? If it doesn't, it's obviously useless.
If all you're testing for is that the code doesn't crash, then assert_nothing_raised around it is just a kind of comment. But testing for "no explosions" probably indicates a weak test in itself. In most cases, it doesn't give you any useful information about your code (because "no crash != correct"), so why did you write the test in the first place? Plus I rather prefer a method that explodes properly to one that just returns a wrong result.
I found the best regression tests come from the field: bang on your app (or have your tester do it), and for each problem you find, write a test that fails. Fix it, and have the test pass.
Otherwise I'd test the behavior, not the absence of crashes. In the case that I have "empty" tests (meaning that I didn't write the test code yet), I usually put a #flunk inside to remind me.

Is it idiomatic Ruby to add an assert( ) method to Ruby's Kernel class?

I'm expanding my Ruby understanding by coding an equivalent of Kent Beck's xUnit in Ruby. Python (which Kent writes in) has an assert() method in the language which is used extensively. Ruby does not. I think it should be easy to add this but is Kernel the right place to put it?
BTW, I know of the existence of the various Unit frameworks in Ruby - this is an exercise to learn the Ruby idioms, rather than to "get something done".
No, it's not a best practice. The best analogy to assert() in Ruby is just raising:
raise "This is wrong" unless expr
and you can implement your own exceptions if you want to provide more specific exception handling.
I think it is totally valid to use asserts in Ruby. But you are mentioning two different things:
xUnit frameworks use assert methods for checking your tests expectations. They are intended to be used in your test code, not in your application code.
Some languages like C, Java or Python, include an assert construction intended to be used inside the code of your programs, to check assumptions you make about their integrity. These checks are built inside the code itself. They are not a test-time utility, but a development-time one.
I recently wrote solid_assert, a little Ruby library implementing an assertion utility, and also a post on my blog explaining its motivation. It lets you write expressions of the form:
assert some_string != "some value"
assert clients.empty?, "Isn't the clients list empty?"

invariant "Lists with different sizes?" do
  one_variable = calculate_some_value
  other_variable = calculate_some_other_value
  one_variable > other_variable
end
And they can be deactivated, so assert and invariant are evaluated as empty statements. This lets you avoid performance problems in production. But note that The Pragmatic Programmer: From Journeyman to Master recommends against deactivating them; you should only deactivate them if they really affect performance.
Regarding the answer saying that the idiomatic Ruby way is using a normal raise statement, I think it lacks expressivity. One of the golden rules of assertive programming is not using assertions for normal exception handling. They are two completely different things. If you use the same syntax for the two of them, I think your code will be more obscure. And of course you lose the capability of deactivating them.
Some widely-regarded books that dedicate whole sections to assertions and recommend their use:
The Pragmatic Programmer: from Journeyman to Master by Andrew Hunt and David Thomas
Code Complete: A Practical Handbook of Software Construction by Steve McConnell
Writing Solid Code by Steve Maguire
Programming with assertions is an article that illustrates well what assertive programming is about and when to use it (it is based on Java, but the concepts apply to any language).
What's your reason for adding the assert method to the Kernel module? Why not just use another module called Assertions or something?
Like this:
module Assertions
  def assert(param)
    # do something with param
  end

  # define more assertions here
end
If you really need your assertions to be available everywhere, do something like this:
class Object
  include Assertions
end
Disclaimer: I didn't test the code but in principle I would do it like this.
It's not especially idiomatic, but I think it's a good idea. Especially if done like this:
def assert(msg = nil)
  if DEBUG
    raise msg || "Assertion failed!" unless yield
  end
end
That way there's no impact if you decide not to run with DEBUG set (or some other convenient switch; I've used Kernel.do_assert in the past).
My understanding is that you're writing your own testing suite as a way of becoming more familiar with Ruby. So while Test::Unit might be useful as a guide, it's probably not what you're looking for (because it's already done the job).
That said, Python's assert is (to me, at least) more analogous to C's assert(3). It's not specifically designed for unit tests, but rather to catch cases where "this should never happen".
How Ruby's built-in unit tests tend to view the problem, then, is that each individual test case class is a subclass of TestCase, and that includes an "assert" statement which checks the validity of what was passed to it and records it for reporting.
