Is it idiomatic Ruby to add an assert() method to Ruby's Kernel class? - ruby

I'm expanding my Ruby understanding by coding an equivalent of Kent Beck's xUnit in Ruby. Python (which Kent writes in) has an assert statement built into the language, which is used extensively. Ruby does not. I think it should be easy to add this, but is Kernel the right place to put it?
BTW, I know of the existence of the various Unit frameworks in Ruby - this is an exercise to learn the Ruby idioms, rather than to "get something done".

No, it's not a best practice. The closest analogy to assert() in Ruby is simply raising:
raise "This is wrong" unless expr
and you can define your own exception classes if you want more specific exception handling.
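For instance, a minimal sketch of that idiom with a custom exception class (AssertionFailedError and check_positive are made-up names for illustration):

class AssertionFailedError < StandardError; end

def check_positive(n)
  # fail loudly when the assumption is violated
  raise AssertionFailedError, "expected a positive number, got #{n}" unless n > 0
  n
end

check_positive(5)   # => 5
check_positive(-1)  # raises AssertionFailedError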

I think it is totally valid to use asserts in Ruby. But you are mentioning two different things:
xUnit frameworks use assert methods for checking your tests expectations. They are intended to be used in your test code, not in your application code.
Some languages, like C, Java, or Python, include an assert construct intended to be used inside the code of your programs, to check the assumptions you make about their integrity. These checks are built into the code itself. They are not a test-time utility, but a development-time one.
I recently wrote solid_assert, a little Ruby library implementing an assertion utility, along with a post on my blog explaining its motivation. It lets you write expressions in the form:
assert some_string != "some value"
assert clients.empty?, "Isn't the clients list empty?"
invariant "Lists with different sizes?" do
one_variable = calculate_some_value
other_variable = calculate_some_other_value
one_variable > other_variable
end
And they can be deactivated, so that assert and invariant are evaluated as empty statements. This lets you avoid performance problems in production. But note that The Pragmatic Programmer: From Journeyman to Master recommends against deactivating them: you should only do so if they really affect performance.
Regarding the answer saying that the idiomatic Ruby way is a plain raise statement: I think it lacks expressivity. One of the golden rules of assertive programming is not to use assertions for normal exception handling; they are two completely different things. If you use the same syntax for both, your code becomes more obscure. And of course you lose the ability to deactivate them.
Some widely-regarded books that dedicate whole sections to assertions and recommend their use:
The Pragmatic Programmer: from Journeyman to Master by Andrew Hunt and David Thomas
Code Complete: A Practical Handbook of Software Construction by Steve McConnell
Writing Solid Code by Steve Maguire
Programming with Assertions is an article that illustrates well what assertive programming is about and when to use it (it is based on Java, but the concepts apply to any language).

What's your reason for adding the assert method to the Kernel module? Why not just use another module called Assertions or something?
Like this:
module Assertions
  def assert(param)
    # one possible implementation: fail loudly on a falsy argument
    raise "Assertion failed!" unless param
  end

  # define more assertions here
end
If you really need your assertions to be available everywhere do something like this:
class Object
  include Assertions
end
Disclaimer: I didn't test the code but in principle I would do it like this.
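A usage sketch under the same assumptions (Calculator is a made-up example; it relies on assert raising when its argument is falsy):

class Calculator
  include Assertions

  def divide(a, b)
    assert b != 0
    a / b
  end
end

Calculator.new.divide(10, 2) # => 5
Calculator.new.divide(10, 0) # raises "Assertion failed!"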

It's not especially idiomatic, but I think it's a good idea. Especially if done like this:
def assert(msg = nil)
  if DEBUG
    raise msg || "Assertion failed!" unless yield
  end
end
That way there's no impact if you decide not to run with DEBUG (or some other convenient switch; I've used Kernel.do_assert in the past) set.
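A usage sketch, assuming DEBUG is a constant you define yourself at boot:

DEBUG = true

assert("list should start empty") { [].empty? } # passes silently
assert { 1 > 2 }                                # raises RuntimeError: Assertion failed!

Because the condition lives in the block, it isn't even evaluated when DEBUG is off, so disabled assertions cost almost nothing.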

My understanding is that you're writing your own testing suite as a way of becoming more familiar with Ruby. So while Test::Unit might be useful as a guide, it's probably not what you're looking for (because it's already done the job).
That said, Python's assert is (to me, at least) more analogous to C's assert(3). It's not specifically designed for unit tests, but rather to catch cases where "this should never happen".
The way Ruby's built-in unit tests tend to view the problem, then, is that each individual test case class is a subclass of TestCase, which provides an assert method that checks the validity of what was passed to it and records the result for reporting.


How to Work with Ruby Duck Typing

I am learning Ruby and I'm having a major conceptual problem concerning typing. Allow me to detail why I don't understand this paradigm.
Say I am method chaining for concise code, as you do in Ruby. I have to know precisely the return type of each method call in the chain; otherwise I can't know what methods are available on the next link. Do I have to check the method documentation every time? I'm running into this constantly in tutorial exercises. It seems I'm stuck with a process of reference, infer, run, fail, fix, repeat to get code running, rather than knowing precisely what I'm working with while coding. This flies in the face of Ruby's promise of intuitiveness.
Say I am using a third-party library; once again I need to know what types the parameters are allowed to be, otherwise I get a failure. I can look at the code, but there may or may not be any comments or declarations of what type the method is expecting. I understand you code based on the methods available on an object, not its type. But then I have to be sure whatever I pass as a parameter has all the methods the library expects, so I still have to do type checking. Do I have to hope and pray everything is documented properly on an interface so I know whether I'm expected to give a string, a hash, a class, etc.?
If I look at the source of a method I can get a list of methods being called and infer the type expected, but I have to perform that analysis myself.
Ruby and duck typing: design by contract impossible?
The discussions in the preceding Stack Overflow question don't really answer anything other than "there are processes you have to follow", and those processes don't seem to be standard; everyone has a different opinion on which process to follow, and the language has zero enforcement. Method validation? Test-driven design? Documented API? Strict method naming conventions? What's the standard, and who dictates it? What do I follow? Would these guidelines solve this concern: https://stackoverflow.com/questions/616037/ruby-coding-style-guidelines? Are there editors that help?
Conceptually I don't get the advantage either. You need to know which methods are required by anything you call, so you are dealing with types whenever you code regardless. You just aren't informing the language, or anyone else, explicitly, unless you decide to document it. Then you are stuck doing all type checking at runtime instead of while coding. I've done PHP and Python programming and I don't understand it there either.
What am I missing or not understanding? Please help me understand this paradigm.
This is not a Ruby specific problem, it's the same for all dynamically typed languages.
Usually there are no guidelines for how to document this either (and most of the time it's not really possible). See, for instance, map in the Ruby documentation:
map { |item| block } → new_ary
map → Enumerator
What are item, block, and new_ary here, and how are they related? There's no way to tell unless you know the implementation or can somehow infer it from the name of the function. Specifying the type is also hard, since new_ary depends on what block returns, which in turn depends on the type of item, which could be different for each element in the Array.
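A short illustration of why no single signature fits:

[1, 2, 3].map { |item| item.to_s }     # => ["1", "2", "3"]   Integers in, Strings out
[1, "a", :b].map { |item| item.class } # => [Integer, String, Symbol]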
A lot of times you also stumble across documentation that says an argument is of type Object, which again tells you nothing, since everything is an Object.
OCaml has a solution for this, it supports structural typing so a function that needs an object with a property foo that's a String will be inferred to be { foo : String } instead of a concrete type. But OCaml is still statically typed.
Worth noting is that this can be a problem in statically typed languages too. Scala has very generic methods on collections, which leads to type signatures like ++[B >: A, That](that: GenTraversableOnce[B])(implicit bf: CanBuildFrom[Array[T], B, That]): That for appending two collections.
So most of the time, you will just have to learn this by heart in dynamically typed languages, and perhaps help improve the documentation of libraries you are using.
And this is why I prefer static typing ;)
Edit: One thing that might make sense is to do what Scala also does. It doesn't actually show you that type signature for ++ by default; instead it shows ++[B](that: GenTraversableOnce[B]): Array[B], which is not as generic but probably covers most of the use cases. So for Ruby's map it could have a monomorphic type signature like Array<a> -> (a -> b) -> Array<b>. It's only correct for the cases where the list contains values of one type and the block only returns elements of one other type, but it's much easier to understand and gives a good overview of what the function does.
Yes, you seem to misunderstand the concept. It's not a replacement for static type checking; it's just different. For example, if you convert objects to JSON (for rendering them to the client), you don't care about the actual type of the object, as long as it has a #to_json method. In Java, you'd have to create an IJsonable interface. In Ruby, no such overhead is needed.
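A minimal sketch of that point (Event and render are made-up names):

require 'json'

class Event
  def initialize(name)
    @name = name
  end

  # no interface declared anywhere; this method alone is the contract
  def to_json(*args)
    { name: @name }.to_json(*args)
  end
end

def render(payload)
  payload.to_json # duck typing: anything with #to_json will do
end

render({ id: 1 })           # => '{"id":1}'
render(Event.new("deploy")) # => '{"name":"deploy"}'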
As for knowing what to pass where and what returns what: memorize this or consult docs each time. We all do that.
Just the other day, I saw a Rails programmer with 6+ years of experience complain on Twitter that he can't memorize the order of parameters to alias_method: does the new name go first or last?
This flies in the face of Ruby's promise of intuitiveness.
Not really. Maybe it's just a badly written library. In core Ruby everything is quite intuitive, I dare say.
Statically typed languages with their powerful IDEs have a small advantage here, because they can show you documentation right here, very quickly. This is still accessing documentation, though. Only quicker.
Consider that the design choices of strongly typed languages (C++, Java, C#, et al.) enforce strict declarations of the types passed to methods and the types returned by them. This is because these languages were designed to validate that arguments are correct (and since these languages are compiled, this work can be done at compile time). But some questions can only be answered at run time, and C++ for example has RTTI (Run-Time Type Information) to examine and enforce type guarantees. But as the developer, you are guided by syntax, semantics, and the compiler to produce code that follows these type constraints.
Ruby gives you the flexibility to take dynamic argument types and return dynamic types. This freedom enables you to write more generic code (read Stepanov on the STL and generic programming), and gives you a rich set of introspection methods (is_a?, instance_of?, respond_to?, kind_of?, et al.) which you can use dynamically. Ruby enables you to write generic methods, but you can also explicitly enforce design by contract, and handle contract failures however you choose.
Yes, you will need to use care when chaining methods together, but learning Ruby is not just a few new keywords. Ruby supports multiple paradigms: you can write procedural, object-oriented, generic, and functional programs. The cycle you are in right now will improve quickly as you learn more about Ruby.
Perhaps your concern stems from a bias towards strongly typed languages (C++, Java, C#, et al.). Duck typing is a different approach; you think differently. Duck typing means that if an object looks like a duck and behaves like a duck, then it is a duck. Everything (almost) is an Object in Ruby, so everything is polymorphic.
Consider templates (C++ has them, C# has them, Java is getting them, C has macros). You build an algorithm, and then have the compiler generate instances for your chosen types. You aren't doing design by contract with generics, but when you recognize their power, you write less code, and produce more.
Some of your other concerns,
third party libraries (gems) are not as hard to use as you fear
Documented API? See RDoc and http://www.ruby-doc.org/
RDoc documentation is (usually) provided for libraries
coding guidelines - look at the source for a couple of simple gems for starters
naming conventions - snake case and camel case are both popular
Suggestion - approach an online tutorial with an open mind, do the tutorial (http://rubymonk.com/learning/books/ is good), and you will have more focused questions.

Ruby features that are risky to use

As a Ruby programmer, have you ever felt that some feature is a bit risky to use, maybe because of its strange behaviour? It might be well documented, but hard to spot while debugging, or counterintuitive to remember.
I generally try to stay away from String#gsub!. The doc says: "Performs the substitutions of String#gsub in place, returning str, or nil if no substitutions were performed." So if there's nothing to substitute, it returns nil. Practically, I've never seen a use case where this comes in handy.
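The trap in practice looks like this:

name = "ruby"
name = name.gsub!("x", "y") # no match, so gsub! returns nil
name.upcase                 # NoMethodError: undefined method `upcase' for nil

# The non-bang version never returns nil:
name = "ruby"
name = name.gsub("x", "y")  # => "ruby"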
So, with your experience, is there anything else you'd like to add?
Using return in a lambda, Proc or block. The semantics are well defined, but you will get it wrong, and you will get a LocalJumpError.
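A quick sketch of the difference:

double = lambda { |x| return x * 2 } # a lambda's return exits the lambda itself
double.call(3)                       # => 6

escape = Proc.new { return 42 }      # a proc's return exits the enclosing method;
escape.call                          # at the top level there is none: LocalJumpError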
The metaprogramming features of Ruby can be used in very dangerous ways. I've seen code that attempts to runtime-redefine common methods on core classes like String or Array, and while I can see this being maybe acceptable for a tiny temporary script, I don't think it is remotely a good idea in a complex application with many dependencies or many maintainers.
Well, the most widely abused dangerous feature of Ruby is certainly evaling strings. In the vast majority of cases (if not all) it can be avoided by using other methods, usually define_method, const_get, const_set, etc.
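A sketch of those alternatives (Settings is a made-up example):

class Settings
  def initialize(values)
    @values = values
  end

  # define_method instead of eval-ing "def #{name}; @values['#{name}']; end"
  %w[host port].each do |name|
    define_method(name) { @values[name] }
  end
end

s = Settings.new("host" => "localhost", "port" => 8080)
s.host                      # => "localhost"
Object.const_get(:Settings) # const_get instead of eval("Settings")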
throw/catch (not the same as begin/rescue!) is basically GOTO, and that could be considered to be a risky feature to use in any language.
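For reference, this is what the jump looks like:

result = catch(:found) do
  [4, 8, 15].each do |n|
    throw :found, n if n > 5 # unwinds straight back to the catch block
  end
  :nothing                   # reached only if throw never fires
end
result # => 8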

When to use RSpec let()?

I tend to use before blocks to set instance variables. I then use those variables across my examples. I recently came upon let(). According to RSpec docs, it is used to
... to define a memoized helper method. The value will be cached across multiple calls in the same example but not across examples.
How is this different from using instance variables in before blocks? And also when should you use let() vs before()?
I always prefer let to an instance variable for a couple of reasons:
Instance variables spring into existence when referenced. This means that if you fat-finger the spelling of the instance variable, a new one will be created and initialized to nil, which can lead to subtle bugs and false positives. Since let creates a method, you'll get a NameError when you misspell it, which I find preferable (see the sketch after this list). It makes it easier to refactor specs, too.
A before(:each) hook will run before each example, even if the example doesn't use any of the instance variables defined in the hook. This isn't usually a big deal, but if the setup of the instance variable takes a long time, then you're wasting cycles. For the method defined by let, the initialization code only runs if the example calls it.
You can refactor from a local variable in an example directly into a let without changing the referencing syntax in the example. If you refactor to an instance variable, you have to change how you reference the object in the example (e.g. add an @).
This is a bit subjective, but as Mike Lewis pointed out, I think it makes the spec easier to read. I like the organization of defining all my dependent objects with let and keeping my it block nice and short.
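A minimal sketch of reason 1 above, using the modern expect syntax (user_name and account_name are made-up):

RSpec.describe "misspelling behaviour" do
  before { @user_name = "alice" }
  let(:account_name) { "alice" }

  it "silently yields nil for a misspelled instance variable" do
    expect(@user_nmae).to be_nil # typo, yet no error: a false positive waiting to happen
  end

  it "raises NameError for a misspelled let helper" do
    expect { account_nmae }.to raise_error(NameError)
  end
end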
A related link can be found here: http://www.betterspecs.org/#let
The difference between using instance variables and let() is that let() is lazily evaluated. This means that let() is not evaluated until the method it defines is run for the first time.
The difference between before and let is that let() gives you a nice way of defining a group of variables in a 'cascading' style. By doing this, the spec looks a little better because the code is simpler.
I have completely replaced all uses of instance variables in my RSpec tests with let(). I've written a quickie example for a friend who used it to teach a small RSpec class: http://ruby-lambda.blogspot.com/2011/02/agile-rspec-with-let.html
As some of the other answers here say, let() is lazily evaluated, so it will only load the things that require loading. It DRYs up the spec and makes it more readable. I've in fact ported the RSpec let() code for use in my controllers, in the style of the inherited_resources gem: http://ruby-lambda.blogspot.com/2010/06/stealing-let-from-rspec.html
Along with lazy evaluation, the other advantage is that, combined with ActiveSupport::Concern, and the load-everything-in spec/support/ behavior, you can create your very own spec mini-DSL specific to your application. I've written ones for testing against Rack and RESTful resources.
The strategy I use is factory-everything (via Machinist plus Forgery/Faker). However, it is possible to use factories in combination with before(:each) blocks to preload them for an entire set of example groups, allowing the specs to run faster: http://makandra.com/notes/770-taking-advantage-of-rspec-s-let-in-before-blocks
It is important to keep in mind that let is lazily evaluated, so don't put side-effecting methods in it; otherwise you won't be able to change from let to before(:each) easily.
You can use let! instead of let so that it is evaluated before each scenario.
In general, let() is a nicer syntax, and it saves you typing @name symbols all over the place. But caveat emptor! I have found let() also introduces subtle bugs (or at least head-scratching) because the variable doesn't really exist until you try to use it. The telltale sign: if adding a puts after the let() to see that the variable is correct allows a spec to pass, but without the puts the spec fails, you have found this subtlety.
I have also found that let() doesn't seem to cache in all circumstances! I wrote it up in my blog: http://technicaldebt.com/?p=1242
Maybe it is just me?
Dissenting voice here: after 5 years of rspec I don't like let very much.
1. Lazy evaluation often makes test setup confusing
It becomes difficult to reason about setup when some things that have been declared in setup are not actually affecting state, while others are.
Eventually, out of frustration someone just changes let to let! (same thing without lazy evaluation) in order to get their spec working. If this works out for them, a new habit is born: when a new spec is added to an older suite and it doesn't work, the first thing the writer tries is to add bangs to random let calls.
Pretty soon all the performance benefits are gone.
2. Special syntax is unusual to non-rspec users
I would rather teach Ruby to my team than the tricks of rspec. Instance variables or method calls are useful everywhere in this project and others, let syntax will only be useful in rspec.
3. The "benefits" allow us to easily ignore good design changes
let() is good for expensive dependencies that we don't want to create over and over.
It also pairs well with subject, allowing you to DRY up repeated calls to multi-argument methods.
Expensive dependencies repeated many times, and methods with big signatures, are both points where we could make the code better:
maybe I can introduce a new abstraction that isolates a dependency from the rest of my code (which would mean fewer tests need it)
maybe the code under test is doing too much
maybe I need to inject smarter objects instead of a long list of primitives
maybe I have a violation of tell-don't-ask
maybe the expensive code can be made faster (rarer - beware of premature optimisation here)
In all these cases, I can address the symptom of difficult tests with the soothing balm of rspec magic, or I can try to address the cause. I feel like I spent way too much of the last few years on the former, and now I want some better code.
To answer the original question: I would prefer not to, but I do still use let. I mostly use it to fit in with the style of the rest of the team (it seems like most Rails programmers in the world are now deep into their rspec magic so that is very often). Sometimes I use it when I'm adding a test to some code that I don't have control of, or don't have time to refactor to a better abstraction: i.e. when the only option is the painkiller.
let is functional, as it's essentially a Proc. Also, its value is cached.
One gotcha I found right away with let arises in a spec block that is evaluating a change:
let(:object) { FactoryGirl.create :object }

expect {
  post :destroy, id: review.id
}.to change(Object, :count).by(-1)
You need to be sure to call the let-defined method outside of your expect block, because FactoryGirl.create only runs when the let block is first invoked. I usually do this by verifying the object is persisted:
object.persisted?.should eq true
Otherwise, the first time the let block is called, the change in the database will actually happen then, due to the lazy instantiation.
Update
Just adding a note. Be careful playing code golf, or in this case rspec golf, with this answer.
In this case, I just have to call some method to which the object responds. So I invoke the .persisted? method on the object, as it's truthy. All I'm trying to do is instantiate the object. You could call empty? or nil? too. The point isn't the test, but bringing the object to life by calling it.
So you can't refactor
object.persisted?.should eq true
to be
object.should be_persisted
as the object hasn't been instantiated... it's lazy. :)
Update 2
Leverage the let! syntax for immediate object creation, which should avoid this issue altogether. Note, though, that it will defeat a lot of the purpose of the laziness of the non-banged let.
Also, in some instances you might actually want to leverage the subject syntax instead of let, as it may give you additional options:
subject(:object) { FactoryGirl.create :object }
"before" by default implies before(:each). Ref The Rspec Book, copyright 2010, page 228.
before(scope = :each, options={}, &block)
I use before(:each) to seed some data for each example group without having to call the let method to create the data in the "it" block. Less code in the "it" block in this case.
I use let if I want some data in some examples but not others.
Both before and let are great for DRYing up the "it" blocks.
To avoid any confusion, let is not the same as before(:all). let re-evaluates its method and value for each example ("it"), but caches the value across multiple calls within the same example. You can read more about it here: https://www.relishapp.com/rspec/rspec-core/v/2-6/docs/helper-methods/let-and-let
Note to Joseph: if you are creating database objects in a before(:all), they won't be captured in a transaction, and you're much more likely to leave cruft in your test database. Use before(:each) instead.
The other reason to use let and its lazy evaluation is that you can take a complicated object and test individual pieces by overriding lets in contexts, as in this very contrived example:
context "foo" do
let(:params) do
{ :foo => foo, :bar => "bar" }
end
let(:foo) { "foo" }
it "is set to foo" do
params[:foo].should eq("foo")
end
context "when foo is bar" do
let(:foo) { "bar" }
# NOTE we didn't have to redefine params entirely!
it "is set to bar" do
params[:foo].should eq("bar")
end
end
end
I use let to test my HTTP 404 responses in my API specs using contexts.
To create the resource, I use let!. But to store the resource identifier, I use let. Take a look at how it looks:
let!(:country)   { create(:country) }
let(:country_id) { country.id }

before { get "api/countries/#{country_id}" }

it('responds with HTTP 200') { should respond_with(200) }

context 'when the country does not exist' do
  let(:country_id) { -1 }

  it('responds with HTTP 404') { should respond_with(404) }
end
That keeps the specs clean and readable.

How to deal with not knowing what exceptions can be raised by a library method in Ruby?

This is somewhat of a broad question, but it is one that I continue to come across when programming in Ruby. I am from a largely C and Java background, where when I use a library function or method, I look at the documentation and see what it returns on error (usually in C) or which exceptions it can throw (in Java).
In Ruby, the situation seems completely different. Just now I need to parse some JSON I receive from a server:
data = JSON.parse(response)
Naturally, the first thing I think after writing this code is: what if the input is bad? Is parse going to return nil on error, or raise some exception, and if so, which ones?
I check the documentation (http://flori.github.com/json/doc/JSON.html#M000022) and see, simply:
"Parse the JSON string source into a Ruby data structure and return it."
This is just an example of a pattern I have run into repeatedly in Ruby. Originally, I figured it was some shortcoming of the documentation of whatever library I was working with, but now I am starting to feel that this is standard practice and that I am in a somewhat different mindset than Ruby programmers. Is there some convention I am unaware of?
How do developers deal with this?
(And yes I did look at the code of the library method, and can get some idea of what exceptions are raised but I cannot be 100% sure and if it is not documented I feel uncomfortable relying on it.)
EDIT: After looking at the first two answers, let me continue the JSON parsing example from above.
I suspect I should not do:
begin
  data = JSON.parse(response)
  raise "parse error" if data.nil?
rescue Exception => e
  # blahblah
end
because I can look at the code/tests and see that it seems to raise a ParserError on error (returning nil appears not to be standard practice in Ruby). Would I be correct in saying the recommended practice is to do:
begin
  data = JSON.parse(response)
rescue JSON::ParserError => e
  # blahblah
end
...based upon what I learned about ParserError by looking through the code and tests?
(I also edited the example to clarify it is a response from a server that I am parsing.)
(And yes I did look at the code of the library method, and can get some idea of what exceptions are raised but I cannot be 100% sure and if it is not documented I feel uncomfortable relying on it.)
I suggest having a look at the tests, as they will show some of the "likely" scenarios and what might be raised. Don't forget that good tests are documentation, too.
Should you want to discard invalid JSON data:
begin
  res = JSON.parse(string)
rescue JSON::ParserError => e
  # string was not valid
end
I guess that if no documentation is provided, you have to rely on something like this:
begin
  # code goes here
rescue
  # fail reason is in $!
end
You can never be sure what exceptions can be raised, unless the library code catches them all and then wraps them. Your best bet is to assume good input at your boundaries by sanitising what comes in, and then use your own higher-level exception handling to catch bad input.
Your question boils down to basically two questions: is there a convention or standard for finding possible exceptions, and also where is the documentation related to such a convention?
For the first question, the closest thing to a convention or standard is the location and existence of an exceptions.rb file. For libraries or gems where the source code is publicly available, you can typically find the types of exceptions in this file. (Ref here).
If source code is not available or easily accessed, the documentation is your next best source of information. This brings us to your second question. Unfortunately, documentation does not have a consistent format concerning potential exceptions, even in the standard libraries. For example, the Net::HTTP documentation doesn't make it obvious what exceptions are available, although if you dig through it you will find that all exceptions inherit from Net::HTTPExceptions.
As another example (again from the standard library documentation), JSON documentation shows an Exception class (which is indeed in an exceptions.rb file, albeit in the source at json/lib/json/add/exceptions.rb). The point here is that it's inconsistent; the documentation for the Exception class is not listed in a way similar to that of Net::HTTPException.
Moreover, in documentation for most methods there is no indication of the exception that may be raised. Ref for example parse, one of the most-used methods of the JSON module already mentioned: exceptions aren't mentioned at all.
The lack of standard and consistency is also found in core modules. Documentation for Math doesn't contain any reference to an exceptions.rb file. Same with File, and its parent IO.
And so on, and so on.
A web search will turn up lots of information on how to rescue exceptions, and even multiple types of exceptions (ref here, here, here, here, etc). However, none of these indicate an answer to your questions of what is the standard for finding exceptions that can be raised, and where is the documentation for this.
As a final note, it has been suggested here that if all else fails you can rescue StandardError. This is a less-than-ideal practice in many cases (ref this SO answer), although I'm assuming you already understand this based on your familiarity with Java and the way you've asked this question. And of course coming from a Java world, you'll need to remember to rescue StandardError and not Exception.
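A sketch of that distinction, reusing the JSON example (response is assumed to hold the server's reply):

require 'json'

response = "{ not json"
begin
  data = JSON.parse(response)
rescue JSON::ParserError => e # most specific: preferred
  warn "bad JSON: #{e.message}"
rescue StandardError => e     # a bare `rescue` clause means exactly this
  warn "unexpected: #{e.class}"
end
# `rescue Exception` would also trap SignalException and SystemExit,
# which is almost never what you want.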

How can I program defensively in Ruby?

Here's a perfect example of the problem: Classifier gem breaks Rails.
Original question:
One thing that concerns me as a security professional is that Ruby doesn't have a parallel of Java's package-privacy. That is, this isn't valid Ruby:
public module Foo
  public module Bar
    # factory method for new Bar implementations
    def self.new(...)
      SimpleBarImplementation.new(...)
    end

    def baz
      raise NotImplementedError.new('Implementing Classes MUST redefine #baz')
    end
  end

  private class SimpleBarImplementation
    include Bar
    def baz
      ...
    end
  end
end
It'd be really nice to be able to prevent monkey-patching of Foo::BarImpl. That way, people who rely on the library know that nobody has messed with it. Imagine if somebody changed the implementation of MD5 or SHA1 on you! I can call freeze on these classes, but I have to do it on a class-by-class basis, and other scripts might modify them before I finish securing my application if I'm not very careful about load order.
Java provides lots of other tools for defensive programming, many of which are not possible in Ruby. (See Josh Bloch's book for a good list.) Is this really a concern? Should I just stop complaining and use Ruby for lightweight things and not hope for "enterprise-ready" solutions?
(And no, core classes are not frozen by default in Ruby. See below:)
require 'md5'
# => true
MD5.frozen?
# => false
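For reference, a sketch of what class-by-class freezing buys you (Checksum is a made-up stand-in; modern Rubies raise FrozenError, older ones a RuntimeError, at the moment of modification):

class Checksum
  def digest(s)
    s.bytes.sum % 256
  end
end
Checksum.freeze

# Any later attempt to monkey-patch now fails:
class Checksum
  def digest(s) # raises FrozenError: can't modify frozen class
    0
  end
end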
I don't think this is a concern.
Yes, the mythical "somebody" can replace the implementation of MD5 with something insecure. But in order to do that, the mythical somebody must actually be able to get his code into the Ruby process. And if he can do that, then he presumably could also inject his code into a Java process and e.g. rewrite the bytecode for the MD5 operation. Or just intercept the keypresses and not actually bother with fiddling with the cryptography code at all.
One of the typical concerns is: I'm writing this awesome library, which is supposed to be used like so:
require 'awesome'
# Do something awesome.
But what if someone uses it like so:
require 'evil_cracker_lib_from_russian_pr0n_site'
# Overrides crypto functions and sends all data to mafia
require 'awesome'
# Now everything is insecure because awesome lib uses
# cracker lib instead of builtin
And the simple solution is: don't do that! Educate your users that they shouldn't run untrusted code they downloaded from obscure sources in their security critical applications. And if they do, they probably deserve it.
To come back to your Java example: it's true that in Java you can make your crypto code private and final and what not. However, someone can still replace your crypto implementation! In fact, someone actually did: many open-source Java implementations use OpenSSL to implement their cryptographic routines. And, as you probably know, Debian shipped with a broken, insecure version of OpenSSL for years. So, all Java programs running on Debian for the past couple of years actually did run with insecure crypto!
Java provides lots of other tools for defensive programming
Initially I thought you were talking about normal defensive programming,
wherein the idea is to defend the program (or your subset of it, or your single function) from invalid data input.
That's a great thing, and I encourage everyone to go read that article.
However it seems you are actually talking about "defending your code from other programmers."
In my opinion, this is a completely pointless goal; no matter what you do, a malicious programmer can always run your program under a debugger, or use DLL injection, or any number of other techniques.
If you are merely seeking to protect your code from incompetent co-workers, this is ridiculous. Educate your co-workers, or get better co-workers.
At any rate, if such things are of great concern to you, ruby is not the programming language for you. Monkeypatching is in there by design, and to disallow it goes against the whole point of the feature.
Check out Immutable by Garry Dolley.
You can prevent redefinition of individual methods.
I guess Ruby treats that as a feature, valued more highly than the security concern. Duck typing, too.
E.g. I can add my own methods to the Ruby String class rather than extending or wrapping it.
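For example (shout is a made-up method; this is exactly the openness being debated):

class String
  def shout
    upcase + "!"
  end
end

"hello".shout # => "HELLO!"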
"Educate your co-workers, or get better co-workers" works great for a small software startup, and it works great for the big guns like Google and Amazon. It's ridiculous to think that every lowly developer contracted in for some small medical charts application in a doctor's office in a minor city.
I'm not saying we should build for the lowest common denominator, but we have to be realistic: there are lots of mediocre programmers out there who will pull in any library that gets the job done, paying no attention to security. How could they pay attention to security? Maybe they took an algorithms and data structures class. Maybe they took a compilers class. They almost certainly didn't take a class on encryption protocols. They definitely haven't all read Schneier or any of the others who practically have to beg and plead with even very good programmers to consider security when building software.
I'm not worried about this:
require 'evil_cracker_lib_from_russian_pr0n_site'
require 'awesome'
I'm worried about awesome requiring foobar and fazbot, and foobar requiring has_gumption, and ... eventually two of these conflict in some obscure way that undoes an important security aspect.
One important security principle is "defense in depth": adding these extra layers of security helps keep you from accidentally shooting yourself in the foot. They can't completely prevent it; nothing can. But they help.
If monkey patching is your concern, you can use the Immutable module (or one of similar function).
Immutable
You could take a look at Why the Lucky Stiff's "Sandbox" project, which you can use if you worry about potentially running unsafe code.
http://code.whytheluckystiff.net/sandbox/
An example (online TicTacToe):
http://www.elctech.com/blog/safely-exposing-your-app-to-a-ruby-sandbox
Raganwald has a recent post about this. In the end, he builds the following:
class Module
  def anonymous_module(&block)
    self.send :include, Module.new(&block)
  end
end

class Acronym
  anonymous_module do
    fu = lambda { 'fu' }
    bar = lambda { 'bar' }
    define_method :fubar do
      fu.call + bar.call
    end
  end
end
That exposes fubar as a public method on Acronym, but keeps the internal guts (fu and bar) private and hides the helper module from outside view.
If someone monkeypatches an object or a module, you need to consider two cases:
They added a new method. If they are the only one adding this method (which is very likely), no problems arise. If they are not, you need to check whether both methods do the same thing, and tell the library developer about this severe problem.
They changed an existing method. You should research why the method was changed. Did they change it due to some edge-case behaviour, or did they actually fix a bug? Especially in the latter case, the monkeypatch is a good thing, because it fixes a bug in many places.
Besides that, you are using a very dynamic language with the assumption that programmers use this freedom in a sane way. The only way to remove this assumption is not to use a dynamic language.
