Validate arguments in Ruby?

I wonder whether one should validate that the arguments passed to a method are of a certain class.
e.g.
def type(hash = {}, array = [])
  # validate before
  raise "first argument needs to be a hash" unless hash.class == Hash
  raise "second argument needs to be an array" unless array.class == Array
  # actual code
end
Is it smart to do this, or is it just cumbersome and a waste of time to validate all passed-in arguments?
Are there circumstances when you would like to have this extra security and circumstances when you won't bother?
Share your experiences!

I wouldn't recommend this specific approach, as you fail to accommodate classes that provide hash or array semantics but are not that class. If you need this kind of validation, you're better off using respond_to? with a method name. Arrays implement the method :[], for what it's worth.
OpenStruct has hash semantics and attribute-accessor method semantics, but won't return true for the condition hash.class == Hash. It'll work just like a hash in practice, though.
To put it in perspective, even in a non-dynamic language you wouldn't want to do it this way; you'd prefer to verify that an object implements IDictionary<T>. Ruby would idiomatically prefer that, when necessary, you verify that the method exists, because if it does, the developer is probably intending their object to act like one. You can provide additional sanity with unit tests around the client code as an alternative to forcing things to be non-dynamic.
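For instance, a minimal sketch of that respond_to?-based check, applied to the question's method (the checked messages here are illustrative choices, not a fixed rule):

def type(hash = {}, array = [])
  # Accept anything hash-like, including OpenStruct-style wrappers
  raise ArgumentError, "first argument must respond to []" unless hash.respond_to?(:[])
  # Accept anything that enumerates like an array
  raise ArgumentError, "second argument must respond to each" unless array.respond_to?(:each)
  # actual code
end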

There's usually no need to validate that arguments are of a certain class. In Ruby, you're encouraged to use Duck Typing.

I have found that validating that the input parameters meet your preconditions is a very valuable practice. The stupid person it saves you from is you. This is especially true for Ruby, as it has no compile-time checks. If there are characteristics of your method's input that you know must be true, it makes sense to check them at run time and raise errors with meaningful messages. Otherwise, the code just produces garbage out for garbage in, and you either get a wrong answer or an exception somewhere down the line.
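For illustration, a sketch of such precondition checks with meaningful messages (the average method and its wording are invented for this example):

def average(numbers)
  # Fail fast, naming the offending input in the error message
  if numbers.nil? || numbers.empty?
    raise ArgumentError, "expected a non-empty collection, got #{numbers.inspect}"
  end
  numbers.sum / numbers.size.to_f
end

average([1, 2, 3]) # => 2.0
average([])        # => ArgumentError: expected a non-empty collection, got []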

I think that it is unnecessary. I once read on a blog something like "If you need to protect your code from stupid people, ruby isn't the language for you."

If you want compiler/runtime-enforced code contracts, then Ruby isn't for you. If you want a malleable language and easy testability, then Ruby is.

Related

Is there a gem that provides support to detect changes to native ruby type instances?

Although I agree that extending native types and objects is a bad practice, inheriting from them should not be.
With a hypothetical supporting gem (which I could not find), the native types would be used as follows:
require 'cool-unkown-light-gem'
class MyTypedArray < CoolArray # would love to directly < Array
  def initialize(*args)
    super(*args)
    # some inits for MyTypedArray
    @caches_init = false
  end

  def name?(name)
    init_caches unless @caches_init
    !!@cache_by_name[name]
  end

  def element(name)
    init_caches unless @caches_init
    @cache_by_name[name]
  end

  private

  # overrides the CoolArray method:
  # CoolArray methods that modify self will call this method
  def on_change
    @caches_init = false
    super
  end

  def init_caches
    return @cache_by_name if @caches_init
    @caches_init = true
    @cache_by_name = self.map do |elem|
      [elem.unique_name, elem]
    end.to_h
  end
end
Any method of the parent class that modifies self and is not overridden by the child class would call, say (in this case), the on_change hook. That would save us from having to re-define every single one of those methods just to keep track of changes.
Let's say MyTypedArray holds Foo objects:
class Foo
  attr_reader :unique_name
  def initialize(name)
    @unique_name = name
  end
end
A short example of the expected behaviour:
my_array = MyTypedArray.new
my_array.push( Foo.new("bar") ).push( Foo.new("baz") )
my_array.element("bar").unique_name
# => "bar"
my_array.shift # a method that removes the first element from self
my_array.element("bar").unique_name
# => undefined method `unique_name' for nil:NilClass (NoMethodError)
my_array.name?("bar")
# => false
I understand that we should favour immutable classes, yet these native types support in-place changes, and we want a proper way to inherit from them that is as brief and easy as possible.
Any thoughts, approaches, or recommendations are more than welcome, of course. I do not think I am the only one who has thought about this.
The reason I am searching for a maintained gem is that different Ruby versions may support different methods or options on native types/classes.
[Edit]
The aim of the above is to figure out a pattern that works. I could just follow the rules and suggestions of other posts, yet I would not get things to work the way I intend and see fit (a coding language is made by and for humans, not humans for coding languages). I know everyone is proud of their achievements in learning, developing and shaping things into a pattern that is well known in the community.
The point of the above is that all the methods of Array are more than welcome. I do not care if version 20 of Ruby removes some Array methods; by then my application will be obsolete, or someone will achieve the same result in far less code.
Why Array?
Because the order matters.
Why an internal Hash?
Because for the usage I want to make of it, in overall, the cost of building the hash compensates the optimization it offers.
Why not just include Enumerable?
Because we would just reduce the number of methods that change the object, and we still would not have a pattern that sets @caches_init to false so that the Hash is rebuilt on next use (so, the same problem as with Array).
Why not just whitelist and include target Array methods?
Because that does not get me where I want to be. What if I want anyone to still be able to use pop or shift, without my having to redefine them, or even having to manage my mixins and constantly reach for respond_to?? (Perhaps that exercise is good for improving your coding skills and for reading other people's code, but that is not what it should take.)
Where I want to be?
I want to be in a position where I can re-use / inherit any, I repeat, any class (no matter whether it is native or not). That is basic for an OOP language. And if we are not talking about an OOP language (but just some sugar on top to make it appear OOP), then let's keep ourselves open to analysing patterns that should work well, no matter how odd they seem. For me it is odder that there are no intermediate levels, which is a symptom of many conventional patterns, which in turn is a symptom of poor support for features that are more widely needed than is acknowledged.
Why should a gem offer the above?
Well, let's be humble about it. The above is a very simple case (and even so, not covered). You may gain flexibility at some points by using what some people like to call the Ruby way, but at a cost when you move to bigger architectures. What if I want to create intermediate classes to inherit from? Enriched native classes that boost simple code, yet keep it aligned with the language. It is easier to say "this is not the Ruby way" than to try to make the language closer to something that scales well from the bottom.
I am not surprised that Rails and Ruby are used almost interchangeably by many, because at some point, without some Rails support, what you have with Ruby is a lot of trouble. Consequently, I am not surprised that Rails is so actively maintained.
Why should I redefine the pop, last or first methods? For what? They are already implemented.
Why should I whitelist methods and create mixins? Is that object-oriented or method-oriented programming?
Anyway... I do not expect everyone to share my view on this. I do see other patterns, and I will keep letting my mind find them. If anyone is open enough, please feel free to share. Someone may criticise the approach and be right, but if you got there, it is because it worked.
To answer your question as written: no, there is no gem for this. It is not possible in the language, either in pure Ruby or in the C used internally.
There is no mechanism to detect when self is changed, nor any way to tell whether a method is pure (does not change self) or impure (does change self). It seems you want a way to "automatically" know when a method is one or the other, and that, to put it simply, is just not possible, nor is it in any language that I am aware of.
Internally (using your example), an Array is backed by an RArray structure in C. A struct is simple storage space: a way to look at an arbitrary block of memory. C does not care how you choose to look at memory; I could just as easily cast the pointer of this struct, say it is now a pointer to an array of integers, and change it that way. C will happily manipulate the memory as I tell it to, and nothing can detect that I did so. Add in the fact that anyone, any script, or any gem can do this, with no control on your part, and it shows that this solution is fundamentally and objectively flawed.
This is why most (all?) languages that need to be notified when an object changes use an observer pattern. You create a function that "notifies" when something changes, and you invoke that function manually when needed. If someone decides to subclass your class, they need only continue the pattern and call that function whenever they change the object's state.
There is no such thing as an automatic way of doing this. As already explained, this is an "opt-in" or "whitelist" solution. If you want to subclass an existing object instead of using your own from scratch, then you need to modify its behavior accordingly.
That said, adding the functionality is not as daunting as you may think if you use some clever aliasing and meta-programming with module_eval, class_eval or the like.
# This is 100% untested and not even checked for syntax, just rough idea
def on_changed
  # Do whatever you need here when the object is changed
end

# Impure methods like []=, <<, push, map!, etc.
unpure_methods.each do |name|
  class_eval <<-EOS
    alias #{name}_orig #{name}
    def #{name}(*args, &block)
      result = #{name}_orig(*args, &block)
      on_changed
      result
    end
  EOS
end

ruby dynamically change method while keeping original type signature

My aim is to take an existing function, foo, and create an exact copy of it called bar, which is simple enough with alias_method. I would then like to dynamically redefine foo such that it has the exact same type signature, so that I can call bar from it, among other reasons.
This requirement means that I cannot just do something like
define_method(:foo) do |*args, &block|
send(:bar, *args, &block)
end
because it changes the type signature of foo.
I also don't see how I can use something like method(:foo).parameters as that will tell me what the type signature is, but will not specify, for example, the values of default arguments.
Any help is greatly appreciated!
Ruby has no concept of manifest types and manifest type signatures. Since they don't exist, you obviously can't get them.
When doing Ruby programming, there is of course a latent concept of types and type signatures in the programmer's head. But that's exactly where that concept exists: in the programmer's head and only in the programmer's head.
It might also exist in documentation, but that is not guaranteed. Also, there is no standard format for putting it in documentation. There are various formats for expressing types in Ruby documentation; sometimes the types are not expressed using any (semi-)formal notation at all, but only in prose, and sometimes they are implicit in the names of parameters. In some cases, the types are just part of Ruby culture: everybody knows them, but they are never actually written down anywhere (the most obvious example is the each protocol that the Enumerable mixin depends on, which everybody "just knows" without it being explicitly specified).
You are also asking about default arguments for optional parameters: these are evaluated dynamically, so getting static information about them is simply impossible because of the Halting Problem, Rice's Theorem and all the other fun undecidability results in programming.
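For what it's worth, Method#parameters shows exactly how much is recoverable: the kinds and names of parameters, but not the default-value expressions (foo below is a made-up method):

def foo(a, b = expensive_default, *rest, key: 42, &blk); end

method(:foo).parameters
# => [[:req, :a], [:opt, :b], [:rest, :rest], [:key, :key], [:block, :blk]]
# The default expressions (expensive_default, 42) appear nowhere; they are
# re-evaluated each time the method is called without those arguments.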
TL;DR
Ruby doesn't care about types. It only cares about whether or not an object can #respond_to? a message. As a result, any hard-wired expectations about types should be encoded in method/variable names, and in the documentation.
Use Duck-Typing to Wrap Methods
The "Ruby way" is to use duck-typing rather than strict "type signatures." While there's nothing wrong with wrapping a method, your methods should:
Use meaningful argument names, if the type of an argument matters.
Perform implicit or explicit coercion or define singleton methods in the cases where you need an object to #respond_to? a method it doesn't currently support.
Implement singleton methods when necessary to permit duck-typing.
For example:
def foo array
  array.is_a? Array
end

def flexible_foo string_or_array
  if string_or_array.respond_to? :split
    array = string_or_array.split(/,?\s+/)
  else
    array = string_or_array
  end
  foo array
end
flexible_foo 'a, b, c'
#=> true
flexible_foo %w[a b c]
#=> true
In this example, #foo expects an array. By wrapping #foo, we create a work-alike method that coerces the value into an array if it responds to the :split message, which String does and Array does not.
Documentation
Both RDoc and YARD do a reasonable job of documenting method signatures "out of the box," but YARD also has support for using tags to document things like "type signatures" and return types.
If your code is written with fixed expectations about what kinds of objects can be passed as arguments, then you can document those expectations in comments which RDoc or YARD will dutifully report. However, this is considered to be the programmer's responsibility rather than the Ruby interpreter's, and you'll know if you've broken the implicit contract when Ruby raises a NoMethodError exception at runtime.
This is one reason the Ruby community embraces test-driven development: since Ruby can redefine methods and classes on the fly, the interpreter won't know until runtime whether the calling method has sent an invalid message or not. This is generally considered a Good Thing®, but your mileage and opinions may certainly vary.

Why does rubocop or the ruby style guide prefer not to use get_ or set_?

I was running rubocop on my project and fixing the complaints it raised.
One particular complaint bothered me
Do not prefix reader method names with get_
I could not understand much from this complaint, so I looked at the source code on GitHub.
I found this snippet
def bad_reader_name?(method_name, args)
  method_name.start_with?('get_') && args.to_a.empty?
end

def bad_writer_name?(method_name, args)
  method_name.start_with?('set_') && args.to_a.one?
end
So the advice or convention is as follows:
1) They advise us not to use get_ when the method has no arguments; otherwise, get_ is allowed.
2) They advise us not to use set_ when the method has exactly one argument; otherwise, set_ is allowed.
What is the reason behind this convention or rule or advice?
I think the point here is that Ruby devs prefer to always think of methods as getters, since they return something, and to use the equals "syntactic sugar" (as in def self.dog=(params), which lets you do Class.dog = something). In essence, the point I've always seen made is that get and set are redundant and verbose.
In opposition to this, you have get and set with multiple args, which are more like finder methods (particularly get; think of ActiveRecord's where).
Keep in mind that "style guide" = pure opinion. Consistency is the higher goal of style guides in general, so unless something is arguably wrong or difficult to read, your goal should be keeping everything uniform rather than of a certain type. This is also why rubocop lets you turn this check off.
Another way to see it: the getter/setter paradigm was, as far as I can tell, largely a convention from Java/C++ and the like; I knew quite a few Java codebases in the very foggy past where Beans were littered with huge numbers of getters and setters. Back then, the private attribute would be called "name", with "set_name()" and "get_name()" around it; as the attribute itself was called "name", the getter could not be "name()" as well.
Hence the very existence of "get_" and "set_" is due to a (trivial) technical shortcoming of languages that do not allow "=" in method names.
In Ruby, we have quite a different array of possibilities:
First and foremost, we have name() and name=(), which immediately removes the need for the get_ and set_ prefixes. So we do have getters and setters; we just call them differently from Java (see the short sketch after this list).
Also, the attribute then is not name but @name, hence solving this conflict as well.
Actually, you don't have attributes with the plain "obj.name" syntax at all! For example, while Rails/ActiveRecord pretends that "person.name" is an "attribute", it is in fact simply a pair of auto-generated getter name() and setter name=(). Conceptually, the caller is not supposed to care whether "name" is an attribute or a method; it is an implementation detail of the class.
It saves 4 or 3 key presses for each call. This might seem like a joke, but writing concise yet easily comprehensible code is a trademark of Ruby, after all. :-)
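A quick sketch of that idiomatic pair, using nothing beyond plain Ruby:

class Person
  attr_accessor :name   # generates both name() and name=()
end

p = Person.new
p.name = "Ada"   # the setter: sugar for p.name=("Ada")
p.name           # => "Ada" -- the getter; no get_/set_ prefix needed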
The way I understand it is that it's because foo.get_value is imperative and foo.value is declarative.

What is special about boolean?

In Ruby, there is a convention to have a method name end with a question mark to indicate that its return value is boolean. Why is boolean considered so special? Is there anything convenient if you know that a method's return value is particularly boolean? After all, in Ruby, you can insert all kinds of value returning (getter) methods into a conditional without caring whether it is boolean or not.
I think it is a waste to use the question mark just for indicating a boolean value. There should be more useful uses. I have plenty of use cases where I want a pair of getter and setter methods, where the setter should return self so that I can use it in a method chain. And naming them something like get_foo and set_foo looks cumbersome. Rather than following the convention, I am tempted to name a pair of getter and setter methods like this:
def foo?; @foo end
def foo(v); @foo = v; self end
where the value of @foo is not (necessarily) boolean. (Besides the potential criticism that breaking the convention will confuse other programmers), is there something wrong with doing that?
There is nothing special at all; it's just a convention. A question can be answered with "yes" or "no", but also with other things, such as someone's name.
By returning a boolean from methods with a question mark, you make that behaviour explicit.
If you make the answer be "yes" or "no", it's easy for the reader of your code to identify the behavior of your method without even looking at the implementation. On the other hand, if you make it return any other type, it is more difficult for the reader to understand your code without reading your class and method definition.
With a boolean there are only two possible answers. If the return value is not boolean, it can be anything, which does not help at all: you would still need to look at the method implementation. You should always look further to understand a piece of code, but this convention makes it simpler.
There is a convention to use question mark in method names to indicate that a method is a predicate. AFAIK, this predicate is not required (by the convention) to return a boolean value, thanks to simple rules for truthy/falsey values.
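For example, a predicate that leans on truthiness rather than returning a strict boolean (the Session class is invented for illustration):

class Session
  def initialize(user)
    @user = user
  end

  # Conventionally named with "?", yet it returns @user (truthy) or nil
  # (falsey) rather than a strict true/false
  def logged_in?
    @user
  end
end

puts "welcome" if Session.new("alice").logged_in?   # prints "welcome"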
Besides potential criticism that breaking the convention will confuse other programmers, is there something wrong with doing that?
Confusing and surprising fellow programmers is bad. Ruby couldn't care less. It's just a convention. And conventions exist for a reason.
You can put anything in a flow control construct, but semantically booleans are appropriate. "If" in real human language typically takes a boolean, and the same is true of the construct in many programming languages. Ruby likes to make things convenient and assigns a "truthiness" value to everything in the language, which affects how it behaves in a boolean context.
In other words, booleans are the only things that are almost exclusively used for flow control, so the convention is to make them look "right" for flow-control constructs. It's their native environment.
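As a reminder of those truthiness rules, nil and false are the only falsey values in Ruby; everything else, including 0 and the empty string, is truthy:

zero  = 0
empty = ""
none  = nil

puts "0 is truthy"            if zero   # prints -- unlike C, 0 is not false
puts "empty string is truthy" if empty  # prints -- "" is truthy too
puts "nil is truthy"          if none   # never prints -- nil and false alone are falsey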
(Besides potential criticism that breaking the convention will confuse other programmers), is there something wrong with doing that?
In the same sense that there is nothing wrong with naming all your variables after 1920s comedians, no, there's nothing wrong with that. But also in the same sense as naming all your variables after 1920s comedians, it isn't a very good idea. Nowhere in any language that I know of -- human or computer -- does the question mark mean "get." So the semantics of your code are off with that convention.
This question and the answers boil down to "POLS" AKA "Principle of Least Surprise".
A method name can be a random choice of letters and numbers separated by underscores, with '!', '?' and '=' sprinkled through them, if we chose to do so. They could be randomly created by the code at run time, and, as long as the rest of the code used the same arrangement of characters, the program would run and Ruby would be happy.
We humans, the programmers, determine the name of the methods used, to represent something, a characteristic or an action. Trying to use randomly named methods would lead to madness, or at least a very hard to maintain program. So, instead, we try to use sensible names for things. Sometimes they're verbs or adjectives, sometimes they're more descriptive because the method does several things.
As part of that naming, sometimes we want to provide additional hints about the behavior of the method. By convention in Ruby, we use "!" to warn the coder that the method changes something or is destructive. "=" indicates the method takes a parameter and assigns it to the receiver/object. It's a setter method and in many other languages it'd be idiomatic to use "set_flag..." or "set_value..." as the name. It's just a convention in that language, and followed by developers in the language.
We use "?" in Ruby to ask a question about an object, whether it is, or isn't, true about that object. We could say "is_true?" or "true?" and indicate we are testing whether something is true about it. If it's true, or false, it's a Boolean response so we return a true/false value.

When to use RSpec let()?

I tend to use before blocks to set instance variables. I then use those variables across my examples. I recently came upon let(). According to RSpec docs, it is used to
... to define a memoized helper method. The value will be cached across multiple calls in the same example but not across examples.
How is this different from using instance variables in before blocks? And also when should you use let() vs before()?
I always prefer let to an instance variable for a couple of reasons:
Instance variables spring into existence when referenced. This means that if you fat finger the spelling of the instance variable, a new one will be created and initialized to nil, which can lead to subtle bugs and false positives. Since let creates a method, you'll get a NameError when you misspell it, which I find preferable. It makes it easier to refactor specs, too.
A before(:each) hook will run before each example, even if the example doesn't use any of the instance variables defined in the hook. This isn't usually a big deal, but if the setup of the instance variable takes a long time, then you're wasting cycles. For the method defined by let, the initialization code only runs if the example calls it.
You can refactor from a local variable in an example directly into a let without changing the referencing syntax in the example. If you refactor to an instance variable, you have to change how you reference the object in the example (e.g. add an @).
This is a bit subjective, but as Mike Lewis pointed out, I think it makes the spec easier to read. I like the organization of defining all my dependent objects with let and keeping my it block nice and short.
A related link can be found here: http://www.betterspecs.org/#let
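To illustrate the first point above, here is a sketch with a deliberate typo (the names are invented):

RSpec.describe "misspelling behaviour" do
  before { @user_name = "Alice" }
  let(:user_name) { "Alice" }

  it "silently returns nil for a misspelled instance variable" do
    expect(@user_nmae).to be_nil                    # typo goes unnoticed
  end

  it "raises NameError for a misspelled let helper" do
    expect { user_nmae }.to raise_error(NameError)  # typo is caught at once
  end
end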
The difference between using instance variables and let() is that let() is lazily evaluated: it is not evaluated until the method it defines is run for the first time.
The difference between before and let is that let() gives you a nice way of defining a group of variables in a "cascading" style, which makes the spec look a little better by simplifying the code.
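A minimal sketch of that lazy evaluation, assuming an ActiveRecord-style User model:

RSpec.describe "let laziness" do
  let(:user) { User.create!(name: "Alice") }  # block does not run yet

  it "creates the record only when first referenced" do
    expect(User.count).to eq(0)  # the let block still has not run
    user                         # first reference: block runs, result memoized
    expect(User.count).to eq(1)
    user                         # cached: no second record is created
    expect(User.count).to eq(1)
  end
end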
I have completely replaced all uses of instance variables in my RSpec tests with let(). I wrote up a quick example for a friend who used it to teach a small RSpec class: http://ruby-lambda.blogspot.com/2011/02/agile-rspec-with-let.html
As some of the other answers say, let() is lazily evaluated, so it will only load the things that require loading. It DRYs up the spec and makes it more readable. I've in fact ported the RSpec let() code for use in my controllers, in the style of the inherited_resources gem: http://ruby-lambda.blogspot.com/2010/06/stealing-let-from-rspec.html
Along with lazy evaluation, the other advantage is that, combined with ActiveSupport::Concern, and the load-everything-in spec/support/ behavior, you can create your very own spec mini-DSL specific to your application. I've written ones for testing against Rack and RESTful resources.
The strategy I use is Factory-everything (via Machinist+Forgery/Faker). However, it is possible to use it in combination with before(:each) blocks to preload factories for an entire set of example groups, allowing the specs to run faster: http://makandra.com/notes/770-taking-advantage-of-rspec-s-let-in-before-blocks
It is important to keep in mind that let is lazily evaluated, so avoid putting side-effecting methods in it; otherwise you would not be able to switch from let to before(:each) easily.
You can use let! instead of let so that it is evaluated before each scenario.
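Roughly speaking, let! is a let plus a before hook that forces the evaluation (create_user below is a stand-in for your own setup code):

# inside a describe block
let!(:user) { create_user }   # eager: runs before each example

# ...is roughly shorthand for:
let(:user) { create_user }
before { user }               # referencing the helper forces evaluation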
In general, let() is a nicer syntax, and it saves you typing @name all over the place. But, caveat emptor! I have found let() also introduces subtle bugs (or at least head scratching) because the variable doesn't really exist until you try to use it... Telltale sign: if adding a puts after the let() to see that the variable is correct allows a spec to pass, but without the puts the spec fails, you have found this subtlety.
I have also found that let() doesn't seem to cache in all circumstances! I wrote it up in my blog: http://technicaldebt.com/?p=1242
Maybe it is just me?
Dissenting voice here: after 5 years of rspec I don't like let very much.
1. Lazy evaluation often makes test setup confusing
It becomes difficult to reason about setup when some things that have been declared in setup are not actually affecting state, while others are.
Eventually, out of frustration someone just changes let to let! (same thing without lazy evaluation) in order to get their spec working. If this works out for them, a new habit is born: when a new spec is added to an older suite and it doesn't work, the first thing the writer tries is to add bangs to random let calls.
Pretty soon all the performance benefits are gone.
2. Special syntax is unusual to non-rspec users
I would rather teach Ruby to my team than the tricks of rspec. Instance variables or method calls are useful everywhere in this project and others, let syntax will only be useful in rspec.
3. The "benefits" allow us to easily ignore good design changes
let() is good for expensive dependencies that we don't want to create over and over.
It also pairs well with subject, allowing you to dry up repeated calls to multi-argument methods
Expensive dependencies repeated in many places and methods with big signatures are both points where we could make the code better:
maybe I can introduce a new abstraction that isolates a dependency from the rest of my code (which would mean fewer tests need it)
maybe the code under test is doing too much
maybe I need to inject smarter objects instead of a long list of primitives
maybe I have a violation of tell-don't-ask
maybe the expensive code can be made faster (rarer - beware of premature optimisation here)
In all these cases, I can address the symptom of difficult tests with the soothing balm of rspec magic, or I can try to address the cause. I feel like I spent way too much of the last few years on the former, and now I want some better code.
To answer the original question: I would prefer not to, but I do still use let. I mostly use it to fit in with the style of the rest of the team (it seems like most Rails programmers in the world are now deep into their rspec magic so that is very often). Sometimes I use it when I'm adding a test to some code that I don't have control of, or don't have time to refactor to a better abstraction: i.e. when the only option is the painkiller.
let is functional, as it's essentially a Proc. It's also cached.
One gotcha I found right away with let... in a spec block that is evaluating a change:
let(:object) { FactoryGirl.create :object }

expect {
  post :destroy, id: object.id
}.to change(Object, :count).by(-1)
You'll need to be sure to call let outside of your expect block. i.e. you're calling FactoryGirl.create in your let block. I usually do this by verifying the object is persisted.
object.persisted?.should eq true
Otherwise, the first time the let block is called, a change in the database will actually happen, due to the lazy instantiation.
Update
Just adding a note. Be careful playing code golf or in this case rspec golf with this answer.
In this case, I just have to call some method to which the object responds. So I invoke the persisted? method on the object, as it's truthy. All I'm trying to do is instantiate the object. You could call empty? or nil? too. The point isn't the test but bringing the object to life by calling it.
So you can't refactor
object.persisted?.should eq true
to be
object.should be_persisted
as the object hasn't been instantiated... it's lazy. :)
Update 2
Leverage the let! syntax for instant object creation, which should avoid this issue altogether. Note, though, that it will defeat a lot of the purpose of the laziness of the non-banged let.
Also in some instances you might actually want to leverage the subject syntax instead of let as it may give you additional options.
subject(:object) { FactoryGirl.create :object }
"before" by default implies before(:each). Ref The Rspec Book, copyright 2010, page 228.
before(scope = :each, options={}, &block)
I use before(:each) to seed some data for each example group without having to call the let method to create the data in the "it" block. Less code in the "it" block in this case.
I use let if I want some data in some examples but not others.
Both before and let are great for DRYing up the "it" blocks.
To avoid any confusion, "let" is not the same as before(:all). "Let" re-evaluates its method and value for each example ("it"), but caches the value across multiple calls in the same example. You can read more about it here: https://www.relishapp.com/rspec/rspec-core/v/2-6/docs/helper-methods/let-and-let
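A small sketch of that caching behaviour, using a value that would otherwise differ on every call:

require 'securerandom'

RSpec.describe "let memoization" do
  let(:token) { SecureRandom.hex }

  it "returns the same value within one example" do
    expect(token).to eq(token)   # the block ran once; the result is cached
  end

  it "re-evaluates for the next example" do
    expect(token).not_to be_nil  # this example gets its own, fresh token
  end
end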
Note to Joseph -- if you are creating database objects in a before(:all) they won't be captured in a transaction and you're much more likely to leave cruft in your test database. Use before(:each) instead.
The other reason to use let and its lazy evaluation is so you can take a complicated object and test individual pieces by overriding lets in contexts, as in this very contrived example:
context "foo" do
  let(:params) do
    { :foo => foo, :bar => "bar" }
  end
  let(:foo) { "foo" }

  it "is set to foo" do
    params[:foo].should eq("foo")
  end

  context "when foo is bar" do
    let(:foo) { "bar" }

    # NOTE we didn't have to redefine params entirely!
    it "is set to bar" do
      params[:foo].should eq("bar")
    end
  end
end
I use let to test my HTTP 404 responses in my API specs, using contexts.
To create the resource, I use let!; but to store the resource identifier, I use let. Here is how it looks:
let!(:country) { create(:country) }
let(:country_id) { country.id }
before { get "api/countries/#{country_id}" }

it('responds with HTTP 200') { should respond_with(200) }

context 'when the country does not exist' do
  let(:country_id) { -1 }

  it('responds with HTTP 404') { should respond_with(404) }
end
That keeps the specs clean and readable.
