RSpec - Best Way to Set Env Variable - ruby

I have class-level methods on a module that read a CONSTANT, which is set from an environment variable and used by each method:
module Cryption
  module Crypt
    class << self
      OFF_ENCRYPT = ENV['IS_OFF']

      def encrypt
        OFF_ENCRYPT
      end

      def decrypt
        OFF_ENCRYPT
      end
    end
  end
end
The module is packaged as a gem, which forces me to set the environment variable before requiring it:
ENV["IS_OFF"] = "YES"
require "bundler/setup"
require "cryption"
Cryption::Crypt.encrypt
# result => "YES"
Is there a proper way to set the environment variable in RSpec? I think I can't set the variable after the module has been required.
spec_helper.rb
require "bundler/setup"
require "cryption"
crypt_off_spec.rb
RSpec.describe "YES" do
before(:context) do
ENV['IS_OFF'] = "YES"
end
it "encryption off" do
expect(Cryption::Crypt.encrypt).to eq("YES")
end
end
crypt_on_spec.rb
RSpec.describe "NO" do
before(:context) do
ENV['IS_OFF'] = "NO"
end
it "encryption off" do
expect(Cryption::Crypt.encrypt).to eq("NO")
end
end

Env vars aren't really supposed to be changed at runtime. So, calling ENV['IS_OFF'] = "NO" in your tests isn't really a good idea. The problem is, env vars are a global configuration, so if you change it from one test, the value will still be changed when you run the other test.
The easiest way to do this is probably stub_const, which temporarily changes the value of a constant for the duration of an example, e.g. stub_const("Cryption::Crypt::OFF_ENCRYPT", "false"). This is the best option, imo.
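For reference, a minimal sketch of what that could look like. Note that in the question's code OFF_ENCRYPT is assigned inside class << self, which places the constant on the singleton class; the dotted name below assumes the assignment is moved into the Crypt module body itself (the spec descriptions are illustrative):

require "spec_helper"

RSpec.describe Cryption::Crypt do
  it "returns YES when the constant is stubbed to YES" do
    stub_const("Cryption::Crypt::OFF_ENCRYPT", "YES")
    expect(Cryption::Crypt.encrypt).to eq("YES")
  end

  it "returns NO when the constant is stubbed to NO" do
    stub_const("Cryption::Crypt::OFF_ENCRYPT", "NO")
    expect(Cryption::Crypt.encrypt).to eq("NO")
  end
end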
But for the sake of completeness, I can show you a way to actually change the environment variable from a specific test. First you would need to change the constant to a method, e.g. def off_encrypt; ENV["IS_OFF"]; end. Either that, or change the encrypt/decrypt methods to read the env var directly rather than from a constant.
Then, in your tests, you could do something like this:
around(:each) do |example|
  orig_value = ENV["IS_OFF"]
  ENV["IS_OFF"] = "YES" # set the new value for this example
  example.run
ensure
  # restore the original value even if the example fails
  # (ensure inside a do/end block requires Ruby 2.6+)
  ENV["IS_OFF"] = orig_value
end
Again, I wouldn't really recommend this, but it is possible.

Mutating ENV: Generally an Anti-Pattern Easily Avoided with Some Minimal TDD Refactoring
ENV variables are inherited from Ruby's process environment at startup, and while you can change an ENV variable for the current process and its subprocesses at runtime, treating them as class or instance variables works against their purpose.
From a testing perspective, if you want to validate that a method or class does the right thing when ENV holds a particular value, you should design your application to copy the ENV values you care about and mutate those copies as needed. Alternatively, if you're testing whether specific values are properly picked up, you can fork a process and change the value within the fork, which will not affect the parent process.
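As an illustration of the fork approach, here's a minimal sketch: the child process mutates its own copy of ENV, runs the check, and reports back through its exit status, while the parent's ENV stays untouched (the spec wording is illustrative):

it "picks up ENCRYPTION_ENABLED without touching the parent's ENV" do
  # fork is unavailable on Windows and JRuby, so treat this as a sketch
  pid = fork do
    ENV["ENCRYPTION_ENABLED"] = "false"
    # report the check back through the exit status
    exit(ENV.fetch("ENCRYPTION_ENABLED") == "false" ? 0 : 1)
  end
  _, status = Process.wait2(pid)
  expect(status.exitstatus).to eq(0)
  expect(ENV["ENCRYPTION_ENABLED"]).to be_nil   # assuming it was unset in the parent
end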
Here's what I'd recommend:
Refactor your class to use the ENV variable as a default.
Refactor your class to enable tests or runtime behavior to override the defaults.
Build tests that ensure ENV["ENCRYPTION_ENABLED"] (or whatever you want to name it) is valid, and can be easily modified within your tests or at runtime.
Example Refactoring and Some Test Skeletons
class Foo
  attr_accessor :encryption_enabled

  # set a default of true (or false if you prefer), or take an
  # optional keyword argument like +encryption_enabled: true+
  # when you initialize an instance of your class
  def initialize
    @encryption_enabled = ENV.fetch "ENCRYPTION_ENABLED", true
  end
end
This will pick up your existing environment value, but you now have an accessor where you can change it as needed for specific tests. For example:
RSpec.describe Foo do
  describe "#encryption_enabled" do
    it "sets an instance variable based on ENV['ENCRYPTION_ENABLED']" do
      pending
    end

    it "defaults to 'true' if ENV['ENCRYPTION_ENABLED'] is unset" do
      pending
    end

    it "can be set to 'false' via its accessor" do
      subject.encryption_enabled = false
      expect(subject.encryption_enabled).to be false
    end
  end
end
You could also update #new to take a keyword argument to override the default value, or perform other actions. The point is that ENV should provide defaults that don't change often, and your classes and methods should override them when necessary. This provides more flexibility for tests.
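For example, a keyword-argument version could look like this sketch; since ENV values are always strings, it also normalizes the value to a boolean, which departs slightly from the skeleton above:

class Foo
  attr_accessor :encryption_enabled

  # the keyword argument wins; otherwise fall back to ENV, then to "true"
  def initialize(encryption_enabled: ENV.fetch("ENCRYPTION_ENABLED", "true") == "true")
    @encryption_enabled = encryption_enabled
  end
end

Foo.new.encryption_enabled                              #=> true unless ENV says otherwise
Foo.new(encryption_enabled: false).encryption_enabled   #=> false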
The only ENV-related tests you should really need are those that ensure the ENV values are read and parsed correctly, such as when they are unset, invalid, or whatever. Changing runtime behavior based on ENV variables is an anti-pattern, so this is ripe for refactoring.

Related

How can I determine what objects a call to ruby require added to the global namespace?

Suppose I have a file example.rb like so:
# example.rb
class Example
  def foo
    5
  end
end
that I load with require or require_relative. If I didn't know that example.rb defined Example, is there a list (other than ObjectSpace) that I could inspect to find any objects that had been defined? I've tried checking global_variables but that doesn't seem to work.
Thanks!
Although Ruby offers a lot of reflection methods, it doesn't really give you a top-level view that can identify what, if anything, has changed. Only when you have a specific target can you dig deeper.
For example:
def tree(root, seen = { })
  seen[root] = true
  root.constants.map do |name|
    root.const_get(name)
  end.reject do |object|
    seen[object] or !object.is_a?(Module)
  end.map do |object|
    seen[object] = true
    puts object
    [ object.to_s, tree(object, seen) ]
  end.to_h
end

p tree(Object)
Now, if anything changes in that tree structure, you have new entries. Writing a diff method for this is possible, using seen as a trigger.
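For instance, a rough diff built on that tree method could just compare the keys captured before and after the require (a sketch; it assumes example.rb from the question sits next to the script):

before = tree(Object)
require_relative "example"
after = tree(Object)

# Constants present in the new snapshot but not the old one; this only
# compares the top level, a full diff would have to recurse into the subtrees.
p after.keys - before.keys   #=> ["Example"] for the example.rb above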
The problem is that evaluating Ruby code may not necessarily create all the classes that it will or could create. Ruby allows extensive modification to any and all classes, and it's common that at run-time it will create more, or replace and remove others. Only libraries that forcibly declare all of their modules and classes up front will work with this technique, and I'd argue that's a small portion of them.
It depends on what you mean by "the global namespace". Ruby doesn't really have a "global" namespace (except for global variables). It has a sort-of "root" namespace, namely the Object class. (Although note that Object may have a superclass and mixes in Kernel, and stuff can be inherited from there.)
"Global" constants are just constants of Object. "Global functions" are just private instance methods of Object.
So, you can get reasonably close by examining global_variables, Object.constants, and Object.instance_methods before and after the call to require/require_relative.
Note, however, that, depending on your definition of "global namespace", (private) singleton methods of main might also count, so you may want to check for those as well.
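In code, that before/after comparison might look something like this sketch (again using example.rb from the question):

before = {
  globals:   global_variables,
  constants: Object.constants,
  methods:   Object.instance_methods
}

require_relative "example"

p global_variables        - before[:globals]     #=> []
p Object.constants        - before[:constants]   #=> [:Example]
p Object.instance_methods - before[:methods]     #=> []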
Of course, any of the methods the script added could, when called at a later time, themselves add additional things to the global scope. For example, the following script adds nothing to the scope, but calling the method will:
class String
  module MyNonGlobalModule
    def self.my_non_global_method
      Object.const_set(:MY_GLOBAL_CONSTANT, 'Haha, gotcha!')
    end
  end
end
Strictly speaking, however, you asked about adding "objects" to the global namespace, and neither constants nor methods nor variables are objects, soooooo … the answer is always "none"?

Shadowing a top-level constant within a binding

I would like to shadow ENV within a templating method, so that I can raise an error if keys are requested which are not present in the real ENV. Obviously I don't want to shadow the constant elsewhere - just within a specific method (specific binding). Is this even possible?
Explainer: I know about the existence of Hash#fetch and I use it all the time and everywhere. However, I want to use this in an ERB template generating a config file. This config file is likely to be touched by more people than usual, and not everyone is familiar with Ruby's behavior of returning nil for a missing Hash key. I am also working on a system where, of late, configuration mishaps (or outright misconfigurations, or misunderstandings of a format) caused noticeable production failures. The failures were operator error. Therefore, I would like to establish a convention, within that template only, that would cause a raise.
Moreover, I have a gem, strict_env, that does just that already, but you have to remember to use STRICT_ENV instead of just ENV, and every "you have to" statement for this specific workflow raises a red flag for me, since I want more robustness. I could of course opt for a stricter templating language and use that language's logic for raising (for example, Mustache), but since the team already has some familiarity with ERB, and Rails endorses ERB-templated YML as a viable config approach (even though you might not agree with that), it would be nice if I could stick to that workflow. That's why I would like to alter the behavior of ENV[] locally.
ERB#result takes an optional binding:
require 'erb'

class Foo
  ENV = { 'RUBY_VERSION' => '1.2.3' }

  def get_binding
    binding
  end
end

template = "Ruby version: <%= ENV['RUBY_VERSION'] %>"

ERB.new(template).result
#=> "Ruby version: 2.1.3"

b = Foo.new.get_binding
ERB.new(template).result b
#=> "Ruby version: 1.2.3"
You can use ENV.fetch(key) to raise when the key is not present.
Other than that you could create a class and delegate to ENV, such as:
class Configuration
  def self.[](key)
    ENV.fetch(key)
  end
end
But raising an error from #fetch instead of #[] is more Ruby-like, since this is the same behaviour as Hash.
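Used in the question's ERB workflow, that delegating class could look like this sketch (the DB_HOST key and the template text are made up for illustration):

require 'erb'

class Configuration
  def self.[](key)
    ENV.fetch(key)   # raises KeyError when the key is missing
  end
end

template = "db_host: <%= Configuration['DB_HOST'] %>"

ENV['DB_HOST'] = 'localhost'
puts ERB.new(template).result      #=> "db_host: localhost"

ENV.delete('DB_HOST')
begin
  ERB.new(template).result
rescue KeyError => e
  puts e.message                   #=> key not found: "DB_HOST"
end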
Finally you could monkey patch ENV, but this is usually not a good thing to do:
def ENV.[](key)
  fetch(key)
end
As far as I know you can't use refinements to localise this monkey patch, because ENV is an object, not a class (its class is Object).
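That said, one way to get the "raise on missing key" behaviour asked for, without touching the real ENV, is to combine the binding trick above with a strict stand-in object. A sketch, with the StrictEnv and TemplateContext names invented for illustration:

require 'erb'

# A stand-in that only exposes a raising [], delegating to the real ENV.
class StrictEnv
  def [](key)
    ::ENV.fetch(key)   # KeyError if the key is missing
  end
end

class TemplateContext
  ENV = StrictEnv.new   # shadows ::ENV lexically, only inside this class's binding

  def get_binding
    binding
  end
end

::ENV['GREETING'] = 'hello'
template = "greeting: <%= ENV['GREETING'] %>"
puts ERB.new(template).result(TemplateContext.new.get_binding)
#=> "greeting: hello"; a missing key would raise KeyError instead of yielding nil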

What is the best way to access Cucumber's instance variables from nested classes inside steps?

It's a simple question. I have Cucumber steps, for example:
Given /We have the test environment/ do
  @user = # Create User model instance
  @post = # Create Post model instance
  # etc...
end
In the Then step I'm using my own classes; they simplify the process of testing:
Then /all should be fine/ do
  # MyValidatorClass has been defined somewhere in the features/support dir
  validator = MyValidatorClass.new
  validator.valid?.should be_true
end
Inside the MyValidatorClass instance, I deal with the above instance variables @user, @post, etc.
What is the best and simplest way to access Cucumber's variables from a MyValidatorClass instance?
class MyValidatorClass
  def valid?
    @post
    @user
  end
end
Right now I pass all the arguments to the MyValidatorClass instance manually:
validator = MyValidatorClass.new @user, @post
But I think this approach is bad; I want something more transparent, since we are using Ruby, after all!
What is the best way to do this?
Instance variables that are defined in World scope are available only inside World. Step definitions belong to World. You should put MyValidatorClass inside World with World { MyValidatorClass.new }. After that, instance variables defined previously in the scenario's step definitions become available in this class and in other step definitions of the same scenario.
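A sketch of how that wiring might look (file names and placeholder values are illustrative):

# features/support/env.rb
class MyValidatorClass
  def valid?
    # @user and @post are set by the step definitions, because the steps
    # run with this object as their World (i.e. as self)
    !@user.nil? && !@post.nil?
  end
end

World { MyValidatorClass.new }

# features/step_definitions/steps.rb
Given /^We have the test environment$/ do
  @user = "user"   # placeholders instead of real model instances
  @post = "post"
end

Then /^all should be fine$/ do
  # self is the MyValidatorClass instance, so its methods are directly callable
  raise "environment is not valid" unless valid?
end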
Some other thoughts that refer to your question:
If you have a step Given we have the test environment, then:
you will duplicate it in all features
your features become longer and less pleasant to read, because of details that are unnecessary for reading the current feature
setting up environment details that aren't needed will take some time
An easier way to create instances is to add a helper method that creates them for you:
module InstanceCreator
  def user
    @user ||= # Create user instance in World
  end
  # etc
end

World(InstanceCreator)
Then you just use this user when you need it (without any @ or @@).
If you need something else besides creating instances, use hooks
Your scenarios should be natural reading. You shouldn't spoil them with steps that you need just to get your automation layer work.
It's better to have regexes starting with ^ and ending with $. Without them, a step definition becomes too flexible: your first step definition would also match Given We have the test environment with some specifics.
I have found a possible solution: you just need to migrate from instance variables to class variables:
Given /We have the test environment/ do
  @@user = # Create User model instance
  @@post = # Create Post model instance
  # etc...
end

Then /all should be fine/ do
  # MyValidatorClass has been defined somewhere in the features/support dir
  validator = MyValidatorClass.new
  validator.valid?.should be_true
end
...
class MyValidatorClass
  def valid?
    @@post
    @@user
  end
end

Avoiding globals with dynamically loaded ruby files

I am in a position at the moment where I have a plugins folder in which there could be 1 or 100 plugins to be loaded.
Now the problem is, each plugin requires an instance of a class defined within the startup ruby file.
A really simplified example would be:
# startup.rb
def load_plugins
  # ... get each plugin file
  require each_plugin
end

class MuchUsedClass
  def do_something
    # ...
  end
end

muchUsedInstance = MuchUsedClass.new
load_plugins
# some_plugin.rb
class SomePluginClass
  def initialize(muchUsedInstance)
    @muchUsedInstance = muchUsedInstance
  end

  def do_something_with_instance
    @muchUsedInstance.do_something
  end
end

somePluginInstance = SomePluginClass.new(muchUsedInstance)
somePluginInstance.do_something_with_instance
The main problem is that a required file has no access to the local context of the file that requires it. I find it nasty to make a global variable in the startup file just to satisfy all the other required files, but it seems like one of the only ways to pass data down to a required file. I could also make a singleton class to expose some of this, but that also seems a bit nasty.
As I am still new to Ruby and am still looking through statically typed glasses, I am probably missing a decent pattern to solve this. In C# I would opt for dependency injection and just hook everything up that way...
Your example code does not have a global variable. Global variables have names that start with $. The code as you wrote it won't work, because muchUsedInstance is just a local variable and will not be shared between different Ruby files.
If you are not going to change the instance ever, you could easily store it as a constant:
MuchUsedInstance = MuchUsedClass.new
You could store it as a nested constant inside the class:
MuchUsedClass::Instance = MuchUsedClass.new
You could store it as an instance variable inside the class object, with a getter method that automatically creates it if it isn't there already:
def MuchUsedClass.instance
  @instance ||= MuchUsedClass.new
end
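A sketch of how a plugin could then reach the shared instance through the class itself, with no globals threaded through (the do_something body is just a placeholder):

# startup.rb
class MuchUsedClass
  def self.instance
    @instance ||= new
  end

  def do_something
    puts "doing something"
  end
end

# some_plugin.rb (required later by load_plugins)
class SomePluginClass
  def do_something_with_instance
    MuchUsedClass.instance.do_something   # no global needed
  end
end

SomePluginClass.new.do_something_with_instance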

How can I choose which version of a module to include dynamically in Ruby?

I'm writing a small Ruby command-line application that uses fileutils from the standard library for file operations. Depending on how the user invokes the application, I will want to include either FileUtils, FileUtils::DryRun or FileUtils::Verbose.
Since include is private, though, I can't put the logic to choose into the object's initialize method. (That was my first thought, since then I could just pass the information about the user's choice as a parameter to new.) I've come up with two options that seem to work, but I'm not happy with either:
Set a constant in the app's namespace based on the user's choice, and then do a conditional include in the class:
class Worker
  case App::OPTION
  when "dry-run"
    include FileUtils::DryRun
  # etc.
Create sub-classes, where the only difference is which version of FileUtils they include. Choose the appropriate one, depending on the user's choice.
class Worker
  include FileUtils
  # shared Worker methods go here
end

class Worker::DryRun < Worker
  include FileUtils::DryRun
end

class Worker::Verbose < Worker
  include FileUtils::Verbose
end
The first method seems DRY-er, but I'm hoping that there's something more straightforward that I haven't thought of.
So what if it's private?
class Worker
  def initialize(verbose = false)
    if verbose
      (class << self; include FileUtils::Verbose; end)
    else
      (class << self; include FileUtils; end)
    end
    touch "test"
  end
end
This includes FileUtils::Verbose (or plain FileUtils) in that particular Worker's metaclass, not in the main Worker class. Different workers can use different FileUtils variants this way.
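Used like this, for instance (a sketch; both calls touch the file named in the initializer):

require 'fileutils'   # provides FileUtils and FileUtils::Verbose

quiet_worker   = Worker.new        # plain FileUtils: touches "test" silently
verbose_worker = Worker.new(true)  # FileUtils::Verbose: echoes "touch test" as it runs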
Conditionally including the module through the send method works for me, as in the tested example below:
class Artefact
  include HPALMGenericApi

  # the initializer just sets the server name we will be using and also the
  # 'transport' method: Rest or OTA (set in the opt parameter)
  def initialize server, opt = {}
    # conditionally include the Rest or OTA module
    self.class.send(:include, HPALMApiRest) if opt.empty? || (opt[:using] && opt[:using] == :Rest)
    self.class.send(:include, HPALMApiOTA) if opt[:using] && opt[:using] == :OTA
    # ... rest of initialization code
  end
end
If you would like to avoid the "switch" and inject the module, the
def initialize(injected_module)
  class << self
    include injected_module
  end
end
syntax won't work (the injected_module variable is out of scope inside class << self). You could use the self.class.send trick, but per-instance extending seems more reasonable to me, not only because it is shorter to write:
def initialize(injected_module = MyDefaultModule)
  extend injected_module
end
but also because it minimizes the side effects: the shared and easily changeable state of the class can result in unexpected behavior in a larger project. In Ruby there is no real "privacy", so to speak, but some methods are marked private not without a reason.
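Applied to the question's Worker, that might look like the following sketch (the default module and the file name "marker_file" are just illustrative):

require 'fileutils'

class Worker
  # inject the FileUtils flavour per instance; plain FileUtils is the default
  def initialize(file_utils = FileUtils)
    extend file_utils
  end

  def create_marker
    touch "marker_file"   # provided by whichever module was extended
  end
end

Worker.new.create_marker                      # silent, creates the file
Worker.new(FileUtils::Verbose).create_marker  # echoes the touch command
Worker.new(FileUtils::DryRun).create_marker   # prints it but touches nothing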
