Do Ruby classes get cleared out between Rake tasks?

I have a Rakefile that defines the spec task as
task :spec => [:check_dependencies, :load_backends]
And then runs the actual RSpec tests. During the load_backends task, it loads a class called Story, but in the first spec, defined?(Story) returns nil.
I'm assuming that it is intended behavior of Rake to start with a fresh environment at the beginning of each task, but is there a way to override this? Or do I need to re-architect loading the backends into each task?

RSpec's spec task fires up a new Ruby process (mainly so it doesn't interfere with your Rake process, I think), so classes defined in a Rake task (even the spec task itself) are not available in your specs. Consider moving this logic into your spec helper, or don't use RSpec's spec task.
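As a rough illustration of the spec-helper approach (the lib/backends path below is an assumption; use wherever your backends actually live):

# spec/spec_helper.rb
# Load the backends in the spec process itself, since the environment set up
# by the Rake task is not shared with the Ruby process RSpec spawns.
Dir[File.expand_path("../../lib/backends/*.rb", __FILE__)].each do |backend|
  require backend
end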

Related

BeforeAll and AfterAll hooks in a separate gem

Background:
We have a monorepo of services, and each service has unit tests and runs its own tests with RSpec, for example:
repo/appA/spec/*_spec.rb
repo/appB/spec/*_spec.rb
repo/appC/subserviceD/spec/*_spec.rb
repo/appC/subserviceE/spec/*_spec.rb
My goal:
To preface, I'm new to the Ruby ecosystem, but I would like to set up a separate gem (in a repo isolated from the monorepo) that is responsible for running some code after the specs finish, basically an RSpec "after all" hook, or anything that runs after the tests are done.
For anyone familiar with the Python/pytest ecosystem, this is basically analogous to writing a "pytest plugin," where you can add extra functionality to existing hooks.
Question:
Is it possible to create an after-all RSpec hook (or similar behaviour) in a separate gem, so that any project that requires the gem automatically has the hook applied after its tests run?
Thank you in advance!
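For reference, RSpec picks up suite-level hooks registered via RSpec.configure at require time, so a gem only needs to run that configuration when it is loaded. A minimal sketch (the gem and file names here are made up):

# lib/my_after_suite_plugin.rb -- entry point of the hypothetical gem
require "rspec/core"

RSpec.configure do |config|
  # Runs once after the whole suite in any project that requires this gem,
  # e.g. from spec_helper.rb or via `--require my_after_suite_plugin` in .rspec.
  config.after(:suite) do
    puts "Suite finished; running shared teardown"
  end
end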

Iterate over test files synchronously with Mocha

I have a set of Mocha test scripts (= files), located in /test together with mocha.opts. It seems Mocha is running all test files in parallel; is this correct? That could be a problem if the same test data are used in different test scripts.
How can I ensure, that each file is executed separately?
It seems mocha is running all test files in parallel, is this correct?
No.
By default, Mocha loads the test files sequentially and records all the tests that must be run, then it runs the tests one by one, again sequentially. Mocha will not run two tests at the same time, no matter whether the tests are in the same file or in different files. Note that whether your tests are asynchronous or synchronous makes no difference: when Mocha starts an asynchronous test, it waits for it to complete before moving on to the next test.
There are tools that patch Mocha to run tests in parallel, so you may see demonstrations showing Mocha tests running in parallel, but this requires additional tools and is not part of Mocha proper.
If you are seeing behavior that suggests tests are running in parallel, that's a bug in your code, or perhaps you are misinterpreting the results you are getting. Regarding bugs, it is possible to write code that indicates to Mocha that your test is over when in fact there are still asynchronous operations running. However, that is a bug in the test code, not a feature whereby Mocha runs tests in parallel.
Be careful when assigning environment variables outside of Mocha hooks, since those assignments happen in all files before any test executes (i.e. before any "before*" or "it" hook).
Hence the value assigned to an environment variable in the first file will be overwritten by the second file before any Mocha hook runs.
E.g. if you assign process.env.PORT=5000 in test1.js and process.env.PORT=6000 in test2.js outside of any Mocha hook, then when the tests from test1.js start executing, the value of process.env.PORT will be 6000, not 5000 as you might expect.

What is a good way to run background processes in foreground for tests in Ruby?

Working with a Sinatra application, I found 3 ways to run a background process:
Thread.new
Process.fork
Process.spawn
Figured out how to get the first two to work, but now the challenge is that the tests need to run synchronously (for a few reasons).
What is a good way to run jobs asynchronously in production, but force the tests to run synchronously? Preferably with a call in the spec_helper...?
Ruby 1.9.3, Sinatra app, RSpec.
I recommend the following hybrid approach:
Write simple unit tests and refactor what you're testing to be synchronous. In other words, put all of the background functionality into straightforward classes and methods that can be unit tested easily. Then the background processes can call the same functionality that's already been unit tested. You shouldn't have to unit test the background thread/process creation itself, since this is already tested in Ruby (or in another library like god, bluepill, or Daemons). This TDD approach has the added benefit of making the codebase more maintainable.
For functional and integration tests, follow the approach of delayed_job and provide a method that does all of the background work synchronously, as with Delayed::Worker.new.work_off.
You may also want to consider using EventMachine (as in Any success with Sinatra working together with EventMachine WebSockets?) over spawning threads or processes, especially if the background processes are IO-intensive (making HTTP or database requests, for example).
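For example, a spec can enqueue work through the app and then drain the queue synchronously (a minimal sketch assuming delayed_job; the endpoint and expectation are hypothetical):

it "resizes the image in the background" do
  post "/images", file: "photo.png"   # hypothetical endpoint that enqueues a job
  Delayed::Worker.new.work_off        # run all pending jobs synchronously, in-process
  expect(Image.last).to be_resized    # hypothetical model and predicate
end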
Here's what I came up with:
process_in_background { slow_method }

def process_in_background
  Rails.env == 'test' ? yield : Thread.new { yield }
end

def slow_method
  # ...code that takes a long time to run...
end
I like this solution because it is transparent: it runs the exact same code, just synchronously.
Any suggestions/problems with it? Is it necessary to manage zombies? How?
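On the zombie question: Thread.new can't leave zombie processes, since no child process is created, but if the production path ever uses Process.fork or Process.spawn, the parent does need to reap or detach from the child. A minimal sketch:

pid = Process.spawn("ruby", "slow_job.rb")  # hypothetical worker script
Process.detach(pid)  # a watcher thread reaps the child when it exits, so no zombie is left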

Sinatra - Register startup and shutdown operations

I'm designing a web service using Sinatra and I need to perform certain operations when the service is started and some other operations when the server is stopped.
How can I register those operations to be fully integrated with sinatra?
Thanks.
The answer depends on how you need to perform your operations. Do they need to be run for each Ruby process, or do they need to be run just once for the whole service? I suppose it's once for the whole service, and in the case of the latter:
You might be tempted to run some code before your Sinatra app starts, but this will not really behave the way you might expect. I'll explain why just after. The workaround would be adding code before your Sinatra routes, like:
require "sinatra"
puts "Starting"
get "/" do
...
end
You could add some code to your config.ru too, by the way; it would have the same effect, but I don't know which one is uglier.
Why is this wrong? Because when you host your web service, many web server instances will be fired up, and each one will execute the puts call or your "starting" code. This is fine when you want to initialize things that are local to your app instance, like a database connection, but not for initializing things that are shared by all of them.
And as for running code when the service stops, well, you can't (or maybe you could with some really ugly workaround, but you'll end up with the same issue you have at startup).
So the best way to handle startup and shutdown operations would be to wrap them in the tasks that fire up your service (see the sketch after this list):
Run some rake task or Ruby script that does your initialization stuff
Start your web server
And to stop it:
Run a rake task or Ruby script that stops the server
Run your rake task or Ruby script that does the cleanup operations.
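A minimal sketch of such wrapper tasks (the init/cleanup scripts and the Thin commands are assumptions about your setup):

# Rakefile
task :start do
  ruby "script/service_init.rb"     # one-off initialization for the whole service
  sh "thin start -d"                # then boot the app server(s), daemonized
end

task :stop do
  sh "thin stop"
  ruby "script/service_cleanup.rb"  # one-off shutdown operations
end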
You can wrap those into a single rake task by starting your app server directly from Ruby, like I did here: https://github.com/TactilizeTeam/photograph/blob/master/bin/photograph.
This way you can easily add some code to be run before starting the service, while still keeping it in a single task. With some plumbing, I guess you could fire up multiple Thin instances, which would let you start your cluster of Thin (or whatever you use) instances and still have one task to rely on.
I'd say that adding a handler for the SIGINT signal could allow you to run some code before exiting. See http://www.ruby-doc.org/core-1.9.3/Signal.html for how to do that. You might want to check whether Thin isn't already registering a trap for that signal; I'm not sure if this is handled in the library or in the script used to launch Thin (the "thin" executable that ends up in your $PATH).
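A minimal sketch of such a handler (run_shutdown_operations is a placeholder for your own cleanup code):

Signal.trap("INT") do
  run_shutdown_operations  # hypothetical cleanup hook
  exit
end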
Another way to handle the exit would be to have a watchdog process that checks whether your cluster is running and ensures the stop code is run if no more instances are running.

How do I test DelayedJob with Cucumber?

We use DelayedJob to run some of our long running processes and would like to test with Cucumber/Webrat.
Currently, we are calling Delayed::Job.work_off in a Ruby thread to get work done in the background, but are looking for a more robust solution.
What is the best approach for this?
Thanks.
The main problem I see with the Delayed::Job.work_off approach is that you are making explicit in your Cucumber scenarios something that belongs to the internals of your system. Mixing both concerns is against the spirit of functional testing:
When I click some link # Some operation is launched in the background
And Jobs are dispatched # Delayed::Job.work_off invoked here
Then I should see the results...
Another problem is that you populate your Cucumber scenarios with repetitive steps for dispatching jobs when needed.
The approach I am currently using is launching delayed_job in the background while the Cucumber scenarios are being executed. You can check the Cucumber hooks I am using in that link.
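As an illustration of that kind of setup, a Cucumber support file can start a worker once when the suite loads and stop it when the process exits (a sketch; the script/delayed_job daemon path is an assumption and depends on how delayed_job is installed):

# features/support/delayed_job.rb -- support files are loaded once, before any scenario
system("script/delayed_job start")

at_exit do
  system("script/delayed_job stop")
end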
