I am working on a big project, and I realized that several of the components were groups of classes that I could turn into services and strip from Rails. But now that I've done that I realize that the slowness of loading classes without Spork isn't a function of Rails being slow, but a function of Ruby being slow. Is there something like Spork that will work in non Rails projects?
Spork should work just fine for any Ruby project; it just requires a bit more setup.
Assuming you're using RSpec 2.x and Spork 0.9, make a spec_helper.rb that looks something like:
require 'spork'

# The rspec require seems to be necessary;
# without it you get "Missing or uninitialized constant: Object::RSpec" errors.
require 'rspec'

Spork.prefork do
  # Do expensive one-time setup here.
  require 'mylibrary'
  MyLibrary.setup_lots_of_stuff
end

Spork.each_run do
  # Do setup that must be done on each test run here (setting up external state, etc.).
  MyLibrary.reset_db
end
Everything in the Spork.prefork block runs only once (at Spork startup); everything in the each_run block runs on every test invocation.
If you have lots of framework-specific setup, you'd probably be better off making an AppFramework for your library. See the Padrino AppFramework for an example.
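With a spec_helper like the one above in place, the day-to-day workflow (assuming the standard spork and rspec executables those gems install) is roughly:

```shell
spork             # boots the DRb server; the prefork block runs once here
rspec --drb spec  # each run is sent to the server and pays only the each_run cost
```

The `--drb` flag is what routes the test run to the already-warm Spork process instead of booting Ruby and your library from scratch.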
Here is my Ruby spec_helper for RSpec:
As you can see, I'm using DatabaseCleaner because I'm writing tests that use the DB.
However, I get all this nonsense in my console output:
Is there a way to suppress some of this output? Again, remember I'm not in Rails, so I can't simply do:
config.logger.level = Logger::ERROR
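Outside Rails, the usual approach is to set the chatty library's log level directly, assuming it exposes a standard `Logger` (Sequel, ActiveRecord, and the ORMs DatabaseCleaner wraps generally do). A minimal sketch with a plain `Logger`; the log messages here are just placeholders:

```ruby
require 'logger'

# Hypothetical stand-in for whatever logger your ORM exposes
# (e.g. Sequel::Database#loggers or ActiveRecord::Base.logger).
log = Logger.new($stdout)
log.level = Logger::ERROR      # drop DEBUG/INFO/WARN noise

log.info  "SELECT * FROM foos" # silenced
log.error "connection refused" # still printed
```

Setting the level on the library's own logger (rather than redirecting stdout) keeps real errors visible while hiding per-query chatter.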
I have an after_create_commit callback on my model, Foo, which looks like this:
after_create_commit { LogBroadcastJob.perform_later self }
I've reduced my perform method to return nil to simplify things.
When I create a Foo instance in an RSpec test with factory_girl, the test suite freezes. This only happens when I test models with that callback.
FactoryGirl.create :foo
When I Ctrl+C out of my test suite, it fails to kill the process. I have to find the process, which is still using my database (PostgreSQL), and kill it manually, which means I don't see any errors on the command line. If I run my test suite again, it creates another process that I have to find and kill.
Does this sound familiar to anyone? How would I find useful errors here?
Maybe relevant: I upgraded from Rails 4.2 to 5.0.0.1 a while back.
This was a concurrency issue. Thanks to the resource provided in #coreyward's comment, I was able to clear this up by setting the following in config/environments/test.rb:
config.eager_load = true
This differs from my config in config/environments/development.rb (and everything works in development), so I can't say I understand yet why it works. But I can now run all my tests with bundle exec guard or bundle exec rake spec.
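A sketch of where that setting lives, assuming the standard Rails 5 generated file (the comment is one reading of why it helps, not an official explanation):

```ruby
# config/environments/test.rb (excerpt)
Rails.application.configure do
  # Eager-load the whole app in the test environment. Loading every
  # class up front can avoid autoload races when jobs are enqueued
  # from an after_create_commit callback during a test run.
  config.eager_load = true
end
```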
For the past few weeks I've been learning Ruby and I must say that it hasn't been easy to get a grasp of some things.
That leads me to this question. I'm trying to set up a project that uses Rubinius as the Ruby engine, Puma as the web server (since Puma states that it works best with Rubinius because of their concurrency implementations), PostgreSQL as the database, and Sequel as the database toolkit.
What I'm struggling with is making the database connection. As it is, I'm doing it in the config.ru:
require 'rubygems'
require 'bundler/setup'
require 'uri'
require 'yaml'
require 'erb'

Bundler.require :default, ENV['RACK_ENV']

DATABASE.disconnect if defined?(DATABASE)

if ENV['DATABASE_URL']
  db_config = URI.parse ENV['DATABASE_URL']
else
  #noinspection RubyResolve
  db_config = YAML.load(ERB.new(File.read('config/database.yml')).result)[ENV['RACK_ENV']]
end

DATABASE = Sequel.connect db_config

require File.expand_path('../application/api', __FILE__)
run APP::API
But I've been told that it's not the best place to do it if I want concurrency rather than a shared connection. If I were using Unicorn, I would do it in the before_fork hook, but Puma does not have such a function.
Though it does provide an on_worker_boot hook, that is not useful with Sequel, because if I preload the app, Sequel requires a database connection before it can create my models (class SomeClass < Sequel::Model).
I am a bit confused now and not sure where to go from this point. I tried to find some guides or good practices on this matter, but the only things I found used ActiveRecord.
Does someone know how to actually do this properly, connecting to the database?
If you haven't set up puma to fork and preload the app (-w and --preload flags), you shouldn't need to do anything. If you have set up puma to fork and preload, then after loading your model classes, call DATABASE.disconnect. You may also want to lobby the puma developers to add a hook similar to before_fork in unicorn.
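A sketch of that advice applied to the config.ru from the question, relevant only when puma is started with `-w` and `--preload` (DATABASE, application/api, and APP::API are the names already used above):

```ruby
# config.ru — fork-safety sketch for a preloaded puma
DATABASE = Sequel.connect(ENV['DATABASE_URL'])

# Model classes (class SomeClass < Sequel::Model) need a live
# connection at definition time, so load the app first...
require File.expand_path('../application/api', __FILE__)

# ...then drop the parent process's sockets. Each forked worker
# lazily opens fresh connections the first time it uses the pool,
# so workers never share a connection inherited across fork.
DATABASE.disconnect

run APP::API
```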
I am currently planning on using RSpec to continuously monitor some of our services. The plan is to create some tests, run them periodically, and automatically alert if (when) errors are found. As many products use the same server, it would make sense to create the connection once and then use the same connection for all tests.
I am not using Rails, just Ruby and RSpec:
-- spec_helper.rb # Setup server connections, handle errors.
-- test1_spec.rb # Specific tests for product one, uses server connection from spec_helper.
-- test2_spec.rb # Tests for product two, uses same connection as one.
-- test3_spec.rb
Basically, can I create a before :all and after :all that applies to all files in the suite, or do I need to repeat my connection setup in each test file (or put all tests in one big file)?
Using #CDub's helpful comment, I got it working by adding
RSpec.configure do |config|
  config.before(:suite) { $x = 'my_variable' }
end
to the spec_helper file.
Note that the variable must be global, and each file that uses it must require the helper: require_relative 'spec_helper'
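For instance, one of the spec files might look like this (the describe text and assertion are placeholders; $x is the global set in the helper above):

```ruby
# test1_spec.rb
require_relative 'spec_helper'

RSpec.describe 'product one' do
  it 'reuses the suite-wide connection' do
    # $x was assigned exactly once, in config.before(:suite),
    # no matter how many spec files the run includes.
    expect($x).to eq('my_variable')
  end
end
```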
I have a Padrino server application with DataMapper as the ORM layer. I have a database migration, say:
migrate 1, :test do
  up do
    execute 'Some Query'
  end
end
This migration is run using the command:
padrino rake dm:migrate -e <env>
Now my problem is that I need access to env in my query (not to choose a schema or anything DataMapper does automatically; it's something specific to my functionality). I tried debugging the migration to see if there is a variable that stores this value, but had no luck. Is there a way?
As it turns out, since I am using Padrino, I can directly use Padrino.env inside up do..end or down do..end blocks:
migrate 1, :test do
  up do
    env = Padrino.env
    execute "Some Query #{env}"
  end
end
This is Padrino-specific, but so is the concept of an environment. I am sure something similar would work with other frameworks like Rails as well.
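In Rails, for example, the equivalent would be reading Rails.env inside the migration body (a sketch; the migration name and query are hypothetical):

```ruby
# Rails.env is available anywhere in a Rails process,
# including inside migrations.
class SomeEnvAwareMigration < ActiveRecord::Migration[5.0]
  def up
    execute "Some Query #{Rails.env}"
  end
end
```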