How to access environment in datamapper migration - ruby

I have a padrino server application with datamapper as ORM layer. I have a database migration, say:
migrate 1, :test do
  up do
    execute 'Some Query'
  end
end
This migration is run using the command:
padrino rake dm:migrate -e <env>
Now my problem is that I need access to the environment in my query (not to choose a schema or anything DataMapper does automatically; it's something specific to my functionality). I tried debugging the migration to see if there is a variable that stores this value, but had no luck. Is there a way?

As it turns out, since I am using Padrino, I can directly use Padrino.env inside up do..end or down do..end blocks:
migrate 1, :test do
  up do
    env = Padrino.env
    execute "Some Query #{env}"
  end
end
Although this is Padrino-specific, so is the concept of an environment. I am sure something similar would work with other frameworks such as Rails.
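To illustrate the same idea outside Padrino, here is a minimal, hypothetical helper (not part of any framework) that resolves the current environment from the environment variables Rack and Rails conventionally use:

```ruby
# Hypothetical helper (not a framework API): resolve the current
# environment, preferring an explicit value and falling back to the
# RACK_ENV / RAILS_ENV conventions.
def current_env(explicit = nil)
  (explicit || ENV['RACK_ENV'] || ENV['RAILS_ENV'] || 'development').to_s
end

ENV['RACK_ENV'] = 'test'
puts current_env               # => "test"
puts current_env(:production)  # => "production"
```

In Rails you would simply call Rails.env inside the migration, analogous to Padrino.env here.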

Related

Rails 5: Is there a way to run a single set of tests against the development database?

I would like to run a single set of tests against the development database. My seeds.rb file populates the database from a CSV, and I want to ensure that the data is stored in the database the way I expect. I don't want to run all tests against the development database, just a particular set.
I created an integration test. I thought I could switch environments in #setup but it looks like Rails.env = 'development' has no effect.
require 'test_helper'

class DbTest < ActionDispatch::IntegrationTest
  def setup
    Rails.env = 'development'
  end

  def test_total_settlements
  ...
Is it possible to run tests in different environments? If so, how is this done?
I'd recommend to create a class to seed the information into a configurable database, and then run tests against that class. In that way, you don't need to run the tests to an operational database and run that tests the number of times you want, without having to manually modify your development database in case the seed failed (like removing leftover records).
Once you have that class, you could add a task to your Rakefile and use your class :)
In my opinion, the simplest solution would be to just seed your test database.
You can call Rails.application.load_seed before the tests you need seed data for.
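A minimal sketch of that approach, assuming a conventional Rails setup (the class and test names here are hypothetical):

```ruby
require 'test_helper'

# Seed data only for the tests that need it, against the *test*
# database, so the development database is never touched.
class SeededDataTest < ActionDispatch::IntegrationTest
  def setup
    # Runs db/seeds.rb against the current (test) environment's database.
    Rails.application.load_seed
  end

  def test_seeded_rows_present
    # assert against the seeded records here
  end
end
```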

How do I configure Sinatra to use structure.sql instead of schema.rb?

I've got a Sinatra app that I'm setting up with a database using ActiveRecord.
Due to one of the quirks of this database (namely a string primary key), I want to use a SQL schema (structure.sql) instead of a Ruby one (schema.rb). I don't mind that this restricts me to using a specific database flavour, we use Postgres for everything anyway.
To achieve this in Rails, I would put config.active_record.schema_format = :sql in config/application.rb. How do I do the same thing in Sinatra?
It's easy to configure your database by hand with Sinatra. We like to build our tables in MySQL instead of using ActiveRecord Migrations.
You'll have to create your database models by hand instead of using generators and you'll add this line to manage your connection:
ActiveRecord::Base.establish_connection(database_settings)
This is super easy. We typically read in the settings from a YAML file. It gets complicated when you want to write automated tests. Here's a blog I wrote about how to set up automated tests with Sinatra, MiniTest, and ActiveRecord.
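Reading the settings from YAML can be sketched like this; the file contents are inlined here for illustration, but in a real app you would YAML.load_file your own config/database.yml:

```ruby
require 'yaml'

# Hypothetical database settings, inlined instead of read from a file.
yaml = <<~YML
  development:
    adapter: postgresql
    host: localhost
    database: myapp_development
YML

settings = YAML.safe_load(yaml)['development']
p settings['adapter']  # => "postgresql"

# In the app you would then hand the hash to ActiveRecord:
# ActiveRecord::Base.establish_connection(settings)
```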
Since you are still using Active Record, you can just add the following line to your config (I put it under config/initializers/active_record.rb).
ActiveRecord::Base.schema_format = :sql

ActiveJob instance freezes RSpec test

I have an after_create_commit callback on my model, Foo, which looks like this:
after_create_commit { LogBroadcastJob.perform_later self }
I've reduced my perform method to return nil to simplify things.
When I create a Foo instance in an RSpec test with factory_girl, the test suite freezes. This only happens when I test models with that callback.
FactoryGirl.create :foo
When I Ctrl+C out of my test suite it fails to kill the process. I have to find the process, which is still using my database (PostgreSQL), and kill it, which means I don't see any errors on the command line. If I run my test suite again, it creates another process that I have to find and kill.
Does this sound familiar to anyone? How would I find useful errors here?
Maybe relevant: I upgraded from Rails 4.2 to 5.0.0.1 a while back.
This was a concurrency issue. Thanks to the resource provided in #coreyward's comment, I was able to clear this up by setting, in config/environments/test.rb:
config.eager_load = true
This differs from my config in config/environments/development.rb (and everything works in development), so I can't say I understand yet why it works. But I can now run all my tests with bundle exec guard or bundle exec rake spec.

Output SQL from an ActiveRecord migration without executing it (not rails!)

There are a bunch of questions out there similar to this one that talk about Rails plugins as a solution, but I'm not using Rails; read on for more.
I have a Rakefile in a Sinatra project which allows me to rake db:migrate. It'll do my migration perfectly, but I'd like to pass it a flag (or write a new rake task) that does the same thing but outputs the SQL to STDOUT and doesn't commit the changes to the database. Does anyone know how to do this?
My first thought was to try ActiveRecord logging and see if I could get the SQL out at all, but that doesn't work! Any ideas?
namespace :db do
  task :migrate_sql do
    require 'logger'
    ActiveRecord::Base.logger = Logger.new(STDOUT)
    Rake::Task['db:migrate'].invoke
    # This does the migration and doesn't output SQL - so no good!
  end
end
I think there isn't any easy way to do it, for the following reasons:
- up, down, and change are methods that execute other methods; there isn't a global migration query string that gets built up and then executed
- the statement methods (add_column, etc.) don't expose their statements as strings either; as I understand it, they are implemented as connection-adapter methods. For example, the MySQL adapter has an add_column_sql method, while the PostgreSQL adapter does not, and its SQL is a local variable inside its add_column method
So, if you really need this functionality, I think your best option is to copy the SQL from the log.
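One other option worth trying: since PostgreSQL supports transactional DDL, you could run the migration inside a transaction that is rolled back after the SQL has been logged. This is a hypothetical, untested sketch (the task name is made up), and it won't work on MySQL, where DDL statements commit implicitly:

```ruby
namespace :db do
  task :migrate_dry_run do
    require 'logger'
    ActiveRecord::Base.logger = Logger.new(STDOUT)
    ActiveRecord::Base.transaction do
      Rake::Task['db:migrate'].invoke
      # Undo everything after the SQL has been printed to STDOUT.
      raise ActiveRecord::Rollback
    end
  end
end
```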

How to use Datamapper in conjunction with Heroku Scheduler?

I have a Postgres database that I manipulate using Datamapper on my ruby web server. I am trying to incorporate Heroku Scheduler to update parts of the database every 10 minutes. However when it tries to execute the script it keeps giving this error:
/app/vendor/bundle/ruby/1.9.1/gems/dm-core-1.2.0/lib/dm-core/repository.rb:72:in `adapter': Adapter not set: default. Did you forget to setup? (DataMapper::RepositoryNotSetupError)
The database is initialized when the server starts up, so why can't this script update the database like I normally would in the rest of the code?
For example the script being called by the scheduler will contain lines such as:
User.update(:user_x => "whatever")
Is there a certain require statement I absolutely need?
First, you need to point the scheduled process at your database, so include the DataMapper.setup call, but don't migrate or initialize.
Then you need to read the database. You probably want DataMapper to create object models, just like your app uses; I've been using dm-is-reflective for that.
require 'data_mapper'
require 'dm-is-reflective'

dm = DataMapper.setup(:default, ENV['DATABASE_URL'])

class User
  include DataMapper::Resource
  is :reflective
  reflect
end

p User.fields
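Putting the pieces together, the scheduled script might look like this; a sketch that assumes Heroku sets DATABASE_URL and that the users table already exists:

```ruby
require 'data_mapper'
require 'dm-is-reflective'

# Point the scheduled process at the same database as the web process.
DataMapper.setup(:default, ENV['DATABASE_URL'])

# Rebuild the model from the existing table instead of migrating.
class User
  include DataMapper::Resource
  is :reflective
  reflect
end

# Now updates work just like in the rest of the app.
User.update(:user_x => "whatever")
```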
