I have a Postgres database that I manipulate using DataMapper on my Ruby web server. I am trying to use Heroku Scheduler to update parts of the database every 10 minutes. However, when it tries to execute the script, it keeps giving this error:
/app/vendor/bundle/ruby/1.9.1/gems/dm-core-1.2.0/lib/dm-core/repository.rb:72:in `adapter': Adapter not set: default. Did you forget to setup? (DataMapper::RepositoryNotSetupError)
The database is initialized when the server starts up, so why can't this script update the database the way I normally do in the rest of the code?
For example the script being called by the scheduler will contain lines such as:
User.update(:user_x => "whatever")
Is there a certain require statement I absolutely need?
First, you need to point the scheduled process at your database, so include the DataMapper.setup call, but don't migrate or initialize.
Then you need to read the database. You probably want DataMapper to build the same object models your app uses; I've been using dm-is-reflective for that.
require 'data_mapper'
require 'dm-is-reflective'

# Point the scheduled process at the same database the app uses
DataMapper.setup(:default, ENV['DATABASE_URL'])

class User
  include DataMapper::Resource
  is :reflective
  reflect  # build the properties from the existing table
end

p User.fields
I would like to run a single set of tests against the development database. My seeds.rb file populates the database from a CSV, and I want to ensure that the data is stored in the database the way I expect. I don't want to run all tests against the development database, just a particular set.
I created an integration test. I thought I could switch environments in #setup but it looks like Rails.env = 'development' has no effect.
require 'test_helper'

class DbTest < ActionDispatch::IntegrationTest
  def setup
    Rails.env = 'development'
  end

  def test_total_settlements
    ...
Is it possible to run tests in different environments? If so, how is this done?
I'd recommend creating a class that seeds the information into a configurable database, and then running your tests against that class. That way you don't need to run the tests against an operational database, and you can run them as many times as you want without having to manually clean up your development database when the seed fails (for example, removing leftover records).
Once you have that class, you can add a task to your Rakefile that uses it :)
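As a sketch of that idea, a framework-agnostic seeder class could look like the following. The CsvSeeder name and the #insert interface on the repository object are assumptions for illustration; wire it to your real models and connection.

```ruby
require 'csv'

# Hypothetical seeder sketch: parses CSV text and inserts one record per
# row into any repository object that responds to #insert. The class name
# and interface are illustrative, not from the original question.
class CsvSeeder
  def initialize(repository)
    @repository = repository
  end

  # Parses the CSV (first row as headers) and inserts each row.
  # Returns the number of rows inserted.
  def seed(csv_text)
    count = 0
    CSV.parse(csv_text, headers: true) do |row|
      @repository.insert(row.to_h)
      count += 1
    end
    count
  end
end
```

A Rake task could then instantiate CsvSeeder with whichever database you configure, while your tests run it against a throwaway one.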
In my opinion, the simplest solution would be to just seed your test database.
You can call Rails.application.load_seed before the tests that need seed data.
I'm trying to use Redis as my session store, which seems to work just fine. However, I can't figure out how to let multiple instances of Sinatra access the same session. This is what I have in my config.ru:
require 'redis-rack'
use Rack::Session::Redis, :redis_server => "redis://#{ENV['REDIS_HOST']}:6379/0"
I must be missing an argument to set this, but the documentation is lacking for this case:
https://github.com/redis-store/redis-rack
Or maybe this isn't the right way to achieve this behavior?
The end goal is to deploy my Sinatra application with Docker to a clustered environment so I can release new versions without downtime. So whatever lets me share the rack session between multiple instances works. I suppose I could create a Redis object manually and not use the session keyword, but that seems like the wrong way to do it.
I've got a Sinatra app that I'm setting up with a database using ActiveRecord.
Due to one of the quirks of this database (namely a string primary key), I want to use a SQL schema (structure.sql) instead of a Ruby one (schema.rb). I don't mind that this restricts me to using a specific database flavour, we use Postgres for everything anyway.
To achieve this in Rails, I would put config.active_record.schema_format = :sql in config/application.rb. How do I do the same thing in Sinatra?
It's easy to configure your database by hand with Sinatra. We like to build our tables in MySQL instead of using ActiveRecord Migrations.
You'll have to create your database models by hand instead of using generators and you'll add this line to manage your connection:
ActiveRecord::Base.establish_connection(database_settings)
This is super easy. We typically read in the settings from a YAML file. It gets complicated when you want to write automated tests. Here's a blog I wrote about how to set up automated tests with Sinatra, MiniTest, and ActiveRecord.
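A minimal sketch of that pattern, assuming a per-environment YAML layout (the file contents and key names here are illustrative, not from the original answer):

```ruby
require 'yaml'

# Illustrative per-environment settings; in practice this would live in
# a file such as config/database.yml and be read with YAML.load_file.
SETTINGS_YAML = <<~YAML
  development:
    adapter: postgresql
    database: myapp_development
  test:
    adapter: postgresql
    database: myapp_test
YAML

# Fetch the settings hash for the given environment name.
def database_settings(env)
  YAML.safe_load(SETTINGS_YAML).fetch(env)
end

# In the Sinatra app's boot code you would then call, for example:
#   ActiveRecord::Base.establish_connection(
#     database_settings(ENV['RACK_ENV'] || 'development'))
```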
Since you are still using ActiveRecord, you can just add the following line to your config (I put it under config/initializers/active_record.rb).
ActiveRecord::Base.schema_format = :sql
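For the dump itself, a Rakefile task along these lines can stand in for Rails's db:structure:dump. Note that ActiveRecord::Tasks::DatabaseTasks.dump_schema and connection_db_config are version-dependent (roughly ActiveRecord 6.1 and later), so treat this as a sketch to adapt rather than a drop-in task:

```ruby
require 'active_record'

namespace :db do
  desc 'Dump the current schema to db/structure.sql'
  task :structure_dump do
    ActiveRecord::Base.schema_format = :sql
    # connection_db_config requires ActiveRecord 6.1+; older versions
    # pass a configuration hash to dump_schema instead.
    config = ActiveRecord::Base.connection_db_config
    ActiveRecord::Tasks::DatabaseTasks.dump_schema(config, :sql)
  end
end
```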
I have a padrino server application with datamapper as ORM layer. I have a database migration, say:
migrate 1, :test do
  up do
    execute 'Some Query'
  end
end
This migration is run using the command:
padrino rake dm:migrate -e <env>
Now my problem is that I need access to env in my query (not to choose a schema or anything DataMapper does automatically; it's something very specific to the functionality). I tried debugging the migration to see if there is a variable that stores this value, but had no luck. Is there a way?
As it turns out, since I am using Padrino, I can directly use Padrino.env inside up do..end or down do..end blocks:
migrate 1, :test do
  up do
    env = Padrino.env
    execute "Some Query #{env}"
  end
end
Although this is Padrino-specific, so is the concept of an environment. I am sure something similar would work in other frameworks like Rails as well.
There's a bunch of questions out there similar to this one that talk about Rails plugins as a solution, but I'm not using Rails; read on for more.
I have a Rakefile in a Sinatra project that allows me to run rake db:migrate. It does my migration perfectly, but I'd like to pass it a flag (or write a new rake task) that does the same thing, except it outputs the SQL to STDOUT and doesn't commit the changes to the database. Does anyone know how to do this?
My first thought was to try ActiveRecord logging and see if I could get the SQL out at all, but that doesn't work! Any ideas?
namespace :db do
  task :migrate_sql do
    require 'logger'
    ActiveRecord::Base.logger = Logger.new(STDOUT)
    Rake::Task['db:migrate'].invoke
    # This runs the migration and doesn't output the SQL - so no good!
  end
end
I don't think there is an easy way to do it, for the following reasons:
up, down, and change are methods that execute other methods; there is no global migration query string that gets built up and then executed.
none of the statement methods (add_column, etc.) expose their statements as strings; as I understand it, they are implemented as connection-adapter methods. For example, the MySQL adapter has an add_column_sql method, while the PostgreSQL adapter does not; its SQL is a local variable inside its add_column method.
So, if you really need this functionality, I think your best option is to copy the SQL from the log.