I'm trying to get migrations set up in Ramaze. I'm coming from doing mostly Rails stuff, but I wanted to give something else a shot. Anyway, I've got a directory in my project called "migrations" with a start.rb file and then my migrations. Here's start.rb:
require File.expand_path('../app.rb', File.dirname(__FILE__))
require 'sequel/extensions/migration.rb'
Sequel::Migrator.apply(DB, '.')
Now, first of all, I don't know why I can't just do
Sequel::Model.plugin(:migration)
instead of that long require, but it seems to be working, so I'm not worrying about it too much. The main problem is that none of my migrations actually run. It creates the schema_info table, so I know it's trying to work, but it just can't find my 000_initial_info.rb file that's right there in the same directory.
I couldn't really find any documentation on this, so this is my own solution. I'd love to hear other solutions as well if I'm just going about this all wrong. Thanks for any help!
You can't use Sequel::Model.plugin :migration because migration is not a model plugin; it is a core extension. This will work:
Sequel.extension :migration
Sequel comes with the bin/sequel tool that you can use to run migrations with the -m switch:
sequel -m /path/to/app/migrations
Unless you have special needs, I recommend using that.
One of the problems with your setup might be that you started your migrations at 000. Start them at 001 and it may work better.
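For example, a minimal numbered migration might look something like this (just a sketch using the Sequel.migration DSL available in newer Sequel versions; the table and column names are placeholders, not anything from your app):

# migrations/001_initial_info.rb -- placeholder schema for illustration
Sequel.migration do
  up do
    create_table(:events) do
      primary_key :id
      String :name
    end
  end

  down do
    drop_table(:events)
  end
end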
There's RDoc documentation for the Migrator:
http://sequel.rubyforge.org/rdoc-plugins/classes/Sequel/Migrator.html
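Putting those pieces together, a corrected start.rb might look roughly like this (a sketch based on the code in the question, not a drop-in file; one likely culprit is that the '.' in the original is resolved against the current working directory, not against the migrations directory itself):

# migrations/start.rb -- a sketch, not the poster's exact setup
require File.expand_path('../app.rb', File.dirname(__FILE__))

Sequel.extension :migration

# Pass this directory explicitly: a bare '.' is resolved relative to wherever
# the script is run from, not relative to this file.
Sequel::Migrator.apply(DB, File.dirname(__FILE__))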
Here's my solution:
http://github.com/mwlang/ramaze-sequel-proto-experimental
Run "rake -T" to see the various db and migrate tasks I've written."
I use this "experimental" as my ramaze project template at the moment.
Just starting with testcontainers. I love the idea. Thanks for investing in this project.
I am trying to create a simple Postgres 14.5 container (and succeeded), and now I am trying to populate it using the .withInitScript() method.
The file I am feeding into the init method is a dump I created with pg_dumpall.
Testcontainers fails for many parsing/validation reasons. Each time I delete a portion, another reason pops up.
Should I be able to successfully use withInitScript with pg_dump files?
BTW, using pg_dump for my main DB also has many similar issues.
Thanks!
Try copying the script into the container so Postgres will execute it. Although the comment "BTW, using pg_dump for my main DB also has many similar issues" makes me wonder whether it will work, because if I understood correctly it also fails when you use the database directly.
new PostgreSQLContainer("postgres:14.5")
    .withCopyFileToContainer(
        MountableFile.forClasspathResource("init.sql"),
        "/docker-entrypoint-initdb.d/init.sql"
    );
We recommend using Liquibase or Flyway to manage database changes.
Hi, and thanks for the help.
I managed to make things work by stripping some things from the SQL dump and using copyFileToContainer.
Thanks!
I've got a Sinatra app that I'm setting up with a database using ActiveRecord.
Due to one of the quirks of this database (namely a string primary key), I want to use a SQL schema (structure.sql) instead of a Ruby one (schema.rb). I don't mind that this restricts me to using a specific database flavour, we use Postgres for everything anyway.
To achieve this in Rails, I would put config.active_record.schema_format = :sql in config/application.rb. How do I do the same thing in Sinatra?
It's easy to configure your database by hand with Sinatra. We like to build our tables in MySQL instead of using ActiveRecord Migrations.
You'll have to create your database models by hand instead of using generators and you'll add this line to manage your connection:
ActiveRecord::Base.establish_connection(database_settings)
This is super easy. We typically read in the settings from a YAML file. It gets complicated when you want to write automated tests. Here's a blog I wrote about how to set up automated tests with Sinatra, MiniTest, and ActiveRecord.
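A minimal version of that setup might look something like this (just a sketch; the config/database.yml path and the 'development' key are illustrative, not part of the answer above):

require 'yaml'
require 'active_record'

# Read the connection settings from a YAML file and hand them to ActiveRecord.
database_settings = YAML.load_file('config/database.yml')['development']
ActiveRecord::Base.establish_connection(database_settings)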
Since you are still using ActiveRecord, you can just add the following line to your config (I put it under config/initializers/active_record.rb).
ActiveRecord::Base.schema_format = :sql
I'm attempting to write a simple Ruby/Nokogiri scraper to get event information from multiple pages and then output it to a CSV that is attached to an email sent out weekly.
I have completed the scraping components and the CSV component and it's working perfectly. However, I now realize that I need to know when new events are added, which means I need some sort of database. Ideally I would just store this locally.
I've dabbled a bit with using the Ruby gem 'sequel', but the data does not seem to persist beyond the running of the program. Do I need to download some database software to work with 'sequel'? Also, I'm not using the Rails framework, just Ruby.
Any and all guidance is deeply appreciated!
I'm guessing you did Sequel.sqlite, as in the first example in the Sequel README, which creates an in-memory SQLite database. To create a database in your filesystem instead of memory, just pass it a path, e.g.:
Sequel.sqlite("./my-database.db")
This is, of course, assuming that you have the sqlite3 gem installed. If the given file doesn't exist, it will be created.
This is covered in the Sequel docs.
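A tiny end-to-end example of a file-backed setup (just a sketch; the table and column names are placeholders, and it assumes the sqlite3 gem is installed):

require 'sequel'
require 'date'

# The database lives in this file; it is created on the first run and the
# data persists between runs of the script.
DB = Sequel.sqlite('./events.db')

# create_table? only creates the table if it doesn't exist yet, so it is
# safe to call every time the scraper runs.
DB.create_table?(:events) do
  primary_key :id
  String :name
  Date :starts_on
end

events = DB[:events]
events.insert(name: 'Example event', starts_on: Date.today)
puts "#{events.count} event(s) stored so far"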
There are a bunch of questions out there similar to this one that talk about Rails plugins as a solution, but I'm not using Rails; read on for more.
I have a Rakefile in a sinatra project which allows me to rake db:migrate. It'll do my migration perfectly, but I'd like to pass that a flag (or write a new rake task) which does the same thing, but outputs the SQL to STDOUT and doesn't commit the changes to the database. Does anyone know how to do this?
My first thought was to try ActiveRecord logging and see if I could get the SQL out at all, but that doesn't work! Any ideas?
namespace :db do
  task :migrate_sql do
    require 'logger'
    ActiveRecord::Base.logger = Logger.new(STDOUT)
    Rake::Task['db:migrate'].invoke
    # This does the migration and doesn't output SQL - so no good!
  end
end
I think there isn't any easy way to do it, for the following reasons:
up, down, and change are methods which execute other methods; there isn't a global migration query string that gets built and executed
nor do the statement methods (add_column, etc.) expose their statements as strings; as I understand it, they are implemented as connection adapter methods: for example, the MySQL adapter has an add_column_sql method, while the PostgreSQL adapter does not, and its SQL is a local variable inside its add_column method
So, if you really need this functionality, I think your best option is to copy the sql from the log.
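One rough workaround in that spirit, if you are on PostgreSQL (where DDL statements are transactional), is to log to STDOUT and run the migration inside a transaction that you roll back at the end. This is only a sketch; whether the SQL actually shows up still depends on how your db:migrate task wires up its connection and logger, and on MySQL the DDL would not be rolled back:

namespace :db do
  desc 'Run migrations inside a transaction, print the SQL, then roll back'
  task :migrate_dry_run do
    require 'logger'
    ActiveRecord::Base.logger = Logger.new(STDOUT)
    ActiveRecord::Base.logger.level = Logger::DEBUG  # SQL is logged at DEBUG level

    ActiveRecord::Base.transaction do
      Rake::Task['db:migrate'].invoke
      raise ActiveRecord::Rollback  # discard the schema changes (PostgreSQL only)
    end
  end
end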
Some time ago I discovered a Ruby gem that allows you to create database dumps with particular rules.
Inside a file you define which tables to dump, which records to skip, and which fields to scramble, in a nifty Ruby DSL.
I can't remember the name of the tool; do you know what I'm talking about?
After searching for an hour I finally found it.
It's called ocelot; here is the homepage: http://exussum.heroku.com/projects/ocelot