Use `created_at` instead of `inserted_at` - phoenix-framework

I want to use an existing Rails MySQL database/table, without changing it, from a new Elixir application. Obviously I run into the created_at vs. inserted_at problem. https://hexdocs.pm/ecto/Ecto.Schema.html#timestamps/1 says that I can solve it, but I can't get it to work.
Here's what I do:
mix phoenix.new address_book --database mysql
cd address_book
mix phoenix.gen.html User users first_name last_name
mix phoenix.server
web/models/user.ex
defmodule AddressBook.User do
  use AddressBook.Web, :model

  schema "users" do
    field :first_name, :string
    field :last_name, :string

    timestamps([{:created_at,:updated_at}])
  end
[...]
But then I get the following error:
How can I fix this?

You aren't calling the timestamps function with the correct argument. It takes options, so it should be either:
timestamps(inserted_at: :created_at)
Or:
timestamps([{:inserted_at, :created_at}])
You are calling:
timestamps(created_at: :updated_at)
Since timestamps/1 has no :created_at option, that argument is simply ignored and the default timestamp columns are used.
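Applied to the schema from the question, the fix would look like:
defmodule AddressBook.User do
  use AddressBook.Web, :model

  schema "users" do
    field :first_name, :string
    field :last_name, :string

    # Map Ecto's inserted_at to the existing Rails created_at column
    timestamps(inserted_at: :created_at)
  end
end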
You can configure this for all your schemas by using:
@timestamps_opts [inserted_at: :created_at]
In your web.ex file (in the schema section).

In addition to the answer above, the best way to set this across a context is:
defmodule MyApp.Schema do
  defmacro __using__(_) do
    quote do
      use Ecto.Schema
      @timestamps_opts [inserted_at: :created_at]
    end
  end
end
Then, instead of having to define that in each schema, you can just use MyApp.Schema.
I use this pattern in my Phoenix app, which uses a coexisting database schema generated by Rails.
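For example, the schema from the question could then be written as follows (module names are illustrative):
defmodule AddressBook.User do
  use MyApp.Schema

  schema "users" do
    field :first_name, :string
    field :last_name, :string

    # inserted_at is mapped to created_at via @timestamps_opts
    timestamps()
  end
end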
https://hexdocs.pm/ecto/Ecto.Schema.html

Related

How to test a GraphQL schema with graphql-ruby?

My goal is to test the types of my GraphQL schema in Ruby; I'm using the graphql-ruby gem.
I couldn't find any best practice for this, so I'm wondering what the best way is to test the fields and types of a schema.
The gem recommends against testing the schema directly (http://graphql-ruby.org/schema/testing.html), but I still find it valuable to know when the schema changes unexpectedly.
Having a type like this:
module Types
  class DeskType < GraphQL::Schema::Object
    field :id, ID, 'Id of this Desk', null: false
    field :location, String, 'Location of the Desk', null: false
    field :custom_id, String, 'Human-readable unique identifier for this desk', null: false
  end
end
My first approach has been to use the fields hash in the GraphQL::Schema::Object type, for example:
Types::DeskType.fields['location'].type.to_s # => 'String!'
Creating an RSpec matcher, I could come up with tests that look like this:
RSpec.describe Types::DeskType do
  it 'has the expected schema fields' do
    fields = {
      'id': 'ID!',
      'location': 'String!',
      'customId': 'String!'
    }

    expect(described_class).to match_schema_fields(fields)
  end
end
This approach has some drawbacks though:
The code in the matcher depends on the implementation of the class GraphQL::Schema::Object, any breaking changes will break the test suite after an update.
We're repeating code: the test asserts the same fields that the type already declares.
Writing these tests gets tedious, which makes devs less likely to write them.
It looks like you want to test your schema because you want to know if a change is going to break clients. Basically, you should avoid testing the schema directly.
Instead you can use gems like graphql-schema_comparator to print breaking changes.
I suggest having a rake task that dumps your schema (and committing the dump to your repo); a sketch of such a task follows below.
You can write a spec that checks whether the dump is current, so you always have an up-to-date schema dump.
Set up your CI to compare the schema of the current branch with the schema on the master branch, and fail the build if the schema has dangerous or breaking changes.
You can even generate a schema changelog using graphql-schema_comparator ;) Or you can send schema changes to Slack notifications so your team can easily track them.
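A minimal sketch of such a dump task (the task name and output path are illustrative, and MySchema stands for your schema class):
namespace :graphql do
  desc 'Dump the GraphQL schema to schema.graphql'
  task dump_schema: :environment do
    # Print the schema in SDL form and write it to a file committed to the repo
    sdl = GraphQL::Schema::Printer.print_schema(MySchema)
    File.write(Rails.root.join('schema.graphql'), sdl)
  end
end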
What I feel is an improvement over my first approach is to use snapshot testing for the GraphQL schema: instead of testing each type/mutation schema one by one, I created a single test:
RSpec.describe MySchema do
  it 'renders the full schema' do
    schema = GraphQL::Schema::Printer.print_schema(MySchema)
    expect(schema).to match_snapshot('schema')
  end
end
This approach uses a slightly modified version of the rspec-snapshot gem; see my PR here.
The gem doesn't let you update the snapshot with a single command like in Jest, so I also created a rake task to delete the current snapshot:
namespace :tests do
  desc 'Deletes the schema snapshot'
  task delete_schema_snapshot: :environment do
    snapshot_path = Rails.root.join('spec', 'fixtures', 'snapshots', 'schema.snap')
    File.delete(snapshot_path) if File.exist?(snapshot_path)
  end
end
With this you'll get a pretty RSpec diff when the schema has been modified.
The top-level Schema object has an #execute method. You can use this to write tests like this:
RSpec.describe MySchema do
  it 'fetches an object' do
    id = 'Zm9vOjE'
    query = <<~GRAPHQL
      query GetObject($id: ID!) {
        node(id: $id) { __typename id }
      }
    GRAPHQL

    res = described_class.execute(
      query,
      variables: { id: id }
    )

    expect(res['errors']).to be_nil
    expect(res['data']['node']['__typename']).to eq('Foo')
    expect(res['data']['node']['id']).to eq(id)
  end
end
The return value of the #execute method will be the conventional HTTP-style response, as a string-keyed hash. (Actually it's a GraphQL::Query::Result, but it delegates most things to an embedded hash.)

Rails 4 update Type when migrating to Single Table Inheritance

Rails 4.0.4, Ruby 2.1.2
I want to use STI like so:
User < ActiveRecord::Base
Admin < User
But currently I have:
User < ActiveRecord::Base
Info < ActiveRecord::Base
So I changed my models and then started writing my migration. In my migration, I first add a column to allow STI:
add_column :users, :type, :string
Then I want to update the Users currently in the database to be Admins:
# Place I'm currently stuck
Then I move all my Info records into the Users table
Info.all.each { |info| User.create(name: info.name, email: info.email) }
Everything seems to work except turning the previous Users into Admins. Here are some things I've tried:
# Seems to work, but doesn't actually save type value
User.all.each do |user|
  user.becomes!(Admin)
  user.save! # evaluates to true, doesn't have any errors
end

# Seems to work, but doesn't actually save type value
# I've also tried a combo of this one and the above one
User.all.each do |user|
  user.type = "Admin"
  user.save! # evaluates to true, doesn't have any errors
end

User.all.each do |user|
  user = user.becomes!(Admin)
  user.save! # evaluates to true, doesn't have any errors
end

# Seems to work, but doesn't actually save type value
User.all.each do |user|
  user.update_attributes(type: "Admin")
end
Each time the local user variable seems to have the correct type ("Admin") and save evaluates to true, but when I check Admin.count or a user's type value, it is always nil. I know you're not supposed to change them, but this is just to migrate the data over to STI, and then I'll be able to start creating Users or Admins with the proper class.
At the very least I think Rails should raise an error, set an error, or somehow let the developer know the save calls are failing.
It turns out that while update_attributes doesn't work for type (I haven't researched why yet), update_columns does work.
So the migration simply becomes:
User.all.each do |user|
  user.update_columns(type: "Admin")
end
The reason this works and other updates don't can probably be traced back to callbacks or validations not being run. I have no callbacks that would prevent it, but maybe there are default Rails ones for type.
http://apidock.com/rails/ActiveRecord/Persistence/update_columns
If you had more rows in the database, User.all.each would become quite slow, as it makes an SQL call for each user.
Generally you could use User.update_all(field: value) to do this in one SQL call, but there is another reason to avoid this: if the User model is later removed, the migration will no longer run.
One way to update all rows at once without referencing the model is to use raw SQL in the migration:
def up
  execute "UPDATE users SET type = 'Admin' WHERE type IS NULL"
end
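Putting the pieces together, the whole migration might look something like this (a sketch using the table and column names from the question; the class name is illustrative):
class ConvertUsersToSti < ActiveRecord::Migration
  def up
    add_column :users, :type, :string

    # Existing users become Admins; raw SQL keeps the migration working
    # even if the User model changes or is removed later
    execute "UPDATE users SET type = 'Admin' WHERE type IS NULL"
  end

  def down
    remove_column :users, :type
  end
end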

time to live doesn't work on mongoid

see the #2443 topic
https://github.com/mongoid/mongoid/blob/master/CHANGELOG.md
In Mongoid, time to live (the expire_after_seconds option) is supported, but it doesn't work for me.
I executed its sample code, then tried replacing Time with DateTime and using timestamps (created_at), but it still doesn't work.
class Event
  include Mongoid::Document

  field :created_at, type: DateTime

  index({ created_at: 1 }, { expire_after_seconds: 3600 })
end
Mongoid does not "automatically" create indexes of any kind when the model class is loaded. This is considered a "separate" task, for which there is the following rake command (mentioned at the bottom of the documentation):
rake db:mongoid:create_indexes
Of course, if you are not using this in a Rails setup, you would look at alternate means of creating the indexes on collections when you want to. You can either script this externally or use the mongo driver's ensureIndex method directly.
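As a sketch, you can also ask Mongoid to create the indexes declared on a single model from a console or script (assuming a Mongoid version that provides create_indexes):
# Creates the indexes declared in the Event model,
# including the TTL index with expire_after_seconds: 3600
Event.create_indexes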

Case insensitive like (ilike) in Datamapper with Postgresql

We are using DataMapper in a Sinatra application and would like to use a case-insensitive like that works on both SQLite (locally in development) and PostgreSQL (on Heroku in production).
We have statements like
TreeItem.all(:name.like =>"%#{term}%",:unique => true,:limit => 20)
If term is "BERL" we get the suggestion "BERLIN" from both the SQLite and PostgreSQL backends. However, if term is "Berl" we only get that result from SQLite and not from PostgreSQL.
I guess this has to do with the fact that both dm-postgres-adapter and dm-sqlite-adapter output a LIKE in the resulting SQL query. Since PostgreSQL's LIKE is case-sensitive, we get this (for us unwanted) behavior.
Is there a way to get a case-insensitive like in DataMapper without resorting to a raw SQL query to the adapter or patching the adapter to use ILIKE instead of LIKE?
I could of course use something in between, such as:
TreeItem.all(:conditions => ["name LIKE ?","%#{term}%"],:unique => true,:limit => 20)
but then we would be tied to the use of Postgresql within our own code and not just as a configuration for the adapter.
By writing my own DataObjects adapter that overrides the like_operator method, I managed to get Postgres's case-insensitive ILIKE.
require 'do_postgres'
require 'dm-do-adapter'

module DataMapper
  module Adapters
    class PostgresAdapter < DataObjectsAdapter
      module SQL #:nodoc:
        private

        # @api private
        def supports_returning?
          true
        end

        def like_operator(operand)
          'ILIKE'
        end
      end

      include SQL
    end

    const_added(:PostgresAdapter)
  end
end
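With this patch loaded, the query from the question stays unchanged but now compiles to ILIKE:
# Matches "BERLIN" for both "BERL" and "Berl" on PostgreSQL
TreeItem.all(:name.like => "%#{term}%", :unique => true, :limit => 20)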
Eventually, however, I decided to port the application in question to a document database.
For other people using DataMapper who want support for ILIKE as well as SIMILAR TO in PostgreSQL: https://gist.github.com/Speljohan/5124955
Just drop that in your project, and then to use it, see these examples:
Model.all(:column.ilike => '%foo%')
Model.all(:column.similar => '(%foo%)|(%bar%)')

Runtime changing model with mongodb/mongoid

I have to add several fields to a Mongoid model. I know there are no migrations with MongoDB, but if I go on without dropping the DB (making Rails "regenerate" the DB entirely), it doesn't display or use the new fields at all!
What's the best way to go here? Is there something softer than dropping and recreating the MongoDB database?
Thanks in advance
luca
In general it should be possible to update old documents with the new fields at runtime. There is no need for migrations in MongoDB.
You may want to write rake tasks to update your old documents with the new fields and default values.
You can find those documents by checking for new fields that still have a nil value.
Update
Easy style:
If you define a new field with a default value, this value will be used until you set a new one:
app/models/my_model.rb
class MyModel
  include Mongoid::Document

  field :name, type: String
  field :data, type: String

  # NEW FIELD
  field :note, type: String, default: "no note given so far!"
end
If you query your database, you should get the default value for documents that didn't have this field before your extension:
(rails console)
MyModel.first
#=> #<MyModel …other fields…, note: "no note given so far!">
I tested this on a fresh Rails stack with a current Mongoid on Ruby 1.9.2; it should work with other stacks, too.
More complicated/complex style:
If you didn't set a default value, you'll get nil for this new field.
app/models/my_model.rb
class MyModel
  include Mongoid::Document

  field :name, type: String
  field :data, type: String

  # NEW FIELD
  field :note, type: String
end
(rails console)
MyModel.first
#=> #<MyModel …other fields…, note: nil>
Then you could set up a rake task and migration file like in this example:
lib/tasks/my_model_migration.rake:
namespace :mymodel do
  desc "MyModel migration task"
  task :migrate => :environment do
    require "./db/migrate.rb"
  end
end
db/migrate.rb:
olds = MyModel.where(note: nil)
# Enumerator of documents without a valid :note field (= nil)

olds.each do |doc|
  doc.note = "(migration) no note given yet"
  # or whatever your desired default value should be

  doc.save! rescue puts "Could not modify doc #{doc.id}/#{doc.name}"
  # the rescue is only a failsafe statement if something goes wrong
end
Run this migration with rake mymodel:migrate.
This is only a starting point and you can extend this to a full mongoid migration engine.
The task :migrate => :environment do … is necessary, otherwise rake won't load models.
It is a little ridiculous to say that you don't need migrations with mongodb or mongoid. Any sophisticated app needs to be refactored from time to time and that can mean pulling fields out of disparate documents into a new one.
Writing one-off rake tasks is way less convenient and more error-prone than having migrations be part of your deploy script so that they always get run in every environment.
https://github.com/adacosta/mongoid_rails_migrations brings AR style migrations to mongoid.
You might need them less often, but you will certainly need them as an app grows.
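A rough sketch of what such a migration might look like (assuming the gem mirrors ActiveRecord's migration API, as its README suggests; class and field names are illustrative):
class BackfillNotes < Mongoid::Migration
  def self.up
    # Backfill the new field on existing documents
    MyModel.where(note: nil).each do |doc|
      doc.update_attribute(:note, "no note given so far!")
    end
  end

  def self.down
    # Nothing to undo; the field simply keeps its backfilled value
  end
end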
Here is a nice example of a data migration script with Mongoid and the Ruby mongo driver, to be used when your updated model no longer matches production data:
http://pivotallabs.com/users/lee/blog/articles/1548-mongoid-migrations-using-the-mongo-driver
I wish we would stop using "no migrations with Mongoid" as a slogan. It turns people to MongoDB for the wrong reasons, and it's only partially true. No schema, true, but data still needs to be maintained, which IMO is harder with MongoDB than with RDBMSs. There are other, great reasons for choosing MongoDB, and it depends on your problem.
