My goal is to test the types of my GraphQL schema in Ruby; I'm using the graphql-ruby gem.
I couldn't find any best practice for this, so I'm wondering what the best way is to test the fields and types of a schema.
The gem's documentation recommends against testing the schema directly (http://graphql-ruby.org/schema/testing.html), but I still find it valuable to be able to know when the schema changes unexpectedly.
Having a type like this:
module Types
  class DeskType < GraphQL::Schema::Object
    field :id, ID, 'Id of this Desk', null: false
    field :location, String, 'Location of the Desk', null: false
    field :custom_id, String, 'Human-readable unique identifier for this desk', null: false
  end
end
My first approach has been to use the fields hash in the GraphQL::Schema::Object type, for example:
Types::DeskType.fields['location'].type.to_s # => 'String!'
Creating an RSpec matcher, I could come up with tests that look like this:
RSpec.describe Types::DeskType do
  it 'has the expected schema fields' do
    fields = {
      'id': 'ID!',
      'location': 'String!',
      'customId': 'String!'
    }

    expect(described_class).to match_schema_fields(fields)
  end
end
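For reference, here is a minimal sketch of what such a custom matcher could look like. match_schema_fields is not something provided by graphql-ruby or RSpec, and the exact fields API can differ between graphql-ruby versions, so treat this as an illustration:
RSpec::Matchers.define :match_schema_fields do |expected_fields|
  # Build a name => type-signature hash from the type's own field definitions
  def actual_fields(type_class)
    type_class.fields.transform_values { |field| field.type.to_s }
  end

  match do |type_class|
    actual_fields(type_class) == expected_fields.transform_keys(&:to_s)
  end

  failure_message do |type_class|
    "expected #{type_class} to expose #{expected_fields}, got #{actual_fields(type_class)}"
  end
end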
This approach has some drawbacks though:
The code in the matcher depends on the internals of GraphQL::Schema::Object, so any breaking change there will break the test suite after an update.
We're repeating code: the tests assert the same fields that the type already declares.
Writing these tests gets tedious, which makes devs less likely to write them.
It looks like you want to test your schema because you want to know when a change is going to break the client. Basically, you should avoid testing the schema directly.
Instead you can use a gem like graphql-schema_comparator to print breaking changes.
I suggest having a rake task for dumping your schema and committing the dump to your repo (a sketch of such a task follows below).
You can write a spec that checks whether the schema dump is current - that way you will always have an up-to-date schema dump.
Set up your CI to compare the schema of the current branch with the schema on the master branch.
Fail the build if the schema has dangerous or breaking changes.
You can even generate a schema changelog using schema_comparator ;) Or you can send any schema changes to Slack notifications so your team can easily track them.
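A minimal sketch of such a dump task, assuming your schema class is called MySchema and you are happy dumping to schema.graphql at the project root (both the class name and the path are assumptions):
# lib/tasks/graphql.rake
namespace :graphql do
  desc 'Dump the GraphQL schema definition to schema.graphql'
  task dump_schema: :environment do
    path = Rails.root.join('schema.graphql')
    File.write(path, MySchema.to_definition) # SDL string for the whole schema
    puts "Schema dumped to #{path}"
  end
end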
What I feel is an improvement over the first approach I took is to use snapshot testing for the GraphQL schema. Instead of testing each of the types/mutations one by one, I created a single test:
RSpec.describe MySchema do
  it 'renders the full schema' do
    schema = GraphQL::Schema::Printer.print_schema(MySchema)

    expect(schema).to match_snapshot('schema')
  end
end
This approach uses a slightly modified version of the rspec-snapshot gem; see my PR here.
The gem doesn't let you update the snapshot with a single command like Jest does, so I also created a rake task to delete the current snapshot:
namespace :tests do
  desc 'Deletes the schema snapshot'
  task delete_schema_snapshot: :environment do
    snapshot_path = Rails.root.join('spec', 'fixtures', 'snapshots', 'schema.snap')
    File.delete(snapshot_path) if File.exist?(snapshot_path)
  end
end
With this you'll get a pretty RSpec diff when the schema has been modified.
The top-level Schema object has an #execute method. You can use this to write tests like
RSpec.describe MySchema do
  it 'fetches an object' do
    id = 'Zm9vOjE'
    query = <<~GRAPHQL
      query GetObject($id: ID!) {
        node(id: $id) { __typename id }
      }
    GRAPHQL

    res = described_class.execute(
      query,
      variables: { id: id }
    )

    expect(res['errors']).to be_nil
    expect(res['data']['node']['__typename']).to eq('Foo')
    expect(res['data']['node']['id']).to eq(id)
  end
end
The return value of the #execute method will be the conventional HTTP-style response, as a string-keyed hash. (Actually it's a GraphQL::Query::Result, but it delegates most things to an embedded hash.)
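A quick illustrative check of that shape (a sketch; the exact keys depend on your schema and on whether the query produced errors):
res = MySchema.execute('{ __typename }')
res.class   # => GraphQL::Query::Result
res['data'] # string-keyed access is delegated to the underlying hash
res.to_h    # plain hash, e.g. { "data" => { "__typename" => "Query" } } for a root type named Query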
Related
I am working on an app that allows Members to take a survey (Member has a one-to-many relationship with Response). Response holds the member_id, question_id, and the member's answer.
The survey is submitted all or nothing, so if there are any records in the Response table for a Member, they have completed the survey.
My question is: how do I rewrite the query below so that it actually works? In SQL this would be a prime candidate for the EXISTS keyword.
def surveys_completed
  members.where(responses: !nil).count
end
You can use includes and then test whether the related responses exist, like this:
def surveys_completed
  members.includes(:responses).where('responses.id IS NOT NULL')
end
Here is an alternative, with joins:
def surveys_completed
  members.joins(:responses)
end
The solution using Rails 4:
def surveys_completed
  members.includes(:responses).where.not(responses: { id: nil })
end
Alternative solution using activerecord_where_assoc:
This gem does exactly what is asked here: it uses EXISTS to do the condition.
It works with Rails 4.1 up to the most recent versions.
members.where_assoc_exists(:responses)
It can also do much more!
Similar questions:
How to query a model based on attribute of another model which belongs to the first model?
association named not found perhaps misspelled issue in rails association
Rails 3, has_one / has_many with lambda condition
Rails 4 scope to find parents with no children
Join multiple tables with active records
You can use the SQL EXISTS keyword in an elegant Rails-ish manner using the Where Exists gem:
members.where_exists(:responses).count
Of course you can use raw SQL as well:
members.where("EXISTS" \
"(SELECT 1 FROM responses WHERE responses.member_id = members.id)").
count
You can also use a subquery:
members.where(id: Response.select(:member_id))
In comparison to something with includes it will not load the associated models (which is a performance benefit if you do not need them).
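For illustration, the SQL generated by the subquery version looks roughly like this (a sketch; exact quoting depends on the adapter, and the members association adds its own scoping conditions):
members.where(id: Response.select(:member_id)).to_sql
# => SELECT "members".* FROM "members"
#    WHERE "members"."id" IN (SELECT "responses"."member_id" FROM "responses")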
If you are on Rails 5 or above, you should use left_joins. Otherwise a manual "LEFT OUTER JOIN" will also work. This is more performant than using includes as mentioned in https://stackoverflow.com/a/18234998/3788753. includes will attempt to load the related objects into memory, whereas left_joins builds a "LEFT OUTER JOIN" query.
def surveys_completed
  members.left_joins(:responses).where.not(responses: { id: nil })
end
Even if there are no related records (like the query above where you are finding by nil) includes still uses more memory. In my testing I found includes uses ~33x more memory on Rails 5.2.1. On Rails 4.2.x it was ~44x more memory compared to doing the joins manually.
See this gist for the test:
https://gist.github.com/johnathanludwig/96fc33fc135ee558e0f09fb23a8cf3f1
where.missing (Rails 6.1+)
Rails 6.1 introduces a new way to check for the absence of an association - where.missing.
Please, have a look at the following code snippet:
# Before:
Post.left_joins(:author).where(authors: { id: nil })
# After:
Post.where.missing(:author)
And this is an example of SQL query that is used under the hood:
Post.where.missing(:author)
# SELECT "posts".* FROM "posts"
# LEFT OUTER JOIN "authors" ON "authors"."id" = "posts"."author_id"
# WHERE "authors"."id" IS NULL
Note that where.missing finds the records without the association (the has_many here is :responses, so that is the name to pass), so applied to this question it gives you the members who have not completed the survey:
def surveys_not_completed
  members.where.missing(:responses).count
end
Thanks.
Sources:
where.missing official docs.
Pull request.
Article from the Saeloun blog.
Notes:
where.associated - a counterpart for checking for the presence of an association - is also available starting from Rails 7.
See the official docs and this answer.
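Since the original surveys_completed is about members that do have responses, that presence counterpart is the direct fit; a minimal sketch, assuming Rails 7+ and the has_many :responses association from the question:
def surveys_completed
  members.where.associated(:responses).count
end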
Please consider the following code:
class MyModel
  validate :my_validation unless ENV["RAILS_ENV"] == "test"
end
We have a validation that is going to have a major effect on HUGE parts of the test suite. I only want it to be executed in production, not when running the test suite*... EXCEPT for the actual tests regarding this validation.
So when testing the validation I need to set ENV["RAILS_ENV"] to something other than test. I tried this in my my_model_spec.rb file:
it "tests the validation" do
ENV["RAILS_ENV"] = "development"
# Tests the validation..
ENV["RAILS_ENV"] = "test"
end
This sets the variable while in the spec file, BUT where the check is made in my_model.rb, ENV["RAILS_ENV"] still returns "test".
Is there a way to set ENV["RAILS_ENV"] in the spec file and have it still set when the model code is executed during the example run?
* Yes yes, please believe me, we have this under control (... I think :D). It is during a maintenance window.
Obligatory:
validate :my_validation unless ENV["RAILS_ENV"] == "test"
In 99.9% of cases, this is really not a good idea.
Just felt I needed to make that clear, in case future readers see this post and get funny ideas... (It would be much better to update the test suite to remain valid, e.g. by changing the factories.)
Is there a way to achieve the declaration of ENV["RAILS_ENV"] in the SPEC-file
Yes - you can stub the value:
allow(ENV).to receive(:[]).with('RAILS_ENV').and_return('development')
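If other parts of the code read different ENV keys during the example, you can keep those lookups working and override only RAILS_ENV; a small variation using RSpec's and_call_original:
# Let every other ENV[...] lookup behave normally, override only RAILS_ENV
allow(ENV).to receive(:[]).and_call_original
allow(ENV).to receive(:[]).with('RAILS_ENV').and_return('development')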
There are also some other approaches you could consider.
For example, why not call the method directly, for the purpose of running this test?
record = MyModel.new # or using FactoryBot.build / whatever
record.my_validation
Or, you could add a model attribute to forcibly run the validation:
class MyModel
  attr_accessor :run_my_validation

  validate :my_validation, if: -> { ENV["RAILS_ENV"] != "test" || run_my_validation }
end

# and in the test:
record = MyModel.new # or using FactoryBot.build / whatever
record.run_my_validation = true
expect(record.valid?).to be_truthy
Yet another approach you could consider, to remove the Rails environment check from the production code, would be to use an environment-specific configuration value. Which, again, you could stub in the spec:
class MyModel
  validate :my_validation, if: -> { Rails.configuration.run_my_model_validation }
end
# and in the test:
allow(Rails.configuration).to receive(:run_my_model_validation).and_return(true)
Another benefit to the above is that you could enable the validation in development mode, without making any code change to the application.
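A minimal sketch of how that configuration value could be set per environment (the run_my_model_validation key comes from the snippet above; the file layout is the standard Rails one):
# config/environments/production.rb
Rails.application.configure do
  config.run_my_model_validation = true
end

# config/environments/test.rb (and development.rb, if you want it off there too)
Rails.application.configure do
  config.run_my_model_validation = false
end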
I'll be brief with the code samples, as all of my tests pass except the one below. I got it to pass by changing things up a bit, but I'm not sure why version 1 fails and version 2 works.
My model:
# app/models/person.rb
class Person
  include Mongoid::Document
  # other fields omitted for brevity

  validates :contact_number, uniqueness: true
end
Model spec
# spec/models/person_spec.rb
require 'spec_helper'

describe Person do
  it 'is a valid factory' do
    create(:person).should be_valid # passes
  end

  it 'has a unique phone number' do
    create(:person)
    build(:person).should_not be_valid # fails
  end

  it 'also has a unique phone number' do
    person1 = create(:person)
    person2 = person1.dup
    person2.should_not be_valid # passes
  end
end
As far as I can tell, the two uniqueness tests should be doing the same thing, however one passes and one fails.
If it matters, I am using mongoid, though I don't think that should have any effect. I'm also not doing anything with nested contexts or describes in my test, so I think the scope is correct. Any insight is appreciated.
UPDATE 1: I realized in my factory I am adding an initialize_with block like this:
initialize_with { Person.find_or_create_by(contact_number: contact_number) }
I realized that this may be the reason the validation was failing -- I was just getting the same person back. However, commenting out that line gives the following error:
Mongoid::Errors::Validations:
  Problem:
    Validation of Person failed.
  Summary:
    The following errors were found: Contact number is already taken
  Resolution:
    Try persisting the document with valid data or remove the validations.
Which, in theory is good, I suppose, since it won't let me save a second person with the same contact number, but I'd prefer my test to pass.
Probably your person factory has a sequence on contact_number, producing a different contact_number for each person.
Just realize that build(:person) doesn't validate; the validation occurs only on create.
I strongly suggest using shoulda-matchers for this kind of validation.
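For illustration, a minimal sketch of what that could look like, assuming shoulda-matchers is set up for RSpec (note that validate_uniqueness_of targets ActiveRecord; under Mongoid you may need an equivalent matcher library):
# spec/models/person_spec.rb
describe Person do
  it { should validate_uniqueness_of(:contact_number) }
end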
It is possible that your database is being cleaned (do you have database_cleaner in your Gemfile?), or your tests are not being run in the order you think they are (check for :random in your spec_helper.rb).
While the above answer regarding shoulda-matchers will help you run this particular test more concisely in RSpec, you probably want your unique-phone-number test to be runnable completely on its own, without relying on another spec having executed. Your second test is an example of an Obscure Test, and also a little bit of a Mystery Guest (http://robots.thoughtbot.com/mystery-guest): it's not clear from the test code what is actually being tested. The phone number is defined in another file (the factory), and the prior data setup happens in another spec elsewhere in the file.
Your second test is already better because it shows more explicitly what you're testing and doesn't rely on another spec having been run. I would actually write it like this to make it more explicit:
it 'has a unique phone number' do
  person1 = create(:person, contact_number: '555-123-4567')
  person2 = build(:person, contact_number: '555-123-4567')

  # can use 'should' here instead
  expect(person2).not_to be_valid
end
If you don't explicitly make it about the phone number, then if you change your factory this test might start failing even though your code is still sound. In addition, if you have other attributes for which you are validating uniqueness, your previous test might pass even though the phone number validation is missing.
I figured it out! On a whim, I checked the test database and noticed that a Person object was lingering around. So it actually wasn't the build(:person).should_not be_valid line that was raising the Mongoid exception; it was the create call on the line before. Clearing out the DB and running the spec again passed, but again the data was persisting. I double-checked my spec_helper.rb file and noticed I wasn't calling start on DatabaseCleaner. My updated spec_helper.rb looks like this, and now everything works:
# Clean up the database
require 'database_cleaner'

config.mock_with :rspec

config.before(:suite) do
  DatabaseCleaner.strategy = :truncation
  DatabaseCleaner.orm = "mongoid"
end

config.before(:each) do
  DatabaseCleaner.start
end

config.after(:each) do
  DatabaseCleaner.clean
end
I have 2 models, an example:
class Report ...
  belongs_to :answer_sheet
end

class AnswerSheet ...
  has_one :report
end
When I do a:
@answersheet.report = Report.create(:data => 'bleah')
@answersheet.save

# and then create another report and assign it to the same @answersheet
# (assuming at this stage @answersheet has already been reloaded)
@answersheet.report = Report.create(:data => 'new data')
@answersheet.save

# (irb) @answersheet.report returns the first report with the data 'bleah' and not
# the one with the new data.
Is this supposed to be the correct behavior?
If I want to update the association to the later report, how should I go about doing it?
It took me a few tries to see what you were talking about, but I got it now.
Take a look at the SQL and you'll find ActiveRecord is doing a select and then adding ASC and LIMIT 1. There can be more than one report record referring to the same answer_sheet.
You can prevent this situation by adding a validation that checks the uniqueness of answer_sheet_id.
You should also start using save! and create! (note the bang) so exceptions are raised when validation fails.
Lastly, calling Report.create followed by @answersheet.save performs two database transactions, whereas Report.new followed by @answersheet.save would perform just one.
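To actually end up with the later report as the associated one, one option is to replace the existing record explicitly; a minimal sketch under the model definitions above (create_report is the builder generated by has_one; destroying the old report first is the conservative route, since whether its foreign key gets nullified automatically varies between Rails versions):
@answersheet.report.destroy if @answersheet.report   # remove the stale report first
@answersheet.create_report(:data => 'new data')      # builds, assigns and saves the replacement
@answersheet.reload.report                           # => the report with 'new data'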
I have to add several fields to a Mongoid model. I know there are no migrations with MongoDB, but if I go on without dropping the DB (making Rails "regenerate" the DB entirely), it doesn't display or use the new fields at all!
What's the best way to go here? Is there something softer than dropping and recreating the MongoDB database?
Thanks in advance
luca
In general it should be possible to update old documents with the new fields at runtime. There is no need for migrations in MongoDB.
You may want to write rake tasks to update your old documents with the new fields and default values.
You can find those documents by checking for the new fields, which have a nil value by default.
Update
Easy style:
If you define a new field with a default value, that value will be used as long as you haven't set a different one:
app/models/my_model.rb
class MyModel
  include Mongoid::Document

  field :name, type: String
  field :data, type: String

  # NEW FIELD
  field :note, type: String, default: "no note given so far!"
end
If you query your database you should get your default value for documents which didn't have this field before your extension:
(rails console)
MyModel.first
#=> #<MyModel …other fields…, note: "no note given so far!">
I tested this with a fresh rails stack with a current mongoid on Ruby 1.9.2 - should work with other stacks, too.
More complicated/complex style:
If you didn't set a default value, you'll get nil for this new field.
app/models/my_model.rb
class MyModel
  include Mongoid::Document

  field :name, type: String
  field :data, type: String

  # NEW FIELD
  field :note, type: String
end
(rails console)
MyModel.first
#=> #<MyModel …other fields…, note: nil>
Then you could set up a rake task and migration file like in this example:
lib/tasks/my_model_migration.rake:
namespace :mymodel do
  desc "MyModel migration task"
  task :migrate => :environment do
    require "./db/migrate.rb"
  end
end
db/migrate.rb:
# Enumerator of documents without a valid :note field (= nil)
olds = MyModel.where(note: nil)

olds.each do |doc|
  doc.note = "(migration) no note given yet"
  # or whatever your desired default value should be
  doc.save! rescue puts "Could not modify doc #{doc.id}/#{doc.name}"
  # the rescue is only a failsafe statement if something goes wrong
end
Run this migration with rake mymodel:migrate.
This is only a starting point and you can extend this to a full mongoid migration engine.
The task :migrate => :environment do … part is necessary; otherwise rake won't load the models.
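For large collections, iterating and saving each document can be slow; a hedged alternative is a single criteria-level update (note that this skips per-document validations and callbacks, which may or may not be acceptable for your data):
# One bulk update instead of the per-document loop above
MyModel.where(note: nil).update_all(note: "(migration) no note given yet")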
It is a little ridiculous to say that you don't need migrations with MongoDB or Mongoid. Any sophisticated app needs to be refactored from time to time, and that can mean pulling fields out of disparate documents into a new one.
Writing one-off rake tasks is far less convenient and more error-prone than having migrations be part of your deploy script, so that they always get run in every environment.
https://github.com/adacosta/mongoid_rails_migrations brings AR-style migrations to Mongoid.
You might need them less often, but you will certainly need them as an app grows.
Here is a nice code example of a data migration script with Mongoid and the Ruby Mongo driver - to be used when your updated model no longer matches production data:
http://pivotallabs.com/users/lee/blog/articles/1548-mongoid-migrations-using-the-mongo-driver
I wish we would stop using "no migrations with Mongoid" as a slogan. It'll turn people to MongoDB for the wrong reasons, and it's only partially true. No schema, true, but data still needs to be maintained, which IMO is harder with MongoDB than with RDBMSs. There are other, great reasons for choosing MongoDB, and it depends on your problem.