Recently I have been trying out Hanami, a Ruby framework. I would like to execute a migration with a "bulk insert".
I checked the following issue discussion:
Proposal: multi_create method for bulk records #406
But I don't understand how to call the ROM object from Hanami. Could you please explain how to do that, or point me to a website I can refer to?
I finally understood what the code means.
At first, I wrote bulk_insert as an instance method.
somes represents the SQL table's name; it can also be passed as a symbol.
Repository sample:
class SomeRepository < Hanami::Repository
  def bulk_insert(data)
    command(:create, somes, use: [:timestamps], result: :many).call(data)
  end
end
Bulk insert sample:
# we can pass an array of hashes
SomeRepository.new.bulk_insert(some_array)
SomeRepository.new.bulk_insert([{name: "sample1"}, {name: "sample2"}, {name: "sample3"}])
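If it helps, here is a hedged note on the return value: with result: :many the command should return the created records as a collection (the exact entity type depends on your ROM/Hanami configuration, so treat this sketch as an assumption rather than guaranteed behavior).
# Hedged sketch: result: :many returns the created records
created = SomeRepository.new.bulk_insert([{name: "sample1"}, {name: "sample2"}])
created.map(&:name) # expected: ["sample1", "sample2"]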
Related
I am working on an app that allows Members to take a survey (Member has a one to many relationship with Response). Response holds the member_id, question_id, and their answer.
The survey is submitted all or nothing, so if there are any records in the Response table for that Member they have completed the survey.
My question is, how do I re-write the query below so that it actually works? In SQL this would be a prime candidate for the EXISTS keyword.
def surveys_completed
  members.where(responses: !nil).count
end
You can use includes and then test whether the related response(s) exist, like this:
def surveys_completed
  members.includes(:responses).where('responses.id IS NOT NULL')
end
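As a side note (my addition, not part of the original answer): on Rails 4.1 and later, a raw SQL string that mentions an included table generally needs an explicit references call so Active Record knows to JOIN instead of loading the association in separate queries:
def surveys_completed
  members.includes(:responses)
         .where('responses.id IS NOT NULL')
         .references(:responses)
end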
Here is an alternative, with joins:
def surveys_completed
  members.joins(:responses)
end
The solution using Rails 4:
def surveys_completed
  members.includes(:responses).where.not(responses: { id: nil })
end
Alternative solution using activerecord_where_assoc:
This gem does exactly what is asked here: it uses EXISTS to do the condition.
It works with Rails 4.1 up to the most recent versions.
members.where_assoc_exists(:responses)
It can also do much more!
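For instance, the gem also accepts conditions on the association; a small sketch (the question_id value here is only an illustration):
# Members with at least one response to a specific question
members.where_assoc_exists(:responses, question_id: 42)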
Similar questions:
How to query a model based on attribute of another model which belongs to the first model?
association named not found perhaps misspelled issue in rails association
Rails 3, has_one / has_many with lambda condition
Rails 4 scope to find parents with no children
Join multiple tables with active records
You can use the SQL EXISTS keyword in an elegant, Rails-ish manner using the Where Exists gem:
members.where_exists(:responses).count
Of course you can use raw SQL as well:
members.where("EXISTS" \
"(SELECT 1 FROM responses WHERE responses.member_id = members.id)").
count
You can also use a subquery:
members.where(id: Response.select(:member_id))
In comparison to something with includes it will not load the associated models (which is a performance benefit if you do not need them).
If you are on Rails 5 or above you should use left_joins. Otherwise a manual "LEFT OUTER JOIN" will also work. This is more performant than using includes as mentioned in https://stackoverflow.com/a/18234998/3788753: includes will attempt to load the related objects into memory, whereas left_joins will build a "LEFT OUTER JOIN" query.
def surveys_completed
  members.left_joins(:responses).where.not(responses: { id: nil })
end
Even if there are no related records (like the query above, where you are finding by nil), includes still uses more memory. In my testing I found includes uses ~33x more memory on Rails 5.2.1; on Rails 4.2.x it was ~44x more memory compared to doing the joins manually.
See this gist for the test:
https://gist.github.com/johnathanludwig/96fc33fc135ee558e0f09fb23a8cf3f1
where.missing (Rails 6.1+)
Rails 6.1 introduces a new way to check for the absence of an association - where.missing.
Please, have a look at the following code snippet:
# Before:
Post.left_joins(:author).where(authors: { id: nil })
# After:
Post.where.missing(:author)
And this is an example of SQL query that is used under the hood:
Post.where.missing(:author)
# SELECT "posts".* FROM "posts"
# LEFT OUTER JOIN "authors" ON "authors"."id" = "posts"."author_id"
# WHERE "authors"."id" IS NULL
As a result, your particular case can be rewritten as follows:
def surveys_completed
  members.where.missing(:responses).count
end
Thanks.
Sources:
where.missing official docs.
Pull request.
Article from the Saeloun blog.
Notes:
where.associated, a counterpart for checking for the presence of an association, is also available starting from Rails 7.
See the official docs and this answer.
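For completeness, here is a small hedged example of where.associated applied to this question's models (note that the INNER JOIN it performs can yield duplicate members for a has_many, hence the distinct):
# Members who have at least one response (Rails 7+)
members.where.associated(:responses).distinct.count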
My goal is to test the types of my GraphQL schema in ruby, I'm using the graphql-ruby gem.
I couldn't find any best practice for this, so I'm wondering what's the best way to test the fields and types of a Schema.
The gem recommends against testing the schema directly (http://graphql-ruby.org/schema/testing.html), but I still find it valuable to be able to know when the schema changes unexpectedly.
Having a type like this:
module Types
  class DeskType < GraphQL::Schema::Object
    field :id, ID, 'Id of this Desk', null: false
    field :location, String, 'Location of the Desk', null: false
    field :custom_id, String, 'Human-readable unique identifier for this desk', null: false
  end
end
My first approach has been to use the fields hash in the GraphQL::Schema::Object type, for example:
Types::DeskType.fields['location'].type.to_s # => 'String!'
Creating an RSpec matcher, I could come up with tests that look like this:
RSpec.describe Types::DeskType do
  it 'has the expected schema fields' do
    fields = {
      'id': 'ID!',
      'location': 'String!',
      'customId': 'String!'
    }
    expect(described_class).to match_schema_fields(fields)
  end
end
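match_schema_fields is not a built-in matcher; one possible sketch of it, assuming the fields hash behaves as shown above (string keys, field objects responding to type), could look like this:
RSpec::Matchers.define :match_schema_fields do |expected|
  match do |type_class|
    actual = type_class.fields.transform_values { |field| field.type.to_s }
    actual == expected.transform_keys(&:to_s)
  end

  failure_message do |type_class|
    actual = type_class.fields.transform_values { |field| field.type.to_s }
    "expected #{type_class} to expose #{expected}, got #{actual}"
  end
end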
This approach has some drawbacks though:
The code in the matcher depends on the implementation of the GraphQL::Schema::Object class; any breaking change there will break the test suite after an update.
We're repeating code: the tests assert the same fields already defined in the type.
Writing these tests gets tedious, and that makes devs less likely to write them.
It looks like you want to test your schema because you want to know if it is going to break the client. Basically, you should avoid this.
Instead you can use gems like graphql-schema_comparator to print breaking changes.
I suggest having a rake task for dumping your schema and committing the dump in your repo; a sketch of such a task follows below.
You can write a spec that checks whether the schema dump is up to date, so you always have a current dump committed.
Set up your CI to compare the schema of the current branch with the schema on the master branch.
Fail your build if the schema has dangerous or breaking changes.
You can even generate a schema changelog using the schema comparator ;) Or you can send any schema changes to Slack notifications so your team can easily track them.
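As an illustration of the dump task, here is a minimal sketch (MySchema and the output path are assumptions to adapt to your app):
namespace :graphql do
  desc 'Dump the GraphQL schema to schema.graphql'
  task dump_schema: :environment do
    schema_path = Rails.root.join('schema.graphql')
    File.write(schema_path, GraphQL::Schema::Printer.print_schema(MySchema))
  end
end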
What I feel is an improvement over my first approach is to use snapshot testing for the GraphQL schema: instead of testing each of the types/mutation schemas one by one, I created a single test:
RSpec.describe MySchema do
  it 'renders the full schema' do
    schema = GraphQL::Schema::Printer.print_schema(MySchema)
    expect(schema).to match_snapshot('schema')
  end
end
This approach uses a slightly modified version of the rspec-snapshot gem; see my PR here.
The gem doesn't let you update the snapshot with a single command like Jest does, so I also created a rake task to delete the current snapshot:
namespace :tests do
  desc 'Deletes the schema snapshot'
  task delete_schema_snapshot: :environment do
    snapshot_path = Rails.root.join('spec', 'fixtures', 'snapshots', 'schema.snap')
    File.delete(snapshot_path) if File.exist?(snapshot_path)
  end
end
With this you'll get a pretty RSpec diff when the schema has been modified.
The top-level Schema object has an #execute method. You can use this to write tests like the following:
RSpec.describe MySchema do
  it 'fetches an object' do
    id = 'Zm9vOjE'
    query = <<~GRAPHQL
      query GetObject($id: ID!) {
        node(id: $id) { __typename id }
      }
    GRAPHQL

    res = described_class.execute(
      query,
      variables: { id: id }
    )

    expect(res['errors']).to be_nil
    expect(res['data']['node']['__typename']).to eq('Foo')
    expect(res['data']['node']['id']).to eq(id)
  end
end
The return value of the #execute method will be the conventional HTTP-style response, as a string-keyed hash. (Actually it's a GraphQL::Query::Result, but it delegates most things to an embedded hash.)
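If your resolvers rely on context (current user, permissions, and so on), the same execute call accepts a context hash; a small sketch, where current_user and user are assumptions about your app:
res = described_class.execute(
  query,
  variables: { id: id },
  context: { current_user: user }
)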
First time here, so I'll try to be as readable as possible. I have a test in a feature file which uses a data table for sorting some data, as seen below:
Current cucumber test example
Currently I am using scenario.test_steps.map(&:name) to get all the steps in an array (this is necessary because of an integration with an application lifecycle management tool), and this is what I get:
Cucumber steps got in the hooks file
My question is: is it possible to get the data table information in the Before do |scenario| hook in the hooks file?
Thanks in advance to anyone who helps!
When iterating through scenario.test_steps, each test step has an associated Cucumber::Core::Ast::Step. This contains the step specific information such as the step name, data table, etc. The associated Ast::Step will be the last element of the test step's source:
test_step.source
#=> [
#=>   #<Cucumber::Core::Ast::Feature "Feature: Something" (features/something.feature:1)>,
#=>   #<Cucumber::Core::Ast::Scenario "Scenario: Only a test" (features/something.feature:3)>,
#=>   #<Cucumber::Core::Ast::Step "Given : the fields" (features/something.feature:4)>
#=> ]
To access the Ast::Step multi-line argument, check the multiline_arg. If a data table has been specified, an Ast::DataTable will be returned. Otherwise, an Ast::EmptyMultilineArgument will be returned. You can check if the returned value is a data table by calling data_table?.
As an example, the below would iterate through each test step and output the data table if defined:
Before do |scenario|
  scenario.test_steps.each do |test_step|
    multiline_arg = test_step.source.last.multiline_arg
    puts multiline_arg.raw if multiline_arg.data_table?
  end
end
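Building on the example above, you could also collect the tables keyed by step name, which may be handy for the lifecycle-manager integration mentioned in the question (a sketch only; adjust to your Cucumber version):
Before do |scenario|
  tables_by_step = scenario.test_steps.each_with_object({}) do |test_step, acc|
    arg = test_step.source.last.multiline_arg
    acc[test_step.name] = arg.raw if arg.data_table?
  end
  # tables_by_step now maps each step name to its raw data table rows
end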
EDIT: Fixed. For Ruby, use "insert_all" instead of the "insertAll" that the API reference specifies; the Ruby API docs need updating.
I'm using v0.6.4 of the google-api-client gem and trying to create a streaming insert, but keep getting the following error:
google_bigquery.rb:233:in undefined method `insertAll' for #<Google::APIClient::Resource:0xcbc974 NAME:tabledata> (NoMethodError)
My code is as follows:
def streaming_insert_data_in_table(table, dataset=DATASET)
  body = {
    "rows" => [
      { "json" => { "person_id" => 1, "name" => "john" } },
      { "json" => { "person_id" => 2, "name" => "doe" } }
    ]
  }
  result = @client.execute(
    :api_method => @bigquery.tabledata.insert_all,
    :parameters => {
      :projectId => @project_id.to_s,
      :datasetId => dataset,
      :tableId => table
    },
    :body_object => body
  )
  puts result.body
end
Could someone tell me whether insertAll has been implemented in the google-api-client gem? I have tried 'insert', as that is what table, dataset, etc. use, and I get the same error. I can, however, run tabledata.list perfectly fine. I've tried digging through the gem source code and didn't get anywhere with that.
Is the body object that I created correct or do I need to alter it?
Any help is much appreciated.
Thanks in advance and have a great day.
OK, so I fixed it and updated the code in the question. For Ruby, the method is called "insert_all". Also note that the table and schema must be created BEFORE calling insert_all. This is different from the "jobs.insert" method, which will create the table if it doesn't exist.
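For reference, creating the table (and its schema) beforehand could look roughly like this with the same discovery-based client; the schema fields below are only an illustration, so double-check against the tables.insert reference:
def create_table(table, dataset=DATASET)
  result = @client.execute(
    :api_method => @bigquery.tables.insert,
    :parameters => { :projectId => @project_id.to_s, :datasetId => dataset },
    :body_object => {
      "tableReference" => {
        "projectId" => @project_id.to_s,
        "datasetId" => dataset,
        "tableId" => table
      },
      "schema" => {
        "fields" => [
          { "name" => "person_id", "type" => "INTEGER" },
          { "name" => "name", "type" => "STRING" }
        ]
      }
    }
  )
  puts result.body
end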
Is there any way that I can fire a raw mongo query directly in Ruby instead of converting them to the native Ruby objects?
I went through Ruby Mongo Tutorial, but I cannot find such a method anywhere.
If it were MySQL, I would fire a query something like this:
ActiveRecord::Base.connection.execute("Select * from foo")
My mongo query is a bit large and it is properly executing in the MongoDB console. What I want is to directly execute the same inside Ruby code.
Here's a (possibly) better mini-tutorial on how to get directly into the guts of your MongoDB. This might not solve your specific problem but it should get you as far as the MongoDB version of SELECT * FROM table.
First of all, you'll want a Mongo::Connection object. If
you're using MongoMapper then you can call the connection
class method on any of your MongoMapper models to get a connection
or ask MongoMapper for it directly:
connection = YourMongoModel.connection
connection = MongoMapper.connection
Otherwise I guess you'd use the from_uri constructor to build
your own connection.
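A minimal sketch of that, assuming a standard connection URI (adjust host, port and credentials to your setup):
connection = Mongo::Connection.from_uri('mongodb://localhost:27017')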
Then you need to get your hands on a database. You can do this using the array access notation, the db method, or by getting the current one straight from MongoMapper:
db = connection['database_name'] # This does not support options.
db = connection.db('database_name') # This does support options.
db = MongoMapper.database # This should be configured like
# the rest of your app.
Now you have a nice shiny Mongo::DB instance in your hands.
But, you probably want a Collection to do anything interesting
and you can get that using either array access notation or the
collection method:
collection = db['collection_name']
collection = db.collection('collection_name')
Now you have something that behaves sort of like an SQL table so
you can count how many things it has or query it using find:
cursor = collection.find(:key => 'value')
cursor = collection.find({:key => 'value'}, :fields => ['just', 'these', 'fields'])
# etc.
And now you have what you're really after: a hot out of the oven Mongo::Cursor
that points at the data you're interested in. Mongo::Cursor is
an Enumerable so you have access to all your usual iterating
friends such as each, first, map, and one of my personal
favorites, each_with_object:
a = cursor.each_with_object([]) { |x, a| a.push(mangle(x)) }
There are also command and eval methods on Mongo::DB that might do what you want.
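A couple of hedged examples of those (the command names are common illustrations; check what your MongoDB server version supports, and note that eval is deprecated or removed on newer servers):
db.command(:ping => 1)                          # run an arbitrary database command
db.command(:collStats => 'collection_name')     # e.g. collection statistics
db.eval("function() { return db.getName(); }")  # run JavaScript server-side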
In case you are using Mongoid, you will find the answer to your question here.
If you're using Mongoid 3, it provides easy access to its MongoDB driver, Moped. Here's an example of accessing some raw data without using models:
db = Mongoid::Sessions.default
# inserting a new document
collection = db[:collection_name]
collection.insert(name: 'my new document')
# finding a document
doc = collection.find(name: 'my new document').first
# "select * from collection"
collection.find.each do |document|
  puts document.inspect
end