When I run the following code, DataMapper issues three queries for just these two lines. Can anyone explain why it would do this?
@u = User.first(:uid => 1, :fields => [:uid, :name])
json @u
This calls the following queries:
SELECT "uid", "name" FROM "users" WHERE "uid" = 1 ORDER BY "uid" LIMIT 1
SELECT "uid", "email" FROM "users" WHERE "uid" = 1 ORDER BY "uid"
SELECT "uid", "accesstoken" FROM "users" WHERE "uid" = 1 ORDER BY "uid"
It is worth noting that DataMapper has a uniqueness validation on name.
Also, accesstoken is lazily loaded, so it should only be queried when accessed explicitly, which must be happening when the object is serialized to JSON.
EDIT:
I have added my model class for clarification. I just want one query made for the uid and name without having to extract them individually from the object. Maybe this is the only way?
property :uid, Serial
property :name, String
property :email, String
property :accesstoken, Text
ANSWER:
Use the dm-serializer gem, which has this support built in:
https://github.com/datamapper/dm-serializer
The first query is invoked by your User.first... call. Notice the fields it selects are exactly what you requested: uid and name.
The second and third queries are run during the JSON serialization, as it lazy loads each property you didn't already load.
So you either need a custom serialization that outputs only uid and name for your users, or you should remove the field selection from your initial query so everything gets loaded at once.
Update:
To do a custom serialization with datamapper, you can use the dm-serializer gem https://github.com/datamapper/dm-serializer and call #u.to_json(only: [:uid, :name])
Alternatively in this simple case you could just build the serialized object you want yourself, for which there are many examples: Rails3: Take controll over generated JSON (to_json with datamapper ORM)
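If you go the hand-rolled route, the idea is just to build the payload from the two properties you already loaded, so serialization never touches the lazy ones. A minimal sketch (using a Struct as a stand-in for the loaded DataMapper resource):

```ruby
require 'json'

# Stand-in for a User loaded with :fields => [:uid, :name];
# in the real app this would be the DataMapper object.
u = Struct.new(:uid, :name).new(1, "Alice")

# Build the hash by hand so no lazy property is ever read,
# which means no extra queries are triggered.
payload = { uid: u.uid, name: u.name }.to_json
puts payload  # => {"uid":1,"name":"Alice"}
```

Because only uid and name are referenced, the accesstoken and email properties are never loaded.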
Related
I'm storing a user's profile fields in a separate table, and want to look up a user by email address (for a password reset). While trying to determine the best approach, I ran into this unexpected inconsistency.
Schema
create_table(:users) do
String :username, primary_key: true
...
end
create_table(:user_fields) do
primary_key :id
foreign_key :user_id, :users, type: String, null: false
String :label, null: false
String :value, null: false
end
Console Session
This version works (look up the field, eager load its associated user, call .all, and take the first element):
irb(main):005:0> a = UserField.where(label: 'email', value: 'testuser@test.com').eager(:user).all[0]
I, [2015-09-29T17:54:06.273263 #147] INFO -- : (0.000176s) SELECT * FROM `user_fields` WHERE ((`label` = 'email') AND (`value` = 'testuser@test.com'))
I, [2015-09-29T17:54:06.273555 #147] INFO -- : (0.000109s) SELECT * FROM `users` WHERE (`users`.`username` IN ('testuser'))
=> #<UserField @values={:id=>2, :user_id=>"testuser", :label=>"email", :value=>"testuser@test.com"}>
irb(main):006:0> a.user
=> #<User @values={:username=>"testuser"}>
You can see both queries (field and user) are kicked off together, and when you try to access a.user, the data's already loaded.
But when I try calling .first in place of .all:
irb(main):007:0> b = UserField.where(label: 'email', value: 'testuser@test.com').eager(:user).first
I, [2015-09-29T17:54:25.832064 #147] INFO -- : (0.000197s) SELECT * FROM `user_fields` WHERE ((`label` = 'email') AND (`value` = 'testuser@test.com')) LIMIT 1
=> #<UserField @values={:id=>2, :user_id=>"testuser", :label=>"email", :value=>"testuser@test.com"}>
irb(main):008:0> b.user
I, [2015-09-29T17:54:27.887718 #147] INFO -- : (0.000172s) SELECT * FROM `users` WHERE (`username` = 'testuser') LIMIT 1
=> #<User @values={:username=>"testuser"}>
The eager load doesn't happen: the second query for the user object isn't issued until you reference it with b.user.
What am I failing to understand about the Sequel gem API here? And what's the best way to load a model instance based on the attributes of its associated models (find a user by email address)?
Eager loading only makes sense when loading multiple objects. And in order to eager load, you need all of the current objects first, in order to get all associated objects in one query. With each, you don't have access to all current objects first, since you are iterating over them.
You can use the eager_each plugin if you want Sequel to handle things internally for you, though note that it makes dataset.first do something similar to dataset.all.first for eagerly loaded datasets. But it's better to not eager load if you only need one object, and to call all if you need to eagerly load multiple ones.
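The mechanics behind that answer can be sketched in plain Ruby: eager loading works by collecting the foreign keys from the already-materialized result set and issuing one batched IN query, which is why it needs the full set of current objects first. The data below is illustrative, not Sequel's internals:

```ruby
# Already-loaded "user_fields" rows -- what calling .all hands to the
# eager loader to work with.
fields = [
  { id: 2, user_id: "testuser", label: "email" },
  { id: 5, user_id: "other",    label: "email" }
]

# Step 1 of eager loading: gather every foreign key from the full set...
user_ids = fields.map { |f| f[:user_id] }.uniq

# ...which becomes the single batched query:
#   SELECT * FROM users WHERE (username IN ('testuser', 'other'))
puts user_ids.inspect  # => ["testuser", "other"]
```

With .first there is only ever one row (LIMIT 1) and no set to batch over, so the association falls back to a regular per-object query on access.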
I can get some data with the where() method, but records that were soft-deleted with Paranoia's delete() method (which sets the deleted_at field to the time of deletion) are not returned in the results.
I can fetch those records using collection.deleted.entries.find() with Moped, but I need them back as a regular Mongoid criteria.
The paranoia plugin sets a default_scope on the model.
included do
field :deleted_at, type: Time
class_attribute :paranoid
self.paranoid = true
default_scope where(deleted_at: nil)
scope :deleted, ne(deleted_at: nil)
define_model_callbacks :restore
end
You can tell Mongoid not to apply the default scope by using unscoped, which can be inline or take a block.
Band.unscoped.where(name: "Depeche Mode")
Band.unscoped do
Band.where(name: "Depeche Mode")
end
I'm using the mongoid gem in Ruby. Each time I upsert, save or insert the same unique document in a collection, the Ruby instance shows a different id. For example, I have a script like so:
class User
include Mongoid::Document
field :email, type: String
field :name, type: String
index({ email: 1}, { unique: true })
create_indexes
end
u = User.new(email: 'test@testers.edu', name: "Mr. Testy")
u.upsert
puts u.to_json
The first time I run it against an empty or non-existent collection, I get this output
{"_id":"52097dee5feea8384a000001","email":"test@testers.edu","name":"Mr. Testy"}
If I run it again, I get this:
{"_id":"52097e805feea8575a000001","email":"test@testers.edu","name":"Mr. Testy"}
But the document in MongoDB still shows the first id (52097dee5feea8384a000001), so I know we are operating on the same record. If I always follow the upsert with a find_by operation, I get the right id consistently, but it feels inefficient to have to run an upsert followed by a query.
Am I doing something wrong? I'm concerned that I will be getting the wrong id back in an operation where someone is, say, updating his profile repeatedly.
I have a table named subs which has many articles. The articles table has a timestamp column called published.
Sub.select("subs.*, MAX(articles.published) published").joins("LEFT OUTER JOIN articles ON subs.id = articles.sub_id").group("subs.id").first.published.class
=> String
Article.select("max(published) published").group("id").first.published.class
=> ActiveSupport::TimeWithZone
I want to get an ActiveSupport::TimeWithZone object back from the first query.
Rails 3
Rails determines how to type-cast attributes based on their database column definitions. For example, say you have a created_at method on your Sub model. When a record is loaded, read_attribute is used (ActiveRecord::AttributeMethods::Read). This uses type_cast_attribute, which determines how to cast the value based on the column info. For example, if you are using PostgreSQL it may use:
Sub.columns.detect { |c| c.name == "created_at" }.type_cast_code("v")
=> "ActiveRecord::ConnectionAdapters::PostgreSQLColumn.string_to_time(v)"
But Rails doesn't know what to do with columns that aren't on the Sub model, so it just gives back a String. If you need to work with an ActiveSupport::TimeWithZone object, you can cast the value yourself:
published = Sub.select("subs.*, MAX(articles.published) published").joins("LEFT OUTER JOIN articles ON subs.id = articles.sub_id").group("subs.id").first.published
published.present? ? Time.zone.parse(published) : nil
Rails 4
In Rails 4, Rails is smarter about this kind of type-casting. When the SQL is executed, ActiveRecord::Result is created and the column_types are passed to the initializer. In your example Sub.select query, the published column would be cast as a Time object.
Is there a Ruby or ActiveRecord method that can write and read a hash to and from a database field?
I need to write a web utility to accept POST data and save it to a database, then later on pull it from the database in its original hash form. But ideally without 'knowing' what the structure is. In other words, my data store needs to be independent of any particular set of hash keys.
For example, one time the external app might POST to my app:
"user" => "Bill",
"city" => "New York"
But another time the external app might POST to my app:
"company" => "Foo Inc",
"telephone" => "555-5555"
So my utility needs to save an arbitrary hash to a text field in the database, then, later, recreate the hash from what was saved.
Rails 4 adds support for the Postgres hstore data type which will let you add hashes directly into your (postgres) database.
If you are using Rails 4 and Postgres, you can use hstore in your migration:
def up
execute "create extension hstore"
add_column :table, :column, :hstore
end
def down
remove_column :table, :column
end
That execute command will enable hstore in Postgres, so you only have to do that once.
This will enable you to store a hash in :column just like you would any other data type.
There are two ways to do this:
Serialize your hash and store it in a text field.
Split the hash and store each key in a separate row.
The problem with the first approach is that finding and manipulating the data is difficult and expensive. For example, prefixing a "0" to the telephone number of all employees working at Foo Inc would be a nightmare compared to storing the data in a regular tabular format.
Your schema would be:
employees (id, created_at, updated_at)
employee_details (id, employee_id, key, value)
So, to store
"company" => "Foo Inc",
"telephone" => "555-5555"
you would do:
employees: 1, 2012-01-01, 2012-01-01
employee_details (1, 1, "company", "Foo Inc"), (2, 1, "telephone", "555-5555")
The drawback of this approach is that Rails does not natively support this kind of schema.
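Converting between a hash and rows for the key/value schema above is straightforward; here is a plain-Ruby sketch (the employee_id of 1 is illustrative, and the column names follow the employee_details schema shown above):

```ruby
# Arbitrary POSTed attributes, as in the question.
details = { "company" => "Foo Inc", "telephone" => "555-5555" }

# One employee_details row per key/value pair.
rows = details.map { |key, value| { employee_id: 1, key: key, value: value } }
# => [{:employee_id=>1, :key=>"company", :value=>"Foo Inc"},
#     {:employee_id=>1, :key=>"telephone", :value=>"555-5555"}]

# And the reverse: rebuild the original hash from the stored rows.
rebuilt = rows.map { |r| [r[:key], r[:value]] }.to_h
```

The round trip is lossless for flat string-to-string hashes, which is what form-style POST data gives you.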
You can use serialization with three options: Marshal (a binary format), or YAML and JSON (human-readable formats).
Whichever method you try, don't forget to measure serialization and deserialization time as well. If you need to serve the data back in its stored form, JSON is a good choice, because you can often pass the stored string straight through without deserializing it.
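A quick round trip through all three formats in plain Ruby, so you can compare them (wrap the calls in Benchmark.measure if you want timings):

```ruby
require 'json'
require 'yaml'

data = { "company" => "Foo Inc", "telephone" => "555-5555" }

json_str = JSON.generate(data)   # compact, human-readable, web-friendly
yaml_str = YAML.dump(data)       # human-readable, Ruby-friendly
binary   = Marshal.dump(data)    # fast, but Ruby-only and version-sensitive

# All three deserialize back to the original hash.
JSON.parse(json_str)      # => {"company"=>"Foo Inc", "telephone"=>"555-5555"}
YAML.safe_load(yaml_str)  # => same hash
Marshal.load(binary)      # => same hash
```

Note that the JSON string can be handed to a web client as-is, which is the "use it as a string itself" advantage mentioned above.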
You're looking for serialization. It will help you to do exactly what you want.
Rails 4 has a new feature called Store, so you can easily use it to solve your problem. You can define accessors for it, and it is recommended that you declare the serialized store's database column as text, so there's plenty of room. The original example:
class User < ActiveRecord::Base
store :settings, accessors: [ :color, :homepage ], coder: JSON
end
u = User.new(color: 'black', homepage: '37signals.com')
u.color # Accessor stored attribute
u.settings[:country] = 'Denmark' # Any attribute, even if not specified with an accessor
# There is no difference between strings and symbols for accessing custom attributes
u.settings[:country] # => 'Denmark'
u.settings['country'] # => 'Denmark'