When inserting with a changeset in Ecto, my changeset function is called and properly checks that everything is in order:
iex> User.changeset(%User{}, %{username: "username"})
#Ecto.Changeset<
  action: nil,
  changes: %{},
  errors: [email: {"can't be blank", [validation: :required]}],
  data: #Api.User<>,
  valid?: false
>
However, when using update, my changeset functions never get called. I assume this is on purpose, since there's no need to check that a field is required when only one field is being updated. But I have other functions in there, such as auto_hash, which uses a virtual password field to hash the password on update, and it cannot run on update if the changeset functions are never called.
So my question is: is there a way to set up something similar to a changeset function that only gets called on update?
By 'on update', I mean when doing the following:
iex> user = Repo.get_by(User, email: "email")
iex> changeset = Ecto.Changeset.change(user, [ password: "password", password_confirmation: "password" ])
iex> Repo.update(changeset)
It looks like you aren't calling your changeset function. Ecto.Changeset.change/2 doesn't magically know where your specific function is: you have to call your own changeset function, e.g. User.changeset(existing_user_data, new_data). This assumes that you've named your function changeset and defined it inside your User module. You might maintain multiple changeset functions, e.g. one for inserting new data and another for updating data.
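For example, here is a sketch (assuming the changeset/2 and auto_hash functions described in the question) of routing the update flow through the module's own changeset instead of Ecto.Changeset.change/2:

```elixir
# Sketch: route the update through your own changeset so validations and
# transformations like auto_hash run. Function names follow the question.
user = Repo.get_by(User, email: "email")

changeset =
  User.changeset(user, %{
    password: "password",
    password_confirmation: "password"
  })

Repo.update(changeset)
```

If insert and update need different rules, a separate update_changeset/2 in the User module, called the same way, keeps the two flows distinct.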
I have an ActiveRecord-backed model where I am inserting records. However, when assigning values to certain attributes, I would like to assign them based on whether a certain condition is satisfied. How can I go about doing this? I've attached an example below for clarity.
@user = User.find_by_name("John")
Store.create(
  name: "Some Store",
  email: "store@example.com",
  user_id: @user.id if @user.applicant?
)
Yes, you can't apply a suffix if to keyword arguments / hash elements this way. I normally do something along these lines:
store_params = {
  name: "Some Store",
  email: "store@example.com",
}
store_params[:user_id] = @user.id if @user.applicant?
Store.create(store_params)
This also works well if there's an existing user_id value that needs to be preserved when @user is not an applicant, for example when you're updating records. For creation, simple parenthesizing should work, as pointed out by others:
user_id: (@user.id if @user.applicant?)
Caveat: this assumes that the default value for user_id is nil, so the nil produced by the expression when the user is not an applicant is the same as the value set when user_id was not supplied at all.
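A plain-hash sketch of that caveat (no ActiveRecord involved; values are illustrative), contrasting the two approaches when an existing user_id must survive an update:

```ruby
existing = { name: 'Some Store', user_id: 42 }  # record already has a user_id
applicant = false
new_user_id = 7

# Always passing the key: a nil overwrites the stored value on update.
params_a  = { name: 'Some Store', user_id: (new_user_id if applicant) }
updated_a = existing.merge(params_a)   # user_id is now nil

# Setting the key conditionally: the stored value is preserved.
params_b = { name: 'Some Store' }
params_b[:user_id] = new_user_id if applicant
updated_b = existing.merge(params_b)   # user_id stays 42
```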
You can use the safe navigation operator here. Just use parentheses to avoid the syntax error:
Store.create(
  name: "Some Store",
  email: "store@example.com",
  user_id: (@user.id if @user&.applicant?)
)
I have set up async validation for my redux-form. I just figured out that the changed field is always undefined in the values object that gets passed to the asyncValidate function.
Let's say I change the field firstname from "abc" to "abcd". Everything works and the state gets updated. I get the following actions: (redux-form/)FOCUS, CHANGE, BLUR, START_ASYNC_VALIDATION and STOP_ASYNC_VALIDATION.
However, in the asyncValidate function:
handledAsyncValidate = values => {
  console.log('values', values)
}
Directly after the change I get: {firstname: undefined, lastname: ''}
But when I trigger an async validation on lastname, firstname is defined with "abcd".
I'm using redux-form 6.5. Is there a chance I have implemented something the wrong way? I don't use asyncValidateFields, but I tested it and it does not change the described effect.
Edit:
It's a fairly large codebase.
Here I create my asyncValidation function and pass it to the form:
https://github.com/tocco/tocco-client/blob/pr/entity-browser/form-refactoring/packages/entity-browser/src/routes/detail/components/DetailView/DetailView.js
When I log values there, the changed field is undefined.
The form you can find here:
https://github.com/tocco/tocco-client/blob/pr/entity-browser/form-refactoring/packages/entity-browser/src/routes/detail/components/DetailForm/DetailForm.js
More than happy to help you find something specific.
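For context, here is a minimal standalone sketch of the asyncValidate contract in redux-form 6.x (the field names are illustrative, taken from the question, not from the linked codebase): the function receives the whole values object and returns a Promise that rejects with a field-to-message map on failure.

```javascript
// Hypothetical standalone asyncValidate in the shape redux-form 6.x expects:
// it receives the full values object and returns a Promise that rejects with
// an { field: 'message' } map when validation fails.
function asyncValidate(values) {
  return new Promise((resolve, reject) => {
    if (!values.firstname) {
      // This is the branch effectively hit when `firstname` arrives
      // as undefined in `values`, as described in the question.
      reject({ firstname: 'First name is required' });
    } else {
      resolve();
    }
  });
}

// Wiring sketch (not runnable without redux-form itself):
// reduxForm({ form: 'detail', asyncValidate })(DetailForm)
```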
I am upgrading my app from Rails 4.0 to 4.1 (on my way to 4.2) and have tests like this that are failing:
describe "#restore_disabled_account" do
  let(:username) { 'username' }
  let(:email) { 'email@domain.com' }
  let(:m3_user) do
    create(:user, username: username,
                  email: email,
                  archived: false,
                  propagate_in_test_mode: true)
  end
  let(:m2_user) { Maestro2::User.where(UserID: m3_user.id).first }

  before do
    m3_user.disable_account
  end

  it "unarchives the account" do
    m3_user.restore_disabled_account
    m3_user.reload
    expect(m3_user.archived).to be_falsey
  end
end
I have verified that the database contains a record with the expected attributes at the beginning of this test. When I debug the test and stop before the reload statement, I see that the db record has been updated as expected (archived is false and other attributes are updated as expected). I can also see that the m3_user object has been updated with the same attributes. When I step forward to run the reload step and query the db, I can see the record has returned to its original state, as has the object in memory. The test then fails because m3_user.archived is true.
Can anybody tell me why? All tests in my suite were passing before I started the upgrade. The app is currently using Ruby 2.2.4, Rails 4.1.16, rspec 3.5.3 and rspec-rails 3.5.2.
For reference, the two User class method calls are below:
def disable_account
  self.update_attributes(username: "disabled_#{id}_#{username}",
                         email: "disabled_#{id}_#{email}",
                         archived: true)
end

def restore_disabled_account
  self.update_attributes(username: username.gsub(/^disabled_#{id}_/, ''),
                         email: email.gsub(/^disabled_#{id}_/, ''),
                         archived: false)
end
The short answer: it turns out that ActiveRecord 4.1 changed the reload method to call a new private method called reset_changes. My User class also had a reset_changes method, so my method overrode the one that should have been called.
The long answer: The more I looked at this, the more I suspected something unique to my User model. Other tests using reload performed as expected. I then came across a reset_changes method in User. After seeing what it did (update the db record with the previous values), I dropped a puts statement into it so I would see if it was called. Once that was confirmed, I dumped the caller backtrace which pointed me to https://github.com/rails/rails/blob/4-1-stable/activerecord/lib/active_record/attribute_methods/dirty.rb#L37
I've changed my User model method to undo_changes and my specs are green.
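The collision can be reproduced without Rails at all. This is an illustrative sketch (the class and method bodies are made up, not the ActiveRecord source) of a framework-style public method calling a private hook that an app subclass silently overrides:

```ruby
# Framework-free illustration of the name collision (all names hypothetical
# except reset_changes, which matches the AR 4.1 private method).
class FrameworkRecord
  def reload
    reset_changes  # ActiveRecord 4.1's reload calls a private method with this name
    self
  end

  private

  def reset_changes
    # framework behavior: discard in-memory dirty state
  end
end

class User < FrameworkRecord
  attr_reader :rolled_back

  # An app-level method with the same name overrides the framework's
  # private hook, so it now runs on every reload.
  def reset_changes
    @rolled_back = true
  end
end

user = User.new
user.reload
user.rolled_back  # the app method ran inside the framework's reload
```

Renaming the app method, as done above with undo_changes, breaks the collision.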
Let's say I have defined my model Person with a couple of indexes:
class Person
  include Mongoid::Document

  field :email
  field :ssn

  index({ email: 1 }, { unique: true })
  index({ ssn: 1 }, { unique: true })
end
However, only the email index already exists in the database, so when I call
Person.collection.indexes.each {|i| puts i.inspect}
I get the following response:
{"v"=>1, "key"=>{"_id"=>1}, "name"=>"_id_", "ns"=>"x.person"}
{"v"=>1, "unique"=>true, "key"=>{"email"=>1}, "name"=>"email_1", "ns"=>"x.person"}
The question is: how can I get the list of indexes defined in the model, even if they haven't been created in Mongo yet?
In my case, such a list should include the definition for the field "ssn".
In other words, how do I get the indexes that haven't been created yet?
Person.index_specifications
shows the indexes defined in the model regardless of its existence in the database.
And
Person.collection.indexes
only shows the index that actually exists in the database.
So there is something else worth paying attention to:
rake db:mongoid:create_indexes
will create the indexes defined in the model in the database; it uses the 'index_specifications' method under the hood.
Meanwhile, this removes all the indexes other than the primary-key index:
rake db:mongoid:remove_indexes
So when you want to remove only the indexes that exist in the database but are no longer defined in the model, you should use this:
rake db:mongoid:remove_undefined_indexes
which uses the 'undefined_indexes' method under the hood.
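As a rough illustration (plain Ruby with hard-coded index keys, not Mongoid internals), the create/remove tasks conceptually diff the two lists:

```ruby
# Conceptual sketch (not Mongoid source): diffing model-defined index keys
# against the keys that actually exist in the database.
defined_keys  = [{ 'email' => 1 }, { 'ssn' => 1 }]                    # cf. Person.index_specifications
existing_keys = [{ '_id' => 1 }, { 'email' => 1 }, { 'legacy' => 1 }] # cf. Person.collection.indexes

# Indexes to create: defined in the model but missing from the database.
to_create = defined_keys - existing_keys

# Undefined indexes: in the database but no longer defined in the model
# (the primary-key index is always kept).
to_remove = existing_keys - defined_keys - [{ '_id' => 1 }]
```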
I hope this can be helpful.
The docs are here:
https://mongoid.github.io/en/mongoid/docs/indexing.html
http://www.rubydoc.info/github/mongoid/mongoid/Mongoid/Tasks/Database#create_indexes-instance_method
Just found it...
We can get the list of all index definitions in the model as follows:
Person.index_specifications
This is an array populated when the application is loaded, and it is used by the "create_indexes" method, as can be seen here:
https://github.com/mongodb/mongoid/blob/master/lib/mongoid/indexable.rb
I'm using the mongoid gem in Ruby. Each time I upsert, save or insert the same unique document in a collection, the Ruby instance shows a different id. For example, I have a script like so:
class User
  include Mongoid::Document

  field :email, type: String
  field :name, type: String

  index({ email: 1 }, { unique: true })
  create_indexes
end

u = User.new(email: 'test@testers.edu', name: "Mr. Testy")
u.upsert
puts u.to_json
The first time I run it against an empty or non-existent collection, I get this output
{"_id":"52097dee5feea8384a000001","email":"test@testers.edu","name":"Mr. Testy"}
If I run it again, I get this:
{"_id":"52097e805feea8575a000001","email":"test@testers.edu","name":"Mr. Testy"}
But the document in MongoDB still shows the first id (52097dee5feea8384a000001), so I know we are operating on the same record. If I always follow the upsert with a find_by operation, I get the right id consistently, but it feels inefficient to have to run an upsert followed by a query.
Am I doing something wrong? I'm concerned that I will be getting the wrong id back in an operation where someone is, say, updating his profile repeatedly.
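For illustration only, here is a pure-Ruby simulation (no Mongo involved; the upsert lambda and hash "collection" are stand-ins, not Mongoid behavior verbatim) of what is being described: each new in-memory object gets a fresh client-generated id, while an upsert matched on the unique email leaves the stored document's original _id in place.

```ruby
require 'securerandom'

# Pure-Ruby stand-in for the collection, keyed on the unique email field.
collection = {}

# Simulated upsert: match on email; keep the stored _id when the doc exists.
upsert = lambda do |doc|
  if (existing = collection[doc[:email]])
    collection[doc[:email]] = existing.merge(doc.reject { |k, _| k == :_id })
  else
    collection[doc[:email]] = doc.dup
  end
end

first = { _id: SecureRandom.hex(12), email: 'test@testers.edu', name: 'Mr. Testy' }
upsert.call(first)

second = { _id: SecureRandom.hex(12), email: 'test@testers.edu', name: 'Mr. Testy' }
upsert.call(second)

# second has a fresh client-side id, but the stored record kept first's _id:
stored_id = collection['test@testers.edu'][:_id]
```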