How does the save method called with commit=True change the underlying object? - django-forms

Recently, I noticed that Django ModelForms actually change the underlying Python instance of the model before the save method is called. Example code:
class MyModelForm(forms.ModelForm):
    def save(self, commit=True):
        print(self.instance)  # Surprise! The instance is already updated from cleaned_data
        return super().save(commit=False)
To my surprise, self.instance is already updated according to what was submitted in cleaned_data. So my question is: what does super().save(commit=False) actually do? How does it change the state of the self.instance object?
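For context, here is a minimal sketch of the mechanics. This is not Django's real code (the actual logic lives in django.forms.models and handles fields, exclusions, and m2m relations), but it mirrors the flow: validation, not save(), is what copies cleaned_data onto the instance.

```python
# Simplified sketch (hypothetical names mirror Django's construct_instance
# and BaseModelForm._post_clean; the real versions are more involved).

def construct_instance(form, instance):
    # Copy each cleaned value onto the model instance's attributes.
    for name, value in form.cleaned_data.items():
        setattr(instance, name, value)
    return instance

class SketchModelForm:
    def __init__(self, instance, cleaned_data):
        self.instance = instance
        self.cleaned_data = cleaned_data

    def _post_clean(self):
        # Runs as part of is_valid()/full_clean(), i.e. BEFORE save() is
        # ever called -- this is why self.instance already reflects the
        # submitted data inside an overridden save().
        self.instance = construct_instance(self, self.instance)

    def save(self, commit=True):
        if commit:
            self.instance.save()  # only commit=True touches the database
        return self.instance
```

So super().save(commit=False) does not mutate self.instance further: the mutation already happened during validation, and commit=False merely returns the instance while skipping the INSERT/UPDATE.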

Related

Active Record Overridden Setter method gets called twice on update

I am getting this weird behaviour in a model in Rails 6. I have a datetime attribute in the model.
When I override the setter method in the model as:
def start_date=(epoch_time)
  super(Time.at(epoch_time).utc)
end
creating a new record works perfectly fine.
But when I call update on the record, this setter gets called twice: first with the passed epoch_time value, and a second time with nil during the UPDATE query, which breaks it.
EDITED:
I found out, using "caller" inside the method, that it is being called from the counter_culture gem. So I need to avoid that somehow?

Django REST Framework: does source kwarg work in SerializerMethodField?

Do SerializerMethodFields accept the source= kwarg?
I have been running into a bug where I consistently pass a value to source= in SerializerMethodFields, but it is always ignored. That is, the argument passed as obj to my serializer method is always the instance being serialized itself (i.e. as if source='*').
The DRF Documentation says there are certain core arguments that all field types should accept, including the source= argument.
With that said, the DRF Documentation says this about SerializerMethodField:
SerializerMethodField
This is a read-only field. It gets its value by calling a method on the serializer class it is attached to. It can be used to add any sort of data to the serialized representation of your object.
Signature: SerializerMethodField(method_name=None)
method_name - The name of the method on the serializer to be called. If not included this defaults to get_<field_name>.
The serializer method referred to by the method_name argument should accept a single argument (in addition to self), which is the object being serialized. It should return whatever you want to be included in the serialized representation of the object.
This did not leave me with a convincing answer as to the expected behavior of source=, since it does not say that the other core kwargs are inapplicable.
Any insight with respect to what the expected behavior is for the source= in SerializerMethodField would be greatly appreciated!
I did a bit of snooping into the source code for SerializerMethodField and saw this:
class SerializerMethodField(Field):
    # ...ignoring useful docstrings for brevity...
    def __init__(self, method_name=None, **kwargs):
        self.method_name = method_name
        kwargs['source'] = '*'  # <-- and here's our answer
        kwargs['read_only'] = True
        super().__init__(**kwargs)
It would have been nice if the DRF documentation for SerializerMethodField were more explicit in saying that none of the other core arguments apply here, but such is life.
Answer: No, source= is not a respected argument; it is unconditionally overwritten with '*'.
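To see the overwrite in isolation, here is a minimal stand-in (not DRF itself; Field is reduced to two attributes, and the field/method names are illustrative) reproducing the __init__ quoted above:

```python
# Minimal stand-in for the DRF classes, just to demonstrate that a
# caller-supplied source= is silently discarded.
class Field:
    def __init__(self, source=None, read_only=False):
        self.source = source
        self.read_only = read_only

class SerializerMethodField(Field):
    def __init__(self, method_name=None, **kwargs):
        self.method_name = method_name
        kwargs['source'] = '*'       # clobbers any source= the caller passed
        kwargs['read_only'] = True
        super().__init__(**kwargs)

f = SerializerMethodField(method_name='get_total', source='orders')
print(f.source)  # '*' -- not 'orders'; the kwarg was overwritten
```

This matches the observed behavior: the serializer method always receives the whole instance, exactly as source='*' dictates.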

Get notified when a method gets redefined

I'm implementing an application which tracks all methods that are added to a class, so I need to be notified every time a new method gets added to a class. I do it this way:
class Class
  def singleton_method_added(method_name)
    puts "new singleton method added"
    do_something_with_this_info(self, method_name)
  end

  def method_added(method_name)
    puts "new method added"
    do_something_with_this_info(self, method_name)
  end
end
Very simple, yet very effective so far.
However, I have one concern with this approach: if another Ruby library/application also implements method_added/singleton_method_added on class Class, then my implementation would be overridden. This is very bad, since my whole application relies on the code above. I know that I can't prevent other applications from redefining my implementation of method_added/singleton_method_added, and that's OK, since I don't want to break those applications either. But I at least need to be notified when the method_added/singleton_method_added methods get overridden, so that I can react appropriately.
So my question: does some kind of hook/callback exist in Ruby which gets called each time a method has been redefined?

Ruby deserializing YAML

I'm working with DelayedJob and I need to override a method that I thought was used when an object is deserialized from YAML: self.yaml_new (defined in delayed/serialization/active_record).
My impression was that when YAML deserializes some data, it calls the yaml_new method on the class of that data.
DJ's yaml_new method fetches the object from the database using the passed-in id.
I'm unable to achieve this behaviour with my own classes. When I define a self.yaml_new method on a class and call YAML.load on a serialized instance, yaml_new doesn't seem to be called, so I must be mistaken somewhere.
What, then, is this method for?
Searching for yaml_new doesn't yield much (just API docs and other people using it), so I'm wondering what exactly this method is.
I figured yaml_new would be a hook method called when an object of a given type is loaded, if the method exists on that class. But again, I can't actually get this to work. Below is a sample:
require 'yaml'

class B
  def self.yaml_new(klass, tag, val)
    puts "I'm in yaml new!"
  end
end

b = B.new
YAML.load b.to_yaml # expected "I'm in yaml new!", got nothing
Updates
So after playing around in my Rails application, it appears that yaml_new does actually get called from YAML.load. I have a file in there like so:
module ActiveRecord
  class Base
    def self.yaml_new(klass, tag, val)
      puts "\n\n yaml_new!!!\n\n"
      klass.find(val['attributes']['id'])
    rescue ActiveRecord::RecordNotFound
      raise Delayed::DeserializationError
    end

    def to_yaml_properties
      ['@attributes', '@database'] # add in the @database attribute for serialization
    end
  end
end
Which is just what DJ does, except I'm logging the action.
YAML.load Contact.first.to_yaml
# => yaml_new!!!
I actually get the logged output!
So what am I doing wrong outside of my Rails app? Is there some other way of getting this method to trigger? I ask because I'm trying to test this in my own gem, where yaml_new doesn't trigger and my tests fail, and yet it actually does work inside Rails.
You need to add something like
yaml_as "tag:ruby.yaml.org,2002:B"
before the definition of the self.yaml_new method. According to the comments in yaml/tag.rb, yaml_as:
Adds a taguri tag to a class, used when dumping or loading the class
in YAML. See YAML::tag_class for detailed information on typing and
taguris.

Good semantics: subclass or emulate?

I have been using Python for a while now and I'm happy using it in most forms, but I am wondering which form is more Pythonic. Is it right to emulate objects and types, or is it better to subclass or inherit from these types? I can see advantages and disadvantages to both. What's the correct way to do this?
Subclassing method
class UniqueDict(dict):
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)

    def __setitem__(self, key, value):
        if key not in self:
            dict.__setitem__(self, key, value)
        else:
            raise KeyError("Key already exists")
Emulating method
class UniqueDict(object):
    def __init__(self, *args, **kwargs):
        self.di = dict(*args, **kwargs)

    def __setitem__(self, key, value):
        if key not in self.di:
            self.di[key] = value
        else:
            raise KeyError("Key already exists")
The key question you have to ask yourself here is:
"How should my class change if the 'parent' class changes?"
Imagine new methods are added to dict which you don't override in your UniqueDict. If you want to express that UniqueDict is simply a small behavioural deviation from dict, go with inheritance, since you will pick up changes to the base class automatically. If you want to express that UniqueDict merely looks like a dict but actually isn't one, go with the 'emulation' mode.
Subclassing is better as you won't have to implement a proxy for every single dict method.
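The proxy cost is easy to demonstrate. Taking the emulating version from the question (renamed here to EmulatedUniqueDict for clarity), none of dict's other methods exist until you write them by hand:

```python
class EmulatedUniqueDict(object):
    def __init__(self, *args, **kwargs):
        self.di = dict(*args, **kwargs)

    def __setitem__(self, key, value):
        if key in self.di:
            raise KeyError("Key already exists")
        self.di[key] = value

e = EmulatedUniqueDict(a=1)
e['b'] = 2                   # works: we wrote __setitem__
print(hasattr(e, 'keys'))    # False -- keys(), get(), items(), __len__, ...
                             # each needs a hand-written proxy to self.di
```

With the dict subclass, all of those come for free; only the behaviour you want to change needs overriding.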
I would go for the subclass, and for my reasoning I would refer to the motivation section of PEP 3119:
For example, if asking 'is this object a mutable sequence container?', one can look for a base class of 'list', or one can look for a method named '__getitem__'. But note that although these tests may seem obvious, neither of them are correct, as one generates false negatives, and the other false positives.
The generally agreed-upon remedy is to standardize the tests, and group them into a formal arrangement. This is most easily done by associating with each class a set of standard testable properties, either via the inheritance mechanism or some other means. Each test carries with it a set of promises: it contains a promise about the general behavior of the class, and a promise as to what other class methods will be available.
In short, it is sometimes desirable to be able to check for mapping properties using isinstance.
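A quick illustration of that last point, using the subclassing version from the question:

```python
from collections.abc import MutableMapping

class UniqueDict(dict):
    def __setitem__(self, key, value):
        if key in self:
            raise KeyError("Key already exists")
        dict.__setitem__(self, key, value)

d = UniqueDict(a=1)
print(isinstance(d, dict))            # True: the subclass passes isinstance checks
print(isinstance(d, MutableMapping))  # True: dict is registered against the
                                      # standard mapping ABC from PEP 3119
```

If you prefer the emulation style anyway, inheriting from collections.abc.MutableMapping is a middle ground: implement __getitem__, __setitem__, __delitem__, __iter__, and __len__, and the ABC supplies get(), items(), update(), and friends, while isinstance checks against MutableMapping still succeed.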
