Django REST Framework: does source kwarg work in SerializerMethodField?

Do SerializerMethodFields accept the source= kwarg?
I have been running into an issue where I consistently pass a value to the source= kwarg of SerializerMethodField, but it is always ignored. That is, the argument passed as obj to my serializer method is always the instance being serialized itself, as if source='*' had been given.
The DRF documentation says there are certain core arguments that all field types should accept, including the source= argument.
With that said, the DRF Documentation says this about SerializerMethodField:
SerializerMethodField
This is a read-only field. It gets its value by calling a method on the serializer class it is attached to. It can be used to add any sort of data to the serialized representation of your object.
Signature: SerializerMethodField(method_name=None)
method_name - The name of the method on the serializer to be called. If not included this defaults to get_<field_name>.
The serializer method referred to by the method_name argument should accept a single argument (in addition to self), which is the object being serialized. It should return whatever you want to be included in the serialized representation of the object.
This did not leave me with a convincing answer as to what the expected behavior of source= should be, since it never says that the other core kwargs are inapplicable.
Any insight with respect to what the expected behavior is for the source= in SerializerMethodField would be greatly appreciated!

I did a bit of snooping into the source code for SerializerMethodField and saw this
class SerializerMethodField(Field):
    # ...ignoring useful docstrings for brevity...
    def __init__(self, method_name=None, **kwargs):
        self.method_name = method_name
        kwargs['source'] = '*'  # <-- and here's our answer
        kwargs['read_only'] = True
        super().__init__(**kwargs)
It would have been nice if the DRF documentation for SerializerMethodField were more explicit in saying that none of the other core arguments apply here, but such is life.
Answer: No, source= is not a respected argument; SerializerMethodField always overwrites it with '*'.
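For reference, here is a minimal sketch of that behaviour, assuming the DRF version whose __init__ is quoted above (the Book/author names are made up for illustration):

from rest_framework import serializers

class BookSerializer(serializers.Serializer):
    # source='author' is silently overwritten with '*' in SerializerMethodField.__init__,
    # so it has no effect on what the method receives
    author_name = serializers.SerializerMethodField(source='author')

    def get_author_name(self, obj):
        # obj is always the whole Book instance being serialized, never obj.author
        return obj.author.name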

Related

In Sorbet, can you specify that a type is a descendant of a class?

I have a method that returns an object which could be one of many different types, all of which share the same ancestor class. The precise type is determined dynamically.
However, I'm confused as to what to put for the return value in the signature. I've put a placeholder below using instance_of to illustrate the problem:
sig{params(instance_class: String).returns(instance_of ParentClass)}
def build_instance instance_class
  klass = Object.const_get(instance_class)
  return klass.new
end
Given that I don't know which precise class will be returned (and I'd prefer not to list them explicitly), but I do know that it will be a subclass of ParentClass, is there a way in Sorbet to specify this? I could use T.untyped, but that is unnecessarily loose.
Through trial and error I've discovered that checking that the object includes the type in its ancestors is, if I understand correctly, Sorbet's default behaviour.
Sorbet won't check that the object precisely matches the specified type, only that it includes that type in its ancestors (perhaps this is what type checking in general means, but I'm fairly new to the game).
To avoid the following error though:
Returning value that does not conform to method result type https://srb.help/7005
you also need to T.cast() the object that you return to the ParentClass:
sig{params(instance_class: String).returns(ParentClass)}
def build_instance instance_class
  klass = Object.const_get(instance_class)
  # NB the result is an instance of a descendant of ParentClass, not of ParentClass itself...
  return T.cast(klass.new, ParentClass)
end
This seems to work but I'd love to know whether it's the correct way to solve the problem.
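For what it's worth, the ancestor-based checking described above means that when the concrete class is referenced statically, no cast is needed at all: a subclass instance already conforms to a returns(ParentClass) signature. A minimal sketch (class names are made up):

# typed: true
require 'sorbet-runtime'

class ParentClass; end
class ChildClass < ParentClass; end

class Factory
  extend T::Sig

  sig { returns(ParentClass) }
  def build_child
    # A ChildClass instance satisfies returns(ParentClass) without T.cast,
    # because Sorbet only checks that ParentClass is among its ancestors.
    ChildClass.new
  end
end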

What is the point of making an argument an array by default?

Sometimes, when a method needs to be given an array as an argument, I see the method defined like this:
def method(argument = [])
  ...
end
I don't understand why = [] is used. As far as I can see, it adds nothing. If you did supply an array as an argument, the method would run either way, and if you didn't, it would throw an error either way. Is it just convention? Or is it perhaps a visual aid so the programmer can easily see what type of data a method requires?
If you set a default argument here, calling the method without arguments won't raise an error:
method
# => []
Specifying a default value allows you to call the method with AND without that param.
I have found it helpful when adding a new parameter to an existing method. If I give the new parameter a default value, I don't have to worry about changing the existing calls to that method, which only pass the previous set of arguments. If I could not specify a default value, I would have to go through the code, hunt down all the method calls, and modify each one to include the new argument.
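A small, self-contained illustration of both points (the method and argument names here are made up):

# Without the default, calling send_report with no argument would raise ArgumentError;
# with `recipients = []`, both call styles work.
def send_report(recipients = [])
  recipients.each { |address| puts "sending to #{address}" }
end

send_report(["ops@example.com"])  # explicit argument
send_report                       # no argument: recipients defaults to []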

Ruby nested send

Say I have an object with a method that accesses an object:
def foo
  @foo
end
I know I can use send to access that method:
obj.send("foo") # Returns #foo
Is there a straightforward way to do a recursive send to get a parameter on the #foo object, like:
obj.send("foo.bar") # Returns #foo.bar
You can use instance_eval:
obj.instance_eval("foo.bar")
You can even access the instance variable directly:
obj.instance_eval("#foo.bar")
While the OP has already accepted an answer using instance_eval(string), I would strongly urge them to avoid string forms of eval unless absolutely necessary. Eval invokes the Ruby compiler -- it's expensive and dangerous to use, as it opens a vector for code injection attacks.
As stated there's no need for send at all:
obj.foo.bar
If indeed the names of foo and bar are coming from some non-static calculation, then
obj.send(foo_method).send(bar_method)
is simple and all one needs for this.
If the methods are coming in the form of a dotted string, one can use split and inject to chain the methods:
'foo.bar'.split('.').inject(obj, :send)
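For example, with a couple of throwaway Structs standing in for obj and the nested object it holds:

Profile = Struct.new(:bar)
User    = Struct.new(:foo)

obj = User.new(Profile.new(42))

# inject starts from obj, then evaluates obj.send('foo') and then that result.send('bar')
'foo.bar'.split('.').inject(obj, :send)  # => 42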
Clarifying in response to comments: String eval is one of the riskiest things one can do from a security perspective. If there's any way the string is constructed from user supplied input without incredibly diligent inspection and validation of that input, you should just consider your system owned.
send(method) where method is obtained from user input has risks too, but there's a more limited attack vector: your user input can cause you to execute any zero-argument method dispatchable through the receiver. Good practice here would be to always whitelist the methods before dispatching:
VALID_USER_METHODS = %w{foo bar baz}
def safe_send(method)
  raise ArgumentError, "#{method} not allowed" unless VALID_USER_METHODS.include?(method.to_s)
  send(method)
end
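Called on the receiving object (assuming safe_send is defined in its class as above), anything outside the whitelist is rejected before send is ever reached:

obj.safe_send(:foo)          # dispatches obj.foo
obj.safe_send("destroy_all") # raises ArgumentError, never reaches send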
A bit late to the party, but I had to do something similar that combines both 'sending' and accessing data from a hash/array in a single call. Basically this allows you to do something like the following:
value = obj.send_nested("data.foo['bar'].id")
and under the hood this will do something akin to
obj.send(:data).send(:foo)['bar'].send(:id)
This also works with symbols in the attribute string
value = obj.send_nested('data.foo[:bar][0].id')
which will do something akin to
obj.send(:data).send(:foo)[:bar][0].send(:id)
In the event that you want to use indifferent access you can add that as a parameter as well. E.g.
value = obj.send_nested('data.foo[:bar][0].id', with_indifferent_access: true)
Since it's a bit more involved, here is the link to the gist that you can use to add that method to the base Ruby Object. (It also includes the tests so that you can see how it works)

Why use Rails public_method?

I am reading through Avdi Grimm's book 'Objects on Rails' and he uses the method public_method, and I don't understand why. Here is the code example:
class Blog
  # ...
  attr_writer :post_source
  # ...
  private

  def post_source
    @post_source ||= Post.public_method(:new)
  end
end
Why would you call Post.public_method(:new) and not Post.new? Do these methods do anything different or are they exactly the same? Thanks for the help.
Post.new
is not equivalent to
Post.public_method(:new)
The former is an invocation of the method new, which, by default, creates a new Post object. The latter, however, does not call new immediately. It merely prepares it to be called later. I haven't read that particular book, but if you look around in the associated source code, you'll see this line:
@post_source.call # maybe some params are passed here
This is where Post.new finally gets called.
Documentation: Object#public_method, Object#method.
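Concretely, the Method object is just stored and invoked later. A small sketch of that lifecycle, assuming a Post class as in the book's example:

builder = Post.public_method(:new)  # a Method object; no Post is created yet
builder.class                       # => Method
post = builder.call                 # only now is Post.new actually invoked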
Post.public_method(:new) and Post.new are different things. The latter creates an instance of Post. The former creates an instance of Method, which is not the result of calling the method but an abstraction of the method itself; you can obtain the result later by invoking call on it.
Post.public_method(:new) may be replaced by Post.method(:new), unless there is a private or protected method named new; public_method just makes sure not to refer to such methods if there are any.

Good semantics, Subclass or emulate?

I have been using Python for a while now and I'm happy using it in most forms, but I am wondering which form is more Pythonic. Is it right to emulate objects and types, or is it better to subclass or inherit from those types? I can see advantages and disadvantages to both. What's the correct way to do this?
Subclassing method
class UniqueDict(dict):
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)

    def __setitem__(self, key, value):
        if key not in self:
            dict.__setitem__(self, key, value)
        else:
            raise KeyError("Key already exists")
Emulating method
class UniqueDict(object):
    def __init__(self, *args, **kwargs):
        self.di = dict(*args, **kwargs)

    def __setitem__(self, key, value):
        if key not in self.di:
            self.di[key] = value
        else:
            raise KeyError("Key already exists")
Key question you have to ask yourself here is:
"How should my class change if the 'parent' class changes?"
Imagine new methods are added to dict which you don't override in your UniqueDict. If you want to express that UniqueDict is simply a small derivation in behaviour from dict's behaviour, then you'd go with inheritance since you will get changes to the base class automatically. If you want to express that UniqueDict kinda looks like a dict but actually isn't, you should go with the 'emulation' mode.
Subclassing is better as you won't have to implement a proxy for every single dict method.
I would go for subclassing, and for the reason I would refer to the motivation of PEP 3119:
For example, if asking 'is this object a mutable sequence container?', one can look for a base class of 'list', or one can look for a method named '__getitem__'. But note that although these tests may seem obvious, neither of them are correct, as one generates false negatives, and the other false positives.
The generally agreed-upon remedy is to standardize the tests, and group them into a formal arrangement. This is most easily done by associating with each class a set of standard testable properties, either via the inheritance mechanism or some other means. Each test carries with it a set of promises: it contains a promise about the general behavior of the class, and a promise as to what other class methods will be available.
In short, it is sometimes desirable to be able to check for mapping properties using isinstance.
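To make that point concrete with the two UniqueDict versions above (running the last line is expected to raise):

d = UniqueDict(a=1)          # the dict-subclassing version defined earlier
print(isinstance(d, dict))   # True for the subclass; the emulating version returns False here
d["b"] = 2                   # fine: new key
d["b"] = 3                   # raises KeyError("Key already exists")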

Resources