Django form __init__() got multiple values for keyword argument - django-forms

Hello, I'm trying to use a form with a modified __init__ method, but I am encountering the following error:
TypeError
__init__() got multiple values for keyword argument 'vUserProfile'
I need to pass a UserProfile to my form so I can reach its dbname field, and I thought this would be the solution (my form code):
class ClienteForm(ModelForm):
    class Meta:
        model = Cliente

    def __init__(self, vUserProfile, *args, **kwargs):
        super(ClienteForm, self).__init__(*args, **kwargs)
        self.fields["idcidade"].queryset = Cidade.objects.using(vUserProfile.dbname).all()
Calls to the ClienteForm() constructor without POST data succeed and show me the correct form. But when the form is submitted and the constructor is called with POST data, I get the error described above.

You've changed the signature of the form's __init__ method so that vUserProfile is the first argument. But here:
formPessoa = ClienteForm(request.POST, instance=cliente, vUserProfile=profile)
you pass request.POST as the first argument - except that this will be interpreted as vUserProfile. And then you also try to pass vUserProfile as a keyword arg.
Really, you should avoid changing the method signature, and just get the new data from kwargs:
def __init__(self, *args, **kwargs):
    vUserProfile = kwargs.pop('vUserProfile', None)
    super(ClienteForm, self).__init__(*args, **kwargs)
    self.fields["idcidade"].queryset = Cidade.objects.using(vUserProfile.dbname).all()
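With vUserProfile popped out of kwargs before super() runs, the view call quoted above works unchanged, because request.POST goes back to being the first positional (data) argument:

formPessoa = ClienteForm(request.POST, instance=cliente, vUserProfile=profile)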

For the help of others who Google their way here: the error comes from __init__ receiving the same argument both positionally and as a keyword. Daniel Roseman's answer is accurate for the question as asked.
This can be either:
You put the argument by position and then by keyword:
class C():
    def __init__(self, arg): ...

x = C(1, arg=2)  # you passed arg twice!
You forgot to put self as the first argument:
class C():
    def __init__(arg): ...

x = C(arg=1)  # but a positional argument (for self) is automatically
              # added by __new__()!

I think this is the case with ModelForm, but I'd need to check. For me, the solution was:
def __init__(self, *args, **kwargs):
    self.vUserProfile = kwargs.get('vUserProfile', None)
    del kwargs['vUserProfile']
    super(ClienteForm, self).__init__(*args, **kwargs)
    self.fields["idcidade"].queryset = Cidade.objects.using(self.vUserProfile.dbname).all()

Related

Django Rest Framework ignoring custom field

I have a model with a nullable boolean field that I'd like to have serialized in a way that converts null in the output to false.
My model:
class UserPreferences(models.Model):
    receive_push_notifications = models.BooleanField(
        null=True, blank=True,
        help_text=("Receive push notifications"))
I'm trying to do it with a custom field like so:
class StrictlyBooleanField(serializers.Field):
    def to_representation(self, value):
        # Force None to False
        return bool(value)

    def to_internal_value(self, data):
        return bool(data)

class UserPreferencesSerializer(serializers.ModelSerializer):
    class Meta(object):
        model = UserPreferences
        fields = ('receive_push_notifications',)

    receive_push_notifications = StrictlyBooleanField()
but this isn't working; I'm still seeing null in my API responses.
I think I must be missing something simple in wiring it up because I don't even get an error if I replace my to_representation with:
def to_representation(self, value):
    raise
DRF doesn't seem to be calling my method at all... What am I missing here?
Explanation
If you look into REST framework's Serializer.to_representation method, you will find that it iterates through all of the fields and calls field.get_attribute for each one. If the value returned from that method is None, it skips calling field.to_representation entirely and sets None as the field value.
# Serializer's to_representation method
def to_representation(self, instance):
    """
    Object instance -> Dict of primitive datatypes.
    """
    ret = OrderedDict()
    fields = self._readable_fields

    for field in fields:
        try:
            attribute = field.get_attribute(instance)
        except SkipField:
            continue

        # We skip `to_representation` for `None` values so that fields do
        # not have to explicitly deal with that case.
        #
        # For related fields with `use_pk_only_optimization` we need to
        # resolve the pk value.
        check_for_none = attribute.pk if isinstance(attribute, PKOnlyObject) else attribute
        if check_for_none is None:
            ret[field.field_name] = None
        else:
            ret[field.field_name] = field.to_representation(attribute)

    return ret
Solution
Override the field's get_attribute: call super().get_attribute() and coerce the result with bool(), so a None attribute comes back as False:
class StrictlyBooleanField(serializers.Field):
    def get_attribute(self, instance):
        attribute = super().get_attribute(instance)
        return bool(attribute)

    def to_representation(self, value):
        return value

    def to_internal_value(self, data):
        return bool(data)
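As a quick sanity check (a sketch of my own, assuming the model and UserPreferencesSerializer from the question, with receive_push_notifications declared as this revised field), an instance whose flag was never set should now serialize to False rather than null:

# Hypothetical, unsaved instance with the nullable flag left unset
prefs = UserPreferences(receive_push_notifications=None)

serializer = UserPreferencesSerializer(prefs)
print(serializer.data)  # expected: {'receive_push_notifications': False}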
Alternatively, you can just use a SerializerMethodField and write a simple method inside your serializer:
class UserPreferencesSerializer(serializers.ModelSerializer):
    yourField = serializers.SerializerMethodField(read_only=True)

    class Meta(object):
        model = UserPreferences
        fields = ['receive_push_notifications', 'yourField']

    def get_yourField(self, obj):
        if obj.receive_push_notifications is None:
            return False
        else:
            return True

Python: Register hooks to existing code to get control when a function is called

I'd like to get control (to execute some pre-emptive tasks) when a function is called in Python, without modifying the source program, e.g. when test() is called:
def test(i: int, s: str) -> int:
    pass
I'd like a function myobserver to be called, and have some way to inspect (maybe even modify?!) the parameters? Think of it sorta like a mini-debugger, e.g., to add logging to an existing program that can't/shouldn't be modified?
def myobserver(handle):
    name = get_name(handle)
    for n, arg in enumerate(get_arg_iterator(handle)):
        print(f"Argument {n} of function {name}: {arg}")
ETA: I am not looking for the traditional decorator, because adding a decorator requires changing the source code. (In this sense, decorators are nicer than adding a print, but still similar because they require changes to source.)
You are looking for Python decorators:
from functools import wraps

def debugger(func):
    @wraps(func)
    def with_logging(*args, **kwargs):
        print('"'+func.__name__+'({},{})"'.format(*args, **kwargs)+" was invoked")
        # -------------------
        # Your logic here
        # -------------------
        return func(*args, **kwargs)
    return with_logging

@debugger
def test(i: int, s: str) -> int:
    print('We are in test', i, s)

test(10, 'hello')
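For reference, assuming the snippet above runs as written, the final call should print something like:

"test(10,hello)" was invoked
We are in test 10 hello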
EDIT
Since the decorator method mentioned above interferes with the source code (you have to apply the @ decorators), I propose the following:
# This is source code to observe, should not be _touched_!
class SourceCode:
    def __init__(self, label):
        self.label = label

    def test1(self, i, s):
        print('For object labeled {}, we are in {} with param {}, {}'.format(self.label, 'test1', i, s))

    def test2(self, k):
        print('For object labeled {}, we are in {} with param {}'.format(self.label, 'test2', k))
What I propose requires some manual effort in writing the hooks; I am not sure whether this is feasible in your case (it just occurred to me, hence adding it):
from functools import wraps

# the following is pretty much syntactic and generic
def hook(exist_func, debugger):
    @wraps(exist_func)
    def run(*args, **kwargs):
        return debugger(exist_func, *args, **kwargs)
    return run

# here goes your debugger
def myobserver(orig_func, *args, **kwargs):
    # -----------------------
    # Your logic goes here
    # -----------------------
    print('Inside my debugger')
    return orig_func(*args, **kwargs)

# ---------------------------------
obj = SourceCode('Test')

# making the wrapper ready to receive
no_interference_hook1 = hook(obj.test1, myobserver)
no_interference_hook2 = hook(obj.test2, myobserver)

# call your debugger on the function of your choice
no_interference_hook1(10, 'hello')
no_interference_hook2('Another')
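Building on that (a sketch of my own, not part of the original answer): since hook() returns an ordinary function, you can also rebind it onto the object or the class, so existing call sites are observed without being rewritten. This reuses the SourceCode, hook and myobserver definitions above.

# Patch a single instance: obj.test1 is captured as a bound method
# on the right-hand side before the assignment, so there is no recursion.
obj = SourceCode('Patched')
obj.test1 = hook(obj.test1, myobserver)
obj.test1(10, 'hello')  # now goes through myobserver first

# Or patch the class so every instance is observed; here the plain
# function is wrapped and self simply travels through *args.
SourceCode.test2 = hook(SourceCode.test2, myobserver)
SourceCode('Other').test2('Another')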

Why will this Ruby code generate an ArgumentError?

Hi, I'm working on the Ruby Koans. I was wondering why the ArgumentError would be raised by Dog6.new in the code down below?
class Dog6
  attr_reader :name

  def initialize(initial_name)
    @name = initial_name
  end
end

def test_initialize_provides_initial_values_for_instance_variables
  fido = Dog6.new("Fido")
  assert_equal "Fido", fido.name
end

def test_args_to_new_must_match_initialize
  assert_raise(ArgumentError) do
    Dog6.new
  end
end
Is it because Dog6.new doesn't have any arguments? Thank you!!
Yes, your assumption is correct.
Dog6.new implicitly calls Dog6#initialize to initialize the newly created instance (one might think of MyClass#initialize as the constructor for the class), and that method apparently has one required argument. Since no argument was given in the call to Dog6.new, the ArgumentError is raised.
Just adding that if you want a constructor that can be called with no arguments (after all, some dogs don't have a name...), you could give the name parameter a default value.
def initialize(name = nil)
  @name = name
end
In the initializer for the Dog6 class, initial_name is defined as a parameter required for object construction. If this class were to be instantiated without this argument, an ArgumentError would be raised because the class definition has a method signature such that Dog6.new is invalid, like you guessed. In this case the error you would see would be:
ArgumentError: wrong number of arguments (0 for 1)
Read more about the ArgumentError exception here.

Why does "instance.send(:initialize, *args, **kwargs, &block)" fail only from within Class#new?

I've been stuck on this for quite a while now. Take a look at this:
class SuperClass
  def self.new(*args, **kwargs, &block)
    i = allocate()
    # Extra instance setup code here
    i.send(:initialize, *args, **kwargs, &block)
    return i
  end
end

class Test < SuperClass
  def initialize
    puts "No args here"
  end
end
The class SuperClass basically "reimplements" the default new method so that some extra initialization can happen before initialize.
Now, the following works just fine:
t = Test.allocate
t.send(:initialize, *[], **{}, &nil)
However, this does not:
t = Test.new
ArgumentError: wrong number of arguments (1 for 0)
from (pry):7:in `initialize'
It fails on this line in SuperClass:
i.send(:initialize, *args, **kwargs, &block)
But apparently it only fails if called within the new method. I have confirmed that args == [], kwargs == {} and block == nil.
Is anybody able to explain this?
Ruby version:
ruby 2.2.3p173 (2015-08-18 revision 51636) [x86_64-linux]
Please refrain from suggesting that I don't overload Class.new. I am aware I can use Class.inherited and Class.append for the same result. This question is only about why the call to initialize fails.
Let's examine a simpler example, especially because the problem isn't as specific as the question and its title make it look; see for yourself.
def m # takes no arguments
end

m(**{}) # no argument is passed

h = {}
m(**h)  # an argument is passed => ArgumentError is raised
This inconsistency was introduced in 2.2.1 by a commit intended to fix a segmentation fault involving **{} (Bug #10719). The commit special-cases **{} to not pass an argument. Other ways like **Hash.new and h={};**h still pass an empty hash as argument.
Previous versions consistently raise ArgumentError (demo). I could be wrong, but I believe that's the intended behavior. However, it may or may not be the behavior one actually wants. So if you think double-splatting an empty hash shouldn't pass an argument (like **{} at the moment) and should therefore work similarly to splatting an empty array, there is an open issue about that (Bug #10856). It also mentions this relatively new inconsistency.
A simple *args will capture all arguments including keyword arguments, in case you don't need to reference kwargs separately in the new method:
class SuperClass
  def self.new(*args, &block)
    i = allocate
    # Extra instance setup code here
    i.send(:initialize, *args, &block)
    i
  end
end

How does automatic currying with self when assigning a method into a var work in Python 3?

I am writing a context manager to wrap the builtins.print function. And this works fine. However I encountered a Python behaviour that I can't wrap my head around:
Whenever a class's method is assigned to a variable for later calling, the first "self" argument seems to be stored automatically as well and used for all later calls.
Here's an example illustrating the point:
import functools

class Wrapper:
    def wrap(self):
        return self._wrapped  # functools.partial(self._wrapped, self)

    def _wrapped(self, *args, **kwargs):
        print('WRAPPED!', *args, **kwargs)
        print('..knows about self:', self)
wrapped = Wrapper().wrap()
wrapped('expect self here', 'but', 'get', 'all', 'output')
The output:
WRAPPED! expect self here but get all output
..knows about self: <__main__.Wrapper object at 0x2aaaab2d9f50>
Of course for normal functions (outside of classes) this magic does not happen. I can even assign that method in the example above directly without going through instantiation:
wrapped = Wrapper._wrapped
wrapped('expect self here', 'but', 'get', 'all', 'output')
And now I get what I first expected:
WRAPPED! but get all output
..knows about self: expect self here
In my original code, I used the functools.partial to curry-in the self, but then discovered that this is not even required.
I like the current behaviour, but I'm not yet understanding the reasoning with respect to consistency and "being obvious".
I'm working with Python 3.1.2 here.
Is this related to this question, whose answer is to use types.MethodType? Searching here and on the net largely turns up basic info on currying/partial function calls and packing/unpacking of argument lists. Maybe I used inadequate search terms (e.g. "python currying methods").
Can anyone shed some light into this behaviour?
Is this the same in Py2 and Py3?
Whenever you take the method from an instance (as in return self._wrapped) then self will be remembered.
Whenever you take the method from a class (as in Wrapper._wrapped) then self is not (cannot be) remembered.
As an example, try this:
upper = 'hello'.upper
print(upper())
upper = str.upper
print(upper())
You'll see HELLO, followed by TypeError: descriptor 'upper' of 'str' object needs an argument
When an instance method is called, that call will automatically pass in the instance as the first parameter. This is what happens here.
When you do
return self._wrapped
You will return an instance method. Calling it will pass in the instance as the first parameter, that is, self. But in the second case you call the method on the class, and since there is no instance involved, none gets passed in.
The "storage" here is simply that bound instance methods know which instance they belong to. If you don't want that behavior, return the plain function from the class instead.
class Wrapper:
    def wrap(self):
        return Wrapper._wrapped

    def _wrapped(self, *args, **kwargs):
        print('WRAPPED!', *args, **kwargs)
        print('..knows about self:', self)
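To make that "storage" explicit (a sketch of my own, reusing the Wrapper class above): attribute access on an instance goes through the function's __get__ descriptor, which produces a bound method object carrying the instance in __self__; types.MethodType builds the same kind of object by hand, which is what the functools.partial(self._wrapped, self) variant emulated.

import types

w = Wrapper()

bound = w._wrapped                          # same as Wrapper._wrapped.__get__(w, Wrapper)
print(bound.__self__ is w)                  # True: the instance is stored on the bound method
print(bound.__func__ is Wrapper._wrapped)   # True: the underlying plain function

manual = types.MethodType(Wrapper._wrapped, w)  # build the binding by hand
manual('only', 'these', 'args')                 # self is filled in automatically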
