TypeError while using positional argument in decorators

I am getting the following error while running a simple decorator example:
"decorator_func() missing 1 required positional argument: 'original_func'"
I would appreciate it if someone could point out the issue, thanks.
Here is the code:
def decorator_func(original_func):
    def wrapper_func(*args, **kwargs):
        return original_func(*args, **kwargs)
    return wrapper_func()

@decorator_func()  # also tried without calling, i.e. @decorator_func
def displayInfo_func(name, age):
    print('Display Info func ran with arguments ({}, {})'.format(name))

displayInfo_func
Thanks in advance.

The problem is that when returning the wrapper function you are calling it, and without any arguments at that. Return the function object itself instead:
def decorator_func(original_func):
    def wrapper_func(*args, **kwargs):
        return original_func(*args, **kwargs)
    return wrapper_func  # instead of wrapper_func()

@decorator_func  # apply the decorator without calling it
def displayInfo_func(name, age):
    print('Display Info func ran with arguments ({}, {})'.format(name, age))  # format both arguments
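As a quick sanity check, here is a minimal sketch of calling the decorated function once the fix above is applied (the argument values are made up for illustration):

displayInfo_func('John', 25)  # prints: Display Info func ran with arguments (John, 25)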

Related

Python doubly-linked list code, however I am getting this error: "insert_at_begining() missing 1 required positional argument: 'data'". Here is my code

Please help resolve the issue; I have no idea why I am getting this error.
class Node:
    def __init__(self, perv=None, data=None, Next=None):
        self.perv = perv
        self.data = data
        self.Next = Next

class dll:
    def __init__(self):
        self.head = None

    def insert_at_begining(self, data):
        node = Node(None, data, self.head)
        self.head = node

    def insert_at_end(self, data):
        itr = self.head
        while itr.Next:  # the attribute is named Next, not next
            itr = itr.Next
        node = Node(itr, data, None)
        itr.Next = node

    def printlist(self):
        llstr = ''
        itr = self.head
        while itr:
            llstr += str(itr.data) + '-->'
            itr = itr.Next
        print(llstr)
Please help me resolve my linked list problem; I have no idea why my code is not working.
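For what it's worth, the error quoted in the title ("insert_at_begining() missing 1 required positional argument: 'data'") typically appears when the method is called on the class itself rather than on an instance, so the single argument gets bound to self. A minimal sketch of the failing call versus the working call (the sample value 'a' is made up):

# Raises the error: called on the class, so 'a' is bound to self and data is missing
# dll.insert_at_begining('a')

# Works: create an instance first, then call the method on it
mylist = dll()
mylist.insert_at_begining('a')
mylist.printlist()  # prints: a-->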

Why using "fork" works but using "spawn" fails in Python3.8+ `multiprocessing`?

I work on macOS and was recently bitten by the "fork" to "spawn" default start method change in Python 3.8's multiprocessing (see the docs). Below is a simplified example where using "fork" succeeds but using "spawn" fails. The purpose of the code is to create a custom queue object that supports calling size() under macOS, hence the subclassing of the Queue class and the fetching of multiprocessing's context.
import multiprocessing
from multiprocessing import Process
from multiprocessing.queues import Queue
from time import sleep

class Q(Queue):
    def __init__(self):
        super().__init__(ctx=multiprocessing.get_context())
        self.size = 1

    def call(self):
        return print(self.size)

def foo(q):
    q.call()

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')  # this would fail
    # multiprocessing.set_start_method('fork')  # this would succeed
    q = Q()
    p = Process(target=foo, args=(q,))
    p.start()
    p.join(timeout=1)
The error message output when using "spawn" is shown below.
Process Process-1:
Traceback (most recent call last):
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/fanchen/Private/python_work/sandbox.py", line 23, in foo
    q.call()
  File "/Users/fanchen/Private/python_work/sandbox.py", line 19, in call
    return print(self.size)
AttributeError: 'Q' object has no attribute 'size'
It seems that the child process decides self.size is not necessary for code execution and does not copy it. My question is: why does this happen?
Code snippet tested under macOS Catalina 10.15.6, Python 3.8.5
The problem is that spawned processes do not share resources with the parent, so the queue instance has to be recreated in each process; to do that correctly, you need to add serialization (pickling) and deserialization (unpickling) methods.
Here is a working code:
# Portable queue
# The idea of Victor Terron, used in the Lemon project (https://github.com/vterron/lemon/blob/master/util/queue.py).
# Pickling/unpickling methods are added to share Queue instances between processes correctly.
import multiprocessing
import multiprocessing.queues

class SharedCounter(object):
    """ A synchronized shared counter.
    The locking done by multiprocessing.Value ensures that only a single
    process or thread may read or write the in-memory ctypes object. However,
    in order to do n += 1, Python performs a read followed by a write, so a
    second process may read the old value before the new one is written by the
    first process. The solution is to use a multiprocessing.Lock to guarantee
    the atomicity of the modifications to Value.
    This class comes almost entirely from Eli Bendersky's blog:
    http://eli.thegreenplace.net/2012/01/04/shared-counter-with-pythons-multiprocessing/
    """
    def __init__(self, n=0):
        self.count = multiprocessing.Value('i', n)

    def __getstate__(self):
        return (self.count,)

    def __setstate__(self, state):
        (self.count,) = state

    def increment(self, n=1):
        """ Increment the counter by n (default = 1) """
        with self.count.get_lock():
            self.count.value += n

    @property
    def value(self):
        """ Return the value of the counter """
        return self.count.value

class Queue(multiprocessing.queues.Queue):
    """ A portable implementation of multiprocessing.Queue.
    Because of multithreading / multiprocessing semantics, Queue.qsize() may
    raise the NotImplementedError exception on Unix platforms like Mac OS X
    where sem_getvalue() is not implemented. This subclass addresses this
    problem by using a synchronized shared counter (initialized to zero) and
    increasing / decreasing its value every time the put() and get() methods
    are called, respectively. This not only prevents NotImplementedError from
    being raised, but also allows us to implement a reliable version of both
    qsize() and empty().
    """
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs, ctx=multiprocessing.get_context())
        self._counter = SharedCounter(0)

    def __getstate__(self):
        return super().__getstate__() + (self._counter,)

    def __setstate__(self, state):
        super().__setstate__(state[:-1])
        self._counter = state[-1]

    def put(self, *args, **kwargs):
        super().put(*args, **kwargs)
        self._counter.increment(1)

    def get(self, *args, **kwargs):
        item = super().get(*args, **kwargs)
        self._counter.increment(-1)
        return item

    def qsize(self):
        """ Reliable implementation of multiprocessing.Queue.qsize() """
        return self._counter.value

    def empty(self):
        """ Reliable implementation of multiprocessing.Queue.empty() """
        return not self.qsize()
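A minimal usage sketch of this portable Queue under "spawn", analogous to the failing example in the question (worker is an illustrative name, and the code above is assumed to live in the same module):

def worker(q):
    q.put('done')
    print(q.qsize())  # no NotImplementedError on macOS, and no lost attributes

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')
    q = Queue()  # the portable Queue defined above
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    p.join()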
You can also use multiprocessing.Manager().Queue().
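For reference, a minimal sketch of that alternative; the manager runs a server process that holds a regular queue.Queue, and the proxy's qsize() works on macOS as well:

import multiprocessing

def worker(q):
    q.put('done')

if __name__ == '__main__':
    with multiprocessing.Manager() as manager:
        q = manager.Queue()
        p = multiprocessing.Process(target=worker, args=(q,))
        p.start()
        p.join()
        print(q.qsize())  # 1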

Returning from DRF serializers' create method without actually creating any instance

I have a situation where, in a certain exceptional case, I should catch the error and continue executing the code. Currently my code looks like this:
s = TestSerializer(...)
if s.is_valid():
    instance = s.save()
    perform_some_query()  # Django yells here

class TestSerializer(serializers.ModelSerializer):
    def create(self, request, *args, **kwargs):
        try:
            return super().create(request, *args, **kwargs)
        except SomeError as err:
            if this_is_the_super_exotic_case:
                return None  # Don't shout, just continue silently and don't create an object
            raise
The problem is that Django yells at me when perform_some_query() executes (the line marked with the comment above), saying:
django.db.transaction.TransactionManagementError: An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block.
What is the reason for this?
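This is standard Django transaction behavior: once a database error has been raised inside an atomic block, the whole block is flagged for rollback, and catching the exception in Python does not clear that flag, so any later query in the same block fails with TransactionManagementError. A common workaround, shown here as a sketch using the question's placeholder names (SomeError, this_is_the_super_exotic_case), is to wrap the failing call in a nested atomic() block, which creates a savepoint that can be rolled back on its own:

from django.db import transaction
from rest_framework import serializers

class TestSerializer(serializers.ModelSerializer):
    def create(self, request, *args, **kwargs):
        try:
            # The nested atomic() creates a savepoint; if the DB error occurs,
            # only the savepoint is rolled back and the outer transaction
            # stays usable for perform_some_query() later on.
            with transaction.atomic():
                return super().create(request, *args, **kwargs)
        except SomeError:
            if this_is_the_super_exotic_case:
                return None  # swallow the exotic case without creating an object
            raise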

Wrong Number of Arguments in Initialize (given 0, expected 1)

I am following along with a tutorial at:
http://neurogami.com/content/neurogami-10_minutes_to_your_first_Ruby_app/#sidebar4
I have checked and rechecked the code, and I do not understand why Ruby is not treating my variable app_map as a valid argument.
I have searched online for similar questions, and they exist, yet I cannot understand why this variable is not working. I am also not exactly sure what initialize means, as I am an absolute beginner with Ruby. Any insight would be greatly appreciated.
#!/usr/bin/env ruby

class Launcher
  def initialize (app_map)
    @app_map = app_map
  end

  # execute the given file using the associated app
  def run file_name
    application = select_app file_name
    system "#{application} #{file_name}"
  end

  # given a file, look up the matching application
  def select_app file_name
    ftype = file_type file_name
    @app_map[ ftype ]
  end

  # return the part of the file name string after the last '.'
  def file_type file_name
    File.extname( file_name ).gsub( /^\./, '' ).downcase
  end
end

launcher = Launcher.new
I am not sure what this code is supposed to do, but I get multiple error messages.
tinyapp.rb:8:in `initialize': wrong number of arguments (given 0, expected 1) (ArgumentError)
    from tinyapp.rb:30:in `new'
    from tinyapp.rb:30:in `<main>'
In this line, you are instantiating a Launcher:
launcher = Launcher.new
That will call the initialize method on it. That method expects an argument:
def initialize (app_map)
  @app_map = app_map
end
In order to resolve the error, you will need to pass in a value for the app_map parameter. I don't know what it's supposed to actually be here, but it will look something like this:
launcher = Launcher.new(the_app_map)

How can a ChoiceField.choices callable know what choices to return?

In Django 1.8, the ChoiceField's choices argument can accept a callable:
def get_choices():
    return [(1, "one"), (2, "two")]

class MyForm(forms.Form):
    my_choice_field = forms.ChoiceField(choices=get_choices)
In the above example, get_choices() always returns the same choices. However, being able to assign a callable to choices does not make much sense unless that callable knows something like, say, an object id, each time it is called. How can I pass such a thing to it?
You can't do it in the form declaration, because the CallableChoiceIterator calls the function without arguments. Doing it in the form's __init__ method is easier than creating your own ChoiceField, I guess. Here is what I suggest:
class MyForm(forms.Form):
    my_choice_field = forms.ChoiceField(choices=())

    def __init__(self, *args, **kwargs):
        # Let's pass the object id as a form kwarg
        self.object_id = kwargs.pop('object_id')
        # django metaclass magic to construct fields
        super().__init__(*args, **kwargs)
        # Now you can get your choices based on that object id
        self.fields['my_choice_field'].choices = your_get_choices_function(self.object_id)
This assumes that you have some class-based view with a method like this:
class MyFormView(FormView):
    # ...
    def get_form_kwargs(self):
        kwargs = super().get_form_kwargs()
        kwargs['object_id'] = 'YOUR_OBJECT_ID_HERE'
        return kwargs
    # ...
P.S.: The argument-less super() calls assume you are using Python 3.
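For completeness, a minimal sketch of instantiating such a form directly, assuming your_get_choices_function is defined as in the snippet above (the object id value is made up):

form = MyForm(object_id=42)  # popped off in __init__ before the parent class sees it
print(form.fields['my_choice_field'].choices)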
The reason it's possible to set a callable like that is to avoid situations where you're using models before they're ready.
forms.py
class Foo(ModelForm):
    choice_field = ChoiceField(choices=[
        user.username.lower() for user in User.objects.all()
    ])
If forms.py were imported before the models were ready (which it probably is, because views.py generally likes to import it, urls.py generally likes to import views.py, and urls.py is imported by the startup machinery), this would raise an exception due to trying to do ORM work before all the apps are loaded.
The correct way is to use a callable like so:
def lower_case_usernames():
    return [user.username.lower() for user in User.objects.all()]

class Foo(ModelForm):
    choice_field = ChoiceField(choices=lower_case_usernames)
This also has the benefit of being able to change without restarting the server.
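One caveat worth adding: in Django 1.8 (the version in the question), ChoiceField expects an iterable of (value, label) 2-tuples rather than bare strings, so in practice the callable would look more like this sketch:

def lower_case_usernames():
    # each choice is a (value, label) pair, re-evaluated each time the field renders
    return [(user.pk, user.username.lower()) for user in User.objects.all()]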
