Returning from DRF serializers' create method without actually creating any instance - django-rest-framework

I have a situation where, in a certain exception case, I should catch the error and continue executing the code. Currently my code looks like this:
s = TestSerializer(...)
if s.is_valid():
    instance = s.save()
    perform_some_query()  # Django yells here
class TestSerializer(serializers.ModelSerializer):
    def create(self, validated_data):
        try:
            return super().create(validated_data)
        except SomeError:
            if this_is_the_super_exotic_case:
                return None  # Don't shout, just continue silently and don't create an object
            raise
The problem is that Django yells at me when executing perform_some_query() (the line marked # Django yells here in the snippet above), saying:
django.db.transaction.TransactionManagementError: An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block.
What is the reason for this?
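A likely explanation, assuming this code runs inside an atomic block (for example with ATOMIC_REQUESTS enabled): once a database error occurs inside an atomic block, Django marks the transaction as broken, and catching the exception in Python does not un-break it. The usual workaround is to wrap the risky call in a nested atomic() so the failure only rolls back a savepoint; a minimal sketch:

from django.db import transaction
from rest_framework import serializers

class TestSerializer(serializers.ModelSerializer):
    def create(self, validated_data):
        try:
            # The nested atomic() creates a savepoint; if the INSERT fails,
            # only the savepoint is rolled back and the outer transaction
            # stays usable for perform_some_query().
            with transaction.atomic():
                return super().create(validated_data)
        except SomeError:  # SomeError as in the question
            if this_is_the_super_exotic_case:
                return None  # swallow the error; no object is created
            raise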

Related

Why using "fork" works but using "spawn" fails in Python3.8+ `multiprocessing`?

I work on macOS and was recently bitten by the "fork" to "spawn" default change in Python 3.8's multiprocessing (see the docs). Below is a simplified example where using "fork" succeeds but using "spawn" fails. The purpose of the code is to create a custom queue object that supports calling size() under macOS, hence the inheritance from the Queue object and getting multiprocessing's context.
import multiprocessing
from multiprocessing import Process
from multiprocessing.queues import Queue
from time import sleep

class Q(Queue):
    def __init__(self):
        super().__init__(ctx=multiprocessing.get_context())
        self.size = 1

    def call(self):
        return print(self.size)

def foo(q):
    q.call()

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')  # this would fail
    # multiprocessing.set_start_method('fork')  # this would succeed
    q = Q()
    p = Process(target=foo, args=(q,))
    p.start()
    p.join(timeout=1)
The error message output when using "spawn" is shown below.
Process Process-1:
Traceback (most recent call last):
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/fanchen/Private/python_work/sandbox.py", line 23, in foo
    q.call()
  File "/Users/fanchen/Private/python_work/sandbox.py", line 19, in call
    return print(self.size)
AttributeError: 'Q' object has no attribute 'size'
It seems that the child process deems self.size unnecessary for code execution, so it is not copied. My question is: why does this happen?
Code snippet tested under macOS Catalina 10.15.6, Python 3.8.5
The problem is that spawned processes do not share the parent's memory: the queue is pickled and sent to the child, and multiprocessing.queues.Queue defines its own __getstate__, which only includes the queue's internal state, so your extra size attribute is silently dropped. (Under "fork" the child inherits a full copy of the parent's objects, which is why it works there.) To recreate the queue instance correctly in each process, you need to add serialization and deserialization methods.
Here is working code:
# Portable queue
# The idea of Victor Terron, used in the Lemon project
# (https://github.com/vterron/lemon/blob/master/util/queue.py).
# Pickling/unpickling methods are added to share a Queue instance between
# processes correctly.

import multiprocessing
import multiprocessing.queues

class SharedCounter(object):
    """A synchronized shared counter.

    The locking done by multiprocessing.Value ensures that only a single
    process or thread may read or write the in-memory ctypes object. However,
    in order to do n += 1, Python performs a read followed by a write, so a
    second process may read the old value before the new one is written by the
    first process. The solution is to use a multiprocessing.Lock to guarantee
    the atomicity of the modifications to Value.

    This class comes almost entirely from Eli Bendersky's blog:
    http://eli.thegreenplace.net/2012/01/04/shared-counter-with-pythons-multiprocessing/
    """

    def __init__(self, n=0):
        self.count = multiprocessing.Value('i', n)

    def __getstate__(self):
        return (self.count,)

    def __setstate__(self, state):
        (self.count,) = state

    def increment(self, n=1):
        """Increment the counter by n (default = 1)."""
        with self.count.get_lock():
            self.count.value += n

    @property
    def value(self):
        """Return the value of the counter."""
        return self.count.value

class Queue(multiprocessing.queues.Queue):
    """A portable implementation of multiprocessing.Queue.

    Because of multithreading / multiprocessing semantics, Queue.qsize() may
    raise the NotImplementedError exception on Unix platforms like Mac OS X
    where sem_getvalue() is not implemented. This subclass addresses this
    problem by using a synchronized shared counter (initialized to zero) and
    increasing / decreasing its value every time the put() and get() methods
    are called, respectively. This not only prevents NotImplementedError from
    being raised, but also allows us to implement a reliable version of both
    qsize() and empty().
    """

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs, ctx=multiprocessing.get_context())
        self._counter = SharedCounter(0)

    def __getstate__(self):
        # Include the counter in the pickled state so spawned processes get it.
        return super().__getstate__() + (self._counter,)

    def __setstate__(self, state):
        super().__setstate__(state[:-1])
        self._counter = state[-1]

    def put(self, *args, **kwargs):
        super().put(*args, **kwargs)
        self._counter.increment(1)

    def get(self, *args, **kwargs):
        item = super().get(*args, **kwargs)
        self._counter.increment(-1)
        return item

    def qsize(self):
        """Reliable implementation of multiprocessing.Queue.qsize()."""
        return self._counter.value

    def empty(self):
        """Reliable implementation of multiprocessing.Queue.empty()."""
        return not self.qsize()
You can also use multiprocessing.Manager().Queue().
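A quick usage sketch of the portable Queue under "spawn" (the foo worker function here is just for illustration):

import multiprocessing

def foo(q):
    print(q.qsize())  # works: the counter was pickled along with the queue

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')
    q = Queue()  # the portable Queue defined above
    q.put('item')
    p = multiprocessing.Process(target=foo, args=(q,))
    p.start()
    p.join()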

TypeError while using positional argument in decorators

I am getting the following error from a simple decorator example:
decorator_func() missing 1 required positional argument: 'original_func'
I'd appreciate it if someone could point out the issue, thanks.
Here is the code:
def decorator_func(original_func):
    def wrapper_func(*args, **kwargs):
        return original_func(*args, **kwargs)
    return wrapper_func()

@decorator_func()  # also tried without calling, i.e. @decorator_func
def displayInfo_func(name, age):
    print('Display Info func ran with arguments ({}, {})'.format(name, age))

displayInfo_func
Thanks in advance.
The problem is that when returning the wrapper function you are calling it, and with no arguments at that. Return the function object itself, and apply the decorator without parentheses:
def decorator_func(original_func):
    def wrapper_func(*args, **kwargs):
        return original_func(*args, **kwargs)
    return wrapper_func  # instead of wrapper_func()

@decorator_func  # no parentheses: the function itself is passed to the decorator
def displayInfo_func(name, age):
    print('Display Info func ran with arguments ({}, {})'.format(name, age))
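With that fix, the decorated function behaves as expected (example arguments chosen arbitrarily):

displayInfo_func('John', 25)
# prints: Display Info func ran with arguments (John, 25)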

Rspec error in ruby code testing

Rspec code is
it "calls calculate_word_frequency when created" do
expect_any_instance_of(LineAnalyzer).to receive(:calculate_word_frequency)
LineAnalyzer.new("", 1)
end
The code of the class is:
def initialize(content, line_number)
  @content = content
  @line_number = line_number
end

def calculate_word_frequency
  h = Hash.new(0)
  abc = @content.split(' ')
  abc.each { |word| h[word.downcase] += 1 }
  sort = h.sort_by { |_key, value| value }.reverse
  puts @highest_wf_count = sort.first[1]
  a = h.select { |key, hash| hash == @highest_wf_count }
  puts @highest_wf_words = a.keys
end
This test gives an error:
LineAnalyzer calls calculate_word_frequency when created
Failure/Error: DEFAULT_FAILURE_NOTIFIER = lambda { |failure, _opts| raise failure }
  Exactly one instance should have received the following message(s) but didn't: calculate_word_frequency
How do I resolve this error and make the test pass?
I believe you were asking "Why do I get this error message?" rather than "Why does my spec not pass?"
The reason you're getting this particular error message is that you used expect_any_instance_of in your spec, so RSpec raised the error within its own code rather than in yours: it reached the end of execution without an exception, but without your spy being called either. The important part of the error message is this: Exactly one instance should have received the following message(s) but didn't: calculate_word_frequency. That's why your spec failed; it's just that RSpec decided to give you a far less useful exception and backtrace.
I ran into the same problem with one of my specs today, and it was nothing more serious than a failed expectation. Hopefully this helps clear it up for you.
The entire point of this test is to ensure that the constructor invokes the method. It's written very clearly, in a very straightforward way.
If you want the test to pass, modify the constructor so it invokes the method.
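A minimal sketch of that fix, using the initializer from the question:

def initialize(content, line_number)
  @content = content
  @line_number = line_number
  calculate_word_frequency  # invoked on creation, as the spec expects
end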

Python tornado, gives an error inside open()

I'm implementing a WebSocket server with Tornado (currently version 3.1).
Inside the open() function I check a GET argument, and based on it I want to raise an error.
Something like this:
def open(self):
    token = self.get_argument('token')
    if ...:
        ???  # raise an error
How do I raise an error inside the open() function? I could not find a way to do this.
Thanks
You can just raise an exception like you normally would:
class EchoWebSocket(websocket.WebSocketHandler):
    def open(self):
        if some_error:
            raise Exception("Some error occurred")
Tornado will abort the connection when an unhandled exception occurs in open. Here's how open is scheduled to run in the tornado source:
self._run_callback(self.handler.open, *self.handler.open_args,
                   **self.handler.open_kwargs)
Here is _run_callback:
def _run_callback(self, callback, *args, **kwargs):
    """Runs the given callback with exception handling.

    On error, aborts the websocket connection and returns False.
    """
    try:
        callback(*args, **kwargs)
    except Exception:
        app_log.error("Uncaught exception in %s",
                      self.request.path, exc_info=True)
        self._abort()
def _abort(self):
    """Instantly aborts the WebSocket connection by closing the socket"""
    self.client_terminated = True
    self.server_terminated = True
    self.stream.close()  # forcibly tear down the connection
    self.close()  # let the subclass cleanup
As you can see, it aborts the connection when an exception occurs.
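If you would rather reject the connection without an uncaught-exception log entry, you can also close it explicitly; a sketch, assuming a hypothetical token check:

from tornado import websocket

class TokenWebSocket(websocket.WebSocketHandler):
    def open(self):
        token = self.get_argument('token', None)
        if not token:      # hypothetical validation; replace with your own check
            self.close()   # cleanly terminates the WebSocket instead of raising
            return
        # ... normal connection setup ...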

Can I stub "raise" in Ruby?

I have a class called RemoteError with a self.fatal method on it. This method's job is basically to catch an exception, send the details of the exception to a server, and then propagate the exception in order to kill the program.
class RemoteError
  def initialize(label, error)
    @label = label
    @error = error
  end

  def self.fatal(label, error)
    object = new(label, error)
    object.send
    raise error
  end

  def send
    # send the error to the server
  end
end
I'm trying to write tests for the RemoteError.fatal method. This is difficult because of the call to raise within the method. Every time I run my tests, raise obviously raises an exception and I can't test that send was called.
describe "fatal" do
it "should send a remote error" do
error = stub
RemoteError.stub(:new) { error }
error.should_receive(:send)
RemoteError.fatal(stub, stub)
end
end
Is there a way that I can stub or somehow circumvent raise for this specific test?
You could wrap the method that raises the error in a lambda...
it "should send a remote error" do
...
lambda { RemoteError.fatal(stub, stub) }.should raise_error(error)
end
This essentially allows the method to be called, and you get the return value or raised error from it, which you then assert against with .should raise_error(error). This also makes it so that if no error is raised from that call, the test will fail normally.
To say it another way, you don't need, nor want, to stub raise. Simply wrap the call in a lambda and your code will continue to execute; you should be able to make sure that your message is sent, and your test won't exit or crash.
In your tests you are testing a method that raises an error, and this is an expected result of the method's execution. You should write an expectation for the exception with this syntax:
lambda { RemoteError.fatal(stub, stub) }.should raise_error
In this case your spec will fail if the exception isn't raised, and it will also fail if any other expectations (like should_receive(:send)) aren't met.
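Putting the two answers together, the complete spec might look like this (a sketch, keeping the question's old stub/should_receive syntax):

describe "fatal" do
  it "sends a remote error and re-raises it" do
    error = stub
    RemoteError.stub(:new) { error }
    error.should_receive(:send)
    lambda { RemoteError.fatal(stub, stub) }.should raise_error
  end
end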
