Python unit testing: How to stop gevent.monkey.patch_all() affecting asyncio tests? - python-asyncio

We have a Python test suite that tests code which uses gevent.monkey.patch_all(). Those tests run fine.
In the same code base we have an alternative entry point that uses asyncio. There are also tests for this, which run fine on their own, with this kind of setup:
import asyncio
import unittest

from our_module import main


class AsyncioTests(unittest.TestCase):
    """Test some asyncio stuff."""

    def test_something(self):
        asyncio.run(main())
However, if they run after the tests that import the monkey-patched module, they hang forever. This appears to be caused by the monkey patching.
Is there a way to stop this, for example by reversing the monkey patching?
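As a stopgap, the asyncio tests can at least detect that the patching is already in effect and skip themselves instead of hanging. A minimal sketch (not a fix) that checks gevent.monkey.is_module_patched at test time:
import asyncio
import unittest

from our_module import main


class AsyncioTests(unittest.TestCase):
    def setUp(self):
        # Skip rather than hang if gevent's monkey patching is already active.
        try:
            from gevent import monkey
        except ImportError:
            return
        if monkey.is_module_patched("socket"):
            self.skipTest("gevent.monkey.patch_all() is active; asyncio would hang")

    def test_something(self):
        asyncio.run(main())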

I found https://emptysqua.re/blog/undoing-gevents-monkey-patching/ via Gevent monkey unpatch, but the suggestion there didn't work. The problem seems to go deeper than that one module, perhaps specifically because of asyncio (I also tried reloading several modules).
However, there is an undocumented but public variable in the gevent.monkey module called saved:
# maps module name -> {attribute name: original item}
# e.g. "time" -> {"sleep": built-in function sleep}
# NOT A PUBLIC API. However, third-party monkey-patchers may be using
# it? TODO: Provide better API for them.
saved = {}
Using this, in the tearDownClass of the test suite that uses that code, I could undo all the patches that gevent introduced:
class SomeTests(unittest.TestCase):
    """Tests using code imported from a module that calls gevent.monkey.patch_all()."""

    @classmethod
    def tearDownClass(cls):
        """Undo the monkey patching so that other tests don't get stuck.

        Note: this is needed because of asyncio.
        """
        import importlib

        from gevent import monkey

        for modname, saved_attrs in monkey.saved.items():
            try:
                mod = importlib.import_module(modname)
                importlib.reload(mod)
                for attr, original in saved_attrs.items():
                    setattr(mod, attr, original)
            except ImportError:
                pass
Pretty horrid...? Maybe...
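A less invasive alternative would be to keep the gevent-patched tests in a separate process, so the patching can never leak into the asyncio tests at all. A rough sketch, assuming the gevent-based tests live in a hypothetical tests/gevent_suite directory:
import subprocess
import sys

# Run the gevent-patched tests in a child interpreter; the monkey patching
# dies with that process and cannot affect the asyncio tests run here.
result = subprocess.run(
    [sys.executable, "-m", "unittest", "discover", "-s", "tests/gevent_suite"],
)
print("gevent suite exit code:", result.returncode)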

Related

How to Document Classmethod Properties in Python 3.9 / 3.10?

A small addition in Python 3.9 was:
Changed in version 3.9: Class methods can now wrap other descriptors such as property().
This is useful in several contexts, for example as a solution for Using property() on classmethods, or in order to create what is essentially a lazily evaluated class attribute via the recipe:
from functools import cache


class A:
    @classmethod
    @property
    @cache
    def lazy_class_attribute(cls):
        """Method docstring."""
        return expensive_computation(cls)
This works well within Python: importing the class, or instantiating it or its subclasses, does not cause expensive_computation to run. However, both pydoc and Sphinx not only trigger expensive_computation when they try to obtain the docstring, they also fail to display any docstring for this @classmethod whatsoever.
Question: Is it possible - from within Python (*) - to have lazily evaluated class attributes/properties that do not get executed when building documentation?
(*) One workaround presented in Stop Sphinx from Executing a cached classmethod property, thanks to /u/jsbueno, consists of modifying the function body based on an environment variable:
import os


def lazy_class_attribute(cls):
    """Method docstring."""
    if os.environ.get("GENERATING_DOCS", False):
        return
    return expensive_computation(cls)
I like this workaround a lot, since in particular, it allows one to present a different output when documenting. (For example, my classes have attributes which are paths based on the class-name. If a different user executes the same script, the path will be different since their home folder is different.)
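Note that the variable does not have to be set in the shell: it can be set at the top of the Sphinx conf.py, which runs before autodoc imports the package. A minimal sketch (GENERATING_DOCS is the variable name from the workaround above):
# conf.py (Sphinx configuration)
import os

# Tell the package that documentation is being built, before autodoc imports it.
os.environ["GENERATING_DOCS"] = "1"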
There are two problems, however:
1. This approach depends on doing things outside of Python that, in my opinion, should ideally not be necessary outside of Python itself.
2. To get the docstring into the documentation, it seems we end up having to do something unintuitive like:
from functools import cache


class MetaClass(type):
    @property
    @cache
    def _expensive_function(cls):
        """some expensive function"""


class BaseClass(metaclass=MetaClass):
    lazy_attribute: type = classmethod(MetaClass._expensive_function)
    """lazy_attribute docstring"""
PS: By the way, is there any functional difference between a @classmethod @property and a plain class attribute? They seem very similar. The only difference I can make out at the moment is that if the attribute needs access to other @classmethods, we need to move everything into the metaclass as above.
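To make the comparison concrete, here is how the timing differs on Python 3.9/3.10 (expensive_computation is just a stand-in): a plain class attribute is computed while the class body runs and cannot refer to the class itself, whereas the decorated version is computed on first access and then cached per class.
from functools import cache


def expensive_computation(cls):
    print(f"computing for {cls.__name__}")
    return cls.__name__.lower()


class Eager:
    # A plain class attribute is computed right here, while the class body runs,
    # and it cannot refer to Eager (the class object does not exist yet).
    attribute = expensive_computation(object)  # prints "computing for object"


class Lazy:
    @classmethod
    @property
    @cache
    def attribute(cls):
        """Computed on first access, once per class, with access to cls."""
        return expensive_computation(cls)


print(Lazy.attribute)  # only now does "computing for Lazy" appear
print(Lazy.attribute)  # cached: nothing is recomputed the second time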

In a Ruby module, how do you test if a method exists in the context that uses the module?

Some context
I'm playing with Ruby to deepen my knowledge and have fun, while at the same time improving my knowledge of Esperanto, with a toy project I've just started called Ĝue. Basically, the aim is to use Ruby facilities to implement a DSL that matches Esperanto traits I find interesting in the context of a programming language.
The actual problem
So the first trait I would like to implement is verb inflection: the infinitive (ending in -i) is used in the method declaration, and the jussive (ending in -u) is used to call the method.
A first working basic implementation looks like this:
module Ĝue
  def method_missing(igo, *args, &block)
    case igo
    when /u$/
      celo = igo.to_s.sub(/u$/, 'i').to_s
      send(celo)
    else
      super
    end
  end
end
And it works. Now the next step is to make it more resilient, because there is no guarantee that celo will exist when the module tries to call it. That is, the module should implement the respond_to? method. Hence the question: how does the module know whether the context in which the module was included defines the corresponding infinitive method? Even after adding extend self at the beginning of the module, inside the module's methods methods.include? :testi still returns false when tested with the following code, although the testu call works perfectly:
#!/usr/bin/env ruby
require './teke/ĝue.rb'
include Ĝue
def testi; puts 'testo!' ;end
testu
Note that the test is run directly in the main scope. I don't know if this makes any difference compared to using a dedicated class scope; I would guess not, as to the best of my knowledge everything is an object in Ruby.
I found a working solution through In Ruby, how do I check if method "foo=()" is defined?
So in this case, this can be checked with:
eval("defined? #{celo}") == 'method'

Subclassing a Java class in JRuby, with Generics information

I'm trying to create an actor, using the newest Akka version (2.3.2 right now) using JRuby. Problem is, I keep getting the error:
Java::JavaLang::IllegalArgumentException: erased Creator types are unsupported, use Props.create(actorClass, creator) instead
Basically, I'm following the code here on Akka Documentation
I cannot create an akka.japi.Creator, because that requires generics information, which is erased at run time (and JRuby is basically run time everywhere). What I already tried:
class GreetingActor < UntypedActor
  def onReceive(message)
    if message.is_a? Greeting
      puts("Hello " + message.who)
    end
  end
end
system = ActorSystem.create("MySystem")
greeter = system.actorOf(Props.create(GreetingActor))
The last line fails with erased Creator types are unsupported. I tried to wrap it in an akka.japi.Creator, but got the same error (as Creator needs generics information, and JRuby doesn't provide it). I also tried to use become_java! on GreetingActor, but it returns nil (JRuby can't create new Java classes from Ruby classes if the Ruby class extends a Java class).
Is there a way to declare the Creator passing generics information?
I was trying to do the same thing, and while I don't have an answer to the title's question, I do have an answer for getting the example working: you have to use a different means of constructing the Props instance.
You need to use a factory:
java_import 'akka.actor.UntypedActorFactory'

class GreetingActorFactory
  include UntypedActorFactory

  def create
    GreetingActor.new
  end
end
Then you can use it as follows:
system = ActorSystem.create("GreetingSystem")
props = Props.create(GreetingActorFactory.new)
greeter = system.actorOf(props, "greeter")
greeter.tell(Greeting.new("John Weathers"), nil)
Hopefully, this helps you get further along!
I've already solved the Akka problem using the following gist: https://gist.github.com/mauricioszabo/6a713fd416c512e49f70
Still, the JRuby implementation may fail with other libraries that depend on non-erased types, but for now I can work around it in Akka using the code above.

Running Selenium Tests Sequentially on Same Firefox Instance

I'm trying to set up some tests for a website that has a forum. As a result, many of these things need to be done in succession, including login, creating and removing threads, etc.
At the moment I've devised test cases using Selenium and exported them to Python with WebDriver.
Is it possible to use only one webdriver instance across all tests so they can be executed in succession? I am rather new to Python (coming from Java), so the only idea I had was to create a base class that would instantiate the webdriver, have all tests subclass the base, and have the webdriver passed to the tests. (I want to just do this in Java, but I'm forcing myself to learn some Python.)
Or is there another feature built into Selenium that will accomplish this?
Thanks for your time.
I managed to get it working by having a base class my tests would subclass. This base class creates a static driver that is set up once by the setUp() method and reused by the tests that follow.
import random
import unittest, time, re

from selenium import webdriver


class TestBase(unittest.TestCase):
    driver = None
    rand = str(random.uniform(1, 10))
    base_url = "desiredtestURLhere"

    def setUp(self):
        if TestBase.driver is None:
            TestBase.driver = webdriver.Firefox()
            TestBase.driver.implicitly_wait(30)
        return TestBase.driver
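One thing missing from the code above is cleanup: nothing ever quits the shared browser. A small sketch of one way to handle that, registering an atexit hook in the same testbase module so Firefox is closed once the whole run is over, whichever test class happened to run last:
import atexit


def _quit_shared_driver():
    # Close the shared browser once the entire test run has finished.
    if TestBase.driver is not None:
        TestBase.driver.quit()
        TestBase.driver = None


atexit.register(_quit_shared_driver)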
And then the two tests I ran...
import unittest, time, re

from testbase import TestBase


class Login(TestBase):
    def test_login(self):
        driver = TestBase.driver
        base_url = TestBase.base_url
        driver.get(base_url)
        # etc
Test #2 to be run in succession...
import random
import unittest, time, re

from testbase import TestBase


class CreateThread(TestBase):
    def test_create_thread(self):
        driver = TestBase.driver
        base_url = TestBase.base_url
        rand = TestBase.rand
        driver.get(base_url + "/forum.php")
        # etc
My testsuite.py I would run...
import unittest, sys

# The following imports are my test cases
import login
import create_thread


def suite():
    tsuite = unittest.TestSuite()
    tsuite.addTest(unittest.makeSuite(login.Login))
    tsuite.addTest(unittest.makeSuite(create_thread.CreateThread))
    return tsuite


if __name__ == "__main__":
    result = unittest.TextTestRunner(verbosity=2).run(suite())
    sys.exit(not result.wasSuccessful())
This is my first exposure to Python so if there are any glaring issues, I'd appreciate feedback.
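One note for anyone reading this on a recent Python version: unittest.makeSuite has since been deprecated, so the suite() function above would now be written with the test loader, roughly:
def suite():
    loader = unittest.defaultTestLoader
    tsuite = unittest.TestSuite()
    tsuite.addTests(loader.loadTestsFromTestCase(login.Login))
    tsuite.addTests(loader.loadTestsFromTestCase(create_thread.CreateThread))
    return tsuite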

Importing a Ruby class from a socket connection?

I have this idea for a client/server archetype where the server would hold a hash of Marshal.dump'ed class objects along with their version numbers. Then the client could query the server concerning the version number and import the newer version of the class before instantiating it:
class Stuff
  def methods
    gibberish
  end
end
$obj_hash["Stuff"] = [3.0, Marshal.dump(Stuff)]
The problem I'm running into is that Ruby doesn't seem to want to let me Marshal.load the data once I've downloaded it from the server, because the class and its methods don't exist in the client. If I bypass this by creating a 'dummy' class, I'm then unable to replace the dummy class with the Marshal.load'ed data. If I simply try to use the loaded data as a class, it behaves according to the contents of the dummy class rather than the downloaded one.
Is there another way to go about this? If not then I guess I could just gz the code and then eval it at the other end, but I'm trying to avoid using eval or sending easily decipherable code over the line.
Thanks in advance for any advice.
Look at what happens.
class Stuff
  def methods
    "foo"
  end
end
ruby-1.8.7-p352 :001 > Marshal.dump(Stuff)
=> "\004\bc\bStuff"
Notice how it says nothing about "methods" or "foo." If the server isn't sending that code down the wire, how is the client supposed to know what Stuff#methods should do?
It won't. :)
To do what you want, you'll have to send down the code itself and eval it. You'll have to implement the versioning logic yourself, of course, and "really re-define" the classes (not just monkey-patch them).
See are you allowed to redefine a class in ruby? or is this just in irb
