We write many PySide scripts for Maya 2015, and we save the settings using QSettings. Normally we create the QSettings object in our "readSettings" and "writeSettings" functions. Today I tried making the QSettings object a class variable. But that caused some strange effects. Certain values that normally come back as <type 'unicode'> started coming back as <type 'bool'>, but not all the time!
Here is a test script I wrote to illustrate the problem:
import shiboken
from PySide import QtGui, QtCore
from maya import OpenMayaUI

#------------------------------------------------------------------------------
def getMayaMainWindow():
    parentWindow = OpenMayaUI.MQtUtil.mainWindow()
    if parentWindow:
        return shiboken.wrapInstance(long(parentWindow), QtGui.QWidget)

#------------------------------------------------------------------------------
class TestQSettingsWin(QtGui.QMainWindow):
    def __init__(self, parent=getMayaMainWindow()):
        super(TestQSettingsWin, self).__init__(parent)
        self.setWindowTitle('Test QSettings')
        self.setObjectName('testAllMMessagesWindow')
        self.centralWidget = QtGui.QWidget()
        self.setCentralWidget(self.centralWidget)
        self.mainLayout = QtGui.QVBoxLayout(self.centralWidget)
        self.checkBox = QtGui.QCheckBox('check box')
        self.mainLayout.addWidget(self.checkBox)
        self.readSettings()

    def closeEvent(self, event):
        self.writeSettings()

    def getQSettingsLocation(self):
        raise NotImplementedError('Subclasses of TestQSettingsWin need to '
                                  'implement "getQSettingsLocation".')

    def readSettings(self):
        setting = self.getQSettingsLocation()
        self.restoreGeometry(setting.value('geometry'))
        self.restoreState(setting.value('windowState'))
        print type(setting.value('checkBox'))

    def writeSettings(self):
        setting = self.getQSettingsLocation()
        setting.setValue('geometry', self.saveGeometry())
        setting.setValue('windowState', self.saveState())
        setting.setValue('checkBox', self.checkBox.isChecked())

#------------------------------------------------------------------------------
class TestQSettingsClassVar(TestQSettingsWin):
    savedSettings = QtCore.QSettings(QtCore.QSettings.IniFormat,
                                     QtCore.QSettings.UserScope,
                                     "Test",
                                     "TestQSettings1")

    def getQSettingsLocation(self):
        return self.savedSettings

#------------------------------------------------------------------------------
class TestQSettingsDefScope(TestQSettingsWin):
    def getQSettingsLocation(self):
        setting = QtCore.QSettings(QtCore.QSettings.IniFormat,
                                   QtCore.QSettings.UserScope,
                                   "Test",
                                   "TestQSettings3")
        return setting

#------------------------------------------------------------------------------
def showTestWindows():
    test1 = TestQSettingsClassVar()
    test1.setAttribute(QtCore.Qt.WA_DeleteOnClose, True)
    test1.show()
    test2 = TestQSettingsDefScope()
    test2.setAttribute(QtCore.Qt.WA_DeleteOnClose, True)
    test2.show()
And here are the results from running it in an interactive session:
>>> import testQSettings
>>> testQSettings.showTestWindows()
<type 'NoneType'>
<type 'NoneType'>
>>> testQSettings.showTestWindows()
<type 'bool'>
<type 'unicode'>
>>> testQSettings.showTestWindows()
<type 'bool'>
<type 'unicode'>
>>> reload(testQSettings)
# Result: <module 'testQSettings' from 'C:/Users/becca/Documents/maya/2015-x64/scripts\testQSettings.pyc'> #
>>> testQSettings.showTestWindows()
<type 'unicode'>
<type 'unicode'>
>>> testQSettings.showTestWindows()
<type 'bool'>
<type 'unicode'>
>>> testQSettings.showTestWindows()
<type 'bool'>
<type 'unicode'>
As you can see, creating the QSettings object whenever it is needed consistently returns a <type 'unicode'> result for the data value. But creating the QSettings object as a class variable returns a <type 'bool'> result except when the module is reloaded, and then it returns a <type 'unicode'>.
Can anyone explain this strange behavior? Is there a rule that I should not make the QSettings object a class variable?
The settings object has to serialize the various types of value to bytes before writing them to disk. This is usually done when the settings object is deleted (or, if an event loop is running, unsaved data may be periodically flushed to disk).
As long as the unsaved data has not been flushed to disk, settings.value() will return the unserialized value in its original type.
It may be possible to forcibly flush the data yourself by calling settings.sync(), but I would strongly advise against trying this. You should always create a new QSettings object whenever you want to read or write values, and ensure that it gets deleted after you've used it. That should be enough to guarantee consistency.
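A minimal sketch of that per-call pattern (essentially what TestQSettingsDefScope above already does, with an explicit delete added for illustration):

def writeSettings(self):
    # A short-lived QSettings object is created just for this write.
    setting = QtCore.QSettings(QtCore.QSettings.IniFormat,
                               QtCore.QSettings.UserScope,
                               "Test", "TestQSettings3")
    setting.setValue('checkBox', self.checkBox.isChecked())
    # Dropping the last reference deletes the (parentless) object,
    # which serializes the values and flushes them to disk.
    del setting

Because the object never outlives the call, the next read gets the serialized (unicode) values from disk rather than live Python objects still cached in memory.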
I understand Reformer is able to handle a large number of tokens. However, it does not appear to support the summarization task:
>>> from transformers import ReformerTokenizer, ReformerModel
>>> from transformers import pipeline
>>> summarizer = pipeline("summarization", model="reformer")
404 Client Error: Not Found for url: https://huggingface.co/reformer/resolve/main/config.json
...
How would you construct the pipeline "manually" to use reformer for summarization?
Try this:
summarizer = pipeline("summarization", model="google/reformer-enwik8")
via here.
However, this produces...
/lib/python3.7/site-packages/sentencepiece.py", line 177, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
TypeError: not a string
Python 3.9.7, Numba 0.54.0
I wrote this simple code, which works well:
import numpy as np
import numba
from collections import namedtuple

Deal = namedtuple('Deal', ['DateTime', 'Price', 'Quantity'])
dtype_deals = [('DateTime', 'datetime64[s]'), ('Price', 'float64'), ('Quantity', 'uint64')]

# @numba.njit
def my_func():
    deals = []
    deals.append(Deal(np.datetime64('2015-02-27T15:12:12'), 12.48, 10))
    deals.append(Deal(np.datetime64('2015-03-17T15:08:36'), 1.15, 100))
    deals.append(Deal(np.datetime64('2015-04-02T15:14:32'), 11.01, 20))
    return np.array(deals, dtype=dtype_deals)

print(str(my_func()))
But when I uncomment @numba.njit, I get an error:
numba.core.errors.TypingError: Failed in nopython mode pipeline (step:
nopython frontend) No implementation of function
Function(datetime64[]) found for signature:
(Literalstr) There are 2 candidate implementations:
Of which 1 did not match due to: Overload in function 'make_callable_template..generic': File:
numba/core/typing/templates.py: Line 174. With argument(s):
'(unicode_type)': Rejected as the implementation raised a specific
error:
TypingError: Casting unicode_type to datetime64[] directly is unsupported. raised from
/home/ivan/.local/lib/python3.9/site-packages/numba/core/typing/builtins.py:818
Of which 1 did not match due to: Overload in function 'make_callable_template..generic': File:
numba/core/typing/templates.py: Line 174. With argument(s):
'(Literalstr)': Rejected as the
implementation raised a specific error:
TypingError: Casting Literalstr to datetime64[] directly is unsupported. raised from
/home/ivan/.local/lib/python3.9/site-packages/numba/core/typing/builtins.py:818
During: resolving callee type: class(datetime64[])
During: typing of call at /home/ivan/eclipse-workspace/MarketAnalysis/Experiment.py (11)
File "Experiment.py", line 11: def my_func():
deals = []
deals.append(Deal(np.datetime64('2015-02-27T15:12:12'), 12.48, 10))
^
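One workaround sketch (an assumption about the goal, not from the original post): the failing piece is the string-to-datetime64 cast, so parse the timestamp strings in plain Python and pass only numeric columns into the jitted function:

import numpy as np
import numba

dtype_deals = [('DateTime', 'datetime64[s]'), ('Price', 'float64'), ('Quantity', 'uint64')]

def build_deals():
    # Plain Python: np.datetime64('...') string parsing is exactly what
    # nopython mode rejects, so it stays outside the jitted code.
    return np.array([(np.datetime64('2015-02-27T15:12:12'), 12.48, 10),
                     (np.datetime64('2015-03-17T15:08:36'), 1.15, 100),
                     (np.datetime64('2015-04-02T15:14:32'), 11.01, 20)],
                    dtype=dtype_deals)

@numba.njit
def total_quantity(quantities):
    # The jitted code only ever sees a plain uint64 array.
    total = np.uint64(0)
    for q in quantities:
        total += q
    return total

deals = build_deals()
print(total_quantity(deals['Quantity']))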
I'm working in a Python 3.8+ Django/Rest-Framework environment enforcing types in new code but built on a lot of untyped legacy code and data. We are using TypedDicts extensively for ensuring that data we are generating passes to our TypeScript front-end with the proper data type.
MyPy/PyCharm/etc. does a great job of checking that our new code spits out data that conforms, but we want to test that the output of our many RestSerializers/ModelSerializers fits the TypedDict. If I have a serializer and typed dict like:
class PersonSerializer(ModelSerializer):
    class Meta:
        model = Person
        fields = ['first', 'last']

class PersonData(TypedDict):
    first: str
    last: str
    email: str
and then run code like:
person_dict: PersonData = PersonSerializer(Person.objects.first()).data
Static type checkers aren't able to figure out that person_dict is missing the required email key, because (by design of PEP-589) it is just a normal dict at runtime. But I can write something like:
annotations = PersonData.__annotations__
for k in annotations:
    assert k in person_dict  # or something more complex.
    assert isinstance(person_dict[k], annotations[k])
and it will find that email is missing from the data of the serializer. This is well and good in this case, where I don't have any changes introduced by from __future__ import annotations (not sure if this would break it), and all my type annotations are bare types. But if PersonData were defined like:
class PersonData(TypedDict):
    email: Optional[str]
    affiliations: Union[List[str], Dict[int, str]]
then isinstance is not good enough to check if the data passes (since "Subscripted generics cannot be used with class and instance checks").
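For example, on the Python 3.8+ mentioned above, this kind of check is a runtime error rather than a False result:

from typing import Dict, List, Union

# TypeError: Subscripted generics cannot be used with class and instance checks
isinstance(['a'], Union[List[str], Dict[int, str]])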
What I'm wondering is if there already exists a callable function/method (in mypy or another checker) that would allow me to validate a TypedDict (or even a single variable, since I can iterate a dict myself) against an annotation and see if it validates?
I'm not concerned about speed, etc., since the point of this is to check all our data/methods/functions once and then remove the checks later once we're happy that our current data validates.
The simplest solution I found works using pydantic.
from typing import cast, TypedDict

import pydantic

class SomeDict(TypedDict):
    val: int
    name: str

# this could be a valid/invalid declaration
obj: SomeDict = {
    'val': 12,
    'name': 'John',
}

# validate with pydantic
try:
    obj = cast(SomeDict, pydantic.create_model_from_typeddict(SomeDict)(**obj).dict())
except pydantic.ValidationError as exc:
    print(f"ERROR: Invalid schema: {exc}")
EDIT: When type checking this, it currently returns an error, but works as expected. See here: https://github.com/samuelcolvin/pydantic/issues/3008
You may want to have a look at https://pypi.org/project/strongtyping/. This may help.
In the docs you can find this example:
from typing import List, TypedDict

from strongtyping.strong_typing import match_class_typing

@match_class_typing
class SalesSummary(TypedDict):
    sales: int
    country: str
    product_codes: List[str]

# works as expected
SalesSummary({"sales": 10, "country": "Foo", "product_codes": ["1", "2", "3"]})

# will raise a TypeMisMatch
SalesSummary({"sales": "Foo", "country": 10, "product_codes": [1, 2, 3]})
A little bit of a hack, but you can check two types using mypy's command-line -c option. Just wrap it in a Python function:
import subprocess

def is_assignable(type_to, type_from) -> bool:
    """
    Returns True if `type_from` can be assigned to `type_to`,
    e.g. type_to := type_from

    Example:
    >>> is_assignable(bool, str)
    False
    >>> from typing import *
    >>> is_assignable(Union[List[str], Dict[int, str]], List[str])
    True
    """
    code = "\n".join((
        f"import typing",
        f"type_to: {type_to}",
        f"type_from: {type_from}",
        f"type_to = type_from",
    ))
    return subprocess.call(("mypy", "-c", code)) == 0
You could do something like this:
from typing import Any

def validate(typ: Any, instance: Any) -> bool:
    for property_name, property_type in typ.__annotations__.items():
        value = instance.get(property_name, None)
        if value is None:
            # Check for missing keys
            print(f"Missing key: {property_name}")
            return False
        elif property_type not in (int, float, bool, str):
            # check if property_type is object (e.g. not a primitive)
            result = validate(property_type, value)
            if result is False:
                return False
        elif not isinstance(value, property_type):
            # Check for type equality
            print(f"Wrong type: {property_name}. Expected {property_type}, got {type(value)}")
            return False
    return True
And then test some object, e.g. one that was passed to your REST endpoint:
class MySubModel(TypedDict):
    subfield: bool

class MyModel(TypedDict):
    first: str
    last: str
    email: str
    sub: MySubModel

m = {
    'email': 'JohnDoeAtDoeishDotCom',
    'first': 'John'
}

assert validate(MyModel, m) is False
This one prints the first error and returns a bool; you could change that to exceptions, possibly collecting all the missing keys. You could also extend it to fail on keys not defined by the model, as sketched below.
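For instance, a rough sketch of that extra-keys extension (hypothetical, not part of the code above):

def reject_extra_keys(typ: Any, instance: dict) -> bool:
    # Fail on any keys the TypedDict does not declare.
    extra = set(instance) - set(typ.__annotations__)
    if extra:
        print(f"Unexpected keys: {extra}")
        return False
    return True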
I like your solution! To report all the errors at once instead of making the user fix them one run at a time, I added some code to your solution :D
def validate_custom_typed_dict(instance: Any, custom_typed_dict: TypedDict) -> bool:
    key_errors = []
    type_errors = []
    for property_name, type_ in custom_typed_dict.__annotations__.items():
        value = instance.get(property_name, None)
        if value is None:
            # Check for missing keys
            key_errors.append(f"\t- Missing property: '{property_name}' \n")
        elif type_ not in (int, float, bool, str):
            # check if type is object (e.g. not a primitive)
            try:
                validate_custom_typed_dict(value, type_)
            except Exception:
                type_errors.append(f"\t- '{property_name}' expected {type_}, got {type(value)}\n")
        elif not isinstance(value, type_):
            # Check for type equality
            type_errors.append(f"\t- '{property_name}' expected {type_}, got {type(value)}\n")
    if len(key_errors) > 0 or len(type_errors) > 0:
        error_message = f'\n{"".join(key_errors)}{"".join(type_errors)}'
        raise Exception(error_message)
    return True
some console output:
Exception:
- Missing property: 'Combined_cycle'
- Missing property: 'Solar_PV'
- Missing property: 'Hydro'
- 'timestamp' expected <class 'str'>, got <class 'int'>
- 'Diesel_engines' expected <class 'float'>, got <class 'int'>
I have YAML data like the input below, and I need the output as key-value pairs.
Input
a="""
--- !ruby/hash:ActiveSupport::HashWithIndifferentAccess
code:
- '716'
- '718'
id:
- 488
- 499
"""
Output needed
{'code': ['716', '718'], 'id': [488, 499]}
The default constructor was giving me an error. I tried adding a new constructor, and now it's not giving me an error, but I am not able to get key-value pairs.
FYI, if I remove the !ruby/hash:ActiveSupport::HashWithIndifferentAccess line from my YAML, then it gives me the desired output.
def new_constructor(loader, tag_suffix, node):
    if type(node.value) == 'list':
        val = ''.join(node.value)
    else:
        val = node.value
    val = node.value
    ret_val = """
    {0}
    """.format(val)
    return ret_val

yaml.add_multi_constructor('', new_constructor)
yaml.load(a)
output
"\n [(ScalarNode(tag=u'tag:yaml.org,2002:str', value=u'code'), SequenceNode(tag=u'tag:yaml.org,2002:seq', value=[ScalarNode(tag=u'tag:yaml.org,2002:str', value=u'716'), ScalarNode(tag=u'tag:yaml.org,2002:str', value=u'718')])), (ScalarNode(tag=u'tag:yaml.org,2002:str', value=u'id'), SequenceNode(tag=u'tag:yaml.org,2002:seq', value=[ScalarNode(tag=u'tag:yaml.org,2002:int', value=u'488'), ScalarNode(tag=u'tag:yaml.org,2002:int', value=u'499')]))]\n "
Please suggest.
This is not a solution using PyYAML, but I recommend using ruamel.yaml instead. If for no other reason, it's more actively maintained than PyYAML. A quote from the overview:
Many of the bugs filed against PyYAML, but that were never acted upon, have been fixed in ruamel.yaml
To load that string, you can do
import ruamel.yaml
parser = ruamel.yaml.YAML()
obj = parser.load(a) # as defined above.
I strongly recommend following @Andrew F's answer, but in case you wonder why your code did not get the proper result: you don't correctly process the node under the tag in your tag handling.
Although the node's value is a list (of tuples with key-value pairs), you should test for the type of the node itself (using isinstance) and then hand it over to the "normal" mapping processing routine, as the tag is on a mapping:
import yaml
from yaml.loader import SafeLoader

a = """\
--- !ruby/hash:ActiveSupport::HashWithIndifferentAccess
code:
- '716'
- '718'
id:
- 488
- 499
"""

def new_constructor(loader, tag_suffix, node):
    if isinstance(node, yaml.nodes.MappingNode):
        return loader.construct_mapping(node, deep=True)
    raise NotImplementedError

yaml.add_multi_constructor('', new_constructor, Loader=SafeLoader)

data = yaml.load(a, Loader=SafeLoader)
print(data)
which gives:
{'code': ['716', '718'], 'id': [488, 499]}
You should not use PyYAML's yaml.load(), it is documented to be potentially unsafe
and above all it is not necessary. Just add the new constructor to the SafeLoader.
from pexpect import pxssh
import getpass
import time
import sys

s = pxssh.pxssh()

class Testinstall:
    def setup_class(cls):
        cls.s = pxssh.pxssh()
        cls.s.login('10.10.62.253', 'User', 'PW', auto_prompt_reset=False)

    def teardown_class(cls):
        cls.s.logout()

    def test_cleanup(cls):
        cls.s.sendline('cat test.py')
        cls.s.prompt(timeout=10)
        cls.s.sendline('cat profiles.conf')
        cls.s.prompt(timeout=10)
        print('s.before')
        print(cls.s.before)
        print('s.after')
        print(cls.s.after)
In the above code, print(cls.s.before) prints the output of both cat commands.
I expected it to print only the output of the 2nd cat command, i.e. cat profiles.conf.
When tried in a Python session in the shell, it shows the output of only the second cat command (as expected).
If you use auto_prompt_reset=False for pxssh.login() then you cannot use pxssh.prompt(). According to the doc:
pxssh uses a unique prompt in the prompt() method. If the original prompt is not reset then this will disable the prompt() method unless you manually set the PROMPT attribute.
So for your code, both prompt() would time out and .before would have all output and .after would be pexpect.exceptions.TIMEOUT.
The doc also says that
Calling prompt() will erase the contents of the before attribute even if no prompt is ever matched.
but this is NOT true based on my testing:
>>> from pexpect import pxssh
>>> ssh = pxssh.pxssh()
>>> ssh.login('127.0.0.1', 'root', 'passwd')
True
>>> ssh.PROMPT = 'not-the-real-prompt'
>>> ssh.sendline('hello')
6
>>> ssh.prompt(timeout=1)
False
>>> ssh.before
'hello\r\n-bash: hello: command not found\r\n[PEXPECT]# '
>>> ssh.after
<class 'pexpect.exceptions.TIMEOUT'>
>>> ssh.sendline('world')
6
>>> ssh.prompt(timeout=1)
False
>>> ssh.before
'hello\r\n-bash: hello: command not found\r\n[PEXPECT]# world\r\n-bash: world: command not found\r\n[PEXPECT]# '
>>> ssh.after
<class 'pexpect.exceptions.TIMEOUT'>
>>>
From the result you can see .before is not erased for the 2nd prompt(). Instead it's appended with the new output.
You can use ssh.sync_original_prompt() instead of ssh.prompt().
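For completeness, a minimal sketch of the manual-PROMPT route the docs describe (the regex below is an assumption; it must match your remote shell's actual prompt):

from pexpect import pxssh

s = pxssh.pxssh()
s.login('10.10.62.253', 'User', 'PW', auto_prompt_reset=False)
s.PROMPT = r'\$ $'  # hypothetical pattern; set it to whatever the real prompt looks like
s.sendline('cat profiles.conf')
s.prompt(timeout=10)  # prompt() can match now, so .before holds only this command's output
print(s.before)
s.logout()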