How to import a module with "-" in its name in Transcrypt?

I'm writing my first transcrypt code (python transpiled to JS for the browser -- what a great idea!) and I'm stuck already, translating this JS code into python:
import { saveAs } from 'file-saver';
import inspect from 'browser-util-inspect';
As you can see, both of those third-party modules (installed from npm into my node_modules/ dir) have hyphens in their names. But it's not legal Python to write
from node_modules.file-saver import saveAs
(because Python module names must be identifiers), so I'm not sure how to proceed.

It should be possible to use module names containing a - by means of aliases, but this was broken in Transcrypt 3.7.1.
It will be fixed in the next minor release, and an example will be included in the autotest.
Sorry for the inconvenience.
Transcrypt 3.7.8 is out, containing a fix for this.
[EDIT]
# __pragma__ ('alias', 'specific_module_a', 'specific-module-a')
# __pragma__ ('alias', 'S', '$')
# __pragma__ ('alias', 'test_modules_b', 'test-modules-b')
# __pragma__ ('alias', 'specific_module_b', 'specific-module-b')
from test_modules_a.__specific_module_a import the__S__Function
from test_modules_b__.__specific_module_b import theBFunction as aBFunction
the__S__Variable = 3
print (the__S__Variable)
the__S__Function (print)
aBFunction (print)
Note that the pragmas are needed to conform to standard Python syntax and the Python scanner/parser, which don't allow $ and - in names.
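Applied to the modules from the question, a sketch might look like this (untested; the alias names on the left are free choices, and whether a node_modules prefix is needed depends on your module search path):

```python
# Transcrypt source: the alias pragmas map legal Python identifiers
# onto the hyphenated npm package names before transpilation.
# __pragma__ ('alias', 'file_saver', 'file-saver')
# __pragma__ ('alias', 'browser_util_inspect', 'browser-util-inspect')

from file_saver import saveAs
import browser_util_inspect as inspect
```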

Related

Is there a tutorial on how to suppress Pylint warnings for Squish?

I am trying to suppress Pylint warnings from Squish, but without having to write the same code in front of my code as described here: https://kb.froglogic.com/display/KB/Example+-+Using+PyLint+with+Squish+test+scripts+that+use+source%28%29
I would like to know if there is a file that I can configure and upload into Squish.
The article describes the only option: defining the Squish functions and symbols yourself.
However, it shows what to do in a single Squish test script file only for the sake of simplicity.
You should of course put those Squish function definitions in a separate, reusable file, and use import to "load" the definitions into your test.py file:
from squish_definitions import *

def main():
    ...
in squish_definitions.py:
# Trick Pylint and Python IDEs into accepting the
# definitions in this block, whereas upon execution
# none of these definitions will take place:
if -0:
    class ApplicationContext:
        pass

    def startApplication(aut_path_or_name, optional_squishserver_host, optional_squishserver_port):
        return ApplicationContext
    # etc.
Also, you should generally prefer Python's import over Squish's source() function.

Python 3.5 - How to print a value with double quotes in YAML?

I want to write a YAML file (with keys and values), with some values between double quotes.
I wanted to use the solution provided here: How to print a value with double quotes and spaces in YAML?
Unfortunately, when I installed ruamel.yaml for Python 3.5 (sudo apt install python3-ruamel.yaml), I was not able to find the function DoubleQuotedScalarString() in scalarstring.py.
Here is what it looks like:
from __future__ import absolute_import
from __future__ import print_function

__all__ = ["ScalarString", "PreservedScalarString"]

try:
    from .compat import text_type
except (ImportError, ValueError):  # for Jython
    from ruamel.yaml.compat import text_type


class ScalarString(text_type):
    def __new__(cls, *args, **kw):
        return text_type.__new__(cls, *args, **kw)


class PreservedScalarString(ScalarString):
    def __new__(cls, value):
        return ScalarString.__new__(cls, value)


def preserve_literal(s):
    return PreservedScalarString(s.replace('\r\n', '\n').replace('\r', '\n'))


def walk_tree(base):
    """
    the routine here walks over a simple yaml tree (recursing in
    dict values and list items) and converts strings that
    have multiple lines to literal scalars
    """
    from ruamel.yaml.compat import string_types

    if isinstance(base, dict):
        for k in base:
            v = base[k]
            if isinstance(v, string_types) and '\n' in v:
                base[k] = preserve_literal(v)
            else:
                walk_tree(v)
    elif isinstance(base, list):
        for idx, elem in enumerate(base):
            if isinstance(elem, string_types) and '\n' in elem:
                print(elem)
                base[idx] = preserve_literal(elem)
            else:
                walk_tree(elem)
Currently this is what I obtain when using ruamel.yaml.dump() to produce my YAML file:
key1: 0,0,0,0
key2: 0,0,0,0
And here is what I would like in my yaml file:
key1: "0,0,0,0"
key2: "0,0,0,0"
How am I supposed to solve this?
The class DoubleQuotedScalarString was added 2016-07-06.
You should update your version of ruamel.yaml, e.g. using pip install -U ruamel.yaml. You can see which version you have by looking at the __init__.py in the yaml directory. The current version (June 2018) has:
version_info=(0, 15, 37),
In general, it is best not to install additional Python packages under Linux using the system package manager. Instead, create a virtualenv for each program you want to develop and install all packages necessary for that program in that virtualenv.
So I found a quick and dirty solution to complement Anthon's answer:
First of all, Anthon is right, I should work in a virtual environment. With my current app I was lazy, so I didn't; that was a mistake.
I found that sudo apt install python3-ruamel.yaml does not install the package in the same dir as sudo pip3 install ruamel.yaml.
The package versions are also not the same, which is why I got confused.
Here is the quick and dirty solution that worked for me:
copy all .py files from the source (https://pypi.org/project/ruamel.yaml/#files) into the dir where your ruamel.yaml package is installed (/my/default/python3-lib/ruamel/yaml/), e.g. with mv /home/user/Downloads/ruamel.yaml-0.15.38/*.py /my/default/python3-lib/ruamel/yaml/
It solved the problem for me.
As Anthon already answered, the clean solution is to use a virtual env.

VSC: temporarily turn off YAML linting

Trying to find a way to turn off the red lines temporarily for that file only.
Maybe try to disable the yaml.schemaStore?
Go into settings.json and add:
"yaml.schemaStore.enable": false
Since this is not valid YAML at all, but you want to edit it as YAML, you should make it into valid YAML. If you instead turn off the errors, you would probably lose most of the advantages of the YAML editing mode.
If saltstate allows you to change the block_start_string and variable_start_string jinja2 uses, you can change {% into #% (or ###% if #% naturally occurs in your source), and also change {{ into <{ (or <<{, you get the idea). If you were to call jinja2 directly, you would then pass block_start_string='#%' and variable_start_string='<{' to the Environment. If the above is possible, you only have to change your input file once; do that with an editor.
If you cannot get saltstate to do the sane thing, you're still not stuck, but you have to do a bit more using Python, ruamel.yaml and some support packages (disclaimer: I am the author of those packages).
Install with:
pip install ruamel.yaml[jinja2] ruamel.std.pathlib
Then before editing run the program:
from ruamel.yaml import YAML
from ruamel.std.pathlib import Path

yamlj2 = YAML(typ='jinja2')
yamlrt = YAML()
yaml_flow_style = YAML()
yaml_flow_style.default_flow_style = True

in_file = Path('init.sls')
backup_file = Path('init.sls.org')
in_file.copy(backup_file)

data = yamlj2.load(in_file)
with in_file.open('w') as fp:
    # write the header with info needed for the reverse conversion
    fp.write('# ruamel.yaml.jinja2: ')  # no EOL
    yaml_flow_style.dump(yamlj2._plug_in_jinja2, fp)
    yamlrt.dump(data, fp)
which changes the offending jinja2 sequences and adds a one-line header comment with the actual patterns used to the file. You should then be able to edit the init.sls file without getting all those errors.
Before calling saltstate, run the following:
from ruamel.yaml import YAML
from ruamel.std.pathlib import Path

in_file = Path('init.sls')
yamlj2 = YAML(typ='jinja2')
yamlrt = YAML()
yamlnort = YAML(typ='safe')

with in_file.open() as fp:
    yamlj2._plug_in_jinja2 = yamlnort.load(fp.readline().split(':', 1)[1])
    data = yamlrt.load(fp)
yamlj2.dump(data, in_file)
If you have multiple of these files, you probably want to take the filename from sys.argv[1]. You might actually call the saltstate program from this second Python program (i.e. decode and run).

How to clear cache (or force recompilation) in numba

I have a fairly large codebase written in numba, and I have noticed that when caching is enabled for a function that calls another numba-compiled function in another file, changes to the called function are not picked up. The situation occurs when I have two files:
testfile2:
import numba

@numba.njit(cache=True)
def function1(x):
    return x * 10
testfile:
import numba
from tests import testfile2

@numba.njit(cache=True)
def function2(x, y):
    return y + testfile2.function1(x)
If, in a Jupyter notebook, I run the following:
# INSIDE JUPYTER NOTEBOOK
import sys
sys.path.insert(1, "path/to/files/")
from tests import testfile
testfile.function2(3, 4)
>>> 34 # good value
However, if I then change testfile2 to the following:
import numba

@numba.njit(cache=True)
def function1(x):
    return x * 1
then restart the Jupyter notebook kernel and rerun the notebook, I get the following:
import sys
sys.path.insert(1, "path/to/files/")
from tests import testfile
testfile.function2(3, 4)
>>> 34 # bad value, should be 7
Importing both files into the notebook has no effect on the bad result. Setting cache=False on function1 alone also has no effect. What does work is setting cache=False on all njit'ed functions, restarting the kernel, and rerunning.
I believe that LLVM is probably inlining some of the called functions and then never checking them again.
I looked in the source, discovered the cache class numba.caching.NullCache, instantiated a cache object, and ran the following:
cache = numba.caching.NullCache()
cache.flush()
Unfortunately that appears to have no effect.
Is there a numba environment setting, or another way I can manually clear all cached functions within a conda env? Or am I simply doing something wrong?
I am running numba 0.33 with Anaconda Python 3.6 on Mac OS X 10.12.3.
I "solved" this with a hack after seeing Josh's answer, by creating a utility method in the project to kill off the cache.
There is probably a better way, but this works. I'm leaving the question open in case someone has a less hacky way of doing this.
import os

def kill_files(folder):
    for the_file in os.listdir(folder):
        file_path = os.path.join(folder, the_file)
        try:
            if os.path.isfile(file_path):
                os.unlink(file_path)
        except Exception:
            print("failed on filepath: %s" % file_path)

def kill_numba_cache():
    root_folder = os.path.realpath(__file__ + "/../../")
    for root, dirnames, filenames in os.walk(root_folder):
        for dirname in dirnames:
            if dirname == "__pycache__":
                try:
                    kill_files(root + "/" + dirname)
                except Exception:
                    print("failed on %s" % root)
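A note of caution: the hack above deletes everything in each __pycache__ directory, including Python's own .pyc files. Numba stores its on-disk cache as .nbi/.nbc files (an index plus compiled data), so a slightly safer variant, sketched here, removes only those:

```python
import os

def kill_numba_cache_only(root_folder):
    """Remove only numba's .nbi/.nbc cache files, leaving .pyc files alone."""
    for root, dirnames, filenames in os.walk(root_folder):
        if os.path.basename(root) != "__pycache__":
            continue
        for fname in filenames:
            if fname.endswith((".nbi", ".nbc")):
                try:
                    os.unlink(os.path.join(root, fname))
                except OSError:
                    print("failed on %s" % os.path.join(root, fname))
```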
This is a bit of a hack, but it's something I've used before. If you put this function in the top-level of where your numba functions are (for this example, in testfile), it should recompile everything:
import inspect
import sys

def recompile_nb_code():
    this_module = sys.modules[__name__]
    module_members = inspect.getmembers(this_module)

    for member_name, member in module_members:
        if hasattr(member, 'recompile') and hasattr(member, 'inspect_llvm'):
            member.recompile()
and then call it from your jupyter notebook when you want to force a recompile. The caveat is that it only works on files in the module where this function is located and their dependencies. There might be another way to generalize it.

How to validate Jinja syntax without variable interpolation

I have had no success locating a good precommit hook to validate that a Jinja2-formatted file is well-formed without attempting to substitute variables. The goal is something that returns an exit code of zero if the file is well-formed, regardless of whether variables are available, and 1 otherwise.
You can do this within Jinja itself, you'd just need to write a script to read and parse the template.
Since you only care about well-formed templates, and not whether or not the variables are available, it should be fairly easy to do:
#!/usr/bin/env python
# filename: check_my_jinja.py
import sys
from jinja2 import Environment

env = Environment()
with open(sys.argv[1]) as template:
    env.parse(template.read())
or something that iterates over all templates
#!/usr/bin/env python
# filename: check_my_jinja_recursive.py
import sys
import os
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader('./mytemplates'))
templates = [x for x in env.list_templates() if x.endswith('.jinja2')]

for template in templates:
    # parse the raw template source; env.parse() expects a string,
    # not the compiled Template returned by get_template()
    source = env.loader.get_source(env, template)[0]
    env.parse(source)
If a template has incorrect syntax, you will get a TemplateSyntaxError.
So your precommit hook might look like:
python check_my_jinja.py template.jinja2
python check_my_jinja_recursive.py /dir/templates_folder
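To get the exact exit-code behavior the question asks for (0 when well-formed, 1 otherwise) plus a readable message, the first script can be wrapped in a try/except; this is a sketch along the same lines (the filename is a free choice):

```python
#!/usr/bin/env python
# filename: check_my_jinja_exitcode.py
import sys
from jinja2 import Environment, TemplateSyntaxError

def check(source, name='<template>'):
    """Return 0 if the template source parses, 1 otherwise."""
    try:
        # parse() only builds the AST; no variables are substituted
        Environment().parse(source)
        return 0
    except TemplateSyntaxError as e:
        print("%s:%d: %s" % (name, e.lineno, e.message))
        return 1

if __name__ == '__main__' and len(sys.argv) > 1:
    with open(sys.argv[1]) as template:
        sys.exit(check(template.read(), sys.argv[1]))
```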
