How to validate Jinja syntax without variable interpolation

I have had no success locating a good pre-commit hook I can use to validate that a Jinja2-formatted file is well-formed without attempting to substitute variables. The goal is something that returns an exit code of zero if the file is well-formed, regardless of whether the variables are available, and 1 otherwise.

You can do this within Jinja itself; you just need to write a script that reads and parses the template.
Since you only care about whether the template is well-formed, and not whether the variables are available, it should be fairly easy to do:
#!/usr/bin/env python
# filename: check_my_jinja.py
import sys
from jinja2 import Environment
env = Environment()
with open(sys.argv[1]) as template:
    env.parse(template.read())
or something that iterates over all templates
#!/usr/bin/env python
# filename: check_my_jinja_recursive.py
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader('./mytemplates'))
templates = [x for x in env.list_templates() if x.endswith('.jinja2')]
for template in templates:
    # get_template() parses the template and raises TemplateSyntaxError on malformed syntax
    env.get_template(template)
If the syntax is incorrect, you will get a TemplateSyntaxError, and the script exits with a non-zero status.
So your pre-commit hook might look like:
python check_my_jinja.py template.jinja2
python check_my_jinja_recursive.py /dir/templates_folder
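If you want the hook to report where the syntax error is and keep checking the remaining files rather than stopping at the first traceback, a small wrapper does the job. This is only a sketch, and check_my_jinja_cli.py is a made-up name:
#!/usr/bin/env python
# filename: check_my_jinja_cli.py (hypothetical wrapper, not one of the scripts above)
import sys
from jinja2 import Environment, TemplateSyntaxError

env = Environment()
exit_code = 0
for path in sys.argv[1:]:
    with open(path) as f:
        source = f.read()
    try:
        env.parse(source)
    except TemplateSyntaxError as err:
        # report file, line and message, then keep checking the remaining files
        print(f"{path}:{err.lineno}: {err.message}", file=sys.stderr)
        exit_code = 1
sys.exit(exit_code)
An unhandled TemplateSyntaxError already makes Python exit with status 1, so the plain scripts above also satisfy the zero/non-zero requirement; the wrapper just adds friendlier output.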

Related

How do I pass a custom argument to non-test module?

I am trying to set a new argument in my test suite and I would like to pass it along with pytest commands (e.g. pytest --arg=foo, or pytest --arg=bar path/to/test.py::test_func). Those arguments then need to be used inside some other module, not a test one.
I tried with sys.argv as it is the easiest method and I only need one argument.
To get the values into your Python scripts you need to parse the arguments.
#!/usr/bin/env python
import argparse
import sys

def main(def_args=sys.argv[1:]):
    args = arguments(def_args)
    user_name = args.user_name
    # ... rest of the code

def arguments(argsval):
    parser = argparse.ArgumentParser()
    parser.add_argument('-un', '--user_name', type=str, required=False,
                        help="""Overrides user.name git config.
                        If not specified, the global config is used.""")
    return parser.parse_args(argsval)
Then pass your args in your pytest file.
How to pass arguments in pytest by command line
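For reference, the standard way to register such an option with pytest itself is a pytest_addoption hook plus a fixture in conftest.py; the --arg name and fixture below are just an illustration, not code from the question:
# conftest.py — a sketch of the standard pytest mechanism
import pytest

def pytest_addoption(parser):
    parser.addoption("--arg", action="store", default="foo",
                     help="custom value forwarded to non-test code")

@pytest.fixture
def arg(request):
    return request.config.getoption("--arg")

# test_something.py (hypothetical)
def test_func(arg):
    # hand the value to the non-test code explicitly instead of reading sys.argv there
    assert arg in ("foo", "bar")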

How do I read a YAML file and output its values as environment variables?

I have my environment variables stored in a YAML file. The YAML file is used by a third party service for deployment.
I was wondering if there is a way to source the YAML file I am using, so that I can get access to my database credentials to run a migration once the app has been deployed?
example YAML:
env_variables:
  DATABASE_CONNECTION_ADDRESS: 'localhost'
  DATABASE_PORT: '5432'
  DATABASE_NAME: 'a-db'
  DATABASE_USERNAME: 'user'
  DATABASE_PASSWORD: 'password'
  IS_DEBUG: 'false'
  GS_BUCKET_NAME: image-bucket
My main motivation is that this deployment runs in a pipeline, and I do not want to maintain each of these environment variables as its own secret in addition to storing this YAML file as a secret so that the third-party service has access to it.
If you have Python installed in your environment and can install ruamel.yaml in there you can source the output of the following one-liner:
python -c 'from pathlib import Path; from ruamel.yaml import YAML; print("".join([f"{k}={v}\n" for k, v in YAML().load(Path("example.yaml"))["env_variables"].items()]))'
Its output is:
DATABASE_CONNECTION_ADDRESS=localhost
DATABASE_PORT=5432
DATABASE_NAME=a-db
DATABASE_USERNAME=user
DATABASE_PASSWORD=password
IS_DEBUG=false
GS_BUCKET_NAME=image-bucket
As Jeff Schaller suggested, you probably want to quote the values and escape any single quotes that might occur in the strings. This can easily be achieved by changing {v} into {v!r} in the one-liner.
As a program:
#!/usr/bin/env python3
from pathlib import Path
from ruamel.yaml import YAML
file_in = Path("example.yaml")
yaml = YAML()
env_data = yaml.load(file_in)["env_variables"]
print("".join([f"{k}={v!r}\n" for k, v in env_data.items()]))

move argparse use into separate function

I have a question about the functionality of argparse. I use argparse for custom functions and it's great, but sometimes I'd like to move the use of argparse and supplemental code into a separate function and use it there to reduce boilerplate / visual noise.
This is a partial example of what I'd like to do:
function A
    set --local options ... # some definition.
    argparse_wrapper --name A $options -- $argv; or return 1
end
instead of
function A
    set --local options ... # some definition.
    argparse --name A $options -- $argv; or return 1

    # Code validating flags set by argparse in some way that argparse is unable to do,
    # i.e. validation that requires values from two flags (so f/flag!script would not
    # work).
    #
    # Or, changing flag names to names more appropriate inside the function.
    #
    # Other boilerplate related to options, but
    # unrelated to the purpose of the function.
end
But, I'm unable to set values inside of a function and transfer those values seamlessly to the caller. As in, argparse sets values in the outer scope (the function calling argparse), but I'm unable to do the same with a custom argparse wrapper of my own. At least, I'm unsure of how to do so if there is a clean way. In particular, argparse can set local variables in its outer scope, and I want to keep that functionality in the supposed argparse wrapper. Is that possible?
I'm the person who designed and implemented argparse. The approach I recommend is the one you'll find in the share/functions/fish_opt.fish module. Execute the argparse in the function that implements the command. Define a helper function with the --no-scope-shadowing flag to give it direct access to the vars in the parent function. Then call that function to validate the args (or do whatever is needed) after argparse returns.
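A minimal sketch of that layout, with made-up flag names and a made-up validation rule (the real example is in share/functions/fish_opt.fish):
function A
    set --local options 'f/from=' 't/to='
    argparse --name A $options -- $argv; or return 1
    # the helper runs without scope shadowing, so it sees and can modify A's _flag_* variables
    __A_validate_flags; or return 1
    # ... rest of A, using $_flag_from and $_flag_to
end

function __A_validate_flags --no-scope-shadowing
    if set -q _flag_from; and not set -q _flag_to
        echo 'A: --from requires --to' >&2
        return 1
    end
end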

Is there a tutorial on how to suppress Pylint warnings for Squish?

I am trying to suppress Pylint warnings from Squish, but without having the same code written in front of the code as is described here: https://kb.froglogic.com/display/KB/Example+-+Using+PyLint+with+Squish+test+scripts+that+use+source%28%29
I would like to know if there is a file that I can configure and upload into Squish.
The article describes the only option: defining the Squish functions and symbols yourself.
However, it shows what to do in a single Squish test script file only for the sake of simplicity.
You should of course put those Squish function definitions in a separate, re-usable file, and use import to "load" the definitions into your test.py file:
from squish_definitions import *

def main():
    ...
in squish_definitions.py:
# Trick Pylint and Python IDEs into accepting the
# definitions in this block, whereas upon execution
# none of these definitions will take place:
if -0:
    class ApplicationContext:
        pass

    def startApplication(aut_path_or_name, optional_squishserver_host, optional_squishserver_port):
        return ApplicationContext
    # etc.
Also, you should generally switch over to using Python's import in favor of Squish's source() function.

VSC: temporarily turn off YAML linting

Trying to find a way to turn off the red lines temporarily for that file only.
Maybe try to disable the yaml.schemaStore? Go into settings.json and add:
"yaml.schemaStore.enable": false
Since this is not valid YAML at all, but you want to edit it as YAML, you should make it into valid YAML. If you instead just turn off the errors, you would probably lose most of the advantages of the YAML editing mode.
If SaltStack allows you to change the block_start_string and variable_start_string jinja2 uses, you can change {% into #% (or ###% if #% naturally occurs in your source), and also change {{ into <{ (or <<{, you get the idea). If you were calling jinja2 directly, you would then pass block_start_string='#%' and variable_start_string='<{' to the Environment. If the above is possible, you only have to change your input file once; do that with an editor.
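If you do end up calling jinja2 directly with rewritten markers, those delimiters are keyword arguments of the Environment; a small sketch (the template location is made up, and the markers are the ones assumed above):
from jinja2 import Environment, FileSystemLoader

# assumes the templates have already been rewritten to use #% ... and <{ ... markers
env = Environment(
    loader=FileSystemLoader('.'),    # made-up template location
    block_start_string='#%',         # replaces {%
    variable_start_string='<{',      # replaces {{
)
env.get_template('init.sls')         # loads and parses with the custom delimiters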
If you cannot get SaltStack to do the sane thing, you're still not stuck, but you have to do a bit more work using Python, ruamel.yaml and some support packages (disclaimer: I am the author of those packages). Install with:
pip install ruamel.yaml[jinja2] ruamel.std.pathlib
Then, before editing, run this program:
from ruamel.yaml import YAML
from ruamel.std.pathlib import Path
yamlj2 = YAML(typ='jinja2')
yamlrt = YAML()
yaml_flow_style = YAML()
yaml_flow_style.default_flow_style = True
in_file = Path('init.sls')
backup_file = Path('init.sls.org')
in_file.copy(backup_file)
data = yamlj2.load(in_file)
with in_file.open('w') as fp:
    # write the header with info needed for reversing the change
    fp.write('# ruamel.yaml.jinja2: ')  # no EOL
    yaml_flow_style.dump(yamlj2._plug_in_jinja2, fp)
    yamlrt.dump(data, fp)
This changes the offending jinja2 sequences and adds a one-line header comment, with the actual patterns used, to the file. You should then be able to edit the init.sls file without getting all those errors.
Before calling SaltStack, run the following:
from ruamel.yaml import YAML
from ruamel.std.pathlib import Path
in_file = Path('init.sls')
yamlj2 = YAML(typ='jinja2')
yamlrt = YAML()
yamlnort = YAML(typ='safe')
with in_file.open() as fp:
    yamlj2._plug_in_jinja2 = yamlnort.load(fp.readline().split(':', 1)[1])
    data = yamlrt.load(fp)
yamlj2.dump(data, in_file)
If you have multiple of these files, you probably want to take the filename from sys.argv[1]. You might actually call the SaltStack program from this second Python program (i.e. decode and run).
