VSC temporarily turn off YAML linting - yaml

Trying to find a way to turn off the red lines temporarily for that file only.

Maybe try disabling the yaml.schemaStore? Go into settings.json and add:
"yaml.schemaStore.enable": false
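If the red lines come from schema validation rather than the schema store alone, a broader switch may help; a hedged sketch of a settings.json fragment, assuming the Red Hat "YAML" extension is what produces the squiggles (both keys belong to that extension):

```json
{
    // Stop fetching schemas from the public schema store
    "yaml.schemaStore.enable": false,
    // Turn off YAML validation entirely (red squiggles disappear)
    "yaml.validate": false
}
```

Remember to revert these once you are done, since they apply to all YAML files in the workspace, not just the one file.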

Since this is not valid YAML at all, but you want to edit it as YAML,
you should make it into valid YAML. If you turn off the errors instead,
you would probably lose most of the advantages of the YAML
editing mode.
If saltstack allows you to change the block_start_string and
variable_start_string jinja2 uses, you can change {% into #% (or
###% if #% naturally occurs in your source), and also
change {{ into <{ (or <<{, you get the idea). If you were to call
jinja2 directly, you would then pass to the Environment:
block_start_string='#%' and variable_start_string='<{'. If the
above is possible, you only have to change your input file once;
do that with an editor.
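To illustrate, here is a minimal sketch of calling jinja2 directly with such changed delimiters (the template text and names below are made up; the point is that the rewritten file no longer contains {% or {{ and so stays parseable as YAML):

```python
from jinja2 import Environment

# Environment where '#%' ... '%#' delimit blocks and
# '<{' ... '}>' delimit variables, instead of the defaults.
env = Environment(
    block_start_string='#%',
    block_end_string='%#',
    variable_start_string='<{',
    variable_end_string='}>',
)
template = env.from_string('#% if enabled %#state: <{ name }>#% endif %#')
print(template.render(enabled=True, name='install_pkgs'))  # -> state: install_pkgs
```

The delimiter pairs are free to choose, as long as they cannot occur literally in your source files.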
If you cannot get saltstack to do the sane thing, you're still not
stuck, but you have to do a bit more work using Python,
ruamel.yaml and some
support packages (disclaimer: I am the author of those packages).
Install with:
pip install ruamel.yaml[jinja2] ruamel.std.pathlib
Then before editing run the program:
from ruamel.yaml import YAML
from ruamel.std.pathlib import Path

yamlj2 = YAML(typ='jinja2')
yamlrt = YAML()
yaml_flow_style = YAML()
yaml_flow_style.default_flow_style = True

in_file = Path('init.sls')
backup_file = Path('init.sls.org')
in_file.copy(backup_file)

data = yamlj2.load(in_file)
with in_file.open('w') as fp:
    # write the header with the info needed for the reverse transformation
    fp.write('# ruamel.yaml.jinja2: ')  # no EOL
    yaml_flow_style.dump(yamlj2._plug_in_jinja2, fp)
    yamlrt.dump(data, fp)
which changes the offending jinja2 sequences and adds a one-line header comment with the actual patterns used to the file. You should then be able
to edit the init.sls file without getting all those errors.
Before calling saltstate, do run the following:
from ruamel.yaml import YAML
from ruamel.std.pathlib import Path

in_file = Path('init.sls')
yamlj2 = YAML(typ='jinja2')
yamlrt = YAML()
yamlnort = YAML(typ='safe')

with in_file.open() as fp:
    yamlj2._plug_in_jinja2 = yamlnort.load(fp.readline().split(':', 1)[1])
    data = yamlrt.load(fp)
yamlj2.dump(data, in_file)
If you have multiple of these files, you probably want to take your
filename from sys.argv[1]. You might actually call the saltstack program from this second Python program (i.e. decode and run).

Related

How do I read a YAML file and output its values as environment variables?

I have my environment variables stored in a YAML file. The YAML file is used by a third party service for deployment.
I was wondering if there is a way to source the YAML file I am using, so that I can get access to my database credentials to run a migration once the app has been deployed?
example YAML:
env_variables:
  DATABASE_CONNECTION_ADDRESS: 'localhost'
  DATABASE_PORT: '5432'
  DATABASE_NAME: 'a-db'
  DATABASE_USERNAME: 'user'
  DATABASE_PASSWORD: 'password'
  IS_DEBUG: 'false'
  GS_BUCKET_NAME: image-bucket
My main motivation is that this deployment runs in a pipeline, and I do not want to duplicate each of these environment variables in its own secret on top of storing this YAML file as a secret so the third-party service has access to it.
If you have Python installed in your environment and can install ruamel.yaml in there you can source the output of the following one-liner:
python -c 'from pathlib import Path; from ruamel.yaml import YAML; print("".join([f"{k}={v}\n" for k, v in YAML().load(Path("example.yaml"))["env_variables"].items()]))'
Its output is:
DATABASE_CONNECTION_ADDRESS=localhost
DATABASE_PORT=5432
DATABASE_NAME=a-db
DATABASE_USERNAME=user
DATABASE_PASSWORD=password
IS_DEBUG=false
GS_BUCKET_NAME=image-bucket
As Jeff Schaller suggested, you probably want to quote the values and escape any single quotes that might occur in them. This can easily be achieved by changing {v} into {v!r} in the one-liner.
As program:
#!/usr/bin/env python3
from pathlib import Path
from ruamel.yaml import YAML
file_in = Path("example.yaml")
yaml = YAML()
env_data = yaml.load(file_in)["env_variables"]
print("".join([f"{k}={v!r}\n" for k, v in env_data.items()]))
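Note that Python's !r repr quoting is close to, but not identical to, POSIX shell quoting; a hedged alternative using the standard library's shlex.quote (the helper name to_env_lines is made up for this sketch):

```python
import shlex

def to_env_lines(env_vars):
    # Render a mapping as KEY=value lines with POSIX-shell-safe quoting;
    # shlex.quote only adds quotes when the value actually needs them.
    return "".join(f"{k}={shlex.quote(str(v))}\n" for k, v in env_vars.items())

print(to_env_lines({"DATABASE_PORT": "5432", "DATABASE_PASSWORD": "pass'word"}))
```

This keeps simple values unquoted and correctly escapes embedded single quotes, so the output is safe to source in a shell.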

keep source unformatted while YAML dump

My YAML file looks like below:
info_block:
  enable: null
  start: "12:00"
  server_type: linux
I have loaded and dumped it using ruamel.yaml.
But the output gets reformatted like below (null is replaced by empty, and the double quotes are removed from the start value):
info_block:
  enable:
  start: 12:00
  server_type: linux
How can I retain my source here?
I know there is a way to retain the null, but I want my complete source kept unformatted.
If you want to retain your source, your best bet is to keep track of whether something changes and
not overwrite the source when nothing changes, as in your example.
ruamel.yaml will always normalize output, and if that is not what you want,
your only hope is to do exact string substitutions on the file, potentially
using the line information on the loaded data. I recommend against doing that;
if your retaining is about minimizing diffs, you should just bite the bullet
once, as you would when adopting a source formatter.
However, if you have to work with YAML 1.1-only parsers, although that version was
superseded more than 10 years ago, I can see that 12:00 instead of "12:00" can
be a problem, as such strings are interpreted as sexagesimals.
In ruamel.yaml you can set the output to be YAML 1.1, and then 12:00
will be quoted, but you'll get a document directive stating that it conforms to
the outdated version.
The other thing you can do is preserve any quotes using the .preserve_quotes attribute:
import sys
import ruamel.yaml

yaml_str = """\
info_block:
  enable: null
  start: "12:00"
  server_type: linux
"""

def my_represent_none(self, data):
    return self.represent_scalar(u'tag:yaml.org,2002:null', u'null')

yaml = ruamel.yaml.YAML()
yaml.representer.add_representer(type(None), my_represent_none)
yaml.indent(mapping=2, sequence=2, offset=0)
yaml.preserve_quotes = True
data = yaml.load(yaml_str)
yaml.dump(data, sys.stdout)
which gives a completely retained version when combined with the alternative representer for the null node:
info_block:
  enable: null
  start: "12:00"
  server_type: linux

Is there a tutorial on how to suppress Pylint warnings for Squish?

I am trying to suppress Pylint warnings for Squish, but without having to write the same code in front of my code as described here: https://kb.froglogic.com/display/KB/Example+-+Using+PyLint+with+Squish+test+scripts+that+use+source%28%29
I would like to know if there is a file that I can configure and upload into Squish.
The article describes the only option, to define the Squish functions and symbols yourself.
However, for the sake of simplicity it shows what to do in a single Squish test script file only.
You should of course put those Squish function definitions in a separate, re-usable file, and use import to "load" the definitions into your test.py file:
from squish_definitions import *

def main():
    ...
in squish_definitions.py:
# Trick Pylint and Python IDEs into accepting the
# definitions in this block, whereas upon execution
# none of these definitions will take place:
if -0:
    class ApplicationContext:
        pass

    def startApplication(aut_path_or_name, optional_squishserver_host, optional_squishserver_port):
        return ApplicationContext
    # etc.
Also, you should generally switch over to using Python's import in favor of Squish's source() function.

How to validate Jinja syntax without variable interpolation

I have had no success in locating a good pre-commit hook I can use to validate that a Jinja2-formatted file is well-formed without attempting to substitute variables. The goal is something that returns an exit code of zero if the file is well-formed, regardless of whether the variables are available, and 1 otherwise.
You can do this within Jinja itself, you'd just need to write a script to read and parse the template.
Since you only care about well-formed templates, and not whether or not the variables are available, it should be fairly easy to do:
#!/usr/bin/env python
# filename: check_my_jinja.py
import sys
from jinja2 import Environment

env = Environment()
with open(sys.argv[1]) as template:
    env.parse(template.read())
or something that iterates over all templates
#!/usr/bin/env python
# filename: check_my_jinja_recursive.py
import sys
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader('./mytemplates'))
templates = [x for x in env.list_templates() if x.endswith('.jinja2')]
for template in templates:
    # parse() wants the template source text, not a Template object
    source = env.loader.get_source(env, template)[0]
    env.parse(source)
If you have incorrect syntax, you will get a TemplateSyntaxError
So your precommit hook might look like
python check_my_jinja.py template.jinja2
python check_my_jinja_recursive.py /dir/templates_folder
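If you use the pre-commit framework, the first script can be wired in as a local hook; a hedged sketch of a .pre-commit-config.yaml entry, assuming check_my_jinja.py is committed at the repository root (the id, name and file pattern are made up):

```yaml
repos:
  - repo: local
    hooks:
      - id: check-jinja
        name: validate jinja2 templates
        entry: python check_my_jinja.py
        language: system
        files: '\.jinja2$'
```

pre-commit passes each matching staged filename as an argument, and the hook fails whenever the script exits non-zero, i.e. whenever env.parse raises TemplateSyntaxError.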

what is the encoding of the subprocess module output in Python 2.7?

I'm trying to retrieve the content of a zipped archive with Python 2.7 on 64-bit Windows Vista. I tried making a system call to 7-Zip (my favourite archive manager) using the subprocess module:
# -*- coding: utf-8 -*-
import sys, os, subprocess
Extractor = r'C:\Program Files\7-Zip\7z.exe'
ArchiveName = r'C:\temp\bla.zip'
output = subprocess.Popen([Extractor,'l','-slt',ArchiveName],stdout=subprocess.PIPE).stdout.read()
This works fine as long as the archive contains only ASCII filenames, but when I try it with non-ASCII ones I get an encoded output string in which ä, ë, ö, ü have been replaced by \x84, \x89, \x94, \x81 (etcetera). I've tried all kinds of decode/encode calls, but I'm just too inexperienced with Python (and generally too stupid) to reproduce the original characters with umlauts (which is required if I want to follow up this step with e.g. an extraction subprocess call to 7z).
Simply put my question is: How do I get this to work also for archives with non-ascii content?
... or to put it in a more convoluted way: Is the output of subprocess always of a fixed encoding or not?
In the former case -> Which encoding is it?
In the latter case -> How can I control or uncover the encoding of the output of subprocess? Inspired by similar questions on this blog I've tried adding
import codecs
sys.stdout = codecs.getwriter('utf8')(sys.stdout)
and I've also tried
my_env = os.environ
my_env['PYTHONIOENCODING'] = 'utf-8'
output = subprocess.Popen([Extractor,'l','-slt',ArchiveName],stdout=subprocess.PIPE,env=my_env).stdout.read()
but neither seems to alter the encoding of the output variable (or to reproduce the umlaut).
You can try using 7-Zip's -sccUTF-8 switch to force its console output to UTF-8.
Here is the reference page: http://en.helpdoc-online.com/7-zip_9.20/source/cmdline/switches/scc.htm
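If you cannot force UTF-8 output, note that the escapes in the question (\x84, \x89, \x94, \x81) are the DOS OEM code page that the Windows console uses (CP437, or CP850 in many European locales), so decoding the captured bytes with that codec should recover the umlauts; a small sketch with made-up bytes standing in for the subprocess output:

```python
# \x84, \x89, \x94, \x81 are ä, ë, ö, ü in the DOS OEM code pages
raw = b'bl\x84h_\x89_\x94_\x81.txt'
print(raw.decode('cp437'))  # -> bläh_ë_ö_ü.txt
```

Whether CP437 or CP850 is correct depends on your Windows locale; both map these four bytes to the same umlauts.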
