I am trying to create a TextFSM template to use with the Netmiko library. It works for most commands, but not when the command includes an "inc" (include) filter on the network device. The TextFSM index file does not seem to distinguish between two templates for the same base command; for instance:
If I give one command - show running | inc syscontact
and another command - show running | inc syslocation
in the TextFSM index, only the first command seems to be recognized, not the second.
I understand that I could extract the necessary data with regexes for syscontact and syslocation in the template itself, but I want to achieve this with the "inc" filter on the device. Is there a way this can be done?
You need to escape the pipe in the index file, e.g. sh[[ow]] ru[[nning]] \| inc syslocation
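The Command column of the index file is a regular expression, so an unescaped | is treated as alternation rather than a literal pipe, which is why only one of the two templates matches. A minimal index covering both commands might look like this (the template file names and platform value are illustrative placeholders):

Template, Hostname, Platform, Command
show_run_inc_syscontact.textfsm, .*, cisco_ios, sh[[ow]] ru[[nning]] \| inc syscontact
show_run_inc_syslocation.textfsm, .*, cisco_ios, sh[[ow]] ru[[nning]] \| inc syslocation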
There is a different way to parse out all the data you want, using the TTP (Template Text Parser) module. You can take the code I wrote below as an example and create your own templates.
from ttp import ttp

# raw "show system information" output captured from the device
with open("showSystemInformation.txt") as f:
    data_to_parse = f.read()
ttp_template = """
<group name="Show_System_Information">
System Name : {{System_Name}}
System Type : {{System_Type}} {{System_Type_2}}
System Version : {{Version}}
System Up Time : {{System_Uptime_Days}} days, {{System_Uptime_HR_MIN_SEC}} (hr:min:sec)
Last Saved Config : {{Last_Saved_Config}}
Time Last Saved : {{Last_Time_Saved_Date}} {{Last_Time_Saved_HR_MIN_SEC}}
Time Last Modified : {{Last_Time_Modified_Date}} {{Last_Time_Modifed_HR_MIN_SEC}}
</group>
"""
parser = ttp(data=data_to_parse, template=ttp_template)
parser.parse()
# print result in JSON format
results = parser.result(format='json')[0]
print(results)
Example run:
[appadmin@ryugbz01 Nokia]$ python3 showSystemInformation.py
[
    {
        "Show_System_Information": {
            "Last_Saved_Config": "cf3:\\config.cfg",
            "Last_Time_Modifed_HR_MIN_SEC": "11:46:57",
            "Last_Time_Modified_Date": "2022/02/09",
            "Last_Time_Saved_Date": "2022/02/07",
            "Last_Time_Saved_HR_MIN_SEC": "15:55:39",
            "System_Name": "SR7-2",
            "System_Type": "7750",
            "System_Type_2": "SR-7",
            "System_Uptime_Days": "17",
            "System_Uptime_HR_MIN_SEC": "05:24:44.72",
            "Version": "C-16.0.R9"
        }
    }
]
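If you want to work with the parsed values in Python rather than just printing the JSON string, you can load it back into native structures. A minimal sketch based on the output above:

import json

parsed = json.loads(results)  # results is the JSON string produced above
info = parsed[0]["Show_System_Information"]
print(info["System_Name"])  # SR7-2
print(info["Version"])      # C-16.0.R9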
I work on a Mac. I have been trying to make a multiple sequence alignment in Python using Muscle. This is the code I have been running:
from Bio.Align.Applications import MuscleCommandline
cline = MuscleCommandline(input="testunaligned.fasta", out="testunaligned.aln", clwstrict=True)
print(cline)
from Bio import AlignIO
align = AlignIO.read(open("testunaligned.aln"), "clustal")
print(align)
I keep getting the following error:
FileNotFoundError: [Errno 2] No such file or directory: 'testunaligned.aln'
Does anyone know how I could fix this? I am very new to Python and computer science in general, and I am totally at a loss. Thanks!
cline in your code is an instance of the MuscleCommandline class that you initialized with all the parameters. After initialization, this instance can run muscle, but it will only do so if you call it. That means you have to invoke cline().
When you simply print the cline object, it returns the string corresponding to the command you could run manually on the command line to get the same result as invoking cline().
And here is the working code:
from Bio.Align.Applications import MuscleCommandline

cline = MuscleCommandline(
    input="testunaligned.fasta",
    out="testunaligned.aln",
    clwstrict=True
)
print(cline)
cline()  # this is where muscle runs

from Bio import AlignIO

align = AlignIO.read(open("testunaligned.aln"), "clustal")
print(align)
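As a side note, the call also returns the child process's console output, which can help with debugging; Biopython's command line wrappers return a (stdout, stderr) tuple when invoked, so the cline() line above could equally be written as:

stdout, stderr = cline()  # run muscle and capture its output
if stderr:
    print(stderr)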
We have a large project that has multiple separate declarative pipeline file definitions. These are used to build different apps and installers from a single code base.
Right now, all of these files contain a large block of "code" used to generate the email body and JIRA update messages. Examples:
// Get the JIRAs to add comments to
// Return map of JIRA id to comment text from all commits for that JIRA
@NonCPS
def getJiraMap() {
    a bunch of stuff
    return jiraset
}

// Get the body text for the emails
def getMailBody1() {
    return "See: ${BUILD_URL}\n\nChanges:\n" + getChangeString() + "\n" + testStatuses()
}
etc...
What I would like to do is have all these common methods in a separate file that all the other pipeline files can include. This seems like it SHOULD be easy, but all examples I've found appear to be rather complex involving a separate SCM - which is NOT what I want.
Updates:
Going through the various suggestions given in that link, I made the following file - BuildTools.groovy. Note that this file is in the same directory as the Jenkins pipeline file that uses it.
import hudson.tasks.test.AbstractTestResultAction
import hudson.model.Actionable

class BuildTools {
    // Get the JIRAs to add comments to
    // Return map of JIRA id to comment text from all commits for that JIRA
    @NonCPS
    def getJiraMap() {
        def jiraset = [:]
        .. whole bunch of stuff ..
Here are the various things I've tried, and the results.
File sourceFile = new File("./AutomatedBuild/BuildTools.groovy");
Class gcl = new GroovyClassLoader(getClass().getClassLoader()).parseClass(sourceFile);
GroovyObject bt = (GroovyObject) gcl.newInstance();
Fails with:
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use method java.lang.Class getClassLoader
evaluate(new File("./AutomatedBuild/BuildTools.groovy"))
def bt = new BuildTools()
Fails with:
15:29:07 WorkflowScript: 8: unable to resolve class BuildTools
15:29:07 @ line 8, column 10.
15:29:07    def bt = new BuildTools()
15:29:07             ^
import BuildTools
def bt = new BuildTools()
Fails with:
15:35:58 WorkflowScript: 16: unable to resolve class BuildTools (note that BuildTools.groovy is in the same folder as this script)
15:35:58 @ line 16, column 1.
15:35:58    import BuildTools
15:35:58    ^
GroovyShell shell = new GroovyShell()
def bt = shell.parse(new File("./AutomatedBuild/BuildTools.groovy"))
Fails with:
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use new groovy.lang.GroovyShell
TL;DR: From a Sphinx extension, how do I tell sphinx-build to treat an additional file as a dependency? In my immediate use case, this is the extension's source code, but the question could equally apply to some auxiliary file used by the extension.
I'm generating documentation with Sphinx using a custom extension. I'm using sphinx-build to build the documentation. For example, I use this command to generate the HTML (this is the command in the makefile generated by sphinx-quickstart):
sphinx-build -b html -d _build/doctrees . _build/html
Since my custom extension is maintained together with the source of the documentation, I want sphinx-build to treat it as a dependency of the generated HTML (and LaTeX, etc.). So whenever I change my extension's source code, I want sphinx-build to regenerate the output.
How do I tell sphinx-build to treat an additional file as a dependency? That is not mentioned in the toctree, since it isn't part of the source. Logically, this should be something I do from my extension's setup function.
Sample extension (my_extension.py):
from docutils import nodes
from docutils.parsers.rst import Directive

class Foo(Directive):
    def run(self):
        node = nodes.paragraph(text='Hello world\n')
        return [node]

def setup(app):
    app.add_directive('foo', Foo)
Sample source (index.rst):
.. toctree::
   :maxdepth: 2

.. foo::
Sample conf.py (basically the output of sphinx-quickstart plus my extension):
import sys
import os
sys.path.insert(0, os.path.abspath('.'))
extensions = ['my_extension']
templates_path = ['_templates']
source_suffix = '.rst'
master_doc = 'index'
project = 'Hello directive'
copyright = '2019, Gilles'
author = 'Gilles'
version = '1'
release = '1'
language = None
exclude_patterns = ['_build']
pygments_style = 'sphinx'
todo_include_todos = False
html_theme = 'alabaster'
html_static_path = ['_static']
htmlhelp_basename = 'Hellodirectivedoc'
latex_elements = {
}
latex_documents = [
    (master_doc, 'Hellodirective.tex', 'Hello directive Documentation',
     'Gilles', 'manual'),
]
man_pages = [
    (master_doc, 'hellodirective', 'Hello directive Documentation',
     [author], 1)
]
texinfo_documents = [
    (master_doc, 'Hellodirective', 'Hello directive Documentation',
     author, 'Hellodirective', 'One line description of project.',
     'Miscellaneous'),
]
Validation of a solution:
Run make html (or sphinx-build as above).
Modify my_extension.py to replace Hello world by Hello again.
Run make html again.
The generated HTML (_build/html/index.html) must now contain Hello again instead of Hello world.
It looks like the note_dependency method in the build environment API should do what I want. But when should I call it? I tried various events but none seemed to hit the environment object in the right state. What did work was to call it from a directive.
from docutils import nodes
from docutils.parsers.rst import Directive

class Foo(Directive):
    def run(self):
        # mark this extension's source file as a dependency of the
        # document currently being parsed
        self.state.document.settings.env.note_dependency(__file__)
        node = nodes.paragraph(text='Hello done\n')
        return [node]

def setup(app):
    app.add_directive('foo', Foo)
If a document contains at least one foo directive, it'll get marked as stale when the extension that introduces this directive changes. This makes sense, although it could get tedious if an extension adds many directives or makes different changes. I don't know if there's a better way.
Inspired by Luc Van Oostenryck's autodoc-C.
As far as I know, app.env.note_dependency can be called during the doctree-read event to add any file as a dependency of the document currently being read.
So in your use case, I assume this would work:
import docutils.nodes as nodes
from sphinx.application import Sphinx

def doctree_read(app: Sphinx, doctree: nodes.document):
    # register this extension's source file as a dependency of the
    # document currently being read
    app.env.note_dependency(__file__)

def setup(app: Sphinx):
    app.connect("doctree-read", doctree_read)
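Note the trade-off between the two approaches: connecting note_dependency to doctree-read marks every document as depending on the extension's source file, whereas the directive-based approach above only marks the documents that actually contain a foo directive.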
1. Summary
I can't find how to automatically prettify my YAML files.
2. Data
Example:
I have SashaPrettifyYAML.yaml file:
sasha_commands:
  # Sasha comment
  sasha_command_help: {call: sublime.command_help, caption: 'Sasha Command: Command Help'}
3. Expected behavior
I want to delete {braces}:
sasha_commands:
  # Sasha comment
  sasha_command_help:
    call: sublime.command_help
    caption: 'Sasha Command: Command Help'
4. Not helped
Pretty YAML (based on PyYAML) and online formatters such as YAML Formatter and OnlineYAMLTools delete comments;
I can't find the required option in ruamel.yaml.cmd;
align-yaml aligns, but does not prettify, the YAML file.
There is no option to do this in ruamel.yaml.cmd, but it is fairly straightforward to do with a small Python program using ruamel.yaml, by loading and dumping in round-trip mode (the default).
The only thing you need to do is make sure the flow style on the data structure that is the value for the key sasha_command_help is set to block style (which is how I interpret your definition of "prettifying YAML"):
import sys
import ruamel.yaml

yaml_str = """\
sasha_commands:
  # Sasha comment
  sasha_command_help: {call: sublime.command_help, caption: 'Sasha Command: Command Help'}
"""

yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
data = yaml.load(yaml_str)
data['sasha_commands']['sasha_command_help'].fa.set_block_style()
yaml.dump(data, sys.stdout)
This will give exactly the output you expect.
A recursive data-structure walker can be found in scalarstring.py in the ruamel.yaml source; it can be adapted into a generic "make everything block style" routine:
import sys
import ruamel.yaml

def block_style(base):
    """
    Walk over a simple tree (consisting of dicts, lists and primitives)
    loaded from YAML, recurse into dict values and list items, and set
    block style on each collection.
    """
    if isinstance(base, dict):
        for k in base:
            try:
                base.fa.set_block_style()
            except AttributeError:
                pass
            block_style(base[k])
    elif isinstance(base, list):
        for elem in base:
            try:
                base.fa.set_block_style()
            except AttributeError:
                pass
            block_style(elem)

yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
file_in = sys.argv[1]
file_out = sys.argv[2]
with open(file_in) as fp:
    data = yaml.load(fp)
block_style(data)
with open(file_out, 'w') as fp:
    yaml.dump(data, fp)
If you store the above in prettifyyaml.py you can call it with:
python prettifyyaml.py SashaPrettifyYAML.yaml Prettified.yaml
Since you are already using single quotes around the scalar that has embedded spaces, you won't see a change if you leave out yaml.preserve_quotes = True. But if you had used a double-quoted scalar, that line makes sure the double quotes are preserved.
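A minimal illustration of that behaviour (assuming the same imports as above):

import sys
import ruamel.yaml

yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
data = yaml.load('key: "double quoted"\n')
yaml.dump(data, sys.stdout)  # prints: key: "double quoted"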
I had the same problem, so I wrote my own YAML beautifier: https://github.com/wangkuiyi/yamlfmt. I hope it helps.
I tried the top results from Google, but none of them address the requirements of https://sqlflow.org/sqlflow, which I am leading:
https://pypi.org/project/yamlfmt cannot handle a file of multiple YAML documents separated by ---
https://github.com/devopyio/yamlfmt cannot handle multiple files.
https://github.com/miekg/yamlfmt/blob/master/fmt.go cannot replace (inline edit) the input files.
You can use the yq tool - it's easy to install and use, and it's well maintained.
Supposing you have an example.yml file to format, it can be processed in the following ways:
from file: yq r --unwrapScalar -p pv -P example.yml '*'
from stdin: cat example.yml | yq r --unwrapScalar -p pv -P - '*'
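Note that these commands use the older yq v3 syntax (yq r). If you have yq v4 or later, the rough equivalent (an assumption; check yq --version and the docs for your version) is:

yq -P '.' example.yml       # print the prettified file to stdout
yq -i -P '.' example.yml    # rewrite the file in place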
With $AdminApp view <applicationName> -MapResRefToEJB it is possible to list the resource references defined for a deployed EJB module. However, the result of that command is plain text (which, in addition, may be localized). To extract the information one would have to parse this text, which is not very convenient. Is there a way to get the same information (i.e. the resource references of an application) in a structured form using $AdminConfig?
The AppManagement MBean provides this data in a structured format (a Vector of AppDeploymentTasks). To obtain this data using wsadmin scripting (Jython):
# sys and java.util need to be imported explicitly in a wsadmin Jython script
import sys
import java.util
import javax.management as mgmt

appName = sys.argv[0]
appMgmt = mgmt.ObjectName(AdminControl.completeObjectName("WebSphere:*,type=AppManagement"))
appInfo = AdminControl.invoke_jmx(
    appMgmt, "getApplicationInfo",
    [appName, java.util.Hashtable(), None],
    ["java.lang.String", "java.util.Hashtable", "java.lang.String"])

for task in appInfo:
    if task.getName() == "MapResRefToEJB":
        resRefs = task.getTaskData()
        # skip the first row since it contains the headers
        for i in range(1, len(resRefs)):
            resRef = resRefs[i]
            print
            print "URI:", resRef[4]
            print "EJB:", resRef[3]
            print "Name:", resRef[5]
            print "Type:", resRef[6]
            print "JNDI:", resRef[8]