I am trying to determine how to test against the node attributes that Junos occasionally uses. In this particular case, I want to find all BGP sessions that have been down for between 20w and 1y. The seconds value is contained in the node attribute, but I have not been able to figure out how to access it for the test.
I have tried various approaches, from the entire explicit XPath all the way down to what I have in the code below.
Here is the XML I am trying to access (edited for brevity):
<rpc-reply xmlns:junos="http://xml.juniper.net/junos/18.2R3/junos">
<bgp-information xmlns="http://xml.juniper.net/junos/18.2R3/junos-routing">
<bgp-peer junos:style="terse" heading="Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...">
<elapsed-time junos:seconds="263788">3d 1:16:28</elapsed-time>
</bgp-peer>
</bgp-information>
</rpc-reply>
test_bgp_summ:
- rpc: get-bgp-summary-information
- iterate:
xpath: /bgp-information/bgp-peer
id: ./peer-address
tests:
- in-range: //#junos:seconds, 12096000, 31449600
err: ""
info: 'Peer session <{{id_0}}> is likely stale'
You need to include the elapsed-time node in your test:
show_bgp_sum:
- command: show bgp summary
- iterate:
xpath: '//bgp-information/bgp-peer'
id: ./peer-address
tests:
- exists: elapsed-time/#seconds
err: "elpased-time doesn't exist"
info: "Elapsed-time is: <{{post['elapsed-time/#seconds']}}>"
and the output:
"passed": [
{
"actual_node_value": "1309804",
"id": {
"./peer-address": "10.10.12.100"
},
"message": "Elapsed-time is: <1309804>",
"post": {
"elapsed-time/#seconds": "1309804"
},
"pre": {
"elapsed-time/#seconds": "1309802"
}
},
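With elapsed-time/#seconds reachable this way, the original goal (flagging peers whose timer falls between 20 weeks and 1 year, i.e. 12096000-31449600 seconds) can be written as an in-range test. A minimal sketch, assuming JSNAPy's in-range check accepts the same element/#attribute path that exists does (the test name is illustrative):
check_bgp_stale:
- command: show bgp summary
- iterate:
    xpath: '//bgp-information/bgp-peer'
    id: ./peer-address
    tests:
      - in-range: elapsed-time/#seconds, 12096000, 31449600
        err: "Peer session <{{id_0}}> timer is outside the 20w-1y window"
        info: "Peer session <{{id_0}}> is likely stale ({{post['elapsed-time/#seconds']}} seconds)"
Note that in-range passes (and prints info) for peers inside the window; if you would rather have stale peers reported as failures, the not-range check is the mirror-image option.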
I'm using Google Cloud Workflows to call, via http.get, a Cloud Run app that returns an XML document converted to JSON. The JSON below is successfully returned to the workflow in step 2, with the converted XML in the body.
{
"body": {
"ResponseMessage": {
"#xmlns": "http://someurl.com/services",
"Response": {
"#xmlns:a": "http://someurl.com/data",
"#xmlns:i": "http://www.w3.org/2001/XMLSchema-instance",
"a:ReferenceNumber": {
"#i:nil": "true"
},
"a:DateTime": "2023-01-01T00:17:38+0000",
"a:TransactionId": "154200432",
"a:Environment": "Development",
"a:RequestDateTime": "2023-01-01T11:17:39",
}
},
"code": 200,
"headers": {
"Alt-Svc": "h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"",
"Content-Length": "1601",
"Content-Type": "application/json",
"Date": "Sun, 01 Jan 2023 00:17:39 GMT",
"Server": "Google Frontend",
"X-Cloud-Trace-Context": "931754ab82102397eb07775171485850"
}
}
}
The full YAML of the workflow is below; without step3/step4 it works. In step3 I try to access an element in the JSON returned from step2, as per https://cloud.google.com/workflows/docs/http-requests#access-data
main:
params: [args]
steps:
- step1:
assign:
- searchURL: ${"https://myfunction.a.run.app/search/" + args.type + "/" + args.serial}
- step2:
call: http.get
args:
url: ${searchURL}
result: s2result
- step3:
assign:
- resubmitURL: '${https://myfunction.a.run.app/resubmit/" + ${s2result.body.ResponseMessage.Response.a:TransactionId} }'
- step4:
call: http.get
args:
url: ${resubmitURL}
result: s4result
- returnOutput:
return: ${s4result}
However, due to the colon (:) in the field I'm trying to access, there are YAML parsing errors when I attempt to save another variable assignment. How can I access HTTP response data saved in a variable when there are colons in the property name?
The errors in the console are similar to:
Could not deploy workflow: main.yaml:14:25: parse error: in workflow 'main', step 'step3': token recognition error at: ':'
- resubmitURL: '${"https://myfunction.a.run.app/resubmit/" + ${s2result.body.ResponseMessage.Response.a:TransactionId}'
^
main.yaml:14:25: parse error: in workflow 'main', step 'step3': mismatched input '+' expecting {'[', LOGICAL_NOT, '(', '-', TRUE, FALSE, NULL, NUM_FLOAT, NUM_INT, STRING, IDENTIFIER}
- resubmitURL: '${"https://myfunction.a.run.app/resubmit/" + ${s2result.body.ResponseMessage.Response.a:TransactionId}'
Two techniques are required to reference map keys with special characters like this:
As recommended in the documentation, all expressions should be wrapped in single quotes to avoid YAML parsing errors (i.e. '${...}').
When referencing keys with special characters, you can use array notation to wrap the key name in quotes (i.e. var["KEY"]).
Together, it looks like this:
main:
steps:
- init:
assign:
- var:
key:
"co:lon": bar
- returnOutput:
return: '${"foo" + var.key["co:lon"]}'
In your code you are nesting an expression inside an expression (note the second ${ inside the outer one):
- resubmitURL: '${"https://myfunction.a.run.app/resubmit/" + ${s2result.body.ResponseMessage.Response.a:TransactionId}'
In this sample from your error message you're not even closing the expressions properly.
If you pack everything into one expression and use the hint from Kris with the key, it should deploy:
- resubmitURL: '${"https://myfunction.a.run.app/resubmit/" + s2result.body.ResponseMessage.Response["a:TransactionId"]}'
Here is my full test case:
main:
params: [args]
steps:
- init_assign:
assign:
- input: ${args}
- s2result:
body:
ResponseMessage:
Response:
"a:TransactionId": "Test"
- resubmitURL: '${"https://myfunction.a.run.app/resubmit/" + s2result.body.ResponseMessage.Response["a:TransactionId"]}'
- log1:
call: sys.log
args:
text: ${resubmitURL}
severity: INFO
With the log: 'https://myfunction.a.run.app/resubmit/Test'
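Applied back to the workflow in the question, step3 (with step4 unchanged) would then look something like this:
- step3:
    assign:
      - resubmitURL: '${"https://myfunction.a.run.app/resubmit/" + s2result.body.ResponseMessage.Response["a:TransactionId"]}'
- step4:
    call: http.get
    args:
      url: ${resubmitURL}
    result: s4result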
I have this non-trivial JSON retrieved from a Spark server (in the example shown I have reduced it by removing many lines):
{
"spark.worker.cleanup.enabled": true,
"spark.worker.ui.retainedDrivers": 50,
"spark.worker.cleanup.appDataTtl": 7200,
"fusion.spark.worker.webui.port": 8082,
"fusion.spark.worker.memory": "4g",
"fusion.spark.worker.port": 8769,
"spark.worker.timeout": 30
}
I try to read fusion.spark.worker.memory but fail to do so. In my debug statements I can see that the information is there:
msg: "Spark memory: {{spark_worker_cfg.json}} shows this:
ok: [process1] => {
"msg": "Spark memory: {u'spark.worker.ui.retainedDrivers': 50, u'spark.worker.cleanup.enabled': True, u'fusion.spark.worker.port': 8769, u'spark.worker.cleanup.appDataTtl': 7200, u'spark.worker.timeout': 30, u'fusion.spark.worker.memory': u'4g', u'fusion.spark.worker.webui.port': 8082}"
}
The dump using var: spark_worker_cfg shows this:
ok: [process1] => {
"spark_worker_cfg": {
"changed": false,
"connection": "close",
"content_length": "279",
"content_type": "application/json",
"cookies": {},
"cookies_string": "",
"failed": false,
"fusion_request_id": "Pj2zeWThLw",
"json": {
"fusion.spark.worker.memory": "4g",
"fusion.spark.worker.port": 8769,
"fusion.spark.worker.webui.port": 8082,
"spark.worker.cleanup.appDataTtl": 7200,
"spark.worker.cleanup.enabled": true,
"spark.worker.timeout": 30,
"spark.worker.ui.retainedDrivers": 50
},
"msg": "OK (279 bytes)",
"redirected": false,
"server": "Jetty(9.4.12.v20180830)",
"status": 200,
"url": "http://localhost:8765/api/v1/configurations?prefix=spark.worker"
}
}
I can't access the value using {{spark_worker_cfg.json.fusion.spark.worker.memory}}, my problem seems to be caused by the names containing dots:
The task includes an option with an undefined variable. The error was:
'dict object' has no attribute 'fusion'
I have had a look at two SO posts (1 and 2) that look like duplicates of my question but could not derive from them how to solve my current issue.
The keys in the 'json' element of the data structure contain literal dots rather than representing nesting. This causes issues, because Ansible will not know to treat them as literal if dotted notation is used. Therefore, use square-bracket notation to reference them, rather than dotted notation:
- debug:
msg: "{{ spark_worker_cfg['json']['fusion.spark.worker.memory'] }}"
(At first glance this looked like an issue with a JSON-encoded string that needed decoding, which could have been handled with: "{{ spark_worker_cfg.json | from_json }}")
You could use the json_query filter to get your results. https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html
msg="{{ spark_worker_cfg.json | json_query('fusion.spark.worker.memory') }}
edit:
In response to your comment: the fact that we get an empty string returned leads me to believe that the query isn't correct. It can be frustrating to find the exact query while using the json_query filter, so I usually use a jsonpath tool beforehand. I've linked one in my comment below, but I personally use the jsonUtils addon in IntelliJ to find my path (which still needs adjustment, because the paths are handled a bit differently between the two).
If your json looked like this:
{
value: "theValue"
}
then
json_query('value')
would work.
The path you're passing to json_query isn't correct for what you're trying to do.
If your top level object was named fusion_spark_worker_memory (without the periods), then your query should work. The dots are throwing things off, I believe. There may be a way to escape those in the query...
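For what it's worth, JMESPath does allow escaping such keys: a key containing dots can be written as a quoted identifier inside the query. A sketch of how that could look here (the inner double quotes are escaped because the whole Jinja expression is itself double-quoted):
- debug:
    msg: "{{ spark_worker_cfg.json | json_query('\"fusion.spark.worker.memory\"') }}"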
edit 2: clockworknet for the win! He beat me to it both times. :bow:
In my Ansible code I want to know the status of a service, e.g. service httpd status (whether the service is running or not), and store the result in a variable. Based on that status I will run other code in Ansible.
I am using the Ansible service module, but there is no option for status. If I use the shell module I get this warning:
[WARNING]: Consider using service module rather than running service
So is there any other module for getting a service's status?
No, there is no standard module to get services' statuses.
But you can suppress the warning for a specific command task if you know what you are doing:
- command: service httpd status
args:
warn: false
I posted a quick note about this trick a while ago.
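Since the goal is to store the status in a variable and branch on it, the command result can be registered and inspected; a minimal sketch (failed_when: false keeps a stopped service from failing the play, and rc is the exit code of the status command):
- command: service httpd status
  args:
    warn: false
  register: httpd_status
  failed_when: false

- debug:
    msg: "httpd is running"
  when: httpd_status.rc == 0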
You can use the service_facts module.
For example, say I want to see the status of Apache.
- name: Check for apache status
service_facts:
- debug:
var: ansible_facts.services.apache2.state
The output is:
ok: [192.168.blah.blah] => {
"ansible_facts.services.apache2.state": "running"
}
If you would like to see all of them, you can do that by just going two levels up in the array:
var: ansible_facts.services
The output will list all the services, and will look like this (truncated for the sake of brevity):
"apache2": {
"name": "apache2",
"source": "sysv",
"state": "running"
},
"apache2.service": {
"name": "apache2.service",
"source": "systemd",
"state": "running"
},
"apparmor": {
"name": "apparmor",
"source": "sysv",
"state": "running"
},
etc,
etc
I am using Ansible 2.7; see the service_facts module documentation for details.
Here is an example of starting a service and then checking its status using service_facts. In my example you have to register the variable, then output it with debug var, pointing at the correct key path in the resulting JSON:
## perform start service for alertmanager
- name: Start service alertmanager if not started
become: yes
service:
name: alertmanager
state: started
## check to see the state of the alertmanager service status
- name: Check status of alertmanager service
service_facts:
register: service_state
- debug:
var: service_state.ansible_facts.services["alertmanager.service"].state
Hopefully "service: allow user to query service status" (#3316) will be merged into the core module soon.
You can patch it by hand using this diff to system/service.py
Here's my diff using ansible 2.2.0.0. I've run this on my mac/homebrew install and it works for me.
This is the file that I edited: /usr/local/Cellar/ansible/2.2.0.0_2/libexec/lib/python2.7/site-packages/ansible/modules/core/system/service.py
@@ -36,11 +36,12 @@
         - Name of the service.
     state:
         required: false
-        choices: [ started, stopped, restarted, reloaded ]
+        choices: [ started, stopped, status, restarted, reloaded ]
         description:
         - C(started)/C(stopped) are idempotent actions that will not run
-          commands unless necessary. C(restarted) will always bounce the
-          service. C(reloaded) will always reload. B(At least one of state
+          commands unless necessary. C(status) would report the status of
+          the service C(restarted) will always bounce the service.
+          C(reloaded) will always reload. B(At least one of state
           and enabled are required.)
     sleep:
         required: false
@@ -1455,7 +1456,7 @@
     module = AnsibleModule(
         argument_spec = dict(
             name = dict(required=True),
-            state = dict(choices=['running', 'started', 'stopped', 'restarted', 'reloaded']),
+            state = dict(choices=['running', 'started', 'stopped', 'status', 'restarted', 'reloaded']),
             sleep = dict(required=False, type='int', default=None),
             pattern = dict(required=False, default=None),
             enabled = dict(type='bool'),
@@ -1501,6 +1502,9 @@
     else:
         service.get_service_status()
+    if module.params['state'] == 'status':
+        module.exit_json(state=service.running)
+
     # Calculate if request will change service state
     service.check_service_changed()
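With that patch applied, a task could then query and register the status; a sketch, assuming the patched module returns the running flag under the state key as in the exit_json call above:
- service:
    name: httpd
    state: status
  register: httpd_state

- debug:
    var: httpd_state.state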
I am using a combination of API Blueprint and Dredd to test an API my application depends on. I am using attributes in API Blueprint to define the structure of the response body.
Apparently I'm missing something, though, because the tests always pass even though I've purposefully defined a fake "required" parameter that I know is missing from the API's response. It seems that Dredd is only testing the type of the response body (an array), rather than the type and the parameters within it.
My API Blueprint file:
FORMAT: 1A
HOST: http://somehost.net
# API Title
## Endpoints [GET /endpoint/{date}]
+ Parameters
+ date: `2016-09-01` (string, required) - Date
+ Response 200 (application/json; charset=utf-8)
+ Attributes (array[Data])
## Data Structures
### Data
- realParameter: 2432432 (number)
- realParameter2: `some string` (string, required)
- realParameter3: `Something else` (string, required)
- realParameter4: 1 (number, required)
- fakeParam: 1 (number, required)
The response body:
[
{
"realParameter": 31,
"realParameter2": "some value",
"realParameter3": "another value",
"realParameter4": 8908
},
{
"realParameter": 54,
"realParameter2": "something here",
"realParameter3": "and here too",
"realParameter4": 6589
}
]
And my Dredd config file:
reporter: apiary
custom:
apiaryApiKey: somekey
apiaryApiName: somename
dry-run: null
hookfiles: null
language: nodejs
sandbox: false
server: null
server-wait: 3
init: false
names: false
only: []
output: []
header: []
sorted: false
user: null
inline-errors: false
details: false
method: []
color: true
level: info
timestamp: false
silent: false
path: []
blueprint: myApiBlueprintFile.apib
endpoint: 'http://ahost.com'
Does anyone have any idea why Dredd ignores the fact that "fakeParam" doesn't actually show up in the response body and still allows the test to pass?
You've run into a limitation of MSON, the language API Blueprint uses for describing attributes. In many cases, MSON describes what MAY be present in the data structure rather than what MUST exactly be present.
The most prominent case is arrays, where basically any content of the array is optional, so the underlying generated JSON Schema doesn't put any constraints on the array contents. Dredd just respects that, so indirectly it becomes a Dredd issue too; however, there's not much Dredd can do about it.
There's an issue for the problem: apiaryio/mson#66. You can follow and comment under the issue to get updates on this. Dredd is usually very prompt in picking up the latest API Blueprint parser, so once it's implemented in the language itself, it won't take long to appear in Dredd.
An obvious (but tedious) workaround is to specify your own JSON Schema with stricter rules, using the + Schema section alongside the + Attributes section.
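For the blueprint in the question, that could look roughly like the following: a + Schema section next to + Attributes that marks the keys (including fakeParam) as required, so Dredd's JSON Schema validation fails when one of them is missing. Treat this as a sketch; the schema body must be indented according to the usual API Blueprint asset rules:
+ Response 200 (application/json; charset=utf-8)
    + Attributes (array[Data])
    + Schema

            {
                "type": "array",
                "items": {
                    "type": "object",
                    "required": ["realParameter2", "realParameter3", "realParameter4", "fakeParam"],
                    "properties": {
                        "fakeParam": {"type": "number"}
                    }
                }
            }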
I'm using Dredd to test an API I have written. It works fine until I try to vary the action URI within a resource. When I have an action of the form
## Retrieve Task [GET /task/{id}]
Dredd sends the request to Drakov with the ] appended. The Drakov server is serving the blueprint document.
Drakov 0.1.16 Listening on port 8090
[LOG] GET /task/myid]
[LOG] DELETE /task/myid]
[LOG] GET /task/myid]
You can see this request has an extra ] on the end.
This is my blueprint. It is a subset of the example from the API Blueprint examples:
FORMAT: 1A
# Advanced Action API
A resource action is – in fact – a state transition. This API example demonstrates an action - state transition - to another resource.
## API Blueprint
# Tasks [/tasks/tasks{?status,priority}]
+ Parameters
+ status `test` (string)
+ priority `1` (number)
## Retrieve Task [GET /task/{id}]
This is a state transition to another resource
+ Parameters
+ id: `myid` (string)
+ Response 200 (application/json)
{
"id": 123,
"name": "Go to gym",
"done": false,
"type": "task"
}
What am I doing wrong?
Your API Blueprint has multiple errors. For instance,
+ Parameters
+ status `test` (string)
+ priority `1` (number)
...should be:
+ Parameters
+ status: `test` (string)
+ priority: `1` (number)
Also, you are defining a resource Tasks with URI Template /tasks/tasks{?status,priority} and then you are trying to define a GET action for the resource with a different URI Template. That is confusing.
I tried to create a sample API Blueprint (saved as sample-blueprint.md) like this:
FORMAT: 1A
# My API
## Task [/task/{id}]
### Retrieve Task [GET]
+ Parameters
+ id: `123` (string)
+ Response 200 (application/json)
{
"id": 123,
"name": "Go to gym",
"done": false,
"type": "task"
}
Then I launched a Drakov server in one terminal like this:
drakov -f *.md
Then I tried to run Dredd:
dredd sample-blueprint.md http://localhost:3000
Everything passed correctly:
$ dredd sample-blueprint.md http://localhost:3000
info: Beginning Dredd testing...
pass: GET /task/123 duration: 42ms
complete: 1 passing, 0 failing, 0 errors, 0 skipped, 1 total
complete: Tests took 50ms
Is this what you originally wanted to achieve?