Advanced Loki searches from inside json map/list - grafana-loki

I have a WAF log like
{
"terminatingRuleId": "Default_Action",
"action": "ALLOW",
"nonTerminatingMatchingRules": [{
"ruleId": "AWS-AWSManagedRulesSQLiRuleSet",
"action": "COUNT",
"ruleMatchDetails": [{
"conditionType": "SQL_INJECTION",
"location": "BODY",
"matchedData": ["{", "limit", ":100}"]
}]
}],
"requestHeadersInserted": null,
"responseCodeSent": null,
"httpRequest": {
"uri": "/v0.1/updates",
"args": "",
"httpVersion": "HTTP/1.1",
"httpMethod": "POST",
}
}
Now httpRequest_uri and httpRequest_httpMethod are set as labels, but we don't set nonTerminatingMatchingRules as a label. I'm looking for a way to show a log line like
POST - /v0.1/updates
-- ruleId | COUNT | contents of ruleMatchDetails
I've tried things like
{s3="aws-waf-logs", action="ALLOW"}
| json match="nonTerminatingMatchingRules"
| line_format "{{ .httpRequest_uri }}"
Because I set match to the embedded JSON, it seems I can't reference httpRequest_uri anymore.
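One approach that seems to work is to keep everything in a single json stage, since once you pass expressions to json, only those labels are extracted. A sketch, assuming a Loki version whose json expressions support array indexing, and with made-up label names (uri, method, rule, ruleAction, details):
{s3="aws-waf-logs", action="ALLOW"}
| json uri="httpRequest.uri", method="httpRequest.httpMethod", rule="nonTerminatingMatchingRules[0].ruleId", ruleAction="nonTerminatingMatchingRules[0].action", details="nonTerminatingMatchingRules[0].ruleMatchDetails"
| line_format "{{ .method }} - {{ .uri }}\n-- {{ .rule }} | {{ .ruleAction }} | {{ .details }}"
Since details resolves to an object, Loki should render it as its raw JSON; only the first element of nonTerminatingMatchingRules is shown here, as iterating over the whole array inside line_format is a separate problem.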

Related

How do I get an id from JSON with jq?

I have the following JSON. How can I get the id of the object whose attributes contain the value 0fda6bb8-4fc9-4463-9d26-af2d503cb19c?
[
{
"id": "c3b1516d-5b2c-4838-b5eb-77d94d634832",
"versionId": "c3b1516d-5b2c-4838-b5eb-77d94d634832",
"name": "выписка маленькая заявка с лендинга ИБ",
"entityTypeName": "TestCases",
"projectId": "6dfe2ace-dd40-4e36-b66e-4a655a855a2f",
"sectionId": "bf7fbece-4fdf-466a-b041-2d830debc844",
"isAutomated": false,
"globalId": 264511,
"duration": 300,
"attributes": {
"1be40893-5dad-4b37-b70d-b830c4bd273f": "0fda6bb8-4fc9-4463-9d26-af2d503cb19c",
"f4b408ae-5418-4a8d-99d9-4a67cb34870b": "fa000fb2-375d-4eb5-901c-fb5df30785ad"
},
"createdById": "995b1f08-cc65-409c-aa1c-a16c82dabf1d",
"modifiedById": "995b1f08-cc65-409c-aa1c-a16c82dabf1d",
"createdDate": "2022-10-12T00:22:43.544Z",
"modifiedDate": "2022-10-12T00:22:43.544Z",
"state": "NeedsWork",
"priority": "Medium",
"isDeleted": false,
"tagNames": [
"master"
],
"iterations": []
},
{
"id": "ec423701-f2a8-4667-8459-939a6e079941",
"versionId": "0dfe176e-b172-47ae-8049-e6974086d497",
"name": "[iOS] СБПэй фичатоглы. Fts.SBPay.Settings выключен Fts.C2B.Settings.Subscriptions включен",
"entityTypeName": "TestCases",
"projectId": "6dfe2ace-dd40-4e36-b66e-4a655a855a2f",
"sectionId": "8626c9f5-a5aa-4584-bbca-e9cd60369a5e",
"isAutomated": false,
"globalId": 402437,
"duration": 300,
"attributes": {
"1be40893-5dad-4b37-b70d-b830c4bd273f": "b52bfc88-9b13-41e1-8b4c-098ebfa673e0",
"240b7589-9461-44dc-8b13-361132877c50": "cfd99bad-fb3f-43fe-be8a-cb745f2d4c78",
"6639eb1a-1335-44ec-ba8b-c3c52bff9e79": "ed3bc553-e873-472f-8dc1-7f2720ad457d",
"9ae36ef5-ca0e-4273-bb39-aedf289a119d": "6687017f-138b-4d75-91bd-c6465f1f5331",
"b862c3ee-55eb-486f-8125-a7a034d69340": "IBANK5-37207",
"f4b408ae-5418-4a8d-99d9-4a67cb34870b": "36dc55ac-359c-4312-9b1a-646ad5fd5aa9"
},
"createdById": "11a30c8b-73e2-4233-bbf5-7cc41556d3e0",
"modifiedById": "11a30c8b-73e2-4233-bbf5-7cc41556d3e0",
"createdDate": "2022-11-01T12:05:56.821Z",
"modifiedDate": "2022-11-02T14:16:55.246Z",
"state": "Ready",
"priority": "Medium",
"isDeleted": false,
"tagNames": [],
"iterations": []
}
]
I tried using
cat new2.xml | jq '.' | jq '.[] | select(."1be40893-5dad-4b37-b70d-b830c4bd273f" | index("0fda6bb8-4fc9-4463-9d26-af2d503cb19c")) | .[] .id'
but the search returns nothing
You could select on .attributes[] and display the id field only:
jq '.[] | select(.attributes[] == "0fda6bb8-4fc9-4463-9d26-af2d503cb19c").id'
Output:
"c3b1516d-5b2c-4838-b5eb-77d94d634832"
With the input given, you'd get the same result with the more specific:
jq '.[] | select(.attributes["1be40893-5dad-4b37-b70d-b830c4bd273f"] == "0fda6bb8-4fc9-4463-9d26-af2d503cb19c").id'
(because there's only one attribute key with the value "0fda6bb8-4fc9-4463-9d26-af2d503cb19c" in your example)
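If more than one attribute could carry that value, a variant with any (not part of the answer above, but standard jq) emits each matching id only once:
jq '.[] | select(any(.attributes[]; . == "0fda6bb8-4fc9-4463-9d26-af2d503cb19c")).id'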

MSAL4J - Token not found in the cache

When attempting to use the method PublicClientApplication.acquireTokenSilently() I am getting the error "Token not found in cache". It looks like the token is failing to be stored. Our AuthenticationResult looks like this:
{
"accessToken": "...",
"expiresOn": 1671035322,
"extExpiresOn": 0,
"refreshOn": 0,
"idToken": "...",
"idTokenObject": {},
"accountCacheEntity": {
"homeAccountId": "...",
"environment": "URL",
"name": "username",
"authorityType": "ADFS"
},
"account": {
"value": {
"homeAccountId": "...",
"environment": "URL"
}
},
"tenantProfile": {},
"environment": "env",
"expiresOnDate": {
"value": "Dec 14, 2022 11:28:42 AM"
},
"scopes": "openid"
}
I see that the account is missing the "name" field and I am wondering if that is part of the problem. I'm not quite sure how to work around this issue.
I've looked at the source and it appears that result.account().username() is returning null. I'm not sure if there is a way to use the value in accountCacheEntity.
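For reference, a minimal sketch of the usual silent flow with MSAL4J; the client id, authority, and single cached account below are placeholders I'm assuming, not values from the question. The point is that acquireTokenSilently() has to be handed an IAccount that was read back out of the same token cache:

import com.microsoft.aad.msal4j.*;
import java.util.Collections;
import java.util.Set;

public class SilentTokenSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder client id and ADFS authority (assumptions, not from the question)
        PublicClientApplication pca = PublicClientApplication
                .builder("your-client-id")
                .authority("https://adfs.example.com/adfs")
                .build();

        // The account passed to the silent call should come back out of the
        // application's token cache; an account that was never cached (or whose
        // username is null) may not match any cache entry, which is one way to
        // end up with "Token not found in cache".
        Set<IAccount> accounts = pca.getAccounts().join();
        IAccount account = accounts.iterator().next(); // assumes exactly one cached account

        SilentParameters parameters = SilentParameters
                .builder(Collections.singleton("openid"), account)
                .build();

        IAuthenticationResult result = pca.acquireTokenSilently(parameters).join();
        System.out.println(result.accessToken());
    }
}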

Getting the values of keys of Ansible JSON output

I have the following JSON data
{
"docker_compose_init_result": {
"changed": true,
"failed": false,
"services": {
"grafana": {
"docker-compose_grafana_1": {
"cmd": [],
"image": "grafana/grafana:8.5.14",
"labels": {
"com.docker.compose.config-hash": "4d0b5dd6e697a8fe5bf5074192770285e54da43ad32cc34ba9c56505cb709431",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "docker-compose",
"com.docker.compose.project.config_files": "/appl/docker-compose/docker-compose-init.yml",
"com.docker.compose.project.working_dir": "/appl/docker-compose",
"com.docker.compose.service": "grafana",
"com.docker.compose.version": "1.29.2"
},
"networks": {
"docker-compose_homeserver-net": {
"IPAddress": "172.20.0.2",
"IPPrefixLen": 16,
"aliases": [
"3d19f54271b2",
"grafana"
],
"globalIPv6": "",
"globalIPv6PrefixLen": 0,
"links": null,
"macAddress": "02:42:ac:14:00:02"
}
},
"state": {
"running": true,
"status": "running"
}
}
},
"node-red": {
"docker-compose_node-red_1": {
"cmd": [],
"image": "nodered/node-red:2.2.2",
"labels": {
"authors": "Dave Conway-Jones, Nick O'Leary, James Thomas, Raymond Mouthaan",
"com.docker.compose.config-hash": "5610863d4b28b11645acb5651e7bab174125743dc86a265969788cc8ac782efe",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "docker-compose",
"com.docker.compose.project.config_files": "/appl/docker-compose/docker-compose-init.yml",
"com.docker.compose.project.working_dir": "/appl/docker-compose",
"com.docker.compose.service": "node-red",
"com.docker.compose.version": "1.29.2",
"org.label-schema.arch": "",
"org.label-schema.build-date": "2022-02-18T21:01:04Z",
"org.label-schema.description": "Low-code programming for event-driven applications.",
"org.label-schema.docker.dockerfile": ".docker/Dockerfile.alpine",
"org.label-schema.license": "Apache-2.0",
"org.label-schema.name": "Node-RED",
"org.label-schema.url": "https://nodered.org",
"org.label-schema.vcs-ref": "",
"org.label-schema.vcs-type": "Git",
"org.label-schema.vcs-url": "https://github.com/node-red/node-red-docker",
"org.label-schema.version": "2.2.2"
},
"networks": {
"docker-compose_homeserver-net": {
"IPAddress": "172.20.0.4",
"IPPrefixLen": 16,
"aliases": [
"fc56e973c98d",
"node-red"
],
"globalIPv6": "",
"globalIPv6PrefixLen": 0,
"links": null,
"macAddress": "02:42:ac:14:00:04"
}
},
"state": {
"running": true,
"status": "running"
}
}
},
"organizr": {
"docker-compose_organizr_1": {
"cmd": [],
"image": "organizr/organizr:linux-amd64",
"labels": {
"base.maintainer": "christronyxyocum,Roxedus",
"base.s6.arch": "amd64",
"base.s6.rel": "2.2.0.3",
"com.docker.compose.config-hash": "430b338b0c0892a25522e1b641a9e3a08eedd255309b1cd275b22a3362dcac58",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "docker-compose",
"com.docker.compose.project.config_files": "/appl/docker-compose/docker-compose-init.yml",
"com.docker.compose.project.working_dir": "/appl/docker-compose",
"com.docker.compose.service": "organizr",
"com.docker.compose.version": "1.29.2",
"maintainer": "christronyxyocum,Roxedus",
"org.label-schema.description": "Baseimage for Organizr",
"org.label-schema.name": "organizr/base",
"org.label-schema.schema-version": "1.0",
"org.label-schema.url": "https://organizr.app/",
"org.label-schema.vcs-url": "https://github.com/organizr/docker-base",
"org.opencontainers.image.created": "2022-05-08_15",
"org.opencontainers.image.source": "https://github.com/Organizr/docker-organizr/tree/master",
"org.opencontainers.image.title": "organizr/base",
"org.opencontainers.image.url": "https://github.com/Organizr/docker-organizr/blob/master/README.md"
},
"networks": {
"docker-compose_homeserver-net": {
"IPAddress": "172.20.0.3",
"IPPrefixLen": 16,
"aliases": [
"organizr",
"f3f61d8938fe"
],
"globalIPv6": "",
"globalIPv6PrefixLen": 0,
"links": null,
"macAddress": "02:42:ac:14:00:03"
}
},
"state": {
"running": true,
"status": "running"
}
}
},
"prometheus": {
"docker-compose_prometheus_1": {
"cmd": [
"--config.file=/etc/prometheus/prometheus.yml",
"--storage.tsdb.path=/prometheus",
"--web.console.libraries=/etc/prometheus/console_libraries",
"--web.console.templates=/etc/prometheus/consoles",
"--web.enable-lifecycle"
],
"image": "prom/prometheus:v2.35.0",
"labels": {
"com.docker.compose.config-hash": "7d2ce7deba1a152ebcf4fe5494384018c514f6703b5e906aef6f2e8820733cb2",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "docker-compose",
"com.docker.compose.project.config_files": "/appl/docker-compose/docker-compose-init.yml",
"com.docker.compose.project.working_dir": "/appl/docker-compose",
"com.docker.compose.service": "prometheus",
"com.docker.compose.version": "1.29.2",
"maintainer": "The Prometheus Authors <prometheus-developers#googlegroups.com>"
},
"networks": {
"docker-compose_homeserver-net": {
"IPAddress": "172.20.0.5",
"IPPrefixLen": 16,
"aliases": [
"04f346e6694f",
"prometheus"
],
"globalIPv6": "",
"globalIPv6PrefixLen": 0,
"links": null,
"macAddress": "02:42:ac:14:00:05"
}
},
"state": {
"running": true,
"status": "running"
}
}
}
}
}
}
And I need an output similar to
- docker-compose_grafana_1
- docker-compose_node-red_1
- docker-compose_organizr_1
- docker-compose_prometheus_1
I can do that with jq easy-peasy:
jq --raw-output '.docker_compose_init_result.services[] | keys | .[]' jsondata.json
But I am not able to do it with Ansible and especially json_query (and thus JMESPath).
I was able to get one key with
jp -f jsondata.json "keys(docker_compose_init_result.services.grafana)"
[
"docker-compose_grafana_1"
]
But have no idea how to get all four. Also sometimes expressions that worked with jp did not work in Ansible with json_query, which additionally made me mad.
If anyone can give me a solution for this (whether it's with json_query or not), and in the best case explain how it works, I would be very glad.
Solution using only builtin filters:
docker_compose_list: "{{ docker_compose_init_result.services | dict2items
| map(attribute='value') | map('dict2items')
| flatten | map(attribute='key') }}"
which gives once expanded:
{
"docker_compose_list": [
"docker-compose_grafana_1",
"docker-compose_node-red_1",
"docker-compose_organizr_1",
"docker-compose_prometheus_1"
]
}
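Wrapped in a task it could look like this (a sketch; it assumes docker_compose_init_result was registered earlier, and the trailing | list just forces the map generator into a plain list):

- set_fact:
    docker_compose_list: "{{ docker_compose_init_result.services | dict2items
                             | map(attribute='value') | map('dict2items')
                             | flatten | map(attribute='key') | list }}"
- debug:
    var: docker_compose_list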
In a pure JMESPath way, your query should be:
docker_compose_init_result.services.*.keys(@)[]
Where:
.* is the notation for an object projection, which gets you the values under docker_compose_init_result.services, whatever their keys might be.
.keys(@) is the keys() function applied to the current node (@), so keys() runs on each value produced by the projection (e.g. on docker_compose_init_result.services.grafana, docker_compose_init_result.services."node-red", and so on), returning their keys such as "docker-compose_grafana_1".
[] is the flatten operator, which reduces your array of arrays to a single-level array.
The query below
docker_compose_list: "{{ docker_compose_init_result|
json_query(_query) }}"
_query: 'services.*.keys(@)'
gives
docker_compose_list:
- - docker-compose_grafana_1
- - docker-compose_node-red_1
- - docker-compose_organizr_1
- - docker-compose_prometheus_1
Select first items
docker_compose_list: "{{ docker_compose_init_result|
json_query(_query)|
map('first')|list }}"
or flatten the list
docker_compose_list: "{{ docker_compose_init_result|
json_query(_query)|
flatten }}"
both give
docker_compose_list:
- docker-compose_grafana_1
- docker-compose_node-red_1
- docker-compose_organizr_1
- docker-compose_prometheus_1
Note: Be careful how you select or flatten the list. There might be a reason for the second-level keys. For example, if there are more keys in services.grafana the result might be
docker_compose_list:
- - docker-compose_grafana_1
- docker-compose_grafana_2
- docker-compose_grafana_3
- - docker-compose_node-red_1
- - docker-compose_organizr_1
- - docker-compose_prometheus_1
In this case, taking the first item or flattening the list doesn't necessarily give the result you want.
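If the per-service grouping matters, one possible sketch using only built-in filters is to build a dictionary keyed by service name instead of a flat list (docker_compose_by_service is just a name I made up):

- set_fact:
    docker_compose_by_service: >-
      {{ docker_compose_by_service | default({})
         | combine({item.key: item.value.keys() | list}) }}
  loop: "{{ docker_compose_init_result.services | dict2items }}"
- debug:
    var: docker_compose_by_service

This should give something like {"grafana": ["docker-compose_grafana_1"], "node-red": ["docker-compose_node-red_1"], ...}, keeping all containers per service without deciding how to flatten them.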

Ansible - How to combine list attributes?

I have two separate lists. The first is a list (base_list) with the base parameters, and the second is a list (dev_list) with parameters for a specific environment.
"base_list": [
{
"name": "kibana",
"path": "kibana/conf/kibana.xml",
"src": "/Users/ansible/inventories/_base/group_vars/kibana/conf/kibana.xml"
},
{
"name": "logstash",
"path": "logstash/conf/logstash.yml",
"src": "/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml"
},
{
"name": "grafana",
"path": "grafana/conf/grafana.json",
"src": "/Users/ansible/inventories/_base/group_vars/grafana/conf/grafana.json"
},
{
"name": "grafana",
"path": "grafana/conf/nginx.json",
"src": "/Users/ansible/inventories/_base/group_vars/grafana/conf/nginx.json"
},
{
"name": "grafana",
"path": "grafana/conf/config.json",
"src": "/Users/ansible/inventories/_base/group_vars/grafana/conf/config.json"
},
]
"dev_list": [
{
"name": "kibana",
"path": "kibana/conf/kibana.xml",
"src": "/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
},
{
"name": "logstash",
"path": "logstash/conf/jvm.options",
"src": "/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
}
]
My goal is to combine these two lists so that each item.name appears once, with several item.path and item.src values where needed, like this:
"end_list": [
{
"name": "kibana",
"path": "kibana/conf/kibana.xml",
"src": "/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
},
{
"name": "logstash",
"path": [
"logstash/conf/logstash.yml",
"logstash/conf/jvm.options"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml",
"/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
]
},
{
"name": "grafana",
"path": [
"grafana/conf/grafana.json",
"grafana/conf/nginx.json",
"grafana/conf/config.json"
]
"src": [
"/Users/ansible/inventories/_base/group_vars/grafana/conf/grafana.json",
"/Users/ansible/inventories/_base/group_vars/grafana/conf/nginx.json",
"/Users/ansible/inventories/_base/group_vars/grafana/conf/config.json"
]
},
]
What would be the best way to do this?
This would probably be easier with a custom Python filter, but here's a solution using Ansible's built-in filters:
---
- hosts: localhost
gather_facts: false
vars:
"base_list": [
{
"name": "kibana",
"path": "kibana/conf/kibana.xml",
"src": "/Users/ansible/inventories/_base/group_vars/kibana/conf/kibana.xml"
},
{
"name": "logstash",
"path": "logstash/conf/logstash.yml",
"src": "/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml"
},
{
"name": "grafana",
"path": "grafana/conf/grafana.json",
"src": "/Users/ansible/inventories/_base/group_vars/grafana/conf/grafana.json"
},
]
"dev_list": [
{
"name": "kibana",
"path": "kibana/conf/kibana.xml",
"src": "/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
},
{
"name": "logstash",
"path": "logstash/conf/jvm.options",
"src": "/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
}
]
tasks:
- set_fact:
end_list: >-
{{ end_list|default([]) + [
{
'name': item.0.name,
'path': item.1.path|ternary([item.0.path, item.1.path], item.0.path),
'src': item.1.src|ternary([item.0.src, item.1.src], item.1.src)
}
]}}
loop: >-
{{ base_list|zip_longest(dev_list,
fillvalue={'path': false, 'src': false})|list }}
- debug:
var: end_list
This was a little tricky to put together, so I'll try to describe the various parts:
The loop uses the zip_longest filter. Given the lists list1=[1, 2, 3] and list2=[11, 12], list1|zip_longest(list2) would produce [[1,11], [2,12], [3,None]] (that is, by default, zip_longest will use None as a fill value if one list is shorter than the other). By setting the fillvalue parameter, we can use a value other than None. In this case...
loop: >-
{{ base_list|zip_longest(dev_list,
fillvalue={'path': false, 'src': false})|list }}
...We're setting the fill value to a dictionary with stub values for path and src, since this makes the rest of the expression easier.
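As a quick illustration of that behaviour (made-up lists, not part of the playbook above):

- debug:
    msg: "{{ [1, 2, 3] | zip_longest([11, 12], fillvalue=0) }}"
  # prints [[1, 11], [2, 12], [3, 0]]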
The meat of the solution is of course the set_fact action, which in simplified form looks like:
end_list: "{{ end_list|default([]) + [{...a dictionary...}] }}"
In other words, for each iteration of the loop, this will append a new dictionary to end_list.
We create the dictionary like this:
{
'name': item.0.name,
'path': item.1.path|ternary([item.0.path, item.1.path], item.0.path),
'src': item.1.src|ternary([item.0.src, item.1.src], item.1.src)
}
We're using the ternary filter here, which evaluates its input as a boolean; if it's true, it selects the first argument, otherwise the second. Here we're taking advantage of the fillvalue we passed to the zip_longest filter: if dev_list is shorter than base_list, we'll have some items for which item.1.path and item.1.src are false, causing the ternary filter to select the second value (either item.0.path or item.1.src). In other cases, we build a list by combining the values from each of base_list and dev_list.
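For example, with made-up values just to show the two branches:

- debug:
    msg: "{{ false | ternary('combine both values', 'keep the base value only') }}"
  # prints "keep the base value only"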
The result of running this playbook looks like:
ok: [localhost] => {
"end_list": [
{
"name": "kibana",
"path": [
"kibana/conf/kibana.xml",
"kibana/conf/kibana.xml"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/kibana/conf/kibana.xml",
"/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
]
},
{
"name": "logstash",
"path": [
"logstash/conf/logstash.yml",
"logstash/conf/jvm.options"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml",
"/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
]
},
{
"name": "grafana",
"path": "grafana/conf/grafana.json",
"src": false
}
]
}
Let me know if that helps, and whether or not the resulting data structure is what you were looking for. I had to make a few assumptions since your example end_list contained invalid syntax, so I took a guess at what you wanted.
Assuming you had well-formed JSON and those are properties on the root object, jq is perfectly suited for this. Group the contents of the arrays by name, then generate the appropriate result objects.
$ jq '{
end_combine: (
.base_list + .dev_list
| group_by(.name)
| map({ name: .[0].name, path: map(.path), src: map(.src) })
)
}' input.json
{
"end_combine": [
{
"name": "grafana",
"path": [
"grafana/conf/grafana.json",
"grafana/conf/nginx.json",
"grafana/conf/config.json"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/grafana/conf/grafana.json",
"/Users/ansible/inventories/_base/group_vars/grafana/conf/nginx.json",
"/Users/ansible/inventories/_base/group_vars/grafana/conf/config.json"
]
},
{
"name": "kibana",
"path": [
"kibana/conf/kibana.xml",
"kibana/conf/kibana.xml"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/kibana/conf/kibana.xml",
"/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
]
},
{
"name": "logstash",
"path": [
"logstash/conf/logstash.yml",
"logstash/conf/jvm.options"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml",
"/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
]
}
]
}
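As a possible refinement (not part of the answer above): if you want single-element results to collapse back to scalars, closer to the shape of the asker's end_list, you could deduplicate and unwrap them. Note that unique sorts the values, and genuinely different values (such as the two kibana src paths) still stay as an array:

$ jq '{
  end_combine: (
    .base_list + .dev_list
    | group_by(.name)
    | map({
        name: .[0].name,
        path: (map(.path) | unique | if length == 1 then first else . end),
        src:  (map(.src)  | unique | if length == 1 then first else . end)
      })
  )
}' input.json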

AWS EC2 Systems Manager Parameter Types

I'm trying to use the Amazon EC2 Systems Manager (http://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) to create an "Automation" document type to (amongst other things) tag an AMI it just created.
You can create tags in a predetermined manner like this within "mainSteps":
...
{
"name": "CreateTags",
"action": "aws:createTags",
"maxAttempts": 3,
"onFailure": "Abort",
"inputs": {
"ResourceType": "EC2",
"ResourceIds": ["{{ CreateImage.ImageId }}"],
"Tags": [
{
"Key": "Original_AMI_ID",
"Value": "Created from {{ SourceAmiId }}"
}
]
}
},
...
but to tag with a variable number of tags, I'm assuming the following change is necessary:
...
{
"name": "CreateTags",
"action": "aws:createTags",
"maxAttempts": 3,
"onFailure": "Abort",
"inputs": {
"ResourceType": "EC2",
"ResourceIds": ["{{ CreateImage.ImageId }}"],
"Tags": {{ Tags }}
}
},
...
with the addition of a new parameter called 'Tags' of type 'MapList':
"parameters": {
"Tags": {
"type": "MapList"
}
}
since running the process was complaining about my using a 'String' type and saying I should use a 'MapList'.
'MapList' is listed as a parameter type of the Amazon EC2 Systems Manager (http://docs.aws.amazon.com/systems-manager/latest/APIReference/top-level.html), but I have not yet found any documentation on how to define this type.
I have guessed at several formats, based both on what I've seen in the 'hardcoded' sample above and on the tagging methods in their other APIs, to no avail:
[ { "Key": "Name", "Value": "newAmi" } ]
[ { "Key": "Name", "Values": [ "newAmi" ] } ]
1: { "Key": "Name", "Values": [ "newAmi" ] }
Does anyone know how to define the new parameter types introduced with the Amazon EC2 Systems Manager (specifically, 'MapList')?
Update:
Since the docs are lacking, Amazon Support is asking the automation team how best to tag AMIs using this method. I have found how to add a single tag as a parameter value in the console, though:
{ "Key": "TagName", "Value": "TagValue" }
My attempts to add multiple tags will allow the automation to start:
{ "Key": "TagName1", "Value": "TagValue1" }, { "Key": "TagName2", "Value": "TagValue2" }
but ultimately returns this generic error at runtime:
Internal Server Error. Please refer to Automation Service Troubleshooting
Guide for more diagnosis details
It might seem like the [] is missing from around the array, but you seem to get those for free because when I add them I get this error:
Parameter type error. [[ { "Key": "Description", "Value": "Desc" },
{ "Key": "Name", "Value": "Nm" } ]] is defined as MapList.
Thanks for using the EC2 Systems Manager Automation feature. Here's the document I tested; it works. Note the two differences from your attempt: the step references the input as a quoted string ("Tags": "{{ Tags }}"), and the Tags parameter is declared as type MapList with a list of maps as its default.
{
"schemaVersion": "0.3",
"description": "Test tags.",
"assumeRole": "arn:aws:iam::123456789012:role/TestRole",
"parameters": {
"Tags": {
"default": [{
"Key": "TagName1",
"Value": "TagValue1"
},
{
"Key": "TagName2",
"Value": "TagValue2"
}],
"type": "MapList"
}
},
"mainSteps": [
{
"name": "CreateTags",
"action": "aws:createTags",
"maxAttempts": 3,
"onFailure": "Abort",
"inputs": {
"ResourceType": "EC2",
"ResourceIds": [
"i-12345678"
],
"Tags": "{{ Tags }}"
}
}
]
}
