How can I parse a string in Ansible?

I use the openssl command to retrieve data and register the result in ansible. I receive the following output:
# cert.stdout_lines
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            ed:92:fe:51:b1:d1:6c:91:03:00:00:00:00:cb:f7:b1
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = US, O = Google Trust Services, CN = GTS CA 1O1
        Validity
            Not Before: Apr 13 10:17:32 2021 GMT
            Not After : Jul 6 10:17:31 2021 GMT
        Subject: C = US, ST = California, L = Mountain View, O = Google LLC, CN = www.google.com
        Subject Public Key Info:
I want to use certain fields, such as CN, Not Before, and Not After, so I tried to parse the output as a YAML data structure, but that does not work:
set_fact:
  test: "{{ cert.stdout_lines | from_yaml }}"
How can I use the data from that command in Ansible?
(I can't use the get_cert module because of Python restrictions, and I can't change the Python version.)

You should try to use a module, but since you say you cannot, here is how to do it without one:
from_yaml parses valid YAML into a data structure. Your data is not in YAML format, so from_yaml does not work.
If you need to use the openssl command, you will need a regex to parse the output. You can do that with grep in the shell module, or with the regex_search filter in Ansible. Since we want to use as much Ansible as possible, I'll show how to do it with the filter:
set_fact:
  not_before: "{{ cert.stdout | regex_search('(?<=Not Before: ).*') }}"
  not_after: "{{ cert.stdout | regex_search('(?<=Not After : ).*') }}"
  subject: "{{ cert.stdout | regex_search('(?<=Subject: ).*') | regex_search('(?<=CN = ).*') }}"
Note that regex_search operates on a string, so cert.stdout is used here rather than the stdout_lines list, and that openssl prints a space before the colon in "Not After : ".
Check out the documentation of python regexes and ansible's regex filters.
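Since regex_search is a thin wrapper around Python's re module, the lookbehind patterns can be sanity-checked in plain Python before putting them in a playbook (the sample text below is abridged from the question):

```python
import re

# Sample openssl output, abridged from the question above.
cert_text = """\
Certificate:
    Data:
        Validity
            Not Before: Apr 13 10:17:32 2021 GMT
            Not After : Jul 6 10:17:31 2021 GMT
        Subject: C = US, ST = California, L = Mountain View, O = Google LLC, CN = www.google.com
"""

# The same lookbehind patterns used by the regex_search filter.
not_before = re.search(r'(?<=Not Before: ).*', cert_text).group()
not_after = re.search(r'(?<=Not After : ).*', cert_text).group()
subject = re.search(r'(?<=Subject: ).*', cert_text).group()
cn = re.search(r'(?<=CN = ).*', subject).group()

print(not_before)  # Apr 13 10:17:32 2021 GMT
print(not_after)   # Jul 6 10:17:31 2021 GMT
print(cn)          # www.google.com
```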

Write a wrapper script that runs your openssl command and parses its output, then have the wrapper script print the needed info as valid YAML.

Related

Benthos: How to get variable from processor to input?

I'm new to Benthos and hoped the following configuration would work. I looked at the Benthos docs and tried Googling, but didn't find an answer; any help is greatly appreciated.
Eventually sign will be a calculated value, but right now I'm stuck on the first step: I can't get the sign value assigned to the header.
input:
  processors:
    - bloblang: |
        meta sign = "1233312312312312"
        meta device_id = "31231312312"
  http_client:
    url: >-
      https://test/${!meta("device_id")}
    verb: GET
    headers:
      sign: ${!meta("sign")}
After @Mihai Todor helped, I now have a new question.
This first config works:
input:
  http_client:
    url: >-
      https://test/api
    verb: GET
    headers:
      sign: "signcode"
but this one returns an invalid-signature error:
input:
  generate:
    mapping: root = {}
    count: 1
pipeline:
  processors:
    - http:
        url: >-
          https://test/api
        verb: GET
        headers:
          sign: "signcode"
output:
  stdout: {}
Update: more detailed screenshots of the first and second requests were attached here.
Finally I got it working with @Mihai's help.
The reason I got the 'signature is invalid' error was a space character in the stringToSign header parameter. For some reason I need to add a stringToSign parameter whose value must be empty, but I somehow copied an invisible space character into it. This makes Benthos add a Content-Length: 1 header to the request (I don't know why), and that Content-Length: 1 caused my request to always fail with a 'signature invalid' error.
After I deleted the space character, everything worked.
Input processors operate on the messages returned by the input, so you can't set metadata that way. Also, metadata is associated with in-flight messages and doesn't persist across messages (use an in-memory cache if you need that).
One workaround is to combine a generate input with an http processor like so:
input:
  generate:
    mapping: root = ""
    interval: 0s
    count: 1
  processors:
    - mapping: |
        meta sign = "1233312312312312"
        meta device_id = "31231312312"
pipeline:
  processors:
    - http:
        url: >-
          https://test/${!meta("device_id")}
        verb: GET
        headers:
          sign: ${!meta("sign")}
output:
  stdout: {}
Note that the mapping processor (the replacement for the soon-to-be deprecated bloblang one) can also reside under pipeline.processors, and, if you just need to set those metadata fields, you can also do it inside the mapping field of the generate input (root = {} is implicit).
Update: Following the comments, I ran two configs and used nc to print the full HTTP request each of them makes:
generate input with http processor:
input:
  generate:
    mapping: root = ""
    interval: 0s
    count: 1
  processors:
    - mapping: |
        meta sign = "1233312312312312"
        meta device_id = "31231312312"
pipeline:
  processors:
    - http:
        url: >-
          http://localhost:6666/${!meta("device_id")}
        verb: GET
        headers:
          sign: ${!meta("sign")}
output:
  stdout: {}
HTTP request dump:
> nc -l localhost 6666
GET /31231312312 HTTP/1.1
Host: localhost:6666
User-Agent: Go-http-client/1.1
Sign: 1233312312312312
Accept-Encoding: gzip
http_client input:
input:
  http_client:
    url: >-
      http://localhost:6666/31231312312
    verb: GET
    headers:
      sign: 1233312312312312
output:
  stdout: {}
HTTP request dump:
> nc -l localhost 6666
GET /31231312312 HTTP/1.1
Host: localhost:6666
User-Agent: Go-http-client/1.1
Sign: 1233312312312312
Accept-Encoding: gzip
I used Benthos v4.5.1 on macOS and, for both configs, the requests are identical.
My best guess is that you're seeing a transient issue on your end (some rate limiting, perhaps).

How to use Kubernetes secret object stringData to store base64 encoded privateKey

apiVersion: v1
kind: Secret
metadata:
  name: {{ include "backstage.fullname" . }}-backend
type: Opaque
stringData:
  GH_APP_PRIVATEKEY_BASE:|-
    {{ .Values.auth.ghApp.privateKey | quote | b64dec | indent 2 }}
I'm getting error converting YAML to JSON: yaml: line 22: could not find expected ':' as the result when trying to store a base64-encoded string in GH_APP_PRIVATEKEY_BASE.
My application (Backstage) uses Helm charts to map the env secret.
I keep having trouble storing/passing a multi-line RSA private key. I'm currently base64-encoding the private key into a one-liner, but the secret file still fails validation. I would love to know another approach, like passing a file with the key written in it.
BTW, I use GITHUB_PRVATE_KEY=$(echo -n $GITHUB_PRVATE_KEY | base64 -w 0) and helm_overrides="$helm_overrides --set auth.ghApp.clientSecret=$GITHUB_PRVATE_KEY" in a GitHub Action to encode the private key.
Try increasing the indent to 4:
...
stringData:
  GH_APP_PRIVATEKEY_BASE: |-
{{ .Values.auth.ghApp.privateKey | quote | b64dec | indent 4 }}
GH_APP_PRIVATEKEY_BASE:|-
You need a space in there: GH_APP_PRIVATEKEY_BASE: |-
Also, I'm not sure why you have a b64dec in there, but I don't think that's the immediate problem.
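As a side note, the base64 -w 0 step in the GitHub Action just turns the multi-line key into a single newline-free line, and Helm's b64dec reverses it. The round trip can be sketched in Python (the key below is a made-up placeholder, not a real key):

```python
import base64

# Placeholder multi-line private key (not a real key).
private_key = ("-----BEGIN RSA PRIVATE KEY-----\n"
               "MIIBOgIBAAJBAK...\n"
               "-----END RSA PRIVATE KEY-----")

# Equivalent of `echo -n "$KEY" | base64 -w 0`: one line, no wrapping.
one_liner = base64.b64encode(private_key.encode()).decode()
print("\n" in one_liner)  # False

# Equivalent of Helm's b64dec filter: restores the multi-line key.
decoded = base64.b64decode(one_liner).decode()
print(decoded == private_key)  # True
```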

How can I convert a decimal string to a hexadecimal string?

I have a playbook that queries a server for its SystemID, which can be converted to a model number using a vendor-provided table that maps the ID to a model. The server returns a decimal value, but the table uses the hexadecimal equivalent.
What I want to do is convert the decimal string to a hexadecimal string that can be matched against an entry in the vendor-provided table.
Example:
Server returns: SystemID = 1792
Matching entry in vendor table: 0x0700
I've searched the Ansible documentation and the Web for either a native Ansible command or a Jinja2 expression to do the conversion.
I've only found the int(value, base=x) Jinja2 function, which does the opposite of what I am trying to do.
The native Python hex() function can do it, but I'd like to avoid that if possible.
Here is the playbook task that parses the server's stdout to get the SystemID value:
set_fact:
  server_model: "{{ (server_model_results.stdout_lines | select('match', 'SystemID') | list | first).split('=')[1] | regex_replace('^\ |\ /$', '') }}"
Environment:
Ansible 2.9.7
Python 3.8.0
macOS 10.15.4
You can use a Python format string with the % operator inside a Jinja2 template:
$ ansible localhost -m debug -a msg="{{ '%#x' % 1792 }}"
localhost | SUCCESS => {
"msg": "0x700"
}
You will probably still have to deal with the leading 0 that is present in your file (i.e. 0x0700).
If all your values are padded to 4 hex digits in your file (i.e. after the 0x prefix), a quick-and-dirty solution could be:
$ ansible localhost -m debug -a msg="0x{{ '%04x' % 1792 }}"
localhost | SUCCESS => {
"msg": "0x0700"
}
If not, you will have to implement some kind of dynamic zero padding to the next even number of characters yourself.
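As a sketch (assuming the vendor table always pads to full bytes), that dynamic padding could be done in plain Python, e.g. inside a small filter plugin:

```python
# Minimal sketch of dynamic zero padding: extend the hex digits to the
# next even width so values line up with table entries like 0x0700.
def to_padded_hex(n: int) -> str:
    digits = "%x" % n      # e.g. 1792 -> "700"
    if len(digits) % 2:    # odd number of digits: prepend one zero
        digits = "0" + digits
    return "0x" + digits

print(to_padded_hex(1792))  # 0x0700
print(to_padded_hex(2569))  # 0x0a09
print(to_padded_hex(255))   # 0xff
```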
You might want to switch the 'x' type specifier to 'X' (see the doc link above) if hex digits above nine are uppercase in your vendor table:
$ ansible localhost -m debug -a msg="0x{{ '%04x' % 2569 }}"
localhost | SUCCESS => {
"msg": "0x0a09"
}
$ ansible localhost -m debug -a msg="0x{{ '%04X' % 2569 }}"
localhost | SUCCESS => {
"msg": "0x0A09"
}

Test a substring with special character in a list

I have a list with some application landscape names, and I have to look for a specific application whose name contains special characters in Jinja2:
landscape_list: ["cmdb:app1 landscape", "cmdb:app2 (ex app3) landscape",
"cmdb:app4 landscape"]
app_to_look: "app2 (ex app3)"
I'm trying to use this code to test the list:
{{landscape_list | select('search',land_key) | list | count > 0}}
But I always get 0 when I try to test "app2 (ex app3)".
I think this problem is related to special characters like ().
Is it possible to look into a list for that specific application in jinja2?
Thanks
Q: "This problem is related to special characters like ()."
A: Yes. The parentheses must be escaped in the regex. For example:
- set_fact:
    land_key: 'app2 \(ex app3\)'

- debug:
    msg: "{{ landscape_list|select('search', land_key)|list }}"

- debug:
    msg: "{{ landscape_list|select('search', land_key)|list|length }}"

- debug:
    msg: One or more items match the searched pattern.
  when: landscape_list|select('search', land_key)|list|length > 0
gives
    "msg": [
        "cmdb:app2 (ex app3) landscape"
    ]
    "msg": "1"
    "msg": "One or more items match the searched pattern."
I ended up using a similar method. Instead of search, I used the contains test:
{{ completed_list | select('contains', solution_search) | list | count > 0 }}
solution_search contains the full name of what I'm looking for:
{%- set solution_search = env_key ~ ' ' ~ env_server_key ~ ' TEST' -%}
Where env_key is the application name (which can contain special characters) and env_server_key is the application environment.
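Since the search test is backed by Python regular expressions, the escaping can also be done programmatically instead of by hand (Ansible exposes the same idea as the regex_escape filter). A quick sketch in plain Python:

```python
import re

landscape_list = ["cmdb:app1 landscape",
                  "cmdb:app2 (ex app3) landscape",
                  "cmdb:app4 landscape"]
app_to_look = "app2 (ex app3)"

# Unescaped, '(' and ')' act as regex grouping, so nothing matches.
print([s for s in landscape_list if re.search(app_to_look, s)])  # []

# Escaped, the parentheses are matched literally and the item is found.
pattern = re.escape(app_to_look)
print([s for s in landscape_list if re.search(pattern, s)])
# ['cmdb:app2 (ex app3) landscape']
```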

yaml multi line syntax without newline to space conversion

I have something like this dictionary:
env: qat
target_host: >
  {%if env in ['prd'] %}one
  {%elif env in ['qat','stg'] %}two
  {%else%}three
  {%endif%}
when I print it I get:
ok: [localhost] => {
"var": {
"target_host": "two "
} }
So it is converting the \n at the end of each line to a space, which is exactly what it is supposed to do. However, in this case I am only spreading the expression over several lines to make the structure of the if/else more readable, and I don't want the extra space. It works as expected if I put it all on one line without the >, but I would like to keep it multi-line just so it's easier to read.
I found this question
Is there a way to represent a long string that doesn't have any whitespace on multiple lines in a YAML document?
So I could do:
env: qat
target_host: "{%if env in ['prd'] %}one\
  {%elif env in ['qat','stg'] %}two\
  {%else%}three\
  {%endif%}"
And that gives the desired result.
Is there anyway to accomplish this without cluttering it up even more?
In Jinja* you can strip whitespaces/newlines by adding a minus sign to the start/end of a block. This should do the trick:
env: qat
target_host: >
  {%if env in ['prd'] -%}one
  {%- elif env in ['qat','stg'] -%}two
  {%- else -%}three
  {%- endif%}
* Jinja 2 is the templating engine used by Ansible.
Maybe what you need is the | literal?
env: qat
target_host: |
  {%if env in ['prd'] %}one
  {%elif env in ['qat','stg'] %}two
  {%else%}three
  {%endif%}
This will not 'fold' newlines into spaces, as opposed to >.
