Is there a way to check playbook syntax and variables?
I'm trying a dry run (--check), but for some reason it runs really slowly. It looks like it tries to perform the actions instead of just checking the syntax.
I want to catch errors like this:
..."msg": "AnsibleUndefinedVariable: ERROR! 'application_name' is undefined"}
This is expected behaviour according to the documentation:
When ansible-playbook is executed with --check it will not make any
changes on remote systems. Instead, any module instrumented to support
‘check mode’ (which contains most of the primary core modules, but it
is not required that all modules do this) will report what changes
they would have made rather than making them. Other modules that do
not support check mode will also take no action, but just will not
report what changes they might have made.
Old link (does not work anymore): http://docs.ansible.com/ansible/playbooks_checkmode.html
New link: https://docs.ansible.com/ansible/latest/user_guide/playbooks_checkmode.html#using-check-mode
If you would like to check the YAML syntax you can use --syntax-check:
ansible-playbook rds_prod.yml --syntax-check
playbook: rds_prod.yml
I was looking for the same, but was not satisfied by the --syntax-check option, since it does not work its way down to the roles. A more complete check can be performed with ansible-lint which also includes style-checks. But if you turn off all style-checks, then you have a pretty complete syntax-check.
So do something like
ansible-lint -x $(echo $(ansible-lint -L | awk -F':' '{print $1}' | grep '^[^ ]') | tr ' ' ',') my_playbook.yml
Add a task to fail the playbook when variables aren't defined. This should be the first task run.
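One way to sketch that first task (the file and variable names here are only illustrative) is with the assert module:

```shell
#!/bin/bash
# Sketch: a first task that fails fast when a required variable is undefined.
# File and variable names are illustrative only.
cd "$(mktemp -d)"
cat > preflight_tasks.yml <<'EOF'
- name: Fail early if required variables are undefined
  assert:
    that:
      - application_name is defined
    fail_msg: "application_name must be set"
EOF
cat preflight_tasks.yml
```

Include the file first (e.g. via pre_tasks or include_tasks) so the play stops before any real work starts.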
Another option is to ensure that all variables have a default value in the /defaults/ directory so that it never fails, but the variables can still be overwritten at other levels.
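That layout can be sketched like this (the role and variable names are placeholders):

```shell
#!/bin/bash
# Sketch: give every variable a lowest-precedence default in the role's
# defaults/ directory. Role and variable names are placeholders.
cd "$(mktemp -d)"
mkdir -p roles/myapp/defaults
cat > roles/myapp/defaults/main.yml <<'EOF'
# Overridable by inventory vars, play vars, and -e extra vars.
application_name: myapp
EOF
cat roles/myapp/defaults/main.yml
```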
My preferred way is:
pip install yamllint
yamllint -d "{extends: default, rules: {quoted-strings: enable}}" .
Since I really want to catch quote errors, e.g.
validate: bash -c ' ' \""
This is valid YAML, since YAML will just quote the string and turn it into:
validate: "bash -c ' ' \\\"\""
while clearly a quote is missing at the beginning of the validate command.
So a normal YAML checker will not detect this; yamllint will not even detect it in its default configuration, so turn on the quoted-strings checker.
Related
How can I get the Jenkins ansible-playbook plugin to pass a list of strings in the same way I would on the command line?
ansible-playbook ... \
-e '{"package_urls": ["http...windows.exe", "http...linux.rpm", "http...babbage.steam"]}'
In Jenkins the plugin seems to take a map for extraVars, and my escaping attempts haven't yet worked:
ansiblePlaybook (..., extraVars: [
package_urls: """["http...windows.exe", "http...linux.rpm", "http...babbage.steam"]"""])
This results in the following, which lacks the surrounding quotes and is not recognized as a list:
... -e package_urls=["http...windows.exe", "http...linux.rpm", "http...babbage.steam"]
This works in that it produces the correct command line. The playbook plugin has an extras param which can be used to pass a variety of text down to the command line. I usually use this for the '-vvv' verbosity modification.
ansiblePlaybook ( ...,
extras: """-e '{"package_urls": ["http...windows.exe", "http...linux.rpm", "http...babbage.steam"]}'""")
The desired text is passed down to the command line with no modifications and the job pulls in its list.
This is probably not the optimal approach; populating a vars file or a JSON file that the playbook reads feels like a cleaner approach.
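For instance, the list could live in a JSON vars file that is passed with -e @file (the file name and URLs below are placeholders):

```shell
#!/bin/bash
# Sketch: keep the extra vars in a JSON file and pass it with -e @file.
# File name and URLs are placeholders.
cd "$(mktemp -d)"
cat > package_urls.json <<'EOF'
{"package_urls": ["http://example.com/a.exe", "http://example.com/b.rpm"]}
EOF
# Assumed invocation:
#   ansible-playbook site.yml -e @package_urls.json
cat package_urls.json
```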
Man pages for ansible and ansible-playbook define -i option as:
-i PATH, --inventory=PATH
The PATH to the inventory hosts file, which defaults to
/etc/ansible/hosts.
Yet to run on a local system the following syntax is used in examples:
ansible -i "localhost," -c local -m ping localhost
What exactly is this "localhost," with comma at the end (otherwise it is treated as filename) and how does it relate to PATH?
This is (now, at least) a documented feature. From the man page:
-i, --inventory, --inventory-file
specify inventory host path or comma separated host list. --inventory-file is deprecated
(emphasis added)
What's still not in the manual is that "comma separated host list" means that you need to add a comma even if the "list" is a single item, to distinguish between "target a single host called hostname":
$ ansible -i 'hostname,' ...
and "load inventory from a file called hostname":
$ ansible -i 'hostname' ...
If anyone out there has time, maybe you could submit a pull request to change the help text to explain this (and to add a hyphen in "comma-separated", but maybe that's just me..)
According to Michael DeHaan, who created Ansible, the comma trick you're referring to is a hack that shouldn't be relied upon. It's a hack to run Ansible without an inventory file, for cases where you're going to run against localhost. That way you don't actually have to create an inventory file that just lists localhost.
Actually, when you want to run commands against localhost, you don't need -i; instead, run it in the following way:
ansible localhost -m ping
Use -i only to specify the path for dynamic inventory or hosts.
I'm trying to come up with a way of parameterizing some config files for a docker image using env vars. A sample input file could look something like:
<database>
<host>${DB_HOST}</host>
<name>${DB_NAME}</name>
<user>${DB_USER}</user>
<pass>${DB_PASSWORD}</pass>
</database>
and I'd want the output to basically be exactly the same, but with the fields filled in with the corresponding environment variable value.
Is there a simple way to do this directly in bash? The system I'm using doesn't natively support env vars, so I need to write these config files by hand, but I need to inject different vars for each environment I'm in so I don't particularly want to actually maintain real files for each env.
It is as simple as:
cat <<EOF > new_config_file
<database>
<host>${DB_HOST}</host>
<name>${DB_NAME}</name>
<user>${DB_USER}</user>
<pass>${DB_PASSWORD}</pass>
</database>
EOF
ls -l new_config_file
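One caveat: an unset variable silently expands to an empty string inside the heredoc. If the script should fail instead, bash's ${VAR:?message} expansion aborts when the variable is unset or empty (the values below are demo values):

```shell
#!/bin/bash
set -eu
cd "$(mktemp -d)"
# Demo values; in practice these come from the environment.
DB_HOST=db.example.com
DB_NAME=mydb
# ${VAR:?msg} makes the script abort with an error if VAR is missing.
cat <<EOF > new_config_file
<database>
  <host>${DB_HOST:?DB_HOST is required}</host>
  <name>${DB_NAME:?DB_NAME is required}</name>
</database>
EOF
cat new_config_file
```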
You're looking for envsubst:
$ envsubst < config_template > config_instance
Just another option.
My generated config files often require just a bit of logic + embedding environment variable contents. At some point I got tired of reinventing the wheel every time and stitched together a simple tool in Go called Templater (https://github.com/reertech/templater).
It's a standalone, zero-dependency binary for most systems, so you can just download it and use the Go templating language (https://golang.org/pkg/text/template/) like this:
#!/bin/bash
YEAR=1997 \
NAME=John \
SURNAME=Connor \
./templater -t - -o /tmp/hello.txt <<EOF
Hello, {{env "NAME"}} {{env "SURNAME"}}!
{{ if env "YEAR" | parseInt | eq 1997 }}
Have a nice Judgement Day!
{{ end }}
EOF
In case you're not comfortable with prebuilt binaries you can use Go and build one yourself.
Hope this helps you or someone else who cannot cut it with just environment substitution and doesn't feel like Bashing.
I have to use the ls command to get the details of certain types of files. The file name has a specific format: the first two words, followed by the date on which the file was generated.
e.g.:
Report_execution_032916.pdf
Report_execution_033016.pdf
The word Summary can also come in place of Report.
e.g.:
Summary_execution_032916.pdf
Hence in my shell script I put these lines of code:
DATE=`date +%m%d%y`
Model=Report
file=`ls ${Model}_execution_*${DATE}_*.pdf`
But the value of Model always gets resolved to 'REPORT' and hence I get:
ls: cannot access REPORT_execution_*032916_*.pdf: No such file or directory
I am stuck at how the resolution of Model is happening here.
I can't reproduce the exact code here, hence I have changed some variable names. Initially I had used the variable name type instead of Model, but Model is the one I use in my actual code.
You've changed your script to use Model=Report and ${Model} and you've said you have typeset -u Model in your script. The -u option to the typeset command (instead of declare — they're synonyms) means "convert the strings assigned to all upper-case".
-u When the variable is assigned a value, all lower-case characters are converted to upper-case. The lower-case attribute is disabled.
That explains the upper-case REPORT in the variable expansion. You can demonstrate by writing:
Model=Report
echo "Model=[${Model}]"
It would echo Model=[REPORT] because of the typeset -u Model.
Don't use the -u option if you don't want it.
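A minimal reproduction (declare -u is the same option spelled with the declare synonym):

```shell
#!/bin/bash
# -u upper-cases any value assigned to the variable.
declare -u Model
Model=Report
echo "Model=[${Model}]"   # prints Model=[REPORT]
```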
You should probably fix your glob expression too:
file=$(ls ${Model}_execution_*${DATE}*.pdf)
Using $(…) instead of backticks is generally a good idea.
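One concrete reason: $(…) nests without the backslash-escaping that backticks require:

```shell
#!/bin/bash
cd /tmp
# Backticks must be escaped to nest:
d1=`basename \`pwd\``
# $(...) nests cleanly:
d2=$(basename $(pwd))
echo "$d1 $d2"   # prints: tmp tmp
```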
And, as a general point, learn how to Debug a Bash Script and always provide an MCVE (How to create a Minimal, Complete, and Verifiable Example?) so that we can see what your problem is more easily.
Some things to look at:
type is the name of a shell builtin; although using it as a variable won't break your script, I suggest you change that variable name to something else.
You are missing an $ before {DATE}, and you have an extra _ after it. If the date is the last part of the name, then there's no point in having an * at the end either. The file definition should be:
file=`ls ${type}_execution_*${DATE}.pdf`
Try debugging your code by parts: instead of doing an ls, do an echo of each variable, see what comes out, and trace the problem back to its origin.
As @DevSolar pointed out, you may have problems parsing the output of ls.
As a workaround
ls | grep `date +%m%d%y` | grep "_execution_" | grep -E 'Report|Summary'
filters the ls output afterwards.
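A way to avoid parsing ls output altogether is to let the shell expand the glob into an array; with nullglob set, a pattern with no matches expands to nothing instead of staying literal:

```shell
#!/bin/bash
cd "$(mktemp -d)"
touch Report_execution_032916.pdf Summary_execution_032916.pdf
DATE=032916
shopt -s nullglob
files=( *"_execution_${DATE}.pdf" )
echo "matched ${#files[@]} file(s): ${files[@]}"
```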
touch 'Summary_execution_032916.pdf'
DATE=`date +%m%d%y`
Model=Summary
file=`ls ${Model}_execution_*${DATE}*.pdf`
worked just fine on
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
Part of question:
But the value of Model always gets resolved to 'REPORT' and hence I get:
This is because something in your script converts Model to uppercase (for example, a typeset -u Model declaration).
Part of question:
ls: cannot access REPORT_execution_*032916_*.pdf: No such file or directory
The "No such file or directory" issue is due to the additional "_" and the extra "*" you have put in your third line.
Remove them and the error will be gone, though Model will still resolve to REPORT.
Original third line:
file=`ls ${Model}_execution_*${DATE}_*.pdf`
Change it to
file=`ls ${Model}_execution_${DATE}.pdf`
The above change will resolve the "No such file or directory" issue.
Part of question
I am stuck at how the resolution of Model is happening here.
I am not sure what you are trying to achieve, but if you are trying to populate the file variable with file names like anything_execution_someDate.pdf, then you can write your script as:
DATE=`date +%m%d%y`
file=`ls *_execution_${DATE}.pdf`
If you echo the value of file you will get
Report_execution_032916.pdf Summary_execution_032916.pdf
as the answer
There were some other scripts invoked before control reaches the lines of code I mentioned in the question. One of those scripts contains this code:
typeset -u Model
This forces the value of the variable Model to uppercase, which is why this error was thrown:
ls: cannot access REPORT_execution_032916_.pdf: No such file or directory
I am sorry that I couldn't provide a minimal, complete and verifiable example.