Saltstack: how to launch "apt-get upgrade" with cache expiration time?

I've been unable to find in the docs how to run apt-get update with a cache expiration time. In Ansible it's quite easy to achieve:
- name: Update APT cache
  apt: update_cache=yes cache_valid_time=86400 # 24 hours
It would be nice to know how to achieve this with SaltStack. I'm using Vagrant here; it's wise to put this into a shareable folder so you won't need to do it for each VM you have.

Interesting. The following should work:
{% set time_then = salt['file.stats']('/var/cache/apt/pkgcache.bin')['mtime'] -%}
{% set time_now = salt['cmd.run']('date +"%s"')|float -%}
{% set time_diff = (time_now - time_then) -%}
{% if time_diff > 60*60*2 -%}
apt_get_update_if_2_hours_stale:
  cmd.run:
    - name: apt-get update -qqy
{% endif %}
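A state-only variant (a sketch, not from the original answer) is to let cmd.run decide for itself with an unless check, so no Jinja time arithmetic is needed; the two-hour threshold and the pkgcache.bin path are carried over from above:
apt_get_update_if_2_hours_stale:
  cmd.run:
    - name: apt-get update -qqy
    - unless: test $(( $(date +%s) - $(stat -c %Y /var/cache/apt/pkgcache.bin) )) -lt 7200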

May be done with:
apt_update_cache:
  module.run:
    - name: pkg.refresh_db
    - cache_valid_time: 500
or, using the newer module.run syntax:
apt_update_cache:
  module.run:
    - pkg.refresh_db:
      - cache_valid_time: 500
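For a quick one-off check, the same execution module function can also be called directly from the master (assuming Debian/Ubuntu minions, where pkg maps to the apt module):
salt '*' pkg.refresh_db cache_valid_time=500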

Related

Why does my ansible template file have no substitutions?

I have an Ansible template file which I'm applying correctly with the 'template' directive, but it's showing up on the remote machine with no substitutions:
- name: "buildAgent.properties for {{ agent_name }}"
  template:
    src: buildAgent.properties.j2
    dest: "{{ config_path }}/buildAgent.properties"
The template file looks something like this:
serverUrl={{ teamcity_url }}
name={{ agent_name }}
{% if teamcity_agent_variables %}
{% for variable in teamcity_agent_variables %}
{{ variable }}={{ teamcity_agent_variables[variable] }}
{% endfor %}
{% else %}
# no teamcity_agent_variables from ansible
{% endif %}
When it arrived on the remote machine, without any errors from Ansible, it looked exactly the same, even though the variables existed when I displayed them in the step just before the template step.
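One way to see what the template actually receives is to dump the variable's type and value just before the template task (a sketch using the variable names above):
- name: Show what the template will see
  debug:
    msg: "teamcity_agent_variables ({{ teamcity_agent_variables | type_debug }}): {{ teamcity_agent_variables }}"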
Update: version stuff.
% ansible --version
ansible [core 2.13.3]
config file = /Users/timb/git/mre-ansible/ansible.cfg
configured module search path = ['/Users/timb/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/6.3.0/libexec/lib/python3.10/site-packages/ansible
ansible collection location = /Users/timb/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.6 (main, Aug 30 2022, 05:12:36) [Clang 13.1.6 (clang-1316.0.21.2.5)]
jinja version = 3.1.2
libyaml = True
% grep error_on /Users/timb/git/mre-ansible/ansible.cfg
#error_on_missing_handler = True
error_on_undefined_vars = True
It turned out that teamcity_agent_variables had been overridden in one file by a list instead of a hash, so the attempts to dereference it failed.
I have no idea why Ansible didn't print a useful error when jinja failed - nor indeed when the type of the variable changed - instead of simply copying the template.
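A guard like the following (a sketch reusing the variable name above) would have failed fast instead of silently rendering nothing:
- name: Ensure teamcity_agent_variables is a mapping
  assert:
    that:
      - teamcity_agent_variables is mapping
    fail_msg: "Expected a dict, got {{ teamcity_agent_variables | type_debug }}"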

Logrotate for rsyslog in Ansible

I'm trying to change the rsyslog logrotate configuration using Ansible, but when running this task:
- name: Setup logrotate.d scripts
  template:
    src: logrotate.d.j2
    dest: "{{ logrotate_conf_dir }}{{ item.name }}"
  with_items: "{{ logrotate_scripts }}"
  when: logrotate_scripts is defined
with this kind of configuration data:
logrotate_scripts:
  - name: rsyslog
    path:
      - "/var/log/syslog.log"
      - "/var/log/daemon.log"
      - "/var/log/kern.log"
      - "/var/log/mail.log"
      - "/var/log/user.log"
      - "/var/log/lpr.log"
      - "/var/log/auth.log"
      - "/var/log/cron.log"
      - "/var/log/debug"
      - "/var/log/messages"
    options:
      - daily
      - missingok
      - maxsize 100M
      - rotate 14
      - compress
      - compresscmd /bin/bzip2
      - compressoptions -4
      - compressext .bz2
      - notifempty
I get this incorrectly formatted output:
['/var/log/syslog.log', '/var/log/daemon.log', '/var/log/kern.log', '/var/log/mail.log', '/var/log/user.log', '/var/log/lpr.log', '/var/log/auth.log', '/var/log/cron.log', '/var/log/debug', '/var/log/messages'] {
daily
missingok
maxsize 100M
rotate 14
compress
compresscmd /bin/bzip2
compressoptions -4
compressext .bz2
notifempty
}
This is the template I use for all my logrotate scripts (nginx, php and so on), but it is not working properly for rsyslog.
{{ item.path }} {
{% if item.options is defined -%}
{% for option in item.options -%}
{{ option }}
{% endfor -%}
{% endif %}
{%- if item.scripts is defined -%}
{%- for name, script in item.scripts.iteritems() -%}
{{ name }}
{{ script }}
endscript
{% endfor -%}
{% endif -%}
}
How should I properly pass the list of paths in order to get this effect?
/var/log/daemon.log
/var/log/kern.log
/var/log/auth.log
/var/log/user.log
/var/log/lpr.log
/var/log/cron.log
/var/log/debug
/var/log/messages
{
daily
missingok
maxsize 100M
rotate 14
compress
compresscmd /bin/bzip2
compressoptions -4
compressext .bz2
notifempty
}
You didn't share your template file but you probably want something like this in there:
{% for path in item.path %}
{{ path }}
{% endfor %}
to get your list of paths.
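An equivalent one-liner (a sketch; it assumes each path should go on its own line, as in the desired output) is to replace the opening {{ item.path }} { line of the template with a join:
{{ item.path | join('\n') }}
{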
Unless you are editing multiple files it seems wrong to use with_items like this. It is probably better to use lineinfile to customise the standard logrotate configuration with your settings.

Ansible dictionary not working when we have multiple strings in the list

I have a dictionary to loop over multiple strings in a list; if I provide two or more, it always reads only the last value in the list. Please suggest a fix.
- set_fact:
    env_microservice_variable_map: |
      {% set res = [] -%}
      {% for microservice_name in MICROSERVICE_NAMES -%}
      {% if microservice_name in MICROSERVICE_ENV_MAP -%}
      {% set microservice_envs = MICROSERVICE_ENV_MAP[microservice_name] -%}
      {% else -%}
      {% set microservice_envs = env_variable_map.keys() -%}
      {% endif -%}
      {% for env in microservice_envs -%}
      {% set variables = env_variable_map[env] -%}
      {% set ignored = variables.__setitem__("MICROSERVICE_NAME", microservice_name) -%}
      {% set ignored = res.extend([variables]) -%}
      {%- endfor %}
      {%- endfor %}
      {{ res }}
- name: Copy values file
  command: cp {{dir_path}}/helm/{{item.MICROSERVICE_NAME}}/values-template.yaml {{dir_path}}/helm/{{item.MICROSERVICE_NAME}}/values-{{item.EXEC_ENV}}-{{item.EXEC_REGION}}.yaml
  with_items: "{{ env_microservice_variable_map }}"
  become_user: jenkins
The first task is the set_fact that builds the mapping.
The second task should be able to loop over every entry when multiple strings are defined in MICROSERVICE_NAMES.
This is the Ansible command I am running; it always reads only the last string in the list (read-service). Please help, thanks.
ansible-playbook generate_values_files.yml -i hosts --extra-vars "#generate_values_files_variable.yml" --extra-vars="{"'"MICROSERVICE_NAMES"'":{'processor-create','processor-update','read-service'}}" '--extra-vars={"MICROSERVICE_ENV_MAP":{}}'
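For reference, assuming MICROSERVICE_NAMES is meant to be a JSON list (and that --extra-vars loads the variables file via an @ prefix), the same invocation can be written without the nested quoting, roughly like this:
ansible-playbook generate_values_files.yml -i hosts \
  --extra-vars "@generate_values_files_variable.yml" \
  --extra-vars '{"MICROSERVICE_NAMES": ["processor-create", "processor-update", "read-service"]}' \
  --extra-vars '{"MICROSERVICE_ENV_MAP": {}}'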
Variables:
dir_path: /jenkins
EXEC_ENV: dd
EXEC_REGION: west
Basically we have multiple directories
1. /jenkins/helm/processor-create/values-template.yml
2. /jenkins/helm/processor-update/values-template.yml
3. /jenkins/helm/read-service/values-template.yml
Each folder has a values-template.yml file in it; when I run the above playbook it should create multiple files in each folder, based on that template file:
1. /jenkins/helm/processor-create/values-template.yml
values-dd-west.yml
values-mm-west.yml
values-gg-west.yml
2. /jenkins/helm/processor-update/values-template.yml
values-dd-west.yml
values-mm-west.yml
values-gg-west.yml
3. /jenkins/helm/read-service/values-template.yml
values-dd-west.yml
values-mm-west.yml
values-gg-west.yml
The problem is that when I run the above Ansible tasks, files are only generated for the last service in the list: "read-service".
I suspect you have hit the well-known (to those who were unlucky enough to find it) gotcha of Jinja2.
If you set a variable inside a loop, it lives only within that loop. You need to initialize a container (a list or dict) outside of the loop and add items into it to get anything out of the loop.
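A minimal illustration of that scoping rule (a sketch, independent of the playbook above): plain {% set %} assignments do not survive across loop iterations, while a namespace object or a mutated list created outside the loop does:
{% set ns = namespace(total=0) %}
{% set collected = [] %}
{% for item in [1, 2, 3] %}
{% set ns.total = ns.total + item %}
{% set _ = collected.append(item * 2) %}
{% endfor %}
{{ ns.total }}   {# 6 #}
{{ collected }}  {# [2, 4, 6] #}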

how to properly take care of network interfaces configuration file using Ansible?

I want to be able to fully manage my /etc/network/interfaces.d/ configuration files using Ansible.
I already use Ansible for a lot of features, including Apache files, databases, and log files, but I can't find a way to properly add, update, or remove network interface configuration files.
There are a few different projects on my server using different interfaces, and I want my Ansible setup to work on any server I could deploy my project to.
I already found a way to create a new file using the next free interface, like this:
- name: calc next free interface
  set_fact:
    nextFreeIf: "
      {%- set ifacePrefix = vars.ansible_default_ipv4.alias -%}
      {%- set ifaceNum = { 'cnt': 0 } -%}
      {%- macro increment(dct, key, inc=1) -%}
      {%- if dct.update({key: dct[key] + inc}) -%}
      {%- endif -%}
      {%- endmacro -%}
      {%- for iface in ansible_interfaces|sort -%}
      {%- if iface | regex_search('^' ~ vars.ansible_default_ipv4.alias) -%}
      {{ increment(ifaceNum, 'cnt') }}
      {%- endif -%}
      {%- endfor -%}
      {{ifacePrefix}}:{{ifaceNum.cnt}}"
  tags: network
- name: "copy network interface configuration"
  template:
    src: "files/etc/network/interfaces.d/my-configuration.conf"
    dest: "/etc/network/interfaces.d/my-configuration.conf"
    owner: root
    group: root
    force: true
  notify: 'restart failover interface'
  tags: network
Now I need to find a way to check whether my configuration file is already present, so I don't recreate a new configuration file every time I run Ansible.
But if it is present, there is still a problem:
the network configuration file will look like this
auto {{ interface }}
iface {{ interface }} inet static
address {{ ip }}
netmask 255.255.255.255
Since I don't know which interface is used by my project, I need to check, for every available interface, whether it matches the actual file, and update using the next free interface if not.
I can't find a way to do this using Ansible!
I hope you can help me.
Well, I found a nice way to do what I wanted:
I couldn't figure out which interface was in use, if any. That's why I wanted to check, for every interface, whether it was the right one, and I was trying to find this out by comparing the file I would generate for each interface with the existing file.
But I do know which IP address is used, or will be used. Ansible has a fact for every interface in which I can find the corresponding address, so I don't need to compare files, only addresses.
I simply updated the task I used for getting the next free interface so that it returns the actual interface to use, which can be either the next free interface or the one already in use.
- name: find interface to use
  set_fact:
    interface: "
      {%- set ifacePrefix = vars.ansible_default_ipv4.alias -%}
      {%- set ifaceNum = { 'cnt': 1 } -%}
      {%- macro increment(dct, key, inc=1) -%}
      {%- if dct.update({key: dct[key] + inc}) -%}
      {%- endif -%}
      {%- endmacro -%}
      {%- for iface in ansible_interfaces|sort -%}
      {%- if ifacePrefix + '_' + ifaceNum.cnt|string in ansible_interfaces -%}
      {{ increment(ifaceNum, 'cnt') }}
      {%- endif -%}
      {%- endfor -%}
      {%- for iface in ansible_interfaces|sort -%}
      {%- if iface.startswith(ifacePrefix) and ansible_facts[iface]['ipv4']['address'] == ip_failover -%}
      {{ ifaceNum.update({'cnt': iface.split('_')[-1]}) }}
      {%- endif -%}
      {%- endfor -%}
      {{ifacePrefix}}:{{ifaceNum.cnt}}"
  tags: network
For information, the first for loop gets the first free interface even when there are gaps in the interface numbers, which can happen when someone brings some interfaces down.
To check if your conf file exists, you can use stat (https://docs.ansible.com/ansible/latest/modules/stat_module.html):
- stat:
    path: "/path/to/conf/file"
  register: conf_file
- name: Do it if conf file exists
  action: {...}
  when: "conf_file.stat.exists == True"

Ansible concat vars to string

I've spent most of the day trying to solve this problem and have thus far failed. I am building some playbooks to automate functions in Splunk, and am attempting to convert a list of hosts from an inventory group, e.g.
[search_head]
1.2.3.4
5.6.7.8
My expected (desired) result from the debug output of the play should be:
https://1.2.3.4:8089, https://5.6.7.8:8089
I am attempting to complete this by running the following playbook against a running host:
---
- name: Build search head list to initialize the captain
  hosts: search_head
  remote_user: ansible
  vars:
    inventory_file: ./inventory-ec2-single-site
    search_head_uri: "{{ lookup('template', './bootstrap-sh-deployer.j2') }}"
  pre_tasks:
    - include_vars:
        dir: 'group_vars'
        extensions:
          - yml
          - yaml
  tasks:
    - name: dump array
      debug:
        msg: "{{ search_head_uri }}"
With the template bootstrap-sh-deployer.j2:
{%- set search_head_uri = [] %}
{% for host in groups['search_head'] %}
{%- if search_head_uri.append("https://{{ host }}:8089") %}
{%- endif %}
{%- if not loop.last %}, {% endif -%}
{%- endfor %}
However, the current play returns search_head_uri: ", " which tells me that the loop is running, but {{ host }} is not resolving.
Once you open a Jinja2 expression or a statement you should use Jinja2 syntax. You cannot nest them (i.e. you can't use {{ }} inside {% %}).
{%- if search_head_uri.append("https://" + host + ":8089") %}
This worked: a combination of the answer above to fix the Jinja formatting, plus using hostvars to get to ansible_nodename.
{%- set search_head_uri = [] %}
{% for host in groups['search_head'] %}
{{ "https://" + hostvars[host]['ansible_nodename'] + ":8089" }}
{%- if not loop.last %}, {% endif -%}
{%- endfor %}
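If only the comma-separated string is needed, a shorter sketch (assuming the plain inventory hostnames are acceptable in place of ansible_nodename) avoids the explicit loop entirely:
{{ groups['search_head'] | map('regex_replace', '^(.*)$', 'https://\\1:8089') | join(', ') }}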
