Create File on Target Server Using Ansible

I'm trying to create a file on the target server using Ansible.
I would like to get the path from the env variable on the target server.
The code is:
- name: Create a file
  win_file:
    path: ansible_env.CONFIG_HOME\conf\xyz.txt
    state: file
The error I'm receiving is:
"changed": false,
"msg": "path ansible_env.CONFIG_HOME\conf\xyz.txt will not be created"

There are two issues here.
Firstly, since you are trying to use an environment variable, you need braces {{ }} to interpolate the value.
Secondly, have a close read of the docs for win_file where it talks about the way state behaves:
If file, the file will NOT be created if it does not exist
If touch, an empty file will be created if the path does not exist
Taking into consideration the above, the following should work for you:
- name: Create a file
  win_file:
    path: '{{ ansible_env.CONFIG_HOME }}\conf\xyz.txt'   # single quotes so the backslashes are not treated as YAML escape sequences
    state: touch
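If you want to double-check that the environment variable is actually visible to Ansible on the target, a quick throwaway task (just a sketch) is:

- name: Show the value of CONFIG_HOME as Ansible sees it
  debug:
    var: ansible_env.CONFIG_HOME   # requires gathered facts, since ansible_env comes from setup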

Find a file and rename it ansible playbook [duplicate]

So I have been trying to fix a mistake I made on all the servers by using a playbook. Basically I launched a playbook with logrotate to fix the growing-logs problem, and among the logs is one named btmp, which I wasn't supposed to rotate but did anyway by accident, and now logrotate has renamed it by appending a date, thereby breaking the log. Now I want to use a playbook that will find a log named btmp in the /var/log directory and rename it back. The problem is that the file is currently different on each server, for example one server has btmp-20210316 and another has btmp-20210309. On the bash command line one would use the wildcard "btmp*" to get around this, however that does not appear to work in a playbook. So far I came up with this:
tasks:
  - name: stat btmp*
    stat: path=/var/log
    register: btmp_stat

  - name: Move btmp
    command: mv /var/log/btmp* /var/log/btmp
    when: btmp_stat.stat.exists
However this results in an error that the file was not found. So my question is: how does one get the wildcard working in a playbook, or is there an equivalent way to find all files that have "btmp" in their names and rename them? BTW all servers are CentOS 7.
So I will add my own solution as well, even though the accepted answer's solution is better.
Make a bash script with a single line, anywhere on your Ansible VM.
The line is: mv /var/log/filename* /var/log/filename
And now create a playbook to run it on the target VM:
---
- hosts: '{{ server }}'
  remote_user: username
  become: yes
  become_method: sudo
  vars_prompt:
    - name: "server"
      prompt: "Enter server name or group"
      private: no
  tasks:
    - name: Move the script to target host VM
      copy: src=/anywhereyouwant/bashscript.sh dest=/tmp mode=0777

    - name: Execute the script
      command: sh /tmp/bashscript.sh

    - name: Delete the script
      command: rm /tmp/bashscript.sh
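For what it's worth, the script could also be replaced by a single shell task. This is only a rough sketch and it assumes at most one rotated btmp-<date> file per host, as described in the question:

- name: Move the rotated btmp file back (the shell expands the wildcard)
  shell: mv /var/log/btmp-* /var/log/btmp
  args:
    removes: /var/log/btmp-*    # skip the task when nothing matches; creates/removes accept glob patterns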
There's more than one way to do this in Ansible, and using the shell module is certainly a viable way to do it (but you would need the shell module in place of command as the latter does not support wildcards). I would solve the problem as follows:
First create a task to find all matching files (i.e. /var/log/btmp*) and store them in a variable for later processing - this would look like this:
- name: Find all files named /var/log/btmp*
  ansible.builtin.find:
    paths: /var/log
    patterns: 'btmp*'
  register: find_btmp
This task uses the find module to locate all files called btmp* in /var/log - the results are stored in a variable called find_btmp.
Next create a task to copy the btmp* file to btmp. Now you may very well have more than one file matching the above pattern, and logically you don't want to rename them all to btmp as this simply keeps overwriting the file every time. Instead, let's assume you want only the newest file that you matched - we can use a clever Jinja2 filter to get this entry from the results of the first task:
- name: Copy the btmp* to the required filename
  ansible.builtin.copy:
    src: "{{ find_btmp.files | sort(attribute='mtime',reverse=true) | map(attribute='path') | first }}"
    dest: /var/log/btmp
    remote_src: yes
  when: find_btmp.failed == false
This task uses Ansible's copy module to copy our chosen source file to /var/log/btmp. The remote_src: yes parameter tells the copy module that the source file exists on the remote machine rather than the Ansible host itself.
We use a when clause to ensure that we don't run this copy operation if we failed to find any files.
Now let's break down that Jinja2 filter:
find_btmp.files - this is all of the files listed in our find_btmp variable
sort(attribute='mtime',reverse=true) - here we are sorting our list of files using the mtime (modification time) attribute - we're reverse sorting so that the newest entry is at the top of the list.
map(attribute='path') - we're using map to "extract" the path attribute of the files dictionary, as this is the only data we actually want to pass to the copy module - the path of the file itself
first - this selects only the first element in the list (i.e. the newest file as they were reverse sorted)
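If you want to see what that filter chain resolves to before using it, a throwaway debug task along these lines (purely illustrative) can help; matched is the count of files the find task returned:

- name: Show which file the filter chain selects
  ansible.builtin.debug:
    msg: "{{ find_btmp.files | sort(attribute='mtime', reverse=true) | map(attribute='path') | first }}"
  when: find_btmp.matched > 0    # guard against an empty result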
Finally, you asked for a move operation - there's no native "move" module in Ansible, so you will want to remove the source file after the copy. This can be done as follows (the Jinja2 filter is the same as before):
- name: Delete the original file
  ansible.builtin.file:
    path: "{{ find_btmp.files | sort(attribute='mtime',reverse=true) | map(attribute='path') | first }}"
    state: absent
  when: find_btmp.failed == false
Again we use a when clause to ensure we don't delete anything if we didn't find it in the first place.
I have tested this on Ansible 3.1.0/ansible-base 2.10.7 - if you're running Ansible 2.9 or earlier, remove the ansible.builtin. prefix from the module names (i.e. ansible.builtin.copy becomes copy).
Hope this helps you out!

Generate Ansible template variable only once

Context
I have an Ansible template that creates a configuration file. Most variables used by this template come from some stable source, such as variables defined in the playbook. However one variable contains a secret key which somehow needs to be generated if it does not already exist.
I want Ansible to generate the key on an as-needed basis. I do not want the key to be stored on the machine running Ansible, as is done with the password module. And naturally I want idempotence.
I currently have a working solution, but I'm not happy with it, hence my question here. My current solution uses an extra file which is included by the configuration file created by my template. This is possible as it is a PHP file. I have a task using the shell module that generates the secret when this extra file does not exist, and another that then creates the file using the registered variable.
- name: Generate secret key
  shell: openssl rand -hex 32
  register: secret_key_command
  when: not SecretKey.stat.exists

- name: Create SecretKey.php
  template:
    src: "SecretKey.php.j2"
    dest: "{{ some_dir }}/SecretKey.php"
  when: not SecretKey.stat.exists
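For context, the SecretKey variable referenced above presumably comes from a stat task that is not shown in the question, something along these lines:

- name: Check whether SecretKey.php already exists
  stat:
    path: "{{ some_dir }}/SecretKey.php"   # assumed to be the same destination as the template task
  register: SecretKey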
Surely there is a less contrived way to do this?
Question
Is there a nice way to have a variable that gets generated only once in an Ansible template?
If I understood correctly, you want to generate your template only if it doesn't already exist. You can do it as follows:
- name: Create SecretKey.php
  template:
    src: "SecretKey.php.j2"
    dest: "{{ some_dir }}/SecretKey.php"
    force: no
force: no tells the template module not to overwrite the file if it already exists. No need for an extra check.
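If the key generation itself still needs to live somewhere in the play, a minimal sketch combining the question's openssl task with force: no could look like this. It assumes the template reads secret_key_command.stdout; on later runs the key is simply regenerated and discarded, because the existing file is never overwritten:

- name: Generate a candidate secret key
  shell: openssl rand -hex 32
  register: secret_key_command
  changed_when: false          # producing a value does not change the host

- name: Create SecretKey.php only on the first run
  template:
    src: "SecretKey.php.j2"
    dest: "{{ some_dir }}/SecretKey.php"
    force: no                  # never overwrite an existing file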

Ansible, save variable content to new file

I'm trying to create an Ansible task to save the content of a variable to a new file.
Using Ansible 2.5.13 and Python 2.7.5.
I've already tried copying the content of the variable to the destination path where the file should be created...
- name: Save alert rule example file to Prometheus
  copy:
    content: "{{ alert_rule_config }}"
    dest: /opt/compose/prom/alert_rule.yml
I also tried creating the file before copying the content of the variable:
- name: Create alert rule file
  file:
    path: /opt/compose/prom/alert_rule.yml
    state: touch

- name: Save alert rule example file to Prometheus
  copy:
    content: "{{ alert_rule_config }}"
    dest: /opt/compose/prom/alert_rule.yml
I also tried wrapping the destination path in quotes...
But no matter what, a directory /opt/compose/prom/alert_rule.yml/ gets created!
The content of the variable is something like
alert_rule_config:
  groups:
    - name: debug-metrics
      rules:
        - alert: DebugAlert
          expr: test_expression
I expect the file to be created (because it does not exist) and the content of the variable to be saved to the newly created file but the task fails with
FAILED! => {"changed": false, "msg": "can not use content with a dir as dest"}
I want to avoid issuing a command and would prefer to use an Ansible module.
You need to create the target directory rather than the target file, otherwise you will get "Destination directory /opt/compose/prom does not exist" with the first option, or "Error, could not touch target: [Errno 2] No such file or directory: '/opt/compose/prom/alert_rule.yml'" with the second:
- name: Create alert rule containing directory
  file:
    path: /opt/compose/prom/
    state: directory

- name: Save alert rule example file to Prometheus
  copy:
    content: "{{ alert_rule_config }}"
    dest: /opt/compose/prom/alert_rule.yml
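Side note: alert_rule_config as shown in the question is a YAML mapping rather than a plain string, so the rendered content may not be formatted the way you expect. If you want the file written as readable YAML, a variant using the to_nice_yaml filter (an addition of mine, assuming that is the desired output format) would be:

- name: Save alert rule example file to Prometheus (serialized as YAML)
  copy:
    content: "{{ alert_rule_config | to_nice_yaml }}"
    dest: /opt/compose/prom/alert_rule.yml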
But as @Calum Halpin says, if during your tests you made a mistake and created a directory /opt/compose/prom/alert_rule.yml/, you need to remove it first.
This will happen if a directory /opt/compose/prom/alert_rule.yml already exists when your tasks are run.
To remove it as part of your tasks add
- file:
    path: /opt/compose/prom/alert_rule.yml
    state: absent
ahead of your other tasks.

How to create a file with a single line?

The documentation for file operations does not seem to mention a module which would allow me to create a file with one line. The closest is lineinfile but this module also inserts markers (so a minimum of three lines).
The line will be generated from variables so I do not want to use a local file transferred to the target.
Is there such a module? Or should I run a shell command such as
command: echo {{ myvariable }} > the_file_to_create
to generate it?
You can do it easily with the copy module, using force: no so that the file is only created with the given content when it does not yet exist (if the file already exists, its content is preserved):
- name: Copy the content
  copy:
    content: "{{ myvariable }}"
    dest: /location/of/the/file
    force: no
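One small detail: the echo command in the question would end the file with a newline, whereas content writes the variable exactly as given. If you want to mirror the echo behaviour (an assumption about the desired output, not something stated in the question), a minor tweak is:

- name: Copy the content, terminated with a newline
  copy:
    content: "{{ myvariable }}\n"
    dest: /location/of/the/file
    force: no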
Hope that might help you.

Finding file name in files section of current Ansible role

I'm fairly new to Ansible and I'm trying to create a role that copies a file to a remote server. The local file can have a different name every time I'm running the playbook, but it needs to be copied to the same name remotely, something like this:
- name: copy file
  copy:
    src=*.txt
    dest=/path/to/fixedname.txt
Ansible doesn't allow wildcards, so when I wrote a simple playbook with the tasks in the main playbook I could do:
- name: find the filename
  connection: local
  shell: "ls -1 files/*.txt"
  register: myfile

- name: copy file
  copy:
    src="files/{{ item }}"
    dest=/path/to/fixedname.txt
  with_items: "{{ myfile.stdout_lines }}"
However, when I moved the tasks to a role, the first action didn't work anymore, because the relative path is relative to the role while the playbook executes in the root dir of the 'roles' directory. I could add the path to the role's files dir, but is there a more elegant way?
It looks like you need access to a task that looks up information locally, and then uses that information as input to the copy module.
There are two ways to get local information.
use local_action:. That's shorthand for running the task against 127.0.0.1; more info found here. (This is what you've been using.)
use a lookup. This is a plugin system specifically designed for getting information locally. More info here.
In your case, I would go for the second method, using lookup. You could set it up like this example:
vars:
  local_file_name: "{{ lookup('pipe', 'ls -1 files/*.txt') }}"

tasks:
  - name: copy file
    copy: src="{{ local_file_name }}" dest=/path/to/fixedname.txt
Or, more directly:
tasks:
  - name: copy file
    copy: src="{{ lookup('pipe', 'ls -1 files/*.txt') }}" dest=/path/to/fixedname.txt
With regards to paths
the lookup plugin is run from the context of the task (playbook vs role). This means that it will behave differently depending on where it's used.
In the setup above, the tasks are run directly from a playbook, so the working dir will be:
/path/to/project -- this is the folder where your playbook is.
If you were to add the task to a role, the working dir would be:
/path/to/project/roles/role_name/tasks
In addition, the file and pipe plugins run from within the role/files folder if it exists:
/path/to/project/roles/role_name/files -- this means your command is ls -1 *.txt
caveat:
The plugin is called every time you access the variable. This means you cannot debug the variable in your playbook and then rely on it having the same value when used later in a role!
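Putting the path note and the caveat together, inside a role the task could be written roughly like this. It is only a sketch and assumes the .txt file lives in the role's files/ directory, as described above:

- name: copy file (from within a role)
  copy:
    src: "{{ lookup('pipe', 'ls -1 *.txt') }}"    # per the note above, the lookup runs from the role's files/ folder
    dest: /path/to/fixedname.txt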
I do wonder, though, about the use-case for a file that resides inside a project's Ansible folders but whose name is not known in advance. Where does such a file come from? Isn't it possible to add a layer between the generation of the file and using it in Ansible... or to have a fixed local path as a variable? Just curious ;)
Just wanted to throw in an additional answer... I have the same problem as you, where I build an ansible bundle on the fly and copy artifacts (rpms) into a role's files folder, and my rpms have versions in the filename.
When I run the ansible play, I want it to install all rpms, regardless of filenames.
I solved this by using the with_fileglob mechanism in ansible:
- name: Copy RPMs
  copy: src="{{ item }}" dest="{{ rpm_cache }}"
  with_fileglob: "*.rpm"
  register: rpm_files

- name: Install RPMs
  yum: name={{ item }} state=present
  with_items: "{{ rpm_files.results | map(attribute='dest') | list }}"
I find it a little bit cleaner than the lookup mechanism.
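As a small follow-up to the install step: the yum module also accepts a list for name, so the same filter chain could install everything in a single transaction (a variant I'm suggesting, not shown above):

- name: Install RPMs in one transaction
  yum:
    name: "{{ rpm_files.results | map(attribute='dest') | list }}"
    state: present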
