Ansible Builtin Lineinfile to ~/.bashrc

I'm relatively new to ansible, so apologies if this question misses something.
My goal is to add a line to the ~/.bashrc file with ansible. I think the best way to do that is with the ansible.builtin.lineinfile module.
Unfortunately, when I run the module it appears to run properly on the target host, reports 'changed' on the first run (and 'ok' on subsequent runs), but no changes are actually made to the ~/.bashrc file.
Appreciate any help in figuring out what changes would be needed to create the desired outcome.
---
- hosts: setup
  become: true
  vars_files:
    - /etc/ansible/vars.yml
  tasks:
    - name: Test lineinfile
      ansible.builtin.lineinfile:
        path: ~/.bashrc
        line: "test lineinfile"

Changed path: ~/.bashrc to path: .bashrc and it worked. (With become: true the task runs as root, so ~ expanded to /root and the line was landing in /root/.bashrc rather than the intended user's file.)
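For reference, a minimal sketch of an equivalent fix (assuming the file belongs to the connecting user; become is disabled for this one task so ~ resolves to their home):

- name: Test lineinfile
  ansible.builtin.lineinfile:
    path: ~/.bashrc
    line: "test lineinfile"
  become: false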

Multiple lines could be done this way:
- name: Add an environment variable to the remote user's shell.
  lineinfile:
    dest: "~/.bashrc"
    line: |
      Line 1
      Line 2
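Note that lineinfile matches against single lines of the file, so a multi-line value like this may be re-added on every run. If the block should stay idempotent, blockinfile (also used further down this page) may be the better fit; a minimal sketch:

- name: Add multiple lines to the remote user's shell
  blockinfile:
    path: ~/.bashrc
    block: |
      Line 1
      Line 2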

Ansible: How do I run an entire playbook based on a condition?

Context: provisioning fresh new servers.
I would like to provision them just once, especially the update part.
After launching the first playbook, bootstrap.yml, I made it leave a file in a folder so I know the playbook ran well.
So then, in the same playbook, I would have to add a condition above every task (which I did) to check for the existence of this file.
If the file exists, skip running the playbook against all those machines that have that file.
Problem: How do I add the condition to run the playbook only if the file isn't found? I don't want to add a "when:" statement to each of my tasks; I find it silly.
Question: Does anyone have a better solution that maybe solves this with a single line or a parameter in the inventory file that I haven't thought of?
edit:
This is how I check for the file:
- name: Bootstrap check
  find:
    path: /home/bot/bootstrapped-ok
  register: bootstrap
and then the when condition would be:
when: bootstrap.matched == 0
so if the file is not found, run the entire playbook.
I think you may be over-complicating this slightly. Would it be accurate to say "I want to bail early on a playbook without error under a certain condition?"
If so, you want "end_host"
How do I exit Ansible play without error on a condition
Just begin with a check for the file, and end_host if it's found (end_host ends the play for the current host only, so the remaining hosts carry on).
- name: Bootstrap check
  stat:
    path: /home/bot/bootstrapped-ok
  register: bootstrap

- name: End the play if previous provisioning was successful
  meta: end_host
  when: bootstrap.stat.exists == True

- name: Continue if bootstrap missing
  debug:
    msg: "Hello"

How to set different environment variables for each managed host in ansible

I have an inventory file which looks like this:
[abc]
dsad1.jkas.com
dsad2.jkas.com
[def]
dsad3.jkas.com
dsad4.jkas.com
[main:children]
abc
def
[main:vars]
ansible_user="{{ lookup('env', 'uid') }}"
ansible_password="{{ lookup('env', 'pwd') }}"
ansible_connection=paramiko
My main.yaml looks like this:
---
- hosts: "{{ my_hosts }}"
  roles:
    - role: dep-main
      tags:
        - main
    - role: dep-test
      tags:
        - test
cat roles/dep-main/tasks/main.yaml
- name: run playbook
  script: path/scripts/dep-main.sh
where I have a scripts folder containing dep-main.sh; I am using the script module to run the shell script on the remote machine.
"ansible-playbook -i inventory -e "my_hosts=main" --tags main main.yaml"
I am following the above design for a new requirement. Now the challenge is that I need to set environment variables for each host, and the variables differ per host. How can I achieve this?
There are around 15 env key-value pairs that need to be exported on each host. 10 of them are common, so I'll simply put those in the shell script above, whereas the other 5 key-value pairs differ for each host, like below:
dsad1.jkas.com
sys=abc1
cap=rty2
jam=yup4
pak=hyd4
jum=563
dsad2.jkas.com
sys=abc45
cap=hju
jam=upy
pak=upsc
jum=y78
please help.
Please see the Ansible documentation for this, as it's fairly self-explanatory:
https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html
I am editing my initial answer to try to make it clearer. But please note that I am assuming a few things here:
you are using Linux
you use bash as your shell (Linux offers a variety of shells and the syntax differs)
when you say export environment variables I assume you mean copy the variables to the remote host in some form and export them to the shell
Also, there are a few things left to do, but I have to leave them to you:
You need to "load" the env file so the variables are available to the shell (export). You can add an extra line to the ".bashrc" file to source the file, like "source /etc/env_variables", but that will depend on how you want to use the vars. You can easily do that with your shell script or with Ansible (lineinfile); see the sketch after this list.
I recommend you use a separate file like I did and don't edit the bashrc adding all the variables. It will make your life easier in the future if you want to update the variables.
Rename the env file properly so you know what it's for, and add some backup mechanism - check the documentation https://docs.ansible.com/ansible/latest/collections/ansible/builtin/blockinfile_module.html
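A minimal sketch of that sourcing step (assuming bash and the /etc/env_variables path used below; run it as the target user so ~ resolves to their home):

- name: Source the env file from the user's .bashrc
  lineinfile:
    path: ~/.bashrc
    line: 'source /etc/env_variables'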
Inventory file with the variables:
all:
  hosts:
    <your_host_name1>:
      sys: abc1
      cap: rty2
      jam: yup4
      pak: hyd4
      jum: 563
    <your_host_name2>:
      sys: abc45
      cap: hju
      jam: upy
      pak: upsc
      jum: y78
My very simplistic playbook:
- hosts: all
  become: yes
  become_method: sudo
  tasks:
    - name: Create env file and add variables
      blockinfile:
        path: /etc/env_variables
        create: yes
        block: |
          export sys={{ sys }}
          export cap={{ cap }}
          export jam={{ jam }}
          export pak={{ pak }}
          export jum={{ jum }}
The final result on the servers I used for testing is the following:
server1:
# BEGIN ANSIBLE MANAGED BLOCK
export sys=abc1
export cap=rty2
export jam=yup4
export pak=hyd4
export jum=563
# END ANSIBLE MANAGED BLOCK
server2:
# BEGIN ANSIBLE MANAGED BLOCK
export sys=abc45
export cap=hju
export jam=upy
export pak=upsc
export jum=y78
# END ANSIBLE MANAGED BLOCK
Hope this is what you are asking for.
For Windows you may have to find where the user variables are set and adjust the playbook to edit that file, if they are in a text file...

Find a file and rename it ansible playbook [duplicate]

This question already has answers here:
How to move/rename a file using an Ansible task on a remote system (13 answers)
So I have been trying to fix a mistake I made on all the servers by using a playbook. Basically, I launched a playbook with logrotate to fix the growing-logs problem, and among the logs is one named btmp, which I wasn't supposed to rotate but did anyway by accident; logrotate renamed it with a date appended, thereby breaking the log. Now I want a playbook that will find a log named btmp in the /var/log directory and rename it back. The problem is that the file name differs on each server; for example, one server has btmp-20210316 and another has btmp-20210309. On the bash command line one would use the wildcard "btmp*" to get around this, but that does not appear to work in a playbook. So far I came up with this:
tasks:
  - name: stat btmp*
    stat: path=/var/log
    register: btmp_stat

  - name: Move btmp
    command: mv /var/log/btmp* /var/log/btmp
    when: btmp_stat.stat.exists
However this results in an error that the file was not found. So my question is: how does one get the wildcard working in a playbook, or is there an equivalent way to find all files that have "btmp" in their names and rename them? BTW all servers are CentOS 7.
So I will add my own solution as well, even though the accepted answer's solution is better.
Make a bash script with a single line, anywhere on your Ansible VM.
The line is: mv /var/log/filename* /var/log/filename
And now create a playbook to run it on the target VM:
---
- hosts: '{{ server }}'
  remote_user: username
  become: yes
  become_method: sudo
  vars_prompt:
    - name: "server"
      prompt: "Enter server name or group"
      private: no
  tasks:
    - name: Move the script to the target host VM
      copy: src=/anywhereyouwant/bashscript.sh dest=/tmp mode=0777
    - name: Execute the script
      command: sh /tmp/bashscript.sh
    - name: Delete the script
      command: rm /tmp/bashscript.sh
There's more than one way to do this in Ansible, and using the shell module is certainly a viable way to do it (but you would need the shell module in place of command as the latter does not support wildcards). I would solve the problem as follows:
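For reference, a minimal shell-based sketch of that approach (assuming exactly one rotated file matches; the shell expands the wildcard, but mv errors if several files match):

- name: Rename the rotated btmp back via the shell
  shell: mv /var/log/btmp-* /var/log/btmp
  args:
    creates: /var/log/btmp  # skip once btmp exists again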
First create a task to find all matching files (i.e. /var/log/btmp*) and store them in a variable for later processing - this would look like this:
- name: Find all files named /var/log/btmp*
  ansible.builtin.find:
    paths: /var/log
    patterns: 'btmp*'
  register: find_btmp
This task uses the find module to locate all files called btmp* in /var/log - the results are stored in a variable called find_btmp.
Next create a task to copy the btmp* file to btmp. Now you may very well have more than one file matching the above pattern, and logically you don't want to rename them all to btmp, as that would simply keep overwriting the file every time. Instead, let's assume you want only the newest file that matched - we can use a clever Jinja2 filter to get this entry from the results of the first task:
- name: Copy the btmp* to the required filename
  ansible.builtin.copy:
    src: "{{ find_btmp.files | sort(attribute='mtime',reverse=true) | map(attribute='path') | first }}"
    dest: /var/log/btmp
    remote_src: yes
  when: find_btmp.matched > 0
This task uses Ansible's copy module to copy our chosen source file to /var/log/btmp. The remote_src: yes parameter tells the copy module that the source file exists on the remote machine rather than the Ansible host itself.
We use a when clause to ensure that we don't run this copy operation if we found no files (find does not fail when nothing matches, so we check the matched count).
Now let's break down that Jinja2 filter:
find_btmp.files - this is all of the files listed in our find_btmp variable
sort(attribute='mtime',reverse=true) - here we are sorting our list of files using the mtime (modification time) attribute - we're reverse sorting so that the newest entry is at the top of the list.
map(attribute='path') - we're using map to "extract" the path attribute from each entry in the files list, as this is the only data we actually want to pass to the copy module - the path of the file itself
first - this selects only the first element in the list (i.e. the newest file as they were reverse sorted)
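To sanity-check what the filter selects before copying anything, a quick debug task (same filter, purely illustrative):

- name: Show which file the filter selects
  ansible.builtin.debug:
    msg: "{{ find_btmp.files | sort(attribute='mtime',reverse=true) | map(attribute='path') | first }}"
  when: find_btmp.matched > 0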
Finally, you asked for a move operation - there's no native "move" module in Ansible, so you will want to remove the source file after the copy - this can be done as follows (the Jinja2 filter is the same as before):
- name: Delete the original file
  ansible.builtin.file:
    path: "{{ find_btmp.files | sort(attribute='mtime',reverse=true) | map(attribute='path') | first }}"
    state: absent
  when: find_btmp.matched > 0
Again we use a when clause to ensure we don't delete anything if we didn't find it in the first place.
I have tested this on Ansible 3.1.0/ansible-base 2.10.7 - if you're running Ansible 2.9 or earlier, remove the ansible.builtin. from the module names (i.e. ansible.builtin.copy becomes copy.)
Hope this helps you out!

With Ansible what is the best way of remembering a command has executed successfully previously?

With the command module, if the command creates a file, you can check whether that file exists. If it does, it prevents the command from executing again.
- command: touch ~/myfile
  args:
    creates: ~/myfile
However, if the command does not create a file then on a re-run it executes again.
To avoid a second execution, I create some random file on a change (notify) as follows:
- command: dothisonceonly # this does not create a file
  args:
    creates: ~/somefile
  notify: done
then the handler:
- name: done
  command: touch ~/somefile
This approach works but is a bit ugly. Can anyone shed light on best practice? Maybe setting some fact? Maybe a whole new approach?
It is a fact (in common language) that a command was run successfully on a specific target host, so the most appropriate would be to use local facts (in Ansible vernacular).
In your handler, save the state as a JSON file under /etc/ansible/facts.d with the copy module and its content parameter.
It will be retrieved and accessible whenever you run a play against the host through the regular fact-gathering process.
Then you can control the tasks with a when condition (you need to include the default filter for the situation when no fact exists yet).
Ideally with Ansible, check for the state that was changed by the first command rather than using a file as a proxy. The reason being that checking the actual state of something provides better immutability since it is tested on every pass.
If there is a reason not to use that approach, then register the result of the command and use that instead of a notifier to trigger creation of the file.
- command: dothisonceonly # this does not create a file
  args:
    creates: ~/somefile
  register: result

- file:
    path: ~/somefile
    state: touch
  when: result is succeeded
If you are curious to see what is happening here, add:
- debug: var=result
Be aware with notifiers that they are run at the end of the play. This means that if a notifier is triggered by a task but the play then fails to complete, the notifier will not be run. Conversely, there are Ansible options which cause notifiers to run even when not triggered by tasks.
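If that end-of-play timing is a problem, pending handlers can be flushed mid-play; a minimal sketch:

- meta: flush_handlers  # run any handlers notified so far, right now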
This is my actual implementation based on the answer by @techraf
- command: echo 'phew'
  register: itworks
  notify: Done itworks
  when: ansible_local.itworks | d(0) == 0
and handler:
- name: Done itworks
  copy:
    content: '{{ itworks }}'
    dest: /etc/ansible/facts.d/itworks.fact
Docs:
http://docs.ansible.com/ansible/playbooks_variables.html#local-facts-facts-d
Thanks @techraf, this works great and persists facts.
EDIT
Applied the default-value logic from @techraf's comment.

Running Python script via ansible

I'm trying to run a Python script from an Ansible playbook. I would think this would be an easy thing to do, but I can't figure it out. I've got a project structure like this:
playbook-folder
  roles
    stagecode
      files
        mypythonscript.py
      tasks
        main.yml
  release.yml
I'm trying to run mypythonscript.py within a task in main.yml (which is a role used in release.yml). Here's the task:
- name: run my script!
  command: ./roles/stagecode/files/mypythonscript.py
  args:
    chdir: /dir/to/be/run/in
  delegate_to: 127.0.0.1
  run_once: true
I've also tried ../files/mypythonscript.py. I thought the path for ansible would be relative to the playbook, but I guess not?
I also tried debugging to figure out where I am in the middle of the script, but no luck there either.
- name: figure out where we are
  stat: path=.
  delegate_to: 127.0.0.1
  run_once: true
  register: righthere

- name: print where we are
  debug: msg="{{ righthere.stat.path }}"
  delegate_to: 127.0.0.1
  run_once: true
That just prints out ".". So helpful ...
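(The stat module just reports back the path argument it was given, which is why "." comes out unchanged. A small sketch to see the actual working directory and the playbook's location instead; herepwd is a name made up for this example:)

- name: figure out where we really are
  command: pwd
  delegate_to: 127.0.0.1
  run_once: true
  register: herepwd

- name: print both
  debug: msg="cwd={{ herepwd.stdout }} playbook_dir={{ playbook_dir }}"
  delegate_to: 127.0.0.1
  run_once: true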
Try to use the script module; it works for me.
my main.yml
---
- name: execute install script
  script: get-pip.py
and the get-pip.py file should be in the files directory of the same role.
If you want to be able to use a relative path to your script rather than an absolute path, then you might be better off using the role_path magic variable to find the path to the role and work from there.
With the structure you are using in the question the following should work:
- name: run my script!
  command: ./mypythonscript.py
  args:
    chdir: "{{ role_path }}/files"
  delegate_to: 127.0.0.1
  run_once: true
An alternative, straightforward solution:
Let's say you have already built your virtual env under ./env1 and used pip3 to install the needed Python modules.
Now write the playbook task like this:
- name: Run a script using an executable in a system path
  script: ./test.py
  args:
    executable: ./env1/bin/python
  register: python_result

- name: Get stdout or stderr from the output
  debug:
    var: python_result.stdout
If you want to execute an inline script without having a separate script file (for example, as a molecule test), you can write something like this:
- name: Test database connection
  ansible.builtin.command: |
    python3 -c
    "
    import psycopg2;
    psycopg2.connect(
        host='127.0.0.1',
        dbname='db',
        user='user',
        password='password'
    );
    "
You can even insert Ansible variables in this string.
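For instance, a minimal sketch (inventory_hostname is a built-in Ansible variable; any host variable works the same way):

- name: Print a templated value from inline Python
  ansible.builtin.command: python3 -c "print('{{ inventory_hostname }}')"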
