How do I use ${USER} in the ansible_python_interpreter variable?
In deploy.yml I have several tasks; the first task installs Python into /local/${USER}/venv. The follow-up tasks should use the installed Python, not the one from my own environment.
I tried different combinations, but none of them worked.
deploy.yml
- name: use installed python
  hosts: localhost
  vars_files:
    - settings.yml
  vars:
    # commented items did not work, neither here nor in settings.yml
    # ansible_python_interpreter: "/local/{{ lookup('env', 'USER') }}/venv/bin/python"
    # ansible_python_interpreter: "/local/${USER}/venv/bin/python"
    # venv_dir is defined in settings.yml
    # ansible_python_interpreter: "{{ venv_dir }}/bin/python"
    # a hardcoded path worked:
    ansible_python_interpreter: "/local/myuser/bin/python"
The error looks something like:

fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "/bin/sh: {{venv_dir}}/bin/python: No such file or directory\n", "module_stdout": "", "msg": "MODULE FAILURE"}
settings.yml:

---
root_dir: "/local/{{ lookup('env', 'USER') }}"
venv_dir: "{{ root_dir }}/venv/"
Note: moving ansible_python_interpreter into settings.yml does not help.
Am I missing something?
I guess ansible_python_interpreter is not templated under the hood.
You can use a set_fact workaround:
---
- hosts: localhost
  gather_facts: false
  tasks:
    # default Python here
    - shell: echo hello

    - set_fact:
        ansible_python_interpreter: "/local/{{ lookup('env', 'USER') }}/venv/bin/python"

    # modified Python here
    - shell: echo hello
To set it globally, use the interpreter_python key in the [defaults] section of ansible.cfg:

interpreter_python = /usr/bin/python2.7
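For context, a minimal ansible.cfg sketch with that key (the interpreter path here is illustrative; use whatever Python your hosts actually have):

```ini
[defaults]
# applies to all hosts unless overridden by ansible_python_interpreter
interpreter_python = /usr/bin/python3
```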
I want to encrypt my host credentials in a central secrets.yml file.
How can I tell Ansible, to use the variables?
I tried with this setup:
host_vars/test.yml
ansible_user: {{ test_user }}
ansible_become_pass: {{ test_pass }}
secrets.yml
# Credentials Test Server #
test_user: user
test_pass: password
inventory.yml

all:
  children:
    test:
      hosts:
        10.10.10.10:
playbook.yml

---
- name: Update Server
  hosts: test
  become: yes
  vars_files:
    - secrets.yml
  tasks:
    - name: Update
      ansible.builtin.apt:
        update_cache: yes
For execution I use this command:
ansible-playbook -i inventory.yml secure_linux.yml --ask-vault-pass
During execution I get this Error Message:
fatal: [10.10.10.10]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: root@10.10.10.10: Permission denied (publickey,password).", "unreachable": true}
For those credentials to be used by all hosts, use the group_vars/all directory. So you will have the file group_vars/all/secrets.yml, which you will encrypt with ansible-vault.
ansible_user: user
ansible_password: password
You do not need a host_vars file.
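To put the pieces together, the workflow would look something like this (filenames as suggested above; this assumes the playbook from the question):

```shell
# encrypt the vars file in place, then run with the vault password prompt
ansible-vault encrypt group_vars/all/secrets.yml
ansible-playbook -i inventory.yml playbook.yml --ask-vault-pass
```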
The solution was:
give the host_vars file the right name (10.10.10.10.yml)
add ansible_password as variable
use quotation marks "{{ test_user }}"
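Putting those three fixes together, the host_vars file would look something like this (a sketch based on the variables above):

```yaml
# host_vars/10.10.10.10.yml -- filename must match the inventory host
ansible_user: "{{ test_user }}"
ansible_password: "{{ test_pass }}"
ansible_become_pass: "{{ test_pass }}"
```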
I need to set the date on a remote host, so I read the localhost date and then need the other host's "ipv4_address", which is defined in the Ansible hosts file.
- hosts: localhost
  become_user: root
  tasks:
    - name: align datetime
      shell: |
        data="$(date +'%Y/%m/%d %H:%M:00')"
        ssh user@{{ otherhost.ipv4_address }} "sudo date -s $data"

- hosts: otherhost
  become: true
  # ... its tasks ...
but it seems that is not the correct way to get the ipv4:
fatal: [127.0.0.1]: FAILED! => {"msg": "The task includes an option
with an undefined variable. The error was: 'otherhost.ipv4_address ' is
undefined\n\nThe error appears to be in
ansible --version
ansible 2.9.2
config file = None
configured module search path = ['/home/tec1/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tec1/.local/share/virtualenvs/sniperx-EdPGXWMw/lib/python3.7/site-packages/ansible
executable location = /home/tec1/.local/share/virtualenvs/sniperx-EdPGXWMw/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
The problem is that {{ otherhost.ipv4_address }} is not available at the time you are using it, as Ansible hasn't gathered facts about otherhost.
A few options are available to you:
Gather facts before you start using the data, for example:

---
- hosts: all
  gather_facts: true
  # here you gather facts about otherhost

- hosts: localhost
  tasks:
    - debug: var=hostvars["otherhost"].ansible_all_ipv4_addresses
      # and here you can access those facts
Run the playbook at the remote host and delegate the localhost time reading beforehand, for example:

- hosts: otherhost
  tasks:
    - name: get localhost time
      shell: date +'%Y/%m/%d %H:%M:00'
      delegate_to: localhost
      become: false
      register: local_date

    - name: set the date on the server
      shell: sudo date -s '{{ local_date.stdout_lines[0] }}'
This takes advantage of Ansible's way of auto-SSH-ing into the target box, so you don't have to ssh manually.
I have defined the following in my ansible.cfg
# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
remote_user = ansible
However, I have a playbook bootstrap.yaml where I connect with root rather than ansible:

---
- hosts: "{{ target }}"
  become: no
  gather_facts: false
  remote_user: root
  vars:
    os_family: "{{ osfamily }}"
  roles:
    - role: papanito.bootstrap
However, it seems that remote_user: root is ignored, as I always get a connection error because it uses the user ansible instead of root for the SSH connection:

fatal: [node001]: UNREACHABLE! => {"changed": false,
"msg": "Failed to connect to the host via ssh:
ansible@node001: Permission denied (publickey,password).",
"unreachable": true}
The only workaround I could find is calling the playbook with -e ansible_user=root. But this is not convenient, as I want to call multiple playbooks with site.yaml, where the first playbook has to run with ansible_user root, whereas the others have to run with ansible:
- import_playbook: playbooks/bootstrap.yml
- import_playbook: playbooks/networking.yml
- import_playbook: playbooks/monitoring.yml
Any suggestions what I am missing or how to fix it?
Q: "remote_user: root is ignored"
A: The playbook works as expected:

- hosts: test_01
  gather_facts: false
  become: no
  remote_user: root
  tasks:
    - command: whoami
      register: result

    - debug:
        var: result.stdout
gives
"result.stdout": "root"
But, the variable can be overridden in the inventory. For example with the inventory
$ cat hosts
all:
  hosts:
    test_01:
  vars:
    ansible_connection: ssh
    ansible_user: admin
the result is
"result.stdout": "admin"
Double-check the inventory with the command
$ ansible-inventory --list
Notes
It might also be necessary to double-check the role - role: papanito.bootstrap.
See Controlling how Ansible behaves: precedence rules
I faced a similar issue, where an EC2 instance required a different username to SSH with. You could try something like the example below:
- import_playbook: playbooks/bootstrap.yml
  vars:
    ansible_ssh_user: root
Try this: instead of remote_user: root, use remote_user: ansible together with become: yes, become_user: root, and become_method: sudo (or su).
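A minimal sketch of that suggestion (the host pattern and the whoami task are illustrative):

```yaml
- hosts: all
  remote_user: ansible   # connect over SSH as the ansible user
  become: yes            # then escalate...
  become_method: sudo
  become_user: root      # ...to root for the tasks
  tasks:
    - command: whoami
```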
I'm very, very new to Ansible so I just need someone to break down how to set up a yaml file to use as a playbook.
I wrote this command, which does work:
ansible Test --user exampleuser --ask-pass -c local -m ping
Output:
192.168.1.4 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
How do I format what I wrote so I can just type:
ansible-playbook test.yaml
Below is what the content of the YAML file should look like:
---
- hosts: Test
  connection: local
  remote_user: exampleuser
  tasks:
    - ping:
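Since the original ad-hoc command used --ask-pass, the playbook run will likely need the same password prompt; -k is the short form of that flag:

```shell
ansible-playbook test.yaml -k
```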
I am using Ansible 2.1.0.0
I try to use become_user with a variable in a task, but I receive the following message:
fatal: [host]: FAILED! => {"failed": true, "msg": "'ansible_user' is undefined"}
The task executing this is
- name: Config git user name
git_config: name=user.name scope=global value={{ ansible_host }}
become: Yes
become_user: "{{ansible_user}}"
And the playbook has the following line to define the remote user:
- name: Foo
  hosts: foo
  vars:
    http_port: 80
  remote_user: admin
I've seen this response, which seems to address the same problem, but it does not work for me.
I have also seen a set_fact solution, but I would like to use the remote_user var if possible, so that no extra lines must be added if a playbook already has remote_user set.
Does anyone know how to do this or what I am doing wrong?
What about this:

- name: Foo
  hosts: foo
  vars:
    http_port: 80
    my_user: admin
  remote_user: "{{ my_user }}"

then:

- name: Config git user name
  git_config: name=user.name scope=global value={{ ansible_host }}
  become: yes
  become_user: "{{ my_user }}"
I think I found it:

become_user: "{{ ansible_ssh_user }}"

In fact, remote_user: admin is another way of defining the variable ansible_ssh_user. I don't know why remote_user is not accessible as a variable, but what I do know is that when you set remote_user, it changes the variable ansible_ssh_user.
Not sure if it's a clean solution, though, but it works.
I had a similar problem trying to use {{ ansible_ssh_user }}:
fatal: [xxx]: FAILED! => {"msg": "The field 'become_user' has an
invalid value, which includes an undefined variable. The error was:
'ansible_user' is undefined"}
I fixed this error using this approach:
- name: Backups - Start backups service
  shell:
    cmd: systemctl --user enable backups.service && systemctl --user restart backups.service
    executable: /bin/bash
  become: true
  become_method: sudo
  become_user: "{{ lookup('env','USER') }}"
I hope this helps.