Retrieve Remmina password from Ubuntu keyring in Ansible? - ansible

I can add passwords to Ubuntu's system keyring, retrievable by Ansible, with the command
keyring set myservice username
after installing the helper with sudo apt install python-keyring. The password is then retrievable within an Ansible playbook, for example with
ansible_become_pass: "{{ lookup('keyring','myservice username') }}"
See the documentation page for more examples.
But I have reinstalled a local computer and want to re-install/configure a VNC server on it via Ansible. I would like to pick up the saved VNC password that already exists in my computer's (the client's) keyring, because I don't want the password in plaintext in the middle of a playbook. This password was saved in the keyring by Remmina and does not follow the python-keyring format.
Is there a way to retrieve this password from within an Ansible playbook?
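One possible route (a sketch, not an answer given in the thread): Remmina keeps its secrets in the same GNOME keyring, so libsecret's secret-tool can read them on the controller and Ansible's pipe lookup can capture the output. The attribute names (filename, key) and the .remmina path below are assumptions; inspect the stored item first (for example with secret-tool search) to see what Remmina actually recorded.
# Hedged sketch: attribute names and path are guesses; verify them with secret-tool first.
- hosts: localhost
  vars:
    vnc_password: "{{ lookup('pipe', 'secret-tool lookup filename /home/me/.local/share/remmina/myhost.remmina key password') }}"
  tasks:
    - name: Confirm the secret was retrieved without printing it
      debug:
        msg: "Retrieved a VNC password of length {{ vnc_password | length }}"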

Related

Getting a password prompt when running ansible-playbook

I am new to Ansible. I have written an Ansible playbook to install VNC. I want to ensure that when someone runs the playbook they are prompted for a password. I was able to put the playbook together and it prompts for the password, but it accepts any password.
---
- hosts: test-server
  vars_prompt:
    - name: password
      prompt: "What is your password?"
      private: yes
  tasks:
    - name: install tightvncserver
      package: pkg=tightvncserver state=installed
      notify:
        - start tightvncserver
  handlers:
    - name: start tightvncserver
      service: name=tightvncserver state=started
Please excuse the indentation. Any help will be appreciated.
This is where Ansible Vault comes into the picture: any password or other confidential information should be stored in Ansible Vault. If you are not worried about security, you can simply add a when condition to check whether the password matches a specific string; otherwise, the best way to solve this is to prompt for the Ansible Vault password and fetch the confidential information from the vault.
Alternatively, you can store your password as a key/value pair in a YAML variable file, include that file in your playbook, and add a when condition to check whether the password provided equals the password in that variable file.
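A minimal sketch of that second suggestion, assuming the expected password lives in an encrypted vars file (vault.yml is an assumed name, encrypted with ansible-vault encrypt vault.yml and unlocked at run time with --ask-vault-pass):
---
- hosts: test-server
  vars_prompt:
    - name: password
      prompt: "What is your password?"
      private: yes
  vars_files:
    - vault.yml                    # assumed to define: expected_password: "..."
  tasks:
    - name: install tightvncserver
      package:
        name: tightvncserver
        state: present
      when: password == expected_password
      notify:
        - start tightvncserver
  handlers:
    - name: start tightvncserver
      service:
        name: tightvncserver
        state: started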

Setting vault password in Ansible Tower

I have used Ansible Vault to encrypt a file in which I have stored sensitive data.
In my orchestration script below command is mentioned to run the playbook.
ansible-playbook -i hosts -vvv Playbook.yml --ask-vault-pass
This prompts user to provide password for Ansible vault.
Now how can I achieve the same through Ansible Tower, so that no manual intervention is needed?
I do not want this to be done through a password file, as that is the requirement.
Any suggestion would be a great help.
On Ansible Tower, go to Settings > Credentials and edit your Machine Credentials. There is an option to enter your vault password. When you run the playbook on Ansible Tower, the vault password should automatically be entered. You can also check the box "Ask at runtime?" if you want to manually enter your vault password when the playbook is running.
Here is an overview of this functionality in the "Vault Support" section of this page: https://www.ansible.com/blog/ansible-tower-148

Where are SSH keys for other servers stored? - Ansible

I am a newbie to Ansible. I have managed to write playbooks that set up Apache, Tomcat and others, all on localhost. I am now trying to move this to other servers to test the playbooks.
I have done the following:
1. Added a section [webservers] in /etc/ansible/hosts and put the public IP for that instance there.
2. I invoked ansible-playbook like so:
ANSIBLE_KEEP_REMOTE_FILES=1 ansible-playbook -vvvv -s serverSetup.yml
My questions:
1. Where do I store the public SSH key for the target server?
2. How do I specify which public key to use?
There are a number of ways to do this: ansible.cfg, set_fact, environment vars.
ansible.cfg
You can have an Ansible config file (ansible.cfg) within your project folder that states which key to use, with the following line:
private_key_file = /path/to/key/key1.pem
You can see an example of an ansible.cfg file here: https://raw.githubusercontent.com/ansible/ansible/devel/examples/ansible.cfg
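For context, that line belongs under the [defaults] section of a project-local ansible.cfg, e.g. (a minimal sketch):
# ansible.cfg in the project root
[defaults]
private_key_file = /path/to/key/key1.pem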
set_fact
You can add the key using the set_fact module within your playbook; this can be hardcoded as below or templated:
- name: Use a particular private key for this playbook
  set_fact:
    ansible_ssh_private_key_file: /path/to/key/key1.pem
http://docs.ansible.com/ansible/set_fact_module.html
environment vars
See this stackoverflow post's answer for more information:
how to define ssh private key for servers fetched by dynamic inventory in files
Where do I store the public SSH key for the target server?
Wherever makes sense. Since these are keys that I may use to directly connect to the machine, I usually store them in ~/.ssh/ with my other private keys. For projects where I'm working on multiple computers or with other users, I store them in Ansible Vault and have a playbook that extracts them and stores them on the local machine.
How do I specify which public key to use?
group_vars is a good place to specify ansible_ssh_private_key_file.
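For instance, a group_vars file for the [webservers] group from the question might look like this (the user name and key path are assumptions):
# group_vars/webservers.yml
---
ansible_user: ubuntu
ansible_ssh_private_key_file: ~/.ssh/key1.pem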
Ansible uses a user to connect to the target machine.
So if your user is ubuntu (-u ubuntu in the ansible flags), the key will be checked against ~ubuntu/.ssh/authorized_keys on the target machine.
And from the ansible --help output you have:
--private-key=PRIVATE_KEY_FILE, --key-file=PRIVATE_KEY_FILE use this file to authenticate the connection
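Putting that flag together with the question's invocation, a possible command (the user name and key path are assumptions) would be:
ANSIBLE_KEEP_REMOTE_FILES=1 ansible-playbook -vvvv -s -u ubuntu --private-key ~/.ssh/key1.pem serverSetup.yml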

How to make ansible only ask for become password when required

I am using ansible 2.0.2.0 to update my static website from any computer. My playbook runs on localhost only and essentially has two parts:
Privileged part: Ensure packages are installed, essentially apt tasks with become: true
Unprivileged part: Fill in templates, minify and sync with web hosting service, essentially command tasks without become.
I would prefer having these two parts in the same playbook so that I do not need to worry about dependencies when switching computers. Ideally, I would like ansible to check if the apt packages are installed and only ask for the become password if it needs to install any.
Other, less satisfactory alternatives that I have explored so far and their drawbacks are highlighted below:
sudo ansible-playbook ...: Runs the unprivileged part as root, asks sudo password when not required;
ansible-playbook --ask-become-pass ...: Always asks sudo password, even if no new packages need to be installed;
ansible-playbook ...: Fails with sudo: a password is required.
Is there any way to keep the privileged and unprivileged parts in the same playbook without needlessly typing the sudo password nor giving needless privileges to the unprivileged part?
If you run ansible-playbook with the --ask-sudo-pass parameter, then your second option will ask you for the password once and reuse it each time it is needed.
If you do run as sudo, as in your first case, then you can use become/become_user within the playbook to drop your privileges where you need to.
However, you can set up ansible.cfg to do remote-style installs to localhost. That way you can set up an unprivileged ansible user (I use centos), which is configured to sudo without needing a password. Then I put my local user's key in the authorized_keys of the centos user.
Hence you run unprivileged (as centos), but when you need to sudo, you can use become_method: sudo to become root.
Using this method I do bare-metal installs with the same Ansible playbook as I do remote AWS installs.
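The passwordless sudo this relies on could be granted with a sudoers drop-in along these lines (a sketch; centos is the unprivileged user from this answer):
# /etc/sudoers.d/centos -- allow the ansible user to sudo without a password
centos ALL=(ALL) NOPASSWD: ALL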
Looking at my ansible.cfg I have:
[defaults]
hostfile = inventory
# use local centos account, and ask for sudo password
remote_user = centos
#ask_pass = true
#private_key_file = ~/packer/ec2_amazon-ebs.pem
My inventory file contains:
[webservers]
localhost
My setup.sh contains:
ansible-playbook playbook.yml -vvv
#ansible-playbook --ask-sudo-pass playbook.yml
Hence all password-asking options are off. Remember, since I don't specify a private_key_file in the defaults, it assumes the running user can ssh to centos@localhost without being asked for a password.
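Tying the pieces together, the original two-part playbook could then escalate only where it has to, since the centos user sudoes without a password (the package name and tasks below are placeholders):
---
- hosts: webservers                # localhost, via the inventory above
  remote_user: centos              # the unprivileged user from ansible.cfg
  tasks:
    - name: privileged part - ensure required packages are present
      become: true
      become_method: sudo
      package:
        name: rsync                # placeholder package
        state: present
    - name: unprivileged part - build and sync the site
      command: echo "runs as the unprivileged centos user"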

set ansible-playbook user variable dynamically based on the ec2 distros

I'm creating an Ansible playbook that goes through a group of AWS EC2 hosts and installs some basic packages. Before the playbook can execute any tasks, it needs to log in to each host (two types of distros: Amazon Linux or Ubuntu) with the correct user, {{ userXXX }}. This is the part I'm not too sure about: how to pass in the correct login user, which would be either ec2-user or ubuntu.
- name: setup package agent
  hosts: ec2_distros_hosts
  user: "{{ ansible_user_id }}"
  roles:
    - role: package_agent_install
I was assuming ansible_user_id would work, based on the reserved variables from Ansible, but that is not the case here. I don't want to create two separate playbooks for the different distros. Is there an elegant solution to dynamically look up the login user and use it as user:?
Here is the command that failed with an unknown user: ansible-playbook -i inventory/ec2.py agent.yml
You have several ways to accomplish your task:
1. Create an ansible user with the same name on every host
If you have one, you can use user: ansible_user in your playbook.
2. Tag every host with a suitable login_name
You can create a tag (e.g. login_name) for every EC2 host and specify the user in it: for Ubuntu hosts, ubuntu; for AWS Linux hosts, ec2-user.
After doing so, you can use user: "{{ec2_tag_login_name}}" in your playbook; this will take the username from the login_name tag of the host.
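With that tag in place, the play header from the question could look like this (a sketch; it assumes every instance carries the login_name tag):
- name: setup package agent
  hosts: ec2_distros_hosts
  user: "{{ ec2_tag_login_name }}"
  roles:
    - role: package_agent_install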
3. Patch the ec2.py script for your needs
It seems there is no decent way to get the exact platform name from the AMI, but you can use something like this:
image_name = getattr(conn.get_image(image_id=getattr(instance, 'image_id')), 'name')
login_name = 'user'
if 'ubuntu' in image_name:
    login_name = 'ubuntu'
elif 'amzn' in image_name:
    login_name = 'ec2-user'
setattr(instance, 'image_name', image_name)
setattr(instance, 'login_name', login_name)
Paste this code just before self.add_instance(instance, region) in ec2.py, with the same indentation. It fetches the image name and does some guesswork to define login_name. Then you can use user: "{{ec2_login_name}}" in your playbook.
You can set variables based on EC2 instance tags. If you tag instances with the distro name then you can set Ansible's ssh username for each distro via group_vars files.
Example group_vars file for Ubuntu relative to your playbook: group_vars/tag_Distro_Ubuntu.yml
---
ansible_user: ubuntu
Any instances tagged Distro: Ubuntu will connect with the ubuntu user. Create a separate group_vars file per distro tag to accommodate other distros.
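For example, the Amazon Linux counterpart might be (the tag value is an assumption and must match the tag on your instances):
# group_vars/tag_Distro_AmazonLinux.yml
---
ansible_user: ec2-user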
