I need to connect to Windows EC2 instances on AWS using a common username and password. Here is my Ansible code:
- win_ping:
  with_items: "{{ Ips }}"
  delegate_to: "{{ item }}"
  ansible_ssh_port: 5986
  ansible_connection: winrm
  ansible_winrm_server_cert_validation: ignore
It shows an error like "ERROR! 'ansible_connection' is not a valid attribute for a Task". When I pass these settings in a variable file, the whole playbook runs on port 5986. So how can I connect to a Windows host with my username and password from Linux using Ansible?
Thanks
Put it in your inventory, in group_vars or host_vars, as suggested here: https://docs.ansible.com/ansible/intro_windows.html. You can't specify ansible_connection as a task attribute.
I would also suggest encrypting the file (also recommended in the linked docs) so you don't have usernames and passwords lying around in plain text.
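As a minimal sketch of that approach (the group name "windows" and the credentials are placeholders; older Ansible versions use the ansible_ssh_* spellings of these variables), the connection settings go into a group_vars file:
# group_vars/windows.yml
ansible_user: Administrator
ansible_password: "{{ vault_windows_password }}"   # placeholder, ideally encrypted with ansible-vault
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
The play itself then only targets the group and carries no connection settings on the task:
- hosts: windows
  tasks:
    - win_ping: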
I am creating my first role in Ansible to install packages through apt, as it is a task I usually do on a daily basis.
apt
├── apt_install_package.yaml
├── server.yaml
├── tasks
│ └── main.yaml
└── vars
└── main.yaml
# file: apt_install_package.yaml
- name: apt role
  hosts: server
  gather_facts: no
  become: yes
  roles:
    - apt
# tasks
- name: install package
  apt:
    name: "{{ item }}"
    state: present
    update_cache: True
  with_items: "{{ package }}"
#vars
---
package:
  - nginx
  - supervisor
#inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
In order to install the packages I need to specify an SSH user, but I do not know where to set it.
My idea is to pass these parameters as variables, something similar to this:
ansible_user: "{{ myuser }}"
ansible_ssh_pass: "{{ mypass }}"
ansible_become_pass: "{{ mypass }}"
Any suggestions?
Regards,
As per the latest Ansible documentation on inventory sources, an SSH password can be configured using the ansible_password keyword, and an SSH user with the ansible_user keyword, for any given host or host-group entry.
It is worth mentioning that in order to implement SSH password login for hosts with Ansible, the sshpass program is required on the controller. Any plays will otherwise fail due to an error injecting the password while initializing connections.
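For example, on a Debian/Ubuntu controller it can be installed via the package manager (the package is named sshpass there):
sudo apt-get install sshpass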
Define these keywords within your existing inventory in the following manner:
#inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
      ansible_user: <username>
      ansible_password: <password>
      ansible_become_password: <password>
The YAML inventory plugin will parse these host configuration parameters from the inventory source, and they will be added as host/group variables used for establishing the SSH connection.
The major disadvantage of this approach is that your authentication secrets are stored in plain text, and Ansible code is usually committed to source control repositories as IaC. That is unacceptable in production environments.
Consider using a lookup plugin such as env or file to render password secrets dynamically from the controller and prevent leakage.
#inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
      ansible_user: "{{ lookup('env', 'MY_SECRET_ANSIBLE_USER') }}"
      ansible_password: "{{ lookup('file', '/path/to/password') }}"
      ansible_become_password: "{{ lookup('env', 'MY_SECRET_BECOME_PASSWORD') }}"
Other lookup plugins may also be of use to you depending on your use-case, and can be further researched here.
Alternatively, you could designate a default remote connection user, connection user password, and become password separately within your Ansible configuration file, usually located at /etc/ansible/ansible.cfg.
Read more about available configuration options in Ansible here.
All three mentioned configuration options can be specified under the [defaults] INI section as:
remote_user: default connection username
connection_password_file: default connection password file
become_password_file: default become password file
NOTE: connection_password_file and become_password_file must be filesystem paths to files containing the password that Ansible should use to log in or escalate privileges. Storing a default password as a plain-text string within the configuration file is not supported.
Another option involves environment variables being present at the time of playbook execution. Either export them beforehand or pass them explicitly on the command line.
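As a rough sketch, the corresponding ansible.cfg entries could look like this (the username and file paths are placeholders):
[defaults]
remote_user = deploy
connection_password_file = /path/to/connection_password
become_password_file = /path/to/become_password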
ANSIBLE_REMOTE_USER: default connection username
ANSIBLE_CONNECTION_PASSWORD_FILE: default connection password file
ANSIBLE_BECOME_PASSWORD_FILE: default become password file
Such as the following:
ANSIBLE_BECOME_PASSWORD_FILE="/path/to/become_password" ansible-playbook ...
This approach is considerably less convenient unless you update the shell profile of the user running Ansible on the controller so that these variables are set automatically at login. It is generally not recommended for sensitive data such as passwords.
With regard to security, SSH key-pair authentication is preferred whenever possible. Failing that, the next best approach is to vault the sensitive passwords using ansible-vault.
Vault allows you to encrypt data at rest, and reference it within inventory/plays without risk of compromise.
Review this link to gain understanding of available procedures and get started.
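As a sketch of the workflow (the secret value and file names below are placeholders), you can either encrypt a single value and paste the output into your inventory or group_vars, or encrypt an entire vars file, and then supply the vault password at runtime:
# encrypt a single value for use as an inventory/group_vars variable
ansible-vault encrypt_string 'MySecretPassword' --name 'ansible_password'
# or encrypt a whole vars file
ansible-vault encrypt group_vars/server/vault.yml
# provide the vault password when running the play
ansible-playbook -i inventory playbook.yml --ask-vault-pass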
I have Ansible Tower running. I would like to be able to assign multiple credentials (SSH keys) to a job template, and when I run the job I would like it to use the correct credentials for the machine it connects to, so it does not cause user lockouts by cycling through the available credentials.
I have a job template and have added multiple credentials using custom credential types, but how can I tell it to use the correct key for each host?
Thanks
I'm using community Ansible, so forgive me if Ansible Tower behaves differently.
inventory.yml
all:
  children:
    web_servers:
      vars:
        ansible_user: www_user
        ansible_ssh_private_key_file: ~/.ssh/www_rsa
      hosts:
        node1-www:
          ansible_host: 192.168.88.20
        node2-www:
          ansible_host: 192.168.88.21
    database_servers:
      vars:
        ansible_user: db_user
        ansible_ssh_private_key_file: ~/.ssh/db_rsa
      hosts:
        node1-db:
          ansible_host: 192.168.88.20
        node2-db:
          ansible_host: 192.168.88.21
What you can see here is that:
Both servers are running both the web app and the DB
The web app and the DB use different users
The group host names differ based on the user (this is important)
You have to create a Credential Type per host group and then create a Credential for each of your credential types. You should transform the injector configuration into host variables.
Please see the following post:
https://faun.pub/lets-do-devops-ansible-awx-tower-authentication-power-user-2d4c3c68ca9e
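As a rough sketch of that idea (the field ids and variable names below are made up for illustration, not Tower built-ins): a custom credential type injects the key into a temporary file and exposes it as an extra variable, which the inventory then maps onto the connection variables per host group.
# Credential Type - input configuration
fields:
  - id: ssh_username
    type: string
    label: Username
  - id: ssh_key_data
    type: string
    label: SSH Private Key
    format: ssh_private_key
    secret: true
    multiline: true
# Credential Type - injector configuration
file:
  template: "{{ ssh_key_data }}"
extra_vars:
  web_ssh_user: "{{ ssh_username }}"
  web_ssh_key_file: "{{ tower.filename }}"
# group vars of the matching host group
ansible_user: "{{ web_ssh_user }}"
ansible_ssh_private_key_file: "{{ web_ssh_key_file }}"
Repeat with differently named variables (e.g. db_ssh_user, db_ssh_key_file) for each host group, so every group resolves to its own user and key.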
I'm creating an Ansible playbook that goes through a group of AWS EC2 hosts and installs some basic packages. Before the playbook can execute any tasks, it needs to log in to each host (two types of distros: AWS Linux or Ubuntu) with the correct user: {{ userXXX }}. This is the part I'm not sure about: how to pass in the correct login user, which would be either ec2-user or ubuntu.
- name: setup package agent
  hosts: ec2_distros_hosts
  user: "{{ ansible_user_id }}"
  roles:
    - role: package_agent_install
I was assuming ansible_user_id would work, based on the reserved variable from Ansible, but that is not the case here. I don't want to create two separate playbooks for the different distros. Is there an elegant solution to dynamically look up the login user and use it as user:?
Here is the failing command with the unknown user: ansible-playbook -i inventory/ec2.py agent.yml
You have several ways to accomplish your task:
1. Create an Ansible user with the same name on every host
If you have one, you can use user: ansible_user in your playbook.
2. Tag every host with a suitable login_name
You can create a tag (e.g. login_name) for every EC2 host and specify the user in it. For Ubuntu hosts – ubuntu, for AWS Linux hosts – ec2-user.
After doing so, you can use user: "{{ ec2_tag_login_name }}" in your playbook – this will take the username from the login_name tag of the host, as in the sketch below.
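A minimal sketch of the play header under that assumption (it relies on the EC2 dynamic inventory exposing the tag as ec2_tag_login_name):
- name: setup package agent
  hosts: ec2_distros_hosts
  user: "{{ ec2_tag_login_name }}"
  roles:
    - role: package_agent_install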
3. Patch the ec2.py script for your needs
It seems there is no decent way to get exact platform name from AMI, but you can use something like this:
image_name = getattr(conn.get_image(image_id=getattr(instance, 'image_id')), 'name')
login_name = 'user'
if 'ubuntu' in image_name:
    login_name = 'ubuntu'
elif 'amzn' in image_name:
    login_name = 'ec2-user'
setattr(instance, 'image_name', image_name)
setattr(instance, 'login_name', login_name)
Paste this code just before self.add_instance(instance, region) in ec2.py, at the same indentation. It fetches the image name and does some guesswork to define login_name. Then you can use user: "{{ ec2_login_name }}" in your playbook.
You can set variables based on EC2 instance tags. If you tag instances with the distro name then you can set Ansible's ssh username for each distro via group_vars files.
Example group_vars file for Ubuntu relative to your playbook: group_vars/tag_Distro_Ubuntu.yml
---
ansible_user: ubuntu
Any instances tagged Distro: Ubuntu will connect with the ubuntu user. Create a separate group_vars file per distro tag to accommodate other distros.
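For example, a sketch for Amazon Linux instances, assuming they are tagged Distro: AmazonLinux (which the EC2 inventory turns into the group tag_Distro_AmazonLinux): group_vars/tag_Distro_AmazonLinux.yml
---
ansible_user: ec2-user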
I would like to insert an IP address into a J2 template which is used by an Ansible playbook. That IP address is not the address of the host being provisioned, but the IP of the host from which the provisioning is done. Everything I have found so far covers using variables/facts related to the hosts being provisioned.
In other words: the IP I’d like to insert is the one in ['ansible_default_ipv4']['address'] when executing ansible -m setup 127.0.0.1.
I think that I could use a local playbook to write a dynamically generated template file containing the IP, but I was hoping that this might be possible “the Ansible way”.
Just use this (SSH_CLIENT in the remote host's environment holds the connecting client's IP as its first field, which is the Ansible control host when connecting directly over SSH):
{{ ansible_env["SSH_CLIENT"].split()[0] }}
You can force Ansible to fetch facts about the control host by running the setup module locally by using either a local_action or delegate_to. You can then either register that output and parse it or simply use set_fact to give it a useful name to use later in your template.
An example play might look something like:
tasks:
  - name: setup
    setup:
    delegate_to: 127.0.0.1
  - name: set ansible control host IP fact
    set_fact:
      ansible_control_host_address: "{{ hostvars[inventory_hostname]['ansible_eth0']['ipv4']['address'] }}"
    delegate_to: 127.0.0.1
  - name: template file with ansible control host IP
    template:
      src: /path/to/template.j2
      dest: /path/to/destination/file
And then use the ansible_control_host_address variable in your template as normal:
...
Ansible control host IP: {{ ansible_control_host_address }}
...
This is how I solved the same problem (Ansible 2.7):
- name: Get local IP address
  setup:
  delegate_to: 127.0.0.1
  delegate_facts: yes
- name: Add to hosts file
  become: yes
  lineinfile:
    line: "{{ hostvars['127.0.0.1']['ansible_default_ipv4']['address'] }} myhost"
    path: /etc/hosts
Seems to work like a charm. :-)
I am trying to configure an EOS switch using Ansible. I have established a connection between them using SSH keys and tested the connection. I wrote a simple playbook and am trying to execute it, but I am getting msg: unsupported parameter for module: transport
My playbook
- hosts: EOS
  gather_facts: no
  roles:
    - arista.eos
  tasks:
    - name: Configuring VLAN
      eos_vlan: vlanid=150
                name=NewVLAN
                transport={{ transport }}
                username={{ username }}
                password={{ password }}
                debug=yes
      register: vlan_cfg_output
    - debug: var=vlan_cfg_output
In my inventory file
[EOS]
Arista ansible_ssh_host=192.168.10.5
[EOS:vars]
ansible_ssh_user=ansible
transport=http
username=eapi
password=password
Which versions of the following are you using?
Ansible
pyeapi
ansible-eos
I ran a test using SSH with your playbook and hosts file and it worked fine on one of my switches.
PS: There is a mailing list for questions, ansible-dev#arista.com. Also, you can raise issues here: https://github.com/arista-eosplus/ansible-eos/issues. Both of these options are better than SO.
While this might not completely answer your question, the arista.eos role is deprecated:
https://eos.arista.com/forum/arista-eos-was-not-found-on-httpsgalaxy-ansible-com/