I'm setting up Ansible AWX and so far it's been working nicely.
Although when I try to add remote hosts (e.g. hosts that are NOT localhost) the playbook fails, even though it's possible to SSH from the machine running AWX to the nodes.
The node to be configured was added to an inventory, and under the inventory I added it as a host. I used the IP address as the hostname:
Then I run the job, and it fails:
If I try to run `ansible -m ping all` from the CLI, it works:
root@node1:/home/ubuntu# ansible -m ping all
...
10.212.137.189 | SUCCESS => {
"changed": false,
"ping": "pong"
}
...
It seems to be a problem related to SSH credentials.
Have you correctly configured credentials on AWX/Tower?
You need to configure a credential of type "Machine": follow the documentation here: Ansible Tower User Guide - Credentials
From the Ansible command line you can ping the hosts because you have probably already copied your SSH keys to the remote hosts, but the AWX/Tower settings are independent of that.
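As a quick sanity check (a sketch only; the default key path and the remote user placeholder are assumptions, not taken from the question), the private key that makes the ad-hoc CLI ping work is the one to paste into the AWX "Machine" credential:
# Assumption: the CLI ping succeeds because the controller's default key is already on the nodes
cat ~/.ssh/id_rsa                                   # this private key goes into the "Machine" credential
ssh -i ~/.ssh/id_rsa <remote-user>@10.212.137.189   # confirm the same key opens a session manually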
Related
I installed Ansible AWX on CentOS 7 without Docker. I want to add remote Linux hosts (without a password) to AWX, run playbooks against them, and get the results. How do I do it? Can anyone help? I can add one or two hosts in the web page, but how do I add 100 remote hosts to AWX? Is there any AWX back-end scripting to add N remote hosts? Thanks.
Create an inventory file in Git. Add it as a project in AWX. Create an inventory in AWX with that inventory project as its source.
Your SSH keys will have to be stored in AWX credentials.
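For illustration, the inventory file kept in the Git project could be a plain INI file like this (the group name and addresses are placeholders, not taken from the question):
# hosts.ini in the Git-backed project
[remote_linux]
host01 ansible_host=10.0.0.1
host02 ansible_host=10.0.0.2
host03 ansible_host=10.0.0.3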
Create an inventory with credentials via the web interface.
Sign in to the hosts from AWX via SSH:
ssh user@hostname
Sign in to the awx_task container:
docker exec -it awx_task sh
Create or copy a file with the hosts' IPs/hostnames:
# cat hosts.ini
10.0.0.1
10.0.0.2
#
Add the multiple hosts from the file to the inventory:
awx-manage inventory_import \
--inventory-name my-inventory \
--source hosts.ini
This worked in my case on AWX 17.0.1.
I am trying to set up a passwordless login from server A to server B (copying id_rsa.pub from server A to server B) while running a playbook from controller machine C. The playbook:
cannot have an inventory file. The host IP will be passed from the command line to the playbook as:
ansible-playbook -i <host-ip>, test.yml
Server A DNS name or IP address will be hardcoded in my playbook.
I have tried:
Using the fetch module, I fetched the SSH key (id_rsa_serverA.pub) from server A to controller C and then used the copy module to copy the key (id_rsa_ServerA) to server B. While it did the job, it does not adhere to the guidelines of the project I am working on.
I tried the synchronize module with Ansible 2.5. It fails.
I did a similar thing:
I use the user module on serverA with the option generate_ssh_key: yes, and register: user_pubkey.
Then I use the authorized_key module with delegate_to: serverB, setting the key to "{{ user_pubkey.ssh_public_key }}" for the needed user.
You can pass the IP of serverB as an extra var at launch time: ansible-playbook ... -e serverB=<serverB-ip>
Hope this helps, cheers.
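A minimal sketch of that approach (the user name appuser and the extra-var name serverB are assumptions, not taken from the original playbook):
# passwordless_ab.yml - sketch only; serverA is hardcoded, as in the question
- hosts: serverA
  tasks:
    - name: Ensure the user exists on serverA and generate an SSH key pair for it
      user:
        name: appuser
        generate_ssh_key: yes
      register: user_pubkey

    - name: Authorize serverA's new public key on serverB for the same user
      authorized_key:
        user: appuser
        key: "{{ user_pubkey.ssh_public_key }}"   # the user module returns the public key in this field
      delegate_to: "{{ serverB }}"                # launched with: ansible-playbook ... -e serverB=<serverB-ip>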
I am new to Ansible, and I am trying to use it on some LXC containers.
My problem is that I don't want to install SSH on my containers. So, here is what I tried:
I tried to use this connection plugin, but it seems that it does not work with Ansible 2.
After understanding that the chifflier connection plugin doesn't work, I tried to use the connection plugin from OpenStack.
After some failed attempts I dove into the code, and I understood that the plugin never gets the information that the host it is talking to is a container (the code never reaches that point).
My current setup:
{Ansible host}---|ssh|---{vm}--|ansible connection plugin|---{container1}
My ansible.cfg:
[defaults]
connection_plugins = /home/jkarr/ansible-test/connection_plugins/ssh
inventory = inventory
My inventory:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm container_name=mailserver
My group vars:
ansible_host: "{{ physical_hostname }}"
ansible_ssh_extra_args: "{{ container_name }}"
ansible_user: containeruser
container_name: "{{ inventory_hostname }}"
physical_hostname: "{{ hostvars[physical_host]['ansible_host'] }}"
My testing playbook:
- name: Test Playbook
  hosts: containers
  gather_facts: true
  tasks:
    - name: testfile
      copy:
        content: "Test"
        dest: /tmp/test
The output is:
fatal: [mailserver]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname mailserver: No address associated with hostname\r\n",
"unreachable": true
}
Ansible version is: 2.3.1.0
So what am I doing wrong? Any tips?
Thanks in advance!
Update 1:
Based on Eric's answer I am now using this connection plugin.
I updated my inventory and it now looks like:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm ansible_connection=lxc
After running my playbook I got:
<192.168.28.12> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
"failed": true,
"msg": "192.168.28.12 is not running"
}
Which is weird, because 192.168.28.12 is the VM and the container is called mailserver. I also verified that the container is running.
Also, why does it say that 192.168.28.12 is a local LXC dir?
Update 2:
I removed my group_vars, my ansible.cfg, and the connection plugin from the playbook, and I got this error:
<mailserver> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
"failed": true,
"msg": "mailserver is not running"
}
You should take a look at this lxc connection plugin. It might fit your needs.
Edit: the lxc connection plugin is actually part of Ansible.
Just add ansible_connection=lxc in your inventory or group vars.
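For example, the group-vars variant of that setting can be as small as this (a sketch; it assumes the Ansible controller runs on the machine hosting the containers, since the lxc plugin only attaches to local containers):
# group_vars/containers.yml
ansible_connection: lxc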
I'm trying something similar.
I want to configure a host over SSH using Ansible and run LXC containers on the host, which are also configured using Ansible:
ansible control node --ssh--> host-a --lxc-attach--> container-a
The issue with the lxc connection plugin is that it only works for local LXC containers. There is no way to get it working through SSH.
At the moment the only way seems to be either a direct SSH connection or an SSH connection through the first host:
ansible control node --ssh--> container-a
or
ansible control node --ssh--> host-a --ssh--> container-a
Both require sshd installed in the container, but the second way doesn't need port forwarding or multiple IP addresses.
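For the second way, a sketch of how the extra hop could be configured per group (the host-a address, user names, and reliance on OpenSSH's ProxyJump option are assumptions):
# group_vars/containers.yml - route the container's SSH connection through host-a
ansible_user: root                                      # user inside the container (assumption)
ansible_ssh_common_args: '-o ProxyJump=ubuntu@host-a'   # hop through host-a; needs OpenSSH 7.3+ on the control node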
Did you get a working solution?
Just starting out with Ansible. I configured the hosts file like this:
[webserver]
<remote-server-ip> ansible_user=<user> ansible_private_key_file=<full-path-to-private-ssh-key>
When I run:
ansible all -m ping
I get:
<remote-server-ip> | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Couldn't read packet: Connection reset by peer\r\n",
"unreachable": true
I can connect with no issues if I run:
ssh -i <full-path-to-private-ssh-key> <user>@<remote-server-ip>
Notes:
There is no password on the SSH key.
The project is located at ~/my_project_name.
I also tried using ansible_connection=local, and while ansible all -m ping appeared to work, in reality all it does is allow me to execute tasks that modify the host machine Ansible is running on.
The ansible.cfg file has not been modified, though it is in a different directory: /etc/ansible/ansible.cfg.
Ansible by default tries to connect to localhost through SSH. For localhost, set ansible_connection to local in your hosts file, as shown below:
<remote-server-ip> ansible_connection=local ansible_user=<user> ansible_private_key_file=<full-path-to-private-ssh-key>
Refer to this documentation for more details.
Hope this helps!
I think I saw this earlier. Can you try adding the below in the hosts file and see if that works?
ansible_connection=ssh ansible_port=22
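Combined with the entry from the question, the hosts line would then look like this (placeholders kept as in the question):
[webserver]
<remote-server-ip> ansible_connection=ssh ansible_port=22 ansible_user=<user> ansible_private_key_file=<full-path-to-private-ssh-key>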
I figured out that this is an issue with the version of Ansible I was using (2.3.1). Using version 2.2.0.0 works with no problems.
When I try running this Ansible command, `ansible testserver -m ping`, it works just fine, but when I try `ansible webservers -m ping` I get the following error: ERROR! Specified hosts options do not match any hosts.
My hosts file looks like this:
[webservers]
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
What could be the problem? Why can ansible recognize the host in question and not the host group?
I've tried changing the file to make sure ansible is reading from this file specifically, and made sure this is the case, so this is not a problem of reading configurations from another file I am not aware of.
I've also tried using the solutions specified in Why Ansible skips hosts group and does nothing but it seems like a different problem with a different solution.
EDIT - added my ansible.cfg file, to point out that I've already made all the Vagrant-specific configurations.
[defaults]
inventory = ./ansible_hosts
roles_path = ./ansible_roles
remote_user = vagrant
private_key_file = .vagrant/machine/default/virtualbox/private_key
host_key_checking = False
I think you are working with Vagrant, and you need to ping like this:
ansible -i your-inventory-file webservers -m ping -u vagrant -k
Why your ping failed previously:
Ansible tries to connect to the Vagrant machine using your local login user, which doesn't exist on the Vagrant machine.
It also needs the password for the vagrant user, which is also vagrant.
Hope that helps.