Adding N remote hosts to AWX - Ansible

I installed Ansible AWX on CentOS 7 without Docker. I want to add remote Linux hosts (with passwordless SSH) to AWX, run playbooks against them, and get the results. How do I do it? I can add one or two hosts in the web page, but how do I add 100 remote hosts? Is there any AWX back-end scripting to add N remote hosts? Thanks.

Create an inventory file in git and add the repository as a project in AWX, then create an inventory in AWX whose source is that inventory project.
Your SSH keys will have to be stored in AWX credentials.
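As a sketch, the file committed to the project repository can be a plain INI inventory (group name and IPs below are hypothetical):

```shell
# Create a minimal INI inventory to commit to the project repo
# (group name and IPs are hypothetical)
mkdir -p inventory
cat > inventory/hosts <<'EOF'
[linux]
10.0.0.1
10.0.0.2
10.0.0.3
EOF
grep -c '^10\.' inventory/hosts   # prints 3
```

Once the project syncs, the inventory source picks up all hosts in the file, so adding host number 101 is just another line and a commit.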

Create an inventory with credentials via the web interface.
SSH to the host running AWX:
ssh user@hostname
Enter the awx_task container:
docker exec -it awx_task sh
Create or copy a file with the hosts' IPs/hostnames:
# cat hosts.ini
10.0.0.1
10.0.0.2
Add the hosts from the file to the inventory:
awx-manage inventory_import \
--inventory-name my-inventory \
--source hosts.ini
This worked in my case, on AWX 17.0.1.
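A sketch of scripting this for N hosts: generate hosts.ini from a list of addresses (hypothetical IPs), then run the import inside the awx_task container. The --overwrite flag replaces previously imported hosts; check awx-manage inventory_import --help on your version.

```shell
# Generate hosts.ini from a list of addresses (hypothetical IPs)
printf '%s\n' 10.0.0.1 10.0.0.2 10.0.0.3 > hosts.ini
wc -l < hosts.ini
# Run this inside the awx_task container (shown, not executed here):
echo 'awx-manage inventory_import --inventory-name my-inventory --source hosts.ini --overwrite'
```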

Related

How to deploy Consul on AWS EC2 instance over Ansible

Note: This is not the Consul by HashiCorp it is a different project: http://consulproject.org/
I'm not sure whether this problem is specific to the Consul project, or whether anyone with knowledge of Ansible and EC2 would be able to solve it.
I am trying to deploy a Consul instance to an Amazon EC2 instance via the Installer. I've followed the instructions found here: https://github.com/consul/installer
I'm doing this from a Mac running macOS Sierra 10.12.6 which has Python 2.7 installed on it.
I downloaded the .pem file from AWS, which is a private key for SSH authentication. I can SSH into the server via ssh -i [path to .pem file] ec2-user@[ip address of EC2 instance]
I've installed Ansible. When I run:
sudo ansible-playbook -v consul.yml -i hosts
it fails with:
"Please login as the user \"ec2-user\" rather than the user
\"root\".\r\n\r\n"
However, I have edited my group_vars/all file to include root_access: false, so I'm not really sure what else needs to be done.
My git diff looks like this from the repository:
diff --git a/group_vars/all b/group_vars/all
index 85bf74d..cb3db37 100644
--- a/group_vars/all
+++ b/group_vars/all
@@ -10,10 +10,10 @@ locale: en_US.UTF-8
# General settings
env: production
-root_access: true
-deploy_user: deploy
+root_access: false
+deploy_user: ec2-user
home_dir: "/home/{{ deploy_user }}"
-deploy_password: test
+deploy_password: <some really great password>
deploy_app_name: test
deploy_server_hostname: 127.0.0.1
consul_dir: "{{ home_dir }}/consul"
@@ -27,7 +27,7 @@ shared_public_dirs:
- "system"
- "ckeditor_assets"
-ssh_public_key_path: "~/.ssh/id_rsa.pub"
+ssh_public_key_path: "[path to .pem file]"
# Ruby
ruby_version: 2.4.9
Do not run the ansible-playbook command with sudo. If the tasks in the playbook need root access on the remote/managed host, add become: true where necessary (i.e. at the task level, at the play level, or in your Ansible config).
I also needed to update ansible_user. To do this, I added the following line to the group_vars/all file:
ansible_user: ec2-user
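The change can be sketched as a one-liner (this assumes you run it from the repository root, next to consul.yml):

```shell
# Append the connection user to group_vars/all
# (assumes the repository root as the current directory)
mkdir -p group_vars
printf 'ansible_user: ec2-user\n' >> group_vars/all
grep 'ansible_user' group_vars/all
# Then run WITHOUT sudo:
#   ansible-playbook -v consul.yml -i hosts
```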

Running Ansible setup module without an ansible.cfg

I want to run the Ansible setup module against a specific host.
I am using a Windows 10 machine with Windows Subsystem for Linux, where Ansible is installed.
I can run the setup module against localhost, i.e.:
ansible localhost -m setup
I can also run it from one virtual environment, using its ansible.cfg file, against a virtual machine, i.e.:
ansible TestVm -m setup
In another virtual environment I have installed Ansible without setting up an ansible.cfg file, and my hosts.yml file is on the Windows side (not in the WSL filesystem).
No matter which directory I switch to, I cannot run the setup module using, for instance:
ansible -i inventory -m setup
Is there a way to run setup without a configuration file and if yes what part of the call am i missing?
Try this:
ansible all -i inventory -m setup
Where the inventory is a directory with at least the hosts.yml file.
And this an example of hosts.yml
all:
  hosts:
    192.168.2.9:
    192.168.2.3:
    192.168.2.4:
Also, before any of this you must have installed pywinrm (needed when the managed hosts are Windows):
pip install pywinrm
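The layout above can be reproduced like this (the IP addresses are the hypothetical ones from the example):

```shell
# Reproduce the layout: an inventory directory containing hosts.yml
# (IP addresses are hypothetical)
mkdir -p inventory
cat > inventory/hosts.yml <<'EOF'
all:
  hosts:
    192.168.2.9:
    192.168.2.3:
    192.168.2.4:
EOF
# Then, no ansible.cfg needed:
#   ansible all -i inventory -m setup
```

Note that the host pattern ("all") is a required positional argument; ansible -i inventory -m setup without it is what produced the original failure.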

How to add remote hosts in Ansible AWX/Tower?

I'm setting up Ansible AWX and so far it's been working nicely.
Although when I try to add remote hosts (i.e. hosts that are NOT localhost), the playbook fails, even though it's possible to SSH from the machine running AWX to the nodes.
The node to be configured was added to an inventory, and under the inventory I added it as a host, using the IP address as the hostname. Then I ran the job.
If I try to run ansible -m ping all from the CLI:
root@node1:/home/ubuntu# ansible -m ping all
...
10.212.137.189 | SUCCESS => {
"changed": false,
"ping": "pong"
}
...
It seems like a problem related to SSH credentials.
Have you correctly configured credentials in AWX/Tower?
You need to configure a credential of type "Machine": follow the documentation here: Ansible Tower User Guide - Credentials
From the ansible command line you can ping the hosts, probably because you have already copied your SSH keys to the remote hosts, but the AWX/Tower settings are independent of that.
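A sketch of the distinction: the CLI picks up keys from ~/.ssh, while AWX only uses what you paste into a Machine credential. One option is to generate a dedicated key pair for AWX (file names here are hypothetical):

```shell
# Generate a dedicated, passphrase-less key pair for AWX
# (demo only; protect real private keys accordingly)
ssh-keygen -t ed25519 -N '' -f ./awx_key -C awx-machine-credential -q
# Distribute awx_key.pub to the managed hosts (e.g. via ssh-copy-id),
# then paste the contents of ./awx_key into the AWX Machine credential.
head -n 1 ./awx_key   # -----BEGIN OPENSSH PRIVATE KEY-----
```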

rundeck ansible module - No matched nodes

system: centos 6.8 x86_64 , ansible-2.1 , rundeck-2.6.8
Batix/rundeck-ansible-plugin v1.3.0
Rundeck runs on the same Ansible controller host, and I just want to run the playbooks from the Rundeck interface.
su rundeck -c "ansible all -m ping" works well, but when I try to run playbooks from Rundeck there's an error:
Execution failed: 10: No matched nodes: MultiNodeSelector{nodenames=[localhost]}
In the job, the node selection is "Execute locally"; under the second option, "Dispatch to Nodes", there are no hosts on the list. The hosts are specified in the Ansible playbooks, so I don't really need to specify them to Rundeck. Am I missing something here? Rundeck should run the playbooks on the same host, and Ansible will deploy to the remote systems.
Thanks,
Nir.
Rundeck keeps its own internal inventory of the hosts, separate from Ansible. The plugin gives you a Resource Model Source to have Rundeck use your Ansible inventory to scan for nodes and populate the Rundeck inventory. You then configure your Rundeck job against the Rundeck inventory.
The plugin uses the Ansible defaults, so if you store your inventory somewhere other than /etc/ansible/hosts on the Rundeck system, you will need to pass that path as a parameter in the Resource Model Source configuration.
Another solution would be to create a Rundeck job that merely acts as a wrapper for calling ansible or ansible-playbook and put that as a workflow step:
cd <your ansible dir>
. env/bin/activate # we use virtualenv
export ANSIBLE_CONFIG=/var/lib/rundeck/.ansible.cfg
ansible-playbook -i inventory -l "$RD_OPTION_LIMIT" $RD_OPTION_ANSIBLE_OPTS playbooks/$RD_OPTION_PLAYBOOK
Something like this wouldn't require the rundeck-ansible-plugin as you can configure the Rundeck job options to suit your Ansible argument needs.
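A slightly hardened version of that workflow step, saved as a script the job can call (the directory layout and the job option names LIMIT, ANSIBLE_OPTS, and PLAYBOOK are assumptions carried over from the snippet above):

```shell
# Write the wrapper script that the Rundeck job step will call
cat > run-playbook.sh <<'EOF'
#!/bin/sh
set -eu                              # fail fast on errors or unset job options
cd /var/lib/rundeck/ansible          # adjust to your ansible dir
. env/bin/activate                   # we use virtualenv
export ANSIBLE_CONFIG=/var/lib/rundeck/.ansible.cfg
ansible-playbook -i inventory -l "$RD_OPTION_LIMIT" \
  $RD_OPTION_ANSIBLE_OPTS "playbooks/$RD_OPTION_PLAYBOOK"
EOF
chmod +x run-playbook.sh
```

With set -eu, a missing job option aborts the step instead of silently running the playbook against an empty limit.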

Running Ansible against host group

When I try running this Ansible command - ansible testserver -m ping - it works just fine, but when I try ansible webservers -m ping I get the following error: ERROR! Specified hosts options do not match any hosts.
My host file looks like this -
[webservers]
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
What could be the problem? Why can ansible recognize the host in question and not the host group?
I've tried changing the file to make sure ansible is reading from this file specifically, and confirmed that it is, so this is not a problem of reading configuration from another file I am not aware of.
I've also tried using the solutions specified in Why Ansible skips hosts group and does nothing but it seems like a different problem with a different solution.
EDIT - added my ansible.cfg file, to point out I've already made all the Vagrant-specific configurations.
[defaults]
inventory = ./ansible_hosts
roles_path = ./ansible_roles
remote_user = vagrant
private_key_file = .vagrant/machine/default/virtualbox/private_key
host_key_checking = False
I think you are working with Vagrant, so you need to ping like this:
ansible -i your-inventory-file webservers -m ping -u vagrant -k
Why your ping failed previously:
ansible tried to connect to the Vagrant machine using your local login user, which doesn't exist on the Vagrant machine;
it also needs a password for the vagrant user, which is also "vagrant".
Hope that helps.
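To rule out inventory parsing, you can recreate the file and confirm the group header and host line sit together (a crude check; ansible-inventory --list is the authoritative tool):

```shell
# Recreate the inventory and eyeball the group membership
cat > ansible_hosts <<'EOF'
[webservers]
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
EOF
grep -A1 '^\[webservers\]' ansible_hosts
# Then: ansible -i ansible_hosts webservers -m ping -u vagrant -k
```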
