Running Ansible against host group - vagrant

When I try running this Ansible command - ansible testserver -m ping it works just fine, but when I try this command - ansible webservers -m ping I get the following error - ERROR! Specified hosts options do not match any hosts.
My host file looks like this -
[webservers]
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
What could be the problem? Why can ansible recognize the host in question and not the host group?
I've tried changing the file to make sure ansible is reading from this file specifically, and made sure this is the case, so this is not a problem of reading configurations from another file I am not aware of.
I've also tried using the solutions specified in Why Ansible skips hosts group and does nothing but it seems like a different problem with a different solution.
EDIT - added my ansible.cfg file, to point out that I've already made all the Vagrant-specific configurations.
[defaults]
inventory = ./ansible_hosts
roles_path = ./ansible_roles
remote_user = vagrant
private_key_file = .vagrant/machine/default/virtualbox/private_key
host_key_checking = False

Since you are working with Vagrant, you need to ping like this:
ansible -i your-inventory-file webservers -m ping -u vagrant -k
Why your ping failed previously:
Ansible tries to connect to the Vagrant machine using your local login user, which doesn't exist on the Vagrant machine.
It also needs the password for the vagrant user, which is also vagrant.
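If you would rather not pass -u and -k on every run, the same connection details can be set per host in the inventory; a minimal sketch reusing the key path from the ansible.cfg in the question:
[webservers]
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machine/default/virtualbox/private_key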
Hope that helps.

Automate server setup with Ansible SSH keypairs fails without sshpass

I am using Ansible and want to automate my VPS and homelab setups. I'm running into an issue with the initial connection.
If I have a fresh VPS that has never been used or logged into, how can I remotely configure the node from my laptop?
ansible.cfg
[defaults]
inventory = ./inventory
remote_user = root
host_key_checking = false
ansible_ssh_common_args = "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
inventory
[homelab]
0.0.0.0 <--- actual IP here
./playbooks/add_pub_keys.yaml
---
- hosts: all
  become: yes
  tasks:
    - name: Install public key on remote node
      authorized_key:
        state: present
        user: root
        key: "{{ lookup('file', '~/.ssh/homelab.pub') }}"
Command
ansible-playbook playbooks/add_public_keys.yaml
Now, this fails with permission denied, which makes sense because there is nothing that would allow connection to the remote node.
I tried adding --ask-pass to the command:
ansible-playbook playbooks/add_public_keys.yaml --ask-pass
and typing in the root password, but that fails and says I need sshpass, which is not recommended and not readily available to install on Mac for security reasons. How should I think about this initial setup process?
When I get issues like this I try to replicate the problem using Ansible ad-hoc commands and go back to basics. It helps to prove where the issue is located.
Are you able to run an Ansible ad-hoc command against your remote server using the password?
ansible -i ip, all -m shell -a 'uptime' -u root -k
If you can't, something is up with the password or possibly with the ansible.cfg.
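If the ad-hoc ping works, the same connection options should carry over to the playbook run once sshpass is available on the control machine; a minimal sketch using the playbook path from the question:
ansible-playbook playbooks/add_pub_keys.yaml -u root --ask-pass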

Is there a way to set up an SSH connection (pass-wordless login) between host A and host B while running playbook from hostC using ansible only?

I am trying to set up a passwordless login (copy id_rsa.pub from server A to server B) from server A to server B while running a playbook from controller machine C. The playbook:
cannot have an inventory file. The host IP will be passed from the command line to the playbook as:
ansible-playbook -i , test.yml
Server A DNS name or IP address will be hardcoded in my playbook.
I have tried:
Using the fetch module, I fetched the ssh key (id_rsa_serverA.pub) from server A to controller C and then used the copy module to copy it to server B. While it did the job, it does not adhere to the guidelines of the project I am working on.
I also tried the synchronize module with Ansible 2.5, which fails.
I did a similar thing:
I use the user module on server A with the option generate_ssh_key: yes and register the result as user_pubkey.
Then I use the authorized_key module with delegate_to set to server B, setting the key to "{{ user_pubkey.ssh_public_key }}" for the needed user.
You can pass the IP of server B as an extra var at launch time: ansible-playbook ... ... ... -e serverB=<serverB_IP>
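A minimal sketch of that approach, assuming the play is run against server A (whose address is supplied with -i on the command line) and serverB is passed with -e as described above; the user name is a placeholder:
---
- hosts: all
  tasks:
    - name: Generate an ssh key pair for the user on server A
      user:
        name: deploy                 # placeholder user name
        generate_ssh_key: yes
      register: user_pubkey

    - name: Authorize that public key on server B
      authorized_key:
        user: deploy                 # placeholder user name
        state: present
        key: "{{ user_pubkey.ssh_public_key }}"
      delegate_to: "{{ serverB }}"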
Hope this helps.
Cheers

Ansible test hosts fails

Just starting out with Ansible. I configured the hosts file like this:
[webserver]
<remote-server-ip> ansible_user=<user> ansible_private_key_file=<full-path-to-private-ssh-key>
When I run:
ansible all -m ping
I get:
<remote-server-ip> | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Couldn't read packet: Connection reset by peer\r\n",
    "unreachable": true
}
I can connect with no issues if I run:
ssh -i <full-path-to-private-ssh-key> <user>@<remote-server-ip>
Notes:
There is no password on the SSH key.
The project is located at ~/my_project_name.
I also tried using ansible_connection=local, and while ansible all -m ping appeared to work, in reality all it does is allow me to execute tasks that modify the host machine Ansible is running on.
The ansible.cfg file has not been modified, though it is in a different directory: /etc/ansible/ansible.cfg.
Ansible by default tries to connect to localhost through ssh. For localhost, set ansible_connection to local in your hosts file, as shown below.
<remote-server-ip> ansible_connection=local ansible_user=<user> ansible_private_key_file=<full-path-to-private-ssh-key>
Refer to this documentation for more details.
Hope this helps!
I think I saw this earlier. Can you try adding the below in the hosts file and see if that works?
ansible_connection=ssh ansible_port=22
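Combined with the original inventory entry, that would look something like this (the angle-bracket values are the placeholders from the question):
<remote-server-ip> ansible_connection=ssh ansible_port=22 ansible_user=<user> ansible_private_key_file=<full-path-to-private-ssh-key>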
I figured out that this is an issue with the version of Ansible I was using (2.3.1). Using version 2.2.0.0 works with no problems.

Ansible Delegate_to WinRM

I am using the Ansible vsphere_guest module to spin up a base Windows machine in a VMware environment. In my playbook I set hosts: 127.0.0.1 and connection: local. The reason I am doing this is that I believe I'm not targeting this playbook at any particular host, as I don't have one yet. I instead want to run the playbook locally.
When this runs, I get a shiny new Windows Server VM. What I now want to do is rename that VM's computer name. To do this I am trying to upload and run a PowerShell script, like so: rename_host.ps1 $newHostname. As I understand it, I need to use the script module to do this. However, this time I want to target my brand new VM, whose IP address I get through a fact, {{ newvm_ipaddress }}.
However, when I try to run this script with delegate_to: "{{ newvm_ipaddress }}", it tries to connect over SSH. SSH won't work; I'm targeting a Windows machine with remote PowerShell.
Is there any way to set the connection to use WinRM in the context of delegate_to? Perhaps there is a better way of doing this?
Thank you for your help
I managed to work out how to solve it. The answer is the Ansible add_host module. I have a task after vsphere_guest as follows. This creates a new in-memory host, which can then be targeted by a different play.
- add_host: group=new_machine name={{ vm_ipaddress }} ansible_connection=winrm
After this, I then have a new play that can target this host.
- hosts: new_machine
Also note that variables do not span across different hosts. The solution was to use the set_fact module in play A; the fact can then be accessed from within play B:
- set_fact:
    vm_ipaddress: "{{ hw_eth0.ipaddresses[1] }}" # hw_eth0 is the fact returned from the vsphere_guest module
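Putting the pieces together, a minimal sketch of the two plays (the WinRM credentials and the rename variable are placeholders, not taken from the original answer):
- hosts: 127.0.0.1
  connection: local
  tasks:
    # ... vsphere_guest task that provides the hw_eth0 fact ...
    - set_fact:
        vm_ipaddress: "{{ hw_eth0.ipaddresses[1] }}"
    - add_host:
        name: "{{ vm_ipaddress }}"
        groups: new_machine
        ansible_connection: winrm
        ansible_user: Administrator                 # placeholder credential
        ansible_password: "{{ winrm_password }}"    # placeholder, e.g. passed with -e
        ansible_winrm_server_cert_validation: ignore

- hosts: new_machine
  tasks:
    - name: Rename the new VM
      script: rename_host.ps1 {{ new_hostname }}    # new_hostname is a placeholder variable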
What about updating the inventory with the new host's name and its WinRM connection params before using delegate_to, or perhaps setting some default catch-all naming scheme with these params?
For example:
[databases]
db-[a:f].example.com:5986 ansible_user=Administrator ansible_connection=winrm ansible_winrm_server_cert_validation=ignore

How to continue Vagrant/Ansible provision script from error?

After I provision my Vagrant box I may get errors during provisioning... how do I restart from the error, instead of doing everything from scratch?
vagrant destroy -f && vagrant up
And I may get an error...
PLAY RECAP ********************************************************************
to retry, use: --limit @/path/to/playbook.retry
And I want to just resume from where it failed... it seems it can be done per the message... use --limit... but when I use it in the Vagrant context it doesn't work.
You can edit the Vagrantfile and include the ansible.start_at_task variable.
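A minimal sketch of what that looks like in the Vagrantfile (the playbook path and task name are placeholders):
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"          # placeholder playbook path
  ansible.start_at_task = "MY_TASK"          # placeholder task name
end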
Then you can re-run the provision with $ vagrant reload --provision
Vagrant Reload docs
However, because Ansible plays are idempotent you don't really need to do the start_at_task. You can just re-run the provision with the reload command above.
You could do an Ansible run with ansible-playbook on your Vagrant box. The trick is to use the inventory file Ansible has created. It is located in your .vagrant folder.
ansible-playbook playbook.yml -i vagrant/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory --start-at-task='MY_TASK'
Vagrantfile: Assign your roles to the Vagrant machines.
config.vm.provision "ansible" do |ansible|
  ansible.verbose = "v"
  ansible.playbook = "../playbook.yml"
  ansible.groups = {
    "web" => ['vm01app'],
    "db" => ['vm01db'],
  }
end
To start provisioning at a particular task:
vagrant ssh
provision --list-tasks
provision --start-at-task {task}
If there's anything that needs to be fixed, you can either do it while SSHed to the server, or you can make the changes in the host machine, then sync those changes to the Vagrant host.
https://developer.hashicorp.com/vagrant/docs/synced-folders
So, for instance, if you're using rsync as the sync type, you can sync the changes by running
vagrant rsync
If you used shell as the provisioner instead of Ansible (for this you would need to have Ansible installed in the VM itself), you could run a command like this:
test -f "/path/to/playbook.retry" && (ansible-playbook site.yml --limit @/path/to/playbook.retry; rm -f "/path/to/playbook.retry") || ansible-playbook site.yml
This basically checks whether a retry file exists; if it does, it runs the playbook with that limit and deletes the retry file afterwards, and if not, it runs the whole playbook. Of course you would need to run these commands with sudo if you are using Vagrant, but that is easy to add.
