Is there any way to find the Ansible server's hostname from a client machine?

I am a beginner with Ansible.
I have one server, app.example.com, on which I have installed Ansible.
I have 2 client machines, Web1.example.com and Web2.example.com.
My question is: how can I tell from a client machine that the Ansible server is app.example.com?
From the Ansible server I can find all the client machines in /etc/ansible/hosts; is there an equivalent on the client side?

Assuming you know the name of the service account (such as ansible) that Ansible uses on the client, and that Ansible has logged in at some time, look in /var/log/secure. You should see an entry like this:
Aug 25 09:29:02 Client1 sshd[1343]: Accepted publickey for ansible from 192.168.124.8 port 59036 ssh2: RSA SHA256:FL2me6GmqX6SAx5CI0hJ/ZXXStnmrCdCtBbfBlk5N5E
We can see it logged in from 192.168.124.8.
Now...
$ getent hosts 192.168.124.8
192.168.124.8 AnsibleTower
So the hostname of the Ansible Controller is AnsibleTower.
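To automate the lookup, a one-liner along these lines should work (a sketch only: it assumes the service account is literally named ansible, that you can read /var/log/secure, and that the source IP is the 11th whitespace-separated field of the log line, which can vary with your syslog format):
$ grep 'Accepted publickey for ansible' /var/log/secure | awk '{print $11}' | sort -u | xargs getent hosts
192.168.124.8  AnsibleTower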

Related

How to connect to a Windows EC2 instance using Ansible?

From reading the Connect to your Windows instance AWS EC2 docs page, my understanding is that it is not possible to SSH to Windows EC2 instances.
The typical procedure to connect to a Windows EC2 instance manually is to download the remote desktop file, get the password for the instance, and then use the Remote Desktop Connection tool to RDP to the instance (more detail is in the docs page above).
If I am correct that Windows EC2 instances do not support connecting via SSH, how can you connect to a Windows EC2 in an Ansible playbook?
I would prefer to be able to do this without installing any software on the Windows EC2 instance beforehand, but if that is necessary, I can do that.
I have found you need to do the following to connect to a Windows EC2 instance using Ansible:
You need to configure the EC2 to allow connections from Ansible using the ConfigureRemotingForAnsible.ps1 script. This can be done either by setting this as the user data when you create the EC2, or by running this script after the EC2 is created.
You need to add a security group, or configure one already attached to the EC2, to allow the following incoming traffic from the host(s) that the Ansible playbook will be running on:
WinRM
TCP requests to whatever you configure as the Ansible port
You need to install pywinrm>=0.3.0 so Ansible can use WinRM to connect to the EC2.
You need to run the Ansible playbook with the ansible_connection variable set to winrm, and the ansible_winrm_scheme variable set to http. This can be done with --extra-vars or any other way that variables are set.
You need to provide the public IP address of the Windows EC2 host, either under hosts in the playbook, or in a host file passed to ansible-playbook with -i.
You need to get or set the EC2's Administrator password, and then provide this password with the ansible_password variable for the EC2.
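Putting those pieces together, a minimal inventory for the Windows host might look like the following (a sketch only; the IP address, port, and password are placeholders to replace with your own values):
[windows]
xx.xx.xxx.xxx

[windows:vars]
ansible_user=Administrator
ansible_password=<EC2 Administrator password>
ansible_connection=winrm
ansible_winrm_scheme=http
ansible_port=5985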

Can we create a playbook to install a package on our own system?

I'm using Ubuntu Linux
I have created an inventory file and I have put my own system IP address there.
I have written a playbook to install the nginx package.
I'm getting the following error:
"msg": "Failed to connect to the host via ssh: connect to host myip: Connection refused", "unreachable": true
How can I solve this?
You could use the hosts keyword with the value localhost:
- name: Install nginx package
  hosts: localhost
  tasks:
    - name: Install nginx package
      apt:
        name: nginx
        state: latest
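If you save this as, for example, install_nginx.yml (the file name is just an illustration), you can run it with ansible-playbook install_nginx.yml. Note that installing packages normally requires root privileges, so you will likely also need become: true in the play (or to run the playbook as root).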
Putting your host IP directly in your inventory treats your local machine like any other remote target. Although this can work, Ansible will use the ssh connection plugin by default to reach that IP. If an ssh server is not installed, configured, and running on your host, it will fail (as you have experienced), as it will if you have not configured the needed credentials (ssh keys, etc.).
You don't need to (and in most common situations you don't want to) declare localhost in your inventory to use it, as it is implicit by default. The implicit localhost uses the local connection plugin, which does not need ssh at all and runs the tasks as the same user that runs the playbook.
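If you do want to keep your machine's IP in the inventory, you can also force the local connection plugin for that host (the IP below is a placeholder):
xx.xx.xxx.xxx ansible_connection=local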
For more information on connection plugins, see the current list in the Ansible documentation.
See gary lopez's answer above for an example playbook that uses localhost as the target.

Unable to connect to AWS instance even after manually adding the public key to authorized_keys

I am unable to run an ansible-playbook or use ansible ping on an AWS instance. However, I can ssh into the instance with no problem. My hosts file is this:
[instance]
xx.xx.xxx.xxx ansible_ssh_user=ubuntu ansible_ssh_private_key_file=/home/josh/Ansible/Amazon/AWS.pem
Should I not use a direct path? I am trying to use Ansible to install apache onto the server (the playbook has a task like service: name=apache2 state=started). In my security group in the AWS console, I allowed all incoming ssh traffic on port 22, and Ansible tries to ssh through port 22, so that should not be the problem. Is there some crucial idea behind sshing into instances that I didn't catch onto? I tried following this post: Ansible AWS: Unable to connect to EC2 instance, but to no avail.
Make sure that inside ansible.cfg you set:
private_key_file = path of the private key (the server's private key)
On the managed host, don't change the default authorized_keys file. A better way is to create a dedicated user, create a .ssh directory for that user, and inside it a file called authorized_keys into which you paste your server's public key:
~/.ssh/authorized_keys
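For example, a minimal ansible.cfg might contain (the path is a placeholder):
[defaults]
private_key_file = /home/youruser/.ssh/server-private-key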
Try: ansible-playbook yourplaybookname.yml --connection=local
Ansible defaults to ssh.

Ansible connect to jump machine through VPN?

I was wondering if it were possible to tell Ansible to set up a VPN connection before executing the rest of the playbook. I've googled around, but haven't seen much on this.
You could combine a local playbook to set up a VPN with a playbook to run your tasks against a server.
Depending on what the job is, you can use Ansible or a shell script to connect the VPN. There could also be another playbook to disconnect afterwards.
As a result you will have three playbooks and one to combine them via include:
- include: connect_vpn.yml
- include: do_stuff.yml
- include: disconnect_vpn.yml
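A hypothetical connect_vpn.yml might be as simple as the following (a sketch only; it assumes an OpenVPN client profile already exists at the given path, so adjust for whatever VPN software you actually use):
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Bring up the VPN tunnel
      command: openvpn --config /etc/openvpn/client.conf --daemon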
Check How To Use Ansible and Tinc VPN to Secure Your Server Infrastructure.
Basically, you need to install the thisismitch/ansible-tinc playbook and create a hosts inventory file with the nodes that you want to include in the VPN, for example:
[vpn]
prod01 vpn_ip=10.0.0.1 ansible_host=162.243.125.98
prod02 vpn_ip=10.0.0.2 ansible_host=162.243.243.235
prod03 vpn_ip=10.0.0.3 ansible_host=162.243.249.86
prod04 vpn_ip=10.0.0.4 ansible_host=162.243.252.151
[removevpn]
Then you should review the contents of the /group_vars/all file, for example:
---
netname: nyc3
physical_ip: "{{ ansible_eth1.ipv4.address }}"
vpn_interface: tun0
vpn_netmask: 255.255.255.0
vpn_subnet_cidr_netmask: 32
where:
physical_ip is the IP address which you want tinc to bind to;
vpn_netmask is the netmask that will be applied to the VPN interface.
If you're using Amazon Web Services, check out the ec2_vpc_vpn module, which can create, modify, and delete VPN connections. It uses the boto3/botocore library.
For example:
- name: create a VPN connection
  ec2_vpc_vpn:
    state: present
    vpn_gateway_id: vgw-XXXXXXXX
    customer_gateway_id: cgw-XXXXXXXX

- name: delete a connection
  ec2_vpc_vpn:
    vpn_connection_id: vpn-XXXXXXXX
    state: absent
For other cloud services, check the list of Ansible Cloud Modules.

Ansible: where should the private key be placed?

The scenario:
We have 2 servers and use a private key to communicate via ssh:
Server 1 (stage)
Server 2 (production)
Inventory file:
[server]
server1.com
server2.com
Both servers have the default username "ubuntu".
For now we need to put the private key on the local machine, Server 1, and Server 2, at the same path. For example:
On Local Machine : /tmp/key/privatekey (private key to remote server 1)
On Server 1 : /tmp/key/privatekey (private key to remote server 2)
On Server 2 : /tmp/key/privatekey (private key to remote server 1)
- name: copy archived file to another remote server using rsync.
  synchronize: mode=pull src=/home/{{ user }}/packages/{{ package_name }}.tar.gz dest=/tmp/{{ package_name }}.tar.gz archive=yes
  delegate_to: "{{ production_server }}"
  tags: release
Ansible playbook:
ansible-playbook -i inventory --extra-vars "host=server[0] user=ubuntu" --private-key=/tmp/key/privatekey playbook.yml --tags "release"
Executing the ansible-playbook command on the local machine connects to Server 1, which then copies the archived file from Server 1 to Server 2.
That worked as we expected, but it's ridiculous that we have to place the private key at the same path on 3 machines (local, Server 1, Server 2). What if we want to place the private key at a different path? Or what if the user wants to use a different account/username for Server 1 and Server 2?
I've tried modifying the inventory, providing the user and auth key for Server 2. I placed the private key under /home/ubuntu/key/ on Server 1:
[server]
server1.com
server2.com ansible_ssh_private_key_file=/home/ubuntu/key/privatekey ansible_ssh_user=ubuntu
then ran the same ansible-playbook command as above. The result: Server 1 could not reach Server 2.
What if we want to place the private key at a different path? Or what if the user wants to use a different account/username for Server 1 and Server 2?
I think you're making your life much harder than it needs to be. Assuming you're using the user 'ubuntu' and standard RSA keys:
Take your key pairs and put them in /home/ubuntu/.ssh/id_rsa and /home/ubuntu/.ssh/id_rsa.pub on each machine. These are the default locations where ssh looks for keys.
Write a playbook to deploy the public key (using the authorized_key module) from your control machine to your target machines; since you're sharing keys, it's the same key everywhere (see the sketch after these steps).
If you want communication between server1 and server2 via delegate_to, you'll have to update the ssh config for the ubuntu user. Your options here are to turn off host key/known-hosts checking OR to add the appropriate entries to known_hosts. The former (less secure) can be done by adding a /home/ubuntu/.ssh/config file, and the latter (more secure) can be done via ssh-keyscan (see https://serverfault.com/questions/132970/can-i-automatically-add-a-new-host-to-known-hosts).
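A minimal sketch of that key-distribution play (the key path and the server group name are assumptions based on the inventory above):
- hosts: server
  tasks:
    - name: Deploy the shared public key for the ubuntu user
      authorized_key:
        user: ubuntu
        key: "{{ lookup('file', '/home/ubuntu/.ssh/id_rsa.pub') }}"
        state: present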
Part of the issue here is that when you use delegate_to, I don't think the ssh connection logic that would normally be constructed on the control machine gets propagated to the delegate machine.
Also, I would not put your private keys in the tmp directory.
Hope this helps
