How can I run an Ansible playbook without hosts specified?

I'm writing a playbook that spins up X number of AWS EC2 instances and then installs some software on them (apt packages and pip modules). When I run my playbook, it executes the shell commands on my local system, because Ansible won't run unless I specify a host, and I put localhost.
In the playbook, I've tried specifying "hosts: all" at the top level, but this just makes the playbook run for a second without doing anything.
playbook.yml
- name: Spin up spot instances
  hosts: localhost
  connection: local
  vars_files:
    - ansible-vars.yml
  tasks:
    - name: create {{ spot_count }} spot instances with spot_price of ${{ spot_price }}
      local_action:
        module: ec2
        region: us-east-2
        spot_price: '{{ spot_price }}'
        spot_wait_timeout: 180
        keypair: My-keypair
        instance_type: t3a.nano
        image: ami-0f65671a86f061fcd
        group: Allow from Home
        instance_initiated_shutdown_behavior: terminate
        wait: yes
        count: '{{ spot_count }}'
      register: ec2

    - name: Wait for the instances to boot by checking the ssh port
      wait_for: host={{ item.public_ip }} port=22 delay=15 timeout=300 state=started
      with_items: "{{ ec2.instances }}"

    - name: test whoami
      shell: whoami
      args:
        executable: /bin/bash
      with_items: "{{ ec2.instances }}"

    - name: Update apt
      shell: apt update
      args:
        executable: /bin/bash
      become: yes
      with_items: "{{ ec2.instances }}"

    - name: Install Python and Pip
      shell: apt install python3 python3-pip -y
      args:
        executable: /bin/bash
      become: yes
      with_items: "{{ ec2.instances }}"

    - name: Install Python modules
      shell: pip3 install bs4 requests
      args:
        executable: /bin/bash
      with_items: "{{ ec2.instances }}"
ansible-vars.yml
ansible_ssh_private_key_file: ~/.ssh/my-keypair.pem
spot_count: 1
spot_price: '0.002'
remote_user: ubuntu
The EC2 instances get created just fine and the "wait for SSH" task works, but the shell tasks are run on my local system instead of on the remote hosts.
How can I tell Ansible to connect to the EC2 instances without using a hosts file, since we're creating them on the fly?

Can you try this and see if it works? Each item in ec2.instances is a dict, so delegate to its public IP:
- name: test whoami
  shell: whoami
  args:
    executable: /bin/bash
  delegate_to: "{{ item.public_ip }}"
  with_items: "{{ ec2.instances }}"
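If delegating per item gets unwieldy, another option is the add_host pattern shown in the related answer below: register the new instances in an in-memory group and run a second play against it. A minimal sketch, assuming your vars file supplies the key path; the group name ec2hosts is a placeholder of mine:

- name: Add the new instances to an in-memory group
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: ec2hosts
    ansible_user: ubuntu
    ansible_ssh_private_key_file: ~/.ssh/my-keypair.pem
  with_items: "{{ ec2.instances }}"

- hosts: ec2hosts
  become: yes
  tasks:
    - name: Update apt and install Python and pip
      apt:
        update_cache: yes
        name:
          - python3
          - python3-pip
        state: present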


Create Ec2 instance and install Python package using Ansible Playbook

I've created an Ansible playbook to create an EC2 instance and install Python so I can connect to the server via SSH.
The playbook successfully creates an EC2 instance, but it doesn't install Python on the newly created instance; instead it installs Python on my master machine.
Can someone help me resolve this?
My Code:
- hosts: localhost
  remote_user: ubuntu
  become: yes
  tasks:
    - name: I'm going to create a Ec2 instance
      ec2:
        key_name: yahoo
        instance_type: t2.micro
        region: "ap-south-1"
        image: ami-0860c9429baba6ad2
        count: 1
        vpc_subnet_id: subnet-aa84fbe6
        assign_public_ip: yes
      tags:
        - creation

    - name: Going to Install Python
      apt:
        name: python
        state: present
      tags:
        - Web

    - name: Start the service
      service:
        name: python
        state: started
      tags:
        - start
As is shown in the fine manual, ec2: operations should be combined with add_host: in order to "pivot" off of localhost over to the newly provisioned instance, which one cannot add to an inventory file because it doesn't exist yet:
- hosts: localhost
  gather_facts: no
  tasks:
    - name: I'm going to create a Ec2 instance
      ec2:
        ...etc etc
      register: run_instances

    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: just_launched
      loop: "{{ run_instances.instances }}"

    - name: Wait for SSH to come up
      delegate_to: "{{ item.public_ip }}"
      wait_for_connection:
        delay: 60
        timeout: 320
      loop: "{{ run_instances.instances }}"

- hosts: just_launched
  # and now you can do fun things to those ec2 instances
I suspect your question about "installing python" is unrelated, but just in case: if you genuinely do need to add Python to those EC2 instances, you cannot use most Ansible modules to do it, because they're written in Python. That is what the raw: module is designed to fix:
- hosts: just_launched
  # you cannot gather facts without python, either
  gather_facts: no
  tasks:
    - raw: |
        echo watch out for idempotency issues with your playbook using raw
        yum install -y python
    - name: NOW gather the facts
      setup:
    - command: echo and now you have working ansible modules again
I also find your invocation of service: { name: python, state: started } suspicious, but I would guess it has not run yet, given your question here.
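Since Python is an interpreter, not a system service, there is nothing for service: to start. A sketch of a sanity check you could run instead (purely illustrative, not from the original answer):

# Python is not a service, so verify the interpreter instead of "starting" it
- name: Check that Python is installed
  command: python --version
  register: py_version
  changed_when: false

- debug:
    var: py_version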

How to stop/start multiple instance of jboss applications using ansible playbook

We have 3 instances of a JBoss application running on a Linux server; each instance has a separate start and stop script.
How can we execute all 3 at once, and also one instance at a time (e.g. stop instance B only)?
Stopping all instances:
- hosts: yourHost
  remote_user: yourUser
  become: True
  tasks:
    - name: (shutdown-services) Stop service
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
        - "{{ jbos1 }}"
        - "{{ jbos2 }}"
        - "{{ jbos3 }}"
For stopping only one instance, I'd rather recommend running ansible-playbook with an extra variable: ansible-playbook ... YourPlaybook.yml --extra-vars "service_to_stop=jbosX"
- hosts: yourHost
  remote_user: yourUser
  become: True
  tasks:
    - name: (shutdown-oneService) Stop service
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
        - "{{ service_to_stop }}"
Ansible will run the command with sudo, although you can change the user or avoid running with sudo altogether.
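One small hardening sketch (my addition, not from the original answer): fail fast with a clear message when the variable is not passed, instead of letting the service module choke on an undefined variable:

- hosts: yourHost
  remote_user: yourUser
  become: True
  tasks:
    # Guard against a missing --extra-vars value
    - name: Fail early when no service was specified
      fail:
        msg: "Pass the instance to stop, e.g. --extra-vars 'service_to_stop=jbos2'"
      when: service_to_stop is not defined

    - name: (shutdown-oneService) Stop service
      service:
        name: "{{ service_to_stop }}"
        state: stopped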

How to deploy a custom VM with ansible and run subsequent steps on the guest VM via the host?

I have a playbook that I run to deploy a guest VM onto my target node.
After the guest VM is fired up, it is not available to the whole network, but to the host machine only.
Also, after booting up the guest VM, I need to run some commands on that guest to configure it and make it available to all the network members.
---
- block:
    - name: Verify the deploy VM script
      stat: path="{{ deploy_script }}"
      register: deploy_exists
      failed_when: deploy_exists.stat.exists == False
      no_log: True
  rescue:
    - name: Copy the deploy script from Ansible
      copy:
        src: "scripts/new-install.pl"
        dest: "/home/orch"
        owner: "{{ my_user }}"
        group: "{{ my_user }}"
        mode: 0750
        backup: yes
      register: copy_script

- name: Deploy VM
  shell: run my VM deploy script

<other tasks>

- name: Run something on the guest VM
  shell: my_other_script
  args:
    chdir: /var/scripts/

- name: Other task on guest VM
  shell: uname -r

<and so on>
How can I run those subsequent steps on the guest VM via the host?
My only workaround is to populate a new inventory file with the VM's details and use the host as a bastion host.
[myvm]
myvm-01 ansible_connection=ssh ansible_ssh_user=my_user ansible_ssh_common_args='-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p someuser@host_machine"'
However, I want everything to happen in a single playbook, rather than splitting it.
I have resolved it myself.
I managed to dynamically add the host to the inventory, and used group vars for the newly created hosts so they use the VM manager as a bastion host.
Playbook:
---
- hosts: "{{ vm_manager }}"
  become_method: sudo
  gather_facts: False
  vars_files:
    - vars/vars.yml
    - vars/vault.yml
  pre_tasks:
    - name: do stuff here on the VM manager
      debug: msg="test"
  roles:
    - { role: vm_deploy, become: yes, become_user: root }
  tasks:
    - name: Dynamically add newly created VM to the inventory
      add_host:
        hostname: "{{ vm_name }}"
        groups: vms
        ansible_ssh_user: "{{ vm_user }}"
        ansible_ssh_pass: "{{ vm_pass }}"

- name: Run the rest of tasks on the VM through the host machine
  hosts: "{{ vm_name }}"
  become: true
  become_user: root
  become_method: sudo
  post_tasks:
    - name: My first task on the VM
      static: no
      include_role:
        name: my_role_for_the_VM
Inventory:
[vm_manager]
vm-manager.local

[vms]
my-test-01
my-test-02

[vms:vars]
ansible_connection=ssh
ansible_ssh_common_args='-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p username@vm-manager.local"'
Run playbook:
ansible-playbook -i hosts -vv playbook.yml -e vm_name=some-test-vm-name
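If you want to avoid the static [vms:vars] section entirely, the bastion settings can also be attached per host at add_host time, since add_host accepts arbitrary variables. A sketch under that assumption (the same connection string as above, just moved into the task):

- name: Dynamically add newly created VM to the inventory, bastion settings included
  add_host:
    hostname: "{{ vm_name }}"
    groups: vms
    ansible_ssh_user: "{{ vm_user }}"
    ansible_ssh_pass: "{{ vm_pass }}"
    # Per-host var replaces the [vms:vars] section in the static inventory
    ansible_ssh_common_args: '-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p username@vm-manager.local"'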

ansible dynamic hosts refuse to use custom interpreter

I'm trying to provision new machines on AWS with the ec2 module and to update my hosts file locally so that subsequent tasks can already use it.
Provisioning isn't an issue, and neither is the creation of the local hosts file:
- name: Provision a set of instances
  ec2:
    key_name: AWS
    region: eu-west-1
    group: default
    instance_type: t2.micro
    image: ami-6f587e1c # for Ubuntu 14.04 LTS use ami-b9b394ca; for Ubuntu 16.04 LTS use ami-6f587e1c
    wait: yes
    volumes:
      - device_name: /dev/xvda
        volume_type: gp2
        volume_size: 50
    count: 2
    vpc_subnet_id: subnet-xxxxxxxx
    assign_public_ip: yes
    instance_tags:
      Name: Ansible
  register: ec2

- name: Add all instance private IPs to host group
  add_host:
    hostname: "{{ item.private_ip }}"
    ansible_ssh_user: ubuntu
    groups: aws
  with_items: "{{ ec2.instances }}"

- local_action: file path=./hosts state=absent
  ignore_errors: yes

- local_action: file path=./hosts state=touch

- local_action: lineinfile line="[all]" insertafter=EOF dest=./hosts

- local_action: lineinfile line="{{ item.private_ip }} ansible_python_interpreter=/usr/bin/python3" insertafter=EOF dest=./hosts
  with_items: "{{ ec2.instances }}"

- name: Wait for SSH to come up
  wait_for:
    host: "{{ item.private_ip }}"
    port: 22
    delay: 60
    timeout: 600
    state: started
  with_items: "{{ ec2.instances }}"

- name: refreshing inventory cache
  meta: refresh_inventory

- hosts: all
  gather_facts: False
  tasks:
    - command: hostname -i
However, the next task, which is a simple print of hostname -i (just for the test), fails because on Ubuntu 16.04 LTS it can't find Python 2.7 (there is only python3).
For that, in my dynamic hosts file I add the following line:
ansible_python_interpreter=/usr/bin/python3
But it seems that Ansible ignores it and goes straight to Python 2.7, which is missing.
I've tried to reload the inventory file with
meta: refresh_inventory
but that didn't help either.
What am I doing wrong?
I'm not sure why the refresh did not work, but I suggest setting it in the add_host task instead; it accepts any variable:
- name: Add all instance private IPs to host group
  add_host:
    hostname: "{{ item.private_ip }}"
    ansible_ssh_user: ubuntu
    groups: aws
    ansible_python_interpreter: "/usr/bin/python3"
  with_items: "{{ ec2.instances }}"
Also, I find it useful to debug with this task:
- debug: var=hostvars[inventory_hostname]
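If you would rather keep the interpreter out of every generated host line, the same variable can also be set once at group level in the hosts file you write out. A sketch with made-up IPs:

# Generated ./hosts file, interpreter set once for the whole group
[all]
10.0.0.11
10.0.0.12

[all:vars]
ansible_python_interpreter=/usr/bin/python3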

Ansible delegate_to how to set user that is used to connect to target?

I have an Ansible (2.1.1) inventory:
build_machine ansible_host=localhost ansible_connection=local
staging_machine ansible_host=my.staging.host ansible_user=stager
I'm using SSH without ControlMaster.
I have a playbook that has a synchronize command:
- name: Copy build to staging
  hosts: staging_machine
  tasks:
    - synchronize: src=... dest=...
      delegate_to: staging_machine
      remote_user: stager
The command prompts for the password of the wrong user:
local-mac-user@my-staging-host's password:
So instead of using the ansible_user defined in the inventory, or the remote_user defined in the task, to connect to the target (the hosts specified in the play), it uses the user that we connected to the delegate-to box as.
What am I doing wrong? How do I fix this?
EDIT: It works in 2.0.2, but doesn't work in 2.1.x.
The remote_user setting is used at the playbook level to set the user a particular play runs as.
example:
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum:
        name: httpd
        state: latest
    - name: write the apache config file
      template:
        src: /srv/httpd.j2
        dest: /etc/httpd.conf
If you only have a certain task that needs to be run as a different user, you can use the become and become_user settings:
- name: Run command
  command: whoami
  become: yes
  become_user: some_user
Finally, if you have a group of tasks to run as a user in a play, you can group them with block.
example:
- block:
    - name: checkout repo
      git:
        repo: https://github.com/some/repo.git
        version: master
        dest: "{{ dst }}"
    - name: change perms
      file:
        dest: "{{ dst }}"
        state: directory
        mode: 0755
        owner: some_user
  become: yes
  become_user: some_user
Reference:
- How to switch a user per task or set of tasks?
- https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html
The following works for me, but note that it is for Windows; Linux does not require become_method: runas and in fact does not have it:
- name: restart IIS services
  win_service:
    name: '{{ item }}'
    state: restarted
    start_mode: auto
    force_dependent_services: true
  loop:
    - 'SMTPSVC'
    - 'IISADMIN'
  become: yes
  become_method: runas
  become_user: '{{ webserver_user }}'
  vars:
    ansible_become_password: '{{ webserver_password }}'
  delegate_facts: true
  delegate_to: '{{ groups["webserver"][0] }}'
  when: dev_env
Try setting become: yes and become_user: stager in your YAML file; that should fix it.
https://docs.ansible.com/ansible/2.5/user_guide/become.html
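Applied to the synchronize task from the question, that would look roughly like this (a sketch only; how synchronize interacts with become has shifted between Ansible versions, so verify against your release):

- name: Copy build to staging
  hosts: staging_machine
  tasks:
    - synchronize: src=... dest=...
      delegate_to: staging_machine
      # escalate to the intended remote user instead of relying on remote_user
      become: yes
      become_user: stager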
