We have 3 instances of a JBoss application running on a Linux server. Each instance has a separate start and stop script.
How can we execute all 3 at once, and also one instance at a time (e.g. stop instance B only)?
Stopping all instances:
- hosts: yourHost
  remote_user: yourUser
  become: True
  tasks:
    - name: (shutdown-services) Stop service
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
        - "{{ jbos1 }}"
        - "{{ jbos2 }}"
        - "{{ jbos3 }}"
For stopping only one instance, I'd rather recommend running the playbook with an extra var, like: ansible-playbook ... YourPlaybook.yml --extra-vars "service_to_stop=jbosX"
- hosts: yourHost
  remote_user: yourUser
  become: True
  tasks:
    - name: (shutdown-oneService) Stop service
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
        - "{{ service_to_stop }}"
Ansible will run the command with sudo (because of become: True), although you can change the user or drop become to avoid running with sudo.
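For the other half of the question, bringing all three instances up at once follows the same pattern with state: started. A minimal sketch, reusing the variable names from the stop example above:

- hosts: yourHost
  remote_user: yourUser
  become: True
  tasks:
    - name: (startup-services) Start service
      service:
        name: "{{ item }}"
        state: started
      with_items:
        - "{{ jbos1 }}"
        - "{{ jbos2 }}"
        - "{{ jbos3 }}"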
I've created an Ansible playbook to create an EC2 instance and install Python so I can connect to the server via SSH.
The playbook successfully creates an EC2 instance, but it doesn't install Python on the newly created EC2 instance; instead it installs Python on my master machine.
Can someone help me resolve this?
My Code:
- hosts: localhost
  remote_user: ubuntu
  become: yes
  tasks:
    - name: I'm going to create a Ec2 instance
      ec2:
        key_name: yahoo
        instance_type: t2.micro
        region: "ap-south-1"
        image: ami-0860c9429baba6ad2
        count: 1
        vpc_subnet_id: subnet-aa84fbe6
        assign_public_ip: yes
      tags:
        - creation
    - name: Going to Install Python
      apt:
        name: python
        state: present
      tags:
        - Web
    - name: Start the service
      service:
        name: python
        state: started
      tags:
        - start
As is shown in the fine manual, ec2: operations should be combined with add_host: in order to "pivot" off of localhost over to the newly provisioned instance, which one cannot add to an inventory file because it doesn't exist yet:
- hosts: localhost
  gather_facts: no
  tasks:
    - name: I'm going to create a Ec2 instance
      ec2:
        ...etc etc
      register: run_instances
    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: just_launched
      loop: "{{ run_instances.instances }}"
    - name: Wait for SSH to come up
      delegate_to: "{{ item.public_ip }}"
      wait_for_connection:
        delay: 60
        timeout: 320
      loop: "{{ run_instances.instances }}"

- hosts: just_launched
  # and now you can do fun things to those ec2 instances
I suspect your question about "installing python" is unrelated, but just in case: if you genuinely do need to add Python to those EC2 instances, you cannot use most Ansible modules to do that because they're written in Python. That is what the raw: module is designed to fix:
- hosts: just_launched
  # you cannot gather facts without python, either
  gather_facts: no
  tasks:
    - raw: |
        echo watch out for idempotency issues with your playbook using raw
        yum install python
    - name: NOW gather the facts
      setup:
    - command: echo and now you have working ansible modules again
I also find your invocation of service: { name: python, state: started } suspicious, but I would guess it has not run yet due to your question here.
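And if Python is in fact already present on the AMI (the apt and remote_user: ubuntu in the question suggest an Ubuntu image), the real fix is simply to run the install tasks against the pivoted group instead of localhost. A minimal sketch, with the package names here being assumptions:

- hosts: just_launched
  remote_user: ubuntu
  become: yes
  tasks:
    - name: Install Python 3 and pip on the new instance, not on the control machine
      apt:
        name:
          - python3
          - python3-pip
        state: present
        update_cache: yes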
I am new to Ansible and AWX. I'm making good progress with my project but am getting stuck on one part and hoping you can help.
I have a Workflow Template as follows:
1. Deploy VM
2. Get the IP of the deployed VM & create a temp host configuration
3. Change the deployed machine's hostname
Where I am getting stuck is that once I add the host, the host group is missing when the next template kicks off. I assume this is because of some sort of separate runspace. How do I move this information around? I need this because later on I want to get into more complicated flows.
What I have so far:
1.
- name: Deploy VM from Template
  hosts: xenservers
  tasks:
    - name: Deploy VM
      shell: xe vm-install new-name-label="{{ vm_name }}" template="{{ vm_template }}"
    - name: Start VM
      shell: xe vm-start vm="{{ vm_name }}"

---
- name: Get VM IP
  hosts: xenservers
  remote_user: root
  tasks:
    - name: Get IP
      shell: xe vm-list name-label="{{ vm_name }}" params=networks | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -1
      register: vm_ip
      until: vm_ip.stdout != ""
      retries: 15
      delay: 5
Here I was adding the host, and this works locally within the template, but when the workflow moves on it fails. I tried this both as a task and as a play.
- name: Add host to group
  add_host:
    name: "{{ vm_ip.stdout }}"
    groups: deploy_vm

- hosts: deploy_vm

- name: Set hosts
  hosts: deployed_vm
  tasks:
    - name: Set a hostname
      hostname:
        name: "{{ vm_name }}"
      when: ansible_distribution == 'Rocky Linux'
Thanks
Hi everyone!
Please help.
I have 3 servers. Each of them has one systemd service. I need to restart this service one server at a time.
So after I restart the service on host 1 and the service is up (I can check its TCP port), I need to restart the service on host 2, and so on.
How can I do this with Ansible?
Right now I have this playbook:
---
- name: Install business service
  hosts: business
  vars:
    app_name: "e-service"
    app: "{{ app_name }}-{{ tag }}.war"
    app_service: "{{ app_name }}.service"
    app_bootstrap: "{{ app_name }}_bootstrap.yml"
    app_folder: "{{ eps_client_dir }}/{{ app_name }}"
    archive_folder: "{{ app_folder }}/archives/arch_{{ ansible_date_time.date }}_{{ ansible_date_time.hour }}_{{ ansible_date_time.minute }}"
    app_distrib_dir: "{{ eps_distrib_dir }}/{{ app_name }}"
    app_dependencies: "{{ app_distrib_dir }}/dependencies.tgz"
  tasks:
    - name: Copy app {{ app }} to {{ app_folder }}
      copy:
        src: "{{ app_distrib_dir }}/{{ app }}"
        dest: "{{ app_folder }}/{{ app }}"
        group: ps_group
        owner: ps
        mode: 0644
      notify:
        - restart app
    - name: Copy service settings to /etc/systemd/system/{{ app_service }}
      template:
        src: "{{ app_distrib_dir }}/{{ app_service }}"
        dest: /etc/systemd/system/{{ app_service }}
        mode: 0644
      notify:
        - restart app
    - name: Start service {{ app }}
      systemd:
        daemon_reload: yes
        name: "{{ app_service }}"
        state: started
        enabled: true
  handlers:
    - name: restart app
      systemd:
        daemon_reload: yes
        name: "{{ app_service }}"
        state: restarted
        enabled: true
and all services restart at the same time.
Try serial together with max_fail_percentage. The max_fail_percentage value is a percentage of the total number of your hosts; if server 1 fails, the remaining servers will not be run:
---
- name: Install eps-business service
  hosts: business
  serial: 1
  max_fail_percentage: 10
Try with
---
- name: Install eps-business service
  hosts: business
  serial: 1
By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling update use case, you can define how many hosts Ansible should manage at a single time by using the serial keyword.
https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html
Create a script (or task) that restarts your service, then add a check that polls whether the service is up; once it reports success, you can move on to the next host's service.
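A minimal sketch of that idea, combined with serial: 1 from the other answers: each host restarts its service and the play waits for the TCP port to answer before Ansible moves on to the next host. The port number is a placeholder you would replace with the one your service listens on.

---
- name: Rolling restart of the business service, one host at a time
  hosts: business
  serial: 1
  tasks:
    - name: Restart the service on this host
      systemd:
        daemon_reload: yes
        name: "{{ app_name }}.service"   # same service name pattern as in the question's vars
        state: restarted

    - name: Wait until the service answers on its TCP port before moving to the next host
      wait_for:
        port: 8080        # placeholder port; replace with your service's port
        delay: 5
        timeout: 300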
I'm writing a playbook that spins up X number of AWS EC2 instances and then installs some software on them (apt packages and pip modules). When I run my playbook, it executes the shell commands on my local system, because Ansible won't run unless I specify a host and I put localhost.
In the playbook, I've tried specifying "hosts: all" at the top-level, but this just makes the playbook run for a second without doing anything.
playbook.yml
- name: Spin up spot instances
  hosts: localhost
  connection: local
  vars_files: ansible-vars.yml
  tasks:
    - name: create {{ spot_count }} spot instances with spot_price of ${{ spot_price }}
      local_action:
        module: ec2
        region: us-east-2
        spot_price: '{{ spot_price }}'
        spot_wait_timeout: 180
        keypair: My-keypair
        instance_type: t3a.nano
        image: ami-0f65671a86f061fcd
        group: Allow from Home
        instance_initiated_shutdown_behavior: terminate
        wait: yes
        count: '{{ spot_count }}'
      register: ec2

    - name: Wait for the instances to boot by checking the ssh port
      wait_for: host={{ item.public_ip }} port=22 delay=15 timeout=300 state=started
      with_items: "{{ ec2.instances }}"

    - name: test whoami
      args:
        executable: /bin/bash
      shell: whoami
      with_items: "{{ ec2.instances }}"

    - name: Update apt
      args:
        executable: /bin/bash
      shell: apt update
      become: yes
      with_items: "{{ ec2.instances }}"

    - name: Install Python and Pip
      args:
        executable: /bin/bash
      shell: apt install python3 python3-pip -y
      become: yes
      with_items: "{{ ec2.instances }}"

    - name: Install Python modules
      args:
        executable: /bin/bash
      shell: pip3 install bs4 requests
      with_items: "{{ ec2.instances }}"
ansible-vars.yml
ansible_ssh_private_key_file: ~/.ssh/my-keypair.pem
spot_count: 1
spot_price: '0.002'
remote_user: ubuntu
The EC2 instances get created just fine and the "wait for SSH" task works, but the shell tasks get run on my local system instead of the remote hosts.
How can I tell Ansible to connect to the EC2 instances without using a hosts file since we're creating them on the fly?
Can you try this and see if it works?
- name: test whoami
  args:
    executable: /bin/bash
  shell: whoami
  delegate_to: "{{ item }}"
  with_items: "{{ ec2.instances }}"
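If delegating to the bare instance object doesn't connect, one variant worth trying (a sketch, not a tested fix) is to delegate to the instance's public IP, the same field the existing wait_for task already uses, and override the connection settings at the task level, since the play itself is pinned to connection: local. The user and key below are taken from ansible-vars.yml and are assumptions about your setup.

- name: test whoami on the new instance
  shell: whoami
  args:
    executable: /bin/bash
  delegate_to: "{{ item.public_ip }}"
  vars:
    ansible_connection: ssh               # override the play-level connection: local for the delegated host
    ansible_user: ubuntu                   # matches remote_user in ansible-vars.yml
    ansible_ssh_private_key_file: ~/.ssh/my-keypair.pem   # same key as in ansible-vars.yml
  with_items: "{{ ec2.instances }}"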
I have a playbook that I run to deploy a guest VM onto my target node.
After the guest VM is fired up, it is not available to the whole network, but to the host machine only.
Also, after booting up the guest VM, I need to run some commands on that guest to configure it and make it available to all the network members.
---
- block:
    - name: Verify the deploy VM script
      stat: path="{{ deploy_script }}"
      register: deploy_exists
      failed_when: deploy_exists.stat.exists == False
      no_log: True
  rescue:
    - name: Copy the deploy script from Ansible
      copy:
        src: "scripts/new-install.pl"
        dest: "/home/orch"
        owner: "{{ my_user }}"
        group: "{{ my_user }}"
        mode: 0750
        backup: yes
      register: copy_script

- name: Deploy VM
  shell: run my VM deploy script

<other tasks>

- name: Run something on the guest VM
  shell: my_other_script
  args:
    chdir: /var/scripts/

- name: Other task on guest VM
  shell: uname -r

<and so on>
How can I run those subsequent steps on the guest VM via the host?
My only workaround is to populate a new inventory file with the VM's details and use the host machine as a bastion host:
[myvm]
myvm-01 ansible_connection=ssh ansible_ssh_user=my_user ansible_ssh_common_args='-o StrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p someuser@host_machine"'
However, I want everything to happen on a single playbook, rather than splitting them.
I have resolved it myself.
I managed to dynamically add the host to the inventory and used a [vms:vars] group section for the newly created hosts to use the VM manager as a bastion host.
Playbook:
---
- hosts: "{{ vm_manager }}"
  become_method: sudo
  gather_facts: False
  vars_files:
    - vars/vars.yml
    - vars/vault.yml
  pre_tasks:
    - name: do stuff here on the VM manager
      debug: msg="test"
  roles:
    - { role: vm_deploy, become: yes, become_user: root }
  tasks:
    - name: Dynamically add newly created VM to the inventory
      add_host:
        hostname: "{{ vm_name }}"
        groups: vms
        ansible_ssh_user: "{{ vm_user }}"
        ansible_ssh_pass: "{{ vm_pass }}"

- name: Run the rest of tasks on the VM through the host machine
  hosts: "{{ vm_name }}"
  become: true
  become_user: root
  become_method: sudo
  post_tasks:
    - name: My first task on the VM
      static: no
      include_role:
        name: my_role_for_the_VM
Inventory:
[vm_manager]
vm-manager.local
[vms]
my-test-01
my-test-02
[vms:vars]
ansible_connection=ssh
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p username@vm-manager.local"'
Run playbook:
ansible-playbook -i hosts -vv playbook.yml -e vm_name=some-test-vm-name