Ansible for OpenShift Deployment

I want to deploy pods on OpenShift using an Ansible playbook.
For this, I have written the following play:
- name: Create Deployment Config for the usecase
  with_dict: "{{ apps }}"
  openshift_v1_deployment_config:
    name: "{{ item.key }}"
    namespace: "{{ usecaseId }}"
    labels:
      app: "{{ item.key }}"
      service: "{{ item.key }}"
    replicas: 1
    selector:
      app: "{{ item.key }}"
      service: "{{ item.key }}"
    spec_template_metadata_labels:
      app: "{{ item.key }}"
      service: "{{ item.key }}"
    containers:
      - env:
        image: "{{ openshift_registry_svc_url }}/{{ usecaseId }}/{{ item.key }}"
        name: "{{ item.key }}"
        ports:
          - container_port: 8080
            protocol: TCP
Does anyone have an idea how I can get the IP address of the deployed pod using Ansible itself? TIA

Shagun, I do not think you will be able to get the IP address of the pods from outside the cluster, as the IPs are managed by the OpenShift SDN. The ways the outside world can connect to pods are routes, port forwarding, or manually assigning an external IP to a service.
Hope this URL helps with the methods to connect to pods: https://docs.openshift.com/container-platform/3.5/dev_guide/expose_service/index.html
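For illustration, the route approach can also be driven from the same playbook by shelling out to the oc client (a hedged sketch, assuming oc is installed and logged in on the host running the play, and that each app's service shares its name with item.key as in the play above):
- name: Expose each service through a route (sketch, assumes the oc client is available)
  command: "oc expose service {{ item.key }} -n {{ usecaseId }}"
  with_dict: "{{ apps }}"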

Finally I found it.
This can be done using the k8s lookup plugin provided by Ansible, e.g.:
- name: Fetch all pods which are running
  set_fact:
    deployments_pod: "{{ lookup('k8s', kind='Pod', namespace=test) }}"
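Building on that, the pod IPs can then be pulled out of the returned Pod resources (a minimal sketch, assuming the lookup above has run and that the standard Pod structure applies, where the address lives under status.podIP):
- name: Collect the IP addresses of the running pods (sketch)
  set_fact:
    # status.podIP is part of the standard Kubernetes Pod resource
    pod_ips: "{{ deployments_pod | map(attribute='status.podIP') | list }}"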

Related

Ansible Playbook with include_tasks

I have an Ansible playbook called main.yml that builds a Windows VM from an OVF template. It then configures the VM using variables that I pass in. Typical Ansible stuff.
What I would like to do is have another task inside main.yml run at the end that configures a couple of things on the VM (configureserver.yml). I pass "{{ vmName }}" as a variable in main.yml and want the second playbook (configureserver.yml) to run against that "{{ vmName }}" variable.
Main.yml (removed some of the customization to shorten this post)
- name: Deploy Virtual Machine from an OVF template in content library
  community.vmware.vmware_content_deploy_ovf_template:
    name: "{{ vmName }}"
    hostname: "{{ vcenterName }}"
    username: "{{ vmwareUser }}"
    password: "{{ vmwarePassword }}"
    template: "{{ templateName }}"

- name: Configure the VM
  community.vmware.vmware_guest:
    name: "{{ vmName }}"
    hostname: "{{ vcenterName }}"
    username: "{{ vmwareUser }}"
    password: "{{ vmwarePassword }}"
    datacenter: "{{ datacenterName }}"
    cluster: "{{ clusterName }}"
    datastore: "{{ datastoreclusterName }}"

# THIS IS WHAT I WANT TO ADD
- name: Configure the D Drive
  include_tasks:
    file: configureserver.yml
configureserver.yml
- hosts: "{{ vmName }}"
  gather_facts: no
  become: yes
  vars_files:
    - passwords.yml
    - vars.test.yml
  vars:
    - ansible_connection: ssh
    - ansible_shell_type: cmd
    - ansible_become_method: runas
    - ansible_become_user: System
In the code above, I want to run a separate task in a separate yml that will configure the server. I want to run the configureserver.yml playbook against the "{{ vmName }}" variable as the inventory or host.
Glad to provide more details if needed.
I figured it out. I changed the include_tasks to import_playbook and that did the trick.
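For reference, a minimal sketch of that fix: import_playbook is a playbook-level statement, so it sits next to the plays in main.yml rather than inside a tasks: list, and vmName must be supplied as an extra var or inventory var so the imported play's hosts: "{{ vmName }}" can resolve.
# main.yml (sketch)
- name: Build and configure the VM
  hosts: localhost
  gather_facts: no
  tasks:
    # ... the vmware_content_deploy_ovf_template / vmware_guest tasks from above ...

# Runs after the first play has finished; configureserver.yml targets "{{ vmName }}" itself
- import_playbook: configureserver.yml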

How to deploy an openstack instance with ansible in a specific project

I've been trying to deploy an instance in OpenStack to a project other than my user's default project. The only way to do this appears to be by passing the project_name within the auth: setting. This works fine, but it is not really compatible with using a clouds.yaml config via the cloud: setting, or even with using the admin-openrc.sh file that OpenStack provides (admin-openrc.sh appears to take precedence over any settings in auth:).
I'm using the current openstack.cloud collection 1.3.0 (https://docs.ansible.com/ansible/latest/collections/openstack/cloud/index.html). Some of the modules have the option to specify a project:, like the network module, but the server module does not.
So this deploys in a named project:
- name: Create instances
  server:
    state: present
    auth:
      auth_url: "{{ auth_url }}"
      username: "{{ username }}"
      password: "{{ password }}"
      project_name: "{{ project }}"
      project_domain_name: "{{ domain_name }}"
      user_domain_name: "{{ domain_name }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
Having sourced admin-openrc.sh, this deploys only to your default project (OS_PROJECT_NAME=<project_name>):
- name: Create instances
  server:
    state: present
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
When I unset OS_PROJECT_NAME but keep all other values from admin-openrc.sh, I can do this, but it requires working with a non-default setup (unsetting that one environment variable):
- name: Create instances
  server:
    state: present
    auth:
      project_name: "{{ project }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
I'm looking for the most useful way to use a single authentication method (be it clouds.yaml or environment variables) for all my OpenStack modules, while still being able to deploy to a specific project.
(You should upgrade to the latest collection (1.5.3); this may also work with 1.3.0.)
You can use the cloud property of the server task (openstack.cloud.server). Here is how you can proceed.
All project definitions are stored in clouds.yml (here is part of its content):
clouds:
  tarantula:
    auth:
      auth_url: https://auth.cloud.myprovider.net/v3/
      project_name: tarantula
      project_id: the-id-of-my-project
      user_domain_name: Default
      project_domain_name: Default
      username: my-username
      password: my-password
    regions:
      - US
      - EU1
From a task you can then refer to the appropriate cloud like this:
- name: Create instances
  server:
    state: present
    cloud: tarantula
    region_name: EU1
    name: "test-instance-1"
We now refrain from using any environment variables and make sure the project id is set in the configuration or in the group_vars. This works because it does not depend on a local clouds.yml file. We basically build an auth object in Ansible that can be used throughout the deployment.
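A hedged sketch of that last approach, with os_auth being a hypothetical dictionary defined in group_vars (the keys are the usual Keystone v3 fields):
# group_vars/all.yml (sketch)
os_auth:
  auth_url: https://auth.cloud.myprovider.net/v3/
  username: my-username
  password: my-password
  project_name: tarantula
  user_domain_name: Default
  project_domain_name: Default

# task (sketch): the openstack.cloud modules all accept the same auth dictionary
- name: Create instances
  openstack.cloud.server:
    state: present
    auth: "{{ os_auth }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    flavor: "{{ flavor }}"
    network: "{{ network }}"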

How to extract individual IPs of EC2 instances launched using Ansible when count is more than 1

I am launching 2 AWS EC2 instances using Ansible with count: 2; please check the playbook below:
- name: Create an EC2 instance
  ec2:
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ key }}"
    key_name: "{{ keypair }}"
    region: "{{ region }}"
    group: "{{ security_group }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    wait: yes
    count: 2
    vpc_subnet_id: "{{ vpc_subnet_id }}"
    assign_public_ip: "{{ assign_public_ip }}"
  register: ec2

- name: Add the newly created 1 EC2 instance(s) to webserver group
  lineinfile: dest=inventory
              insertafter='^\[webserver\]$'
              line="{{ item.private_ip }} {{hoststring}}"
              state=present
  with_items: "{{ ec2.instances }}"

- name: add newly created remaining ec2 instance to db group
  lineinfile: dest=inventory
              insertafter='^\[db-server\]$'
              line="{{ item.private_ip }} {{hoststring}}"
              state=present
  with_items: "{{ ec2.instances }}"
Here I want to add one IP to the webserver host group and the remaining one to the db host group, but it is not working with the above playbook. Please help me achieve this.
I don't want to use add_host here.
Since you are using AWS, have you considered using the aws_ec2 plugin for dynamic inventory?
As long as you tag your instances correctly and set up the YAML inventory file, it will do what you want; a sketch follows.
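For example, a hypothetical aws_ec2.yml inventory file that groups instances by a role tag (the tag name, region, and file name are assumptions; requires the amazon.aws collection and boto3):
# aws_ec2.yml (sketch)
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
keyed_groups:
  # instances tagged role=webserver land in a role_webserver group, and so on
  - key: tags.role
    prefix: role
hostnames:
  - private-ip-address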
Otherwise, your register: ec2 has two elements in it. The way you are looping through ec2 (if it worked) would add both instances to each group. You would need a when condition matching something like the tag/subnet/CIDR to know which server to add to which group.
One way to see what the return structure is would be to print out the ec2 variable:
- debug: var=ec2
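If you want to stay with lineinfile and the registered result, a minimal sketch that splits the two instances by index (keeping the inventory file and hoststring variable from the question; it assumes ec2.instances[0] should be the webserver and ec2.instances[1] the db server):
- name: Add the first EC2 instance to the webserver group
  lineinfile:
    dest: inventory
    insertafter: '^\[webserver\]$'
    line: "{{ ec2.instances[0].private_ip }} {{ hoststring }}"
    state: present

- name: Add the second EC2 instance to the db-server group
  lineinfile:
    dest: inventory
    insertafter: '^\[db-server\]$'
    line: "{{ ec2.instances[1].private_ip }} {{ hoststring }}"
    state: present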

Ansible: how to restart a service one after another on different hosts

Hi everyone!
Please help. I have 3 servers. Each of them has one systemd service. I need to restart this service one host at a time: after I restart the service on host 1 and the service is up (I can check the TCP port), I need to restart the service on host 2, and so on.
How can I do this with Ansible?
Currently I have this playbook:
---
- name: Install business service
  hosts: business
  vars:
    app_name: "e-service"
    app: "{{ app_name }}-{{ tag }}.war"
    app_service: "{{ app_name }}.service"
    app_bootstrap: "{{ app_name }}_bootstrap.yml"
    app_folder: "{{ eps_client_dir }}/{{ app_name }}"
    archive_folder: "{{ app_folder }}/archives/arch_{{ ansible_date_time.date }}_{{ ansible_date_time.hour }}_{{ ansible_date_time.minute }}"
    app_distrib_dir: "{{ eps_distrib_dir }}/{{ app_name }}"
    app_dependencies: "{{ app_distrib_dir }}/dependencies.tgz"
  tasks:
    - name: Copy app {{ app }} to {{ app_folder }}
      copy:
        src: "{{ app_distrib_dir }}/{{ app }}"
        dest: "{{ app_folder }}/{{ app }}"
        group: ps_group
        owner: ps
        mode: 0644
      notify:
        - restart app
    - name: Copy service setting to /etc/systemd/system/{{ app_service }}
      template:
        src: "{{ app_distrib_dir }}/{{ app_service }}"
        dest: /etc/systemd/system/{{ app_service }}
        mode: 0644
      notify:
        - restart app
    - name: Start service {{ app }}
      systemd:
        daemon_reload: yes
        name: "{{ app_service }}"
        state: started
        enabled: true
  handlers:
    - name: restart app
      systemd:
        daemon_reload: yes
        name: "{{ app_service }}"
        state: restarted
        enabled: true
and all the services restart at the same time.
Try serial and max_fail_percentage. The max_fail_percentage value is a percentage of the total number of your hosts; if server 1 fails, the remaining servers will not run:
---
- name: Install eps-business service
  hosts: business
  serial: 1
  max_fail_percentage: 10
Try with
---
- name: Install eps-business service
  hosts: business
  serial: 1
By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling update use case, you can define how many hosts Ansible should manage at a single time by using the serial keyword
https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html
Create a script that restarts your service, then add a loop that checks whether the service is up; once the restart has executed successfully, based on the status it returns, you can move on to the next service.
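Putting both suggestions together, a hedged sketch of a rolling-restart play (app_service is assumed to be defined as in the question's vars, and app_port is a hypothetical variable for the TCP port the service listens on):
---
- name: Restart the business service one host at a time
  hosts: business
  serial: 1
  tasks:
    - name: Restart the service
      systemd:
        name: "{{ app_service }}"
        state: restarted

    - name: Wait until the service answers on its TCP port before Ansible moves to the next host
      wait_for:
        host: 127.0.0.1
        port: "{{ app_port }}"   # hypothetical: the port the service listens on
        state: started
        timeout: 120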

Ansible - How to loop and get attributes of new EC2 instances

I'm creating a set of new EC2 instances using this play
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision a set of instances
      ec2:
        assign_public_ip: yes
        aws_access_key: XXXXXXXXXX
        aws_secret_key: XXXXXXXXXX
        group_id: XXXXXXXXXX
        instance_type: t2.micro
        image: ami-32a85152
        vpc_subnet_id: XXXXXXXXXX
        region: XXXXXXXXXX
        user_data: "{{ lookup('file', '/SOME_PATH/cloud-config.yaml') }}"
        wait: true
        exact_count: 1
        count_tag:
          Name: Demo
        instance_tags:
          Name: Demo
      register: ec2

    - name: Add new CoreOS machines to coreos-launched group
      add_host: hostname="{{ item.public_ip }}" groups=coreos-launched
      with_items: "{{ ec2.instances }}"

    - name: Wait for SSH to come up
      wait_for: host="{{ item.public_dns_name }}" port=22 delay=60 timeout=320
      with_items: "{{ ec2.instances }}"
Now, I need to create an SSL/TLS certificate for each of those new machines. To do so, I need their private IP. Yet, I don't know how to access the "{{ ec2.instances }}" I registered in the previous play.
In the same playbook, I tried something like this:
- hosts: coreos-launched
  gather_facts: False
  tasks:
    - name: Find the current machine IP address
      command: echo "{{ item.private_ip }}" > /tmp/private_ip
      with_items: "{{ ec2.instances }}"
      sudo: yes
But without any success. Is there a way to use the "{{ ec2.instances }}" items inside the same playbook but in a different play?
-- EDIT --
Following Theo's advice, I managed to get the instance attributes using:
- name: Gather current facts
  action: ec2_facts
  register: ec2_facts
- name: Use the current facts
  command: echo "{{ ec2_facts.ansible_facts.ansible_ec2_local_ipv4 }}"
  with_items: "{{ ec2_facts }}"
The best way to learn about a return structure (short of the documentation) is to wrap it in a debug task.
- name: Debug ec2 variable
  debug: var=ec2.instances
From there, follow the structure to get the variable you seek.
Also (following the idempotence model), you can use the ec2_remote_facts module to get the facts from the instances and call them in future plays/tasks as well.
See Variables -- Ansible Documentation for more info about calling registered variables.
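As an alternative pattern (not from the answer above, but a common approach): carry the value along when registering the host with add_host, since any extra key becomes a host variable that the later play can read. A sketch building on the playbook in the question:
- name: Add new CoreOS machines to coreos-launched group, keeping their private IP
  add_host:
    hostname: "{{ item.public_ip }}"
    groups: coreos-launched
    private_ip: "{{ item.private_ip }}"   # extra keys become host variables
  with_items: "{{ ec2.instances }}"

# Later play: each host now sees its own private_ip
- hosts: coreos-launched
  gather_facts: False
  tasks:
    - name: Write the private IP to a file
      shell: echo "{{ private_ip }}" > /tmp/private_ip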
